Marten Feldtmann wrote:
>
> Throw away all the "hardwired" stuff and do it in software. I
> once described an algorithm on one of these lists for creating
> unique values for clients with minimal interaction with the
> database.
>
> The result: query once at the beginning of your application,
> generate your IDs "offline" at whatever maximum speed you
> need, and store your last generated ID when your client
> finishes. Superior to all the "hardwired" database solutions!
Yes, but...
(1) The application I have is composed of about 50 processes running on 3
different OS/architectures (Linux/intel, Solaris/sparc, and VxWorks/ppc).
The IDs I need must be unique across all processes (I suppose one solution
would be to give each ID a unique prefix based on the process that is
running, but...).
(2) Some of these systems are real-time boxes that might get rebooted at
any moment, or might hang for hardware-related reasons [I'd like to be able
to say that all of the processes could detect imminent failure, but
unfortunately, I can't]. So determining when a client "finishes" is not
always possible, which prevents (he claims) the above solution from
guaranteeing ID uniqueness.
However, it might be sufficient to provide a process on the
postgres DB machine (if *that* machine dies, *everything* stops...)
that serves IDs via CORBA to all the other applications and
(internally) uses the "software" approach given above. This
process could "sync" with the database every N seconds or so
(where N might be < 1.0). This, while still not guaranteeing
uniqueness, would at least come pretty close... It would still be
nice to avoid having to VACUUM ANALYZE this table, though, and it
"feels" as though it is duplicating functionality already provided
by postgres DB backends.
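Just to make the idea concrete, here is a rough sketch of what that
server's allocation logic might look like. The sequence name
'global_id_seq', the block size, and the Python/psycopg2 form are only
assumptions for illustration, and the CORBA layer is left out entirely;
this only shows how the server could reserve blocks of IDs from the
database and hand them out from memory:

    # Sketch only: block allocation of IDs backed by a Postgres sequence.
    # Assumes the sequence was created with:
    #   CREATE SEQUENCE global_id_seq INCREMENT BY 1000;
    # so each nextval() call reserves a whole block of 1000 IDs.
    import threading
    import psycopg2

    class BlockIdAllocator:
        """Hands out IDs from an in-memory block, refilling from Postgres."""

        def __init__(self, dsn, block_size=1000):
            self.conn = psycopg2.connect(dsn)
            self.block_size = block_size
            self.lock = threading.Lock()
            self.next_id = 0
            self.block_end = 0    # exclusive upper bound of current block

        def _refill(self):
            # One round trip to the database per block of IDs.
            cur = self.conn.cursor()
            cur.execute("SELECT nextval('global_id_seq')")
            start = cur.fetchone()[0]
            self.conn.commit()
            self.next_id = start
            self.block_end = start + self.block_size

        def allocate(self):
            with self.lock:
                if self.next_id >= self.block_end:
                    self._refill()
                value = self.next_id
                self.next_id += 1
                return value

The nice property is that a crash of the serving process (or of any client)
only "loses" the unreserved remainder of the current block; it can never
hand out the same ID twice, since each block is claimed atomically from the
sequence.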
I'll think about this solution - thanks!
--
Steve Wampler - SOLIS Project, National Solar Observatory
swampler@noao.edu