Thread: Does setval(nextval()+N) generate unique blocks of IDs?
Is seq.setval() "non-transactional" in the same sense that seq.nextval() is? More specifically, suppose I sometimes want to get IDs one-by-one using nextval(), but sometimes I want a block of a thousand IDs. To get the latter, I want to do this:

select setval('object_id_seq', nextval('object_id_seq') + 1000, false);

Now suppose two processes do this simultaneously. Maybe they're in transactions, maybe they're not. Are they guaranteed to get distinct blocks of IDs? Or is it possible that each will execute nextval() and get N and N+1 respectively, and then do setval() to N+1000 and N+1001, resulting in two overlapping blocks?

If the answer is, "This won't work," then what's a better way to do this?

Thanks,
Craig
Craig James <cjames@emolecules.com> writes:
> I want to do this:
> select setval('object_id_seq', nextval('object_id_seq') + 1000, false);
> Now suppose two processes do this simultaneously.  Maybe they're in
> transactions, maybe they're not.  Are they guaranteed to get distinct
> blocks of IDs?

No, because the setval and the nextval are not indivisible.

> Or is it possible that each will execute nextval() and
> get N and N+1 respectively, and then do setval() to N+1000 and N+1001,
> resulting in two overlapping blocks.

Exactly.

> If the answer is, "This won't work," then what's a better way to do this?

AFAIK the only way at the moment is

* acquire some advisory lock that by convention you use for this sequence
* advance the sequence
* release advisory lock

There have been previous discussions of this type of problem, eg
http://archives.postgresql.org/pgsql-hackers/2011-09/msg01031.php
but the topic doesn't seem to have come up quite often enough to
motivate anybody to do anything about it.  Your particular case could be
handled by a variant of nextval() with a number-of-times-to-advance
argument, but I'm not sure if that's enough for other scenarios.

			regards, tom lane
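[Editor's note: expressed as plain SQL, the three-step recipe above might look like the following sketch. The lock key 1 is an arbitrary convention, not anything from the thread; any key agreed on by all writers to this sequence works. All three statements must run in the same session so the lock is held across the sequence calls.]

```sql
-- Acquire the conventional advisory lock for this sequence.
select pg_advisory_lock(1);

-- Advance the sequence by a whole block; the caller now owns
-- nextval() .. nextval()+999, and the next caller starts at +1000.
select setval('object_id_seq', nextval('object_id_seq') + 1000, false);

-- Release the lock so other block allocators can proceed.
select pg_advisory_unlock(1);
```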
On Mon, Aug 20, 2012 at 6:10 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Craig James <cjames@emolecules.com> writes:
>> I want to do this:
>> select setval('object_id_seq', nextval('object_id_seq') + 1000, false);
--- snip ---
> AFAIK the only way at the moment is
>
> * acquire some advisory lock that by convention you use for this sequence
> * advance the sequence
> * release advisory lock
>
> There have been previous discussions of this type of problem, eg
> http://archives.postgresql.org/pgsql-hackers/2011-09/msg01031.php
> but the topic doesn't seem to have come up quite often enough to
> motivate anybody to do anything about it.  Your particular case could be
> handled by a variant of nextval() with a number-of-times-to-advance
> argument, but I'm not sure if that's enough for other scenarios.

If the OP could live with large gaps in his sequence, he could set it
to advance by say 1000 at a time, and then use the numbers in that gap
freely.  Just a thought.
On Mon, Aug 20, 2012 at 6:59 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Mon, Aug 20, 2012 at 6:10 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
--- snip ---
> If the OP could live with large gaps in his sequence, he could set it
> to advance by say 1000 at a time, and then use the numbers in that gap
> freely.  Just a thought.

Better yet set cache = 1000; here's an example:

create sequence a cache 1000;
T1: select nextval('a');
1
T2: select nextval('a');
1001
T1: select nextval('a');
2
T2: select nextval('a');
1002

and so on.

Now can he just select nextval('a'); 1000 times in a loop?  Or would
he prefer another method.

I guess I'm kind of wondering which problem he's trying to solve.
On Mon, Aug 20, 2012 at 7:06 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
--- snip ---
> Now can he just select nextval('a'); 1000 times in a loop?  Or would
> he prefer another method.
>
> I guess I'm kind of wondering which problem he's trying to solve.

Made a sequence:

create sequence a;

then ran a one line

select nextval('a');

against it 1000 times from bash, i.e. the worst case performance scenario:

time for ((i=0;i<1000;i++));do psql -f t1 > /dev/null;done

real    1m1.978s
user    0m41.999s
sys     0m12.277s

then I ran it a single time on a file with 1000 select nextvals:

time psql -f t1000 > /dev/null

real    0m0.486s
user    0m0.112s
sys     0m0.036s

Then I recreated sequence a:

create sequence a cache 1000;

and ran it again:

time psql -f t1000 > /dev/null

real    0m0.293s
user    0m0.120s
sys     0m0.024s

I'd imagine in a real programming oangua
On Tue, Aug 21, 2012 at 1:41 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
--- snip ---
> I'd imagine in a real programming oangua

sometimes I hate my laptop's touchpad.  Ran something similar in php,
got similar performance.  By comparison, running select 1 instead of
nextval() took ~0.160s to run.
On Tue, Aug 21, 2012 at 2:45 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> sometimes I hate my laptop's touchpad.  Ran something similar in php,
> got similar performance.  By comparison, running select 1 instead of
> nextval() took ~0.160s to run.

you're mostly measuring client overhead I think:

postgres=# explain analyze select nextval('s') from generate_series(1,1000);
                                                     QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
 Function Scan on generate_series  (cost=0.00..12.50 rows=1000 width=0) (actual time=0.149..1.320 rows=1000 loops=1)
 Total runtime: 1.806 ms

postgres=# do $$ begin for x in 1..1000 loop perform nextval('s'); end loop; end; $$ language plpgsql;
DO
Time: 4.333 ms

Anyways, the only reason to do advisory locking is if you
a) strictly need contiguous blocks of ids
and
b) are worried about concurrency and the id is fetched early in a
non-trivial transaction

If a) isn't true, it's better to do looped nextval, and if b) isn't
true, IMO it's better to maintain a value in a table and let mvcc
handle things.  Being able to grab sequences in a block without manual
locking would be a nice feature but only if it could be done without
adding an iota of overhead to standard usage :-).

merlin
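[Editor's note: the "maintain a value in a table" approach mentioned above can be sketched as below. The table and column names are made up for illustration; the thread does not spell this out. Unlike a sequence, this serializes concurrent writers on the row lock, and a reserved block is rolled back if the transaction aborts.]

```sql
-- Hypothetical single-row allocator table; ordinary MVCC row locking
-- makes the UPDATE atomic, so concurrent callers get disjoint blocks.
create table id_allocator (last_id bigint not null);
insert into id_allocator values (0);

-- Reserve a block of 1000 ids in one statement: the caller owns
-- last_id - 999 .. last_id, and the next caller starts above that.
update id_allocator set last_id = last_id + 1000 returning last_id;
```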
On Mon, Aug 20, 2012 at 6:06 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
--- snip ---
> Better yet set cache = 1000; here's an example:
>
> create sequence a cache 1000;
> T1: select nextval('a');
> 1
> T2: select nextval('a');
> 1001
--- snip ---
> Now can he just select nextval('a'); 1000 times in a loop?  Or would
> he prefer another method.
>
> I guess I'm kind of wondering which problem he's trying to solve.

I thought of that, but I can't live with large gaps in the sequence.
It's used for 32-bit keys, and at a maximum rate of use we have about
30 years before we run out of numbers.  If I start using 1000-item
blocks, we could run out in a few months even at today's usage.

Besides which, it doesn't solve the problem, because what do I do when
an application asks for a block of 1001 items?

It's also inefficient to call nextval() 10, 100 (or 10000, or 100000)
times in a row just to get a guaranteed-unique block of identifiers.

Thanks,
Craig
On Mon, Aug 20, 2012 at 5:10 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
--- snip ---
> AFAIK the only way at the moment is
>
> * acquire some advisory lock that by convention you use for this sequence
> * advance the sequence
> * release advisory lock
--- snip ---

So here's what I came up with.  I'm no PLPGSQL guru, but it seemed
pretty straightforward.

create or replace function nextval_block(bsize integer default 1)
returns bigint as $nextval_block$
declare
  bstart bigint;
begin
  perform pg_advisory_lock(1);
  select into bstart nextval('my_seq');
  perform setval('my_seq', bstart + bsize, false);
  perform pg_advisory_unlock(1);
  return bstart;
end;
$nextval_block$ language plpgsql;

As long as I ensure that every application uses nextval_block()
instead of nextval() to access this sequence, I think this will do
what I want.

testdb=> select nextval_block();
 nextval_block
---------------
             1
(1 row)

testdb=> select nextval_block();
 nextval_block
---------------
             2
(1 row)

testdb=> select nextval_block(1000);
 nextval_block
---------------
             3
(1 row)

testdb=> select nextval_block(1000);
 nextval_block
---------------
          1003
(1 row)

testdb=> select nextval_block(1000);
 nextval_block
---------------
          2003
(1 row)

Using psql's \timing directive, it says it's roughly 0.45 msec per
request when the client and server are on the same machine, and 0.55
msec per request when the client and server are on different machines.
Not bad.

Thanks for your help!
Craig
On Tue, Aug 21, 2012 at 10:32 AM, Craig James <cjames@emolecules.com> wrote:
--- snip ---
> So here's what I came up with.  I'm no PLPGSQL guru, but it seemed
> pretty straightforward.
>
> create or replace function nextval_block(bsize integer default 1)
> returns bigint as $nextval_block$
> declare
>   bstart bigint;
> begin
>   perform pg_advisory_lock(1);
>   select into bstart nextval('my_seq');
>   perform setval('my_seq', bstart + bsize, false);
>   perform pg_advisory_unlock(1);
>   return bstart;
> end;
> $nextval_block$ language plpgsql;
--- snip ---

If you also need to get only 1 id, in those cases you can sharelock
instead of full lock -- you can treat the case of blocksize=1 specially.

merlin
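[Editor's note: a sketch of that special case, not code from the thread, assuming the same sequence and lock key as the function above. Single-id callers take a shared advisory lock, so they run concurrently with each other but still wait for any in-progress block allocation, which holds the exclusive lock.]

```sql
create or replace function nextval_block(bsize integer default 1)
returns bigint as $nextval_block$
declare
  bstart bigint;
begin
  if bsize = 1 then
    -- Shared lock: single-id callers don't serialize against each
    -- other; nextval() itself is atomic.
    perform pg_advisory_lock_shared(1);
    select into bstart nextval('my_seq');
    perform pg_advisory_unlock_shared(1);
  else
    -- Exclusive lock: a block allocation excludes everyone, since
    -- the nextval/setval pair is not indivisible on its own.
    perform pg_advisory_lock(1);
    select into bstart nextval('my_seq');
    perform setval('my_seq', bstart + bsize, false);
    perform pg_advisory_unlock(1);
  end if;
  return bstart;
end;
$nextval_block$ language plpgsql;
```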
On Tue, Aug 21, 2012 at 9:32 AM, Craig James <cjames@emolecules.com> wrote:
--- snip ---
> So here's what I came up with.  I'm no PLPGSQL guru, but it seemed
> pretty straightforward.
--- snip ---

That seems unnecessarily complex.  How about this:

create sequence s;
select array_agg (a.b) from (select nextval('s') as b from
generate_series(1,1000)) as a;

Then you just iterate that array for the ids you need.
On Tue, Aug 21, 2012 at 2:03 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
--- snip ---
> That seems unnecessarily complex.  How about this:
>
> create sequence s;
> select array_agg (a.b) from (select nextval('s') as b from
> generate_series(1,1000)) as a;
>
> Then you just iterate that array for the ids you need.

If you want it in a comma-delimited format:

select array_to_string(array_agg (a.b),',') from (select nextval('s')
as b from generate_series(1,1000)) as a;
On Tue, Aug 21, 2012 at 1:03 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> That seems unnecessarily complex.  How about this:
>
> create sequence s;
> select array_agg (a.b) from (select nextval('s') as b from
> generate_series(1,1000)) as a;
>
> Then you just iterate that array for the ids you need.

For brevity I didn't explain the use-case in detail.  I need a series
of IDs that are unique across a cluster of servers and across time
(years and decades).  The blocksize might be anywhere from 1 to
100000.  One server is the master and issues all IDs.

I don't want to iterate over an array to get the values because it's
inefficient: if the blocksize is large (say, 100000 items), it will
require 100000 nextval() calls.  The solution using an advisory lock
along with setval() is nice because the application only makes one
SELECT statement and gets a block of IDs that are guaranteed to be
unique across the cluster.

Craig
On Tue, Aug 21, 2012 at 2:59 PM, Craig James <cjames@emolecules.com> wrote:
--- snip ---
> For brevity I didn't explain the use-case in detail.  I need a series
> of IDs that are unique across a cluster of servers and across time
> (years and decades).  The blocksize might be anywhere from 1 to
> 100000.  One server is the master and issues all IDs.
--- snip ---

Ahhh ok.  Yeah that's why I said early on I wasn't really sure of your
use case, cause that really can make all the difference.  Good to know.