My application needs a counter that's guaranteed not to lose any values
even in the case of a rollback (so sequences are out), so I have a
singleton table that keeps track of where we are in the pseudo-sequence.
The table is very simple:
CREATE TABLE t ( next BIGINT );
INSERT INTO t ( next ) VALUES ( 0 );
During the course of things, we do a bulk-load-ish operation (but not
just a bulk load) that executes the following code many times in a
single transaction:
SELECT next FROM t;
-- The application increments the value it just read, then:
UPDATE t SET next = $1; -- $1 = next + 1
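For timing purposes the whole thing boils down to a loop roughly like the
PL/pgSQL sketch below (simplified; the function name is just for the test,
and the real application issues the two statements separately with bind
parameters):

-- Read, increment, and write back the singleton row N times.
CREATE OR REPLACE FUNCTION bump_counter(iterations INTEGER) RETURNS void AS $$
DECLARE
    cur BIGINT;
BEGIN
    FOR i IN 1..iterations LOOP
        SELECT next INTO cur FROM t;   -- read the current value
        UPDATE t SET next = cur + 1;   -- write back next + 1
    END LOOP;
END;
$$ LANGUAGE plpgsql;

-- e.g. SELECT bump_counter(1000); timed with \timing in psql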
The problem is that the performance of those two statements is pretty slow
and degrades as more of them are performed. For example, running the
SELECT and UPDATE in a loop (as in the sketch above), the timings
degrade like this:
1000: 891ms
2000: 1296ms
3000: 1735ms
4000: 2312ms
5000: 2844ms
6000: 3328ms
7000: 3875ms
8000: 4531ms
9000: 4875ms
Not very fast, and we're losing another 400-500 ms per thousand
iterations, which is pretty painful.
I've tried various things, such as indexes, WHERE clauses, and an
additional UID column to key on, but nothing helps much.
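For example, the keyed-row variant looked roughly like this (the column
and index names are just placeholders):

-- Add a key column plus a unique index so the statements can target
-- the row with a WHERE clause; it didn't change the numbers much.
ALTER TABLE t ADD COLUMN uid INTEGER;
UPDATE t SET uid = 1;
CREATE UNIQUE INDEX t_uid_idx ON t ( uid );

-- The per-iteration statements then become:
SELECT next FROM t WHERE uid = 1;
UPDATE t SET next = $1 WHERE uid = 1; -- $1 = next + 1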
It seems to me that it ought to be possible to update a singleton row
without it costing so much, and without the speed degrading as it goes.
Does anyone know why I'm seeing this behaviour, or have a suggestion on
how to correct it?
Thanks in advance.
Adam Pritchard