Tom Lane wrote:
> "Michael Paesold" <mpaesold@gmx.at> writes:
> > Only vacuum will reset the insert times to the lowest possible!
> > What does the vacuum code do?? :-]
>
> It removes dead tuples. Dead tuples can only arise from update or
> delete operations ... so you have not been telling us the whole
> truth. An insert-only test would not have this sort of behavior.
>
> regards, tom lane
Sleeping is good. When I woke up this morning I had an idea of
what was causing these problems; and you are right. I had used a
self-written sequence system for the invoice_ids -- I can't use a
normal sequence because sequence values can skip (nextval() is
never rolled back). So inserting an invoice would also update a
single row of the cs_sequence table, and that update caused the
problems.
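Simplified, the old scheme looked like this (just a sketch; the
real column names may differ):

CREATE TABLE cs_sequence (
    name    text PRIMARY KEY,
    last_id integer NOT NULL
);

-- inside the same transaction as the invoice insert:
UPDATE cs_sequence SET last_id = last_id + 1 WHERE name = 'invoice';
SELECT last_id FROM cs_sequence WHERE name = 'invoice';
-- that value becomes the new invoice_id; the row lock taken by
-- the UPDATE serializes concurrent inserts, and a rollback
-- leaves no gap in the numbering.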
Now, with a normal sequence, it works like a charm:
17 sec. for 10000 rows and 2-3 sec. for the commit.
But why does performance degrade so much? After 10000 updates
on a row, the row seems to be unusable without a vacuum! I hope
the autovacuum daemon currently under discussion will help in
such a situation.
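My guess at what happens under the hood (please correct me):

-- each UPDATE leaves the old row version behind as a dead tuple:
UPDATE cs_sequence SET last_id = last_id + 1 WHERE name = 'invoice';
-- after 10000 of these the one-row table carries 10000 dead
-- tuples, and every access has to wade through all of them
VACUUM cs_sequence;  -- removes the dead versions; fast again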
So I think I will have to look for another solution. It would be
nice if one could lock a sequence! That would solve all my
troubles...
<dreaming>
BEGIN;
LOCK SEQUENCE invoice_id_seq;
-- now only this connection can get nextval(), all others will block
INSERT INTO invoice VALUES (nextval('invoice_id_seq'), ...);
INSERT INTO invoice VALUES (nextval('invoice_id_seq'), ...);
...
COMMIT;
-- now this would only help if sequences could be rolled back -- wake up!
</dreaming>
What could you recommend? Locking the table and selecting
max(invoice_id) wouldn't really be much faster, since
max(invoice_id) does not use an index...
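I mean something like this (just a sketch):

BEGIN;
LOCK TABLE invoice IN EXCLUSIVE MODE;
-- blocks concurrent writers, so max() stays correct, but the
-- max() itself is a full sequential scan of the table
INSERT INTO invoice
    SELECT coalesce(max(invoice_id), 0) + 1, ... FROM invoice;
COMMIT;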
Best Regards,
Michael Paesold