Re: Insert Performance - Mailing list pgsql-hackers

From Tom Lane
Subject Re: Insert Performance
Date
Msg-id 3674.1033049111@sss.pgh.pa.us
In response to Re: Insert Performance  ("Michael Paesold" <mpaesold@gmx.at>)
List pgsql-hackers
"Michael Paesold" <mpaesold@gmx.at> writes:
> So inserting an invoice would also do an update on a single row
> of the cs_sequence table, which caused the problems.

> Now, with a normal sequence, it works like a charm.
> 17 sec. for 10000 rows and 2-3 sec. for commit.

> But why does performance degrade so much? After 10000 updates
> on a row, the row seems to be unusable without a vacuum!

Probably because the table contains 10000 dead tuples and one live one.
The system is scanning all 10001 tuples looking for the one to UPDATE.
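The effect can be illustrated with a toy Python model of non-overwriting
storage (this is only a sketch of the idea, not PostgreSQL's actual
implementation; the Heap class and its methods are invented for
illustration): each UPDATE marks the old row version dead and appends a
new one, so every later scan must wade through all the dead versions
until a vacuum reclaims them.

```python
# Toy model of MVCC-style non-overwriting storage (illustrative only;
# not PostgreSQL's real storage code).

class Heap:
    def __init__(self):
        self.tuples = []  # each entry is [value, is_dead]

    def insert(self, value):
        self.tuples.append([value, False])

    def update(self, old, new):
        """An UPDATE marks the old version dead and appends a new one.
        Returns how many tuples had to be scanned to find the live row."""
        scanned = 0
        for t in self.tuples:
            scanned += 1
            if not t[1] and t[0] == old:
                t[1] = True
                self.tuples.append([new, False])
                return scanned
        raise KeyError(old)

    def vacuum(self):
        """VACUUM reclaims dead versions, shrinking future scans."""
        self.tuples = [t for t in self.tuples if not t[1]]

heap = Heap()
heap.insert(0)
for i in range(10000):
    heap.update(i, i + 1)

print(len(heap.tuples))  # 10001: 10000 dead versions plus one live row
heap.vacuum()
print(len(heap.tuples))  # 1: only the live row survives
```

Note that the cost of each update in this model grows linearly with the
number of prior updates, which is exactly why the original poster saw
performance degrade steadily over 10000 inserts.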

In 7.3 it might help a little to create an index on the table.  But
really this is one of the reasons that SEQUENCEs were invented ---
you have no alternative but to do frequent vacuums, if you repeatedly
update the same row of a table.  You might consider issuing a selective
"VACUUM cs_sequence" command every so often (ideally every few hundred
updates).
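One way to automate that selective vacuum is a crontab entry driving
vacuumdb's -t (single table) option. This is a hypothetical sketch: the
five-minute interval and the database name "invoices" are assumptions
you would adjust to your own update rate and setup.

```shell
# Hypothetical crontab entry: vacuum only the hot cs_sequence table
# every five minutes (database name "invoices" is an assumption).
*/5 * * * * vacuumdb -t cs_sequence invoices
```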

> I hope the currently discussed autovacuum daemon will help in such a
> situation.

Probably, if we can teach it to recognize that such frequent vacuums are
needed.  In the meantime, cron is your friend ...
        regards, tom lane

