Re: Simulating sequences - Mailing list pgsql-general

From Bruno Wolff III
Subject Re: Simulating sequences
Date
Msg-id 20030818154246.GA20703@wolff.to
In response to Simulating sequences  ("Vilson farias" <vilson.farias@digitro.com.br>)
List pgsql-general
On Mon, Aug 18, 2003 at 11:27:14 -0300,
  Vilson farias <vilson.farias@digitro.com.br> wrote:
>
> I tried to fix this problem with a VACUUM, and it was completely
> ineffective. After it ran, the problem was still there. Later, after running
> every kind of vacuum I knew (with and without ANALYZE, specific to that
> table, vacuumdb from the shell ...) I gave up and tried a COPY TO/DROP/
> CREATE/COPY FROM. That finally fixed the performance problem.

Did you try VACUUM FULL?

If you were doing just plain vacuums and waited until the table had
accumulated over a million dead tuples, your free space map (FSM) setting
probably wasn't high enough to let you recover the space.
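As a rough sketch of both suggestions (the table name is made up, and the
right FSM value depends on your own table sizes):

    -- Reclaim the space already lost to dead tuples; note that
    -- VACUUM FULL takes an exclusive lock while it runs:
    VACUUM FULL ANALYZE my_table;

    # In postgresql.conf, raise the free space map so routine plain
    # VACUUMs can track all pages with reusable space (needs a restart):
    max_fsm_pages = 100000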

> What can I do to solve this problem without rebuilding the table? Is this a
> performance degradation in PostgreSQL that can't be solved? If I have a huge
> table with millions of rows being updated constantly, what can I do to keep
> performance good if vacuum isn't 100% effective?

You want to vacuum the table a lot more often. I remember a post (which
should be in the archives) where someone calculated how many updates you
could do before the dead tuples took up more than one block. The suggestion
was to vacuum the table at that point.
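As a rough, invented illustration of that rule: with 8192-byte pages and rows
of around 100 bytes, one block holds on the order of 80 tuples, so a heavily
updated key table would want a plain vacuum every hundred or so updates, which
in practice means scheduling it, e.g. from cron:

    # Hypothetical cron entry: plain VACUUM on the hot table every
    # five minutes (table and database names are made up):
    */5 * * * * vacuumdb --table=my_table my_database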

> Do PostgreSQL sequences deal with these kinds of performance questions? Is
> it clean, fast, and secure to use sequences? Maybe I can still change my
> boss's mind on this subject if I get good arguments here to justify the use
> of sequences.

Besides solving a dead tuple problem, using sequences also avoids contention
by not having to hold locks for the duration of a transaction.
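A minimal sketch of the contrast (all names here are invented): a counter
table keeps its row locked until commit, while nextval() hands out values
without blocking other sessions:

    -- Counter-table approach: this row stays locked until the
    -- transaction commits, serializing every concurrent inserter:
    UPDATE id_counter SET last_id = last_id + 1
        WHERE table_name = 'orders';

    -- Sequence approach: nextval() never blocks other sessions, even
    -- in the middle of a transaction (rollbacks can leave gaps):
    CREATE SEQUENCE orders_id_seq;
    INSERT INTO orders (id) VALUES (nextval('orders_id_seq'));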

