Markus Benne <thing@m-bass.com> writes:
> We have a highly active table that has virtually all
> entries updated every 5 minutes. Typical size of the
> table is 50,000 entries, and entries have grown fat.
> We are currently vacuuming hourly, and towards the end
> of the hour we are seeing degradation, when compared
> to the top of the hour.
On something like this, you really need to be vacuuming more often,
not less; I'd think about how to do it every five or ten minutes
rather than backing off. With only 50K rows it should really not take
more than a couple of seconds to do the vacuum. When you wait till
there are 600K dead rows, it's going to take a while, plus you are
suffering across-the-board performance degradation from all the dead
rows.
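For instance (the database and table names below are just
placeholders), a cron entry along these lines would revacuum it every
five minutes:

    */5 * * * *  psql -d mydb -c "VACUUM hot_table;"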
If you are using PG 8.0, there are some "vacuum cost" knobs you can
fiddle with to slow down vacuum so it doesn't impose as much I/O load.
Ideally you could get it to where you could run vacuum as often as
you need to without noticing much impact on foreground processing.
If you're not using 8.0 ... maybe it's time to upgrade.
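A rough sketch (the table name is made up; these are the 8.0
cost-based vacuum delay settings, which are off by default):

    SET vacuum_cost_delay = 10;   -- sleep 10 ms each time the cost limit is hit
    SET vacuum_cost_limit = 200;  -- accumulated I/O cost that triggers a sleep
    VACUUM hot_table;

That trades a longer-running vacuum for less I/O pressure on the
foreground queries.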
Another thing you might want to do is look at "vacuum verbose" output,
which will give you some idea of the time spent in each step. It might
be that there are specific aspects that could be improved.
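For example (made-up table name again):

    VACUUM VERBOSE hot_table;

prints, for the heap and each index, the tuples and pages cleaned plus
the CPU and elapsed time spent on each step.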
> We are thinking of splitting the table in two: the
> part that updates often, and the part that updates
> infrequently, as we suspect that record size impacts
> vacuum.
You just said that virtually all rows update constantly --- where's
the "infrequent" part?
regards, tom lane