Re: autovacuum strategy / parameters - Mailing list pgsql-performance

From Robert Haas
Subject Re: autovacuum strategy / parameters
Date
Msg-id AANLkTinaUgBnN67hwCmWmdlKk3LP6snNSTCgwJyXMkuW@mail.gmail.com
In response to Re: autovacuum strategy / parameters  (Greg Smith <greg@2ndquadrant.com>)
List pgsql-performance
On Sat, May 1, 2010 at 1:11 PM, Greg Smith <greg@2ndquadrant.com> wrote:
> Robert Haas wrote:
>>
>> I don't have a stake in the ground on what the right settings are, but
>> I think it's fair to say that if you vacuum OR analyze much less
>> frequently than what we recommend by default, it might break.
>>
>
> I think the default settings are essentially minimum recommended
> frequencies.  They aren't too terrible for the giant data warehouse case
> Josh was suggesting they came from--waiting until there's 20% worth of dead
> stuff before kicking off an intensive vacuum is OK when vacuum is expensive
> and you're mostly running big queries anyway.  And for smaller tables, the
> threshold helps it kick in a little earlier.  It's unlikely anyone wants to
> *increase* those, so that autovacuum runs even less; out of the box it's not
> tuned to run very often at all.
>
> If anything, I'd expect people to want to increase how often it runs, for
> tables where much less than 20% dead is a problem.  The most common
> situation I've seen where that's the case is when you have a hotspot of
> heavily updated rows in a large table, and this may match some of the
> situations that Robert was alluding to seeing.  Let's say you have a big
> table where 0.5% of the users each update their respective records heavily,
> averaging 30 times each.  That's only going to result in 15% dead rows, so
> no autovacuum.  But latency for those users will suffer greatly, because
> they might have to do lots of seeking around to get their little slice of
> the data.
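
For reference, autovacuum triggers a vacuum of a table once its dead
tuples exceed about autovacuum_vacuum_threshold +
autovacuum_vacuum_scale_factor * reltuples, and both settings can be
overridden per table on 8.4 and later.  A minimal sketch of tightening
that for a hotspot table -- the table name and the numbers here are
purely illustrative, not a recommendation:

    -- vacuum once roughly 1% of the table is dead, not the default 20%
    ALTER TABLE user_accounts
      SET (autovacuum_vacuum_scale_factor = 0.01,
           autovacuum_vacuum_threshold = 200);

    -- the analyze trigger can be tightened the same way
    ALTER TABLE user_accounts
      SET (autovacuum_analyze_scale_factor = 0.02);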

For me it's more that my applications are typically really fast, and
when they run at half-speed people think "oh, it's slow today," but
they can still work, so they attribute the problem to their computer,
or the network, or something.  When they slow down by 10x, they file a
bug.  I'm typically dealing with a situation where the whole database
can easily be cached in RAM and the CPU is 90% idle, which cushions
the blow quite a bit.

A few months ago someone reported that "the portal was slow," and the
problem turned out to be that the database was bloated by in excess of
a factor of 10 due to having blown out the free space map.  I wasn't a
regular user of that system at the time, so I hadn't had the
opportunity to notice it myself.
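
In case it's useful, a rough way to notice that something is off before
the users complain is to watch on-disk size relative to live and dead
row counts in the stats collector.  Just a sketch; it doesn't measure
bloat precisely, but a table whose size looks far out of line with its
live row count is worth a closer look:

    SELECT relname,
           n_live_tup,
           n_dead_tup,
           pg_size_pretty(pg_total_relation_size(relid)) AS total_size
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;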

...Robert
