Thread: Never-idle autovacuum, and does (auto)vacuuming fill the OS cache ?


From: Vincent de Phily

Hi list,

I've got a heavily-updated table, and about 30 customers on the same system,
each with their own copy of the table in a separate database. The 3 configured
autovacuum workers take turns vacuuming the table in each customer db;
autovacuum is never idle and takes a large part of the available IO.

Fearing that vacuuming might fall behind, and hoping to see the system idle
every now and then, I increased autovacuum_vacuum_cost_limit to 500 and
decreased autovacuum_vacuum_cost_delay to 10. First question: is this a
sensible thing to do, or am I better off ignoring the constant vacuuming and
trusting that things will get done in time? With the new settings, autovacuum
is still constant (even though each run takes less time), but I'm wary of
making autovacuum even less "io-nice".
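For reference, here is the change as a postgresql.conf sketch (the commented
stock values are from memory of the 8.3 defaults, so double-check them against
your own config):

```
# postgresql.conf -- autovacuum throttling knobs discussed above.
# Stock 8.3 behaviour (from memory): autovacuum_vacuum_cost_limit = -1,
# i.e. inherit vacuum_cost_limit (default 200), with a 20ms delay.
autovacuum_vacuum_cost_limit = 500    # credits spent before each sleep
autovacuum_vacuum_cost_delay = 10ms   # sleep between cost cycles
```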

Second thing: the vacuumed tables and indexes taken together are bigger than
the available OS disk cache. Does vacuuming them fill the cache, or is there
some kind of O_DIRECT in use? I have a feeling (entirely unverified) that this
is not the most useful data I could have in my cache.

This is all on PG 8.3. I know upgrading would improve things (particularly
since a large percentage of the table remains static between vacuums), but
we're still too busy for that right now (unless you tell me I'm going to see a
night-and-day difference regarding this particular issue).


Thanks.
--
Vincent de Phily

Re: Never-idle autovacuum, and does (auto)vacuuming fill the OS cache ?

From: Andrew Sullivan

On Thu, Oct 06, 2011 at 11:58:15AM +0200, Vincent de Phily wrote:
> Fearing that vacuuming might accumulate lateness and hoping to see the system
> idle every now and then,

Why is your goal to see the system idle every now and then?  It's not
going to get tired if it keeps working, and if you have a lot of work
and can spin out that work so that the system always has a little bit
of work to do, then you use your resources more efficiently.

Normally, one likes to see some idle time because it is evidence of
"headroom" -- that you have more capacity than you actually need.  If
that's the reason you want to see the idle times, then surely you
don't want to tune the system with the goal of causing idleness.  You
want to tune the system so that the work gets done in as smooth and
fast a way as possible.  So I would aim for maximum throughput (including
but not exclusively complete table maintenance) and then check whether
you're getting any idle time.  Off the cuff, though, it sounds to me
like you need more capacity than you have.

> This is all on PG 8.3. I know upgrading would improve things
> (particularly since a large percentage of the table remains static
> between vacuums), but we're still too busy for that right now
> (unless you tell me I'm going to see a night-and-day difference
> regarding this particular issue).

I think it might be more "dusk and day", but I have had very
impressive performance from 9.0.  Haven't tried 9.1.

A

--
Andrew Sullivan
ajs@crankycanuck.ca

Re: Never-idle autovacuum, and does (auto)vacuuming fill the OS cache ?

From: Vincent de Phily

On Thursday 06 October 2011 07:00:20 Andrew Sullivan wrote:
> On Thu, Oct 06, 2011 at 11:58:15AM +0200, Vincent de Phily wrote:
> > Fearing that vacuuming might accumulate lateness and hoping to see the
> > system idle every now and then,
>
> Why is your goal to see the system idle every now and then?  It's not
> going to get tired if it keeps working, and if you have a lot of work
> and can spin out that work so that the system always has a little bit
> of work to do, then you use your resources more efficiently.
>
> Normally, one likes to see some idle time because it is evidence of
> "headroom" -- that you have more capacity than you actually need.  If
> that's the reason you want to see the idle times, then surely you
> don't want to tune the system with the goal of causing idleness.  You
> want to tune the system so that the work gets done in as smooth and
> fast a way as possible.  So I would aim for maximum throughput (including
> but not exclusively complete table maintenance) and then check whether
> you're getting any idle time.  Off the cuff, though, it sounds to me
> like you need more capacity than you have.

I agree idleness per se is not the goal, and whatever work needs to be done
might as well be spread smoothly over time. Idleness *is*, however, a measure
of the resource headroom available, and that headroom was uncomfortably small
(for IO) in this case. I was just groping for more performance out of the
system.

In the case of vacuuming, however, I think there's a point to be made about
finishing fast when all vacuum workers are constantly busy: say the vacuum
daemon notices that there are 10 tables that need vacuuming now. It allocates
3 workers, but while they do their intentionally-slow work, the other 7 tables
keep creating more vacuumable tuples, so it'll be more work overall because
they're "late" in their "vacuum schedule". Does that make sense (I'm not sure
it does)?
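For what it's worth, that worry can be put into toy numbers (every rate below
is invented for illustration, not measured from this system):

```python
# Toy model of the "falling behind" worry: tables keep producing dead
# tuples while a fixed pool of throttled workers reclaims them.
# Every rate here is invented for illustration.

def backlog_after(seconds, n_tables=10, dead_per_table_per_s=100,
                  n_workers=3, reclaim_per_worker_per_s=250):
    """Net unreclaimed dead tuples after `seconds` of steady load."""
    produced = n_tables * dead_per_table_per_s * seconds
    reclaimed = n_workers * reclaim_per_worker_per_s * seconds
    return max(produced - reclaimed, 0)

# 10 tables * 100 tuples/s = 1000 produced/s vs 3 * 250 = 750 reclaimed/s:
# the backlog grows by 250 tuples every second and never drains.
print(backlog_after(60))    # 15000
# Make the workers slightly faster than the churn and there is no backlog.
print(backlog_after(60, reclaim_per_worker_per_s=400))    # 0
```

The point of the sketch is just that once aggregate churn exceeds aggregate
throttled vacuum throughput, the backlog grows linearly and never drains on
its own.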


Anyway, my particular issue is solved for now: I realized those tables were
terribly bloated (often more than 99% slack), so I ran VACUUM FULL on them and
now the autovacuums run very fast and the disk is 90% idle again. That slack
probably appeared at table-initialization time because the fsm was not big
enough. I've since raised the fsm settings, though I think they were already
big enough for normal (non-init) usage.

I'm still interested in more opinions about my two questions:
 * When does it make sense to make autovacuum more aggressive on IO, and by
   how much?
 * Does vacuuming fill the OS's disk cache, and is it an issue if it does?
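On the first question, one hedged way to reason about "by how much" is to turn
the cost-based throttle into a rough I/O ceiling: a worker spends up to
cost_limit credits, sleeps for cost_delay, and repeats. Assuming the worst
case where every 8 kB page read is a cache miss costing vacuum_cost_page_miss
= 10 credits (the stock value, if I remember right):

```python
# Back-of-the-envelope read throughput permitted by vacuum's cost-based
# throttling: spend `cost_limit` credits, sleep `cost_delay_ms`, repeat.
# Assumes every page is a miss (vacuum_cost_page_miss = 10) and 8 kB
# pages; real vacuums mix hit/miss/dirty costs, so this is only an
# upper-bound sketch.

def vacuum_read_mb_per_s(cost_limit, cost_delay_ms,
                         page_miss_cost=10, page_kb=8):
    pages_per_cycle = cost_limit / page_miss_cost
    cycles_per_s = 1000.0 / cost_delay_ms
    return pages_per_cycle * cycles_per_s * page_kb / 1024.0

# Stock throttling (limit 200, delay 20ms): ~7.8 MB/s of misses.
print(round(vacuum_read_mb_per_s(200, 20), 2))   # 7.81
# The tuning from this thread (limit 500, delay 10ms): ~39 MB/s,
# i.e. roughly five times more aggressive on IO.
print(round(vacuum_read_mb_per_s(500, 10), 2))   # 39.06
```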


Cheers.

--
Vincent de Phily

Re: Never-idle autovacuum, and does (auto)vacuuming fill the OS cache ?

From: Andrew Sullivan

On Thu, Oct 06, 2011 at 03:39:02PM +0200, Vincent de Phily wrote:
> In the case of vacuuming however, I think there's a point to be made about
> finishing fast when all vacuum workers are constantly busy : say the vacuum
> daemon notices that there are 10 tables that need vacuuming now. It allocates
> 3 workers, but while they do their intentionally-slow work, the other 7
> tables keep creating more vacuumable tuples, so it'll be more work overall
> because they're "late" in their "vacuum schedule". Does that make sense (I'm
> not sure it does) ?

Yes, that's exactly the issue.  You need to balance the resources
you're taking away from the "real" database transactions (i.e. the user
ones) against the cost of letting the work wait, which will probably
cost those user transactions in performance too.  The reason there's no
magic solution is that much of this depends on your usage patterns.

> Anyway, my particular issue is solved for now : I realized those tables were
> terribly bloated (often more than 99% slack), so I vacuum-fulled them and now
> the autovacuums run very fast and the disk is 90% idle again. That slack
> probably appeared at table initialization time because the fsm was not big
> enough. I since raised the fsm, but I think it's big enough during normal
> (non-init) usage anyway.

This is what you want to keep an eye on, then.  Why do you think it
came from "initialization time", though?  VACUUM only has work to do
when dead tuples show up (e.g. from DELETE or UPDATE), and normally
when you first populate a table you do a COPY, which isn't going to
create dead tuples.

> I'm still interested in more opinions about my two questions :
>  * When does it make sense to make autovacuum more aggressive on IO, and by
>    how much ?

At bottom, you don't want your tables to get so bloated that they
exhibit the problem you just saw, but you also don't want vacuum to be
taking so much I/O that your other tasks can't get done.  That's the
general principle; how it applies to your case depends rather on usage
patterns.  For instance, if you know that there will be at most 10%
churn on every table every day, but all transactions happen between
9:00 and 17:00 local time, then it's probably safe to allow that to
happen: as long as your FSM can keep track, it can all be recovered
every day after 17:00, so you might as well allow the work to build
up, and let the vacuums happen when they're not stealing any I/O from
user queries.  If, on the other hand, you get 100% churn on 50% of the
tables every day between 09:00 and 11:00, and the rest of the day is
mostly read-only traffic, with read-only traffic during all 24 hours
(don't scoff -- I had exactly this problem once) then you want to be
quite aggressive with the autovacuum settings, because keeping that
100% bloat down is going to pay off in a big way on the read-only
traffic.

>  * Does vacuuming fill the OS's disk cache, and is it an issue if it does ?

Well, it _affects_ the OS's disk cache.  Whether it fills it is
controlled by the cache algorithms and the amount of memory you have
devoted to cache.  Every time you touch the disk, you potentially
alter the cache in favour of what you just saw.

In the above artificial examples, the vacuums that run "after everyone
went home" will almost certainly end up taking over the cache, because
there's no other activity to keep other things in the disk cache.  In
the second example, though, with a lot of read-only activity all the
time, the things that are most popular are likely to remain in a
(modern) disk cache most of the time because they're called so often
that the vacuumed page doesn't end up being enough traffic to cause an
eviction (or, anyway, to evict for any significant time).

A

--
Andrew Sullivan
ajs@crankycanuck.ca

Re: Never-idle autovacuum, and does (auto)vacuuming fill the OS cache ?

From: Vincent de Phily

On Thursday 06 October 2011 12:09:30 Andrew Sullivan wrote:
> On Thu, Oct 06, 2011 at 03:39:02PM +0200, Vincent de Phily wrote:
> > In the case of vacuuming however, I think there's a point to be made
> > about finishing fast when all vacuum workers are constantly busy : say
> > the vacuum daemon notices that there are 10 tables that need vacuuming
> > now. It allocates 3 workers, but while they do their intentionally-slow
> > work, the other 7 tables keep creating more vacuumable tuples, so
> > it'll be more work overall because they're "late" in their "vacuum
> > schedule". Does that make sense (I'm not sure it does) ?
>
> Yes, that's exactly the issue.  You need to balance the resource
> you're depriving the "real" database transactions (i.e. the user ones)
> against the cost of waiting, which waiting will probably cost those
> user transactions in performance.  The reason there's no magic
> solution is because much of this depends on your use patterns.

Ok, I'm glad my reasoning wasn't completely flawed :)

> > Anyway, my particular issue is solved for now : I realized those tables
> > were terribly bloated (often more than 99% slack), so I vacuum-fulled
> > them and now the autovacuums run very fast and the disk is 90% idle
> > again. That slack probably appeared at table initialization time
> > because the fsm was not big enough. I since raised the fsm, but I think
> > it's big enough during normal (non-init) usage anyway.
>
> This is what you want to keep an eye on, then.

Yup, watching that.

> Why do you think it
> came from "initialization time", though?  VACUUM only has work to do
> when dead tuples show up (e.g. from DELETE or UPDATE), and normally
> when you first populate a table you do a COPY, which isn't going to
> create dead tuples.

Those tables are a hand-made, trigger-maintained "materialized view" created
about 2 months ago. Initializing them meant doing a full seqscan of the
reference table and doing one insert and 1-2 updates for each row in the MV
table. The work was split into thousands of transactions with a
load-dependent sleep between them, in order not to impact user queries. Those
updates (and some inserts) still happen during normal usage, but at a much
slower pace, which autovacuum should have no trouble keeping up with.


> > I'm still interested in more opinions about my two questions :
> >  * When does it make sense to make autovacuum more aggressive on IO,
> >    and by how much ?
>
> At bottom, you don't want your tables to get so bloated that they
> exhibit the problem you just saw, but you also don't want vacuum to be
> taking so much I/O that your other tasks can't get done.  That's the
> general principle; how it applies to your case depends rather on usage
> patterns.  For instance, if you know that there will be at most 10%
> churn on every table every day, but all transactions happen between
> 9:00 and 17:00 local time, then it's probably safe to allow that to
> happen: as long as your FSM can keep track, it can all be recovered
> every day after 17:00, so you might as well allow the work to build
> up, & let the vacuums happen when they're not stealing any I/O from
> user queries.  If, on the other hand, you get 100% churn on 50% of the
> tables every day between 09:00 and 11:00, and the rest of the day is
> mostly read-only traffic, with read-only traffic during all 24 hours
> (don't scoff -- I had exactly this problem once) then you want to be
> quite aggressive with the autovacuum settings, because keeping that
> 100% bloat down is going to pay off in a big way on the read-only
> traffic.

Interesting. Although if you have such well-defined churn times, wouldn't it
be better to vacuum from cron instead of from autovacuum? You also don't want
to autovacuum now if you know your churn will be over in 15 minutes. It looks
like it's going to be hard to extract general rules.
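Something like this in the postgres user's crontab is the kind of thing I had
in mind (only a sketch; vacuumdb ships with postgres, and the time and options
are to taste):

```
# Plain VACUUM ANALYZE on all databases each evening, once the
# hypothetical 9:00-17:00 churn window has closed (times illustrative).
15 17 * * * vacuumdb --all --analyze --quiet
```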

One of my motivations for making autovacuum more aggressive was that my fsm
was too small and I didn't want to restart PG yet to take the new value into
account. So "finish this vacuum faster and get on to the next one" was a way
to do that "next one" before the fsm overflowed. But I now realize it's a very
bad kludge, and I should just have my fsm sized right (or sized automatically;
have I already said that I long to upgrade? :p)

> >  * Does vacuuming fill the OS's disk cache, and is it an issue if it
> >  does ?
> Well, it _affects_ the OS's disk cache.  Whether it fills it is
> controlled by the cache algorithms and the amount of memory you have
> devoted to cache.  Every time you touch the disk, you potentially
> alter the cache in favour of what you just saw.
>
> In the above artificial examples, the vacuums that run "after everyone
> went home" will almost certainly end up taking over the cache, because
> there's no other activity to keep other things in the disk cache.  In
> the second example, though, with a lot of read-only activity all the
> time, the things that are most popular are likely to remain in a
> (modern) disk cache most of the time because they're called so often
> that the vacuumed page doesn't end up being enough traffic to cause an
> eviction (or, anyway, to evict for any significant time).

Ok, so say my churn happens only in the last 10 minutes of data, and read-only
queries only look at the last 24 hours of data: if vacuuming is triggered
every 48 hours, that's 24 hours of data that will potentially get pulled back
into the cache with no benefit (or, if I'm not mistaken, with PG <= 8.3 it's
much more than 24 hours). Pity. Is there a counter-example where there is a
caching benefit to the current behaviour? If not, that might be a low-hanging
fruit for improving postgres performance.



Thanks for your interesting replies.

--
Vincent de Phily

Re: Never-idle autovacuum, and does (auto)vacuuming fill the OS cache ?

From: Cédric Villemain

2011/10/7 Vincent de Phily <vincent.dephily@mobile-devices.fr>:
> [...]
> Ok, so say my churn happens only in the last 10 minutes of data and readonly
> queries only look at the last 24 hours of data, if vacuuming is triggered
> every 48 hours, that's 24 hours of data that will potentially get back into
> the cache with no benefit (or if I'm not mistaken, with PG =< 8.3 it's much
> more than 24 hours). Pity. Is there a counter-example where there is a caching
> benefit to the current behaviour ? If not, that might be a low-hanging fruit
> to improve postgres performance.

Not a direct answer, but some notes after reading the thread:

 * 8.4 comes with the visibility map, which is nice for reducing IO usage
   (and no more trouble with the FSM GUCs)
 * the postgresql cache has its own, DB-oriented logic.
 * the operating system cache is smart enough not to waste all of your
   cache when reading one file sequentially.
 * you may also increase the number of autovacuum workers (depending on
   your IO and CPU)
 * it is better to change the autovacuum settings via cron than to run
   vacuum itself from cron.

If you wonder, you can use pgfincore to track your OS cache usage per
table and index, and pg_buffercache for the PostgreSQL cache.
Note that pgfincore is lock-free, but pg_buffercache may impact your
performance (it is still interesting to use it to check how your
shared buffers are used, and it can help in optimising your shared
memory size).

--
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation