On Thu, Jul 20, 2017 at 8:24 AM, Stephen Frost <sfrost@snowman.net> wrote:
> * Tom Lane (tgl@sss.pgh.pa.us) wrote:
>> "Joshua D. Drake" <jd@commandprompt.com> writes:
>> > At PGConf US Philly last week I was talking with Jim and Jan about
>> > performance. One of the items that came up is that PostgreSQL can't run
>> > full throttle for long periods of time. The long and short is that no
>> > matter what, autovacuum can't keep up. This is what I have done:
>>
>> Try reducing autovacuum_vacuum_cost_delay more, and/or increasing
>> autovacuum_vacuum_cost_limit.
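
(For anyone wanting to try this, a minimal sketch of that tuning could
look like the following; the exact values are only illustrative and
would need to be tested against the real workload.  The current
defaults are autovacuum_vacuum_cost_delay = 20ms and
autovacuum_vacuum_cost_limit = -1, i.e. fall back to
vacuum_cost_limit = 200.)

    -- throttle autovacuum less: shorter sleeps, bigger work quantum
    ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '5ms';
    ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 1000;
    SELECT pg_reload_conf();  -- both GUCs take effect on reload
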
>
> Or get rid of the cost delay entirely and let autovacuum actually go as
> fast as it can when it's run. The assertion that it can't keep up is
> still plausible, but configuring autovacuum to sleep regularly and then
> complaining that it's not able to keep up doesn't make sense.
>
> Reducing the nap time might also be helpful if autovacuum is going as
> fast as it can and it's able to clear a table in less than a minute.
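
(Concretely, removing the throttling altogether and letting idle
workers come back around sooner would look something like the sketch
below.  Again the naptime value is only an example; the default is
1min, and 0 disables the cost-based delay for autovacuum entirely.)

    -- let autovacuum run at full speed, and check for work more often
    ALTER SYSTEM SET autovacuum_vacuum_cost_delay = 0;
    ALTER SYSTEM SET autovacuum_naptime = '15s';
    SELECT pg_reload_conf();
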
>
> There have been discussions on this list about parallel vacuum of a
> particular table as well.  To address this issue I'd encourage
> reviewing those discussions and writing a patch to implement that
> feature, as it would address the case where the table is so large
> that autovacuum simply can't get through all of it before the other
> backends have used all of the available free space and substantially
> increased the size of the relation (which in turn makes vacuum of the
> table run even longer).
Yeah, parallel vacuum of a particular table might help with this issue
unless disk I/O is the bottleneck.  I'm planning to work on that.
Regards,
--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center