Re: Experimental patch for inter-page delay in VACUUM - Mailing list pgsql-hackers

From Jan Wieck
Subject Re: Experimental patch for inter-page delay in VACUUM
Date
Msg-id 3FA72AE9.9090903@Yahoo.com
In response to Re: Experimental patch for inter-page delay in VACUUM  (Ang Chin Han <angch@bytecraft.com.my>)
List pgsql-hackers
Ang Chin Han wrote:
> Christopher Browne wrote:
>> Centuries ago, Nostradamus foresaw when "Stephen" <jleelim@xxxxxxx.com> would write:
>> 
>>>As it turns out. With vacuum_page_delay = 0, VACUUM took 1m20s (80s)
>>>to complete, with vacuum_page_delay = 1 and vacuum_page_delay = 10,
>>>both VACUUMs completed in 18m3s (1080 sec). A factor of 13 times! 
>>>This is for a single 350 MB table.
>> 
>> 
>> While it is unfortunate that the minimum quanta seems to commonly be
>> 10ms, it doesn't strike me as an enormous difficulty from a practical
>> perspective.
> 
> If we can't lower the minimum quanta, we could always vacuum 2 pages 
> before sleeping 10ms, effectively sleeping 5ms.
> 
> Say,
> vacuum_page_per_delay = 2
> vacuum_time_per_delay = 10

That's exactly what I did ... look at the combined experiment posted 
under subject "Experimental ARC implementation". The two parameters are 
named vacuum_page_groupsize and vacuum_page_delay.
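
In case it helps to see the shape of it, here is a simplified standalone
sketch of the loop -- not the actual patch. Only the two parameter names
are the real ones; the function name and the usleep() call are just
stand-ins for illustration.

/*
 * Simplified sketch only -- not the actual patch.  After every
 * vacuum_page_groupsize pages we sleep vacuum_page_delay milliseconds,
 * so the effective per-page delay can drop below the 10ms scheduler
 * quantum (groupsize = 2, delay = 10 gives ~5ms per page).
 */
#include <unistd.h>

static int vacuum_page_groupsize = 2;    /* pages processed per sleep */
static int vacuum_page_delay = 10;       /* milliseconds slept */

static void
vacuum_heap_sketch(int nblocks)
{
    int blkno;
    int pages_since_sleep = 0;

    for (blkno = 0; blkno < nblocks; blkno++)
    {
        /* ... read the page and clean out dead tuples here ... */

        if (vacuum_page_delay > 0 &&
            ++pages_since_sleep >= vacuum_page_groupsize)
        {
            usleep(vacuum_page_delay * 1000L);  /* stand-in for the real sleep */
            pages_since_sleep = 0;
        }
    }
}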

> 
> What would be interesting would be pg_autovacuum changing these values 
> per table, depending on current I/O load.
> 
> Hmmm. Looks like there's a lot of interesting things pg_autovacuum can do:
> 1. When on low I/O load, running multiple vacuums on different, smaller 
> tables on full speed, careful to note that these vacuums will increase 
> the I/O load as well.
> 2. When on high I/O load, vacuum big, busy tables slowly.
> 
From what I see here, with the two parameters above, the ARC scan 
resistance, and the changed strategy for where to place pages faulted 
in by vacuum, I think one can handle that pretty well now. It's 
certainly much better than before.
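
If pg_autovacuum ever wants to pick these knobs per table as suggested
above, I could imagine something along these lines. Pure speculation on
my part -- the load metric, the struct, and all the thresholds are made
up for illustration:

/*
 * Speculative sketch, not pg_autovacuum code.  Given some estimate of
 * the current I/O load (0.0 = idle .. 1.0 = saturated) and the table
 * size, pick the two vacuum pacing knobs for the next run.
 */
typedef struct
{
    int groupsize;      /* vacuum_page_groupsize */
    int delay_ms;       /* vacuum_page_delay */
} VacuumPacing;

static VacuumPacing
choose_pacing(double io_load, double table_mb)
{
    VacuumPacing p;

    if (io_load < 0.25 && table_mb < 100.0)
    {
        p.groupsize = 1;        /* quiet system, small table: full speed */
        p.delay_ms = 0;
    }
    else if (io_load > 0.75)
    {
        p.groupsize = 1;        /* busy system: trickle through slowly */
        p.delay_ms = 20;
    }
    else
    {
        p.groupsize = 2;        /* in between: ~5ms effective per page */
        p.delay_ms = 10;
    }
    return p;
}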

What still needs to be addressed is the I/O storm caused by checkpoints. 
I see it much relaxed when stretching the BufferSync() out over most of 
the time until the next checkpoint is due. But the kernel sync at its 
end still pushes the system hard against the wall.
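
To make "stretching it out" concrete, here is a sketch of what I mean --
not the real BufferSync(), which walks the shared buffer pool; the helper
names, the batch size, and the 90% figure are invented:

/*
 * Sketch only.  Dribble the dirty buffers out in small batches over most
 * of the interval until the next checkpoint, so the checkpoint itself is
 * left with little more than the final kernel sync.
 */
#include <unistd.h>

#define WRITE_BATCH 32                  /* buffers written per burst */

static void
write_dirty_buffer(int buf_id)
{
    /* stand-in: hand dirty buffer buf_id to the kernel */
}

static void
spread_buffer_sync(int ndirty, int secs_to_next_checkpoint)
{
    int  nbatches = (ndirty + WRITE_BATCH - 1) / WRITE_BATCH;
    long usec_per_batch;
    int  i;

    if (nbatches == 0)
        return;

    /* spend ~90% of the interval writing, keep the rest as slack */
    usec_per_batch = (long) secs_to_next_checkpoint * 900000L / nbatches;

    for (i = 0; i < ndirty; i++)
    {
        write_dirty_buffer(i);

        if ((i + 1) % WRITE_BATCH == 0)
        {
            long remaining = usec_per_batch;

            while (remaining > 0)       /* usleep() in sub-second chunks */
            {
                long chunk = (remaining > 500000L) ? 500000L : remaining;

                usleep((unsigned int) chunk);
                remaining -= chunk;
            }
        }
    }
}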


Jan

-- 
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #


