Thread: Re: Migration study, step 1: bulk write performance optimization

Re: Migration study, step 1: bulk write performance optimization

From
"Mikael Carneholm"
Date:
>>On Mon, 2006-03-20 at 15:59 +0100, Mikael Carneholm wrote:

>> That gives ~380s for 10Gb => ~27Mb/s (with fsync=off), compared to the raw dd result (~75.5Mb/s).
>>
>> I assume this difference is due to:
>> - simultaneous WAL write activity (assumed: for each byte written to the table, at least one byte is also written to
>> WAL; in effect: 10Gb of data inserted into the table equals 20Gb written to disk)
>> - lousy test method (it is done using a function => the transaction size is 10Gb, and 10Gb will *not* fit in
>> wal_buffers :) )
>> - poor config

>> checkpoint_segments = 3

>With those settings, you'll be checkpointing every 48Mb, which will be
>about once per second. Since the checkpoint will take a reasonable
>amount of time, even with fsync off, you'll be spending most of your
>time checkpointing. bgwriter will just be slowing you down too, because
>you'll always have more clean buffers than you can use: you have
>132MB of shared_buffers, yet you're flushing all of them at every checkpoint.

>Please read your logfile, which should have relevant WARNING messages.

It does ("LOG: checkpoints are occurring too frequently (2 seconds apart)"). However, I tried increasing
checkpoint_segments to 32 (512Mb), making it checkpoint every 15 seconds or so, but that gave a more uneven insert rate
than with checkpoint_segments=3. Maybe 64 segments (1024Mb) would be a better value? If I set checkpoint_segments to 64,
what would a reasonable bgwriter setup be? I still need to improve my understanding of the relations between
checkpoint_segments <-> shared_buffers <-> bgwriter...  :/
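[Editor's note: the arithmetic behind these intervals can be sketched as follows. This is a rough model, not measured data: it assumes the 8.x-era rule of thumb that a WAL-triggered checkpoint fires about every checkpoint_segments * 16Mb of WAL, and it reuses the ~27Mb/s bulk-insert rate measured above, treating WAL volume as roughly equal to table volume.]

```python
# Rough model of WAL-triggered checkpoint frequency during a bulk load.
# Assumption (8.x-era behavior): a checkpoint fires roughly every
# checkpoint_segments * 16Mb of WAL written.
SEGMENT_MB = 16

def checkpoint_interval_s(checkpoint_segments, wal_mb_per_s):
    """Approximate seconds between WAL-triggered checkpoints."""
    return checkpoint_segments * SEGMENT_MB / wal_mb_per_s

# ~27Mb/s of WAL, assuming WAL volume roughly matches table volume
print(checkpoint_interval_s(3, 27.0))   # ~1.8s -- matches the "2 seconds apart" warning
print(checkpoint_interval_s(32, 27.0))  # ~19s  -- matches "every 15 seconds or so"
```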

- Mikael


Re: Migration study, step 1: bulk write

From
Simon Riggs
Date:
On Wed, 2006-03-22 at 10:04 +0100, Mikael Carneholm wrote:
> but that gave a more uneven insert rate

Not sure what you mean, but happy to review test results.

You should be able to tweak other parameters from here as you had been
trying. Your bgwriter will be of some benefit now if you set it
aggressively enough to keep up.

Your thoughts on this process are welcome...

Best Regards, Simon Riggs


Re: Migration study, step 1: bulk write performance optimization

From
"Jim C. Nasby"
Date:
On Wed, Mar 22, 2006 at 10:04:49AM +0100, Mikael Carneholm wrote:
> It does ("LOG: checkpoints are occurring too frequently (2 seconds apart)"). However, I tried increasing
> checkpoint_segments to 32 (512Mb), making it checkpoint every 15 seconds or so, but that gave a more uneven insert
> rate than with checkpoint_segments=3. Maybe 64 segments (1024Mb) would be a better value? If I set
> checkpoint_segments to 64, what would a reasonable bgwriter setup be? I still need to improve my understanding of
> the relations between checkpoint_segments <-> shared_buffers <-> bgwriter...  :/

Probably the easiest way is to set checkpoint_segments to something like
128 or 256 (or possibly higher), and then make the bgwriter more aggressive
by increasing bgwriter_*_maxpages dramatically (maybe start with 200).
You might want to raise bgwriter_lru_percent as well; otherwise it will take a
minimum of 20 seconds to fully scan the buffer pool.

Basically, slowly start increasing settings until performance smooths
out.
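[Editor's note: expressed as a postgresql.conf fragment, the suggested starting point might look like this. It is a sketch only: the parameter names are the 8.1-era bgwriter settings, and the values are starting guesses to be tuned upward until the insert rate smooths out, not recommendations from the thread.]

```
# Starting point for bulk-load tuning, per the advice above -- not final values.
checkpoint_segments   = 128    # ~2Gb of WAL between checkpoints
bgwriter_lru_maxpages = 200    # default 5; write far more dirty pages per round
bgwriter_all_maxpages = 200    # default 5
bgwriter_lru_percent  = 5.0    # default 1.0 => ~20s to scan all of shared_buffers
bgwriter_all_percent  = 5.0    # default 0.333
```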
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

Re: Migration study, step 1: bulk write performance optimization

From
Tom Lane
Date:
"Jim C. Nasby" <jnasby@pervasive.com> writes:
> On Wed, Mar 22, 2006 at 10:04:49AM +0100, Mikael Carneholm wrote:
>> It does ("LOG: checkpoints are occurring too frequently (2 seconds apart)"). However, I tried increasing
>> checkpoint_segments to 32 (512Mb), making it checkpoint every 15 seconds or so, but that gave a more uneven insert
>> rate than with checkpoint_segments=3. Maybe 64 segments (1024Mb) would be a better value? If I set
>> checkpoint_segments to 64, what would a reasonable bgwriter setup be? I still need to improve my understanding of
>> the relations between checkpoint_segments <-> shared_buffers <-> bgwriter...  :/

> Probably the easiest way is to set checkpoint_segments to something like
> 128 or 256 (or possibly higher), and then make the bgwriter more aggressive
> by increasing bgwriter_*_maxpages dramatically (maybe start with 200).

Definitely.  You really don't want checkpoints happening oftener than
once per several minutes (five or ten if possible).  Push
checkpoint_segments as high as you need to make that happen, and then
experiment with making the bgwriter parameters more aggressive in order
to smooth out the disk write behavior.  Letting the physical writes
happen via bgwriter is WAY cheaper than checkpointing.

bgwriter parameter tuning is still a bit of a black art, so we'd be
interested to hear what works well for you.
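[Editor's note: under the same rough model as before (8.x behavior: a WAL-triggered checkpoint about every checkpoint_segments * 16Mb of WAL), the formula can be inverted to pick checkpoint_segments for Tom's five-to-ten-minute target. The ~27Mb/s rate is the bulk-load figure from earlier in the thread, so this is a worst-case sustained estimate.]

```python
import math

SEGMENT_MB = 16  # size of one WAL segment file in 8.x

def segments_for_interval(target_s, wal_mb_per_s):
    """Smallest checkpoint_segments that keeps WAL-triggered checkpoints
    at least target_s apart at the given sustained WAL write rate."""
    return math.ceil(target_s * wal_mb_per_s / SEGMENT_MB)

print(segments_for_interval(300, 27.0))  # 5 minutes at ~27Mb/s -> 507 segments (~8Gb of WAL)
```

At a sustained bulk-load rate this comes out far above the 128-256 suggested earlier, which is consistent with "push checkpoint_segments as high as you need"; steady-state workloads write much less WAL, so they need far fewer segments for the same interval.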

            regards, tom lane