From: Heikki Linnakangas
Subject: Re: Load Distributed Checkpoints, take 3
Msg-id: 467A8AE5.1050806@enterprisedb.com
In response to: Re: Load Distributed Checkpoints, take 3 (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Load Distributed Checkpoints, take 3 (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-patches
Tom Lane wrote:
> Heikki Linnakangas <heikki@enterprisedb.com> writes:
>> Tom Lane wrote:
>>> I tend to agree with whoever said upthread that the combination of GUC
>>> variables proposed here is confusing and ugly.  It'd make more sense to
>>> have min and max checkpoint rates in KB/s, with the max checkpoint rate
>>> only honored when we are predicting we'll finish before the next
>>> checkpoint time.
>
>> Really? I thought everyone is happy with the current combination, and
>> that it was just the old trio of parameters controlling the write, nap
>> and sync phases that was ugly. One particularly nice thing about
>> defining the duration as a fraction of checkpoint interval is that we
>> can come up with a reasonable default value that doesn't depend on your
>> hardware.
>
> That argument would hold some water if you weren't introducing a
> hardware-dependent min rate in the same patch.  Do we need the min rate
> at all?  If so, why can't it be in the same units as the max (ie, a
> fraction of checkpoint)?
>
>> How would a min and max rate work?
>
> Pretty much the same as the code does now, no?  You either delay, or not.

I don't think you understand how the settings work. Did you read the
documentation? If you did, it's apparently not adequate.

The main tuning knob is checkpoint_smoothing, which is defined as a
fraction of the checkpoint interval (both checkpoint_timeout and
checkpoint_segments are taken into account). Normally, the write phase
of a checkpoint takes exactly that much time.

So the length of a checkpoint stays the same regardless of how many
dirty buffers there are (assuming you don't exceed the bandwidth of your
hardware), but the I/O rate varies.
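
To make that concrete, here's a rough sketch in C of the
constant-duration pacing as I read it; the helper names and numbers are
mine, not code from the patch:

/*
 * Rough sketch (not the patch's code) of constant-duration pacing:
 * nap whenever the fraction of write work done is ahead of the elapsed
 * fraction of the checkpoint interval, scaled by checkpoint_smoothing.
 */
#include <stdbool.h>
#include <stdio.h>

static bool
ahead_of_schedule(int written, int to_write, double elapsed_frac,
                  double smoothing)
{
    double done_frac = (double) written / to_write;
    double target_frac = elapsed_frac / smoothing; /* finish at "smoothing" */

    return done_frac > target_frac;    /* true => sleep before next write */
}

int
main(void)
{
    /* 2000 of 10000 buffers written, 10% of interval gone, smoothing 0.8 */
    printf("nap? %d\n", ahead_of_schedule(2000, 10000, 0.10, 0.8));
    return 0;
}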

There's another possible strategy: keep the I/O rate constant, but vary
the length of the checkpoint. checkpoint_rate allows you to do that.

I'm envisioning we set the defaults so that checkpoint_smoothing is the
effective control in a relatively busy system, and checkpoint_rate
ensures that we don't unnecessarily prolong checkpoints on an idle system.
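
A small sketch of how I imagine the two settings combining (assuming
checkpoint_rate is expressed in KB/s, as discussed upthread; again this
is illustrative, not the patch verbatim): the smoothing window sets the
slow pace on a busy system, while checkpoint_rate puts a floor under the
write rate so an idle system finishes early instead of dribbling a
handful of writes over the whole window.

/*
 * Sketch of how the two settings could combine (my reading, not the
 * patch verbatim): the smoothing window sets the slow pace, while
 * checkpoint_rate (assumed KB/s here) puts a floor under it.
 */
#include <stdio.h>

#define BLCKSZ 8192   /* PostgreSQL block size, bytes */

static double
buffers_per_second(int to_write, double window_secs, double rate_kbs)
{
    double smoothed = to_write / window_secs;          /* spread evenly */
    double floor_rate = rate_kbs * 1024.0 / BLCKSZ;    /* minimum rate */

    return smoothed > floor_rate ? smoothed : floor_rate;
}

int
main(void)
{
    /* busy: 100000 dirty buffers, 240 s window -> smoothing governs */
    printf("busy: %.1f buffers/s\n", buffers_per_second(100000, 240.0, 512.0));
    /* idle: 500 dirty buffers -> the rate floor wins, checkpoint ends early */
    printf("idle: %.1f buffers/s\n", buffers_per_second(500, 240.0, 512.0));
    return 0;
}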

Now how would you replace checkpoint_smoothing with a max I/O rate? The
only real alternative is to define the duration directly as a time
and/or a number of segments, similar to checkpoint_timeout and
checkpoint_segments, but then we'd really need two knobs instead of one.

Though maybe we could just hard-code it to 0.8, for example, and tell
people to tune checkpoint_rate if they want more aggressive checkpoints.

--
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com
