From: Robert Haas
Subject: Re: Spread checkpoint sync
Msg-id: AANLkTimZR4qEmao7m1i+FAvv-z41ZC1nK0oUzxnRuA3G@mail.gmail.com
In response to: Re: Spread checkpoint sync (Jeff Janes <jeff.janes@gmail.com>)
List: pgsql-hackers
On Sun, Jan 16, 2011 at 7:32 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
> But since you already wrote a patch to do the whole thing, I figured
> I'd time it.

Thanks!

> I arranged to test an instrumented version of your patch under large
> shared_buffers of 4GB, conditions that would maximize the opportunity
> for it to take a long time.  Running your compaction to go from 524288
> to a handful (14 to 29, depending on run) took between 36 and 39
> milliseconds.
>
> For comparison, doing just the memcpy part of AbsorbFsyncRequest on
> a full queue took from 24 to 27 milliseconds.
>
> They are close enough to each other that I am no longer interested in
> partial deduplication.  But both are long enough that I wonder if
> having a hash table in shared memory that is kept unique automatically
> at each update might not be worthwhile.

There are basically three operations that we care about here: (1) time
to add an fsync request to the queue, (2) time to absorb requests from
the queue, and (3) time to compact the queue.  The first is by far the
most common, and at least in any situation that anyone's analyzed so
far, the second will be far more common than the third.  Therefore, it
seems unwise to accept any slowdown in #1 to speed up either #2 or #3,
and a hash table probe is definitely going to be slower than what's
required to add an element under the status quo.
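For illustration, here's a minimal standalone sketch of what operation
#1 costs under the status quo.  All of the names here (FsyncRequest,
RequestQueue, forward_fsync_request) are invented for the example, and
a pthread mutex stands in for BgWriterCommLock; this is not the actual
backend code, just the shape of it:

    #include <stdbool.h>
    #include <stddef.h>
    #include <pthread.h>

    typedef struct FsyncRequest
    {
        unsigned    rel_id;     /* which relation file */
        unsigned    seg_no;     /* which segment of it */
    } FsyncRequest;

    typedef struct RequestQueue
    {
        pthread_mutex_t lock;   /* stands in for BgWriterCommLock */
        size_t          num_requests;
        size_t          max_requests;
        FsyncRequest    requests[1024];
    } RequestQueue;

    /*
     * Operation #1: one lock acquisition, one bounds check, one struct
     * copy.  A shared hash table would add a hash computation and a
     * bucket probe (plus collision handling) to this same hot path.
     */
    static bool
    forward_fsync_request(RequestQueue *q, FsyncRequest req)
    {
        bool    ok = true;

        pthread_mutex_lock(&q->lock);
        if (q->num_requests >= q->max_requests)
            ok = false;     /* caller falls back to fsync'ing itself */
        else
            q->requests[q->num_requests++] = req;
        pthread_mutex_unlock(&q->lock);

        return ok;
    }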

We could perhaps mitigate this by partitioning the hash table.
Alternatively, we could split the queue in half and maintain a global
variable - protected by the same lock - indicating which half is
currently open for insertions.  The background writer would grab the
lock, flip the global, release the lock, and then drain the half not
currently open to insertions; the next iteration would flush the other
half.  However, it's unclear to me that either of these things has any
value.  I can't remember any reports of contention on the
BgWriterCommLock, so changing the logic as minimally as possible
seems like the way to go.
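To make the split-queue idea concrete, here's a rough sketch along the
same lines, reusing the FsyncRequest type and includes from the sketch
above.  Again, every name here is invented for illustration, not
proposed patch code:

    typedef struct SplitQueue
    {
        pthread_mutex_t lock;
        int             active;     /* half open for insertions: 0 or 1 */
        size_t          num_requests[2];
        FsyncRequest    requests[2][512];
    } SplitQueue;

    /* Backends: the same cheap append, into whichever half is open. */
    static bool
    split_enqueue(SplitQueue *q, FsyncRequest req)
    {
        bool    ok = true;
        int     half;

        pthread_mutex_lock(&q->lock);
        half = q->active;
        if (q->num_requests[half] >= 512)
            ok = false;
        else
            q->requests[half][q->num_requests[half]++] = req;
        pthread_mutex_unlock(&q->lock);

        return ok;
    }

    /*
     * Background writer: flip the flag under the lock, then drain the
     * closed half without holding the lock.  With a single background
     * writer this is safe, because nobody else touches the closed half
     * once the flip is done.
     */
    static void
    split_absorb(SplitQueue *q, void (*remember)(FsyncRequest))
    {
        int     drain;
        size_t  i;

        pthread_mutex_lock(&q->lock);
        drain = q->active;
        q->active = 1 - q->active;  /* new insertions go to the other half */
        pthread_mutex_unlock(&q->lock);

        for (i = 0; i < q->num_requests[drain]; i++)
            remember(q->requests[drain][i]);
        q->num_requests[drain] = 0;
    }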

(In contrast, note that the WAL insert lock, proc array lock, and lock
manager/buffer manager partition locks are all known to be heavily
contended in certain workloads.)

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

