Group Commit - Mailing list pgsql-hackers

From Heikki Linnakangas
Subject Group Commit
Date
Msg-id 460B9A5F.1090708@enterprisedb.com
List pgsql-hackers
I've been working on the patch to enhance our group commit behavior. The 
patch is a dirty hack at the moment, but I've settled on the algorithm 
I'm going to use and I know the issues involved.

Here's the patch as it is if you want to try it out:
http://community.enterprisedb.com/groupcommit-pghead-2.patch

but it needs a rewrite before being accepted. It'll only work on systems 
that use SysV semaphores: I needed to add a function to acquire a 
semaphore with a timeout, and for now I've only done that in sysv_sema.c.


What are the chances of getting this in 8.3, assuming that I rewrite and 
submit a patch within the next week or two?


Algorithm
---------

Instead of starting a WAL flush immediately after a commit record is 
inserted, we wait a while to give other backends a chance to finish 
their transactions and have them flushed by the same fsync call. There 
are two things we can control: how many commits to wait for (commit 
group size), and how long to wait (timeout).

We try to estimate the optimal commit group size. The estimate is:

commit group size = (# of commit records flushed)
                  + (# of commit records that arrived while fsyncing)

This is a relatively simple estimate that works reasonably well with 
very short transactions, and the timeout limits the damage when the 
estimate is not working.
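
In C-ish terms the estimate is just this (the counter names here are 
made up for illustration, not taken from the patch):

/*
 * Sketch of the estimate: commits covered by the last fsync, plus
 * commits that showed up while that fsync was in progress.
 */
static int
EstimateGroupSize(int nFlushedCommits, int nArrivedDuringFsync)
{
    return nFlushedCommits + nArrivedDuringFsync;
}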

There are many more factors we could take into account in the estimate, 
for example:
- # of backends and their states (affects how many are likely to commit 
soon)
- amount of WAL written since the last XLogFlush (affects the duration 
of the fsync)
- when exactly the commit records arrive (we don't want to wait 10 ms to 
get one more commit record in, when an fsync takes 11 ms)

but I wanted to keep this simple for now.

The timeout is currently hard-coded at 1 ms. I wanted to keep it short 
compared to the time an fsync takes (somewhere in the 5-15 ms range, 
depending on hardware), to limit the damage when the algorithm isn't 
getting the estimate right. We could also vary the timeout, but I'm not 
sure how to calculate the optimal value, and the real timer granularity 
will depend on the system anyhow.

Implementation
--------------

To count the # of commits since last XLogFlush, I added a new 
XLogCtlCommit struct in shared memory:

typedef struct XLogCtlCommit
{
    slock_t     commit_lock;    /* protects the struct */
    int         commitCount;    /* # of commit records inserted since
                                 * XLogFlush */
    int         groupSize;      /* current commit group size */
    XLogRecPtr  lastCommitPtr;  /* location of the latest commit record */
    PGPROC     *waiter;         /* process to signal when groupSize is
                                 * reached */
} XLogCtlCommit;
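
The setup at shmem-init time would look roughly like this (a sketch 
only; apart from the struct itself, the function and variable names here 
are mine, not the patch's):

static XLogCtlCommit *XLogCommit = NULL;

void
XLogCommitShmemInit(void)
{
    bool        found;

    XLogCommit = (XLogCtlCommit *)
        ShmemInitStruct("XLOG Commit Ctl", sizeof(XLogCtlCommit), &found);

    if (!found)
    {
        /* first time through: zero the counters and set a modest goal */
        MemSet(XLogCommit, 0, sizeof(XLogCtlCommit));
        SpinLockInit(&XLogCommit->commit_lock);
        XLogCommit->groupSize = 1;
    }
}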

Whenever a commit record is inserted in XLogInsert, commitCount is 
incremented and lastCommitPtr is updated.
When commitCount reaches groupSize, the waiter process is woken up.
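
In pseudo-C, using the XLogCommit pointer from the sketch above (again, 
the names and the exact wakeup mechanism are my guesses, not verbatim 
from the patch):

/*
 * Sketch of the bookkeeping XLogInsert would do right after inserting a
 * commit record.  recEnd is the end position of that record.
 */
static void
XLogCommitRecordInserted(XLogRecPtr recEnd)
{
    PGPROC     *waiter = NULL;

    SpinLockAcquire(&XLogCommit->commit_lock);
    XLogCommit->commitCount++;
    XLogCommit->lastCommitPtr = recEnd;
    if (XLogCommit->waiter != NULL &&
        XLogCommit->commitCount >= XLogCommit->groupSize)
    {
        waiter = XLogCommit->waiter;
        XLogCommit->waiter = NULL;
    }
    SpinLockRelease(&XLogCommit->commit_lock);

    /* wake the flushing backend outside the spinlock */
    if (waiter != NULL)
        PGSemaphoreUnlock(&waiter->sem);
}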

In XLogFlush, after acquiring WALWriteLock, we wait until groupSize is 
reached (or timeout expires) before doing the flush.

Instead of the current logic to flush as much WAL as possible, we flush 
up to the last commit record. Flushing any more wouldn't save us an 
fsync later on, but might make the current fsync take longer. By doing 
that, we avoid the conditional acquire of the WALInsertLock that's in 
there currently. We make note of commitCount before starting the fsync; 
that's the # of commit records that arrived in time so that the fsync 
will flush them. Let's call that value "intime".

After the fsync is finished, we update groupSize for the next round. 
The new groupSize is the current commitCount after the fsync, IOW the 
number of commit records that arrived since the previous XLogFlush, 
including those that arrived while the fsync was running. We then 
decrement commitCount by "intime".

Now we're ready for the next round, and we can release WALWriteLock.
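
Putting the pieces together, the commit-group part of XLogFlush looks 
roughly like this. PGSemaphoreTimedLock stands for the new 
acquire-with-timeout function mentioned above (its name and signature 
are assumptions), and XLogWriteUpTo() is just a stand-in for the 
existing write+fsync code:

/*
 * Sketch of the commit-group logic in XLogFlush, after WALWriteLock has
 * been acquired.  Error handling and non-commit callers are elided.
 */
static void
FlushCommitGroup(void)
{
    bool        registered = false;
    int         intime;
    XLogRecPtr  flushTo;

    /* wait until the group is full, or the timeout expires */
    SpinLockAcquire(&XLogCommit->commit_lock);
    if (XLogCommit->commitCount < XLogCommit->groupSize)
    {
        XLogCommit->waiter = MyProc;
        registered = true;
    }
    SpinLockRelease(&XLogCommit->commit_lock);

    if (registered)
        PGSemaphoreTimedLock(&MyProc->sem, 1);  /* 1 ms, hard-coded for now */

    /* "intime" = commit records that this fsync will cover */
    SpinLockAcquire(&XLogCommit->commit_lock);
    XLogCommit->waiter = NULL;
    intime = XLogCommit->commitCount;
    flushTo = XLogCommit->lastCommitPtr;
    SpinLockRelease(&XLogCommit->commit_lock);

    /* flush only up to the last commit record, not as far as possible */
    XLogWriteUpTo(flushTo);

    /*
     * New group size = commit records that arrived since the previous
     * XLogFlush, including those that arrived during the fsync itself.
     * The latter stay counted for the next round.
     */
    SpinLockAcquire(&XLogCommit->commit_lock);
    XLogCommit->groupSize = XLogCommit->commitCount;
    XLogCommit->commitCount -= intime;
    SpinLockRelease(&XLogCommit->commit_lock);
}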

WALWriteLock
------------

The above would work nicely, except that a normal lwlock doesn't 
cooperate. You can release and reacquire a lightweight lock within the 
same time slice even when other backends are queued for the lock, 
effectively cutting in line.

Here's what sometimes happens, with 2 clients:

Client 1                 Client 2
do work                  do work
insert commit record     insert commit record
acquire WALWriteLock     try to acquire WALWriteLock, blocks
fsync
release WALWriteLock
begin new transaction
do work
insert commit record
reacquire WALWriteLock
wait for 2nd commit to arrive

Client 1 will eventually time out and commit just its own commit record. 
Client 2 should be released immediately after client 1 releases the 
WALWriteLock. It only needs to observe that its commit record has 
already been flushed and doesn't need to do anything.

To fix the above, and other race conditions like that, we need a 
specialized WALWriteLock that orders the waiters by the commit record 
XLogRecPtrs. WALWriteLockRelease wakes up all waiters that have their 
commit record already flushed. They will just fall through without 
acquiring the lock.
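
To illustrate the idea only (the queue and field names below, such as 
walWriteLock and waitCommitPtr, are made up for this sketch; the real 
thing would be built into the lwlock machinery with its own bookkeeping):

/*
 * Sketch of an ordered WALWriteLock release.  Waiters are queued by the
 * XLogRecPtr of the commit record they're waiting on.
 */
static void
WALWriteLockReleaseOrdered(XLogRecPtr flushedUpTo)
{
    PGPROC     *proc;

    SpinLockAcquire(&walWriteLock.mutex);
    walWriteLock.held = false;

    /*
     * Everyone whose commit record is already on disk is woken up without
     * being granted the lock; they just fall through in XLogFlush.
     */
    while ((proc = walWriteLock.head) != NULL &&
           XLByteLE(proc->waitCommitPtr, flushedUpTo))
    {
        walWriteLock.head = proc->lwWaitLink;
        proc->lwWaiting = false;
        PGSemaphoreUnlock(&proc->sem);
    }

    /* hand the lock to the next remaining waiter, if any */
    if ((proc = walWriteLock.head) != NULL)
    {
        walWriteLock.head = proc->lwWaitLink;
        walWriteLock.held = true;
        proc->lwWaiting = false;
        PGSemaphoreUnlock(&proc->sem);
    }

    SpinLockRelease(&walWriteLock.mutex);
}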

-- 
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com

