Thread: More benchmarking of wal_buffers

More benchmarking of wal_buffers

From: "Christopher Kings-Lynne"

Hi Everyone,

I've just spent the last day and a half trying to benchmark our new database
installation to find a good value for wal_buffers.  The quick answer - there
isn't, just leave it on the default of 8.

The numbers just swing up and down so much that it's impossible to say one
setting is better than another.  I've attached an OpenOffice doc with my old
shared_buffers tests plus the wal_buffers tests.  The wal_buffers results are
a bit deceptive, as the ones I've included are really what I consider the
'average' results.  Just occasionally I'd get a spike that I could never
repeat...
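
Roughly, one pass per setting looks like this (the scale factor, client
count and transaction count are only placeholders, and PGDATA is assumed
to point at the cluster):

    pgbench -i -s 10 bench                       # initialise the test tables once
    for n in 8 32 128 1024; do
        pg_ctl restart -w -o "-c wal_buffers=$n"
        pgbench -c 64 -t 500 bench               # 64 clients, 500 xacts each
    done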

Even if you look at the attached charts and you think that 128 buffers are
better than 8, think again - there's nothing in it.  Next time I run that
benchmark it could be the same, lower or higher.  And the difference between
the worst and best results is less than 3 TPS - ie. nothing.

One proof that has come out of this is that wal_buffers does not affect
SELECT only performance in any way.  So, for websites where the
select/update ratio is very high, wal_buffers is almost an irrelevant
optimisation.
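
(The SELECT-only case is easy to check directly: if your pgbench has the
-S switch, it runs a select-only script that writes no WAL at all, e.g.

    pgbench -S -c 64 -t 1000 bench      # read-only workload, no WAL written

so any variation you see there is pure noise.)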

Even on massively heavy sites where you're getting continuous write
transactions from 64 simultaneous users, I was unable to prove that any
setting other than the default helped.  In that situation, the commit_delay
and commit_siblings variables will probably give you the best gains.
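
For reference, those two knobs live in postgresql.conf; the stock defaults
are shown below, and useful values are very workload-dependent, so treat
any change as something to benchmark rather than a recommendation:

    commit_delay = 0         # usec to wait before flushing WAL at commit (0 = off)
    commit_siblings = 5      # min. concurrent open transactions before delaying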

I'm not sure what I could test next.  Does FreeBSD support anything other
than fsync?  eg. fdatasync, etc.  I can't see it in the man pages...

Chris

P.S. I don't think the attachments are too large, but if they annoy anyone,
tell me.  Also, I've cross-posted to make sure people who read my previous
benchmark see this one as well.

Re: More benchmarking of wal_buffers

From: Bruce Momjian

Christopher Kings-Lynne wrote:
> I'm not sure what I could test next.  Does FreeBSD support anything other
> than fsync?  eg. fdatasync, etc.  I can't see it in the man pages...

You are already getting the best default for your OS.  It says 'fsync' as
the default, but the comment says the default is OS-specific.  The only
thing you can compare there is open_fdatasync vs fdatasync.
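
You can check which method your build actually picked with

    SHOW wal_sync_method;

and override it in postgresql.conf, e.g.

    #wal_sync_method = fsync    # fsync, fdatasync, open_sync or open_datasync;
                                # only methods your OS provides are accepted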

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

Re: More benchmarking of wal_buffers

From: Neil Conway

On Thu, 2003-02-13 at 00:16, Christopher Kings-Lynne wrote:
> Even if you look at the attached charts and you think that 128 buffers are
> better than 8, think again - there's nothing in it.  Next time I run that
> benchmark it could be the same, lower or higher.  And the difference between
> the worst and best results is less than 3 TPS - ie. nothing.

One could conclude that this is a result of the irrelevancy of wal_buffers;
another possible conclusion is that the testing tool (pgbench) is not a
particularly good database benchmark, as it tends to be very difficult to
get reproducible results out of it.  Alternatively, it's possible that the
limited set of test cases you've used doesn't happen to include any
circumstances in which wal_buffers is useful.

We definitely need some better benchmarking tools for PostgreSQL (and
no, OSDB does not cut it, IMHO). I've been thinking of taking a look at
improving this, but I can't promise I'll get the time or inclination to
actually do anything about it :-)

Cheers,

Neil
--
Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC

Re: More benchmarking of wal_buffers

From: Tom Lane

"Christopher Kings-Lynne" <chriskl@familyhealth.com.au> writes:
> I've just spent the last day and a half trying to benchmark our new database
> installation to find a good value for wal_buffers.  The quick answer - there
> isn't, just leave it on the default of 8.

I don't think this is based on a useful test for wal_buffers.  The
wal_buffers setting only has to be large enough for the maximum amount
of WAL log data that your system emits between commits, because a commit
(from anyone) is going to flush the WAL data to disk (for everyone).
So a benchmark based on short transactions is just not going to show
any benefit to increasing the setting.
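
Concretely, with the stock 8 kB WAL block size the default setting covers
only

    8 buffers x 8 kB/buffer = 64 kB

of WAL between commits, and a benchmark whose transactions commit after
touching a row or two never comes close to filling that.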

Benchmarking, say, the speed of massive COPY IN operations might show
some advantage to larger wal_buffers.  Although I'm not real sure that
it'll make any difference for any single-backend test.  It's really just
the case where you have concurrent transactions that all make lots of
updates before committing that's likely to show a win.
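
A crude way to exercise that case would be a few concurrent sessions doing
large COPYs and timing the lot; the table and file names here are only
placeholders (and server-side COPY needs a superuser):

    psql bench -c "COPY bigtable FROM '/tmp/bigfile.dat'" &
    psql bench -c "COPY bigtable FROM '/tmp/bigfile.dat'" &
    wait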

> One proof that has come out of this is that wal_buffers does not affect
> SELECT only performance in any way.

Coulda told you that without testing ;-).  Read-only transactions emit
no WAL entries.

            regards, tom lane

Re: More benchmarking of wal_buffers

From: "Christopher Kings-Lynne"

> I don't think this is based on a useful test for wal_buffers.  The
> wal_buffers setting only has to be large enough for the maximum amount
> of WAL log data that your system emits between commits, because a commit
> (from anyone) is going to flush the WAL data to disk (for everyone).
> So a benchmark based on short transactions is just not going to show
> any benefit to increasing the setting.

Yes, I guess the TPC-B test does many very short transactions.  Each
transaction basically comprises a handful of single-row writes, so I guess
it wouldn't really test it.
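
From memory, the stock pgbench transaction is just this sort of thing (give
or take the exact table and column names in the contrib version):

    BEGIN;
    UPDATE accounts SET abalance = abalance + :delta WHERE aid = :aid;
    SELECT abalance FROM accounts WHERE aid = :aid;
    UPDATE tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
    UPDATE branches SET bbalance = bbalance + :delta WHERE bid = :bid;
    INSERT INTO history (tid, bid, aid, delta, mtime)
        VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
    END;

so each commit only ever has a tiny amount of WAL behind it.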

> > One proof that has come out of this is that wal_buffers does not affect
> > SELECT only performance in any way.
>
> Coulda told you that without testing ;-).  Read-only transactions emit
> no WAL entries.

I knew that as well - that's why I said "proof" ;)

Chris


Re: More benchmarking of wal_buffers

From: "Christopher Kings-Lynne"

> I don't think this is based on a useful test for wal_buffers.  The
> wal_buffers setting only has to be large enough for the maximum amount
> of WAL log data that your system emits between commits, because a commit
> (from anyone) is going to flush the WAL data to disk (for everyone).
> So a benchmark based on short transactions is just not going to show
> any benefit to increasing the setting.

Here's a question then - what is the _drawback_ to having 1024 wal_buffers
as opposed to 8?

Chris


Re: More benchmarking of wal_buffers

From: Tom Lane

"Christopher Kings-Lynne" <chriskl@familyhealth.com.au> writes:
> Here's a question then - what is the _drawback_ to having 1024 wal_buffers
> as opposed to 8?

Waste of RAM?  You'd be better off leaving that 8 meg available for use
as general-purpose buffers ...
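
(The 8 meg is just the buffer count times the 8 kB WAL block size:

    1024 buffers x 8 kB/buffer = 8192 kB = 8 MB

all of which is permanently set aside for WAL and nothing else.)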

            regards, tom lane

Re: [PERFORM] More benchmarking of wal_buffers

From: "Christopher Kings-Lynne"

> "Christopher Kings-Lynne" <chriskl@familyhealth.com.au> writes:
> > Here's a question then - what is the _drawback_ to having 1024 wal_buffers
> > as opposed to 8?
>
> Waste of RAM?  You'd be better off leaving that 8 meg available for use
> as general-purpose buffers ...

What I mean is say you have an enterprise server doing heaps of transactions
with lots of work.  If you have scads of RAM, could you just shove up
wal_buffers really high and assume it will improve performance?

Chris


Re: [PERFORM] More benchmarking of wal_buffers

From: Kevin Brown

Tom Lane wrote:
> "Christopher Kings-Lynne" <chriskl@familyhealth.com.au> writes:
> > I've just spent the last day and a half trying to benchmark our new database
> > installation to find a good value for wal_buffers.  The quick answer - there
> > isn't, just leave it on the default of 8.
>
> I don't think this is based on a useful test for wal_buffers.  The
> wal_buffers setting only has to be large enough for the maximum amount
> of WAL log data that your system emits between commits, because a commit
> (from anyone) is going to flush the WAL data to disk (for everyone).

What happens when the only transaction running emits more WAL log data
than wal_buffers can handle?  Does a flush happen when the WAL buffers
fill up (that's what I'd expect)?  I didn't find much in the
documentation about it...


--
Kevin Brown                          kevin@sysexperts.com

Re: [PERFORM] More benchmarking of wal_buffers

From: Tom Lane

Kevin Brown <kevin@sysexperts.com> writes:
> What happens when the only transaction running emits more WAL log data
> than wal_buffers can handle?  Does a flush happen when the WAL buffers
> fill up (that's what I'd expect)?  I didn't find much in the
> documentation about it...

A write, not a flush (ie, we don't force an fsync).  Also, I think it
writes only a few blocks, not all the available data.  Don't recall the
details on that.

            regards, tom lane

Re: [PERFORM] More benchmarking of wal_buffers

From: Tom Lane

"Christopher Kings-Lynne" <chriskl@familyhealth.com.au> writes:
> What I mean is say you have an enterprise server doing heaps of transactions
> with lots of work.  If you have scads of RAM, could you just shove up
> wal_buffers really high and assume it will improve performance?

There is no such thing as infinite RAM (or if there is, you paid *way*
too much for your database server).  My feeling is that it's a bad
idea to put more than you absolutely have to into single-use buffers.
Multi-purpose buffers are usually a better use of RAM.

            regards, tom lane

Re: [PERFORM] More benchmarking of wal_buffers

From: Curt Sampson

On Thu, 13 Feb 2003, Tom Lane wrote:

> "Christopher Kings-Lynne" <chriskl@familyhealth.com.au> writes:
> > What I mean is say you have an enterprise server doing heaps of transactions
> > with lots of work.  If you have scads of RAM, could you just shove up
> > wal_buffers really high and assume it will improve performance?
>
> There is no such thing as infinite RAM (or if there is, you paid *way*
> too much for your database server).  My feeling is that it's a bad
> idea to put more than you absolutely have to into single-use buffers.
> Multi-purpose buffers are usually a better use of RAM.

Well, yes, but he was talking about 8 MB of WAL buffers.  On a machine
with, say, 2 GB of RAM, that's an insignificant amount (0.4% of your
memory), and so I would say that it basically can't hurt at all.  If your
log is on the same disk as your data, the larger writes when doing a big
transaction, such as a COPY, might in fact be a noticeable win.
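
If you ever do split them out, the usual trick is just to relocate pg_xlog
and leave a symlink behind (the target path here is a placeholder):

    pg_ctl stop
    mv $PGDATA/pg_xlog /otherdisk/pg_xlog
    ln -s /otherdisk/pg_xlog $PGDATA/pg_xlog
    pg_ctl start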

(I was about to say that it would seem odd that someone would spend that
much on RAM and not splurge on an extra pair of disks to separate the
WAL log, but then I realized that we're only talking about $300 or so
worth of RAM....)

cjs
--
Curt Sampson  <cjs@cynic.net>   +81 90 7737 2974   http://www.netbsd.org
    Don't you know, in this new Dark Age, we're all light.  --XTC