Automatic adjustment of bgwriter_lru_maxpages (was: Dead Space Map version 2)

From: ITAGAKI Takahiro
"Jim C. Nasby" <jim@nasby.net> wrote:

> > > Perhaps it would be better to have the bgwriter take a look at how many
> > > dead tuples (or how much space the dead tuples account for) when it
> > > writes a page out and adjust the DSM at that time.
> >
> > Yeah, I feel it is worth optimizing, too. One question is how we treat
> > dirty pages written by backends rather than by the bgwriter. If we want
> > to add some work to the bgwriter, do we also need to make the bgwriter
> > write almost all of the dirty pages?
>
> IMO yes, we want the bgwriter to be the only process that's normally
> writing pages out. How close we are to that, I don't know...

I'm working on making the bgwriter write almost all of the dirty pages. This
is a proposal to do that using automatic adjustment of bgwriter_lru_maxpages.

The bgwriter_lru_maxpages value is adjusted to match the number of calls to
StrategyGetBuffer() per cycle, with a safety margin (x2 at present). The
counter is incremented on each call and reset to zero at StrategySyncStart().
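
To illustrate the idea, here is a minimal sketch (the function and variable
names are mine for illustration, not necessarily those in the patch):

    /* The GUC variable being auto-tuned. */
    extern int bgwriter_lru_maxpages;

    /* Incremented once per buffer allocation. */
    static unsigned int numBufferAllocs = 0;

    /* Called from StrategyGetBuffer(). */
    void
    CountBufferAlloc(void)
    {
        numBufferAllocs++;
    }

    /* Called from StrategySyncStart(), once per bgwriter cycle: set the
     * LRU write quota to the allocations seen since the last cycle,
     * times the x2 safety margin, then reset the counter. */
    void
    AdjustLruMaxpages(void)
    {
        bgwriter_lru_maxpages = (int) (numBufferAllocs * 2);
        numBufferAllocs = 0;
    }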


This patch alone is not so useful, except for hiding a hard-to-tune parameter
from users. However, it would be a first step toward allowing the bgwriter to
do some work before writing dirty buffers.

- [DSM] Pick out pages worth vacuuming and register them into DSM.
- [HOT] Do a per page vacuum for HOT updated tuples. (Is it worth doing?)
- [TODO Item] Shrink expired COLD updated tuples to just their headers.
- Set commit hint bits to reduce subsequent writes of blocks.
        http://archives.postgresql.org/pgsql-hackers/2007-01/msg01363.php


I tested the attached patch with pgbench -s5 (80MB) and shared_buffers=32MB.
I got the expected result, shown below: over 75% of the buffers are written
by the bgwriter. In addition, the automatically adjusted bgwriter_lru_maxpages
values were much higher than the default value (5). This shows that the most
suitable value depends greatly on the workload.

 benchmark  | throughput | cpu-usage | by-bgwriter | bgwriter_lru_maxpages
------------+------------+-----------+-------------+-----------------------
 default    |    300tps  |     100%  |      77.5%  |      120 pages/cycle
 with sleep |    150tps  |      50%  |      98.6%  |       70 pages/cycle


I hope that this patch will be a first step toward an intelligent bgwriter.
Comments welcome.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center


[Attachment]

Re: Automatic adjustment of bgwriter_lru_maxpages

From: ITAGAKI Takahiro
Sorry, I made a mistake in the patch I sent.
This is a fixed version.

I wrote:

> I'm working on making the bgwriter to write almost of dirty pages. This is
> the proposal for it using automatic adjustment of bgwriter_lru_maxpages.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center


[Attachment]

Re: Automatic adjustment of bgwriter_lru_maxpages

From: Bruce Momjian
Your patch has been added to the PostgreSQL unapplied patches list at:

    http://momjian.postgresql.org/cgi-bin/pgpatches

It will be applied as soon as one of the PostgreSQL committers reviews
and approves it.

---------------------------------------------------------------------------


ITAGAKI Takahiro wrote:
> Sorry, I made a mistake in the patch I sent.
> This is a fixed version.
>
> I wrote:
>
> > I'm working on making the bgwriter write almost all of the dirty pages. This
> > is a proposal to do that using automatic adjustment of bgwriter_lru_maxpages.
>
> Regards,
> ---
> ITAGAKI Takahiro
> NTT Open Source Software Center
>

[ Attachment, skipping... ]


--
  Bruce Momjian  <bruce@momjian.us>          http://momjian.us
  EnterpriseDB                               http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

Re: Automatic adjustment of bgwriter_lru_maxpages

From: Heikki Linnakangas
ITAGAKI Takahiro wrote:
> "Jim C. Nasby" <jim@nasby.net> wrote:
>
>>>> Perhaps it would be better to have the bgwriter take a look at how many
>>>> dead tuples (or how much space the dead tuples account for) when it
>>>> writes a page out and adjust the DSM at that time.
>>> Yeah, I feel it is worth optimizing, too. One question is how we treat
>>> dirty pages written by backends rather than by the bgwriter. If we want
>>> to add some work to the bgwriter, do we also need to make the bgwriter
>>> write almost all of the dirty pages?
>> IMO yes, we want the bgwriter to be the only process that's normally
>> writing pages out. How close we are to that, I don't know...
>
> I'm working on making the bgwriter write almost all of the dirty pages. This
> is a proposal to do that using automatic adjustment of bgwriter_lru_maxpages.
>
> The bgwriter_lru_maxpages value is adjusted to match the number of calls to
> StrategyGetBuffer() per cycle, with a safety margin (x2 at present). The
> counter is incremented on each call and reset to zero at StrategySyncStart().
>
>
> This patch alone is not so useful, except for hiding a hard-to-tune parameter
> from users. However, it would be a first step toward allowing the bgwriter to
> do some work before writing dirty buffers.
>
> - [DSM] Pick out pages worth vacuuming and register them into DSM.
> - [HOT] Do a per page vacuum for HOT updated tuples. (Is it worth doing?)
> - [TODO Item] Shrink expired COLD updated tuples to just their headers.
> - Set commit hint bits to reduce subsequent writes of blocks.
>         http://archives.postgresql.org/pgsql-hackers/2007-01/msg01363.php
>
>
> I tested the attached patch with pgbench -s5 (80MB) and shared_buffers=32MB.
> I got the expected result, shown below: over 75% of the buffers are written
> by the bgwriter. In addition, the automatically adjusted
> bgwriter_lru_maxpages values were much higher than the default value (5).
> This shows that the most suitable value depends greatly on the workload.
>
>  benchmark  | throughput | cpu-usage | by-bgwriter | bgwriter_lru_maxpages
> ------------+------------+-----------+-------------+-----------------------
>  default    |    300tps  |     100%  |      77.5%  |      120 pages/cycle
>  with sleep |    150tps  |      50%  |      98.6%  |       70 pages/cycle
>
>
> I hope that this patch will be a first step toward an intelligent bgwriter.
> Comments welcome.

The general approach looks good to me. I'm queuing some benchmarks to
see how effective it is with a fairly constant workload.

This change in bgwriter.c looks fishy:

*************** BackgroundWriterMain(void)
*** 484,491 ****
            *
            * We absorb pending requests after each short sleep.
            */
!         if ((bgwriter_all_percent > 0.0 && bgwriter_all_maxpages > 0) ||
!             (bgwriter_lru_percent > 0.0 && bgwriter_lru_maxpages > 0))
               udelay = BgWriterDelay * 1000L;
           else if (XLogArchiveTimeout > 0)
               udelay = 1000000L;    /* One second */
--- 484,490 ----
            *
            * We absorb pending requests after each short sleep.
            */
!         if (bgwriter_all_percent > 0.0 && bgwriter_all_maxpages > 0)
               udelay = BgWriterDelay * 1000L;
           else if (XLogArchiveTimeout > 0)
               udelay = 1000000L;    /* One second */

Doesn't that mean that bgwriter only runs every 1 or 10 seconds,
regardless of bgwriter_delay, if bgwriter_all_* parameters are not set?

The algorithm used to update bgwriter_lru_maxpages needs some thought.
Currently, it's decreased by one when fewer clean pages were required by
backends than expected, and increased otherwise. Exponential smoothing
or something similar seems like the natural choice to me.
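
For example, exponential smoothing of the per-cycle allocation count could
look roughly like this (a hypothetical sketch; the constant and the names
are illustrative only):

    #define ALLOC_SMOOTHING_ALPHA 0.2f

    static float smoothed_allocs = 0.0f;

    /* Called once per cycle with the allocations observed since the
     * previous cycle; returns a smoothed target for the next cycle. */
    static int
    smoothed_lru_maxpages(int recent_allocs)
    {
        smoothed_allocs = ALLOC_SMOOTHING_ALPHA * (float) recent_allocs +
                          (1.0f - ALLOC_SMOOTHING_ALPHA) * smoothed_allocs;
        return (int) (smoothed_allocs + 0.5f);    /* round to nearest */
    }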

--
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com

Re: Automatic adjustment of bgwriter_lru_maxpages

From: Greg Smith
Attached are two patches that try to recast the ideas of Itagaki
Takahiro's auto bgwriter_lru_maxpages patch in the direction I think this
code needs to move.  Epic-length commentary follows.

The original code came from before there was a pg_stat_bgwriter.  The
first patch (buf-alloc-stats) takes the two most interesting pieces of
data the original patch collected, the number of buffers allocated
recently and the number that the clients wrote out, and ties all that into
the new stats structure.  With this patch applied, you can get a feel for
things like churn/turnover in the buffer pool that were very hard to
quantify before.  Also, it makes it easy to measure how well your
background writer is doing at writing buffers so the clients don't have
to.  Applying this would complete one of my personal goals for the 8.3
release, which was having stats to track every type of buffer write.

I split this out because I think it's very useful to have regardless of
whether the automatic tuning portion is accepted, and I think these
smaller patches make the review easier.  The main thing I would recommend
someone check is how am_bg_writer is (mis?)used here.  I spliced some of
the debugging-only code from the original patch, and I can't tell if the
result is a robust enough approach to solving the problem of having every
client indirectly report their activity to the background writer.  Other
than that, I think this code is ready for review and potentially
committing.

The second patch (limit-lru) adds on top of that a constraint of the LRU
writer so that it doesn't do any more work than it has to.  Note that I
left verbose debugging code in here because I'm much less confident this
patch is complete.

It predicts upcoming buffer allocations using a 16-period weighted moving
average of recent activity, which you can think of as the last 3.2 seconds
at the default interval.  After testing on a few systems, that seemed a decent
compromise of smoothing in both directions.  I found the 2X overallocation
fudge factor of the original patch way too aggressive, so I just pick the
larger of the most recent allocation amount or the smoothed value.  The
main thing that throws off the allocation estimation is when you hit a
checkpoint, which can give a big spike after the background writer returns
to BgBufferSync and notices all the buffers that were allocated during the
checkpoint write; the code then tries to find more buffers it can recycle
than it needs to.  Since the checkpoint itself normally leaves a large
wake of reusable buffers behind it, I didn't find this to be a serious
problem.
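
A reconstruction of the kind of prediction described above (an assumed
sketch, not the patch code itself; the names are illustrative):

    #define LRU_SMOOTHING_PERIODS 16    /* ~3.2s at the 200ms default delay */

    static float smoothed_allocs = 0.0f;

    /* Estimate upcoming buffer allocations from the count observed
     * since the previous bgwriter cycle. */
    static int
    predict_upcoming_allocs(int recent_allocs)
    {
        /* 16-period weighted moving average of recent activity */
        smoothed_allocs += ((float) recent_allocs - smoothed_allocs) /
                           LRU_SMOOTHING_PERIODS;

        /* take the larger of the latest sample and the smoothed value,
         * instead of a 2X overallocation fudge factor */
        return (recent_allocs > (int) smoothed_allocs) ?
               recent_allocs : (int) smoothed_allocs;
    }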

There's another communication issue here, which is that SyncOneBuffer
needs to return more information about the buffer than it currently does
once it gets it locked.  The background writer needs to know more than
just whether it was written in order to tune itself.  The original patch used a clever
trick for this which worked but I found confusing.  I happen to have a
bunch of other background writer tuning code I'm working on, and I had to
come up with a more robust way to communicate buffer internals back via
this channel.  I used that code here, it's a bitmask setup similar to how
flags like BM_DIRTY are used.  It's overkill for solving this particular
problem, but I think the interface is clean and it helps support future
enhancements in intelligent background writing.
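
As a rough sketch of that style of interface (the flag names here are
assumptions for illustration):

    /* Bits reported back by SyncOneBuffer() about the buffer it examined */
    #define BUF_WRITTEN   0x01    /* buffer was written out */
    #define BUF_REUSABLE  0x02    /* buffer could be recycled by a backend */

    int     buffer_state = SyncOneBuffer(buf_id, skip_recently_used);

    if (buffer_state & BUF_WRITTEN)
        num_written++;
    if (buffer_state & BUF_REUSABLE)
        reusable_found++;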

Now we get to the controversial part.  The original patch removed the
bgwriter_lru_maxpages parameter and updated the documentation accordingly.
I didn't do that here.  The reason is that after playing around in this
area I'm not convinced yet I can satisfy all the tuning scenarios I'd like
to be able to handle that way.  I describe this patch as enforcing a
constraint instead; it allows you to set the LRU parameters much higher
than was reasonable before without having to be as concerned about the LRU
writer wasting resources.

I already brought up some issues in this area on -hackers (
http://archives.postgresql.org/pgsql-hackers/2007-04/msg00781.php ) but my
work hasn't advanced as fast as I'd hoped.  I wanted to submit what I've
finished anyway because I think any approach here is going to have to cope
with the issues addressed in these two patches, and I'm happy now with how
they're solved here.  It's only a one-line delete to disable the LRU
limiting behavior of the second patch, at which point it's strictly
internals code with no expected functional impact that alternate
approaches might be built on.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

[Attachment]

Re: Automatic adjustment of bgwriter_lru_maxpages

From: Heikki Linnakangas
Greg Smith wrote:
> The original code came from before there was a pg_stat_bgwriter.  The
> first patch (buf-alloc-stats) takes the two most interesting pieces of
> data the original patch collected, the number of buffers allocated
> recently and the number that the clients wrote out, and ties all that
> into the new stats structure.  With this patch applied, you can get a
> feel for things like churn/turnover in the buffer pool that were very
> hard to quantify before.  Also, it makes it easy to measure how well
> your background writer is doing at writing buffers so the clients don't
> have to.  Applying this would complete one of my personal goals for the
> 8.3 release, which was having stats to track every type of buffer write.
>
> I split this out because I think it's very useful to have regardless of
> whether the automatic tuning portion is accepted, and I think these
> smaller patches make the review easier.  The main thing I would
> recommend someone check is how am_bg_writer is (mis?)used here.  I
> spliced some of the debugging-only code from the original patch, and I
> can't tell if the result is a robust enough approach to solving the
> problem of having every client indirectly report their activity to the
> background writer.  Other than that, I think this code is ready for
> review and potentially committing.

This looks good to me in principle. StrategyReportWrite increments
numClientWrites without holding the BufFreeListLock; that's a race
condition. The terminology needs some adjustment; clients don't write
buffers, backends do.
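
(For illustration, the simplest fix would be to take the lock around the
increment; this assumes numClientWrites lives in the BufFreeListLock-guarded
strategy control structure:

    LWLockAcquire(BufFreeListLock, LW_EXCLUSIVE);
    StrategyControl->numClientWrites++;
    LWLockRelease(BufFreeListLock);

though grabbing an exclusive lock just for a counter may be too heavy for a
hot path.)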

Splitting the patch in two is a good idea.

> The second patch (limit-lru) adds on top of that a constraint of the LRU
> writer so that it doesn't do any more work than it has to.  Note that I
> left verbose debugging code in here because I'm much less confident this
> patch is complete.
>
> It predicts upcoming buffer allocations using a 16-period weighted
> moving average of recent activity, which you can think of as the last
> 3.2 seconds at the default interval.  After testing on a few systems, that
> seemed a decent compromise of smoothing in both directions.  I found the
> 2X overallocation fudge factor of the original patch way too aggressive,
> so I just pick the larger of the most recent allocation amount or the
> smoothed value.  The main thing that throws off the allocation
> estimation is when you hit a checkpoint, which can give a big spike
> after the background writer returns to BgBufferSync and notices all the
> buffers that were allocated during the checkpoint write; the code then
> tries to find more buffers it can recycle than it needs to.  Since the
> checkpoint itself normally leaves a large wake of reusable buffers
> behind it, I didn't find this to be a serious problem.

Can you tell us more about the tests you performed? That algorithm seems
decent, but I wonder why the simple fudge factor wasn't good enough? I
would've thought that a 2x or even bigger fudge factor would still be
only a tiny fraction of shared_buffers, and wouldn't really affect
performance.

The load distributed checkpoint patch should mitigate the checkpoint
spike problem by continuing the LRU scan throughout the checkpoint.

> There's another communication issue here, which is that SyncOneBuffer
> needs to return more information about the buffer than it currently does
> once it gets it locked.  The background writer needs to know more than
> just whether it was written in order to tune itself.  The original patch used a clever
> trick for this which worked but I found confusing.  I happen to have a
> bunch of other background writer tuning code I'm working on, and I had
> to come up with a more robust way to communicate buffer internals back
> via this channel.  I used that code here, it's a bitmask setup similar
> to how flags like BM_DIRTY are used.  It's overkill for solving this
> particular problem, but I think the interface is clean and it helps
> support future enhancements in intelligent background writing.

Uh, that looks pretty ugly to me. The normal way to return multiple
values is to pass a pointer as an argument, though that can get ugly as
well if there's a lot of return values. What combinations of the flags
are valid? Would an enum be better? Or how about moving the checks for
dirty and pinned buffers from SyncOneBuffer to the callers?

--
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com

Re: Automatic adjustment of bgwriter_lru_maxpages

From: Greg Smith
On Sun, 13 May 2007, Heikki Linnakangas wrote:

> StrategyReportWrite increments numClientWrites without holding the
> BufFreeListLock; that's a race condition. The terminology needs some
> adjustment; clients don't write buffers, backends do.

That was another piece of debugging code I moved into the main path
without thinking too hard about it, good catch.  I have a
documentation/naming patch I've started on that revises a lot of the
pg_stat_bgwriter names to be more consistent and easier to understand (as
well as re-ordering the view); the underlying code is still fluid enough
that I was trying to nail that down first.

> That algorithm seems decent, but I wonder why the simple fudge factor
> wasn't good enough? I would've thought that a 2x or even bigger fudge
> factor would still be only a tiny fraction of shared_buffers, and
> wouldn't really affect performance.

I like the way the smoothing evens out the I/O rates.  I saw occasional
spots where the buffer allocations drop to 0 for a few intervals while
other stuff that everybody is waiting for is going on, and I didn't want
all LRU cleanup to come to a halt just because there's a fraction of a
second where nothing happened in the middle of a very busy period.

As for why not overestimate, if you get into a situation where the buffer
cache is very dirty with much of the data being recently used (I normally
see this with bulk UPDATEs on indexed tables), you can end up scanning
many buffers for each one you find that can be written out.  In this kind
of situation, deciding that you actually need to write out twice as many
just because you don't trust your estimate is very inefficient.

I was able to simulate most of the bad behavior I look for with the
pgbench schema using "update accounts set abalance=abalance+1;".  To throw
some sample numbers out, on my test server I was just doing final work on
last night, I was seeing peaks of about 600-1200 buffers allocated per
200ms interval doing that simple UPDATE with shared_buffers=32768.

Let's call it 2% of the pool.  If 50% of the pool is either dirty or can't
be reused yet, that means I'll average having to scan 2%/50%=4% of the
pool to find enough buffers to reuse per interval.  I wouldn't describe
that as a tiny fraction, and doubling it is not an insignificant load
increase.  I'd like to be able to increase the LRU percentage scanned
without being concerned that I'm wasting resources because of this
situation.

The fact that this problem exists is what got me digging into the
background writer code in the first place, because it's way worse on my
production server than this example suggests.  The buffer cache is bigger,
but the ability of the server to dirty it under heavy load is far better.
Returning to the theme discussed in the -hackers thread I referenced:
you can't try to make the background writer LRU do all the writes without
exposing yourself to issues like this, because it doesn't touch the usage
counts.  Therefore it's vulnerable to breakdowns if your buffer pool
shifts toward dirty and non-reusable.

Having the background writer run amok when reusable buffers are rare can
really pull down the performance of the other backends (as well as delay
checkpoints), both in terms of CPU usage and locking issues.  I don't feel
it's a good idea to try and push it too hard unless some of these
underlying issues are fixed first; I'd rather err on the side of letting
it do less rather than more than it has to.

> The normal way to return multiple values is to pass a pointer as an
> argument, though that can get ugly as well if there's a lot of return
> values.

I'm open to better suggestions, but after tinkering with this interface
for over a month now--including pointers and enums--this is the first
implementation I was happy with.

There are four things I eventually need returned here, to support the
fully automatic BGW tuning. My 1st implementation passed in pointers, and
in addition to being ugly I found consistently checking for null pointers
and data consistency a drag, both from the coding and the overhead
perspective.

> What combinations of the flags are valid? Would an enum be better?

And my 2nd generation code used an enum.  There are five possible return
code states:

CLEAN + REUSABLE + !WRITTEN
CLEAN + !REUSABLE + !WRITTEN
!CLEAN + !REUSABLE + WRITTEN (all-scan only)
!CLEAN + !REUSABLE + !WRITTEN (rejected by skip)
!CLEAN + REUSABLE + WRITTEN

!CLEAN + REUSABLE + !WRITTEN isn't possible (all paths will write dirty
reusable buffers)

I found the enum-based code more confusing, both reading it and making
sure it was correct when writing it, than the current form.  Right now I
have lines like:

  if (buffer_state & BUF_REUSABLE)

With an enum this has to be something like

   if (buffer_state == BUF_CLEAN_REUSABLE ||
       buffer_state == BUF_REUSABLE_WRITTEN)

And that was a pain all around; I kept having to stare at the table above
to make sure the code was correct.  Also, in order to pass back full
usage_count information I was back to either pointers or bitshifting
anyway.  While this particular patch doesn't need the usage count, the
later ones I'm working on do, and I'd like to get this interface complete
while it's being tinkered with anyway.
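
(To illustrate the bitshifting route for usage_count, purely hypothetically;
none of these names are from the patch:

    #define BUF_USAGE_COUNT_SHIFT  8
    #define BUF_USAGE_COUNT_MASK   (0xFF << BUF_USAGE_COUNT_SHIFT)

    /* pack, while the buffer header spinlock is held: */
    buffer_state |= (bufHdr->usage_count << BUF_USAGE_COUNT_SHIFT);

    /* unpack, in the caller: */
    usage_count = (buffer_state & BUF_USAGE_COUNT_MASK) >> BUF_USAGE_COUNT_SHIFT;

so one int return value carries both the flag bits and the count.)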

> Or how about moving the checks for dirty and pinned buffers from
> SyncOneBuffer to the callers?

There are 3 callers to SyncOneBuffer, and almost all the code is shared
between them.  Trying to push even just the dirty/pinned stuff back into
the callers would end up being a cut and paste job that would duplicate
many lines.  That's on top of the fact that the buffer is cleanly
locked/unlocked all in one section of code right now, and I didn't see how
to move any parts of that to the callers without disrupting that clean
interface.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: Automatic adjustment of bgwriter_lru_maxpages

From: ITAGAKI Takahiro
Greg Smith <gsmith@gregsmith.com> wrote:

> The first patch (buf-alloc-stats) takes the two most interesting pieces of
> data the original patch collected, the number of buffers allocated
> recently and the number that the clients wrote out, and ties all that into
> the new stats structure.

> The second patch (limit-lru) adds on top of that a constraint of the LRU
> writer so that it doesn't do any more work than it has to.

Both patches look good.

> Now we get to the controversial part.  The original patch removed the
> bgwriter_lru_maxpages parameter and updated the documentation accordingly.
> I didn't do that here.  The reason is that after playing around in this
> area I'm not convinced yet I can satisfy all the tuning scenarios I'd like
> to be able to handle that way.  I describe this patch as enforcing a
> constraint instead; it allows you to set the LRU parameters much higher
> than was reasonable before without having to be as concerned about the LRU
> writer wasting resources.

I'm agreeable to limiting the bgwriter's resource usage.
BTW, your patch will cut LRU writes short, but it will not encourage the
bgwriter to do more work. So should we set more aggressive default values
for bgwriter_lru_percent and bgwriter_lru_maxpages? My original motivation
was to enlarge bgwriter_lru_maxpages automatically; the default value (=5)
seemed to be too small.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center



Re: Automatic adjustment of bgwriter_lru_maxpages

From: Greg Smith
On Mon, 14 May 2007, ITAGAKI Takahiro wrote:

> BTW, your patch will cut LRU writes short, but it will not encourage the
> bgwriter to do more work. So should we set more aggressive default values
> for bgwriter_lru_percent and bgwriter_lru_maxpages?

Setting a bigger default maximum is one possibility I was thinking about.
Since the whole background writer setup is kind of complicated, the other
thing I was working on is writing a guide on how to use the new
pg_stat_bgwriter information to figure out if you need to increase
bgwriter_[all|lru]_maxpages (and the other parameters too).  It makes it much
easier to write that if you can say "You can safely set
bgwriter_lru_maxpages high because it only writes what it needs to based
on your usage".

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: Automatic adjustment of bgwriter_lru_maxpages

From: Heikki Linnakangas
Greg Smith wrote:
> On Mon, 14 May 2007, ITAGAKI Takahiro wrote:
>
>> BTW, your patch will cut LRU writes short, but it will not encourage the
>> bgwriter to do more work. So should we set more aggressive default values
>> for bgwriter_lru_percent and bgwriter_lru_maxpages?
>
> Setting a bigger default maximum is one possibility I was thinking
> about. Since the whole background writer setup is kind of complicated,
> the other thing I was working on is writing a guide on how to use the
> new pg_stat_bgwriter information to figure out if you need to increase
> bgwriter_[all|lru]_maxpages (and the other parameters too).  It makes it
> much easier to write that if you can say "You can safely set
> bgwriter_lru_maxpages high because it only writes what it needs to based
> on your usage".

If it's safe to set it high, let's default it to infinity.

--
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com

Re: Automatic adjustment of bgwriter_lru_maxpages

From: Tom Lane
Greg Smith <gsmith@gregsmith.com> writes:
> Since the whole background writer setup is kind of complicated, the other
> thing I was working on is writing a guide on how to use the new
> pg_stat_bgwriter information to figure out if you need to increase
> bgwriter_[all|lru]_maxpages (and the other parameters too).  It makes it much
> easier to write that if you can say "You can safely set
> bgwriter_lru_maxpages high because it only writes what it needs to based
> on your usage".

If you can write something like that, why do we need the parameter at all?

            regards, tom lane

Re: Automatic adjustment of bgwriter_lru_maxpages

From: Greg Smith
On Mon, 14 May 2007, Heikki Linnakangas wrote:

> If it's safe to set it high, let's default it to infinity.

The maximum right now is 1000, and that would be a reasonable new default.
You really don't want to write more than 1000 per interval anyway without
taking a break for checkpoints; the more writes you do at once, the higher
the chances are you'll have the whole thing stall because the OS makes you
wait for a write (this is not a theoretical comment; I've watched it
happen when I try to get the BGW doing too much).

If someone has so much activity that they're allocating more than that
during a period, they should shrink the delay instead.  The kinds of
systems where 1000 isn't high enough for bgwriter_lru_maxpages are going
to be compelled to adjust these parameters anyway for good performance.
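
(For scale: at the default 200ms bgwriter_delay, 1000 pages per cycle works
out to 5000 8kB pages per second, i.e. roughly 40MB/s of sustained writes.)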

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: Automatic adjustment of bgwriter_lru_maxpages

From: Greg Smith
On Mon, 14 May 2007, Tom Lane wrote:

> If you can write something like that, why do we need the parameter at all?

Couple of reasons:

-As I already mentioned in my last message, I think it's unwise to let the
LRU writes go completely unbounded.  I still think there should be a
maximum, and if there is one it should be tunable.  You can get into
situations where the only way to get the LRU writer to work at all is to
set the % to scan fairly high, but that exposes you to way more writes
than you might want per interval in situations where buffers to write are
easy to find.

-There is considerable coupling between how the LRU and the all background
writers work.  There are workloads where the LRU writer is relatively
ineffective, and only the all one really works well.  If there is a
limiter on the writes from the all writer, but not on the LRU, admins may
not be able to get the balance between the two they want.  I know I
wouldn't.

-Just because I can advise what is generally the right move, that doesn't
mean it's always the right one.  Someone may notice that the maximum pages
written limit is being nailed and not care.

On the last system where I really got deep into the background writer
mechanics, it could be very effective at improving performance and reducing
checkpoint spikes under low to medium loads.  But under heavy load, it
just got in the way of the individual backends running, which was
absolutely necessary in order to execute the LRU mechanics (usage_count--)
so less important buffers could be kicked out.  I would like people to
still be able to set a tuning such that the background writers were useful
under average loads, but didn't ever try to do too much.  It's much more
difficult to do that if bgwriter_lru_maxpages goes away.

I realized recently the task I should take on here is to run some more
experiments with the latest code and pass along suggested techniques for
producing/identifying the kind of problem conditions I've run into in the
past; then we can see if other people can reproduce them.  I got a new
8-core server I need to thrash anyway and will try and do just that
starting tomorrow.

For all I know my concerns are strictly a rare edge case.  But since the
final adjustments to things like whether there is an upper limit or not
are very small patches compared to what's already been done here, I sent
in what I thought was ready to go because I didn't want to hold up
reviewing the bulk of the code over some of these fine details.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: Automatic adjustment of bgwriter_lru_maxpages

From: daveg
On Mon, May 14, 2007 at 11:19:23PM -0400, Greg Smith wrote:
> On Mon, 14 May 2007, Tom Lane wrote:
>
> >If you can write something like that, why do we need the parameter at all?
>
> Couple of reasons:
>
> -As I already mentioned in my last message, I think it's unwise to let the
> LRU writes go completely unbounded.  I still think there should be a
> maximum, and if there is one it should be tunable.  You can get into
> situations where the only way to get the LRU writer to work at all is to
> set the % to scan fairly high, but that exposes you to way more writes
> than you might want per interval in situations where buffers to write are
> easy to find.
>
> -There is considerable coupling between how the LRU and the all background
> writers work.  There are workloads where the LRU writer is relatively
> ineffective, and only the all one really works well.  If there is a
> limiter on the writes from the all writer, but not on the LRU, admins may
> not be able to get the balance between the two they want.  I know I
> wouldn't.
>
> -Just because I can advise what is generally the right move, that doesn't
> mean it's always the right one.  Someone may notice that the maximum pages
> written limit is being nailed and not care.
>
> On the last system where I really got deep into the background writer
> mechanics, it could be very effective at improving performance and reducing
> checkpoint spikes under low to medium loads.  But under heavy load, it
> just got in the way of the individual backends running, which was
> absolutely necessary in order to execute the LRU mechanics (usage_count--)
> so less important buffers could be kicked out.  I would like people to
> still be able to set a tuning such that the background writers were useful
> under average loads, but didn't ever try to do too much.  It's much more
> difficult to do that if bgwriter_lru_maxpages goes away.
>
> I realized recently the task I should take on here is to run some more
> experiments with the latest code and pass along suggested techniques for
> producing/identifying the kind of problem conditions I've run into in the
> past; then we can see if other people can reproduce them.  I got a new
> 8-core server I need to thrash anyway and will try and do just that
> starting tomorrow.
>
> For all I know my concerns are strictly a rare edge case.  But since the
> final adjustments to things like whether there is an upper limit or not
> are very small patches compared to what's already been done here, I sent
> in what I thought was ready to go because I didn't want to hold up
> reviewing the bulk of the code over some of these fine details.

Apologies for asking this on the wrong list, but it is at least the right
thread.

What is the current thinking on bgwriter settings for systems such as a
4-core Opteron with 16GB or 32GB of memory and heavy batch workloads?

-dg

--
David Gould                      daveg@sonic.net
If simplicity worked, the world would be overrun with insects.

Re: Automatic adjustment of bgwriter_lru_maxpages

From: Heikki Linnakangas
Greg Smith wrote:
> I realized recently the task I should take on here is to run some more
> experiments with the latest code and pass along suggested techniques for
> producing/identifying the kind of problem conditions I've run into in
> the past; then we can see if other people can reproduce them.  I got a
> new 8-core server I need to thrash anyway and will try and do just that
> starting tomorrow.

Yes, please do that. I can't imagine a situation where a tunable maximum
would help, but you've clearly spent a lot more time experimenting with
it than I have.

I have noticed that on a heavily (over)loaded system with fully
saturated I/O, bgwriter doesn't make any difference because all the
backends need to wait for writes anyway. But it doesn't hurt either.

--
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com

Re: Automatic adjustment of bgwriter_lru_maxpages

From: "Jim C. Nasby"
Moving to -performance.

On Mon, May 14, 2007 at 09:55:16PM -0700, daveg wrote:
> Apologies for asking this on the wrong list, but it is at least the right
> thread.
>
> What is the current thinking on bgwriter settings for systems such as a
> 4-core Opteron with 16GB or 32GB of memory and heavy batch workloads?

It depends greatly on how much of your data tends to stay 'pinned' in
shared_buffers between checkpoints. In a case where the same data tends
to stay resident you're going to need to depend on the 'all' scan to
decrease the impact of checkpoints (though the load distributed
checkpoint patch will change that greatly).

Other than that, tuning the bgwriter boils down to your IO capability as well
as how often you're checkpointing.
--
Jim Nasby                                      decibel@decibel.org
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)

Re: Automatic adjustment of bgwriter_lru_maxpages

From: Bruce Momjian
Your patch has been added to the PostgreSQL unapplied patches list at:

    http://momjian.postgresql.org/cgi-bin/pgpatches

It will be applied as soon as one of the PostgreSQL committers reviews
and approves it.

---------------------------------------------------------------------------


Greg Smith wrote:
> Attached are two patches that try to recast the ideas of Itagaki
> Takahiro's auto bgwriter_lru_maxpages patch in the direction I think this
> code needs to move.  Epic-length commentary follows.
>
> The original code came from before there was a pg_stat_bgwriter.  The
> first patch (buf-alloc-stats) takes the two most interesting pieces of
> data the original patch collected, the number of buffers allocated
> recently and the number that the clients wrote out, and ties all that into
> the new stats structure.  With this patch applied, you can get a feel for
> things like churn/turnover in the buffer pool that were very hard to
> quantify before.  Also, it makes it easy to measure how well your
> background writer is doing at writing buffers so the clients don't have
> to.  Applying this would complete one of my personal goals for the 8.3
> release, which was having stats to track every type of buffer write.
>
> I split this out because I think it's very useful to have regardless of
> whether the automatic tuning portion is accepted, and I think these
> smaller patches make the review easier.  The main thing I would recommend
> someone check is how am_bg_writer is (mis?)used here.  I spliced some of
> the debugging-only code from the original patch, and I can't tell if the
> result is a robust enough approach to solving the problem of having every
> client indirectly report their activity to the background writer.  Other
> than that, I think this code is ready for review and potentially
> committing.
>
> The second patch (limit-lru) adds on top of that a constraint of the LRU
> writer so that it doesn't do any more work than it has to.  Note that I
> left verbose debugging code in here because I'm much less confident this
> patch is complete.
>
> It predicts upcoming buffer allocations using a 16-period weighted moving
> average of recent activity, which you can think of as the last 3.2 seconds
> at the default interval.  After testing on a few systems, that seemed a decent
> compromise of smoothing in both directions.  I found the 2X overallocation
> fudge factor of the original patch way too aggressive, so I just pick the
> larger of the most recent allocation amount or the smoothed value.  The
> main thing that throws off the allocation estimation is when you hit a
> checkpoint, which can give a big spike after the background writer returns
> to BgBufferSync and notices all the buffers that were allocated during the
> checkpoint write; the code then tries to find more buffers it can recycle
> than it needs to.  Since the checkpoint itself normally leaves a large
> wake of reusable buffers behind it, I didn't find this to be a serious
> problem.
>
> There's another communication issue here, which is that SyncOneBuffer
> needs to return more information about the buffer than it currently does
> once it gets it locked.  The background writer needs to know more than
> just whether it was written in order to tune itself.  The original patch used a clever
> trick for this which worked but I found confusing.  I happen to have a
> bunch of other background writer tuning code I'm working on, and I had to
> come up with a more robust way to communicate buffer internals back via
> this channel.  I used that code here, it's a bitmask setup similar to how
> flags like BM_DIRTY are used.  It's overkill for solving this particular
> problem, but I think the interface is clean and it helps support future
> enhancements in intelligent background writing.
>
> Now we get to the controversial part.  The original patch removed the
> bgwriter_lru_maxpages parameter and updated the documentation accordingly.
> I didn't do that here.  The reason is that after playing around in this
> area I'm not convinced yet I can satisfy all the tuning scenarios I'd like
> to be able to handle that way.  I describe this patch as enforcing a
> constraint instead; it allows you to set the LRU parameters much higher
> than was reasonable before without having to be as concerned about the LRU
> writer wasting resources.
>
> I already brought up some issues in this area on -hackers (
> http://archives.postgresql.org/pgsql-hackers/2007-04/msg00781.php ) but my
> work hasn't advanced as fast as I'd hoped.  I wanted to submit what I've
> finished anyway because I think any approach here is going to have to cope
> with the issues addressed in these two patches, and I'm happy now with how
> they're solved here.  It's only a one-line delete to disable the LRU
> limiting behavior of the second patch, at which point it's strictly
> internals code with no expected functional impact that alternate
> approaches might be built on.
>
> --
> * Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

[ Attachment, skipping... ]


[ Attachment, skipping... ]


--
  Bruce Momjian  <bruce@momjian.us>          http://momjian.us
  EnterpriseDB                               http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +