On Wed, 2010-08-04 at 15:16 -0400, Greg Smith wrote:
> Hannu Krosing wrote:
> > There was ample space for keeping the indexes in linux cache (it has 1GB
> > cached currently) though the system may have decided to start writing it
> > to disk, so I suspect that most of the time was spent copying random
> > index pages back and forth between shared buffers and disk cache.
> >
>
> Low shared_buffers settings will more often result in the same pages
> being written multiple times per checkpoint,
Do you mean "written to disk", or written out from shared_buffers to the
OS disk cache?
> particularly index pages,
> which is less efficient than keeping them in the database cache and
> updating them there. This is a slightly different issue than just the
> overhead of copying them back and forth; by keeping them in cache, you
> actually reduce writes to the OS cache.
That's what I meant. Both writes to and reads from the OS cache take a
significant amount of time even when no real disk I/O is involved.
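To make the write-amplification effect concrete, here is a toy Python
sketch. This is not PostgreSQL's actual buffer manager, just a minimal
LRU cache model under assumed access patterns (hot index pages revisited
round-robin, interleaved with cold heap pages); it only illustrates why
a pool too small for the working set pushes the same dirty pages out to
the next cache layer repeatedly between checkpoints:

```python
# Toy model (hypothetical, not PostgreSQL internals): an LRU buffer pool
# where every access dirties its page. A dirty page evicted before the
# checkpoint costs an extra write-out; at checkpoint, all remaining dirty
# pages are written once.
from collections import OrderedDict

def writes_between_checkpoints(pool_size, page_accesses):
    """Count dirty-page write-outs from evictions plus the final
    checkpoint write, for a buffer pool of pool_size pages."""
    pool = OrderedDict()            # page id -> dirty flag, in LRU order
    evict_writes = 0
    for page in page_accesses:
        if page in pool:
            pool.move_to_end(page)          # cache hit: make page MRU
        else:
            if len(pool) >= pool_size:      # miss with full pool:
                _, dirty = pool.popitem(last=False)  # evict LRU page
                if dirty:
                    evict_writes += 1       # dirty eviction = extra write
            pool[page] = False              # bring page in
        pool[page] = True                   # every access dirties it
    checkpoint_writes = sum(1 for d in pool.values() if d)
    return evict_writes + checkpoint_writes

# Hot "index" pages 0-9 revisited round-robin, interleaved with cold
# "heap" pages 100-199 that are each touched once.
accesses = [p for i in range(100) for p in (i % 10, 100 + i)]

small = writes_between_checkpoints(12, accesses)   # working set doesn't fit
large = writes_between_checkpoints(200, accesses)  # everything fits
```

With the small pool the hot pages are evicted dirty before every revisit,
so `small` comes out well above `large`, even though both runs dirty the
same set of pages; the large pool writes each dirty page exactly once, at
checkpoint time.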
> What I do to quantify that is...well,
> the attached shows it better than I can describe; only works on 9.0 or
> later, as it depends on a feature I added there for this purpose. It
> measures exactly how much buffer cache churn happened during a test, in
> this case creating a pgbench database.
>
> --
> Greg Smith 2ndQuadrant US Baltimore, MD
> PostgreSQL Training, Services and Support
> greg@2ndQuadrant.com www.2ndQuadrant.us