On 08/12/2011 10:51 PM, Greg Stark wrote:
> If you execute a large batch delete or update or even just set lots of
> hint bits you'll dirty a lot of buffers. The ring buffer forces the
> query that is actually dirtying all these buffers to also do the i/o
> to write them out. Otherwise you leave them behind to slow down other
> queries. This was one of the problems with the old vacuum code which
> the ring buffer replaced. It left behind lots of dirtied buffers for
> other queries to do i/o on.
>
I ran into the other side of this when trying to use Linux's relatively
new dirty_background_bytes tunable to constrain the OS write cache.
When running with the current VACUUM ring buffer code, if there isn't
also a large OS write cache backing that, performance suffers quite a
bit. I've recently been adding test rigging to pgbench-tools to quantify
this, and I fear that one possible outcome is pushback toward making the
database's ring buffer bigger. Just a theory--still waiting on some
numbers.
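For the curious, that rigging mostly comes down to recording and twiddling the Linux vm.dirty_* writeback tunables between runs. A minimal sketch of that part--the 64MB figure below is only an example, not a recommendation or the value from my tests:

    # Read and (with root) constrain the Linux dirty writeback tunables.
    def read_vm(name):
        with open('/proc/sys/vm/' + name) as f:
            return int(f.read())

    def set_vm(name, value):
        with open('/proc/sys/vm/' + name, 'w') as f:
            f.write(str(value))

    for t in ('dirty_background_bytes', 'dirty_bytes',
              'dirty_background_ratio', 'dirty_ratio'):
        print(t, read_vm(t))

    # e.g. cap background writeback at 64MB before a pgbench run
    # set_vm('dirty_background_bytes', 64 * 1024 * 1024)

(When the *_bytes form is zero, the corresponding *_ratio setting is what's actually in effect, which is the usual default.)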
Anyway, I think every idea thrown out here so far needs about an order
of magnitude more types of benchmarking test cases before it can be
evaluated at all. The case I just mentioned is a good example of why.
Every other test I ran showed a nice improvement with the kernel tuning
I tried. But VACUUM was massively detuned in the process.
I have an entire file folder filled with notes on ways to reorganize the
buffer cache, from my background writer work for 8.3. In my mind
they're all sitting stuck behind the problem of getting enough
standardized benchmark workloads to have a performance regression
suite. The idea of touching any of this code without a look at a large
number of different tests is a bit optimistic. What I expect to happen
here is that all the initially proposed changes will end up tuning for
one workload at the expense of others that weren't measured.
--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us