Re: hint bit cache v5 - Mailing list pgsql-hackers

From: Merlin Moncure
Subject: Re: hint bit cache v5
Date:
Msg-id: BANLkTi=7mqJv10kf9j_qbfxLiYUWdpt8eA@mail.gmail.com
In response to: Re: hint bit cache v5  (Simon Riggs <simon@2ndquadrant.com>)
Responses: Re: hint bit cache v5  (Merlin Moncure <mmoncure@gmail.com>)
           Re: hint bit cache v5  (Simon Riggs <simon@2ndQuadrant.com>)
List: pgsql-hackers
On Tue, May 10, 2011 at 11:59 AM, Simon Riggs <simon@2ndquadrant.com> wrote:
> On Mon, May 9, 2011 at 5:12 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
>
>> I'd like to know if this is a strategy that merits further work...If
>> anybody has time/interest that is.  It's getting close to the point
>> where I can just post it to the commit fest for review.  In
>> particular, I'm concerned if Tom's earlier objections can be
>> satisfied. If not, it's back to the drawing board...
>
> I'm interested in what you're doing here.
>
> From here, there's quite a lot of tuning possibilities. It would be
> very useful to be able to define some metrics we are interested in
> reducing and working out how to measure them.

Following are results that are fairly typical of the benefits you
might see when the optimization kicks in.  The attached benchmark just
creates a bunch of records in a random table and scans it.  This is
more or less the scenario that causes people to gripe about hint bit
i/o, especially on systems that are already under moderate to heavy
i/o stress.  I'm going to call it 20%, although it could be less if
you have an i/o system that spanks the test (in that case, try
cranking -c and the number of records created in bench.sql).
Anecdotal reports of extreme duress caused by hint bit i/o suggest
that problematic or mixed (OLTP + OLAP) workloads might see even more
benefit.  One thing I still need to test is how much benefit you'll
see with wider records.
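For anyone following along without the patch in front of them, the
write i/o in question comes from the stock hint bit path, which boils
down to roughly this (a simplified sketch with a made-up wrapper name;
the real logic lives in tqual.c):

    #include "postgres.h"
    #include "access/htup.h"
    #include "access/transam.h"
    #include "storage/buf.h"
    #include "storage/bufmgr.h"

    /*
     * Simplified sketch of the stock hint bit path.  On the first scan
     * after a bulk load every tuple's xmin is still unhinted, so the
     * backend pays a clog lookup and then dirties the page just to
     * remember the answer -- that page write-back is the hint bit i/o
     * the benchmark provokes.
     */
    static void
    hint_xmin_if_committed(HeapTupleHeader tuple, Buffer buffer)
    {
        if (!(tuple->t_infomask & HEAP_XMIN_COMMITTED) &&
            TransactionIdDidCommit(HeapTupleHeaderGetXmin(tuple)))
        {
            tuple->t_infomask |= HEAP_XMIN_COMMITTED;  /* the hint bit */
            SetBufferCommitInfoNeedsSave(buffer);      /* page goes dirty */
        }
    }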

I think I'm going to revert the change that caches invalid bits.  I
just don't see hint bits as a major contributor to dead tuples
following epic rollbacks (really, the solution for that case is simply
to avoid getting into that scenario if you can).  This puts the code
back to the cheaper and simpler bit-per-transaction addressing.  What
I do plan to do, though, is check and set xmax commit bits in the
cache as well; that way deleted tuples will also see the cache
benefits.
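To be concrete about what I mean by bit-per-transaction addressing,
here's a rough sketch (hypothetical names and layout, not the patch
code): each cache page covers a contiguous range of xids at one bit
apiece, so a probe is just a subtraction, a shift, and a mask, and an
xmax probe is the same operation with a different xid.

    #include "postgres.h"

    /*
     * Rough sketch of bit-per-transaction addressing (hypothetical
     * names/layout).  Each cache page covers a contiguous range of
     * xids, spending one bit per transaction to record "known
     * committed".  Probing for an xmin or an xmax is the same
     * operation -- only the xid differs.
     */
    #define HBCACHE_PAGE_BYTES      BLCKSZ
    #define HBCACHE_XIDS_PER_PAGE   (HBCACHE_PAGE_BYTES * 8)

    typedef struct HintBitCachePage
    {
        TransactionId base_xid;             /* first xid this page covers */
        uint8 bits[HBCACHE_PAGE_BYTES];     /* 1 bit per xid */
    } HintBitCachePage;

    /* does the cache already know this xid committed?
     * (assumes xid falls within this page's range) */
    static bool
    hbcache_xid_is_committed(const HintBitCachePage *page, TransactionId xid)
    {
        uint32 off = xid - page->base_xid;

        return (page->bits[off / 8] & (1 << (off % 8))) != 0;
    }

    /* remember a commit, e.g. after an xmin or xmax clog lookup */
    static void
    hbcache_set_committed(HintBitCachePage *page, TransactionId xid)
    {
        uint32 off = xid - page->base_xid;

        page->bits[off / 8] |= (1 << (off % 8));
    }

With the layout above and the default BLCKSZ of 8192, one cache page
covers 64K transactions, which is what makes this addressing so cheap.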

[hbcache]
merlin@mmoncure-ubuntu:~$ time pgbench -c 4 -n -T 200 -f bench.sql
transaction type: Custom query
scaling factor: 1
query mode: simple
number of clients: 4
number of threads: 1
duration: 200 s
number of transactions actually processed: 8
tps = 0.037167 (including connections establishing)
tps = 0.037171 (excluding connections establishing)

real    3m35.549s
user    0m0.008s
sys     0m0.004s

[HEAD]
merlin@mmoncure-ubuntu:~$ time pgbench -c 4 -n -T 200 -f bench.sql
transaction type: Custom query
scaling factor: 1
query mode: simple
number of clients: 4
number of threads: 1
duration: 200 s
number of transactions actually processed: 8
tps = 0.030313 (including connections establishing)
tps = 0.030317 (excluding connections establishing)

real    4m24.216s
user    0m0.000s
sys     0m0.012s
