Re: Page Checksums + Double Writes - Mailing list pgsql-hackers

From: Robert Haas
Subject: Re: Page Checksums + Double Writes
Msg-id: CA+TgmoZni9j_agpLttK45BLWN5eODvs4ebTpn+mJoLwZsRaDBQ@mail.gmail.com
In response to: Re: Page Checksums + Double Writes (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
On Fri, Dec 23, 2011 at 12:42 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> An obvious problem is that, if the abort rate is significantly
>> different from zero, and especially if the aborts are randomly mixed
>> in with commits rather than clustered together in small portions of
>> the XID space, the CLOG rollup data would become useless.
>
> Yeah, I'm afraid that with N large enough to provide useful
> acceleration, the cases where you'd actually get a win would be too thin
> on the ground to make it worth the trouble.

Well, I don't know: something like pgbench is certainly going to
benefit, because all the transactions commit.  I suspect that's true
for many benchmarks.  Whether it's true of real-life workloads is more
arguable, of course, but if the benchmarks aren't measuring things
that people really do with the database, then why are they designed
the way they are?
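
To make the rollup idea concrete, here is a minimal sketch in plain C (not
actual PostgreSQL code; names like xid_did_commit, GROUP_SIZE, and
clog_xid_committed are invented for illustration).  It groups the XID space
into blocks of N, caches an "all committed" bit per block, and falls back to
the ordinary per-XID CLOG lookup whenever that bit isn't set.  A single abort
anywhere in a block keeps the whole block on the slow path, which is exactly
why a nonzero, scattered abort rate defeats the scheme.

#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

#define GROUP_SIZE 1024            /* hypothetical rollup granularity N */

typedef struct XidGroupSummary
{
    bool        all_committed;     /* true => every XID in this group committed */
} XidGroupSummary;

/* One summary slot per group of GROUP_SIZE XIDs (toy fixed-size cache). */
static XidGroupSummary summaries[(UINT32_MAX / GROUP_SIZE) + 1];

/*
 * Stand-in for the real per-XID CLOG lookup; in PostgreSQL this would read
 * the transaction's status bits from the CLOG pages.  Here it is a stub.
 */
static bool
clog_xid_committed(TransactionId xid)
{
    (void) xid;
    return true;                   /* placeholder */
}

bool
xid_did_commit(TransactionId xid)
{
    uint32_t    group = xid / GROUP_SIZE;

    if (summaries[group].all_committed)
        return true;               /* fast path: no CLOG access needed */

    return clog_xid_committed(xid);    /* slow path: consult CLOG */
}

/* Called once every XID in a group is known to have committed. */
void
mark_group_all_committed(TransactionId any_xid_in_group)
{
    summaries[any_xid_in_group / GROUP_SIZE].all_committed = true;
}

In an all-commit workload like pgbench, groups fill in quickly and most
visibility checks hit the fast path; with aborts sprinkled randomly through
the XID space, mark_group_all_committed almost never fires.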

I've certainly written applications that relied on the database for
integrity checking, so rollbacks were an expected occurrence, but then
again those were very low-velocity systems where there wasn't going to
be enough CLOG contention to matter anyway.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

