On Wed, Dec 21, 2011 at 5:17 AM, Simon Riggs <simon@2ndquadrant.com> wrote:
> With the increased performance we have now, I don't think increasing
> that alone will be that useful since it doesn't solve all of the
> problems and (I am told) likely reduces lookup speed.
I have benchmarks showing that it works, for whatever that's worth.
> The full list of clog problems I'm aware of is: raw lookup speed,
> multi-user contention, writes at checkpoint and new xid allocation.
What is the best workload to show a bottleneck on raw lookup speed?
I wouldn't expect writes at checkpoint to be a big problem because
it's so little data.
What's the problem with new XID allocation?
> Would it be better just to have multiple SLRUs dedicated to the clog?
> Simply partition things so we have 2^N sets of everything, and we look
> up the xid in partition (xid % (2^N)). That would overcome all of the
> problems, not just lookup, in exactly the same way that we partitioned
> the buffer and lock manager. We would use a graduated offset on the
> page to avoid zeroing pages at the same time. Clog size wouldn't
> increase, we'd have the same number of bits, just spread across 2^N
> files. We'd have more pages too, but that's not a bad thing since it
> spreads out the contention.
It seems that would increase memory requirements (clog1 through clog4
with 2 pages each doesn't sound workable), and it would also break
on-disk compatibility for pg_upgrade. I'm still holding out hope that
we can find a simpler solution...
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company