On Mon, 2006-11-27 at 13:44 -0500, Tom Lane wrote:
> "Simon Riggs" <simon@2ndquadrant.com> writes:
> > Under specific conditions, I propose to replace the array with a hash
> > table, designed with a custom hash function that would map the pins held
> > onto just 16 hash buckets.
>
> > Comments?
>
> Most likely a waste of development effort --- have you got any evidence
> of a real effect here? With 200 max_connections the size of the arrays
> is still less than 10% of the space occupied by the buffers themselves,
> ergo there isn't going to be all that much cache-thrashing compared to
> what happens in the buffers themselves. You're going to be hard pressed
> to buy back the overhead of the hashing.
And at 2000 connections we waste RAM the size of shared_buffers (each
backend keeps a 4-byte private refcount per shared buffer, so 2000
backends x 4 bytes is roughly 8 kB of refcount space per 8 kB buffer)...
that isn't something to ignore lightly.
> It might be interesting to see whether we could shrink the refcount
> entries to int16 or int8. We'd need some scheme to deal with overflow,
> but given that the counts are now backed by ResourceOwner entries, maybe
> extra state could be kept in those entries to handle it.
int8 still seems like overkill. When will the ref counts go above 2 on a
regular basis? Surely refcount = 2 is just chance at the best of times.
Refcount -> 2 bits per value, plus a simple overflow list? That would
allow refcounts of 0, 1 and 2 directly, with 3 meaning "look in the
hashtable to find the real refcount".
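A minimal sketch of what I have in mind (none of these names exist in the
current code, they're illustrative only, and the overflow side is a plain
linear-scan list here rather than a real hashtable, just to keep the
sketch self-contained):

#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define REFCOUNT_OVERFLOW   3       /* sentinel: real count is elsewhere */
#define MAX_OVERFLOW        64      /* few buffers should ever overflow  */

typedef struct OverflowEntry
{
    int         buf_id;             /* buffer whose pin count overflowed */
    int32_t     count;              /* real reference count (>= 3)       */
} OverflowEntry;

static uint8_t *PackedRefCount;     /* 2 bits per buffer, 4 per byte */
static OverflowEntry Overflow[MAX_OVERFLOW];
static int      NOverflow = 0;

static void
InitPackedRefCount(int nbuffers)
{
    PackedRefCount = calloc((nbuffers + 3) / 4, 1);
}

static int
GetPacked(int buf_id)
{
    int         shift = (buf_id & 3) * 2;

    return (PackedRefCount[buf_id >> 2] >> shift) & 3;
}

static void
SetPacked(int buf_id, int value)
{
    int         shift = (buf_id & 3) * 2;

    PackedRefCount[buf_id >> 2] &= ~(3 << shift);
    PackedRefCount[buf_id >> 2] |= (value & 3) << shift;
}

static OverflowEntry *
FindOverflow(int buf_id)
{
    int         i;

    for (i = 0; i < NOverflow; i++)
        if (Overflow[i].buf_id == buf_id)
            return &Overflow[i];
    return NULL;
}

/* Increment a backend-private pin count, spilling to the list at 3. */
static void
IncrPrivateRefCount(int buf_id)
{
    int         count = GetPacked(buf_id);

    if (count < REFCOUNT_OVERFLOW - 1)
        SetPacked(buf_id, count + 1);       /* 0 -> 1 or 1 -> 2 */
    else if (count == REFCOUNT_OVERFLOW - 1)
    {
        /* 2 -> 3: record the real count in the overflow list */
        assert(NOverflow < MAX_OVERFLOW);
        SetPacked(buf_id, REFCOUNT_OVERFLOW);
        Overflow[NOverflow].buf_id = buf_id;
        Overflow[NOverflow].count = 3;
        NOverflow++;
    }
    else
        FindOverflow(buf_id)->count++;      /* already overflowed */
}

Decrement would mirror this, dropping the overflow entry once the real
count falls back to 2.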
I'll see what test results I can arrange.
--
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com