On Fri, 2006-04-21 at 17:38 -0400, Tom Lane wrote:
> Simon Riggs <simon@2ndquadrant.com> writes:
> > The earlier lmgr lock partitioning had a hard-coded number of
> > partitions, which was sensible because of the reduced likelihood of
> > effectiveness beyond a certain number of partitions. That doesn't follow
> > here since the BufMappingLock contention will vary with the size of
> > shared_buffers and with the number of CPUs in use (for a given
> > workload). I'd like to see the partitioning calculated at server startup
> > either directly from shared_buffers or via a parameter. We may not be
> > restricted to only using a hash function as we were with lmgr, perhaps
> > using a simple range partitioning.
>
> I don't think any of that follows; and a large number of partitions is
> risky because it increases the probability of exhausting shared memory
> (due to transient variations in the actual size of the hashtables for
> different partitions).

lmgr partitioning uses either 4 or 16 partitions, a restriction imposed by
the hash function for various reasons. There is no similar restriction
forcing us to use a hash function here - we could equally well use range
partitioning. That removes the limit on the number of partitions, allowing
us more or fewer partitions according to need. We can place a cap on that
if you see a problem - at what level do you see a problem?
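
To make the range idea concrete, here is a rough sketch (not actual
PostgreSQL code; the names and the startup calculation are illustrative
only): a hash-style split masks the low bits of the buffer-tag hash, which
effectively ties the partition count to a power of two, whereas splitting
the 32-bit hash space into equal ranges works for any partition count,
which could be derived from shared_buffers at server startup.

    /* hypothetical illustration, not PostgreSQL source */
    #include <stdint.h>

    #define NUM_HASH_PARTITIONS 16      /* must be a power of two to mask */

    /* hash-style: partition count tied to the mask width */
    static inline int
    BufPartitionByHash(uint32_t hashcode)
    {
        return hashcode & (NUM_HASH_PARTITIONS - 1);
    }

    /*
     * range-style: split the 32-bit hash space into num_partitions equal
     * slices; num_partitions could be computed at startup, e.g. scaled
     * from NBuffers (shared_buffers), with whatever cap we agree on.
     */
    static inline int
    BufPartitionByRange(uint32_t hashcode, int num_partitions)
    {
        return (int) (((uint64_t) hashcode * (uint64_t) num_partitions) >> 32);
    }

Either way the partition chosen is stable for a given buffer tag, so the
locking rules stay the same; only the mapping from tag to partition lock
changes.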
-- 
 Simon Riggs
 EnterpriseDB          http://www.enterprisedb.com/