Re: SSI patch version 14 - Mailing list pgsql-hackers

From Kevin Grittner
Subject Re: SSI patch version 14
Date
Msg-id 4D525CA1020000250003A6CE@gw.wicourts.gov
In response to Re: SSI patch version 14  (Heikki Linnakangas <heikki.linnakangas@enterprisedb.com>)
List pgsql-hackers
Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> wrote:

>> (2)  The predicate lock and lock target initialization code was
>> initially copied and modified from the code for heavyweight
>> locks.  The heavyweight lock code adds 10% to the calculated
>> maximum size.  So I wound up doing that for
>> PredicateLockTargetHash and PredicateLockHash, but didn't do it
>> for SerializableXidHash.  Should I eliminate this from the first
>> two, add it to the third, or leave it alone?
>
> I'm inclined to eliminate it from the first two. Even in
> LockShmemSize(), it seems a bit weird to add a safety margin, the
> sizes of the lock and proclock hashes are just rough estimates
> anyway.

I'm fine with that.  Trivial patch attached.

> * You missed that RWConflictPool is sized five times as large as
> SerializableXidHash, and
>
> * The allocation for RWConflictPool elements was wrong, while the
> estimate was correct.
>
> With these changes, the estimated and actual sizes match closely,
> so that actual hash table sizes are 50% of the estimated size as
> expected.
>
> I fixed those bugs

Thanks.  Sorry for missing them.

> but this doesn't help with the buildfarm members with limited
> shared memory yet.

Well, if dropping the 10% fudge factor on those two HTABs doesn't
bring it down far enough (which seems unlikely), what do we do?  We
could, as I said earlier, bring down the multiplier for the number
of transactions we track in SSI based on the maximum allowed
connections, but I would really want a GUC on it if we
do that.  We could bring down the default number of predicate locks
per transaction.  We could make the default configuration more
stingy about max_connections when memory is this tight.  Other
ideas?

I do think that anyone using SSI with a heavy workload will need
something like the current values to see decent performance, so it
would be good if there was some way to do this which would tend to
scale up as they increased something.  Wild idea: make the
multiplier equivalent to the bytes of shared memory divided by 100MB
clamped to a minimum of 2 and a maximum of 10?

-Kevin

