Re: Maximum number of exclusive locks - Mailing list pgsql-general

From Jeff Janes
Subject Re: Maximum number of exclusive locks
Date
Msg-id CAMkU=1zLT3uO3bu+86yLhP=SRTqyGcbx+4FmXjusn87Qa9LKAg@mail.gmail.com
In response to Re: Maximum number of exclusive locks  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Maximum number of exclusive locks  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-general
On Tue, Sep 13, 2016 at 6:21 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
"Daniel Verite" <daniel@manitou-mail.org> writes:
> Nothing to complain about, but why would the above formula
> underestimate the number of object locks actually available
> to a transaction? Isn't it supposed to be a hard cap for such
> locks?

No, it's a minimum not a maximum.  There's (intentionally) a fair amount
of slop in the initial shmem size request.  Once everything that's going
to be allocated has been allocated during postmaster startup, the rest is
available for growth of shared hash tables, which in practice means the
lock table; there aren't any other shared structures that grow at runtime.
So there's room for the lock table to grow a bit beyond its nominal
capacity.

Having said that, the amount of slop involved is only enough for a
few hundred lock entries.  Not sure how you're managing to get to
nearly 20000 extra entries.
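
For reference, the nominal figure under discussion comes from the documented
sizing formula, max_locks_per_transaction * (max_connections +
max_prepared_transactions).  A minimal sketch of computing it on a running
server (the output column name is just illustrative):

-- Nominal size of the shared lock table, per the documented formula.
-- As noted above, this is a floor on capacity, not a hard cap.
SELECT current_setting('max_locks_per_transaction')::int
     * (current_setting('max_connections')::int
        + current_setting('max_prepared_transactions')::int) AS nominal_lock_slots;

With stock defaults (64, 100, and 0 respectively) that works out to 6400
lock slots.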


The shared-memory sizing code assumes every locked object will have two processes holding it (or waiting for it).  If each locked object actually has only one holder, that frees up a lot of memory to hold more locked objects.
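
A rough way to see that effect is to make a single transaction the sole
holder of far more locks than the nominal figure and count what it ends up
with.  The sketch below is a hypothetical experiment (the table names and
the count of 7000 are made up for illustration), not the test from this
thread:

BEGIN;

-- Creating a table takes an ACCESS EXCLUSIVE lock on it, so this one
-- transaction becomes the sole holder of 7000 relation locks, well past
-- the default nominal 6400.
DO $$
BEGIN
  FOR i IN 1..7000 LOOP
    EXECUTE format('CREATE TABLE t_%s ()', i);
  END LOOP;
END $$;

-- pg_locks shows one row per lock per holder; with only one holder per
-- object, this count can run well beyond the nominal figure before
-- shared memory is exhausted.
SELECT count(*) AS locks_held
FROM pg_locks
WHERE pid = pg_backend_pid();

ROLLBACK;  -- throw away the scratch tables and release the locks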

Cheers,


Jeff
