Re: Assistance with an out of shared memory error - Mailing list pgsql-general

From Tom Lane
Subject Re: Assistance with an out of shared memory error
Date
Msg-id 3297903.1642116955@sss.pgh.pa.us
In response to Assistance with an out of shared memory error  (Gautam Bellary <gautam@pulsasensors.com>)
Responses Re: Assistance with an out of shared memory error
Re: Assistance with an out of shared memory error
List pgsql-general
Gautam Bellary <gautam@pulsasensors.com> writes:
> I've got a PL/pgSQL function (regenerate_gateway_last_seen, attached) that
> loops through all partitions of 2 tables ("Measure" and "ValuelessMeasure",
> schema attached) selecting data from each into another table
> ("GatewayLastSeenNew"). Occasionally the function runs to completion, but
> usually it errors with the message copied below. I'd appreciate any advice
> to help understand why I'm seeing this and if increasing
> max_locks_per_transaction, changing another configuration value, or
> changing how the function works would improve reliability.

Yes, you ought to raise max_locks_per_transaction ...
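For instance, something along these lines in postgresql.conf (the value 256 is purely illustrative, not a recommendation for your workload; note the change only takes effect after a server restart):

```
# postgresql.conf -- illustrative value, tune to your workload
max_locks_per_transaction = 256    # default is 64; requires restart
```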

> 2. max_locks_per_transaction is being hit in the transaction - this also
> seems unlikely because max_locks_per_transaction is set to the default
> value of 64, but there are ~8000 partitions to consider and I expect the
> resulting GatewayLastSeenNew table to have thousands of rows.

... or else reduce the number of partitions you're using.  (People
frequently seem to think that more partitions are free.  That is
extremely not true.  I generally think that if you're using more
than a few dozen partitions per table, you are making a mistake.)
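If you want to see how many partitions you actually have per table, a query along these lines should do it (pg_inherits records the parent/child links for both declarative partitions and old-style inheritance children):

```sql
SELECT parent.relname AS parent_table, count(*) AS partitions
FROM pg_inherits
JOIN pg_class parent ON parent.oid = pg_inherits.inhparent
GROUP BY parent.relname
ORDER BY partitions DESC;
```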

> If I was
> taking locks that would contribute towards that ceiling of 64 I'd expect
> this to fail every time, instead of failing often but not always as I
> observe.

You misinterpret what that parameter does: it is not a hard per-session
limit.  This error appears when the shared lock-table pool overflows, so
you can (usually) take a lot more than 64 locks before running into
trouble.  It matters what else is going on in the database.

> 3. The max size of the lock table is being exceeded - my understanding is
> that the lock table has room for max_locks_per_transaction *
> (max_connections + max_prepared_transactions) locks, which would be 64 *
> (200 + 0) = 12800 for my current configuration. I used 'SELECT COUNT(*)
> FROM PG_LOCKS' while the function was running and observe values as high as
> 21758, so if this is the issue it seems like I might not be estimating the
> max size of the lock table correctly or only specific locks contribute to
> that.

I don't recall for sure, but I think that the lock table has one entry per
lockable object, while the pg_locks view shows separate entries for
different lock modes on the same object.
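So a closer estimate of shared lock-table consumption would be to count distinct locked objects rather than raw pg_locks rows, roughly like this (grouping on the columns that identify the object, not the lock mode; I've also excluded fast-path locks, which I believe don't occupy shared lock-table entries):

```sql
SELECT count(*)
FROM (
  SELECT DISTINCT locktype, database, relation, page, tuple,
                  virtualxid, transactionid, classid, objid, objsubid
  FROM pg_locks
  WHERE NOT fastpath
) AS distinct_objects;
```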

            regards, tom lane


