Thread: ERROR: too many dynamic shared memory segments

ERROR: too many dynamic shared memory segments

From: Ben Kanouse
Hello,

My database is experiencing an 'ERROR: too many dynamic shared memory
segments' error from time to time. It seems to happen most when traffic
is high, and it happens with fairly simple SELECT statements that run a
parallel query plan with a parallel bitmap heap scan. The database is
currently on version 12.4.

Here is an example query: https://explain.depesz.com/s/aatA

I did find a message in the archives on this topic, and it seems the
resolution was to increase the max_connections setting. Since I am
running my database on Heroku I do not have access to that setting. It
also seems like a rather unrelated configuration change just to avoid
this issue.

Link to archived discussion:
https://www.postgresql.org/message-id/CAEepm%3D2RcEWgES-f%2BHyg4931bOa0mbJ2AwrmTrabz6BKiAp%3DsQ%40mail.gmail.com

I think this block of code determines the size of the control segment:
https://github.com/postgres/postgres/blob/REL_13_1/src/backend/storage/ipc/dsm.c#L157-L159

maxitems  = 64 + 2 * MaxBackends

It seems like the reason increasing max_connections helps is that
MaxBackends is part of the equation that determines the maximum number
of segment slots in the control segment.
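
To convince myself, here is a rough standalone sketch of the mechanism
(my own simplified model, not the actual dsm.c code; a MaxBackends of
120 is just an example value). The slot table is sized once at
postmaster startup, so once every slot is taken the next segment
creation fails with the error in the subject:

#include <stdbool.h>
#include <stdio.h>

#define FIXED_SLOTS        64   /* the 64 in the formula above */
#define SLOTS_PER_BACKEND   2   /* the per-backend multiplier on 12.4 */

static unsigned slot_limit;     /* fixed at "startup", never grows */
static unsigned slots_in_use;   /* one per live DSM segment */

static void startup(unsigned max_backends)
{
    slot_limit = FIXED_SLOTS + SLOTS_PER_BACKEND * max_backends;
}

static bool dsm_create_sketch(void)
{
    if (slots_in_use >= slot_limit)
    {
        /* the condition that surfaces as the error in the subject */
        fprintf(stderr, "ERROR: too many dynamic shared memory segments\n");
        return false;
    }
    slots_in_use++;             /* each parallel query holds one or more slots */
    return true;
}

int main(void)
{
    startup(120);               /* hypothetical MaxBackends */
    while (dsm_create_sketch())
        ;                       /* simulate high traffic creating segments */
    printf("hit the limit after %u segments\n", slots_in_use);
    return 0;
}

Running it just shows the ceiling being hit after 64 + 2 * 120 = 304
segments, which is why raising max_connections (and therefore
MaxBackends) pushes the error further away.
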
I noticed in version 13.1 there is a commit that changes the
multiplier from 2 to 5:
https://github.com/postgres/postgres/commit/d061ea21fc1cc1c657bb5c742f5c4a1564e82ee2

maxitems  = 64 + 5 * MaxBackends
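
For a rough sense of the difference, with the same hypothetical
MaxBackends of 120 the old and new formulas work out like this (again
just illustrative arithmetic, not Postgres code):

#include <stdio.h>

int main(void)
{
    unsigned max_backends = 120;                 /* hypothetical value */
    unsigned old_limit = 64 + 2 * max_backends;  /* 12.x: 304 slots */
    unsigned new_limit = 64 + 5 * max_backends;  /* 13.1 and later: 664 slots */

    printf("old limit: %u slots, new limit: %u slots\n", old_limit, new_limit);
    return 0;
}

So the same workload gets more than twice the headroom before hitting
the error.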

Should this commit be back-ported to earlier versions of Postgres to
prevent this error there as well?

Thank you,
Ben



Re: ERROR: too many dynamic shared memory segments

From: Thomas Munro
On Wed, Nov 18, 2020 at 4:21 AM Ben Kanouse <kanobt61@gmail.com> wrote:
> My database is experiencing an 'ERROR: too many dynamic shared memory
> segments' error from time to time. It seems to happen most when traffic
> is high, and it happens with fairly simple SELECT statements that run a
> parallel query plan with a parallel bitmap heap scan. The database is
> currently on version 12.4.
>
> Here is an example query: https://explain.depesz.com/s/aatA

Hmm, not sure how this plan could cause the problem, even if running
many copies of it.  In the past, this problem has been reported from
single queries that had a very high number of separate Gather nodes,
or very very large parallel hash joins, or, once you've hit the error
a few times those ways, also due to an ancient DSM leak that was fixed
in 93745f1e (fix present in 12.4).

> I noticed in version 13.1 there is a commit that changes the
> multiplier from 2 to 5:
> https://github.com/postgres/postgres/commit/d061ea21fc1cc1c657bb5c742f5c4a1564e82ee2
>
> maxitems  = 64 + 5 * MaxBackends
>
> Should this commit be back-ported to earlier versions of Postgres to
> prevent this error there as well?

Yeah, that seems like a good idea anyway.  I will do that tomorrow,
barring objections.



Re: ERROR: too many dynamic shared memory segments

From: Thomas Munro
On Wed, Nov 18, 2020 at 10:55 AM Thomas Munro <thomas.munro@gmail.com> wrote:
> a few times those ways, also due to an ancient DSM leak that was fixed
> in 93745f1e (fix present in 12.4).

I take that bit back -- I misremembered -- that was a leak of the
actual memory once slots were exhausted, not a leak of the slots themselves.



Re: ERROR: too many dynamic shared memory segments

From: Thomas Munro
On Wed, Nov 18, 2020 at 10:55 AM Thomas Munro <thomas.munro@gmail.com> wrote:
> > Should this commit be back-ported to earlier versions of Postgres to
> > prevent this error there as well?
>
> Yeah, that seems like a good idea anyway.  I will do that tomorrow,
> barring objections.

Done, for 10, 11 and 12.