Re: ERROR: too many dynamic shared memory segments - Mailing list pgsql-general

From Jakub Glapa
Subject Re: ERROR: too many dynamic shared memory segments
Date
Msg-id CAJk1zg2TKp0mN-DY-ZdAopR9c+mhZ3tEmaN=ZX9fZBYHkCjHnQ@mail.gmail.com
In response to Re: ERROR: too many dynamic shared memory segments  (Thomas Munro <thomas.munro@enterprisedb.com>)
List pgsql-general
Thank you, Thomas!



--
regards,
Jakub Glapa

On Thu, Dec 7, 2017 at 10:30 PM, Thomas Munro <thomas.munro@enterprisedb.com> wrote:
On Tue, Dec 5, 2017 at 1:18 AM, Jakub Glapa <jakub.glapa@gmail.com> wrote:
> I see that the segfault is under active discussion but just wanted to ask if
> increasing the max_connections to mitigate the DSM slots shortage is the way
> to go?

Hi Jakub,

Yes.  In future releases this situation will improve (maybe we'll
figure out how to use one DSM segment for all the gather nodes in your
query plan, and maybe it'll be moot anyway because maybe we'll be able
to use a Parallel Append for queries like yours so that it uses the
same set of workers over all the child plans instead of the
fork()-fest you're presumably seeing).  For now your only choice, if
you want that plan to run, is to crank up max_connections so that the
total number of concurrently executing Gather nodes is less than about
64 + 2 * max_connections.  There is also a crash bug right now in the
out-of-slots case as discussed, fixed in the next point release, but
even with that fix in place you'll still need a high enough
max_connections setting to be sure to be able to complete the query
without an error.
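The arithmetic above can be sketched as a quick back-of-the-envelope check. This is only an illustration of the approximation quoted in the thread (about 64 + 2 * max_connections slots); the function names are hypothetical, and the real limit is computed inside the server and may differ between versions:

```python
# Rough capacity check based on the ceiling quoted in this thread:
# roughly 64 + 2 * max_connections DSM slots server-wide.

def dsm_slot_ceiling(max_connections: int) -> int:
    """Approximate number of dynamic shared memory slots available."""
    return 64 + 2 * max_connections

def min_max_connections(concurrent_gather_nodes: int) -> int:
    """Smallest max_connections whose slot ceiling covers the given
    number of concurrently executing Gather nodes."""
    needed = concurrent_gather_nodes - 64
    return max(0, -(-needed // 2))  # ceiling division, floored at zero

# Example: a plan fanning out to ~500 concurrent Gather nodes
print(dsm_slot_ceiling(100))     # with max_connections = 100 -> 264 slots
print(min_max_connections(500))  # -> 218, so max_connections must be >= 218
```

So a query plan with many concurrently executing Gather nodes (as in the fork-per-partition plan described above) may need max_connections raised well beyond what the actual client count would suggest.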

Thanks for the report!
