Re: Error: "could not fork new process for connection: Cannot allocate memory" - Mailing list pgsql-admin

From frank picabia
Subject Re: Error: "could not fork new process for connection: Cannot allocate memory"
Date
Msg-id CA+AKB6ECguuUFGsm-m4dZLwaULPn0DXQ6_H3Bu7RRX7-v=21qA@mail.gmail.com
In response to Re: Error: "could not fork new process for connection: Cannot allocate memory"  (Laurenz Albe <laurenz.albe@cybertec.at>)
List pgsql-admin
Is that a guess?

Isn't there a different error for running out of open files?  I think you may have answered the
question for the error "out of file descriptors: Too many open files in system" instead.
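
For what it's worth, the two limits look quite different when checked on the host.  A quick
comparison on a typical Linux box (standard /proc interfaces and ulimit, nothing specific to our setup):

    # System-wide file handle usage: allocated, free, and the fs.file-max ceiling.
    # Exhausting this is what produces "Too many open files in system" (ENFILE).
    cat /proc/sys/fs/file-nr

    # Per-process open file limit for the postgres user (what "ulimit -n" governs).
    su - postgres -c 'ulimit -n'

    # Memory accounting around the time fork() failed.  "Cannot allocate memory"
    # is ENOMEM, i.e. a memory allocation failure rather than a file handle one.
    grep -E '^(CommitLimit|Committed_AS|MemFree|SwapFree)' /proc/meminfo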


On Mon, Dec 21, 2020 at 9:35 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:
On Mon, 2020-12-21 at 09:17 -0400, frank picabia wrote:
> This error occurred a couple of days ago and it is unlikely to repeat, as
> it was a series of exams being done in Moodle with over 500 students.
> However, I'd like to tune Postgres to see if it can be prevented.
>
> First of all, we did not run out of memory within the OS.  This system is monitored by Cacti, and
> Real Used memory never exceeded 30 GB on a server with 64 GB of RAM plus some swap.
>
> I can also see in the messages log that the Linux kernel OOM killer was never triggered.
>
> The postgres user running the DB has no ulimits on memory which might apply.
>
> Here is the entry in the Postgres log:
>
> <2020-12-19 13:19:29 GMT>LOG: could not fork new process for connection: Cannot allocate memory
> TopMemoryContext: 136704 total in 13 blocks; 6688 free (4 chunks); 130016 used
> smgr relation table: 24576 total in 2 blocks; 13904 free (4 chunks); 10672 used
> TopTransactionContext: 8192 total in 1 blocks; 6152 free (5 chunks); 2040 used
> <2020-12-19 13:19:29 GMT>LOG: could not fork new process for connection: Cannot allocate memory
> TransactionAbortContext: 32768 total in 1 blocks; 32728 free (0 chunks); 40 used
> Portal hash: 8192 total in 1 blocks; 1672 free (0 chunks); 6520 used
> PortalMemory: 0 total in 0 blocks; 0 free (0 chunks); 0 used
>
> My conclusion is that this memory error must be related to limits within the Postgres tunables, or perhaps
>  something like shared memory (SHM), which we have not set up in sysctl.conf, so it is still at the default.
>
> The values we have in postgresql.conf were mostly suggested by the pgtune script.  The one exception is
>  max_connections=500, which was increased gradually over time to avoid bottlenecks from the number of Apache
>  clients, once we saw we still had plenty of memory available with a higher connection limit.
>
> Which tunables do you need to see?

You probably have to increase the operating system limit for open files
for the "postgres" user (ulimit -n).

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
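
If the open-files limit does turn out to be the culprit, checking and raising it for the postgres
user would look roughly like this (values are illustrative, not a recommendation for this server):

    # Current soft limit for the postgres user
    su - postgres -c 'ulimit -n'

    # Raise it persistently by adding lines like these to /etc/security/limits.conf:
    #   postgres  soft  nofile  65536
    #   postgres  hard  nofile  65536

    # If postgres is started by systemd, the unit's LimitNOFILE= setting applies
    # instead of limits.conf, so set it there and restart the service.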
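
Separately, the kernel shared-memory defaults mentioned above can be inspected without changing
anything; a minimal check on a typical Linux host (standard sysctl/ipcs tools):

    # Current System V shared memory limits (kernel defaults if sysctl.conf is untouched)
    sysctl kernel.shmmax kernel.shmall kernel.shmmni

    # Shared memory segments actually in use, including the one Postgres attaches at startup
    ipcs -m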
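
And as a rough back-of-the-envelope for the connection limit: each allowed backend multiplies the
potential per-process memory.  The numbers below are purely illustrative, not the values from this
server's postgresql.conf:

    # 500 backends, each potentially using work_mem per sort/hash plus some process overhead
    connections=500
    work_mem_mb=4        # illustrative work_mem
    overhead_mb=10       # rough per-backend overhead (illustrative)
    echo "$(( connections * (work_mem_mb + overhead_mb) )) MB potentially in use"   # -> 7000 MB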
