Thread: LOG: could not fork new process for connection: Cannot allocate memory

LOG: could not fork new process for connection: Cannot allocate memory

From
Ahsan Ali
Date:
Hi Support

My production server is having service interruptions because of this "Cannot allocate memory" error in the DB log.

Because of this error, even DBAs can't log in to the DB server unless we kill one of the existing sessions. Please help me resolve this issue.

$ grep Commit /proc/meminfo
CommitLimit:    133625220 kB
Committed_AS:   82635628 kB

$ cat /proc/sys/vm/overcommit_memory
2

$ free -m -g
             total       used       free     shared    buffers     cached
Mem:           252        238         13          0          0        175
-/+ buffers/cache:         62        189
Swap:            7          0          7

Re: LOG: could not fork new process for connection: Cannot allocate memory

From
John R Pierce
Date:
On 8/25/2016 11:49 AM, Ahsan Ali wrote:
>
> My production server is having service interruptions because of this
> "Cannot allocate memory" error in the DB log.

could you paste the whole error message?

what version of postgres is this?

what OS (if linux, distribution) version is this?


older versions of postgres require kernel.shmmax and some other settings
to be increased if you request larger shared_memory settings.   this is
likely what you're running into.
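
A minimal sketch for checking those current kernel values (Linux-only, reading the /proc/sys interface; not from the thread):

```python
# Minimal sketch: read the kernel shared-memory limits John mentions.
# Linux-only; values are system-dependent.

def read_sysctl(name):
    """Read a sysctl value through its /proc/sys path."""
    path = "/proc/sys/" + name.replace(".", "/")
    with open(path) as f:
        return f.read().strip()

for key in ("kernel.shmmax", "kernel.shmall"):
    print(key, "=", read_sysctl(key))
```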


--
john r pierce, recycling bits in santa cruz



Re: LOG: could not fork new process for connection: Cannot allocate memory

From
Ahsan Ali
Date:
Hi John,

Thanks for replying. Below are all the details.

I am using psql (PostgreSQL) 9.5.2

*** Error we got in the PostgreSQL log

2016-08-25 11:15:55 PDT [60739]: [10282-1] LOG:  could not fork new process for connection: Cannot allocate memory
2016-08-25 11:15:55 PDT [60739]: [10283-1] LOG:  could not fork new process for connection: Cannot allocate memory
2016-08-25 11:15:55 PDT [60739]: [10284-1] LOG:  could not fork new process for connection: Cannot allocate memory
2016-08-25 11:15:55 PDT [60739]: [10285-1] LOG:  could not fork new process for connection: Cannot allocate memory
2016-08-25 11:15:55 PDT [60739]: [10286-1] LOG:  could not fork new process for connection: Cannot allocate memory
2016-08-25 11:15:55 PDT [60739]: [10287-1] LOG:  could not fork new process for connection: Cannot allocate memory

*** OS

Red Hat Enterprise Linux Server release 6.3 (Santiago)
I don't see any errors in /var/log/messages.

*** OS Configuration

-bash-4.1$ cat /etc/sysctl.conf
#ipv4 definitions

net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.rp_filter = 1 
net.ipv4.conf.default.rp_filter = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1

#begin=

vm.swappiness=10
vm.overcommit_memory=2
vm.overcommit_ratio=85
vm.dirty_background_ratio=1
vm.dirty_ratio=20
vm.dirty_expire_centisecs=500
vm.dirty_writeback_centisecs=100
vm.zone_reclaim_mode=0
vm.nr_hugepages=25600
vm.hugetlb_shm_group=26
vm.nr_overcommit_hugepages=512
vm.dirty_background_ratio=1

kernel.shmmax=214748364800
kernel.shmall=52428800
kernel.sem = 4010 1002500 4010 350

fs.file-max=1000000
net.ipv4.conf.all.log_martians = 0

*** Memory output

-bash-4.1$ cat /proc/meminfo
MemTotal:       264493868 kB
MemFree:         6158268 kB
Buffers:           64584 kB
Cached:         170097488 kB
SwapCached:       928152 kB
Active:         164001880 kB
Inactive:       57038336 kB
Active(anon):   80960992 kB
Inactive(anon):  3445472 kB
Active(file):   83040888 kB
Inactive(file): 53592864 kB
Unevictable:        4996 kB
Mlocked:            4996 kB
SwapTotal:       8388592 kB
SwapFree:        5251268 kB
Dirty:            253228 kB
Writeback:          5116 kB
AnonPages:      49991856 kB
Mapped:         32847112 kB
Shmem:          33523772 kB
Slab:            1987048 kB
SReclaimable:    1664692 kB
SUnreclaim:       322356 kB
KernelStack:       23912 kB
PageTables:     18320680 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    221273452 kB
Committed_AS:   95458972 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      688676 kB
VmallocChunk:   34224374380 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
HugePages_Total:    6856
HugePages_Free:     6836
HugePages_Rsvd:       59
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:        7852 kB
DirectMap2M:     3102720 kB
DirectMap1G:    265289728 kB
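
For what it's worth, with vm.overcommit_memory=2 the kernel of this era derives CommitLimit as (RAM minus the hugepage pool) × overcommit_ratio/100 + swap, computed in pages. A sketch reproducing the CommitLimit above from these meminfo values (hardcoded from this post):

```python
# Reproduce CommitLimit from the meminfo above, under overcommit_memory=2.
# The kernel computes in 4 kB pages: (totalram - hugetlb) * ratio/100 + swap.
PAGE_KB = 4

mem_total_kb    = 264493868   # MemTotal
swap_total_kb   = 8388592     # SwapTotal
huge_pages      = 6856        # HugePages_Total
hugepagesize_kb = 2048        # Hugepagesize
ratio           = 85          # vm.overcommit_ratio

totalram_pages = mem_total_kb // PAGE_KB
hugetlb_pages  = huge_pages * (hugepagesize_kb // PAGE_KB)
swap_pages     = swap_total_kb // PAGE_KB

commit_limit_pages = (totalram_pages - hugetlb_pages) * ratio // 100 + swap_pages
commit_limit_kb = commit_limit_pages * PAGE_KB
print(commit_limit_kb)  # matches CommitLimit: 221273452 kB
```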


-bash-4.1$ free -m -g
             total       used       free     shared    buffers     cached
Mem:           252        248          3          0          0        164
-/+ buffers/cache:         84        167
Swap:            7          2          5

*** Kernel settings

# sudo -iu postgres ulimit -a

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 2066203
max locked memory       (kbytes, -l) 209715200
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1000000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 5500
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited



cat /etc/security/limits.conf 

postgres soft nproc 5500
postgres hard nproc 5500
postgres soft nofile 1000000
postgres hard nofile 1000000
postgres soft memlock 209715200
postgres hard memlock 209715200

$ grep Commit /proc/meminfo
CommitLimit:    133625220 kB
Committed_AS:   82635628 kB


$ cat /proc/sys/vm/overcommit_memory
2


*** DB Parameter settings
Spool of the database parameter file:


Regards
Ali







--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: LOG: could not fork new process for connection: Cannot allocate memory

From
John R Pierce
Date:
On 8/25/2016 3:54 PM, Ahsan Ali wrote:
> Red Hat Enterprise Linux Server release 6.3 (Santiago)

That was released in June 2012; you're missing 4+ years of bug fixes. 6.8 is current.

> max_connections = 3000

That's insanely high for most purposes unless you have several hundred CPU cores.

Otherwise, it's hard to say what's failing; those log entries aren't giving much info. How many connections are active (select count(*) from pg_stat_activity;) when you get the error?



-- 
john r pierce, recycling bits in santa cruz

Re: LOG: could not fork new process for connection: Cannot allocate memory

From
Ahsan Ali
Date:
Yes, it is older; however, we do apply security patches now and then. Regarding max connections, it's the application design; however, it does not have that many active sessions.

postgres=# select count(*) from pg_stat_activity;
 count 
-------
  1818

Please let me know if you'd like to see any other logs.

Regards
Ali


Re: LOG: could not fork new process for connection: Cannot allocate memory

From
John R Pierce
Date:
On 8/25/2016 5:10 PM, Ahsan Ali wrote:
> Yes, it is older; however, we do apply security patches now and then.

Red Hat doesn't really support mixing packages from different releases;
they only test with all packages from the same snapshot. "yum update"
should bring the whole system up to current.


> Regarding max connections, it's the application design; however, it does
> not have that many active sessions.
> postgres=# select count(*) from pg_stat_activity;
>  count
> -------
>   1818

So there were 1818 postgres client processes at the time it couldn't
create a new process. That's certainly a larger number than I've ever
run. If I have client software with lots and lots of idle connections,
I use a connection pooler like pgbouncer, in transaction mode.

--
john r pierce, recycling bits in santa cruz
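
For reference, a minimal transaction-mode pgbouncer configuration might look like the sketch below; the database alias, addresses, and pool sizes are illustrative, not taken from this thread.

```ini
; Minimal pgbouncer sketch for transaction-mode pooling (illustrative values).
[databases]
; forward the "appdb" alias to the real server -- the name is hypothetical
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction      ; reuse server connections between transactions
max_client_conn = 3000       ; clients can still open many connections...
default_pool_size = 50       ; ...but only this many reach Postgres per db/user
```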



Re: LOG: could not fork new process for connection: Cannot allocate memory

From
Ahsan Ali
Date:
We have pooling at the application level. However, we never had this issue before; it started happening in the last couple of days. In the past we had over 2300 sessions with no issues.


Re: LOG: could not fork new process for connection: Cannot allocate memory

From
Jim Nasby
Date:
On 8/25/16 7:45 PM, Ahsan Ali wrote:

Please don't top-post; it's harder to read.
> On Thu, Aug 25, 2016 at 5:29 PM, John R Pierce <pierce@hogranch.com
> <mailto:pierce@hogranch.com>> wrote:
>     so there were 1818 postgres client processes at the time it couldn't
>     create a new process.   that's certainly a larger number than I've
>     ever run.   if I have client software that has lots and lots of idle
>     connections, I use a connection pooler like pgbouncer, in
>     transaction mode.

While large connection counts are not ideal, people pay way too much
attention to them. It's not *that* big a deal.

> We have pooling at the application level. However, we never had this
> issue before; it started happening in the last couple of days. In the
> past we had over 2300 sessions with no issues.

Well, if I'm reading your original post correctly, this is on a server that
only has 252MB of memory, which is *very* small. Even so, according to
`free` there's 175MB cached, which should become available as necessary.

While the shared memory settings are an interesting theory, there's
nothing in 9.5 that would attempt to allocate more shared memory after
the database is started, so that can't be it.

The only thing I can think of is that someone enabled user quotas on the
system... though if that was the case I would expect it to apply to all
the existing backends as well (though, maybe there's some mode where
that doesn't happen...).

It might also be possible that Postgres is reporting the wrong error...
ISTR one or two cases in startup code where failure to allocate
something other than memory (like a socket) could result in a false
memory error in some pathological cases. If you've got debug symbols you
could try attaching to the postmaster and setting a breakpoint at
ereport and then trying to connect. You could then get a backtrace; just
don't leave the system in that state for long. (There might be a more
elegant way to do that...)
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461


Re: LOG: could not fork new process for connection: Cannot allocate memory

From
Jeff Janes
Date:
On Sun, Aug 28, 2016 at 5:18 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:
> Well, if I'm reading your original post correctly, this is on a server that
> only has 252MB of memory, which is *very* small. Even so, according to
> `free` there's 175MB cached, which should become available as necessary.


I believe that is 252GB, not MB. "free -m -g" is the same thing as "free -g".

I think his problem is more likely to be the "nproc 5500" limit.

If he has 1818 or 2300 user sessions, who knows how much other miscellaneous cruft is going on? It could easily exceed 5500.
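
A quick way to sanity-check that theory (a Linux-only sketch; run it as the postgres user to see that account's limit):

```python
# Sketch: compare a user's process count against the nproc (RLIMIT_NPROC) limit.
# Linux-only; counts entries under /proc owned by the user.
import os
import pwd
import resource

def count_user_procs(username):
    """Count processes owned by `username` by scanning /proc."""
    uid = pwd.getpwnam(username).pw_uid
    n = 0
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            if os.stat("/proc/" + pid).st_uid == uid:
                n += 1
        except OSError:
            continue  # process exited while we were scanning
    return n

soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
me = pwd.getpwuid(os.getuid()).pw_name
print("nproc soft limit:", soft, "| processes owned by", me + ":", count_user_procs(me))
```

Note that on Linux the limit is actually enforced against the user's thread count, so the process count from /proc is only a lower bound.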

Cheers,


Jeff