Thread: Problem with Out of Memory and no more connection possible

Problem with Out of Memory and no more connection possible

From
"AL-Temimi, Muthana"
Date:

Hello admins,

 

I have PostgreSQL version 9.1 with the following configuration:

 

max_connections=600

shared_buffers=1024M

work_mem=4M

All other parameters are at their defaults as shipped with PostgreSQL.

 

I installed it on SUSE Linux Enterprise Server and set kernel.shmmax = 2 GB in sysctl.conf, because with 1 GB PostgreSQL would not start at all.
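In /etc/sysctl.conf that is (2 GB expressed in bytes):

    kernel.shmmax = 2147483648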

The total RAM of the server is 8GB.

 

But when the active connections reach about 300, we get out-of-memory errors. Checking the system with "top" shows 7.8 GB of the 8 GB in use, and the machine starts to swap as well. At that point no new connections are possible.

 

I have the connection pooler pgpool-II in front of the database, and so far everything with it has been fine; no problems there, it works well.

 

Any help would be greatly appreciated.

 

Best Regards

 

Muthana AL-Temimi

M.Sc. Informations- und Kommunikations-Systeme

 

Technische Universitaet Hamburg Harburg

-Rechenzentrum-

Am Schwarzenberg-Campus 3

D-21073 Hamburg

 

Tel.:  +49.40.42878.2338

Fax.: +49.40.42793.5160

E-Mail: m.al@tu-harburg.de

http://www.tu-harburg.de/rzt

 

 

Re: Problem with Out of Memory and no more connection possible

From
Dmitrii Golub
Date:
Hello,
It's well-known behaviour; just use a connection pooler.
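For example, a minimal sketch with PgBouncer (rather than pgpool-II; database name, ports and pool sizes below are only placeholders to adapt):

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 600
    default_pool_size = 20

With pool_mode = transaction, 600 client connections can share roughly 20 server backends, so the backend count (and the per-backend memory) stays low.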

2015-06-05 0:35 GMT+03:00 AL-Temimi, Muthana <muthana.al-temimi@tu-harburg.hamburg.de>:

[...]


Re: Problem with Out of Memory and no more connection possible

From
Keith
Date:


On Thu, Jun 4, 2015 at 5:55 PM, Dmitrii Golub <dmitrii.golub@gmail.com> wrote:
[...]


Keep in mind that work_mem is memory used in addition to shared_buffers.
With work_mem = 4MB and a possible 600 connections, that alone could account for an additional ~2.3GB (600 × 4MB) if all connections were in use and running even simple queries.

But work_mem can be used many times over per session when complex queries run, since each sort or hash operation can claim its own allocation. Read https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server for more info.
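If it helps to verify, you can check the effective values and the live backend count from psql (this works on 9.1):

    SELECT name, setting, unit
      FROM pg_settings
     WHERE name IN ('max_connections', 'shared_buffers', 'work_mem');

    SELECT count(*) FROM pg_stat_activity;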

Are you running pgpool on the same server? If so, 8GB doesn't sound like enough memory for what you're trying to do if your concurrent connection count is going that high.

Keith

Re: Problem with Out of Memory and no more connection possible

From
lst_hoe02@kwsoft.de
Date:
Quoting "AL-Temimi, Muthana" <muthana.al-temimi@tu-harburg.hamburg.de>:

> I have PostgreSQL version 9.1 with the following configuration:
>
> max_connections=600
> shared_buffers=1024M
> work_mem=4M
>
> [...]

Hello,

Have you checked what is actually using the memory, e.g. which share goes
to which processes? A typical PostgreSQL backend on my systems uses ~4M
of non-shared memory, so with ~300 connections and no pooling you already
have ~1.2G just for the running processes. Furthermore, work_mem is the
memory limit per sort, and a single connection and query can perform many
sorts. Depending on your workload you could try reducing shared_buffers
and work_mem: PostgreSQL does not need a large shared_buffers, since it
relies on the OS cache most of the time, and if your sorts are rare or
not that big/critical, a work_mem of 2M might also be sufficient.
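For example, to get a rough per-process picture (the process name may be
"postmaster" on some installations; also note that RSS counts the shared
buffers once per backend, so it overstates the true non-shared usage):

    ps -C postgres -o pid,rss,vsz,cmd --sort=-rss | head -n 20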

This is explained in detail here, along with hints on how to cope if you
really need a massive number of concurrent connections:

https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
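If you do try lowering the values, a conservative sketch for
postgresql.conf might be (the numbers are only a starting point to test
against your workload, not a recommendation):

    shared_buffers = 512MB    # leave more RAM to the OS cache
    work_mem = 2MB            # per sort/hash; can be raised per session
    max_connections = 300     # let pgpool-II queue the rest

Keep in mind that shared_buffers and max_connections only take effect
after a server restart.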

Regards

Andreas


