Re: Running out of memory on vacuum - Mailing list pgsql-general

From Ioana Danes
Subject Re: Running out of memory on vacuum
Date
Msg-id 1368541827.61061.YahooMailNeo@web164605.mail.gq1.yahoo.com
In response to Re: Running out of memory on vacuum  (Igor Neyman <ineyman@perceptron.com>)
Responses Re: Running out of memory on vacuum  (Igor Neyman <ineyman@perceptron.com>)
Re: Running out of memory on vacuum  (Scott Marlowe <scott.marlowe@gmail.com>)
List pgsql-general
Hi Igor,

1. I could remove the nightly vacuum, but I think that is
 not the cause. The vacuum is only catching the problem. If I ignore the
 vacuum message for a few days, the system is going to run out of memory on
queries...
2. There is no autovacuum running at the same time. I
tried to run vacuum verbose manually and checked what else was going on
on the server (a rough sketch of that check is below).
I also reduced the maintenance work mem to 1 GB but I get the same error.
3.
 I do use connection pooling. I have 1500 terminals selling, and at busy
times I might need more than 100 active connections, but only
occasionally...
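
A minimal sketch of the kind of check I mean, run in psql against the 9.0 server (on 9.0 the query text column in pg_stat_activity is still called current_query and the pid column is procpid):

-- look for autovacuum workers active at the same time as the manual vacuum
SELECT procpid, datname, current_query, query_start
  FROM pg_stat_activity
 WHERE current_query LIKE 'autovacuum:%';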

Thanks for the quick response,



----- Original Message -----
From: Igor Neyman <ineyman@perceptron.com>
To: Ioana Danes <ioanasoftware@yahoo.ca>; PostgreSQL General <pgsql-general@postgresql.org>
Cc:
Sent: Tuesday, May 14, 2013 10:11:19 AM
Subject: Re: [GENERAL] Running out of memory on vacuum

> Subject: [GENERAL] Running out of memory on vacuum
>
> Hi all,
>
> I have a production database that sometimes runs out of memory at the
> nightly vacuum.
>
> The application typically runs with around 40 postgres connections, but
> there are times when the connections increase because of some queries
> going on. The reason is that the operations are slow, the terminals
> time out and try to reconnect using new connections. Some time ago I
> started to have problems with too many connections being open, so I
> lowered the limit to 300 connections. It was all good until recently,
> when even with 180 connections I am running out of memory on vacuum...
> So the connections increase to 180 and the system still runs properly
> for another 2 days, but then the nightly vacuum runs out of memory.
> The fix is to restart postgres... If I only close the connections the
> problem is still there, so I need to restart postgres.
> If I don't restart postgres then the system will run out of memory on
> queries at some point...
> Another important thing is that during the vacuum at 1am nothing else
> is going on on that server, so all the connections are idle.
>
> 2013-05-14 06:53:51.449 CST,"postgres","abrazo",8741,"[local]",519233dc.2225,3,"VACUUM",2013-05-14 06:53:48 CST,174/67143,0,ERROR,53200,"out of memory","Failed on request of size 668141820.",,,,,"vacuum;",,,"psql"
>
> OS:
>
> SUSE Linux Enterprise Server 11 (x86_64) VERSION = 11 PATCHLEVEL = 2
>
> SUSE-compiled postgres version:
>
>                                                            version
> ----------------------------------------------------------------------------------------------------------------------------
>  PostgreSQL 9.0.3 on x86_64-suse-linux-gnu, compiled by GCC gcc (SUSE Linux) 4.3.4 [gcc-4_3-branch revision 152973], 64-bit
>
> Postgres configuration parameters:
>
> max_connections = 300
> shared_buffers = 2GB
> maintenance_work_mem = 2GB
> effective_cache_size = 4GB
> work_mem = 2MB
> wal_buffers = 1MB
>
>
> checkpoint_segments = 16
> autovacuum = on
>
>
> wal_level = hot_standby
> archive_mode = on
> archive_command = 'test -f /cbnDBscripts/tmp/PITR_primarydb_stop_backup || rsync --timeout=30 -atz %p spsnidb1:/data01/wal_files/%f'
> archive_timeout = 60
>
>
> free
>
>              total       used       free     shared    buffers     cached
> Mem:       7880512    7825136      55376          0      72376    4537644
> -/+ buffers/cache:    3215116    4665396
> Swap:      2097148          0    2097148
>
> /etc/sysctl.conf
>
> kernel.shmmax=2684354560
> kernel.shmall=2684354560
> vm.swappiness=0
> vm.overcommit_memory=2
>
.....
>
> Thanks,
> ioana


1. You have autovacuum running.  Is there a specific reason to run "manual" vacuum nightly?
2. Reduce maintenance_work_mem; you may have the manual vacuum and autovacuum running at the same time, each requesting 2GB
(your current setting). A per-session sketch follows after this list.
3. Use connection pooling software (e.g. PgBouncer) and reduce max_connections.
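
As a rough illustration of point 2 (only a sketch, not tested against this setup; the 256MB figure is just an example value), the nightly job could lower the limit for its own session instead of relying on the server-wide 2GB:

SET maintenance_work_mem = '256MB';  -- affects only this session
VACUUM VERBOSE;

Note that a session-level SET does not change what the autovacuum workers use; on 9.0 they still take maintenance_work_mem from postgresql.conf, so lowering the global value may be needed as well.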

Igor Neyman




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

