Re: VACUUM ANALYZE out of memory - Mailing list pgsql-hackers

From: Michael Akinde
Subject: Re: VACUUM ANALYZE out of memory
Msg-id: 475E74E3.9040801@met.no
In response to: Re: VACUUM ANALYZE out of memory (Stefan Kaltenbrunner <stefan@kaltenbrunner.cc>)
Responses: Re: VACUUM ANALYZE out of memory (Stefan Kaltenbrunner <stefan@kaltenbrunner.cc>)
    Re: VACUUM ANALYZE out of memory (Alvaro Herrera <alvherre@alvh.no-ip.org>)
    Re: VACUUM ANALYZE out of memory (Martijn van Oosterhout <kleptog@svana.org>)
List: pgsql-hackers
Thanks for the rapid responses.

Stefan Kaltenbrunner wrote:
> this seems simply a problem of setting maintenance_work_mem too high (i.e. higher than what your OS can support - maybe an ulimit/process limit is in effect?). Try reducing maintenance_work_mem to say 128MB and retry.
> If you promise postgresql that it can get 1GB it will happily try to use it ...
I set up the system together with one of our Linux sysops, so I think the settings should be OK. kernel.shmmax is set to 1.2 GB, but I'll get him to recheck whether there are any other limits he may have forgotten to increase.
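
For completeness, the server-side settings are easy to double-check from psql (the OS-level limits, ulimit and kernel.shmmax, of course have to be verified on the shell side); just something like:

    SHOW shared_buffers;
    SHOW maintenance_work_mem;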

The way the process was running, it seems to have simply kept allocating memory until (presumably) it broke through the slightly less than 1.2 GB shared memory allocation we had provided for PostgreSQL (at least the postgres process was still running by the time its resident size reached 1.1 GB).

Incidentally, in the first error of the two I posted, the shared memory setting was significantly lower (24 MB, I believe). I'll try with 128 MB before I leave this evening, though (assuming the other tests I'm running complete by then).
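
For the record, what I have in mind is roughly the following (a session-level override only, so the global setting stays as it is; using pg_largeobject as the example here, since that is the table under discussion):

    SET maintenance_work_mem = '128MB';
    VACUUM FULL ANALYZE pg_largeobject;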

Simon Riggs wrote:
> On Tue, 2007-12-11 at 10:59 +0100, Michael Akinde wrote:
>> I am encountering problems when trying to run VACUUM FULL ANALYZE on a
>> particular table in my database; namely that the process crashes out
>> with the following problem:
> Probably just as well, since a VACUUM FULL on an 800GB table is going to
> take a rather long time, so you are saved from discovering just how
> excessively long it will run for. But it seems like a bug. This happens
> consistently, I take it?
I suspect so, though it has only happened a couple of times so far (it does take a while before it hits that 1.1 GB roof). But part of the reason for running the VACUUM FULL was of course to find out how long it would take. Reliability is always a priority for us, so I like to know what (useful) tools we have available and to stress the system as much as possible... :-)
> Can you run ANALYZE and then VACUUM VERBOSE, both on just
> pg_largeobject, please? It will be useful to know whether they succeed.
I ran just ANALYZE on the entire database yesterday, and that worked without any problems.

I am currently running a VACUUM VERBOSE on the database. It isn't done yet, but it is running with steady (low) resource usage.
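
Once that finishes, I'll do the per-table run on pg_largeobject as requested; just to spell it out, what I intend to execute is simply:

    ANALYZE pg_largeobject;
    VACUUM VERBOSE pg_largeobject;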

Regards,

Michael A.

