Re: VACUUM FULL out of memory - Mailing list pgsql-hackers

From: Andrew Sullivan
Subject: Re: VACUUM FULL out of memory
Date:
Msg-id: 20080107155753.GG18581@crankycanuck.ca
In response to: Re: VACUUM FULL out of memory (Michael Akinde <michael.akinde@met.no>)
List: pgsql-hackers
On Mon, Jan 07, 2008 at 10:40:23AM +0100, Michael Akinde wrote:
> As suggested, I tested a VACUUM FULL ANALYZE with 128MB shared_buffers 
> and 512 MB reserved for maintenance_work_mem (on a 32 bit machine with 4 
> GB RAM). That ought to leave more than enough space for other processes 
> in the system. Again, the system fails on the VACUUM with the following 
> error (identical to the error we had when maintenance_work_mem was very
> low).
> 
> INFO:  vacuuming "pg_catalog.pg_largeobject"
> ERROR:  out of memory
> DETAIL:  Failed on request of size 536870912

Something is using up the memory on the machine, or (I'll bet this is more
likely) the user running the postmaster (postgres?) has a ulimit restricting
how much memory it can allocate.
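
For what it's worth, the failed request of 536870912 bytes is exactly your
512 MB maintenance_work_mem (512 * 1024 * 1024 = 536870912), which suggests
it's the single big allocation VACUUM makes up front that's being refused.
A quick way to test the ulimit theory (just a sketch; adjust the user name
for your system) is to check the limits in the shell that actually starts
the postmaster:

    # Become the user that runs the postmaster (often "postgres"):
    su - postgres

    # Show all per-process limits; "max memory size" and "virtual
    # memory" are the ones that can make a 512 MB malloc fail:
    ulimit -a

    # Virtual memory limit in kB; you want "unlimited" here:
    ulimit -v

    # If it is capped, raise it in this shell and restart the
    # postmaster from it (or fix it in /etc/security/limits.conf):
    ulimit -v unlimited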

> It strikes me as somewhat worrying that VACUUM FULL ANALYZE has so much 
> trouble with a large table. Granted - 730 million rows is a good deal - 

No, it's not really that big, and I've never seen a problem like this.  If
it were the 8.3 beta I'd be worried; but given your setup, I'm inclined to
suggest you look at the OS settings first.

Note that you should almost never use VACUUM FULL unless you've really
messed things up.  I understand from the thread that you're just testing
things out right now.  But VACUUM FULL is not something you should _ever_
need in production if you've set things up correctly.
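
If it helps, the sort of routine maintenance that makes VACUUM FULL
unnecessary is just plain VACUUM (ANALYZE), run regularly or handled by
autovacuum.  A sketch, with the database name made up:

    # Plain VACUUM reclaims dead space for reuse without the exclusive
    # lock and tuple moving that VACUUM FULL does:
    psql -d yourdb -c "VACUUM ANALYZE;"

    # Or just the table that was giving you trouble:
    psql -d yourdb -c "VACUUM ANALYZE pg_catalog.pg_largeobject;"

    # Alternatively, turn on autovacuum in postgresql.conf and let it
    # handle this for you:
    #   autovacuum = on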

A



