VACUUM ANALYZE out of memory - Mailing list pgsql-hackers

From Michael Akinde
Subject VACUUM ANALYZE out of memory
Date
Msg-id 475E5F9F.6050205@met.no
Responses Re: VACUUM ANALYZE out of memory  (Stefan Kaltenbrunner <stefan@kaltenbrunner.cc>)
Re: VACUUM ANALYZE out of memory  (Simon Riggs <simon@2ndquadrant.com>)
List pgsql-hackers
Hi,

I am encountering problems when trying to run VACUUM FULL ANALYZE on a
particular table in my database; namely, the process fails with the
following error:

INFO:  vacuuming "pg_catalog.pg_largeobject"
ERROR:  out of memory
DETAIL:  Failed on request of size 536870912.

INFO:  vacuuming "pg_catalog.pg_largeobject"
ERROR:  out of memory
DETAIL:  Failed on request of size 32.
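For reference, both failures came from essentially the following command
(exact options reconstructed from memory; VERBOSE to get the INFO output):

  VACUUM FULL ANALYZE VERBOSE pg_largeobject;
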

Granted, our largeobject table is a bit large:

INFO:  analyzing "pg_catalog.pg_largeobject"
INFO:  "pg_largeobject": scanned 3000 of 116049431 pages, containing
18883 live rows and 409 dead rows; 3000 rows in sample, 730453802
estimated total rows

...but I trust that VACUUM ANALYZE doesn't try to read the entire table
into memory at once. :-) The machine was set up with 1.2 GB shared
memory and 1 GB maintenance memory, so I would have expected this to be
sufficient for the task (we will eventually set this up on a 64-bit
machine with 16 GB memory, but at the moment we are restricted to 32-bit).
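
In postgresql.conf terms, that corresponds roughly to the following
(parameter names assumed from the description above; values from memory):

  shared_buffers       = 1200MB   # the "1.2 GB shared memory"
  maintenance_work_mem = 1GB      # used by VACUUM; the failed 512 MB request is well below this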

This is currently running on PostgreSQL 8.3beta2, but since I haven't
seen this problem reported before, I assume it would also be a problem
in earlier versions. Have we run into a bug/limitation of PostgreSQL's
VACUUM, or is this something we might be able to solve by reconfiguring
the server/database or downgrading the DBMS version?

I shall be trying to run a simple VACUUM later this evening, in order to
see whether that manages to complete. Unfortunately, due to the time it
takes to load the data, it's not really practicable to shift servers at
the moment.
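
That test will be something along these lines (again with VERBOSE so I
can see how far it gets):

  VACUUM VERBOSE pg_largeobject;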

A little background on the application:
We are building a raster database to be used for storing weather and
water data. The raster data (2D matrices of floating-point values) are
stored as large objects and indexed via a values table (with multiple
dimensions: time, parameter, altitude, etc.). This is a technique I've
worked with successfully in the past, though in that case using an
Informix DBMS. My current employer is a strong proponent of open source
software, which has led to our implementation of the current system on
PostgreSQL (we will also be releasing our system under the GPL in the
near future).
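
To give an idea of the structure, a simplified sketch of the indexing
scheme (table and column names here are illustrative, not our actual schema):

  CREATE TABLE raster_values (
      value_oid   oid         NOT NULL,  -- large object holding the 2D float matrix
      valid_time  timestamptz NOT NULL,  -- time dimension
      parameter   text        NOT NULL,  -- physical parameter (temperature, precipitation, ...)
      altitude    real                   -- vertical level, where applicable
  );
  CREATE INDEX raster_values_dim_idx
      ON raster_values (parameter, valid_time, altitude);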

The test instance we are working on now is about 1 TB; we expect to
increase that by a factor of at least 5 within the first year of
operation, so we'd really like to ensure that we can get VACUUM working
(although the data is mostly going to be static on this installation, we
will have others that won't be).

Can anyone with insight into VACUUM FULL ANALYZE weigh in on what is
going wrong?

Regards,

Michael Akinde
----
Database Architect,
met.no


