Re: memory leak while using vacuum - Mailing list pgsql-bugs

From Tom Lane
Subject Re: memory leak while using vacuum
Date
Msg-id 415.998613974@sss.pgh.pa.us
In response to memory leak while using vacuum  (Achim Krümmel <akruemmel@dohle.com>)
List pgsql-bugs

Achim Krümmel <akruemmel@dohle.com> writes:
> when using "vacuum analyze <tablename>" on very large tables (I have one
> with about 30GB) the memory usage increases continuously until no memory
> is left and the kernel kills the process.

I don't have 30Gb to spare, but I set up a table of the same schema with
a couple hundred meg of toy data and vacuumed it.  I didn't see any
significant memory usage (about 8 meg max).

If there is a lot of free space in your 30Gb table then it's possible
that the problem is simply vacuum's data structures that keep track
of free space.  What exactly are you using as the process memory limit,
and can you increase it?
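
For reference, a rough way to check and raise those limits before starting
the postmaster, assuming you're on Linux with bash and the limit is coming
from the shell's ulimit settings (flag names and the data directory path
may differ on your platform):

    # show current per-process limits; the interesting ones are
    # "data seg size" and "virtual memory" (reported in kbytes)
    ulimit -a

    # raise (or remove) the limits in the shell that starts the postmaster,
    # then restart so the backends inherit them
    ulimit -d unlimited
    ulimit -v unlimited
    pg_ctl start -D /path/to/data

If the limit is imposed some other way (an init script, for instance),
adjust it there instead.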

FWIW, the default vacuum method for 7.2 is designed to use a fixed
amount of memory no matter how large the table.  That won't help you
much today, however.
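
Once you are on 7.2 that memory bound should be adjustable per session; a
hypothetical sketch, assuming the knob ends up as the vacuum_mem setting
(in kilobytes) and using made-up database and table names:

    # hypothetical: raise vacuum_mem for this session only, then vacuum;
    # psql sends each statement separately, so VACUUM runs outside a
    # transaction block as it must
    psql -d mydb <<'EOF'
    SET vacuum_mem = 32768;
    VACUUM ANALYZE bigtable;
    EOF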
        regards, tom lane

