Re: Cleaning up large objects - Mailing list pgsql-jdbc

From: Tom Lane
Subject: Re: Cleaning up large objects
Date:
Msg-id: 29794.1092160326@sss.pgh.pa.us
In response to: Cleaning up large objects ("Ole Streicher" <ole-usenet-spam@gmx.net>)
List: pgsql-jdbc

"Ole Streicher" <ole-usenet-spam@gmx.net> writes:
> What is wrong with my approach that it does not free the disk space?

Plain vacuum is not designed to "free" disk space, at least not in the
sense of returning it to the operating system.  What it's designed to do
is record free disk space within table files in the "free space map"
(FSM) so that it can be re-used for future row creation.
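As a quick illustration (pg_largeobject, used here only as an example, is the
table where lo_unlink leaves dead rows; substitute whatever table is bloating
for you):

    VACUUM VERBOSE pg_largeobject;

The VERBOSE output tells you how many dead row versions were removed and how
many pages still hold reusable free space.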

To aggressively compact table files and return space to the OS, you need
vacuum full, which is a great deal slower and requires exclusive table
locks.
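
For instance, again using pg_largeobject purely as an illustration:

    VACUUM FULL VERBOSE pg_largeobject;

Plan for the table to be exclusively locked for the whole run, so do this
during a quiet period.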

If you see space leakage in a database that should have a reasonably
constant total size, the reason is usually that you don't have the FSM
parameters set large enough to remember all the free space.  Check your
settings in postgresql.conf and increase if needed.  (Note that an
increase requires a postmaster restart to take effect.)  You might also
want to think about vacuuming more often than once a day, so that space
can be recycled into the FSM sooner.  When you have a periodic bulk
delete process, it's not a bad idea to vacuum the table that had the
deletes immediately after each deletion run.
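
The FSM knobs in question are max_fsm_pages and max_fsm_relations.  A sketch
of the postgresql.conf entries (the numbers are placeholders, not
recommendations):

    # postgresql.conf -- free space map sizing (change needs postmaster restart)
    max_fsm_pages = 200000       # total disk pages whose free space is tracked
    max_fsm_relations = 1000     # number of tables and indexes tracked

As a rough rule, max_fsm_pages needs to be at least as large as the number of
pages you expect to free between one vacuum run and the next.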

Depending on how far behind the eight-ball you are, you may need a pass
of vacuum full to get the DB back down to a reasonable size.

BTW, "cluster" can serve as a substitute for vacuum full, since it also
compacts out dead space.  It can be significantly faster than vacuum
full when there's a lot of space to be reclaimed.
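
A minimal sketch, using a hypothetical table mytable with an index
mytable_pkey (cluster needs an index to order the rewritten table by):

    CLUSTER mytable_pkey ON mytable;

Like vacuum full it takes an exclusive lock, but it writes a fresh, compacted
copy of the table rather than shuffling rows around inside the old file,
which is where the speed advantage comes from.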

            regards, tom lane
