Re: Data files became huge with no apparent reason - Mailing list pgsql-general

From: Jochem van Dieten
Subject: Re: Data files became huge with no apparent reason
Msg-id: 3D6CF6A6.3090308@oli.tudelft.nl
In response to: Data files became huge with no apparent reason (Dario Fumagalli <dfumagalli@tin.it>)
List: pgsql-general
Dario Fumagalli wrote:
>
> Now, the questions are:
> - How is this exceptional growth possible (N.B. the database server
> has all the default values set in its configuration files, i.e. it
> was not "optimized")?

Too many updates without vacuuming. Under PostgreSQL's MVCC, every
UPDATE writes a new row version and leaves the old one behind as a dead
tuple; only VACUUM reclaims that space.
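For routine maintenance, something like the crontab entry below helps;
the database name mydb is just a placeholder:

    # /etc/crontab -- nightly VACUUM ANALYZE of the (hypothetical) database mydb
    # min hour dom mon dow  user      command
    0     3    *   *   *    postgres  vacuumdb --analyze mydb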


> - More urgently: why does it now take 30 seconds to perform a select
> count(*) on 900 records (no other clients connected), while in the
> past it was almost instantaneous?

Because count(*) requires a full sequential scan, and the table is now
very large on disk; the scan still has to read past all the dead tuples
even though only the 900 live rows are counted.
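You can see how big each table really is by querying pg_class; relpages
counts 8 kB blocks and is only refreshed by VACUUM or ANALYZE, so treat
the numbers as estimates:

    -- On-disk size per ordinary table, in 8 kB pages (largest first).
    -- relpages is only as fresh as the last VACUUM/ANALYZE.
    SELECT relname, relpages
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY relpages DESC;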


> - Why does the backend crash immediately if I try to VACUUM (ANALYZE)
> it? I think the backend claims additional disk space for this
> operation and fills the disk again, but I'm not sure.
> - And last, but not least... is it possible to restore the situation
> without losing data (the backup is 3 weeks old)? I'm able to start the
> daemon and perform SQL operations, but I don't know how to make
> PostgreSQL release the disk space after I dumped the database in order
> to reload it. And I fear an initdb will destroy db users and their
> privileges.

What I would do is:
1. Back up.
2. Drop any index that is not unique.
3. Run "VACUUM VERBOSE tablename" on the smallest table, then the next
   smallest, and so on. If you are using PostgreSQL 7.2.x, use "VACUUM
   FULL VERBOSE tablename" instead (see the sketch after this list).
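A sketch of steps 2 and 3 in psql, assuming a hypothetical table orders
with a non-unique index orders_customer_idx on a column customer_id
(recreate the index once the vacuum succeeds):

    -- Step 2: drop the non-unique index so VACUUM has less to maintain.
    DROP INDEX orders_customer_idx;

    -- Step 3: reclaim dead tuples. On 7.2.x, plain VACUUM only marks the
    -- space as reusable; VACUUM FULL actually shrinks the data files.
    VACUUM FULL VERBOSE orders;

    -- Afterwards, recreate the dropped index.
    CREATE INDEX orders_customer_idx ON orders (customer_id);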

In the end you will either have vacuumed them all and can recreate your
indexes, or you will have an error message to post here.

Jochem

