Re: Largest DATABASE - Mailing list pgsql-benchmarks

From: Rod Taylor
Subject: Re: Largest DATABASE
Date:
Msg-id: 1095194766.98565.102.camel@jester
In response to: Largest DATABASE (Jamil <jamil.figueira@ibi.com.br>)
List: pgsql-benchmarks
>     I would like to know which was the largest database you have ever
> had to administer. My database is about 160GB and I've got some
> problems backing it up using the pg_dump command, and I also have some
> problems running vacuum.

Tell me about it. We started doing more fine-grained scheduling of
vacuum a little while back, when we passed the 120GB mark.
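
For example, a rough cron-based schedule using vacuumdb might look like
this (the table and database names here are placeholders, not our
actual setup):

    # vacuum the busiest table hourly
    15 * * * *  vacuumdb --analyze -t orders mydb
    # vacuum the whole database nightly
    30 3 * * *  vacuumdb --analyze mydb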

You can try out the vacuum daemon, but it didn't really help me (small
tables were ignored too much, big tables were vacuumed too often).

Check the VACUUM VERBOSE output to see whether you really need to
vacuum all of the structures at the current rate.
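
For example (the table name is just a placeholder):

    -- run from psql; watch the "removed N row versions" and page counts
    VACUUM VERBOSE orders;

Tables that report very few removed rows every run are candidates for a
slower schedule.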


We're still doing the dump, but running pg_dump on a different machine.
Most of the CPU time it eats up is in formatting the data for the dump.
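
Something like this, run from the backup box (host and database names
are placeholders):

    # custom-format dump pulled over the network, written on the backup box
    pg_dump -h dbhost -U postgres -Fc mydb > /backup/mydb.dump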


Another option (which can take some effort) is to take the database
offline, fsync, take a filesystem snapshot, restart the database, tar
up the snapshot, and then remove the snapshot.
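
A rough sketch of that sequence, assuming the data directory sits on an
LVM volume (all paths and volume names below are made up):

    pg_ctl stop -D $PGDATA -m fast    # take the database offline
    sync                              # flush dirty buffers to disk
    lvcreate --snapshot --size 5G --name pgsnap /dev/vg0/pgdata
    pg_ctl start -D $PGDATA           # database back up within seconds
    mount -o ro /dev/vg0/pgsnap /mnt/pgsnap
    tar czf /backup/pgdata.tar.gz -C /mnt/pgsnap .
    umount /mnt/pgsnap
    lvremove -f /dev/vg0/pgsnap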

Database downtime can be short enough that, if clients reattempt a
failed connection for a little while (a couple of minutes), they'll
simply see a hiccup rather than a failure.


Looking forward to PITR making backups much friendlier.


