Thread: Largest DATABASE

Largest DATABASE

From
Jamil
Date:
Hello everyone,

    I would like to know which was the largest database you ever had to
administer. My database is about 160GB, and I'm having problems backing it
up with the pg_dump command as well as problems running vacuum.
    Do you know any other way to do these operations?

    My hardware is an IBM F50 with 4 processors, 2GB of RAM, and 163GB of
hard disks, plus 16GB of disks for the AIX operating system. My backup unit
is a 20/40 DAT, and if I do a pg_dump and then compress it, I can fit the
whole database on a single tape.
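
    Roughly, the backup I run today looks something like the following
(the tape device path and database name are just examples, not my real
setup):

    # plain dump, compressed on the fly, written straight to the DAT drive
    pg_dump mydb | gzip > /dev/rmt0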


Best Regards,

Jamil Marques Figueira Junior

Re: Largest DATABASE

From
Rod Taylor
Date:
>     I would like to know which was the largest database you ever had to
> administer. My database is about 160GB, and I'm having problems backing it
> up with the pg_dump command as well as problems running vacuum.

Tell me about it. We started doing more fine-grained scheduling of
vacuum a little while back, when we passed the 120GB mark.

You can try out the vacuum daemon, but it really didn't help me (small
tables were vacuumed too rarely, big tables too often).

Check VACUUM VERBOSE output to see if you need to vacuum all of the
structures at the current rate.
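
Something along these lines is usually enough to see how much dead space a
given table is carrying (the table and database names are only placeholders):

    # per-table page and tuple counts, printed as the vacuum runs
    psql -d mydb -c 'VACUUM VERBOSE mytable'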


We're still doing the dump, but running pg_dump on a different machine.
Most of the CPU time it eats up is in formatting the data for the dump.
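
As a rough sketch, that just means pointing pg_dump at the server from a
second box (the host, user, and database names here are made up):

    # run the dump from another machine so the output formatting
    # doesn't steal CPU from the backends
    pg_dump -h dbserver -U postgres mydb | gzip > mydb.dump.gz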


Another option (which can take some effort) is to take the database
offline, fsync, take a filesystem snapshot, restart the database, tar up
the snapshot, and then remove the snapshot.
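
As a sketch only, and assuming your volume manager can do snapshots (the
LVM commands, volume names, and paths below are made-up examples, not what
we actually run):

    # 1. stop the postmaster cleanly and flush the filesystem
    pg_ctl stop -D /var/lib/pgsql/data
    sync

    # 2. snapshot the volume that holds the data directory
    lvcreate --snapshot --size 5G --name pgsnap /dev/datavg/pgdata

    # 3. bring the database back up; downtime ends here
    pg_ctl start -D /var/lib/pgsql/data

    # 4. archive the snapshot at leisure, then drop it
    mount -o ro /dev/datavg/pgsnap /mnt/pgsnap
    tar czf /backup/pgdata-snapshot.tar.gz -C /mnt/pgsnap .
    umount /mnt/pgsnap
    lvremove -f /dev/datavg/pgsnap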

Database downtime can be short enough that if clients reattempt a failed
connection for a little while (a couple of minutes), they'll simply see a
hiccup rather than a failure.


Looking forward to PITR making backups much friendlier.