Hi all,
I'm a bit unhappy with the time it takes to back up my PG 7.4.6
database. There are 13 GB under the pg/data dir, and the backup takes
30 minutes.
Using top and iostat I've figured out that the backup job is CPU-bound
in the postmaster process: it eats up 95% CPU while the disk is at 10%
load. In fact, I'm able to compress the backup file (using gzip) faster
(35% CPU load) than the backend can deliver it.
The operating requirement is 24/7, so I can't just take the database
offline and do a file copy. (That way I can do a backup in 5-6 minutes,
BTW.)
Would it speed up the process if I did a binary backup instead?
Are there any other fun tricks to speed things up?
I run on a four-way Linux box, and it's not in production yet, so there
is no CPU shortage.
The backup script is:
#!/bin/sh
if test $# -lt 2; then
    echo "Usage: dbbackup <basename> <filename>"
else
    /home/postgres/postgresql/bin/pg_dump -h <hostname> $1 \
        | gzip -f - \
        | split --bytes 500m - $2.
fi
And the restore script:
#!/bin/sh
if test $# -lt 2; then
    echo "Usage: dbrestore <basename> <filename>"
else
    cat $2.* | gzip -d -f - \
        | /home/postgres/postgresql/bin/psql -h <hostname> -f - $1
fi
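In case anyone wants to rule out the pipeline itself, here's a quick
sanity check of the gzip/split round-trip the two scripts rely on. It
runs on a scratch file instead of a real dump, with 1 MB chunks instead
of 500 MB so split actually produces several pieces; the file names are
just placeholders:

```shell
#!/bin/sh
# Sanity-check the "gzip | split" backup path and the "cat | gunzip"
# restore path on a scratch file, without touching the database.
set -e
tmp=$(mktemp -d)

orig="$tmp/dump.orig"
restored="$tmp/dump.restored"

# ~3 MB of incompressible test data standing in for pg_dump output.
head -c 3000000 /dev/urandom > "$orig"

# "Backup": compress and split into 1 MB chunks, as in dbbackup.
gzip -c "$orig" | split --bytes 1m - "$tmp/backup."

# "Restore": reassemble the chunks in glob order and decompress,
# as in dbrestore.
cat "$tmp"/backup.* | gzip -d > "$restored"

# Count the chunks and compare the round-tripped file to the original.
pieces=$(ls "$tmp"/backup.* | wc -l)
if cmp -s "$orig" "$restored"; then status=ok; else status=broken; fi
echo "$pieces pieces, round-trip $status"
```

The only subtlety is that `cat $2.*` depends on the shell expanding the
glob in the order split wrote the pieces, which holds for split's
default alphabetical suffixes.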
Cheers,
John