Hi,
We have a nightly backup going on our db server. We use pg_dumpall,
which, when it's done, generates a text dump of around 9 GB.
Most nights, the backup runs at around a load of 2.5 -- with a normal
load of around 2 -- (this on a dual 2.4GHz Xeon machine with 6GB RAM).
Our schema has a huge table (about 5 million tuples) which gets queried
about 30 times per second. These queries fetch one record at a time,
spread pretty evenly throughout this large table, so I would imagine
this table dominates the shared buffer cache (currently set at 320MB).
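For reference, the relevant line in our postgresql.conf looks roughly
like this (the exact number is an approximation reconstructed from the
320MB figure; shared_buffers is counted in 8kB pages, so 320MB is about
40960 pages):

> # shared buffer cache, in 8kB pages: 40960 * 8kB = 320MB
> shared_buffers = 40960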
As you can imagine, at times the backup process (or in fact any large
query that dominates the cache) tends to spike the load pretty
severely. At some point, we experimented with more shared memory, but
that actually decreased overall performance, as was discussed here
earlier.
What can we do to alleviate this problem? It's going to be difficult to
avoid querying the large table at any given time (24/7 service and all).
Are there any strategies that we can take with pg_dump/pg_dumpall? My
dump command is:
> pg_dumpall -c > /tmp/backupfile.sql
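One variant we've been considering (untested -- the nice priority,
gzip pipe, and destination path are all assumptions on our part, not
something we run today) is to lower the dump's CPU priority and
compress on the fly, so the 9 GB plain-text file never lands
uncompressed in /tmp:

> # run the dump at lowest CPU priority and gzip the stream directly,
> # trading some CPU in gzip for far less disk I/O on the output side
> nice -n 19 pg_dumpall -c | gzip > /var/backups/backupfile.sql.gz

Whether that actually reduces the load spike, or just shifts it around,
is exactly the kind of thing we'd like advice on.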
Help!!!
--
Ericson Smith <eric@did-it.com>