Soeren Gerlach wrote:
>
> Dumping needs to be done at least once a day (for copying as well as
> archiving), while restoring to two other databases will be done twice a
> day. The dump is currently only 80 MB, which I consider very small (the
> database is expected to grow by 20-30 MB a day later on). Currently the
> database has just 6 tables, with 2 tables being responsible for 95% of
> the database size.
>
> So... the dump in the above format takes some 14 minutes, the restore 10
> minutes. This seems very slow, as it works out to something like
> 100 KB/sec for dumping and restoring. The drive is capable of 40 MB/second,
> so that's not the bottleneck ;-) Anyhow, postmaster and pg_dump seem to
> max out the CPU, running at nearly 100% while dumping and restoring. But
> I dread the day the database grows beyond 1 GB, as it would then take
> hours to complete, which is not acceptable to me.
This is never going to work for you. Increasing sort_mem during the
restore will probably help (see the example below), but if it takes too
long now then it certainly won't cope when you have 10 times as much data.
The CPU is probably maxed out while the restore rebuilds your indexes etc.
You'll find the disk space occupied by the live system will be much more
than your current 80 MB dump.
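
If you do try raising sort_mem for the restore only, one way (assuming a
plain-SQL dump fed through psql; names and the 64 MB figure below are just
placeholders) is to pass it via PGOPTIONS so it only affects that session:

    # sort_mem is in kB, so 65536 = 64 MB - adjust to taste
    PGOPTIONS="-c sort_mem=65536" psql -f backup.sql targetdb

    # or, if you use a custom-format dump with pg_restore:
    PGOPTIONS="-c sort_mem=65536" pg_restore -d targetdb backup.dump

    # alternatively, just add "SET sort_mem = 65536;" at the top of the dump
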
I think you want to look at some form of replication. There are a number
of options, the newest being Slony (http://www.slony.org/).
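
To give a feel for it, a very rough slonik sketch for a two-node setup
might look something like the below. Cluster, table and conninfo values are
placeholders, only one table is shown, and the full procedure (loading the
schema on the slave, running the slon daemons, subscribing once they are
up) is in the Slony documentation:

    slonik <<_EOF_
    cluster name = mycluster;
    node 1 admin conninfo = 'dbname=mydb host=master user=slony';
    node 2 admin conninfo = 'dbname=mydb host=slave user=slony';
    init cluster (id = 1, comment = 'Master');
    create set (id = 1, origin = 1, comment = 'All tables');
    set add table (set id = 1, origin = 1, id = 1,
                   fully qualified name = 'public.big_table_1');
    store node (id = 2, comment = 'Slave');
    store path (server = 1, client = 2,
                conninfo = 'dbname=mydb host=master user=slony');
    store path (server = 2, client = 1,
                conninfo = 'dbname=mydb host=slave user=slony');
    subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
    _EOF_
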
--
Richard Huxton
Archonet Ltd