Hi,
some weeks ago I started to develop an application using PostgreSQL for the
first time. I'm running 7.4.3 on a Linux box with a plain 2.6.7 kernel; the
storage is five SATA disks behind a 3ware controller, with an XFS filesystem
on top. The DB server is a 3 GHz P4 with 4 GB of RAM, so the machine is
quite fast for most of what I need. It is a dedicated DB server; there is no
application running on it.
I dump and restore it like this:
pg_dump -v ziptrader -o -F t > foo
pg_restore -v -c -F t -d ziptrader_expost foo
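For reference, the timings below can be reproduced by simply wrapping those
two commands with time, roughly like this (just a sketch, same database
names as above):

# time the tar-format dump and the restore separately
time pg_dump -v ziptrader -o -F t > foo
time pg_restore -v -c -F t -d ziptrader_expost foo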
Dumping needs to be done at least once a day (both for copying and for
archiving), while restoring into two other databases will be done twice a
day. The dump is currently only 80 MB, which I consider very small (later on
the database is expected to grow by 20-30 MB a day). At the moment the
database has just 6 tables, with 2 of them accounting for 95% of the
database size.
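In cron terms the schedule looks roughly like this (a sketch only; the
backup path and exact times are placeholders, and only one of the two
restore targets is shown):

# dump once a day at 01:00
0 1 * * *    pg_dump -v ziptrader -o -F t > /backup/ziptrader.dump
# restore into the other database(s) at 06:00 and 18:00
0 6,18 * * * pg_restore -v -c -F t -d ziptrader_expost /backup/ziptrader.dump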
So... the dump in the above format takes some 14 minutes, the restore 10
minutes. This seems very slow, as it works out to roughly 100 KB/sec for
both dumping and restoring (80 MB in ~14 minutes). The drives are capable of
40 MB/sec, so disk throughput is not the bottleneck ;-) However, postmaster
and pg_dump seem to max out the CPU, running at nearly 100% while dumping
and restoring. I dread the day the database grows beyond 1 GB, as dump and
restore would then take hours, which is not acceptable to me.
My main memory settings are:
shared_buffers = 30000
sort_mem = 1000
vacuum_mem = 16384
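Those are the values from postgresql.conf; what the server actually runs
with can be checked like this (sketch):

# ask the running backend for its effective settings
psql -d ziptrader -c 'SHOW shared_buffers;'
psql -d ziptrader -c 'SHOW sort_mem;'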
What is the best place to start looking? I'm sure I'm doing something badly
wrong, but neither the docs nor older mails from this list have given me a
hint so far.
Thanks a lot,
Soeren Gerlach