Hi,
I'm trying to restore a table with about 80 million records. The database was
dumped and restored using the following procedure:
1) dump the db, data only
time /usr/local/pgsql/bin/pg_dump -abf ./pgsql.7.4.rc1.pgdump.Z --format=c
--compress=6 -U postgres testdb
2) create db schema from a separate file, including table structures,
constraints, indexes
3) edit the restore order to satisfy the constraints (see the sketch after
this list)
4) restore the db
time /usr/local/pgsql/bin/pg_restore -d testdb -U postgres -a
./pgsql.7.4.rc1.pgdump.Z -L ./restoreOrder.txt --verbose
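(For reference, the restore list passed to -L above was produced roughly like
this and then reordered by hand; the archive filename matches the commands
above:)

  /usr/local/pgsql/bin/pg_restore -l ./pgsql.7.4.rc1.pgdump.Z > ./restoreOrder.txt
  # reorder entries so referenced tables load first;
  # a leading ';' comments an entry out of the restore
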
pg_restore has been running for 14 hours now, with no end in sight. Meanwhile,
postmaster is slowly eating away at memory; it now holds 46% of all available
memory, with about 900MB in swap. Disk activity is constant.
In retrospect, I guess the table with 80M records could have been created
without indexes (it has three: the primary-key and alternate-key constraints
plus a foreign-key index) to speed up the pg_restore ... but then I'm not sure
whether creating the indexes afterwards would have been much faster. Am I
doing anything wrong?
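
(If I were to retry it that way, the idea would be something like the sketch
below; table, column, and index names are made up, not my actual schema:)

  -- before loading the 80M rows: drop the constraints and the index
  ALTER TABLE bigtable DROP CONSTRAINT bigtable_pkey;
  ALTER TABLE bigtable DROP CONSTRAINT bigtable_ak_key;
  DROP INDEX bigtable_fk_idx;
  -- ... run the pg_restore data load here ...
  -- afterwards: rebuild each one in a single pass over the table
  ALTER TABLE bigtable ADD PRIMARY KEY (id);
  ALTER TABLE bigtable ADD UNIQUE (alt_id);
  CREATE INDEX bigtable_fk_idx ON bigtable (parent_id);
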
Thanks