On Wed, 2010-07-14 at 12:33 -0400, Burgholzer, Robert (DEQ) wrote:
> I am restoring a fairly sizable database from a pg_dump file (COPY FROM
> STDIN style of data) -- the pg_dump file is ~40G.
>
> My system has 4 cores, and 12G of RAM. I drop, then recreate the
> database, and I do this restore via: cat dumpfile | psql db_name. The
> trouble is that my system free memory (according to top) goes to about
> 60M, which causes all operations on the server to grind to a halt, and
> this 40G restore will take a couple hours to complete.
>
> I noted that the restore file doesn't do anything inappropriate, such
> as creating indices BEFORE adding the data - thus I can only suspect
> that my trouble has to do with performance-tuning ineptitude in
> postgresql.conf.
The best you will get is ~22G an hour. If this is a backup you can take
again, take it in a different format: dump with -Fc (custom format) and
then use pg_restore's parallel restore. Even if half of the database is
one table, you will still knock the restore time down by 50% or so.
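For example (a minimal sketch; the dump file name and the job count are
just illustrations - match -j to your core count, so 4 here, and note
that pg_restore -j requires PostgreSQL 8.4 or later):

    pg_dump -Fc db_name > db_name.dump
    createdb db_name
    pg_restore -j 4 -d db_name db_name.dump

Unlike a plain SQL dump fed through psql, the custom-format archive lets
pg_restore load tables and build indexes in parallel jobs.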
Joshua D. Drake
--
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering