Re: Large Database Restore - Mailing list pgsql-general

From: Jim C. Nasby
Subject: Re: Large Database Restore
Date: 2007-05-17
Msg-id: 20070517160417.GH14548@nasby.net
In response to: Large Database Restore (Lee Keel <lee.keel@uai.com>)
Responses: Re: Large Database Restore
List: pgsql-general
On Thu, May 17, 2007 at 08:19:08AM -0500, Lee Keel wrote:
> I am restoring a 51GB backup file that has been running for almost 26 hours.
> There have been no errors and things are still working.  I have turned fsync
> off, but that still did not speed things up.  Can anyone provide me with the
> optimal settings for restoring a large database?

Well, fsync=off is the biggest one. The others to look at:

maintenance_work_mem = 1GB (I generally have problems setting it over
1GB, and I'm not sure it would add much benefit)
checkpoint_timeout = 1 hour
checkpoint_segments = huge (you want to reduce the frequency of
checkpoints... probably to no more than one every 20 minutes; see the
sample settings below)
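
Just to illustrate, the relevant chunk of postgresql.conf for a
restore-only configuration might look something like this (the numbers
are rough guesses to tune against your hardware, not recommendations):

    # restore-only settings; revert these once the restore finishes
    fsync = off                    # unsafe for normal operation, fine for a rebuildable restore
    maintenance_work_mem = 1GB     # speeds up CREATE INDEX and foreign key validation
    checkpoint_timeout = 1h        # time-based checkpoints at most once an hour
    checkpoint_segments = 128      # enough WAL segments that checkpoints aren't forced early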

Finally, if you've got a multi-CPU machine, you might want to build all
the indexes and table constraints as a separate step, and run them
through perl or something so that you'll utilize more than just one CPU.
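
As a rough sketch of what I mean (untested, and the file name, database
name and worker count are all made up; shown in Python rather than perl,
but the idea is the same), something like this would fan the CREATE INDEX
and ALTER TABLE statements out over several sessions at once:

    # parallel_ddl.py -- rough sketch: run one DDL statement per line from a
    # file across several concurrent psql sessions.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    DBNAME = "mydb"        # adjust to your database
    WORKERS = 4            # roughly one per CPU core

    def run_statement(sql):
        # each statement gets its own psql session, so index builds overlap
        subprocess.run(["psql", "-d", DBNAME, "-c", sql], check=True)

    with open("index_and_constraint_ddl.sql") as f:
        statements = [line.strip() for line in f if line.strip()]

    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        list(pool.map(run_statement, statements))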

Of course the downside to all of this is that it would mean starting
over from scratch.

Ultimately though, once your database gets past a certain size, you
really want to be using PITR and not pg_dump as your main recovery
strategy.
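
The WAL-archiving half of PITR is only a couple of pieces: an
archive_command plus a periodic base backup. Roughly (the archive path
here is made up, and the details vary a bit by version):

    # postgresql.conf: copy each completed WAL segment to an archive area
    archive_command = 'cp %p /mnt/backup/wal/%f'

    -- take a filesystem-level base backup while the server is running:
    SELECT pg_start_backup('weekly');
    -- ... copy the data directory to backup storage ...
    SELECT pg_stop_backup();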
--
Jim Nasby                                      decibel@decibel.org
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)
