On May 21, 2004, at 3:36 PM, Dan Langille wrote:
> On 21 May 2004 at 11:10, Jonathan Gardner wrote:
>
>> But local backups -- that's just weird. I've seen backups being made
>> locally, but then moved off the server on to some other data storage
>> device (hard disk, tape drive, CD ROM) on another server.
>
> Yes, that is what I'm talking about.
>
Someone could easily write a script that is fed input from
pg_dump and is smart enough to "chunk" things out (i.e., hit 30 GB,
write a tape, rinse, repeat).
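A minimal sketch of that chunking step, using split(1) to break the dump stream into tape-sized pieces (the database name "mydb", the chunk size, and the /backup prefix are all assumptions for illustration):

```shell
#!/bin/sh
# Sketch: chunk a pg_dump stream into fixed-size pieces, one per tape.
# chunk_dump reads the dump on stdin; its two arguments (chunk size and
# output prefix) default to hypothetical values.
chunk_dump() {
    CHUNK=${1:-30720m}              # ~30 GB per chunk, one tape's worth
    PREFIX=${2:-/backup/mydb.dump.part.}
    split -b "$CHUNK" - "$PREFIX"   # writes PREFIX.aa, PREFIX.ab, ...
}

# Typical use (hypothetical database name "mydb"):
#   pg_dump mydb | chunk_dump
# Restore later by concatenating the pieces back into psql:
#   cat /backup/mydb.dump.part.* | psql mydb
```

A real tape script would wrap the split step in a loop that writes each piece to the drive and prompts for the next tape, but the pipeline shape is the same.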
One concern, though, is fast recovery: reloading a multi-GB database
from a pg_dump is painful, especially if you have foreign keys. Lots of
sort_mem helps.
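One way to apply that tip for just the restore session is PGOPTIONS, which passes per-session settings to the backend (the value here, 256 MB expressed in KB, and the dump filename are assumptions; tune for your RAM):

```shell
# Sketch: give only the restore session a large sort_mem, so index
# builds and FK checks sort in memory instead of spilling to disk.
# sort_mem is in KB; 262144 = 256 MB is an illustrative value.
PGOPTIONS="-c sort_mem=262144" psql mydb < mydb.dump.sql
```

This avoids raising sort_mem globally in postgresql.conf, where a high value multiplied across many concurrent sorts could exhaust memory.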
My plan for our informix->pg migration is to take advantage of the
LVM's snapshot feature: make a snapshot, then back up the raw data. That
way the time to recovery is simply how long it takes to load the backed-
up data onto the server. No waiting for indexes and FKs to rebuild. It
will use more space on the backup media, but that is the price you pay.
To PG it looks like a power failure or some other crash.
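A rough sketch of that snapshot-and-dump sequence. The volume group "vg0", logical volume "pgdata", snapshot size, mount point, and tape device are all assumptions; the snapshot needs enough copy-on-write space to absorb writes made while the backup runs:

```shell
#!/bin/sh
# Sketch: back up raw PostgreSQL data files via an LVM snapshot.
# Restoring these files looks to PG like recovery from a power failure.
lvcreate --snapshot --size 2G --name pgsnap /dev/vg0/pgdata
mkdir -p /mnt/pgsnap
mount -o ro /dev/vg0/pgsnap /mnt/pgsnap

# Copy the frozen cluster files to tape (device name is an assumption).
tar -cf /dev/nst0 -C /mnt/pgsnap .

umount /mnt/pgsnap
lvremove -f /dev/vg0/pgsnap   # drop the snapshot once the copy is done
```

Recovery is then a straight file copy back into the data directory, after which PG replays WAL on startup exactly as it would after a crash.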
--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/