Re: pg_dump additional options for performance - Mailing list pgsql-hackers

From: Magnus Hagander
Subject: Re: pg_dump additional options for performance
Date: 2008-02-26
Msg-id: 20080226113138.GM528@svr2.hagander.net
In response to: Re: pg_dump additional options for performance (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: pg_dump additional options for performance (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
On Tue, Feb 26, 2008 at 12:39:29AM -0500, Tom Lane wrote:
> Simon Riggs <simon@2ndquadrant.com> writes:
> > ... So it would be good if we could dump objects in 3 groups
> > 1. all commands required to re-create table
> > 2. data
> > 3. all commands required to complete table after data load
> 
> [ much subsequent discussion snipped ]
> 
> BTW, what exactly was the use-case for this?  The recent discussions
> about parallelizing pg_restore make it clear that the all-in-one
> dump file format still has lots to recommend it.  So I'm just wondering
> what the actual advantage of splitting the dump into multiple files
> will be.  It clearly makes life more complicated; what are we buying?

One use-case would be when you have to make some small change to the schema
while reloading it, one that's still compatible with the data format. You'd
dump the schema-no-indexes-and-stuff part, *edit* that file, and then reload
everything. It's a lot easier to edit the file if it's not hundreds of
gigabytes.
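
A sketch of that workflow, assuming options along the lines proposed in this
thread (the --section=pre-data/data/post-data switches that later versions of
pg_dump provide match Simon's groups 1-3; "mydb" and "newdb" are placeholder
database names):

    # group 1: commands required to re-create the tables
    pg_dump --section=pre-data mydb > pre-data.sql
    # group 2: the data itself
    pg_dump --section=data mydb > data.sql
    # group 3: indexes, constraints, triggers added after the load
    pg_dump --section=post-data mydb > post-data.sql

    # the schema file is small, so editing it by hand is cheap
    $EDITOR pre-data.sql

    # reload the pieces in order
    psql -f pre-data.sql newdb
    psql -f data.sql newdb
    psql -f post-data.sql newdb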

//Magnus

