Thread: pg_dump feature

pg_dump feature

From
Naz
Date:
Hi all,
I brought this up a few years ago in the 7.4 days, and since there is
still no satisfactory solution I thought I'd raise it again. When
dumping a schema, it is often necessary to dump the tables separately
from the constraints and other non-structural metadata. The most
obvious use case for this is updating the schema of an existing
database.

Assume I have a database named "production" and one named "testing".
Let's now say that I have decided that "testing" is correct and the app
is ready to work with it, and I now want to put the data from
"production" into the schema from "testing".

Currently, I have to dump the schema from the "testing" database and
then manually break it after the table definitions and before any
constraints; otherwise the data won't restore properly due to ordering
issues. It would be far easier if there were a mechanism in pg_dump
that allowed you to dump the two parts of the schema separately. That
would make this easy to script, without having to examine the file by
hand and use head/tail to apply only the correct parts of the schema
at the appropriate points in the script.
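
The script I end up with looks roughly like this ("newdb" is just
whatever the rebuilt database is called, and the grep pattern is only a
guess at the comment headers the dump happens to contain, which is
exactly the fragile part):

    # dump the schema from "testing"
    pg_dump --schema-only testing > schema.sql

    # find where the first constraint entry starts; this is the fragile
    # bit, since it depends on the comment headers pg_dump emits
    BREAK=$(grep -n 'Type: CONSTRAINT' schema.sql | head -n 1 | cut -d: -f1)

    # apply the table definitions, then the data, then everything else
    head -n $((BREAK - 1)) schema.sql | psql newdb
    pg_dump --data-only production | psql newdb
    tail -n +"$BREAK" schema.sql | psql newdb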

I've been given a few different suggestions over the years, but they all
involve computational detection of the break point in the schema dump,
which is unreliable and hacky. A proper mechanism in pg_dump would be
next to trivial to implement (for someone familiar with the code who can
write C, which I can't, otherwise I'd do it myself), would avoid
potentially silent bugs in shell scripts, and would massively assist in
the efficient manipulation of in-place PostgreSQL databases.
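
For what it's worth, the usual suggestion is to take a custom-format
dump and filter pg_restore's TOC listing, along these lines (the grep
filters are still just guessing at which TOC entry types matter):

    # custom-format dump of the "testing" schema
    pg_dump -Fc --schema-only testing > schema.dump

    # split the TOC: everything except constraints/indexes/triggers first,
    # then only those entries afterwards
    pg_restore -l schema.dump | grep -vE 'CONSTRAINT|INDEX|TRIGGER' > pre.list
    pg_restore -l schema.dump | grep -E  'CONSTRAINT|INDEX|TRIGGER' > post.list

    # restore in stages around the data load
    pg_restore -L pre.list -d newdb schema.dump
    pg_dump --data-only production | psql newdb
    pg_restore -L post.list -d newdb schema.dump

It works, but it is exactly the sort of break-point detection I mean.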

I hope someone agrees with me :)

Peace be upon you
- Naz.


Re: pg_dump feature

From
David Fetter
Date:
On Mon, Sep 22, 2008 at 03:25:35AM +1000, Naz wrote:
> Hi all,
> I brought this up a few years ago in the 7.4 days, and since there
> is still no satisfactory solution I thought I'd raise it again. When
> dumping a schema, it is often necessary to dump the tables separately
> from the constraints and other non-structural metadata. The most
> obvious use case for this is updating the schema of an existing
> database.
>
> Assume I have a database named "production" and one named "testing".  

Good so far.

> Let's now say that I have decided that "testing" is correct and the
> app is ready to work with it, and I now want to put the data from
> "production" into the schema from "testing".

That's where you should have been storing all the SQL transformations
(DDL, DCL and DML) in your source code management system and testing
said transformations to make sure they all fit inside one transaction :)
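
As a sketch (the table and file names here are made up), that means
keeping each change as a script in the SCM tree and applying it with
psql in a single transaction, so a failed run rolls back cleanly:

    # migrations/2008-09-22_add_colour.sql, checked into the repository:
    #   ALTER TABLE widgets ADD COLUMN colour text;
    #   UPDATE widgets SET colour = 'unknown';
    #   ALTER TABLE widgets ALTER COLUMN colour SET NOT NULL;

    # ON_ERROR_STOP aborts on the first error; --single-transaction wraps
    # the whole file in BEGIN/COMMIT, so it either all applies or none does
    psql -v ON_ERROR_STOP=1 --single-transaction \
         -f migrations/2008-09-22_add_colour.sql testing

Run it against "testing" (or a scratch copy of "production") first; if
it commits there, the same file goes against "production" unchanged.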

> I hope someone agrees with me :)

The above is what I've seen work.  Other schemes...not so well.

> Peace be upon you

And upon you, peace.
David.
-- 
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter      XMPP: david.fetter@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


Re: pg_dump feature

From
Tom Lane
Date:
Naz <lists@mrnaz.com> writes:
> ... It would be far easier if there were a mechanism in pg_dump
> that allowed you to dump the two parts of the schema separately,

Feel free to fix up and resubmit the incomplete patch for that, which
was rejected during the last commitfest.
        regards, tom lane