Re: Dump/restore with bad data and large objects - Mailing list pgsql-general

From Joshua Drake
Subject Re: Dump/restore with bad data and large objects
Date
Msg-id 20080825100413.1aca1bcf@jd-laptop
In response to Dump/restore with bad data and large objects  ("John T. Dow" <john@johntdow.com>)
Responses Re: Dump/restore with bad data and large objects
List pgsql-general
On Mon, 25 Aug 2008 10:21:54 -0400
"John T. Dow" <john@johntdow.com> wrote:

> By "bad data", I mean a character that's not UTF8, such as hex 98.
>
> As far as I can tell, pg_dump is the tool to use. But it has
> serious drawbacks.
>
> If you dump in the custom format, the data is compressed (nice) and
> includes large objects (very nice). But, from my tests and the
> postings of others, if there is invalid data in a table, although
> PostgreSQL won't complain and pg_dump won't complain, pg_restore will
> strenuously object, rejecting all rows for that particular table (not
> nice at all).

You can use the TOC feature of -Fc to skip restoring that single
table. You can then extract that table as a plain-text dump, clean
the data, and restore it separately.

If you have foreign keys and indexes on the table with the bad data,
don't restore them until *after* you have done the above.
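Roughly, that looks like the following (a sketch only; the dump file
`mydb.dump`, database `mydb`, and table `badtable` are placeholder names):

```shell
# List the archive's table of contents into an editable file
pg_restore -l mydb.dump > mydb.toc

# Comment out the entry for the problem table; pg_restore ignores
# TOC lines that start with ';'
sed -i '/TABLE DATA public badtable/s/^/;/' mydb.toc

# Restore everything except the commented-out entry
pg_restore -L mydb.toc -d mydb mydb.dump

# Extract just the problem table's rows as plain-text SQL so the
# invalid bytes can be cleaned by hand, then load it with psql
pg_restore --data-only -t badtable -f badtable.sql mydb.dump
psql -d mydb -f badtable.sql
```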

Sincerely,

Joshua D. Drake

--
The PostgreSQL Company since 1997: http://www.commandprompt.com/
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate


