Dump/restore with bad data and large objects - Mailing list pgsql-general

From John T. Dow
Subject Dump/restore with bad data and large objects
Date
Msg-id 200808251422.m7PEMRHh038311@web2.nidhog.com
List pgsql-general
By "bad data", I mean a character that's not UTF8, such as hex 98.

As far as I can tell, pg_dump is the tool to use. But it has
serious drawbacks.

If you dump in the custom format, the data is compressed (nice) and
large objects are included (very nice). But, from my tests and the postings
of others, if a table contains invalid data, neither PostgreSQL nor pg_dump
will complain, yet pg_restore will strenuously object, rejecting all rows
for that particular table (not nice at all).
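
For reference, the sequence I've been testing looks roughly like this
(driven from a Python script; database and file names are placeholders):

    import subprocess

    # Custom-format dump: compressed, and large objects are included.
    subprocess.check_call(["pg_dump", "-Fc", "-f", "mydb.dump", "mydb"])

    # Restoring into a fresh database is where the trouble appears: if a
    # table holds bytes that aren't valid in the target encoding, loading
    # that table's data fails and all of its rows are rejected.
    subprocess.check_call(["pg_restore", "-d", "mydb_restored", "mydb.dump"])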

If you dump in plain text format, you can at least inspect the dumped
data and fix it manually or with iconv. But the plain text format doesn't
support large objects (again, not nice). Byte arrays (bytea) are supported,
but they produce very large dump files.
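
At least with the plain format I can do something like this to find the
offending lines before fixing them by hand or running the file through
iconv (the file name is just an example):

    # Report each line of a plain-text dump that is not valid UTF-8.
    with open("mydb.sql", "rb") as f:
        for lineno, line in enumerate(f, start=1):
            try:
                line.decode("utf-8")
            except UnicodeDecodeError as e:
                print("line %d: invalid byte 0x%02x" % (lineno, line[e.start]))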

Also, neither of these methods captures global objects such as roles, so
those have to be captured some other way if the database has to be rebuilt
from scratch.
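
(If I understand the docs, roles and other globals can be captured
separately with pg_dumpall --globals-only, along these lines; the output
file name is just an example:)

    import subprocess

    # Dump only global objects (roles, tablespaces) to a SQL file.
    with open("globals.sql", "wb") as out:
        subprocess.check_call(["pg_dumpall", "--globals-only"], stdout=out)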

Is my understanding incomplete or wrong? Is there no good solution?

Why isn't there a dumpall that writes in compressed format and allows recovery from bad data?

John

