Re: Somewhat automated method of cleaning table of corrupt records for pg_dump - Mailing list pgsql-general

From Heiko Wundram
Subject Re: Somewhat automated method of cleaning table of corrupt records for pg_dump
Date
Msg-id 508517E7.2010306@modelnine.org
In response to Re: Somewhat automated method of cleaning table of corrupt records for pg_dump  (Craig Ringer <ringerc@ringerc.id.au>)
Responses Re: Somewhat automated method of cleaning table of corrupt records for pg_dump  (Martijn van Oosterhout <kleptog@svana.org>)
List pgsql-general
On 22.10.2012 09:05, Craig Ringer wrote:
> Working strictly with a *copy*, does REINDEXing then CLUSTERing the
> tables help? VACUUM FULL on 8.3 won't rebuild indexes, so if index
> damage is the culprit a reindex may help. Then, if CLUSTER is able to
> rewrite the tables in index order you might be able to recover.

REINDEXing all indexes works, and CLUSTERing all tables on their primary
key (index) after rebuilding it also works, but the dump still fails with
the same error as before (i.e., out of memory while dumping a specific
table). My guess is that the broken record(s) in that table are either
not moved by the CLUSTER, or that their data portion isn't touched when
the table is rewritten in index order. So apart from walking all table
rows and deleting (by OID) those that produce errors, there's probably
not much I can do.

If there's any other possibility of "out of the box" recovery - except
writing myself a small script to walk all rows - I'd still be grateful
for a hint.
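
For the record, the row-walking script I have in mind would look roughly
like the following - a rough Python/psycopg2 sketch with a placeholder
connection string and table name, addressing rows by ctid rather than
OID so it also works on tables created without OIDs:

#!/usr/bin/env python
# Rough sketch: walk every row of one table by ctid, try to read it in
# full, and record the rows that raise an error.  "broken_table" and the
# connection string are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=mydb")  # placeholder DSN
conn.autocommit = True                  # keep going after a failed SELECT
cur = conn.cursor()

# Fetch only the physical row locations; this doesn't read the row data.
cur.execute("SELECT ctid FROM broken_table")
ctids = [r[0] for r in cur.fetchall()]

bad = []
for ctid in ctids:
    try:
        # Selecting the whole row forces the damaged data to be read,
        # which is where the out-of-memory error should show up.
        cur.execute("SELECT * FROM broken_table WHERE ctid = %s", (ctid,))
        cur.fetchall()
    except psycopg2.Error as e:
        print("bad row at ctid %s: %s" % (ctid, e))
        bad.append(ctid)
        # (if the backend crashes instead of raising an error, the
        # connection has to be re-established here)

# Only after reviewing the list, and only on the copy:
# for ctid in bad:
#     cur.execute("DELETE FROM broken_table WHERE ctid = %s", (ctid,))

Whether the DELETE itself gets past the corruption is something I'd have
to try on the copy first, of course.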

Thanks for the pointer to CLUSTER and REINDEX anyway, which I've never
properly used before - it gave me an incentive to dig into the
documentation.

--
--- Heiko.

