Re: Somewhat automated method of cleaning table of corrupt records for pg_dump - Mailing list pgsql-general

From Craig Ringer
Subject Re: Somewhat automated method of cleaning table of corrupt records for pg_dump
Date
Msg-id 5084F055.70706@ringerc.id.au
In response to Somewhat automated method of cleaning table of corrupt records for pg_dump  (Heiko Wundram <modelnine@modelnine.org>)
Responses Re: Somewhat automated method of cleaning table of corrupt records for pg_dump  (Heiko Wundram <modelnine@modelnine.org>)
List pgsql-general
On 10/19/2012 10:31 PM, Heiko Wundram wrote:
> Hey!
>
> I'm currently in the situation that due to (probably) broken memory in a
> server, I have a corrupted PostgreSQL database. Getting at the data
> that's in the DB is not time-critical (because backups have restored the
> largest part of it), but I'd still like to restore what can be restored
> from the existing database to fill in the remaining data. VACUUM FULL
> runs successfully (i.e., I've fixed the blocks with broken block
> headers, removed rows that have invalid OIDs as recorded by the VACUUM,
> etc.), but dumping the DB from the rescue system (which is PostgreSQL
> 8.3.21) to transfer it to another server still fails with "invalid memory
> alloc request size 18446744073709551613", i.e., most probably one of the
> TEXT columns in the respective tables contains an invalid size.

Working strictly with a *copy*, does REINDEXing and then CLUSTERing the
tables help? VACUUM FULL on 8.3 won't rebuild indexes, so if index
damage is the culprit a REINDEX may help. Then, if CLUSTER is able to
rewrite the tables in index order, you might be able to recover.
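A rough sketch of that recipe, using placeholder names (damaged_table,
damaged_table_pkey, and a TEXT column body are all hypothetical) and run
only against a file-level copy of the cluster:

```sql
-- 1. Rebuild the indexes; VACUUM FULL on 8.3 leaves them untouched:
REINDEX TABLE damaged_table;

-- 2. Rewrite the heap in index order. CLUSTER has to read every live
--    tuple, so it either produces a clean copy of the table or errors
--    out on an unreadable row (8.3 supports the USING syntax):
CLUSTER damaged_table USING damaged_table_pkey;

-- 3. If a dump still dies with "invalid memory alloc request size",
--    probe the TEXT column row by row; length() forces detoasting, so
--    it fails on the tuple whose stored length header is garbage:
SELECT ctid FROM damaged_table OFFSET 5000 LIMIT 1;            -- pick a row
SELECT length(body) FROM damaged_table WHERE ctid = '(123,4)'; -- probe it
-- Binary-search the OFFSET until the failing row is isolated, then:
-- DELETE FROM damaged_table WHERE ctid = '(123,4)';
```

The ctid values above are examples; substitute whatever the probe turns
up. Deleting by ctid loses that row, but lets pg_dump read the rest.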

--
Craig Ringer


