Somewhat automated method of cleaning table of corrupt records for pg_dump - Mailing list pgsql-general

From Heiko Wundram
Subject Somewhat automated method of cleaning table of corrupt records for pg_dump
Date
Msg-id 9a4a84c4bc126ff35f0593b714e4bcf1@modelnine.org
Responses Re: Somewhat automated method of cleaning table of corrupt records for pg_dump  (Craig Ringer <ringerc@ringerc.id.au>)
List pgsql-general
Hey!

I'm currently in the situation that, due to (probably) broken memory in
a server, I have a corrupted PostgreSQL database. Getting at the data
that's in the DB is not time-critical (because backups have restored the
largest part of it), but I'd still like to restore what can be restored
from the existing database to fill in the remaining data. VACUUM FULL
runs successfully (i.e., I've fixed the blocks with broken block
headers, removed rows that have invalid OIDs as reported by VACUUM,
etc.), but dumping the DB from the rescue system (which runs PostgreSQL
8.3.21) in order to transfer it to another server still fails with
"invalid memory alloc request size 18446744073709551613", i.e., most
probably one of the TEXT columns in the respective tables contains an
invalid length header.
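Something like the following should at least force every value of a
suspect TEXT column to be read and thus narrow down which table/column
triggers the error (table and column names below are placeholders):

    -- placeholder names; length() forces each value to be read/detoasted,
    -- so a corrupt length word should hit the same alloc error
    SELECT sum(length(some_text_column)) FROM suspect_table;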

The methods for recovering from this condition - on which I've not
really found much info (except some posts on pgsql-hackers) - all
require loads of manual work, and they don't seem to apply to my
corruption case anyway, as I can "select * from <table> offset <any>
limit 1" without it failing on me. Is there any possibility to automate
the row-checking? I'm not proficient with PL/pgSQL, but I'd start
writing a corresponding script if nothing is available already. I've
found a corresponding PL/pgSQL function on the web, but it does _not_
work in my case (it's 9.0+ only).
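What I have in mind is roughly the following minimal sketch (untested,
and only my guess at what works on 8.3: scan only the ctids, then force
a read of the suspect TEXT column per row inside an exception block;
table and column names are passed in as parameters):

    CREATE OR REPLACE FUNCTION find_bad_rows(tabname text, colname text)
    RETURNS SETOF tid AS $$
    DECLARE
        r     record;
        dummy integer;
    BEGIN
        -- Selecting only ctid shouldn't touch the (possibly corrupt) TEXT datum.
        FOR r IN EXECUTE 'SELECT ctid FROM ' || quote_ident(tabname) LOOP
            BEGIN
                -- length() forces the column value to be read/detoasted, so a
                -- corrupt length word should raise the alloc error right here.
                EXECUTE 'SELECT length(' || quote_ident(colname) || ') FROM '
                        || quote_ident(tabname)
                        || ' WHERE ctid = ''' || r.ctid || '''::tid'
                    INTO dummy;
            EXCEPTION WHEN OTHERS THEN
                RAISE NOTICE 'bad row at ctid %: %', r.ctid, SQLERRM;
                RETURN NEXT r.ctid;
            END;
        END LOOP;
        RETURN;
    END;
    $$ LANGUAGE plpgsql;

which I'd then call as, e.g., SELECT * FROM find_bad_rows('mytable',
'mytextcolumn'); and use the returned ctids to delete the broken rows
before re-running pg_dump. I have no idea yet whether EXCEPTION WHEN
OTHERS actually catches this particular failure, though, or whether the
per-row EXECUTE is hopelessly slow on a large table.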

Thanks in advance if anybody could point me at a corresponding hint!

--
--- Heiko.

