Thread: DB corruption.
Hi.
I have got a corrupt db.. most likely due to an xfs bug..
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: invalid page header in block 14174944 of relation base/16385/58318948
Can I somehow get pg_dump to "ignore" that block and dump everything else?
Jesper
Hi,
I would go with CLUSTER on the relation with the invalid page header and then try to dump the db again. If you have an older dump of the db and the data in the corrupted table has not changed since then, you may try to restore the table from that dump and run pg_dump again.
I don't know if one can omit a single page during a dump, but you can exclude the whole table.
Hope this helps you.
Regards
Luke
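A minimal sketch of the table-exclusion workaround, shown by assembling the pg_dump command rather than running it against a live server (the database and table names are placeholders):

```shell
# Hypothetical names; -T (--exclude-table) makes pg_dump skip the
# corrupted table so the rest of the database can still be dumped.
DB=mydb
BAD_TABLE=public.corrupted_table
CMD="pg_dump -T $BAD_TABLE -f partial.dump $DB"
echo "$CMD"
# → pg_dump -T public.corrupted_table -f partial.dump mydb
```

The excluded table can then be recovered separately, e.g. from an older dump.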
On 09-10-2011 08:49, "Jesper Krogh" <jesper@krogh.cc> wrote:
Hi.
I have got a corrupt db.. most likely due to an xfs bug..
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: invalid page header in block 14174944 of relation base/16385/58318948
Can I somehow get pg_dump to "ignore" that block and dump everything else?
Jesper
--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
Lukasz Brodziak wrote:
> I would go with cluster on the relation with invalid page header

Or, if you don't have the free disk space or the time for that, you could drop and recreate the index. You could even do that with no down time or long blocks by using CREATE INDEX CONCURRENTLY.

-Kevin
On Sun, Oct 09, 2011 at 10:49:28AM +0200, Lukasz Brodziak wrote:
> Hi,
> I would go with cluster on the relation with invalid page header and try to

forgive me, what does "cluster on the relation" mean?
It was a shortcut. I meant clustering the corrupted table on one of its indexes, with CLUSTER indexname ON tablename.
On 09-10-2011 15:17, "Ray Stell" <stellr@cns.vt.edu> wrote:
On Sun, Oct 09, 2011 at 10:49:28AM +0200, Lukasz Brodziak wrote:
> Hi,
> I would go with cluster on the relation with invalid page header and try to
forgive me, what does "cluster on the relation" mean?
Jesper Krogh <jesper@krogh.cc> writes:
> I have got a corrupt db.. most likely due to an xfs bug..
> pg_dump: SQL command failed
> pg_dump: Error message from server: ERROR: invalid page header in block
> 14174944 of relation base/16385/58318948
> Can I somehow get pg_dump to "ignore" that block and dump everything else?

The traditional solution is to zero out the bad block, eg using dd from /dev/zero. It's easy to zero more than you intended, so practicing on a scratch copy of the table is recommended.

But first you should check what kind of relation that is. If it's an index, you could just REINDEX it instead.

regards, tom lane
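Tom's dd technique can be sketched as follows; the file name and bad-block number here are made up, and the example wipes a page in a self-created dummy file rather than a real relation file (always practice on a scratch copy, with the server stopped):

```shell
BLOCKSIZE=8192                 # PostgreSQL's default page size
FILE=scratch_relation_copy     # hypothetical scratch copy of the relation file
BADBLOCK=5                     # hypothetical bad page number within this file

# Build a 10-page dummy file so the sketch is self-contained.
dd if=/dev/urandom of="$FILE" bs="$BLOCKSIZE" count=10 2>/dev/null

# Zero exactly one page; conv=notrunc keeps the rest of the file intact
# (without it, dd would truncate the file right after the written block).
dd if=/dev/zero of="$FILE" bs="$BLOCKSIZE" seek="$BADBLOCK" count=1 conv=notrunc 2>/dev/null
```

On a real cluster the target would be the relation's file under base/ (or one of its ".n" segment continuations), and PostgreSQL must be shut down while the file is modified.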
On 2011-10-09 17:41, Tom Lane wrote:
> Jesper Krogh <jesper@krogh.cc> writes:
>> I have got a corrupt db.. most likely due to an xfs bug..
>> pg_dump: SQL command failed
>> pg_dump: Error message from server: ERROR: invalid page header in block
>> 14174944 of relation base/16385/58318948
>> Can I somehow get pg_dump to "ignore" that block and dump everything else?
> The traditional solution is to zero out the bad block, eg using dd from
> /dev/zero. It's easy to zero more than you intended, so practicing on a
> scratch copy of the table is recommended.
>
> But first you should check what kind of relation that is. If it's an
> index, you could just REINDEX it instead.

So it is "just" blocknumber * blocksize .. offset, blocksize of zeroes.. ? Or is the math harder?

--
Jesper
Jesper Krogh <jesper@krogh.cc> writes:
> On 2011-10-09 17:41, Tom Lane wrote:
>> The traditional solution is to zero out the bad block, eg using dd from
>> /dev/zero. It's easy to zero more than you intended, so practicing on a
>> scratch copy of the table is recommended.
> So it is "just" blocknumber * blocksize .. offset, blocksize of zeroes.. ?
> Or is the math harder?

You have to account for the division of the table into segment files; if blocknumber * blocksize is greater than 1GB, reduce modulo 1GB and look to the appropriate "xxx.n" segment file. Otherwise it's what you'd expect.

regards, tom lane
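Tom's arithmetic, applied to the block number from the error message (this assumes the stock build defaults of an 8 kB block size and 1 GB segment size; a custom build may differ):

```shell
BLKSZ=8192                                        # default block size
BLOCKS_PER_SEG=$(( (1024*1024*1024) / BLKSZ ))    # 131072 blocks per 1GB segment

BLOCK=14174944                                    # block from the error message

SEG=$(( BLOCK / BLOCKS_PER_SEG ))        # which segment file
INSEG=$(( BLOCK % BLOCKS_PER_SEG ))      # block offset within it (dd's seek=)
OFFSET=$(( INSEG * BLKSZ ))              # byte offset within it

echo "segment $SEG, seek $INSEG, byte offset $OFFSET"
# → segment 108, seek 19168, byte offset 157024256
```

So for this particular error the page to zero lives in segment file "58318948.108", at block 19168 within that file.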
You may use this:
SET zero_damaged_pages = on;
VACUUM FULL corrupted_table;
Unfortunately, zeroing pages means losing the data stored in them (they will be overwritten with zeros).
On 09-10-2011 19:49, "Jesper Krogh" <jesper@krogh.cc> wrote:
> On 2011-10-09 17:41, Tom Lane wrote:
>> Jesper Krogh <jesper@krogh.cc> writes:
>>> I have got a corrupt db.. most likely due to an xfs bug..
>>> pg_dump: SQL command failed
>>> pg_dump: Error message from server: ERROR: invalid page header in block
>>> 14174944 of relation base/16385/58318948
>>> Can I somehow get pg_dump to "ignore" that block and dump everything else?
>> The traditional solution is to zero out the bad block, eg using dd from
>> /dev/zero. It's easy to zero more than you intended, so practicing on a
>> scratch copy of the table is recommended.
>> But first you should check what kind of relation that is. If it's an
>> index, you could just REINDEX it instead.
>
> So it is "just" blocknumber * blocksize .. offset, blocksize of zeroes.. ?
> Or is the math harder?
> --
> Jesper