Re: Missing pg_clog files - Mailing list pgsql-admin

From Tom Lane
Subject Re: Missing pg_clog files
Date
Msg-id 29313.1222261821@sss.pgh.pa.us
In response to Re: Missing pg_clog files  (Carol Walter <walterc@indiana.edu>)
Responses Re: Missing pg_clog files  (Carol Walter <walterc@indiana.edu>)
Re: Missing pg_clog files  (Carol Walter <walterc@indiana.edu>)
List pgsql-admin
Carol Walter <walterc@indiana.edu> writes:
> I tried creating the files 0000 through 002F.  Pg_dump still will not
> run.  The error was as follows:

> -bash-3.00$ pg_dump -U postgres ebiz > ebiz_bk.sql
> pg_dump: SQL command failed
> pg_dump: Error message from server: ERROR:  could not access status
> of transaction 20080015
> DETAIL:  Could not read from file "pg_clog/0013" at offset 32768:
> Error 0.
> pg_dump: The command was: COPY ebizd.products_categories
> (category_id, product_id) TO stdout;

You need to make the files the right size (256K of zeroes).
A suitable "dd" from /dev/zero will accomplish this on modern
Unixen (ie, anything that has /dev/zero).
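
For example, something along these lines, run as the postgres user
in the data directory, should produce a correctly-sized 256K segment
for each missing file (using 0013 from your error as an example):

    dd if=/dev/zero of=pg_clog/0013 bs=8192 count=32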

Note that this is by no means a fix; it simply allows pg_dump to
complete.  What you are really doing by filling those files with
zeroes is saying "assume all these old transactions aborted".
You *will* have data loss.  It will only affect rows that haven't
been accessed in a very long time (since at least June, looks like),
but gone is gone.

Another possibility that might be better is to fill the files with
0x55, though this is harder since /dev/zero won't help.  That would
force all the old transactions to be considered committed rather than
aborted.  This isn't really better from a consistency standpoint, but
if you feel that most of your data-altering commands succeed then
this might give you a closer approximation to the state you want.
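
One rough way to do that, assuming your tr accepts octal escapes
(0x55 is octal 125), would be something like:

    dd if=/dev/zero bs=8192 count=32 | tr '\0' '\125' > pg_clog/0013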

The whole thing is pretty troubling because 8.2.x is supposed to
contain defenses against this type of problem.  Could we see
the contents of "select datname, datfrozenxid from pg_database"?
Also, have you tried dumping individual databases instead of
pg_dumpall?  (It would be good to experiment with that before
you start making bogus pg_clog files; once you do that there's
no going back in terms of recovering the true state of your data.)
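
For instance, as the postgres superuser:

    psql -U postgres -c "select datname, datfrozenxid from pg_database"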

            regards, tom lane

PS: Since you've evidently got a lot of rows that haven't been
accessed in months, I conclude that you have not been running
routine backups.  Tut tut.  I trust you'll remedy that oversight
as soon as you get out of the immediate problem.
