Re: clog problem - Mailing list pgsql-general

From: Tom Lane
Subject: Re: clog problem
Date:
Msg-id: 27106.1031148278@sss.pgh.pa.us
In response to: clog problem (Bob Parkinson <rwp@biome.ac.uk>)
List: pgsql-general

Bob Parkinson <rwp@biome.ac.uk> writes:
> FATAL 2:  open of /usr/local/pgsql/data/pg_clog/02B6 failed: No such file
> or directory

The direct cause of this problem is a tuple containing a bogus
transaction ID number (evidently 0x2B6xxxxx for some xxxxx, which I
assume is not close to your really active transaction numbers --- what
filenames do exist in $PGDATA/pg_clog?).
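
For reference, each pg_clog segment file covers 0x100000 (1,048,576)
transaction IDs (a 256K file at 2 status bits per transaction), and its
name is just the high-order part of the XID in hex.  A quick shell sketch
of checking that, assuming $PGDATA points at your data directory (the
example XID below is made up):

    # list the clog segments that actually exist
    ls -l $PGDATA/pg_clog

    # which segment would an xid like 0x2B612345 live in?  prints 02B6
    printf '%04X\n' $(( 0x2B612345 / 0x100000 ))

If the segment names you do have are nowhere near 02B6, that pretty much
confirms the XID in the bad tuple is garbage.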

The next question of course is how did it get that way?  It's possible
that this is a symptom of hardware problems, or there could be a
software bug we need to identify and fix.  But that would take a lot
more info than we have.

If you want to dig into it, the next step would be to identify where the
bad tuple is and then use pg_filedump or something similar to have a
look at the raw data.
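
If you go that route, something along these lines can narrow it down.  The
database and table names here are placeholders (one crude way to finger the
table is to run VACUUM VERBOSE and note which relation it was working on
when it died), and pg_filedump's option letters vary a bit between
versions, so check its usage output:

    # locate the file that backs the suspect table
    DBOID=$(psql -Atc "SELECT oid FROM pg_database WHERE datname = 'mydb'" mydb)
    RELFILE=$(psql -Atc "SELECT relfilenode FROM pg_class WHERE relname = 'mytable'" mydb)

    # dump it with interpreted tuple headers, then look for tuples whose
    # xmin/xmax fall in the 0x2B6xxxxx range
    pg_filedump -i /usr/local/pgsql/data/base/$DBOID/$RELFILE | less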

If you just want to get rid of the bad data as expeditiously as
possible, I'd suggest (a) make a file 256K long containing all zeroes,
(b) temporarily install it as $PGDATA/pg_clog/02B6, (c) run VACUUM;
(d) remove the bogus 02B6 file again.  However, this will probably ruin
any chance of deducing what went wrong afterwards...
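
In shell terms that recipe is roughly the following, run as the postgres
user with $PGDATA set; "mydb" stands in for the affected database:

    dd if=/dev/zero of=$PGDATA/pg_clog/02B6 bs=8192 count=32   # (a)+(b): 256K of zeroes
    psql -c "VACUUM" mydb                                      # (c)
    rm $PGDATA/pg_clog/02B6                                    # (d)

The zeroed segment just satisfies the file lookup so VACUUM can run to
completion instead of dying on the missing-file error.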

            regards, tom lane
