Re: FATAL 2: open of pg_clog error - Mailing list pgsql-general

From Tom Lane
Subject Re: FATAL 2: open of pg_clog error
Date
Msg-id 15948.1015345599@sss.pgh.pa.us
In response to FATAL 2: open of pg_clog error  ("Bjoern Metzdorf" <bm@turtle-entertainment.de>)
List pgsql-general
"Bjoern Metzdorf" <bm@turtle-entertainment.de> writes:
> since this morning we are getting this error message while vacuuming:

> 2002-03-05 12:42:08 DEBUG:  --Relation pg_toast_16854--
> 2002-03-05 12:42:10 FATAL 2:  open of /raid/pgdata/pg_clog/0202 failed: No such file or directory

Given that you don't have any actual clog segments beyond 0046, it would
seem that pg_toast_16854 contains a trashed tuple --- specifically, one
having a bogus xmin or xmax that's far beyond the existing range of
transaction IDs.

> Any hints besides doing an initdb?

You shouldn't need to initdb to get out of a problem with just one
table.  I'd look in pg_class to see which table this is the toast table
for (look for reltoastrelid = (oid of pg_toast_16854)).  Then see if
you can pg_dump that one table.  If so, drop the table and reload from
the dump.  If not, consider dropping the table anyway --- it beats
initdb for your whole database.
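
For reference, the lookup-and-dump sequence above might look roughly like
this (the table name "mytable" and database name "mydb" are placeholders,
not anything from the original report):

    -- find the parent table of the damaged toast table
    SELECT relname FROM pg_class
     WHERE reltoastrelid = (SELECT oid FROM pg_class
                            WHERE relname = 'pg_toast_16854');

    $ pg_dump -t mytable mydb > mytable.dump    # dump just that table
    $ psql mydb -c "DROP TABLE mytable"
    $ psql mydb < mytable.dump                  # reload from the dump

If pg_dump itself dies on the trashed tuple, the dump step won't
complete, which is the case where dropping the table outright becomes the
fallback.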

Another interesting question is whether the problem stems from a
hardware fault (e.g., disk dropped a few bytes) or software (did Postgres
screw up?).  Perhaps you could just rename the broken table out of the
way, instead of dropping it, so as to preserve it for future analysis.
I for one would be interested in looking at the broken data.
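
Renaming instead of dropping could be done along these lines (again,
"mytable" is a placeholder); the renamed table keeps its data files on
disk for later inspection:

    ALTER TABLE mytable RENAME TO mytable_broken;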

            regards, tom lane
