"Marc Schablewski" <ms@clickware.de> writes:
> I kept a copy of the data files in case it is needed, but I have to
> check first if I am allowed to give away that information. Some of the
> data is confidential. If you just need the files containing the damaged
> table, this won't be a big problem, because it does not contain any
> confidential information (as long as one data file only contains the
> data of one table). The other problem is the size of the files. The
> whole database is about 60GB and the files belonging to that table are
> about 2.5GB. Maybe there is a way to pre-select the data you need?
Perhaps. You could find the records with unreasonable values. But I don't
think there's any convenient way to find the records which produce the clog
error or which are missing unless they turn out to be on the same page.
Out of curiosity, what do the unreasonable values look like?
Earlier you said:
> We narrowed it down to a few records in that table. Some records contain
> unreasonable values, others produce the same message about the missing
> pg_clog file when selected and some are simply missing. But they must have
> existed, because there are still records in a second table referencing them.
If you still have a live database with this data, then running
SELECT ctid FROM tab WHERE ...
for the records with unreasonable values might tell you which blocks are
corrupted. The value before the comma in each ctid is the block number,
which, when multiplied by 8192 (assuming you're using 8k blocks), gives the
file offset at which to look for the page.
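For example (the table name and WHERE clause here are hypothetical;
substitute whatever isolates your bad records):

postgres=# SELECT ctid FROM tab WHERE value > 1000000;
    ctid
------------
 (123456,7)
(1 row)

Here the record sits in block 123456, so the page starts at byte offset
123456 * 8192 = 1011351552 in the table's file.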
To find which file to look in for that block, do:
postgres=# select relfilenode from pg_class where relname = 'tablename';
relfilenode
-------------
16384
(1 row)
Note that if the file offset is over 1G then you would be looking for a file
named 16384.N, where N is the gigabyte chunk number; the offset within that
segment is the total offset modulo 1G.
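You can do that arithmetic in psql itself if you like (a sketch; 123456 is
the hypothetical block number from the example above):

postgres=# SELECT (123456::bigint * 8192) / (1024*1024*1024) AS segment,
postgres-#        (123456::bigint * 8192) % (1024*1024*1024) AS offset_in_segment;
 segment | offset_in_segment
---------+-------------------
       0 |        1011351552
(1 row)

So that block lives in segment 0 (the file named just 16384) at offset
1011351552. Once you know the offsets, something like dd with bs=8192 and
skip set to the block number could copy out just the suspect pages, which
would be far smaller than the full 2.5GB.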
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com