Re: pg_restore recovery from error. - Mailing list pgsql-hackers

From Chris Bowlby
Subject Re: pg_restore recovery from error.
Date June 15, 2004
Msg-id 6.1.0.6.2.20040615164459.02406bc8@mail.pgsql.com
In response to Re: pg_restore recovery from error.  ("V i s h a l Kashyap @ [Sai Hertz And Control Systems]" <sank89@sancharnet.in>)
List pgsql-hackers
Hi Vishal,
 Unfortunately, that is not an option for me. I'm working with a file in 
which 400505294 of 400513265 bytes can be read successfully; at that point 
in the file there is a block that is only 3870 bytes long instead of the 
expected 4096. Running the command you suggested still fails at that same 
point.
 Over the last few hours I've been tracing through pg_restore.c, trying to 
narrow down how it filters through a lot of the data. Most of it I 
understand, but there is one thing I think I'm missing, and maybe one of 
you nice developers can point me in the right direction.
 I've been able to trace the code down to the _PrintTocData and 
_PrintData functions in pg_backup_custom.c, which is where I need to be to 
understand more about the data returns. I've added some print statements 
to pg_backup_custom.c such that:

---- cut --- (_PrintData)
	cnt = fread(in, 1, blkLen, AH->FH);

	/* debug: bytes actually read, expected block length, current offset */
	printf("%d - %d - %d\n", cnt, blkLen, ctx->filePos);

	if (cnt != blkLen)
		die_horribly(AH, modulename,
					 "could not read data block -- expected %lu, got %lu\n",
					 (unsigned long) blkLen, (unsigned long) cnt);

	ctx->filePos += blkLen;
---- cut ----
 spits out:

...
3870 - 4096 - 400505294

 Telling me that there are only about 8 kbytes of compressed data left to 
retrieve (400513265 - 400505294 = 7971 bytes). Yes, I realize that's a 
small remaining amount, but it works out to about 50 tables with missing 
data.
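 As an aside, for anyone copying that debug line: fread() returns a 
size_t, so the %d formats aren't strictly portable. Casting to unsigned 
long, the way the existing die_horribly() call already does, is safer:

	printf("%lu - %lu - %lu\n",
		   (unsigned long) cnt, (unsigned long) blkLen,
		   (unsigned long) ctx->filePos);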
 What I'd like to do is somehow modify pg_restore so that it skips the 
offending block and continues on with the next one. However, each time I 
try, I get either a loop that seems to indicate something got stuck, or a 
core file :>. This tells me that even a compressed pg_dump file in the 
custom format is not simply a number of smaller files grouped into one 
tar; the data is one continuous stream. It also means I have to somehow 
find the header of the next table and skip any remaining records in the 
current one (which I can accept), but it's the data in those remaining 
tables that I need to dig out. Does anyone have an idea how to skip to the 
next table, ignoring any records on the other side of the offending block 
(including the offending record)?
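 The furthest I've gotten is something like the sketch below -- untested, 
replacing the die_horribly() call shown above, and assuming write_msg() is 
usable in pg_backup_custom.c the way it is elsewhere in pg_dump:

---- sketch ----
	cnt = fread(in, 1, blkLen, AH->FH);
	if (cnt != blkLen)
	{
		/* Report the short block instead of dying, account for the
		 * bytes we did get, and give up on this entry rather than
		 * the whole restore.
		 */
		write_msg(modulename,
				  "WARNING: short data block at offset %lu -- "
				  "expected %lu, got %lu; skipping rest of entry\n",
				  (unsigned long) ctx->filePos,
				  (unsigned long) blkLen,
				  (unsigned long) cnt);
		ctx->filePos += cnt;	/* what was actually consumed */
		return;					/* abandon this entry only */
	}
---- sketch ----

 Even then, if bytes were dropped from the middle of the file rather than 
overwritten in place, every length word after that point is misaligned, so 
a byte-by-byte scan forward for the next plausible block header would 
still be needed to resync -- and the decompressor is still going to see a 
truncated zlib stream, which may be exactly where my loop/core dump is 
coming from.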


At 02:46 PM 6/15/2004, V i s h a l Kashyap @ [Sai Hertz And Control 
Systems] wrote:
>Dear Chris ,
>
>>pg_restore: [custom archiver] could not read data block -- expected 4096, 
>>got 3870
>>pg_restore: *** aborted because of error
>>
>>  It appears some of the data itself is not readable, which is fine, but 
>> I'd like it to skip past this table and move onto the next one. Has 
>> anyone got any ideas as to where I should look for that?
>
>
>Make a plain text file out of your archive and then edit it appropriately 
>for the desired results.
>
>I don't remember exactly how it's done, but something like:
>
>pg_restore -U <BOGUS>  >> my_database.sql
>cat my_database.sql
>
>Drawbacks:
>1. A large database would be a headache
>2. Blobs would not be restored
>
>
>--
>Regards,
>Vishal Kashyap
>Director / Lead Software Developer,
>Sai Hertz And Control Systems Pvt Ltd,
>http://saihertz.rediffblogs.com
>Yahoo  IM: mailforvishal[ a t ]yahoo.com
>




