Hi again Mike:
mike g wrote:
> Ok,
>
> Other suggestions:
> 1) Have you done a VACUUM FULL on the table? That should reduce the
> table size and resources required.
The first thing I tried was VACUUM on the table, but I got the same error:
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
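For the record, what I ran was just the plain form (the table name here is
a placeholder for the damaged one):

    -- this is the command that triggered the crash above
    VACUUM affected_table;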
> 2) Use psql to dump the table instead of pg_dump. In psql, do a
> COPY affected_table TO '/file_name'. That will output the table
> contents as a tab-delimited text file, which could then be reimported
> later using COPY affected_table FROM '/that_file_name'.
>
> COPY BINARY affected_table TO '/file_name' could be considered as well.
I could not try this since I had already solved the problem by saving
partial contents of the table, recreating it, and reinserting. In the end,
I only lost 300 tuples :-(
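For the archives, the round trip you describe would be roughly this
(table and file names are placeholders):

    -- dump the table contents to a tab-delimited text file on the server
    COPY affected_table TO '/tmp/affected_table.txt';

    -- after recreating the table, load the rows back in
    COPY affected_table FROM '/tmp/affected_table.txt';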
> 3) Create a few smaller tables with the same data definitions. Insert
> specific sections of the table into each smaller table via
> INSERT INTO one_smaller_table
> SELECT * FROM affected_table WHERE date BETWEEN X AND Y
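That is essentially what I ended up doing by hand; something along these
lines, using a CREATE TABLE AS variant of your INSERT ... SELECT (table
names and dates are placeholders):

    -- copy one date range at a time, narrowing the range
    -- whenever a chunk hits the damaged pages
    CREATE TABLE salvage_q1 AS
        SELECT * FROM affected_table
        WHERE date BETWEEN '2003-01-01' AND '2003-03-31';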
What worries me most, besides recovering this table, is why this could
have happened and how to avoid it. I guess it's due to a hardware problem,
and the only solution is frequent backups.
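Until I find the cause, I will probably just schedule something like this
from cron (database name and paths are placeholders):

    # nightly dump at 02:00, custom format for restoring with pg_restore
    0 2 * * *  pg_dump -Fc mydb > /backups/mydb.dump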
Thanks for your help, Mike
Ruben.