Partially corrupted table - Mailing list pgsql-bugs

From: Filip Hrbek
Subject: Partially corrupted table
Msg-id: 005101c6cb6f$0e101060$1e03a8c0@fhrbek
List: pgsql-bugs
Platform:
CentOS release 4.3 (Final) (Linux 2.6.9-34.EL)

Database version:
PostgreSQL 8.1.3 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC) 3.4.5 20051201 (Red Hat 3.4.5-2)

Description:
One of approximately 100 tables is partially corrupted. An attempt to read or
dump the data from the table sometimes succeeds and sometimes crashes the server.
After upgrading to 8.1.4 the behaviour remained unchanged (using the cluster
created with 8.1.3). Unfortunately, I am not able to reproduce the error from
scratch using ONLY 8.1.4.
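
If it helps with the diagnosis, one way I could try to narrow down which page
is damaged is to stream the rows together with their physical location, so
that the last ctid printed before the failure points at the broken block.
This is only an untested sketch, not something I have actually run against
this cluster:

dwhdb=# SELECT ctid, * FROM dwhdata_salemc.fct;
-- the last ctid printed before the error or crash, e.g. (1234,5),
-- identifies the damaged block (1234)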

Note:
After a successful dump/reload everything is OK. The problem is not in the
data content itself, but in the binary database cluster. This is why I would
like to send you the whole cluster instead of the database dump as an
attachment. The problem is its size (30M). Please tell me where or how to
send it. For simplicity, I removed all other objects from the database. There
is only one table with several indexes, and it contains 56621 rows.
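
The dump/reload workaround looks roughly like this (an illustrative sketch
only; the exact commands and options I used may have differed, and the dump
only succeeds on attempts that do not hit the error):

# dump the dwhdata_salemc schema, then recreate the database and reload it
pg_dump -p5447 -U postgres -n dwhdata_salemc dwhdb > fct_only.sql
dropdb -p5447 -U postgres dwhdb
createdb -p5447 -U postgres dwhdb
psql -p5447 -U postgres -d dwhdb -f fct_only.sql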

Here are some examples of the behaviour:

[root@devlin2 tmp]# pg_dumpall -p5447 -U postgres > pgdump.sql
pg_dump: ERROR:  invalid memory alloc request size 4294967290
pg_dump: SQL command to dump the contents of table "fct" failed: PQendcopy() failed.
pg_dump: Error message from server: ERROR:  invalid memory alloc request size 4294967290
pg_dump: The command was: COPY dwhdata_salemc.fct (time_id, company_id, customer_id, product_id, flagsprod_id, flagssale_id, account_id, accttime_id, invcustomer_id, salesperson_id, vendor_id, inv_cost_amt, inv_base_amt, inv_amt, inv_qty, inv_wght, ret_cost_amt, ret_base_amt, ret_amt, ret_qty, ret_wght, unret_cost_amt, unret_base_amt, unret_amt, unret_qty, unret_wght, bonus_forecast, bonus_final, stamp_code) TO stdout;
pg_dumpall: pg_dump failed on database "dwhdb", exiting

dwhdb=# create temp table t_fct as select * from dwhdata_salemc.fct;
SELECT

dwhdb=# create temp table t_fct as select * from dwhdata_salemc.fct;
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!>

dwhdb=# create temp table t_fct as select * from dwhdata_salemc.fct;
ERROR:  row is too big: size 119264, maximum size 8136

dwhdb=# create temp table t_fct as select * from dwhdata_salemc.fct;
ERROR:  row is too big: size 38788, maximum size 8136

AFTER UPGRADE TO 8.1.4:
dwhdb=# create temp table t_fct as select * from dwhdata_salemc.fct;
ERROR:  row is too big: size 52892, maximum size 8136


I noticed one more problem when executing vacuum:
dwhdb=# vacuum full;
WARNING:  relation "pg_attribute" page 113 is uninitialized --- fixing
VACUUM

The "vacuum" problem has happend only once.


Regards
  Filip Hrbek
