No, I'm sure it is not a HW problem. I tested the same DB cluster on two
different machines, and the error is exactly the same.
I can send you the cluster if you tell me how and where to send a 30MB file.
Thanks for the reply
Filip Hrbek
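
(One common way to get a ~30MB archive across when a single attachment is too
large is to compress it and split it into smaller pieces. A sketch assuming a
Unix shell and GNU coreutils; the data-directory path is a placeholder:)

```shell
# Compress the cluster data directory (path is a placeholder):
tar czf cluster.tar.gz /path/to/data

# Split the archive into 5 MB pieces for mailing or upload:
split -b 5M cluster.tar.gz cluster.part.

# On the receiving side, reassemble and verify the result:
cat cluster.part.* > cluster-recv.tar.gz
cmp cluster.tar.gz cluster-recv.tar.gz
```

`cmp` exits non-zero if the reassembled archive differs from the original, so
it doubles as an integrity check before unpacking.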
----- Original Message -----
From: "Tom Lane" <tgl@sss.pgh.pa.us>
To: "Filip Hrbek" <filip.hrbek@plz.comstar.cz>
Cc: <pgsql-bugs@postgresql.org>
Sent: Tuesday, August 29, 2006 4:01 PM
Subject: Re: [BUGS] Partially corrupted table
> "Filip Hrbek" <filip.hrbek@plz.comstar.cz> writes:
>> dwhdb=# create temp table t_fct as select * from dwhdata_salemc.fct;
>> SELECT
>
>> dwhdb=# create temp table t_fct as select * from dwhdata_salemc.fct;
>> server closed the connection unexpectedly
>> This probably means the server terminated abnormally
>> before or while processing the request.
>> The connection to the server was lost. Attempting reset: Failed.
>> !>
>
>> dwhdb=# create temp table t_fct as select * from dwhdata_salemc.fct;
>> ERROR: row is too big: size 119264, maximum size 8136
>
>> dwhdb=# create temp table t_fct as select * from dwhdata_salemc.fct;
>> ERROR: row is too big: size 38788, maximum size 8136
>
> I think you've got hardware problems. Getting different answers from
> successive scans of the same table is really hard to explain any other
> way. memtest86 and badblocks are commonly suggested for testing memory
> and disk respectively on Linux machines.
>
> regards, tom lane
>