Thread: row is too big: size 8168, maximum size 8160

row is too big: size 8168, maximum size 8160

From: Mario De Frutos Dieguez
Hello everyone!

I've found this error message and it's driving me crazy.

I have a table with 790 numeric columns. When I run an INSERT INTO x SELECT ..., I get this message, and it always involves the same column(s).

I've tried everything: VACUUM FULL, batching the insert with updates... but nothing works; it's always the same error.

The BLCKSZ is set to 8K here: https://github.com/postgres/postgres/blob/master/src/include/pg_config.h.in#L36, so I suspect it's either this limit, corrupted data, or something I'm missing.

I've also read about TOAST-able columns and so on, and I haven't touched any storage properties. The data in those columns aren't big numbers (though they may have many decimal digits).
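
For reference, a quick way to check how wide the rows actually get (the table name below is just a placeholder for ours):

    -- Approximate width of the rows the SELECT produces; anything close
    -- to 8160 bytes won't fit on an 8K heap page.
    SELECT max(pg_column_size(t.*)) AS max_row_bytes
    FROM source_table AS t;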

Any clues or help?

Thank you

Re: row is too big: size 8168, maximum size 8160

From: Tom Lane
Mario De Frutos Dieguez <mariodefrutos@gmail.com> writes:
> I've found this error message and it's driving me crazy.

> I have a table with 790 numeric columns. When I run an INSERT INTO
> x SELECT ..., I get this message, and it always involves the same column(s).

You can either rethink your data representation (maybe you could put a
bunch of those values into an array?) or recompile/re-initdb with a
larger value of BLCKSZ.  I'd favor the former --- 790 columns is too many.
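
A minimal sketch of the array idea, with hypothetical names:

    CREATE TABLE x_arr (
        id   bigint PRIMARY KEY,
        vals numeric[]   -- vals[1] .. vals[790]
    );

    -- The array is a single varlena value, so TOAST can compress it or
    -- move it out of line; 790 separate numeric columns get neither.
    INSERT INTO x_arr (id, vals)
    SELECT id,
           ARRAY[col1, col2, /* ... the remaining columns ... */ col790]
    FROM x_source;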

            regards, tom lane


Re: row is too big: size 8168, maximum size 8160

From: Mario de Frutos Dieguez
I see!

Thank you for your quick answer. We'll have to think about whether we can rearrange it.

The funny thing is that we have other tables with the same number of
columns, identical except for stored geometries, and those store the
information without any problem.
That's why I thought it might be corruption or something like that.


Re: row is too big: size 8168, maximum size 8160

From: ramsiddu007
Hi,
     We recently faced the same problem. We were creating a table with dynamic columns, and while updating the data we hit the same "row size too big" error. On our DBA lead's suggestion we made a change: one date column was taking a lot of space because it was stored in timestamp format, so we cast that column to the date data type. That brought the length down to 10, which freed enough space to solve the problem. The same cause isn't behind this error every time, though, so first check your data and remove anything unnecessary. This was my first experience with this error.
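
Roughly the change we made, with the names altered:

    -- Store only the date part: on disk a date is 4 bytes vs. 8 for a
    -- timestamp, and its text form is 10 characters.
    ALTER TABLE t ALTER COLUMN created_at TYPE date
        USING created_at::date;
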
Thanking you


Re: row is too big: size 8168, maximum size 8160

From: Tom Lane
Mario de Frutos Dieguez <mariodefrutos@gmail.com> writes:
> Thank you for your quick answer. We'll have to think about whether we can rearrange it.
> The funny thing is that we have other tables with the same number of
> columns, identical except for stored geometries, and those store the
> information without any problem.
> That's why I thought it might be corruption or something like that.

No, it's just too many columns.  If we suppose that the average width of
your numeric columns is 12 bytes, then you need 790*12 = 9480 bytes,
which doesn't fit on an 8K page even without any accounting for row
header and page header overhead.  TOAST can't help by pushing values
out-of-line, because a TOAST pointer is 18 bytes so it'd actually make
things worse.  (TOAST works well for individual large fields, but not
at all for this case.)

You can get away with 790 columns if they're 4 or 8 bytes apiece, or if
many of them are NULL, but evidently these particular numeric values
average more than that.
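
If you want to verify that, a query along these lines (hypothetical table and column names) shows the average stored width:

    -- pg_column_size() reports the stored size of a value; a numeric
    -- takes roughly 2 bytes per 4 decimal digits plus a small header.
    SELECT avg(pg_column_size(col1)) AS avg_col1_bytes,
           avg(pg_column_size(col2)) AS avg_col2_bytes
    FROM x;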

In practice, though, there are a lot of other inefficiencies with
tables that have so many columns.  So I counsel looking into arrays
or some other approach.

            regards, tom lane


Re: row is too big: size 8168, maximum size 8160

From: Mario de Frutos Dieguez
I've been able to solve it by reducing the precision of the data in
the columns to 2 decimals. The original data was stored as integers; I
was storing much more data than needed by adding precision that wasn't
necessary :)
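
In case it helps anyone else, a sketch of that kind of fix (hypothetical names, and assuming the values fit in numeric(12,2)):

    -- Rounding to 2 decimals shrinks each value, since numeric storage
    -- grows with the number of digits actually kept.
    ALTER TABLE x ALTER COLUMN col1 TYPE numeric(12,2);

    -- Or round while copying:
    INSERT INTO x (col1)               -- repeat for the other columns
    SELECT round(col1, 2) FROM x_source;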

Thank you for your responses :)
