Thread: postgres block_size problem

postgres block_size problem

From
"Bhujbal, Santosh"
Date:

Hi All,

We were getting the following error when executing a COPY command on PostgreSQL 8.2.0:

SQL execution failed, Internal error. SQL Statement: (COPY command failed with error: ERROR:  row is too big: size 8200, maximum size 8136)

To fix the error, we increased the block size to 16k and recompiled the PostgreSQL library. With the new library the problem was resolved, and we were able to populate data for the query successfully. However, this approach fails when we try to run the new PostgreSQL binaries against a data directory that was generated with an 8k block size, which is likely to occur during an upgrade. The PostgreSQL log shows the following error:

2008-12-30 14:57:33 IST  DETAIL:  The database cluster was initialized with BLCKSZ 8192, but the server was compiled with BLCKSZ 16384.

2008-12-30 14:57:33 IST  HINT:  It looks like you need to recompile or initdb.

What could be the possible solution to this?

Thanks,

Santosh.

Re: postgres block_size problem

From
Peter Eisentraut
Date:
Bhujbal, Santosh wrote:
> Hi All,
>
> We were getting the following error when executing a COPY command on
> PostgreSQL 8.2.0:
>
> SQL execution failed, Internal error. SQL Statement: (COPY command failed
> with error: ERROR:  row is too big: size 8200, maximum size 8136)
>
> To fix the error, we increased the block size to 16k and recompiled the
> PostgreSQL library. With the new library the problem was resolved, and we
> were able to populate data for the query successfully. However, this
> approach fails when we try to run the new PostgreSQL binaries against a
> data directory that was generated with an 8k block size, which is likely
> to occur during an upgrade. The PostgreSQL log shows the following error:
>
> 2008-12-30 14:57:33 IST  DETAIL:  The database cluster was initialized
> with BLCKSZ 8192, but the server was compiled with BLCKSZ 16384.
>
> 2008-12-30 14:57:33 IST  HINT:  It looks like you need to recompile or
> initdb.
>
> What could be the possible solution to this?

If you want to migrate data from an 8k block size data directory to a
16k block size data directory, you need to do a dump and restore, just
like on a major version upgrade.
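
For reference, a minimal sketch of that path, assuming the 16k-BLCKSZ
build is installed under /opt/pgsql16k and the old 8k server is still
running on the default port (the paths and the port are illustrative):

    pg_dumpall > cluster.sql                          # dump everything from the old 8k server

    /opt/pgsql16k/bin/initdb -D /opt/pgsql16k/data    # new cluster, created with 16k pages
    /opt/pgsql16k/bin/pg_ctl -D /opt/pgsql16k/data -o '-p 5433' start

    psql -p 5433 -d postgres -f cluster.sql           # restore into the new cluster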

Re: postgres block_size problem

From
"Scott Marlowe"
Date:
On Tue, Dec 30, 2008 at 5:28 AM, Bhujbal, Santosh
<sbhujbal@starentnetworks.com> wrote:
> Hi All,
>
> We were getting the following error when executing a COPY command on
> PostgreSQL 8.2.0:

First and foremost, you need to run the latest release of 8.2, not
8.2.0.  The .0 releases of PostgreSQL tend to have the most bugs, and
you're missing two years of bug fixes by running 8.2.0.  Who knows what
bugs you may be running into because of that.

While you're at it, you might want to look at upgrading to 8.3.5,
which has quite a few enhancements over 8.2.x.  But it's not that big
of a deal; 8.2 is a good performer overall.

> SQL execution failed, Internal error. SQL Statement: (COPY command failed
> with error: ERROR:  row is too big: size 8200, maximum size 8136)

Hmmm.  What exactly are you trying to insert into what?  Normally when
you see something like this it's an index on a text field that causes
this type of problem.  But I'm not sure what's going on here.
Normally PostgreSQL will move large values out of line into a TOAST
table.  Are you running a non-standard storage parameter?
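
As a concrete check, something like this sketch shows the per-column
settings ("mytable" and "big_col" are placeholder names):

    # per-column storage: 'p' = PLAIN (never toasted), 'x' = EXTENDED
    # (the default for text), 'e' = EXTERNAL, 'm' = MAIN
    psql -c "SELECT attname, attstorage
               FROM pg_attribute
              WHERE attrelid = 'mytable'::regclass AND attnum > 0;"

    # put a column back on the default out-of-line TOAST strategy
    psql -c "ALTER TABLE mytable ALTER COLUMN big_col SET STORAGE EXTENDED;"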

> To fix the error, we increased the block size to 16k and recompiled
> the PostgreSQL library.

If you can get away with NOT using >8k blocks, do so.  The code base
is nowhere near as well tested with block sizes over 8k as it is with
8k blocks, the default.

> With the new library the problem was resolved, and we were able to
> populate data for the query successfully. However, this approach fails
> when we try to run the new PostgreSQL binaries against a data directory
> that was generated with an 8k block size, which is likely to occur
> during an upgrade. The PostgreSQL log shows the following error:

Yeah, running a non-standard block size is only advisable if you're
willing to go to the extra effort of building a new package by hand
each time, and to require that any users of your application do the
same or run a custom package you provide.
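
For what it's worth, on recent PostgreSQL versions that custom build is
just a configure option (a sketch; on 8.2-era sources it meant editing
BLCKSZ by hand in the headers instead):

    ./configure --with-blocksize=16   # table block size in kB, default 8
    make
    make install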

> What could be the possible solution to this?

Can you explain in more detail exactly what you're doing to cause the
problem?  A short test case you can post would be most helpful.

Re: postgres block_size problem

From
Tom Lane
Date:
"Scott Marlowe" <scott.marlowe@gmail.com> writes:
> On Tue, Dec 30, 2008 at 5:28 AM, Bhujbal, Santosh
> <sbhujbal@starentnetworks.com> wrote:
>> SQL execution failed, Internal error. SQL Statement: (COPY command failed
>> with error: ERROR:  row is too big: size 8200, maximum size 8136)

> Hmmm.  What exactly are you trying to insert into what?  Normally when
> you see something like this it's an index on a text field that causes
> this type of problem.

It's not an index because the error message would say so.  Evidently
it's a row that TOAST is unable to squeeze down to a workable size,
which suggests a very large number of columns.  I'd suggest rethinking
the table schema ...
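
For illustration, a sketch that reproduces the error on a scratch
database (assumes GNU seq and a local psql; the column count is
arbitrary).  Fixed-width types such as bigint can never be pushed out
to TOAST, so enough of them overflow a page no matter what:

    # ~1100 bigint columns is ~8800 bytes of untoastable fixed-width
    # data, more than fits on an 8k page (and under the 1600-column cap)
    cols=$(seq -s ', ' -f 'c%g bigint' 1 1100)
    psql -c "CREATE TABLE wide_demo ($cols);"

    # a fully non-null row then fails just like the report above
    vals=$(seq 1 1100 | sed 's/.*/1/' | paste -sd, -)
    psql -c "INSERT INTO wide_demo VALUES ($vals);"
    # ERROR:  row is too big: size ..., maximum size 8136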

            regards, tom lane

Re: postgres block_size problem

From
"Scott Marlowe"
Date:
On Tue, Dec 30, 2008 at 12:56 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Scott Marlowe" <scott.marlowe@gmail.com> writes:
>> On Tue, Dec 30, 2008 at 5:28 AM, Bhujbal, Santosh
>> <sbhujbal@starentnetworks.com> wrote:
>>> SQL execution failed, Internal error. SQL Statement: (COPY command failed
>>> with error: ERROR:  row is too big: size 8200, maximum size 8136)
>
>> Hmmm.  What exactly are you trying to insert into what?  Normally when
>> you see something like this it's an index on a text field that causes
>> this type of problem.
>
> It's not an index because the error message would say so.  Evidently
> it's a row that TOAST is unable to squeeze down to a workable size,
> which suggests a very large number of columns.  I'd suggest rethinking
> the table schema ...

Oh yeah, that does sound like wide tables.  It's likely highly
denormalized or something like that.

I thought the error message was different, but I haven't seen it in years... :)