Thread: Out of memory error with PG10.3, 10.4 but not 9.3.19

Out of memory error with PG10.3, 10.4 but not 9.3.19

From
ChatPristi
Date:
Dear all,


I have a SELECT command (on partitioned tables) that fails with:
psql:/tmp/query.txt:1: ERROR:  out of memory
DETAIL:  Cannot enlarge string buffer containing 1073741818 bytes by 32 more bytes.

I got the error with PG 10.3 and 10.4 on CentOS 7 up-to-date.
The command works with a smaller database.

The command works with the same database with PG 9.3.19 on RHEL 6.9 up-to-date.

I attach the EXPLAIN output of the SELECT command.

Apart from rewriting the query, is there any parameter in postgresql.conf that could be changed to make the query work?

Thank you very much for any help.



Attachment

Re: Out of memory error with PG10.3, 10.4 but not 9.3.19

From
Michael Paquier
Date:
On Tue, May 15, 2018 at 03:02:48PM +0000, ChatPristi wrote:
> I got the error with PG 10.3 and 10.4 on CentOS 7 up-to-date. The
> command works with a smaller database.
> The command works with the same database with PG 9.3.19 on RHEL 6.9
> up-to-date.
>
> I attach the EXPLAIN SELECT command.

Well, instead of an EXPLAIN output that takes 2.4MB compressed and
9.6MB uncompressed (effectively unreadable), could you produce a
self-contained test case with a glimpse of the schema you are using?
Where does the OOM happen, and how did you change your partitioned table
schema?  Are you using native partitioning instead?
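As a hedged side note for readers following the thread: one way to check
which scheme a parent table uses (the table name below is invented and
stands in for the real one) is to look at pg_class and pg_inherits. In
PG 10 a natively partitioned parent has relkind 'p', while an inheritance
parent is a plain table ('r') whose children are listed in pg_inherits:

    -- Sketch only; 'my_parent' is a placeholder table name.
    SELECT relname, relkind
    FROM pg_class
    WHERE relname = 'my_parent';

    -- Children are recorded in pg_inherits for both schemes.
    SELECT child.relname AS child
    FROM pg_inherits i
    JOIN pg_class parent ON parent.oid = i.inhparent
    JOIN pg_class child  ON child.oid  = i.inhrelid
    WHERE parent.relname = 'my_parent';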
--
Michael

Attachment

Re: Out of memory error with PG10.3, 10.4 but not 9.3.19

From
"Peter J. Holzer"
Date:
On 2018-05-15 15:02:48 +0000, ChatPristi wrote:
> I have a SELECT command (on partitioned tables) that fails with:
> psql:/tmp/query.txt:1: ERROR:  out of memory
> DETAIL:  Cannot enlarge string buffer containing 1073741818 bytes by 32 more
> bytes.

1073741818 is a bit less than 1GB and 1073741818+32 is a bit more. So
you are obviously hitting a 1GB limit here.

Given that 1GB is the maximum length of a character type value in
PostgreSQL and the error message mentions a "string buffer", I suspect
that your query tries to construct a very long string. Try to rewrite
the query so that it creates several shorter strings instead.
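As a hedged illustration (the table and column names below are invented;
the actual query is in the attachment), an aggregate that concatenates
everything into a single value can run into that 1GB limit, while
grouping the same aggregation produces many shorter values:

    -- Sketch only: 'big_table', 'payload' and 'part_key' are invented names.
    -- One aggregate over the whole table builds a single, possibly >1GB string:
    SELECT string_agg(payload, ',') FROM big_table;

    -- Grouping the same aggregation yields many shorter strings instead:
    SELECT part_key, string_agg(payload, ',')
    FROM big_table
    GROUP BY part_key;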

        hp




--
   _  | Peter J. Holzer    | we build much bigger, better disasters now
|_|_) |                    | because we have much more sophisticated
| |   | hjp@hjp.at         | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson <https://www.edge.org/>

Attachment

Fw: Re: Out of memory error with PG10.3, 10.4 but not 9.3.19

From
ChatPristi
Date:
I forgot to CC the list, sorry...


>Well, instead of an EXPLAIN output that takes 2.4MB compressed and
>9.6MB uncompressed (effectively unreadable), could you produce a
>self-contained test case with a glimpse of the schema you are using?
>Where does the OOM happen, and how did you change your partitioned table
>schema?  Are you using native partitioning instead?

Michael,

Thank you for your answer.

Sorry for the unreadable explain output.

I attached a SQL dump with 2 entities loaded into the database (2,872,265 entities in the actual database), along with the actual query and its output.

The OOM occurs during the query (SELECT): after ~9 minutes the memory of the postgres process grows to 8 GB and the OOM message appears.

Partitioning is done by inheritance.
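For illustration, a hedged sketch with invented table names contrasting inheritance-based partitioning with the declarative ("native") partitioning introduced in PG 10 (this is not the actual schema):

    -- Inheritance-based partitioning, as used here:
    CREATE TABLE measurements (id bigint, part_key int, payload text);
    CREATE TABLE measurements_p1
        (CHECK (part_key = 1)) INHERITS (measurements);

    -- Declarative ("native") partitioning, available since PG 10:
    CREATE TABLE measurements_nat (id bigint, part_key int, payload text)
        PARTITION BY LIST (part_key);
    CREATE TABLE measurements_nat_p1
        PARTITION OF measurements_nat FOR VALUES IN (1);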

After a complete reload of the database into PG 10.4, the OOM still occurs.



Attachment

Fw: Re: Out of memory error with PG10.3, 10.4 but not 9.3.19

From
ChatPristi
Date:

Forgot to CC the list again, sorry...

>1073741818 is a bit less than 1GB and 1073741818+32 is a bit more. So
>you are obviously hitting a 1GB limit here.
>Given that 1GB is the maximum length of a character type value in
>PostgreSQL and the error message mentions a "string buffer", I suspect
>that your query tries to construct a very long string. Try to rewrite
>the query so that it creates several shorter strings instead.
>      hp


Peter,

Thank you for your answer.
I understand that some buffer limit is being reached, but I do not see where.
The output is only 1513 rows and 78 kB.




Re: Out of memory error with PG10.3, 10.4 but not 9.3.19

From
Christophe combet
Date:
On Wednesday, May 16, 2018 at 09:48:54 UTC+2, ChatPristi <cchristo_0899@yahoo.fr> wrote:


Dear all,


I have a SELECT command (on partitioned tables) that fails with:
psql:/tmp/query.txt:1: ERROR:  out of memory
DETAIL:  Cannot enlarge string buffer containing 1073741818 bytes by 32 more bytes.

I got the error with PG 10.3 and 10.4 on CentOS 7 up-to-date.
The command works with a smaller database.

The command works with the same database with PG 9.3.19 on RHEL 6.9 up-to-date.

I attach the EXPLAIN output of the SELECT command.

Apart from rewriting the query, is there any parameter in postgresql.conf that could be changed to make the query work?

Thank you very much for any help.


Hello,

Any idea where the OOM comes from in 10.4, while 9.3 works?

Thanks.