Re: [PERFORM] out of memory - Mailing list pgsql-hackers

From Robert Haas
Subject Re: [PERFORM] out of memory
Date
Msg-id CA+TgmoYKfSY5-PcFqNJyYbe3sAxTc8mN=CL1MoXW+PDYdd3SVw@mail.gmail.com
In response to Re: [PERFORM] out of memory  (Tatsuo Ishii <ishii@postgresql.org>)
Responses Re: [PERFORM] out of memory  (John R Pierce <pierce@hogranch.com>)
List pgsql-hackers
On Tue, Oct 30, 2012 at 6:08 AM, Tatsuo Ishii <ishii@postgresql.org> wrote:
>> I have an SQL file (its size is 1GB). When I execute it, the error
>> "String of 987098801 bytes is too long for encoding conversion"
>> occurs. Please give me a solution for this.
>
> You hit the upper limit on internal memory allocation in
> PostgreSQL. IMO, there's no way to avoid the error unless you use a
> client encoding identical to the backend's.

We recently had a customer who suffered a failure in pg_dump because
the quadruple-allocation required by COPY OUT for an encoding
conversion exceeded allocatable memory.  I wonder whether it would be
possible to rearrange things so that we can do a "streaming" encoding
conversion.  That is, if we have a large datum that we're trying to
send back to the client, could we perhaps chop off the first 50MB or
so, do the encoding on that amount of data, send the data to the
client, lather, rinse, repeat?
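
To make the idea concrete, here is a rough standalone sketch of that
chunk-convert-send loop.  This is not the backend's conversion code; it
uses POSIX iconv() and an arbitrary chunk size purely for illustration.
The per-chunk 4x output buffer mirrors the quadruple-allocation above,
but the point is that peak memory is now bounded by the chunk size, not
the datum size.  The one fiddly part is a multibyte character split
across a chunk boundary, which has to be carried over into the next
chunk rather than converted where it stands:

/*
 * Illustration only -- not PostgreSQL code.  POSIX iconv() reports an
 * incomplete multibyte sequence at the end of the input buffer with
 * EINVAL, which lets us carry the trailing bytes into the next chunk
 * instead of trying to convert a character that was split in half.
 */
#include <errno.h>
#include <iconv.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK_SIZE  (50 * 1024 * 1024)   /* "the first 50MB or so" */

static int
stream_convert(FILE *in, FILE *out, const char *from_enc, const char *to_enc)
{
    iconv_t cd = iconv_open(to_enc, from_enc);
    char   *inbuf;
    char   *outbuf;
    size_t  outbuf_size = (size_t) CHUNK_SIZE * 4;  /* worst-case growth */
    size_t  carry = 0;      /* bytes of an incomplete trailing character */

    if (cd == (iconv_t) -1)
        return -1;
    inbuf = malloc(CHUNK_SIZE);
    outbuf = malloc(outbuf_size);
    if (inbuf == NULL || outbuf == NULL)
        goto fail;

    for (;;)
    {
        size_t  nread = fread(inbuf + carry, 1, CHUNK_SIZE - carry, in);
        char   *inp = inbuf;
        char   *outp = outbuf;
        size_t  inleft = carry + nread;
        size_t  outleft = outbuf_size;

        if (inleft == 0)
            break;          /* all input converted and sent */

        if (iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t) -1 &&
            errno != EINVAL)
            goto fail;      /* genuinely invalid byte sequence */

        /* Ship this chunk to the client before touching the next one. */
        fwrite(outbuf, 1, outbuf_size - outleft, out);

        /* Carry an incomplete trailing character over to the next chunk. */
        carry = inleft;
        memmove(inbuf, inp, carry);

        if (nread == 0)
            break;          /* EOF with a dangling partial character */
    }

    free(inbuf);
    free(outbuf);
    iconv_close(cd);
    return 0;

fail:
    free(inbuf);
    free(outbuf);
    iconv_close(cd);
    return -1;
}

int
main(void)
{
    /* e.g. re-encode a dump on stdin from EUC-JP to UTF-8, 50MB at a time */
    return stream_convert(stdin, stdout, "EUC-JP", "UTF-8") == 0 ? 0 : 1;
}

Doing the same thing inside the server would of course mean finding the
character boundary ourselves before handing each slice to the
conversion routine, but the shape of the loop is the same.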

Your recent work to increase the maximum possible size of large
objects (for which I thank you) seems like it could make these sorts
of issues more common.  As objects get larger, I don't think we can go
on assuming that it's OK for peak memory utilization to keep hitting
5x the object size or more.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

