Re: BUG #18775: PQgetCopyData always has an out-of-memory error if the table field stores bytea ~700 MB - Mailing list pgsql-bugs

From Tom Lane
Subject Re: BUG #18775: PQgetCopyData always has an out-of-memory error if the table field stores bytea ~700 MB
Date
Msg-id 1849376.1737055086@sss.pgh.pa.us
In response to Re: BUG #18775: PQgetCopyData always has an out-of-memory error if the table field stores bytea ~700 MB  (Ilya Knyazev <knuazev@gmail.com>)
List pgsql-bugs
Ilya Knyazev <knuazev@gmail.com> writes:
> But I know that there may not be enough memory, so I use the "copy" keyword
> in the query and the PQgetCopyData function. I thought that this function
> was designed for portioned work. By analogy with the PQputCopyData
> function, which works fine.

Its documentation is fairly clear, I thought:

       Attempts to obtain another row of data from the server during a
       <command>COPY</command>.  Data is always returned one data row at
       a time; if only a partial row is available, it is not returned.
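To make the implication concrete: here is a rough, untested sketch of the usual `PQgetCopyData` read loop (`big_table` is a placeholder name, not from this thread). Each successful call hands back one complete data row in a single buffer allocated by libpq, so a row containing a ~700 MB bytea value requires one allocation of at least that size, which is where the out-of-memory error comes from:

```c
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("");   /* connection parameters from environment */
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    PGresult *res = PQexec(conn, "COPY big_table TO STDOUT");
    if (PQresultStatus(res) != PGRES_COPY_OUT)
    {
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 1;
    }
    PQclear(res);

    char *buf;
    int   len;
    while ((len = PQgetCopyData(conn, &buf, 0)) > 0)
    {
        /* buf holds one *entire* data row, malloc'd by libpq --
         * this is the allocation that fails for a huge field */
        fwrite(buf, 1, len, stdout);
        PQfreemem(buf);
    }
    if (len == -2)
        fprintf(stderr, "PQgetCopyData: %s", PQerrorMessage(conn));

    while ((res = PQgetResult(conn)) != NULL)   /* drain final COPY status */
        PQclear(res);
    PQfinish(conn);
    return 0;
}
```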

If you need to work with data values that are large enough to risk
memory problems, I think "large objects" are the best answer.  Their
interface is a bit clunky, but it's at least designed to let you
both read and write by chunks.
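For comparison, a rough sketch of chunked reading with the large-object client functions (the OID argument and 1 MB chunk size are illustrative assumptions, not from this thread). Memory use stays bounded by the chunk size regardless of how big the stored value is; note that large-object operations must run inside a transaction:

```c
#include <stdio.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>   /* INV_READ */

#define CHUNK (1024 * 1024)   /* read 1 MB at a time */

static void read_lo_in_chunks(PGconn *conn, Oid loid)
{
    PGresult *res = PQexec(conn, "BEGIN");   /* LO ops require a transaction */
    PQclear(res);

    int fd = lo_open(conn, loid, INV_READ);
    if (fd < 0)
    {
        fprintf(stderr, "lo_open: %s", PQerrorMessage(conn));
        PQclear(PQexec(conn, "ROLLBACK"));
        return;
    }

    char buf[CHUNK];
    int  n;
    while ((n = lo_read(conn, fd, buf, CHUNK)) > 0)
    {
        /* process n bytes here; only CHUNK bytes are ever in memory */
        fwrite(buf, 1, n, stdout);
    }

    lo_close(conn, fd);
    res = PQexec(conn, "COMMIT");
    PQclear(res);
}
```

Writing works symmetrically with `lo_creat`/`lo_open(..., INV_WRITE)` and repeated `lo_write` calls.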

            regards, tom lane


