Re: [HACKERS] PL/Perl list value return causes segfault - Mailing list pgsql-docs

From Andrew Dunstan
Subject Re: [HACKERS] PL/Perl list value return causes segfault
Date
Msg-id 42EB850E.9080205@dunslane.net
In response to Re: [HACKERS] PL/Perl list value return causes segfault  (David Fetter <david@fetter.org>)
List pgsql-docs

David Fetter wrote:

>*** 716,724 ****
>
>      <listitem>
>       <para>
>!       In the current implementation, if you are fetching or returning
>!       very large data sets, you should be aware that these will all go
>!       into memory.
>       </para>
>      </listitem>
>     </itemizedlist>
>--- 766,776 ----
>
>      <listitem>
>       <para>
>!       If you are fetching or returning very large data sets using
>!       <literal>spi_exec_query</literal>, you should be aware that
>!       these will all go into memory.  You can avoid this by using
>!       <literal>spi_query</literal>/<literal>spi_fetchrow</literal> as
>!       illustrated earlier.
>       </para>
>      </listitem>
>     </itemizedlist>
>
>
>
>

You have rolled two problems into one: spi_query/spi_fetchrow does not
address the issue of returning large data sets.

Suggest instead:

<para>
      If you are fetching very large data sets using
      <literal>spi_exec_query</literal>, you should be aware that
      these will all go into memory.  You can avoid this by using
      <literal>spi_query</literal> and <literal>spi_fetchrow</literal>
      as illustrated earlier.
</para>
<para>
      A similar problem occurs if a set-returning function passes a
      large set of rows back to postgres via <literal>return</literal>.
      You can avoid this problem too by instead using
      <literal>return_next</literal> for each row returned, as shown
      previously.
</para>
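
For illustration, a minimal sketch of the streaming style both paragraphs
refer to (the function name and query here are just placeholders, not part
of the proposed doc text):

    CREATE OR REPLACE FUNCTION stream_relnames() RETURNS SETOF text AS $$
        # Fetch rows one at a time with spi_query/spi_fetchrow instead of
        # materializing the whole result set, and hand each row back with
        # return_next instead of building the whole return set in memory.
        my $sth = spi_query("SELECT relname FROM pg_class");
        while (defined(my $row = spi_fetchrow($sth))) {
            return_next($row->{relname});
        }
        return;
    $$ LANGUAGE plperl;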




cheers

andrew
