Re: Limits on PostgreSQL - Mailing list pgsql-general

From: Tom Lane
Subject: Re: Limits on PostgreSQL
Msg-id: 536.959295676@sss.pgh.pa.us
In response to: Limits on PostgreSQL (Marcos Barreto de Castro <mbdecastro@yahoo.com>)
List: pgsql-general

Marcos Barreto de Castro <mbdecastro@yahoo.com> writes:
> 1 - How many tuples can be returned as a result of a
> query using a CURSOR? (Is it possible to do a SELECT
> * on a table that has 2 million records and OPEN a
> CURSOR for that SELECT and show all records' contents
> using FETCH FORWARD, for example?)

You probably wouldn't want to fetch all 2 million rows in one FETCH,
because you'd risk running out of memory on the client side.  But as
long as you grab a reasonable number of rows per FETCH command, it works
fine.  This is in fact the recommended method for dealing with extremely
large SELECT results.
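
A minimal sketch of that pattern (the table name "bigtable" and the
batch size are placeholders; note that a cursor must be declared
inside a transaction block):

    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM bigtable;
    -- repeat this until it returns fewer than 1000 rows:
    FETCH FORWARD 1000 FROM c;
    CLOSE c;
    COMMIT;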

>  2 - When one uses a CURSOR for a SELECT, is there
> heavy memory consumption, or is there a memory buffer
> limit beyond which the result is written to a file
> (virtual memory) and read from there when needed?

Cursors work just the same as plain evaluation of the query would,
except that the query execution is suspended after having fetched the
requested number of tuples.  There isn't any special memory requirement
on the backend side.

Some types of queries need temporary files (an explicit sort of a large
volume of data is an example).  But that'll be just the same whether you
run them via a cursor or a plain SELECT.
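
For instance (table and column names here are illustrative), both of
the following perform the same sort, and spill to temporary files in
the same way if the data exceeds the sort buffer; with the cursor, the
sort simply completes before the first FETCH returns:

    -- plain SELECT: the sort finishes before any rows come back
    SELECT * FROM bigtable ORDER BY somecol;

    -- via a cursor: the same sort runs before the first FETCH returns
    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM bigtable ORDER BY somecol;
    FETCH FORWARD 100 FROM c;
    CLOSE c;
    COMMIT;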

            regards, tom lane
