Re: Large Tables(>1 Gb) - Mailing list pgsql-general

From: Tom Lane
Subject: Re: Large Tables(>1 Gb)
Date:
Msg-id: 19026.962379136@sss.pgh.pa.us
In response to: Re: Large Tables(>1 Gb)  (Denis Perchine <dyp@perchine.com>)
List: pgsql-general

Denis Perchine <dyp@perchine.com> writes:
> 2. Use limit & offset capability of postgres.

> select * from big_table limit 1000 offset 0;
> select * from big_table limit 1000 offset 1000;

This is a risky way to do it --- the Postgres optimizer considers
limit/offset when choosing a plan, and is quite capable of choosing
different plans that yield different tuple orderings depending on the
size of the offset+limit.  For a plain SELECT as above you'd probably
be safe enough, but in more complex cases such as having potentially-
indexable WHERE clauses you'll very likely get bitten, unless you have
an ORDER BY clause to guarantee a unique tuple ordering.
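
For illustration, a minimal sketch of the paging pattern with an ORDER BY
added (assuming, hypothetically, that big_table has a unique key column
"id"); ordering on a unique key pins down the tuple ordering, so
consecutive pages cannot overlap or skip rows even if the planner picks a
different plan for different offsets:

    -- order by a unique key so the ordering is stable across queries
    select * from big_table order by id limit 1000 offset 0;
    select * from big_table order by id limit 1000 offset 1000;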

Another advantage of FETCH (reading the rows from a cursor) is that you
get a consistent result set even if other backends are modifying the
table, since it all happens within one transaction.
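
A minimal sketch of that cursor-plus-FETCH approach (the cursor name and
batch size here are illustrative); every FETCH runs inside the same
transaction and therefore sees one consistent snapshot of the table:

    begin;
    declare big_cur cursor for select * from big_table;
    fetch 1000 from big_cur;   -- first batch
    fetch 1000 from big_cur;   -- next batch, same snapshot
    close big_cur;
    commit;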

            regards, tom lane
