Re: Performance with very large tables - Mailing list pgsql-general

From Richard Huxton
Subject Re: Performance with very large tables
Msg-id 45AB5EE2.70901@archonet.com
In response to Performance with very large tables  ("Jan van der Weijde" <Jan.van.der.Weijde@attachmate.com>)
Responses Re: Performance with very large tables  ("Shoaib Mir" <shoaibmir@gmail.com>)
List pgsql-general
Jan van der Weijde wrote:
> Hello all,
>
> one of our customers is using PostgreSQL with tables containing millions
> of records. A simple 'SELECT * FROM <table>'  takes way too much time in
> that case, so we have advised him to use the LIMIT and OFFSET clauses.

That won't reduce the time to fetch millions of rows.

It sounds like your customer doesn't want millions of rows at once, but
rather a few rows quickly and then to fetch more as required. For this
you want to use a cursor. You can do this via SQL, or perhaps via your
database library.

In SQL:
http://www.postgresql.org/docs/8.2/static/sql-declare.html
http://www.postgresql.org/docs/8.2/static/sql-fetch.html
In pl/pgsql:
http://www.postgresql.org/docs/8.2/static/plpgsql-cursors.html
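
As a minimal sketch in plain SQL (the table name "bigtable" is only for
illustration, and note that a cursor must be used inside a transaction
unless it is declared WITH HOLD):

    BEGIN;
    DECLARE mycur CURSOR FOR SELECT * FROM bigtable;
    FETCH 100 FROM mycur;   -- first 100 rows
    FETCH 100 FROM mycur;   -- next 100 rows, and so on
    CLOSE mycur;
    COMMIT;

Each FETCH returns quickly because the server only produces the rows
actually asked for, rather than materialising the whole result set.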

HTH
--
   Richard Huxton
   Archonet Ltd
