Re: Large tables, ORDER BY and sequence/index scans - Mailing list pgsql-general

From Albe Laurenz
Subject Re: Large tables, ORDER BY and sequence/index scans
Date
Msg-id D960CB61B694CF459DCFB4B0128514C20393810B@exadv11.host.magwien.gv.at
Whole thread Raw
In response to Large tables, ORDER BY and sequence/index scans  (Milan Zamazal <pdm@brailcom.org>)
Responses Re: Large tables, ORDER BY and sequence/index scans  (Milan Zamazal <pdm@brailcom.org>)
List pgsql-general
Milan Zamazal wrote:
> My problem is that retrieving sorted data from large tables
> is sometimes
> very slow in PostgreSQL (8.4.1, FWIW).
>
> I typically retrieve the data using cursors, to display them in UI:
>
>   BEGIN;
>   DECLARE ... SELECT ... ORDER BY ...;
>   FETCH ...;
>   ...
>
> On a newly created table of about 10 million rows the FETCH command
> takes about one minute by default, with additional delay during any
> following COMMIT command.  This is because PostgreSQL uses a
> sequential scan on the table even when there is an index on the ORDER
> BY column.  When I force PostgreSQL to perform an index scan (e.g. by
> setting one of the options enable_seqscan or enable_sort to off), the
> FETCH response is immediate.
>
> The PostgreSQL manual explains the motivation for sequential scans of
> large tables, and I can understand it.  Nevertheless, such behavior
> leads to unacceptably poor performance in my particular case.  It is
> important to get the first resulting rows quickly, to display them to
> the user without delay.
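(The session-local workaround described above might look like the following; the table and column names are illustrative, but enable_seqscan is a real planner setting, and SET LOCAL confines the change to the current transaction.)

```sql
BEGIN;
-- Discourage sequential scans for this transaction only.
SET LOCAL enable_seqscan = off;
DECLARE cur CURSOR FOR SELECT * FROM big_table ORDER BY indexed_col;
FETCH 100 FROM cur;   -- served from the index, first rows return quickly
CLOSE cur;
COMMIT;
```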

Did you try to reduce the cursor_tuple_fraction parameter?
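For example (cursor_tuple_fraction is the actual planner parameter, available in 8.4; the table and column names here are only illustrative):

```sql
-- Tell the planner to optimize cursor queries for fast retrieval of the
-- first 1% of the rows, instead of the default 10%.  A low value biases
-- the planner toward fast-startup plans such as an index scan, rather
-- than a sequential scan followed by a sort.
SET cursor_tuple_fraction = 0.01;

BEGIN;
DECLARE cur CURSOR FOR SELECT * FROM big_table ORDER BY indexed_col;
FETCH 100 FROM cur;   -- should now prefer the index and respond promptly
COMMIT;
```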

Yours,
Laurenz Albe
