On Tue, 12 Feb 2002, Tom Lane wrote:
> Holger Marzen <holger@marzen.de> writes:
> > Are 152428 rows not enough to use the index?
>
> More like "there are too many rows to use the index".
>
> You've got a query that is estimated to hit 22322/152428 = 15% of the
> table. If the rows in question are uniformly scattered through the
> table then the system will certainly have to read every block of the
> table to get them all. It may as well read the table sequentially,
> rather than do the extra I/O to read the index too.
You're right. When the result set is small, the index is used. When the
number of rows is a little higher and postgres switches to a sequential
scan, the query is much slower. The table is mostly in order, so the
matching rows are probably not scattered across many blocks. Can I
force postgres to raise the threshold at which it starts scanning the
whole table?
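For what it's worth, the planner's choice can be influenced per session with the real settings `enable_seqscan` and `random_page_cost`; a rough sketch (the SELECT is a placeholder for your actual query, and the values shown are illustrative, not recommendations):

```sql
-- Temporarily forbid sequential scans, to check whether the
-- index plan really is faster for this query:
SET enable_seqscan = off;
EXPLAIN SELECT * FROM mytable WHERE mycol = 42;  -- hypothetical query
SET enable_seqscan = on;

-- A gentler knob: lower the planner's cost estimate for random
-- page reads (default 4.0), which makes index scans look cheaper.
-- Reasonable if the table is mostly ordered or well cached:
SET random_page_cost = 2;
```

If the index plan does win with `enable_seqscan = off`, adjusting `random_page_cost` (or keeping the table ordered on the index column) is the less heavy-handed fix than disabling seqscans outright.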
--
PGP/GPG Key-ID:
http://blackhole.pca.dfn.de:11371/pks/lookup?op=get&search=0xB5A1AFE1