On Fri, 5 Dec 2003, Andrei Bintintan wrote:
> There are around 700 rows in this table.
> If I set enable_seqscan=off then the index is used and I also used Vacuum
> Analyze recently.
>
> I find it strange because the number of values of id_user and id_modull are
> somehow in the same distribution and when I search the table the id_user
> index is used but the id_modull index is not used.
The planner was guessing that one query would return 11 rows and the other
42, which is why one used the index and the other didn't. If those
estimates aren't realistic, you may want to raise the statistics target for
the columns (see ALTER TABLE ... SET STATISTICS) and re-run ANALYZE.
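To illustrate, assuming the column names from your question (id_user and
id_modull) and a hypothetical table name "your_table", raising the
statistics target might look like this:

```sql
-- Collect more detailed statistics for the two columns.
-- (100 is just an example target; the default is 10.)
ALTER TABLE your_table ALTER COLUMN id_user SET STATISTICS 100;
ALTER TABLE your_table ALTER COLUMN id_modull SET STATISTICS 100;

-- Re-collect statistics so the planner sees the new targets.
ANALYZE your_table;
```

Afterwards, run EXPLAIN on the queries again and check whether the
estimated row counts are closer to what you actually get back.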
> Does PostgreSQL somehow know that a seq scan runs faster in this case than
> an index scan? Should I erase this index?
It's making an educated guess. When you're doing an index scan, it needs
to read through the index and then fetch the matching rows from the table.
Because those table reads happen in a potentially random order, they
usually carry a higher cost than reading the table in order (barring cases
where you know your database should always stay cached in the disk cache,
etc.).

If there are, say, 50 pages in the entire table, a sequential scan does 50
sequential page reads and checks all of those tuples. If you're fetching,
say, 42 rows through an index, you first read through the index and then
fetch <n> pages in random order from the table, where <n> depends on the
distribution of values throughout the table. There's a configuration
variable, random_page_cost, which controls the ratio of the cost of a
random page read to a sequential one (defaulting to 4).
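You can watch this trade-off directly with EXPLAIN, and experiment with
random_page_cost per session before touching postgresql.conf. A sketch,
again using the hypothetical table name and an example filter value:

```sql
-- See which plan the planner picks with the default settings.
EXPLAIN SELECT * FROM your_table WHERE id_modull = 5;

-- Lower the assumed cost of a random read for this session only
-- (plausible if the table usually stays in cache), then compare.
SET random_page_cost = 2;
EXPLAIN SELECT * FROM your_table WHERE id_modull = 5;
```

If the index scan only wins with an unrealistically low random_page_cost,
the planner's preference for the sequential scan is probably right for a
700-row table, and dropping the index won't hurt these queries.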