If I recall correctly, when the optimizer was changed (greatly enhanced),
there was a debate about what the default behavior should be. The problem
was that a large number of users would populate their database after
index creation and see sluggishness because the statistics had not yet
been updated, versus the much smaller number of users who would suffer at
the hands of an index scan against a table that would be better served by
a sequential scan. I *think* assuming 0 rows in a newly created
table, until the next vacuum, would have yielded a significant increase
in mailing-list traffic: complaints to the tune of:
"Why isn't PostgreSQL using my index?"
followed by the usual
"Did you run VACUUM ANALYZE?"
So an assumption of 1000 rows was made, with 10 rows matching your WHERE
clause.
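A minimal sketch of what that looks like in practice (the table, column,
and index names here are made up for illustration, and the costs are
placeholders):

    CREATE TABLE acc (id integer, name text);
    CREATE INDEX acc_id_idx ON acc (id);

    -- No VACUUM ANALYZE yet, so the planner falls back on its
    -- defaults (1000 rows in the table, 10 matching the qualifier):
    EXPLAIN SELECT * FROM acc WHERE id = 42;
    --  Index Scan using acc_id_idx on acc  (cost=... rows=10 ...)

    VACUUM ANALYZE acc;

    -- With real statistics, the planner sees a tiny table and
    -- switches to a sequential scan:
    EXPLAIN SELECT * FROM acc WHERE id = 42;
    --  Seq Scan on acc  (cost=... rows=1 ...)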
Mike Mascari
mascarm@mascari.com
-----Original Message-----
From: Daniel ?erud [SMTP:zilch@home.se]
Sent: Sunday, April 01, 2001 12:43 PM
To: pgsql-general@postgresql.org
Subject: Re: RE: Re: [GENERAL] Dissapearing indexes, what's that all about?
after a database refresh the explain yields:
index scan using xXxX (cost=0.00..8.14 rows=10 width=147)
after a vacuum + vacuum analyze the explain yields:
seq scan on acc xXxX (cost=0.00..1.23 rows=1 width=147)
humm, seems you are right here... but why is it choosing an
index scan in the first place then?
> What are the costs associated with the EXPLAIN output?
> Perhaps a sequential scan is *faster* than an index scan.
>
> Mike Mascari
> mascarm@mascari.com
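To illustrate Mike's quoted point about the costs: on a table this small,
one sequential page read is estimated to be cheaper than descending the
index and then fetching the heap page anyway. A hypothetical session,
reusing the illustrative names from the sketch above (enable_seqscan is
a real planner switch; the costs shown are placeholders):

    VACUUM ANALYZE acc;
    EXPLAIN SELECT * FROM acc WHERE id = 42;
    --  Seq Scan on acc  (cost=0.00..1.23 rows=1 width=147)

    -- Force the planner away from the sequential scan to see the
    -- cost of the alternative plan:
    SET enable_seqscan TO off;
    EXPLAIN SELECT * FROM acc WHERE id = 42;
    --  Index Scan using acc_id_idx on acc  (cost=... rows=1 ...)
    -- The index plan costs more because it reads index pages *and*
    -- the table page the seq scan would have read anyway.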