Well, we dropped a couple of the indices that weren't being used, and also
dropped an index that included the 'state' column. These states are not
geographic states, they're process states, and there are fewer than 10
unique values. Our guess is that 10 unique values spread across 160K rows
made the index a net loss. EXPLAIN now shows that the query is being done
with an index scan, and it's back to a _very_ respectable time (2 seconds
to return 1550 rows).
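In case it's useful to anyone searching the archives later, the steps were
roughly the following -- the table, column values, and index name here are
placeholders rather than our real schema:

  -- Check how selective 'state' actually is (hypothetical table name).
  SELECT count(DISTINCT state) AS distinct_states,
         count(*)              AS total_rows
  FROM   tickets;

  -- Fewer than 10 distinct values over ~160K rows, so the index was
  -- doing more harm than good; drop it along with the unused ones.
  DROP INDEX tickets_state_idx;

  -- Confirm the planner now picks an index scan for the slow query
  -- (the WHERE clause below is only an example, not our real query).
  EXPLAIN
  SELECT * FROM tickets WHERE owner = 'snyder' ORDER BY entered LIMIT 100;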
Stephan, thank you very much for your time and effort.
-ron
> -----Original Message-----
> From: Ron Snyder
> Sent: Thursday, March 14, 2002 6:33 PM
> To: 'Stephan Szabo'; Ron Snyder
> Cc: 'pgsql-general@postgresql.org'
> Subject: RE: [GENERAL] Unexplainable slow down...
>
>
> > >
> > > OK, I'm now more confused. What do I do next to figure out why
> > > postgres isn't choosing the better query? We're running a vacuum
> > > analyze every night-- do I need to tweak the weights so that
> > > seq_scan is less likely?
> >
> > Hmm, I'm not sure what the best answer is for this; it's getting
> > beyond my depth. I'd guess that it's possible that it's
> > overestimating the number of reads necessary to do the index scan
> > because the rows are clustered together, which would make it
> > overestimate the index scan cost, and/or it could be underestimating
> > the cost of the sequential scan/limit as well (for example, if the
> > rows you want are late in the table, it's going to underestimate the
> > final cost, I think).
>
> Hmm, I don't know if it would be related, but the data only grows.
> Additionally, about a week ago we migrated all of this data
> (via pg_dump/pg_restore) from 7.1.3. Perhaps this is related?
>
> Thanks again for all the help!
>
> -ron
>
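
P.S. For the archives, re: the "tweak the weights" question quoted above --
the planner knobs can be flipped per-session to see whether the cost
estimates are the problem, without touching postgresql.conf. The query
below is just a placeholder:

  -- Temporarily forbid sequential scans and compare the plans/costs:
  SET enable_seqscan TO off;
  EXPLAIN SELECT * FROM tickets WHERE state = 'open';
  SET enable_seqscan TO on;

  -- Or nudge the cost model instead of disabling it outright
  -- (the default random_page_cost is 4):
  SET random_page_cost TO 2;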