In response to Gregory Stark <stark@enterprisedb.com>:
> "Bill Moran" <wmoran@collaborativefusion.com> writes:
>
> > In response to Matthew <matthew@flymine.org>:
> >
> >> On Tue, 27 Nov 2007, Pablo Alcaraz wrote:
> >> > it would be nice to do something with selects so we can retrieve a rowset
> >> > from huge tables using criteria on indexed columns without falling back to
> >> > running a full scan.
> >>
> >> You mean: Be able to tell Postgres "Don't ever do a sequential scan of
> >> this table. It's silly. I would rather the query failed than have to wait
> >> for a sequential scan of the entire table."
> >>
> >> Yes, that would be really useful, if you have huge tables in your
> >> database.
> >
> > Is there something wrong with:
> > set enable_seqscan = off
> > ?
>
> This does kind of the opposite of what you would actually want here. What you
> want is that if you give it a query which would be best satisfied by a
> sequential scan, it should throw an error, since you've obviously made an
> error in the query.
>
> What this does is force such a query to use an even *slower* method, such
> as a large index scan. In cases where there isn't any other method, it goes
> ahead and does the sequential scan anyway.
Ah. I misunderstood the intent of the comment.
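
For anyone following along, here is a quick way to see the behavior Gregory
describes (the table and column names below are just hypothetical examples):

    SET enable_seqscan = off;

    -- If there is an index on indexed_col, the planner now strongly
    -- prefers it, even when a sequential scan would actually be cheaper:
    EXPLAIN SELECT * FROM huge_table WHERE indexed_col = 42;

    -- If the predicate has no usable index, the plan still shows a
    -- Seq Scan (just with an enormous cost estimate), because the setting
    -- only discourages sequential scans in the planner; it never forbids
    -- them or raises an error:
    EXPLAIN SELECT * FROM huge_table WHERE unindexed_col = 42;

So enable_seqscan = off can't give the "fail instead of scanning the whole
table" behavior that was asked for.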
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
wmoran@collaborativefusion.com
Phone: 412-422-3463x4023