Re: TB-sized databases - Mailing list pgsql-performance

From: Gregory Stark
Subject: Re: TB-sized databases
Date:
Msg-id: 871waah3ux.fsf@oxford.xeocode.com
In response to: Re: TB-sized databases (Bill Moran <wmoran@collaborativefusion.com>)
Responses: Re: TB-sized databases, Re: TB-sized databases
List: pgsql-performance
"Bill Moran" <wmoran@collaborativefusion.com> writes:

> In response to Matthew <matthew@flymine.org>:
>
>> On Tue, 27 Nov 2007, Pablo Alcaraz wrote:
>> > it would be nice to do something with selects so we can retrieve a rowset
>> > from huge tables using criteria with indexes, without falling back to
>> > running a full scan.
>>
>> You mean: Be able to tell Postgres "Don't ever do a sequential scan of
>> this table. It's silly. I would rather the query failed than have to wait
>> for a sequential scan of the entire table."
>>
>> Yes, that would be really useful, if you have huge tables in your
>> database.
>
> Is there something wrong with:
> set enable_seqscan = off
> ?

This does kind of the opposite of what you would actually want here. What you
want is for Postgres to throw an error when you give it a query that would be
best satisfied by a sequential scan, since in that case you've obviously made
a mistake in the query.

What this setting actually does is force such a query to use an even *slower*
method, such as a large index scan. In cases where there isn't any other
method, it goes ahead and does the sequential scan anyway.
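
To make that concrete, here is a rough sketch of the behaviour; the table and
column names are hypothetical and the cost figures only indicative:

    -- Disabling seqscans only penalizes them in the planner's cost
    -- estimates; it does not forbid them.
    SET enable_seqscan = off;

    -- With no usable index on the predicate column, the planner still
    -- picks a sequential scan, just with an enormous cost estimate:
    EXPLAIN SELECT * FROM huge_table WHERE unindexed_col = 42;
    --  Seq Scan on huge_table  (cost=10000000000.00..10000000859.00 ...)
    --    Filter: (unindexed_col = 42)

And when an index does exist but a seqscan would genuinely be cheaper, the
setting pushes the planner onto the slower index path rather than raising any
kind of error.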

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's PostGIS support!
