Re: surprising query optimisation - Mailing list pgsql-general

From Chris Withers
Subject Re: surprising query optimisation
Date
Msg-id ab12dfe6-b6c6-95b8-6f4c-b65d12011826@withers.org
In response to Re: surprising query optimisation  (Stephen Frost <sfrost@snowman.net>)
Responses Re: surprising query optimisation  (Ron <ronljohnsonjr@gmail.com>)
List pgsql-general
On 05/12/2018 14:38, Stephen Frost wrote:
> Greetings,
> 
> * Chris Withers (chris@withers.org) wrote:
>> On 30/11/2018 15:33, Stephen Frost wrote:
>>> * Chris Withers (chris@withers.org) wrote:
>>>> On 28/11/2018 22:49, Stephen Frost wrote:
>>> For this, specifically, it's because you end up with exactly what you
>>> have: a large index with tons of duplicate values.  Indexes are
>>> particularly good when you have high-cardinality fields.  Now, if you
>>> have a skewed index, where there's one popular value and a few much less
>>> popular ones, then that's where you really want a partial index (as I
>>> suggested earlier) so that queries against the non-popular value(s) are
>>> able to use the index and the index is much smaller.
>>
>> Interesting! In my head, for some reason, I'd always assumed a btree index
>> would break down a char field based on the characters within it. Does that
>> never happen?
> 
> Not sure what you mean by 'break down a char field'.

Rather than breaking into three buckets ('NEW', 'ACK', 'RSV'), I'd assumed 
it built a more complicated hierarchy ('N', 'NE', 'A', 'AC', etc.).
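
For reference, I think the partial index you suggested earlier would look 
something along these lines (table and column names here are made up, just 
to illustrate the idea):

   -- the index only contains rows for the rare statuses, so it stays
   -- small, and a query repeating the same condition can use it
   CREATE INDEX alerts_status_not_new_idx
       ON alerts (status)
       WHERE status <> 'NEW';

   -- e.g. this should be able to use the partial index:
   SELECT count(*) FROM alerts WHERE status <> 'NEW';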

>> If I changed this to be an enum field, would != still perform poorly, or can
>> the query optimiser spot that it's an enum and just look for the other
>> options?
> 
> I don't believe we've got any kind of optimization like that today for
> enums.

Good to know. I see query optimisers as magic, and postgres often seems 
to achieve magical results ;-)
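
For the archives: since the planner doesn't special-case enums for !=, a 
workaround that should help here (same made-up names as above, with 'NEW' 
as the popular value) is to list the other values explicitly rather than 
excluding the popular one, since an IN list is something the planner can 
use an ordinary btree index for:

   -- equivalent to status != 'NEW' when the only possible values are
   -- 'NEW', 'ACK' and 'RSV', but the IN list can become an index scan
   SELECT count(*) FROM alerts WHERE status IN ('ACK', 'RSV');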

Chris

