Re: pg_trgm indexes giving bad estimations? - Mailing list pgsql-performance

From: Ben
Subject: Re: pg_trgm indexes giving bad estimations?
Msg-id: Pine.LNX.4.64.0610312106260.5452@GRD.cube42.tai.silentmedia.com
In response to: Re: pg_trgm indexes giving bad estimations? (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-performance
Now that I have a little time to work on this again, I've thought about
it, and it seems an easy and somewhat accurate cop-out is to use
whatever selectivity function the LIKE operator uses, multiplied by a
scalar that pg_trgm should already have access to.

Unfortunately, it's not at all clear to me from reading
http://www.postgresql.org/docs/8.1/interactive/xoper-optimization.html#AEN33077
how LIKE implements its selectivity. Any pointers on where to look?
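
In the meantime, here's roughly the shape I had in mind. It's only a
sketch, not tested: it assumes an 8.1-era tree where the built-in LIKE
estimator likesel() is declared in utils/builtins.h, the name trgmsel
and the 2.0 fudge factor are made up, and feeding likesel() a trigram
query as if it were a LIKE pattern is exactly the inaccuracy I'd be
accepting:

#include "postgres.h"
#include "fmgr.h"
#include "utils/builtins.h"    /* declares likesel() in 8.1 */

PG_FUNCTION_INFO_V1(trgmsel);

/*
 * Sketch of a restriction selectivity estimator for pg_trgm's %
 * operator.  likesel() takes the same four planner arguments we
 * receive (root, operator OID, argument list, varRelid), so we can
 * delegate to it wholesale and then scale its answer.
 */
Datum
trgmsel(PG_FUNCTION_ARGS)
{
    /* Made-up scale factor; ideally this would be derived from the
     * trigram matching threshold instead of being hard-coded. */
    const float8 fudge = 2.0;
    float8      sel;

    /* likesel() will treat the right-hand constant as a LIKE
     * pattern, which is wrong for %, but close enough for now. */
    sel = DatumGetFloat8(likesel(fcinfo)) * fudge;

    /* Clamp to the legal selectivity range. */
    if (sel > 1.0)
        sel = 1.0;

    PG_RETURN_FLOAT8(sel);
}

That would still need the usual CREATE FUNCTION trgmsel(internal, oid,
internal, integer) RETURNS float8 ... LANGUAGE C declaration, and the
% operator recreated with RESTRICT = trgmsel in place of contsel.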

On Wed, 4 Oct 2006, Tom Lane wrote:

> Ben <bench@silentmedia.com> writes:
>> How can I get the planner to not expect so many rows to be returned?
>
> Write an estimation function for the pg_trgm operator(s).  (Send in a
> patch if you do!)  I see that % is using "contsel" which is only a stub,
> and would likely be wrong for % even if it weren't.
>
>> A possibly related question is: because pg_trgm lets me set the
>> matching threshold of the % operator, how does that affect the planner?
>
> It hasn't a clue about that.
>
>             regards, tom lane
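
Incidentally, the threshold I asked about above is the one pg_trgm
exposes through its set_limit() and show_limit() functions. For
example (0.5 is just an illustrative value):

    SELECT show_limit();    -- current % match threshold
    SELECT set_limit(0.5);  -- change it for this session

Presumably a real estimator would want to fold that value in rather
than hard-code a scale factor.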
