Re: why doesn't optimizer can pull up where a > ( ... ) - Mailing list pgsql-hackers

From Tomas Vondra
Subject Re: why doesn't optimizer can pull up where a > ( ... )
Msg-id 20191121101155.a4wn3vmiq2nbx2rk@development
In response to Re: why doesn't optimizer can pull up where a > ( ... )  (Andy Fan <zhihui.fan1213@gmail.com>)
List pgsql-hackers
On Thu, Nov 21, 2019 at 08:30:51AM +0800, Andy Fan wrote:
>>
>>
>> Hm.  That actually raises the stakes a great deal, because if that's
>> what you're expecting, it would require planning out both the transformed
>> and untransformed versions of the query before you could make a cost
>> comparison.
>
>
>I don't know an official name, so let's call it "bloom filter push
>down (BFPD)" for reference.  This algorithm may be helpful in this case
>with some extra effort.
>
>First, Take . "select ... from t1,  t2 where t1.a = t2.a and t1.b = 100"
>for example,  and assume t1 is scanned before t2 scanning, like hash
>join/sort merge and take t1's result as inner table.
>
>1.  It first scans t1 with the filter t1.b = 100;
>2.  During the above scan, it builds a bloom filter *based on the join
>key (t1.a) for the "selected" rows.*
>3.  During the scan of t2, it filters t2.a with the bloom filter.
>4.  It probes the hash table with the filtered rows from the above step.
>

So essentially just a hash join with a bloom filter? That doesn't seem
very relevant to this thread (at least I don't see any obvious link),
but note that this has been discussed in the past - see [1]. In some
cases building a bloom filter did result in nice speedups, while in
other cases it was just extra overhead. Unlike the optimization
discussed here, though, it does not require changing the plan shape.

[1] https://www.postgresql.org/message-id/flat/5670946E.8070705%402ndquadrant.com

Ultimately there were discussions about pushing the bloom filter much
deeper on the non-hash side, but that was never implemented.
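
To make the quoted steps a bit more concrete, here is a minimal,
self-contained sketch of that join shape (toy C, not PostgreSQL
internals - the row layout, the two hash functions and the fixed-size
filter are all simplified assumptions):

/*
 * Toy sketch: hash join with a Bloom filter on the join key.
 * Steps 1+2 scan t1 (applying t1.b = 100) and build the hash table and
 * the filter on t1.a; steps 3+4 scan t2 and consult the filter before
 * probing the hash table.
 */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define NBUCKETS   1024                 /* hash table buckets */
#define BLOOM_BITS 8192                 /* bits in the Bloom filter */

typedef struct Row { int a; int b; struct Row *next; } Row;

static Row     *hash_table[NBUCKETS];
static uint8_t  bloom[BLOOM_BITS / 8];

/* two cheap hash functions over the join key */
static uint32_t hash1(int key) { return (uint32_t) key * 2654435761u; }
static uint32_t hash2(int key) { return ((uint32_t) key ^ 0x9e3779b9u) * 40503u; }

static void bloom_add(int key)
{
    uint32_t h1 = hash1(key) % BLOOM_BITS, h2 = hash2(key) % BLOOM_BITS;
    bloom[h1 / 8] |= 1 << (h1 % 8);
    bloom[h2 / 8] |= 1 << (h2 % 8);
}

static bool bloom_might_contain(int key)
{
    uint32_t h1 = hash1(key) % BLOOM_BITS, h2 = hash2(key) % BLOOM_BITS;
    return (bloom[h1 / 8] & (1 << (h1 % 8))) && (bloom[h2 / 8] & (1 << (h2 % 8)));
}

int main(void)
{
    static Row t1[] = { {1, 100, NULL}, {2, 100, NULL}, {3, 50, NULL} };
    static Row t2[] = { {1, 0, NULL}, {3, 0, NULL}, {42, 0, NULL} };

    /* steps 1+2: scan t1, apply t1.b = 100, build hash table + Bloom filter */
    for (size_t i = 0; i < sizeof(t1) / sizeof(t1[0]); i++)
    {
        if (t1[i].b != 100)
            continue;
        uint32_t bucket = hash1(t1[i].a) % NBUCKETS;
        t1[i].next = hash_table[bucket];
        hash_table[bucket] = &t1[i];
        bloom_add(t1[i].a);
    }

    /* steps 3+4: scan t2, check the filter before probing the hash table */
    for (size_t i = 0; i < sizeof(t2) / sizeof(t2[0]); i++)
    {
        if (!bloom_might_contain(t2[i].a))
            continue;                   /* cheap reject, no probe needed */
        for (Row *r = hash_table[hash1(t2[i].a) % NBUCKETS]; r; r = r->next)
            if (r->a == t2[i].a)
                printf("match: t1.a = t2.a = %d\n", r->a);
    }
    return 0;
}

The point is just that the filter gives a cheap way to reject most
non-matching t2 rows before touching the hash table at all; none of
this changes the plan shape.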

>Back to this problem: if we have a chance to get the p_brand values we
>are interested in, we can use the same logic to group by only those
>p_brand values.
>
>Another option may be to just keep the N versions, plan each of them
>separately, and compare their costs at the end.
>

Maybe. I think the problem is going to be that with multiple such
correlated subqueries you may significantly increase the number of plan
variants to consider - each subquery may either be transformed or not,
so the space doubles with each one. With 6 such subqueries you suddenly
have 64x the number of plan variants to consider (and I don't think you
can just eliminate those early on).
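
Just as a toy illustration of that growth (nothing planner-specific,
the numbers are only the combinatorics):

#include <stdio.h>

/* each subquery can be transformed or left alone, so n such subqueries
 * yield 2^n plan-shape combinations to cost */
int main(void)
{
    for (int n = 1; n <= 6; n++)
        printf("%d subqueries -> %d plan-shape combinations\n", n, 1 << n);
    return 0;   /* n = 6 gives 64, i.e. the 64x above */
}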

>>  The Greenplum page mentions they also added "join-aggregates
>>reordering", in addition to subquery unnesting.
>Thanks, I will read more about this.
>
>>Having said that, the best form of criticism is a patch.  If somebody
>>actually wrote the code to do something like this, we could look at how
>>much time it wasted in which unsuccessful cases and then have an
>>informed discussion about whether it was worth adopting.
>>
>
>I would try to see how far I can get.

+1

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 


