
From: Jeff Janes
Subject: Re: Bitmap scan is undercosted? - boolean correlation
Date:
Msg-id: CAMkU=1wABEn19sBkiSj5ZqmUjR5dcPc203avvth2vNhy72htrQ@mail.gmail.com
In response to: Re: Bitmap scan is undercosted? - boolean correlation (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-performance
On Tue, Dec 5, 2017 at 10:50 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Jeff Janes <jeff.janes@gmail.com> writes:
>> On Dec 3, 2017 15:31, "Tom Lane" <tgl@sss.pgh.pa.us> wrote:
>>> Jeff Janes <jeff.janes@gmail.com> writes:
>>>> But I do see that ties within the logical order of the column values are
>>>> broken to agree with the physical order.  That is wrong, right?  Is there
>>>> any argument that this is desirable?

>>> Uh ... what do you propose doing instead?  We'd have to do something with
>>> ties, and it's not so obvious this way is wrong.

>> Let them be tied.  If there are 10 distinct values, number the values 0 to
>> 9, and all rows of a given distinct value get the same number for the
>> logical order axis.
>> Calling the correlation 0.8 when it is really 0.0 seems obviously wrong to
>> me.  Although if we switched btree to store duplicate values with tid as a
>> tie breaker, then maybe it wouldn't be as obviously wrong.

> I thought some more about this.  What we really want the correlation stat
> to do is help us estimate how randomly an index-ordered scan will access
> the heap.

The correlation is also used in another place: estimating how much of the table we will visit at all.  If the correlation is very high, then scanning 10% of the index leaf pages means we will visit about 10% of the table.  If the correlation is low, then we use the Mackert and Lohman formula and (in the case of visiting 10% of the index) predict we will visit most of the table.  Assuming effective_cache_size is high, we will visit most of the table just once, but still in a random order, because subsequent visits to the same pages within the query will be found in cache, rather than those pages being fetched repeatedly and missed each time.
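
To put numbers on that, here is a toy standalone sketch of the Mackert and Lohman estimate, loosely following the formula that index_pages_fetched() in costsize.c uses (the effective_cache_size rescaling the real code does is omitted, and the table and cache sizes in main() are invented):

/*
 * Toy sketch of the Mackert-Lohman page-fetch estimate: given T table
 * pages and a number of tuple fetches in index order, with a cache of
 * b pages, roughly how many page fetches do we expect?
 */
#include <math.h>
#include <stdio.h>

static double
ml_pages_fetched(double tuples_fetched, double T, double b)
{
    double  pages_fetched;

    if (T <= b)
    {
        /* table fits in cache: each page is fetched at most once */
        pages_fetched =
            (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);
        if (pages_fetched > T)
            pages_fetched = T;
    }
    else
    {
        /* cache smaller than table: past the limit we start refetching */
        double  lim = (2.0 * T * b) / (2.0 * T - b);

        if (tuples_fetched <= lim)
            pages_fetched =
                (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);
        else
            pages_fetched = b + (tuples_fetched - lim) * (T - b) / T;
    }
    return ceil(pages_fetched);
}

int
main(void)
{
    /* 10000-page table, fetching 100000 tuples (10% of a million) */
    printf("cache >= table: %.0f page fetches\n",
           ml_pages_fetched(100000.0, 10000.0, 10000.0));
    printf("cache of 1000:  %.0f page fetches\n",
           ml_pages_fetched(100000.0, 10000.0, 1000.0));
    return 0;
}

With a big cache it predicts every one of the 10000 pages is fetched exactly once; with a small cache it predicts roughly 90000 fetches, i.e. mostly cache misses.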

In addition to estimating how much of the table we visit, we also estimate how "sequential-like" those visits are, which is the use that you describe.  Ideally, for that use case, we would know, for each distinct value, how well correlated the tids are with the leaf page ordering.  If the index is freshly built, that correlation is very high.  We visit 1/10 of the index, which causes us to visit 100% of the table, but in perfect order, plucking 1/10 of the tuples from each table page.

But visiting 100% of the table in physical order to pluck out 10% of the tuples from each page is quite different from visiting 10% of the table pages in physical order to pluck out 100% of the tuples from those pages and 0% from the pages not visited.
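
To put invented numbers on that difference:

/* invented numbers contrasting the two "sequential" access patterns */
#include <stdio.h>

int
main(void)
{
    double  pages = 1000.0;         /* heap pages in the table */
    double  tuples_per_page = 100.0;
    double  selectivity = 0.10;     /* fraction of tuples wanted */

    /* matching rows scattered evenly: read every page, keep 10 tuples each */
    printf("scattered: %.0f pages read, %.0f tuples kept per page\n",
           pages, selectivity * tuples_per_page);

    /* matching rows physically clustered: read 100 pages, keep all 100 each */
    printf("clustered: %.0f pages read, %.0f tuples kept per page\n",
           selectivity * pages, tuples_per_page);
    return 0;
}

Both patterns are sequential, but the first does 10x the page reads for the same tuple count.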
 
...

> BTW, I disagree that "correlation = zero" is the right answer for this
> particular example.  If the btree is freshly built, then an index-order
> scan would visit all the heap pages in sequence to fetch "f" rows, and
> then visit them all in sequence again to fetch "t" rows, which is a whole
> lot better than the purely random access that zero correlation implies.
> So I think 0.8 or so is actually a perfectly reasonable answer when the
> index is fresh.  The trouble is just that it'd behoove us to derate that
> answer somewhat for the probability that the index isn't fresh.

But for the case of "how much of the table do we visit at all", correlation = zero is the right answer, even if it isn't the right answer for "how sequentially do we visit whatever we visit".

 
> My first thought for a concrete fix was to use the mean position of
> a group of duplicates for the purposes of the correlation calculation,
> but on reflection that's clearly wrong.  We do have an idea, from the
> data we have, whether the duplicates are close together in the heap
> or spread all over.  Using only mean position would fail to distinguish
> those cases, but really we'd better penalize the spread-all-over case.
> I'm not sure how to do that.

Departing from correlations, we could also try to estimate "how many different table pages does each index leaf page reference?".  This could capture functional dependencies which are strong, but not in the form of linear correlations.  (The current extended statistics machinery only captures dependencies between user columns, not between a user column and a system column such as the row's physical position.)
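
As a freestanding illustration of that proposed statistic (nothing like it exists in the planner today; the data and names below are invented):

/*
 * From (leaf page, heap block) pairs for sampled index entries, compute
 * the average number of distinct heap blocks each leaf page references.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct
{
    int     leaf_page;
    int     heap_block;
} IndexEntry;

static int
cmp_entry(const void *pa, const void *pb)
{
    const IndexEntry *a = pa;
    const IndexEntry *b = pb;

    if (a->leaf_page != b->leaf_page)
        return (a->leaf_page > b->leaf_page) - (a->leaf_page < b->leaf_page);
    return (a->heap_block > b->heap_block) - (a->heap_block < b->heap_block);
}

static double
avg_heap_blocks_per_leaf(IndexEntry *e, int n)
{
    int     i, leaves = 0, blocks = 0;

    qsort(e, n, sizeof(IndexEntry), cmp_entry);
    for (i = 0; i < n; i++)
    {
        int     new_leaf = (i == 0 || e[i].leaf_page != e[i - 1].leaf_page);

        if (new_leaf)
            leaves++;
        if (new_leaf || e[i].heap_block != e[i - 1].heap_block)
            blocks++;
    }
    return leaves ? (double) blocks / leaves : 0.0;
}

int
main(void)
{
    enum { N = 10000, PER_LEAF = 100 };
    static IndexEntry clustered[N], scattered[N];
    int     i;

    for (i = 0; i < N; i++)
    {
        clustered[i].leaf_page = i / PER_LEAF;
        clustered[i].heap_block = i / 50;       /* heap order follows index */
        scattered[i].leaf_page = i / PER_LEAF;
        scattered[i].heap_block = rand() % 200; /* no locality at all */
    }
    printf("clustered: %.1f heap blocks per leaf page\n",
           avg_heap_blocks_per_leaf(clustered, N));
    printf("scattered: %.1f heap blocks per leaf page\n",
           avg_heap_blocks_per_leaf(scattered, N));
    return 0;
}

A well-clustered dependency scores ~2 heap blocks per leaf page here, while the scattered case scores ~80, regardless of whether the dependency is a linear correlation.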

For whatever it's worth, here is my "let ties be ties" patch.

It breaks two regression tests due to plan changes, and both are cases where maybe the plan ought to change for the very reason being discussed.  If I instead put random gibberish into the correlation field, more regression tests fail, so I think my implementation is not too badly broken.

The accumulations into corr_ysum and corr_y2sum could trivially be pushed down into the "if", and corr_xysum could as well with a little algebra.  But that seems like premature optimization for a proof-of-concept patch.
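
Since the patch itself is attached rather than inline, here is a freestanding sketch of the idea (illustrative C, not the patch; analyze.c actually works over a sorted sample with tuple numbers, and I have borrowed the corr_* accumulator names only in spirit):

/*
 * "Let ties be ties": correlate physical row order (x) with logical
 * value order (y), where every duplicate of a value gets the same y
 * (its dense rank) instead of being tie-broken by physical position.
 */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int
cmp_int(const void *pa, const void *pb)
{
    int     a = *(const int *) pa;
    int     b = *(const int *) pb;

    return (a > b) - (a < b);
}

static double
ties_correlation(const int *vals, int n)
{
    int    *sorted = malloc(n * sizeof(int));
    int    *distinct = malloc(n * sizeof(int));
    int     ndistinct = 0;
    double  corr_xsum = 0, corr_x2sum = 0;
    double  corr_ysum = 0, corr_y2sum = 0, corr_xysum = 0;
    int     i;

    memcpy(sorted, vals, n * sizeof(int));
    qsort(sorted, n, sizeof(int), cmp_int);
    for (i = 0; i < n; i++)
        if (i == 0 || sorted[i] != sorted[i - 1])
            distinct[ndistinct++] = sorted[i];

    for (i = 0; i < n; i++)
    {
        double  x = i;          /* physical position of the row */
        int     lo = 0, hi = ndistinct - 1;

        /* binary search for the dense rank of vals[i] */
        while (lo < hi)
        {
            int     mid = (lo + hi) / 2;

            if (distinct[mid] < vals[i])
                lo = mid + 1;
            else
                hi = mid;
        }
        corr_xsum += x;
        corr_x2sum += x * x;
        corr_ysum += lo;        /* same y for every row with this value */
        corr_y2sum += (double) lo * lo;
        corr_xysum += x * lo;
    }
    free(sorted);
    free(distinct);
    return (n * corr_xysum - corr_xsum * corr_ysum) /
        sqrt((n * corr_x2sum - corr_xsum * corr_xsum) *
             (n * corr_y2sum - corr_ysum * corr_ysum));
}

int
main(void)
{
    int     vals[1000];
    int     i;

    /* perfectly interleaved booleans: the existing tie-breaking reports
     * a strong positive correlation here, ties-as-ties reports ~0 */
    for (i = 0; i < 1000; i++)
        vals[i] = i % 2;
    printf("correlation = %.3f\n", ties_correlation(vals, 1000));
    return 0;
}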


Cheers,

Jeff

