Re: benchmarking the query planner - Mailing list pgsql-hackers

From Robert Haas
Subject Re: benchmarking the query planner
Msg-id 603c8f070812111752v693e2da2ydc6dea4d263b7647@mail.gmail.com
In response to Re: benchmarking the query planner  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: benchmarking the query planner  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
> I think though that the case for doing so is pretty good.  "MCVs" that
> are beyond the K'th entry can't possibly have frequencies greater than
> 1/K, and in most cases it'll be a lot less.  So the incremental
> contribution to the accuracy of the join selectivity estimate drops off
> pretty quickly, I should think.  And it's not like we're ignoring the
> existence of those values entirely --- we'd just be treating them as if
> they are part of the undifferentiated collection of non-MCV values.
>
> It might be best to stop when the frequency drops below some threshold,
> rather than taking a fixed number of entries.

OK, I'll bite.  How do we decide where to put the cutoff?  If we make
it too high, it will have a negative effect on join selectivity
estimates; if it's too low, it won't really address the problem we're
trying to fix.  I randomly propose p = 0.001, which should limit
eqjoinsel() to about a million equality tests in the worst case.  In
the synthetic example we were just benchmarking, that causes the
entire MCV array to be tossed out the window (which feels about
right).
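To make the arithmetic behind that worst case concrete, here is a minimal Python sketch of the proposed frequency cutoff. The function name and data layout are illustrative only, not PostgreSQL's actual statistics code: with a threshold p, each side's MCV list retains at most 1/p entries (frequencies are each >= p and sum to at most 1), and eqjoinsel() compares every pair across the two lists, so the bound is (1/p)^2 equality tests.

```python
# Hypothetical sketch of the proposed cutoff: keep only MCV entries whose
# estimated frequency is at least p.  Names here are illustrative and do
# not correspond to PostgreSQL internals.

P_CUTOFF = 0.001  # the proposed frequency threshold


def truncate_mcvs(mcvs, p=P_CUTOFF):
    """mcvs: list of (value, frequency) pairs, sorted by descending frequency.

    Returns the prefix whose frequencies are all >= p; the remaining values
    fall back into the undifferentiated non-MCV collection.
    """
    kept = []
    for value, freq in mcvs:
        if freq < p:
            break  # list is sorted, so everything after this is below p too
        kept.append((value, freq))
    return kept


# Each kept frequency is >= p and the frequencies sum to at most 1, so each
# side retains at most 1/p entries; a cross-comparison of the two MCV lists
# is therefore bounded by (1/p)**2 equality tests.
max_per_side = int(1 / P_CUTOFF)      # 1000 entries per side
worst_case_tests = max_per_side ** 2  # 1,000,000 equality tests
```

In the synthetic benchmark mentioned above, every MCV frequency falls below p = 0.001, so this truncation discards the whole array, which matches the intuition that those entries contribute little to join selectivity accuracy.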

...Robert
