Re: [HACKERS] PATCH: multivariate histograms and MCV lists - Mailing list pgsql-hackers
| From | Dean Rasheed |
|---|---|
| Subject | Re: [HACKERS] PATCH: multivariate histograms and MCV lists |
| Date | |
| Msg-id | CAEZATCXU5jqG_o_O6WH-Vp1avO_hWpB0UMbXhQpsmrU9RpP3Gw@mail.gmail.com |
| In response to | Re: [HACKERS] PATCH: multivariate histograms and MCV lists (David Rowley <david.rowley@2ndquadrant.com>) |
| List | pgsql-hackers |
On Sun, 10 Mar 2019 at 22:28, David Rowley <david.rowley@2ndquadrant.com> wrote:
>
> On Mon, 11 Mar 2019 at 06:36, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
> >
> > On 3/9/19 7:33 PM, Dean Rasheed wrote:
> > > I wonder if it's possible to write smaller, more targeted tests.
> > > Currently "stats_ext" is by far the slowest test in its group, and I'm
> > > not sure that some of those tests add much. It ought to be possible to
> > > write a function that calls EXPLAIN and returns a query's row
> > > estimate, and then you could write tests to confirm the effect of the
> > > new stats by verifying the row estimates change as expected.
> >
> > Sure, if we can write more targeted tests, that would be good. But it's
> > not quite clear to me how wrapping EXPLAIN in a function makes those
> > tests any faster?
>
> I've not looked at the tests in question, but if they're checking that an
> inferior plan is used when no extended stats exist, then maybe that's
> why they're slow.
>
> I think Dean might mean to create a function similar to
> explain_parallel_append() in partition_prune.sql, then write tests that
> check the row estimate with EXPLAIN (COSTS ON) but strip out the other
> costing stuff, instead of validating that the poor plan was chosen.
>

Yeah, that's the sort of thing I was thinking of. I think it might be
possible to write simpler and faster tests by inserting far fewer rows
and relying on ANALYSE having sampled everything, so that the row
estimates are predictable. It may be the case that, with just a handful
of rows, the extended stats don't affect the plan, but you'd still see a
difference in the row estimates, and that could be a sufficient test,
I think.

> > On 3/10/19 2:09 PM, Dean Rasheed wrote:
> > > 12). bms_member_index() should surely be in bitmapset.c. It could be
> > > more efficient by just traversing the bitmap words and making use of
> > > bmw_popcount(). Also, its second argument should be of type 'int' for
> > > consistency with other bms_* functions.
> >
> > Yes, moving to bitmapset.c definitely makes sense. I don't see how it
> > could use bmw_popcount() though.
>
> I think it could be done by first checking if the parameter is a
> member of the set, and then, if so, counting all the bits set on and
> before that member. You can use bmw_popcount() for whole words before
> the specific member's word, then bitwise-AND the member's own word with
> a bitmapword mask that has all bits set on and before your parameter's
> BITNUM(), and add the bmw_popcount() of that masked final word.
> bms_add_range() has some masking code you could copy.

Yep, that's what I was imagining. Except I think that to get a 0-based
index result you'd want the mask to have all bits set for bits *before*
the parameter's BITNUM(), rather than on and before. So I think the mask
would simply be

    ((bitmapword) 1 << bitnum) - 1

Regards,
Dean