Re: Proposed Patch to Improve Performance of Multi-Batch Hash Join for Skewed Data Sets - Mailing list pgsql-hackers

From Tom Lane
Subject Re: Proposed Patch to Improve Performance of Multi-Batch Hash Join for Skewed Data Sets
Date
Msg-id 12249.1237594492@sss.pgh.pa.us
In response to Re: Proposed Patch to Improve Performance of Multi-Batch Hash Join for Skewed Data Sets  (Bryce Cutt <pandasuit@gmail.com>)
Responses Re: Proposed Patch to Improve Performance of Multi-Batch Hash Join for Skewed Data Sets  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
Bryce Cutt <pandasuit@gmail.com> writes:
> Here is the new patch.

Applied with revisions.  I undid some of the "optimizations" that
cluttered the code in order to save a cycle or two per tuple --- as per
previous discussion, that's not what the performance questions were
about.  Also, I did not like the terminology "in-memory"/"IM"; it seemed
confusing since the main hash table is in-memory too.  I revised the
code to consistently refer to the additional hash table as a "skew"
hashtable and the optimization in general as skew optimization.  Hope
that seems reasonable to you --- we could search-and-replace it to
something else if you'd prefer.
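For readers following along, here is a minimal, self-contained C sketch of the
probe-side idea (purely illustrative; SkewBucket, lookup_skew_bucket,
route_outer_tuple and the rest are made-up names, not the actual executor
code): inner tuples whose join key is one of the outer relation's MCVs live in
a small in-memory skew hashtable, and an outer tuple that hits a skew bucket
is joined immediately instead of being written to a batch temp file.

/*
 * Minimal sketch of the probe-side routing behind the skew optimization
 * (hypothetical names; not PostgreSQL executor code).
 */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define NUM_SKEW_BUCKETS 8      /* tiny, for illustration only */
#define NUM_BATCHES      4

/* One in-memory bucket holding build-side tuples for a single MCV key. */
typedef struct SkewBucket
{
    bool     used;
    uint32_t hashvalue;
    int      ntuples;           /* stand-in for a list of inner tuples */
} SkewBucket;

static SkewBucket skew_table[NUM_SKEW_BUCKETS];

/* Linear-probe lookup; returns NULL if this hash value has no skew bucket. */
static SkewBucket *
lookup_skew_bucket(uint32_t hashvalue)
{
    for (int i = 0; i < NUM_SKEW_BUCKETS; i++)
    {
        int pos = (hashvalue + i) % NUM_SKEW_BUCKETS;

        if (!skew_table[pos].used)
            return NULL;
        if (skew_table[pos].hashvalue == hashvalue)
            return &skew_table[pos];
    }
    return NULL;
}

/* Register an MCV hash value so matching inner tuples stay in memory. */
static void
add_skew_bucket(uint32_t hashvalue)
{
    for (int i = 0; i < NUM_SKEW_BUCKETS; i++)
    {
        int pos = (hashvalue + i) % NUM_SKEW_BUCKETS;

        if (!skew_table[pos].used)
        {
            skew_table[pos].used = true;
            skew_table[pos].hashvalue = hashvalue;
            skew_table[pos].ntuples = 1;
            return;
        }
    }
}

/* Probe side: decide what happens to an outer tuple with this hash value. */
static void
route_outer_tuple(uint32_t hashvalue)
{
    SkewBucket *bucket = lookup_skew_bucket(hashvalue);

    if (bucket != NULL)
    {
        /* short-circuit: join against the skew bucket right away */
        printf("hash %u: matched skew bucket, joined in memory\n", hashvalue);
        return;
    }

    /* otherwise fall back to the usual batching logic */
    int batchno = hashvalue % NUM_BATCHES;

    if (batchno == 0)
        printf("hash %u: probed main hashtable (batch 0)\n", hashvalue);
    else
        printf("hash %u: written to temp file for batch %d\n", hashvalue, batchno);
}

int
main(void)
{
    add_skew_bucket(42);        /* pretend hash(42) is an MCV of the outer rel */

    route_outer_tuple(42);      /* skew hit, never spilled */
    route_outer_tuple(7);       /* normal batch handling */
    return 0;
}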

For the moment, I didn't really do anything about teaching the planner
to account for this optimization in its cost estimates.  The initial
estimate of the number of MCVs that will be specially treated seems to
me to be too high (it's only accurate if the inner relation is unique),
but getting a more accurate estimate seems pretty hard, and it's not
clear it's worth the trouble.  Without that, though, you can't tell
what fraction of outer tuples will get the short-circuit treatment.
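As a back-of-the-envelope illustration of that estimation problem (the
function and parameters below are invented for this example, not planner
code): the number of MCVs that actually fit in the skew hashtable shrinks as
the inner relation holds more duplicates per value, and the fraction of outer
tuples that get the short-circuit treatment is the summed frequency of the
MCVs that do fit.

#include <stdio.h>

static double
skew_shortcircuit_fraction(double skew_mem_bytes,
                           double inner_tuple_width,
                           double inner_dups_per_mcv,  /* 1.0 only if inner is unique */
                           const double *outer_mcv_freqs,
                           int num_mcvs)
{
    /* how many MCV buckets fit once inner-side duplicates are accounted for */
    int     fit = (int) (skew_mem_bytes / (inner_tuple_width * inner_dups_per_mcv));
    double  frac = 0.0;

    if (fit > num_mcvs)
        fit = num_mcvs;
    for (int i = 0; i < fit; i++)
        frac += outer_mcv_freqs[i];     /* assumed sorted, most common first */
    return frac;
}

int
main(void)
{
    double freqs[] = {0.20, 0.10, 0.05, 0.02};

    /* unique inner relation: all four MCVs fit, 0.37 of outer tuples short-circuit */
    printf("%.2f\n", skew_shortcircuit_fraction(400, 100, 1.0, freqs, 4));
    /* three duplicates per value: only one MCV fits, 0.20 short-circuit */
    printf("%.2f\n", skew_shortcircuit_fraction(400, 100, 3.0, freqs, 4));
    return 0;
}

In the toy numbers above, assuming a unique inner relation lets all four MCVs
fit (37% of outer tuples short-circuit), while three duplicates per value
leaves room for only one (20%), which is one way the uniqueness assumption
overstates the benefit.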
        regards, tom lane

