Hi,
Commit 6b94e7a added a "fractional" strategy to the merge join path
search scope, but as far as I can see, it calculates the tuple_fraction
value incorrectly. Consider one of the regression test queries:
EXPLAIN
SELECT x.id, y.id FROM fract_t x LEFT JOIN fract_t y
USING (id) ORDER BY x.id DESC LIMIT 10;
The plan uses a Nested Loop with a parameterised inner scan as a subplan
of a Merge Append. The overall cost is:
Limit (cost=1.11..4.86 rows=10 width=16)
Let's reduce the LIMIT a little:
EXPLAIN
SELECT x.id, y.id FROM fract_t x LEFT JOIN fract_t y
USING (id) ORDER BY x.id DESC LIMIT 2;
Here the query plan uses a plain Nested Loop node, and the overall cost
increases significantly:
Limit (cost=0.56..30.70 rows=2 width=16)
The root of the problem lies in the following line of code:
path_fraction = (1.0 / root->tuple_fraction);
While building the startup-, total- and fractional-optimal MergeAppend
paths, the optimiser discards the 'fractional' alternative because that
path was built for a 50% limit (with LIMIT 2, root->tuple_fraction holds
the absolute count 2, so 1.0 / 2 = 0.5) and uses a MergeJoin with a high
startup cost; it then chooses the startup-optimal path (built on plain
Nested Loops), which beats the total-optimal one.
Instead, the path_fraction value must be adjusted when
root->tuple_fraction holds an absolute number of tuples rather than a
fraction. See the patch in the attachment.
I am reporting this issue to the bugs mailing list because it might
explain some 'enigmatic' performance slowdowns I have heard about
(though I have no proof), and it may need back-patching.
--
regards, Andrei Lepikhov