David Rowley <> writes:
> On 27 February 2016 at 11:07, James Parks <> wrote:
>> If you force the query planner to use a merge join on the above query, it
>> takes 10+ minutes to complete using the data as per below. If you force the
>> query planner to use a hash join on the same data, it takes ~200
> I believe I know what is going on here, but can you please test;
> SELECT b.* FROM b WHERE EXISTS (SELECT 1 FROM a ON b.a_id = a.id AND
> a.nonce = ?) ORDER BY b.id ASC;
> using the merge join plan.
> If this performs much better then the problem is due to the merge join
> mark/restore causing the join to have to transition through many
> tuples which don't match the a.nonce = ? predicate.
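(For anyone reproducing this: a sketch of one way to force each plan for the comparison, using the standard planner GUCs; the table/column names are taken from the query above, and the literal 64 stands in for the ? parameter.)

```sql
BEGIN;
-- Disable the competing join methods so the planner must pick a merge join.
-- Flip these to force the hash join plan instead.
SET LOCAL enable_hashjoin = off;
SET LOCAL enable_nestloop = off;
EXPLAIN (ANALYZE, BUFFERS)
SELECT b.* FROM b WHERE EXISTS (SELECT 1 FROM a WHERE b.a_id = a.id
  AND a.nonce = 64) ORDER BY b.id ASC;
ROLLBACK;
```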
Clearly we are rescanning an awful lot of the "a" table:
-> Index Scan using a_pkey on a (cost=0.00..26163.20 rows=843 width=8) (actual time=5.706..751385.306
Filter: (nonce = 64)
Rows Removed by Filter: 2201063696
Buffers: shared hit=2151024418 read=340
I/O Timings: read=1.015
The other explain shows a scan of "a" reading about 490k rows and
returning 395 of them, so there's a factor of about 4500 re-read here.
I wonder if the planner should have inserted a materialize node to
buffer the inner side against the mergejoin's mark/restore re-scans.
However, I think the real problem is upstream of that: if that indexscan
was estimated at 26163.20 units, how'd the mergejoin above it get costed
at only 7850.13 units? The answer has to be that the planner thought the
merge would stop before reading most of "a", as a result of limited range
of b.a_id. It would be interesting to look into what the actual maximum
b.a_id value is.
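(A quick way to check that, sketched against the same assumed schema:)

```sql
-- If max(b.a_id) is far below max(a.id), the planner's assumption that
-- the merge could stop early on "a" was at least plausible.
SELECT max(a_id) FROM b;
SELECT max(id) FROM a;
```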
regards, tom lane