Chao Li <li.evan.chao@gmail.com> writes:
> My comment was trying to say that if there are a lot of null join key
> tuples in the outer table, then hj_NullOuterTupleStore might use a lot
> of memory or swap data to disk, which might lead to a performance
> burden. So, I was thinking we could keep the original logic for the
> outer table, and return null join key tuples immediately.

I don't think that works for the parallel-hash-join case, at least not
for the multi-batch code path. That path insists on putting every
potentially-outputtable tuple into some batch's shared tuplestore, cf
ExecParallelHashJoinPartitionOuter. We can make that function put
the tuple into a different tuplestore instead, but I think it's quite
unreasonable to think of returning the tuple immediately from there.
It certainly wouldn't be "keeping the original logic".
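
For reference, the guts of that function look roughly like this
(paraphrased, not verbatim, from nodeHashjoin.c; the hash-value
computation in particular differs a bit by branch):

    /*
     * Partition the outer relation: every tuple that could possibly be
     * output is spooled into the shared tuplestore of whatever batch
     * its hash value maps to.
     */
    for (;;)
    {
        slot = ExecProcNode(outerState);
        if (TupIsNull(slot))
            break;
        econtext->ecxt_outertuple = slot;
        if (ExecHashGetHashValue(hashtable, econtext,
                                 hjstate->hj_OuterHashKeys,
                                 true,  /* outer tuple */
                                 HJ_FILL_OUTER(hjstate),    /* keep null-keyed tuples if we must emit them */
                                 &hashvalue))
        {
            bool            shouldFree;
            MinimalTuple    mintup = ExecFetchSlotMinimalTuple(slot, &shouldFree);

            ExecHashGetBucketAndBatch(hashtable, hashvalue,
                                      &bucketno, &batchno);
            sts_puttuple(hashtable->batches[batchno].outer_tuples,
                         &hashvalue, mintup);
            if (shouldFree)
                heap_free_minimal_tuple(mintup);
        }
        CHECK_FOR_INTERRUPTS();
    }

There is no exit from that loop that hands a tuple back up to the
caller; everything goes through some batch's tuplestore first.
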
Yeah, we could make multi-batch PHJ do this differently from the other
cases, but I don't want to go there: too much complication and risk of
bugs for what is a purely hypothetical performance issue. Besides
which, if the join is large enough to be worth worrying over, it's
most likely taking that code path anyhow.

> We can simply add a new flag to HashTable, say named skip_building_hash.
> Upon a right join (join to the hash side), when the outer table is
> empty, set the flag to true; then in MultiExecPrivateHash(), if
> skip_building_hash is true, directly put all tuples into
> node->null_tuple_store without building a hash table.
>
> Then in ExecHashJoinImpl(), after "(void) MultiExecProcNode()" is
> called, if hashtable->skip_building_hash is true, directly set
> node->hj_JoinState = HJ_FILL_INNER_NULL_TUPLES.
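
If I'm following, that amounts to something like this sketch
(paraphrasing your description; skip_building_hash, null_tuple_store,
and HJ_FILL_INNER_NULL_TUPLES are your proposed names, not anything
that exists in the tree today):

    /* In MultiExecPrivateHash(), instead of the normal build loop: */
    if (hashtable->skip_building_hash)
    {
        PlanState  *outerNode = outerPlanState(node);   /* Hash's child, i.e. the join's inner rel */
        TupleTableSlot *slot;

        for (;;)
        {
            slot = ExecProcNode(outerNode);
            if (TupIsNull(slot))
                break;
            tuplestore_puttupleslot(node->null_tuple_store, slot);
        }
        return;
    }

    /* ... and in ExecHashJoinImpl(), just after (void) MultiExecProcNode(): */
    if (hashtable->skip_building_hash)
    {
        node->hj_JoinState = HJ_FILL_INNER_NULL_TUPLES;
        break;      /* loop around and dispatch on the new state */
    }
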
I'm not excited about this idea either. It's completely abusing the
data structure, because the "null_tuple_store" is now being used for
tuples that (probably) don't have null join keys. The fact that you
could cram it in with not very many lines of code does not mean that
the result will be understandable or maintainable --- and certainly,
hash join is on the hairy edge of being too complicated already.

regards, tom lane