From: Thomas Munro
Subject: Re: Planner chose a much slower plan in hashjoin, using a large table as the inner table.
Date:
Msg-id: CA+hUKGJyF7xWHhFg1zY1CeJiJq9t-aj2AKx0=V5R6VqiieAfbQ@mail.gmail.com
In response to: Planner chose a much slower plan in hashjoin, using a large table as the inner table.  (Jinbao Chen <jinchen@pivotal.io>)
Responses: Re: Planner chose a much slower plan in hashjoin, using a large table as the inner table.  (Jinbao Chen <jinchen@pivotal.io>)
List: pgsql-hackers
On Mon, Nov 18, 2019 at 7:48 PM Jinbao Chen <jinchen@pivotal.io> wrote:
> In the test case above, the small table has 3000 tuples and 100 distinct values on column ‘a’.
> If we use the small table as the inner table, the chain length of each bucket is 30, and we
> need to search the whole chain when probing the hash table. So the estimated cost of probing is
> bigger than the cost of building the hash table, and the planner chooses the big table as the inner.
>
> But in fact this is not true. We initialized 620,000 buckets in the hash table, but only 100
> buckets have chains of length 30; the other buckets are empty, so at most hash values need to
> be compared, which is very cheap. We have 100,000 distinct keys and 100,000,000 tuples in the
> outer table, so only (100/100000) * tuple_num tuples will search a whole chain. The other
> tuples (number = (99900/100000) * tuple_num) in the outer table just compare the hash value.
> So the actual cost is much smaller than the planner calculated. This is the reason why using
> the small table as the inner table is faster.
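
To put rough numbers on that (a back-of-envelope sketch only, assuming
the t_small/t_big setup from earlier in the thread: 3,000 inner rows
with 100 distinct values, 100,000,000 outer rows with 100,000 distinct
values, and all 100 small-table values also present in the big table):

  #include <stdio.h>

  /*
   * Expected number of join-qual evaluations while probing, for both
   * join orders, given the data distribution above.  Probes whose key
   * exists in the hash table walk a chain of duplicates that all pass
   * the qual; probes for the other keys land in (almost always empty)
   * buckets and at most compare stored hash values.
   */
  int
  main(void)
  {
      double small_rows = 3000.0, small_ndistinct = 100.0;     /* t_small */
      double big_rows = 100000000.0, big_ndistinct = 100000.0; /* t_big */
      double shared_keys = 100.0;  /* t_small's values all appear in t_big */

      /* t_small as inner: only 100/100000 of the 1e8 probes find a chain */
      double chain_probes = big_rows * (shared_keys / big_ndistinct);
      double evals_small_inner = chain_probes * (small_rows / small_ndistinct);

      /* t_big as inner: all 3000 probes find a chain of 1000 duplicates */
      double evals_big_inner = small_rows * (big_rows / big_ndistinct);

      printf("t_small inner: %.0f probes walk a chain, %.0f qual evals\n",
             chain_probes, evals_small_inner);
      printf("t_big inner:   %.0f qual evals\n", evals_big_inner);
      return 0;
  }

Either way the quals really run only ~3 million times, which is just
the size of the join result.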

So basically we think that if t_big is on the outer side, we'll do
100,000,000 probes and each one is going to scan a t_small bucket with
chain length 30, so that looks really expensive.  Actually only a
small percentage of its probes find tuples with the right hash value,
but final_cost_hash_join() doesn't know that.  So we hash t_big
instead, which we estimated pretty well and it finishes up with
buckets of length 1,000 (which is actually fine in this case, they're
not unwanted hash collisions, they're duplicate keys that we need to
emit) and we probe them 3,000 times (which is also fine in this case),
but we had to do a bunch of memory allocation and/or batch file IO and
that turns out to be slower.
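
For comparison, the term in final_cost_hash_join() that charges for the
quals is (roughly, from memory) hash_qual_cost.per_tuple * outer_rows *
inner_rows * innerbucketsize * 0.5.  A simplified sketch of what that
comes out to here, assuming estimate_hash_bucketsize() reports about
1/ndistinct for both tables (not the real planner code):

  #include <stdio.h>

  /* Rough shape of the current qual-eval charge, not actual planner code. */
  static double
  estimated_qual_evals(double outer_rows, double inner_rows,
                       double innerbucketsize)
  {
      return outer_rows * inner_rows * innerbucketsize * 0.5;
  }

  int
  main(void)
  {
      /* t_small as inner: innerbucketsize ~ 1/100 */
      printf("t_small inner: %.0f estimated qual evals\n",
             estimated_qual_evals(100000000.0, 3000.0, 1.0 / 100));

      /* t_big as inner: innerbucketsize ~ 1/100000 */
      printf("t_big inner:   %.0f estimated qual evals\n",
             estimated_qual_evals(3000.0, 100000000.0, 1.0 / 100000));
      return 0;
  }

That's ~1.5 billion vs ~1.5 million, a 1000x asymmetry that doesn't
exist in the real execution, which is why the planner flips to hashing
t_big.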

I am not at all sure about this but I wonder if it would be better to
use something like:

  run_cost += outer_path_rows * some_small_probe_cost;
  run_cost += hash_qual_cost.per_tuple * approximate_tuple_count();

If we can accurately estimate how many tuples will actually match,
that should also be the number of times we have to run the quals,
since we don't usually expect hash collisions (bucket collisions, yes,
but hash collisions where the key doesn't turn out to be equal, no*).
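
Applied to this example, the qual term would then be the same for both
join orders, because the number of actually matching pairs doesn't
depend on which side we hash.  A sketch under the same assumptions as
above (the match count is computed by hand here rather than by
approximate_tuple_count(), and the two cost constants are just
placeholders):

  #include <stdio.h>

  /*
   * Shape of the suggested costing: a small per-probe charge plus quals
   * charged only for tuples that actually match.  Not planner code.
   */
  int
  main(void)
  {
      double shared_keys = 100.0;
      double outer_dups = 100000000.0 / 100000.0;  /* 1000 per key in t_big */
      double inner_dups = 3000.0 / 100.0;          /* 30 per key in t_small */
      double matches = shared_keys * outer_dups * inner_dups;  /* ~3 million */

      double some_small_probe_cost = 0.01;    /* placeholder */
      double hash_qual_per_tuple = 0.0025;    /* placeholder */

      /* t_small inner: 100,000,000 probes; t_big inner: 3,000 probes */
      double run_small_inner = 100000000.0 * some_small_probe_cost +
          hash_qual_per_tuple * matches;
      double run_big_inner = 3000.0 * some_small_probe_cost +
          hash_qual_per_tuple * matches;

      printf("matches: %.0f\n", matches);
      printf("probe-side run_cost, t_small inner: %.0f\n", run_small_inner);
      printf("probe-side run_cost, t_big inner:   %.0f\n", run_big_inner);
      return 0;
  }

The probe side alone still looks cheaper with t_big as the inner, so
whether the plan flips would come down to the build-side and batching
costs of hashing 100 million rows, which are charged elsewhere.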

* ... but also yes as you approach various limits, so you could also
factor in bucket chain length that is due to being prevented from
expanding the number of buckets by arbitrary constraints, and perhaps
also birthday_problem(hash size, key space) to factor in unwanted hash
collisions that start to matter once you get to billions of keys and
expect collisions with short hashes.
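
For a sense of scale on that last point, the usual birthday
approximation for the expected number of distinct-key pairs sharing
the same hash value is n*(n-1)/2^(b+1) for an n-key build side and a
b-bit hash.  A throwaway sketch (not planner code; assuming the 32-bit
hash values hash join uses):

  #include <math.h>
  #include <stdio.h>

  /* Expected colliding pairs among n distinct keys with a b-bit hash. */
  static double
  expected_colliding_pairs(double nkeys, int hash_bits)
  {
      return nkeys * (nkeys - 1.0) / ldexp(2.0, hash_bits);  /* 2^(b+1) */
  }

  int
  main(void)
  {
      printf("1e5 keys: %.2f colliding pairs\n",
             expected_colliding_pairs(1e5, 32));
      printf("1e8 keys: %.0f colliding pairs\n",
             expected_colliding_pairs(1e8, 32));
      printf("4e9 keys: %.0f colliding pairs\n",
             expected_colliding_pairs(4e9, 32));
      return 0;
  }

The count only becomes a meaningful fraction of the keys as n
approaches 2^32, which matches the "billions of keys" intuition.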


