Re: Why hash join cost calculation need reduction - Mailing list pgsql-general

From: Tom Lane
Subject: Re: Why hash join cost calculation need reduction
Date:
Msg-id: 21225.1371131974@sss.pgh.pa.us
In response to: Re: Why hash join cost calculation need reduction (Stephen Frost <sfrost@snowman.net>)
Responses: Re: Why hash join cost calculation need reduction (高健 <luckyjackgao@gmail.com>)
List: pgsql-general
Stephen Frost <sfrost@snowman.net> writes:
> * 高健 (luckyjackgao@gmail.com) wrote:
>> Why is the reduction needed here for the cost calculation?

>     cost_qual_eval(&hash_qual_cost, hashclauses, root);
> returns the costs for *just the quals which can be used for the
> hashjoin*, while
>     cost_qual_eval(&qp_qual_cost, path->jpath.joinrestrictinfo, root);
> returns the costs for *ALL the quals*

Right.  Note what it says in create_hashjoin_path:

 * 'restrict_clauses' are the RestrictInfo nodes to apply at the join
 ...
 * 'hashclauses' are the RestrictInfo nodes to use as hash clauses
 *        (this should be a subset of the restrict_clauses list)

So the two cost_qual_eval() calls are *both* counting the cost of the
hashclauses, and we have to undo that to get at just the cost of any
additional clauses besides the hash clauses.  See the comment about the
usage of qp_qual_cost further down:

    /*
     * For each tuple that gets through the hashjoin proper, we charge
     * cpu_tuple_cost plus the cost of evaluating additional restriction
     * clauses that are to be applied at the join.  (This is pessimistic since
     * not all of the quals may get evaluated at each tuple.)
     */
    startup_cost += qp_qual_cost.startup;
    cpu_per_tuple = cpu_tuple_cost + qp_qual_cost.per_tuple;
    run_cost += cpu_per_tuple * hashjointuples;

            regards, tom lane

