Re: [HACKERS] why not parallel seq scan for slow functions - Mailing list pgsql-hackers

From Dilip Kumar
Subject Re: [HACKERS] why not parallel seq scan for slow functions
Date
Msg-id CAFiTN-ufW5nQNqaM3boUA1=u6CYbykRDY7y7PDDc1QO+HMA=qw@mail.gmail.com
In response to Re: [HACKERS] why not parallel seq scan for slow functions  (Dilip Kumar <dilipbalaut@gmail.com>)
Responses Re: [HACKERS] why not parallel seq scan for slow functions  (Amit Kapila <amit.kapila16@gmail.com>)
List pgsql-hackers
On Thu, Aug 17, 2017 at 2:09 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:
>
> Either we can pass "num_gene" to merge_clump or we can store num_gene
> in the root. And inside merge_clump we can check. Do you see some more
> complexity?
>
After giving it some more thought, I see one more problem, and I am not
sure whether we can solve it easily. If we skip generating the gather
path at the top-level node, then our cost comparison while adding an
element to the pool will not be correct, because we are skipping some of
the paths (the gather paths).  It is quite possible that path1 is
cheaper than path2 without a gather on top of it, but that path2 becomes
cheaper once the gather is added.  There is no easy way to handle this,
because without the targetlist we cannot calculate the cost of the
gather (which is the actual problem).

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


