Thread: [HACKERS] why not parallel seq scan for slow functions

[HACKERS] why not parallel seq scan for slow functions

From
Jeff Janes
Date:
If I have a slow function which is evaluated in a simple seq scan, I do not get parallel execution, even though it would be massively useful, unless force_parallel_mode=ON, in which case I get a dummy parallel plan with one worker.

explain select aid,slow(abalance) from pgbench_accounts;

CREATE OR REPLACE FUNCTION slow(integer)
 RETURNS integer
 LANGUAGE plperl
 IMMUTABLE PARALLEL SAFE STRICT COST 10000000
AS $function$
  my $thing=$_[0];
  foreach (1..1_000_000) {
    $thing = sqrt($thing);
    $thing *= $thing;
  };
  return ($thing+0);
$function$;

The partial path is getting added to the list of paths; it is just not getting chosen, even if the parallel_*_cost settings are set to zero.  Why not?

If I do an aggregate, then it does use parallel workers:

explain select sum(slow(abalance)) from pgbench_accounts;

It doesn't use as many as I would like, because there is a limit based on the logarithm of the table size (I'm using -s 10 and get 3 parallel processes), but at least I know how to start looking into that.

Also, how do you debug stuff like this?  Are there some gdb tricks to make this easier to introspect into the plans?

Cheers,

Jeff

Re: [HACKERS] why not parallel seq scan for slow functions

From
Dilip Kumar
Date:
On Tue, Jul 11, 2017 at 9:02 AM, Jeff Janes <jeff.janes@gmail.com> wrote:
> If I have a slow function which is evaluated in a simple seq scan, I do not
> get parallel execution, even though it would be massively useful.  Unless
> force_parallel_mode=ON, then I get a dummy parallel plan with one worker.
>
> explain select aid,slow(abalance) from pgbench_accounts;

After analysing this, I see multiple reasons why the parallel plan is not getting selected.

1. The query is selecting all the tuples, and the benefit we get from
parallelism comes from dividing cpu_tuple_cost (which is 0.01) among the
workers, but for each tuple sent from a worker to the Gather node there
is a parallel_tuple_cost of 0.1 (which will be much smaller in the case
of an aggregate).  Maybe you can try selecting with some condition, like
below:
postgres=# explain select slow(abalance) from pgbench_accounts where abalance > 1;
                                   QUERY PLAN
-----------------------------------------------------------------------------------
 Gather  (cost=0.00..46602.33 rows=1 width=4)
   Workers Planned: 2
   ->  Parallel Seq Scan on pgbench_accounts  (cost=0.00..46602.33 rows=1 width=4)
         Filter: (abalance > 1)

2. The second problem I am seeing (maybe a code problem) is that the
"slow" function is very costly (10000000), and in
apply_projection_to_path we account for this cost.  But I have
noticed that for the Gather node we also add this cost for all the
rows; actually, if the lower node is already doing the projection,
then the Gather node just needs to send out the tuples instead of
applying the projection itself.

In the function below, we always multiply target->cost.per_tuple by
path->rows, but in the case of Gather it should be multiplied by
subpath->rows:

apply_projection_to_path()
....

path->startup_cost += target->cost.startup - oldcost.startup;
path->total_cost += target->cost.startup - oldcost.startup +
(target->cost.per_tuple - oldcost.per_tuple) * path->rows;


So, because of this high projection cost, the seq path and the parallel
path have fuzzily the same cost, but the seq path is winning because it
is parallel safe.

>
> CREATE OR REPLACE FUNCTION slow(integer)
>  RETURNS integer
>  LANGUAGE plperl
>  IMMUTABLE PARALLEL SAFE STRICT COST 10000000
> AS $function$
>   my $thing=$_[0];
>   foreach (1..1_000_000) {
>     $thing = sqrt($thing);
>     $thing *= $thing;
>   };
>   return ($thing+0);
> $function$;
>
> The partial path is getting added to the list of paths, it is just not
> getting chosen, even if parallel_*_cost are set to zero.  Why not?
>
> If I do an aggregate, then it does use parallel workers:
>
> explain select sum(slow(abalance)) from pgbench_accounts;
>
> It doesn't use as many as I would like, because there is a limit based on
> the logarithm of the table size (I'm using -s 10 and get 3 parallel
> processes) , but at least I know how to start looking into that.
>
> Also, how do you debug stuff like this?  Are there some gdb tricks to make
> this easier to introspect into the plans?
>
> Cheers,
>
> Jeff


-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] why not parallel seq scan for slow functions

From
Jeff Janes
Date:
On Mon, Jul 10, 2017 at 9:51 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:


> In below function, we always multiply the target->cost.per_tuple with
> path->rows, but in case of gather it should multiply this with
> subpath->rows
>
> apply_projection_to_path()
> ....
>
> path->startup_cost += target->cost.startup - oldcost.startup;
> path->total_cost += target->cost.startup - oldcost.startup +
> (target->cost.per_tuple - oldcost.per_tuple) * path->rows;
>
>
> So because of this high projection cost the seqpath and parallel path
> both have fuzzily same cost but seqpath is winning because it's
> parallel safe.

I think you are correct.  However, unless parallel_tuple_cost is set very low, apply_projection_to_path never gets called with the Gather path as an argument.  It gets ruled out at some earlier stage, presumably because it assumes the projection step cannot make it win if it is already behind by enough.

So the attached patch improves things, but doesn't go far enough.

Cheers,

Jeff
Attachment

Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Wed, Jul 12, 2017 at 1:50 AM, Jeff Janes <jeff.janes@gmail.com> wrote:
> On Mon, Jul 10, 2017 at 9:51 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:
>>
>> So because of this high projection cost the seqpath and parallel path
>> both have fuzzily same cost but seqpath is winning because it's
>> parallel safe.
>
>
> I think you are correct.  However, unless parallel_tuple_cost is set very
> low, apply_projection_to_path never gets called with the Gather path as an
> argument.  It gets ruled out at some earlier stage, presumably because it
> assumes the projection step cannot make it win if it is already behind by
> enough.
>

I think that is genuine because the tuple communication cost is very high.
If your table is reasonably large then you might want to try increasing
the number of parallel workers (Alter Table ... Set (parallel_workers =
..))

> So the attached patch improves things, but doesn't go far enough.
>

It seems to me that we need to adjust the cost based on whether the node
below is projection capable.  See attached.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Attachment

Re: [HACKERS] why not parallel seq scan for slow functions

From
Dilip Kumar
Date:
On Wed, Jul 12, 2017 at 10:55 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Wed, Jul 12, 2017 at 1:50 AM, Jeff Janes <jeff.janes@gmail.com> wrote:
>> On Mon, Jul 10, 2017 at 9:51 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:
>>>
>>> So because of this high projection cost the seqpath and parallel path
>>> both have fuzzily same cost but seqpath is winning because it's
>>> parallel safe.
>>
>>
>> I think you are correct.  However, unless parallel_tuple_cost is set very
>> low, apply_projection_to_path never gets called with the Gather path as an
>> argument.  It gets ruled out at some earlier stage, presumably because it
>> assumes the projection step cannot make it win if it is already behind by
>> enough.
>>
>
> I think that is genuine because tuple communication cost is very high.
> If your table is reasonable large then you might want to try by
> increasing parallel workers (Alter Table ... Set (parallel_workers =
> ..))
>
>> So the attached patch improves things, but doesn't go far enough.
>>
>
> It seems to that we need to adjust the cost based on if the below node
> is projection capable.  See attached.

Patch looks good to me.


-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] why not parallel seq scan for slow functions

From
Jeff Janes
Date:
On Tue, Jul 11, 2017 at 10:25 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Wed, Jul 12, 2017 at 1:50 AM, Jeff Janes <jeff.janes@gmail.com> wrote:
>> On Mon, Jul 10, 2017 at 9:51 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:
>>>
>>> So because of this high projection cost the seqpath and parallel path
>>> both have fuzzily same cost but seqpath is winning because it's
>>> parallel safe.
>>
>>
>> I think you are correct.  However, unless parallel_tuple_cost is set very
>> low, apply_projection_to_path never gets called with the Gather path as an
>> argument.  It gets ruled out at some earlier stage, presumably because it
>> assumes the projection step cannot make it win if it is already behind by
>> enough.
>>
>
> I think that is genuine because tuple communication cost is very high.

Sorry, I don't know which you think is genuine, the early pruning or my complaint about the early pruning.  

I agree that the communication cost is high, which is why I don't want to have to set parallel_tuple_cost very low.  For example, to get the benefit of your patch, I have to set parallel_tuple_cost to 0.0049 or less (in my real-world case, not the dummy test case I posted, although the numbers are around the same for that one too).  But with a setting that low, all kinds of other things also start using parallel plans, even if they don't benefit from them and are harmed.

I realize we need to do some aggressive pruning to avoid an exponential explosion in planning time, but in this case it has some rather unfortunate consequences.  I wanted to explore it, but I can't figure out where this particular pruning is taking place.

By the time we get to planner.c line 1787, current_rel->pathlist already does not contain the parallel plan if parallel_tuple_cost >= 0.0050, so the pruning is happening earlier than that.

 
> If your table is reasonable large then you might want to try by
> increasing parallel workers (Alter Table ... Set (parallel_workers =
> ..))


Setting parallel_workers to 8 changes the threshold for the parallel plan to even be considered from parallel_tuple_cost <= 0.0049 to <= 0.0076.  So it is going in the correct direction, but not by enough to matter.
 
Cheers,

Jeff

Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Wed, Jul 12, 2017 at 11:20 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
> On Tue, Jul 11, 2017 at 10:25 PM, Amit Kapila <amit.kapila16@gmail.com>
> wrote:
>>
>> On Wed, Jul 12, 2017 at 1:50 AM, Jeff Janes <jeff.janes@gmail.com> wrote:
>> > On Mon, Jul 10, 2017 at 9:51 PM, Dilip Kumar <dilipbalaut@gmail.com>
>> > wrote:
>> >>
>> >> So because of this high projection cost the seqpath and parallel path
>> >> both have fuzzily same cost but seqpath is winning because it's
>> >> parallel safe.
>> >
>> >
>> > I think you are correct.  However, unless parallel_tuple_cost is set
>> > very
>> > low, apply_projection_to_path never gets called with the Gather path as
>> > an
>> > argument.  It gets ruled out at some earlier stage, presumably because
>> > it
>> > assumes the projection step cannot make it win if it is already behind
>> > by
>> > enough.
>> >
>>
>> I think that is genuine because tuple communication cost is very high.
>
>
> Sorry, I don't know which you think is genuine, the early pruning or my
> complaint about the early pruning.
>

Early pruning.  See, currently we don't have a way to maintain both
parallel and non-parallel paths till a later stage and then decide which
one is better.  If we want to maintain both parallel and non-parallel
paths, it can increase the planning cost substantially in the case of
joins.  Now, surely it can have a benefit in many cases, so it is a
worthwhile direction to pursue.

> I agree that the communication cost is high, which is why I don't want to
> have to set parellel_tuple_cost very low.  For example, to get the benefit
> of your patch, I have to set parellel_tuple_cost to 0.0049 or less (in my
> real-world case, not the dummy test case I posted, although the number are
> around the same for that one too).  But with a setting that low, all kinds
> of other things also start using parallel plans, even if they don't benefit
> from them and are harmed.
>
> I realize we need to do some aggressive pruning to avoid an exponential
> explosion in planning time, but in this case it has some rather unfortunate
> consequences.  I wanted to explore it, but I can't figure out where this
> particular pruning is taking place.
>
> By the time we get to planner.c line 1787, current_rel->pathlist already
> does not contain the parallel plan if parellel_tuple_cost >= 0.0050, so the
> pruning is happening earlier than that.
>

Check generate_gather_paths.

>
>>
>> If your table is reasonable large then you might want to try by
>> increasing parallel workers (Alter Table ... Set (parallel_workers =
>> ..))
>
>
>
> Setting parallel_workers to 8 changes the threshold for the parallel to even
> be considered from parellel_tuple_cost <= 0.0049 to <= 0.0076.  So it is
> going in the correct direction, but not by enough to matter.
>

You might want to play with cpu_tuple_cost and or seq_page_cost.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Thu, Jul 13, 2017 at 7:38 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Wed, Jul 12, 2017 at 11:20 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
>>
>>
>>
>> Setting parallel_workers to 8 changes the threshold for the parallel to even
>> be considered from parellel_tuple_cost <= 0.0049 to <= 0.0076.  So it is
>> going in the correct direction, but not by enough to matter.
>>
>
> You might want to play with cpu_tuple_cost and or seq_page_cost.
>

I don't know whether the patch will completely solve your problem, but
this seems to be the right thing to do.  Do you think we should stick
this for next CF?


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] why not parallel seq scan for slow functions

From
Jeff Janes
Date:
On Sat, Jul 22, 2017 at 8:53 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Thu, Jul 13, 2017 at 7:38 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> On Wed, Jul 12, 2017 at 11:20 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
>>>
>>>
>>>
>>> Setting parallel_workers to 8 changes the threshold for the parallel to even
>>> be considered from parellel_tuple_cost <= 0.0049 to <= 0.0076.  So it is
>>> going in the correct direction, but not by enough to matter.
>>>
>>
>> You might want to play with cpu_tuple_cost and or seq_page_cost.
>>
>
> I don't know whether the patch will completely solve your problem, but
> this seems to be the right thing to do.  Do you think we should stick
> this for next CF?

It doesn't solve the problem for me, but I agree it is an improvement we should commit.
 
Cheers,

Jeff

Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Mon, Jul 24, 2017 at 9:21 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
> On Sat, Jul 22, 2017 at 8:53 PM, Amit Kapila <amit.kapila16@gmail.com>
> wrote:
>>
>> On Thu, Jul 13, 2017 at 7:38 AM, Amit Kapila <amit.kapila16@gmail.com>
>> wrote:
>> > On Wed, Jul 12, 2017 at 11:20 PM, Jeff Janes <jeff.janes@gmail.com>
>> > wrote:
>> >>
>> >>
>> >>
>> >> Setting parallel_workers to 8 changes the threshold for the parallel to
>> >> even
>> >> be considered from parellel_tuple_cost <= 0.0049 to <= 0.0076.  So it
>> >> is
>> >> going in the correct direction, but not by enough to matter.
>> >>
>> >
>> > You might want to play with cpu_tuple_cost and or seq_page_cost.
>> >
>>
>> I don't know whether the patch will completely solve your problem, but
>> this seems to be the right thing to do.  Do you think we should stick
>> this for next CF?
>
>
> It doesn't solve the problem for me, but I agree it is an improvement we
> should commit.
>

Okay, added the patch for next CF.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] why not parallel seq scan for slow functions

From
Jeff Janes
Date:
On Wed, Jul 12, 2017 at 7:08 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Wed, Jul 12, 2017 at 11:20 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
>> On Tue, Jul 11, 2017 at 10:25 PM, Amit Kapila <amit.kapila16@gmail.com>
>> wrote:
>>>
>>> On Wed, Jul 12, 2017 at 1:50 AM, Jeff Janes <jeff.janes@gmail.com> wrote:
>>> > On Mon, Jul 10, 2017 at 9:51 PM, Dilip Kumar <dilipbalaut@gmail.com>
>>> > wrote:
>>> >>
>>> >> So because of this high projection cost the seqpath and parallel path
>>> >> both have fuzzily same cost but seqpath is winning because it's
>>> >> parallel safe.
>>> >
>>> >
>>> > I think you are correct.  However, unless parallel_tuple_cost is set
>>> > very
>>> > low, apply_projection_to_path never gets called with the Gather path as
>>> > an
>>> > argument.  It gets ruled out at some earlier stage, presumably because
>>> > it
>>> > assumes the projection step cannot make it win if it is already behind
>>> > by
>>> > enough.
>>> >
>>>
>>> I think that is genuine because tuple communication cost is very high.
>>
>>
>> Sorry, I don't know which you think is genuine, the early pruning or my
>> complaint about the early pruning.
>>
>
> Early pruning.  See, currently, we don't have a way to maintain both
> parallel and non-parallel paths till later stage and then decide which
> one is better. If we want to maintain both parallel and non-parallel
> paths, it can increase planning cost substantially in the case of
> joins.  Now, surely it can have benefit in many cases, so it is a
> worthwhile direction to pursue.

If I understand it correctly, we have a way; it just can lead to the exponential explosion problem, so we are afraid to use it, correct?  If I just lobotomize the path domination code (make pathnode.c line 466 always test false)

                if (JJ_all_paths==0 && costcmp != COSTS_DIFFERENT)

Then it keeps the parallel plan and later chooses to use it (after applying your other patch in this thread) as the overall best plan.  It even doesn't slow down "make installcheck-parallel" by very much, which I guess just means the regression tests don't have a lot of complex joins.

But what is an acceptable solution?  Is there a heuristic for when retaining a parallel path could be helpful, the same way there is for fast-start paths?  It seems like the best thing would be to include the evaluation costs in the first place at this step.

Why is the path-cost domination code run before the cost of the function evaluation is included?  Is that because the information needed to compute it is not available at that point, or because it would be too slow to include it at that point? Or just because no one thought it important to do?

Cheers,

Jeff

Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Wed, Aug 2, 2017 at 11:12 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
> On Wed, Jul 12, 2017 at 7:08 PM, Amit Kapila <amit.kapila16@gmail.com>
> wrote:
>>
>> On Wed, Jul 12, 2017 at 11:20 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
>> > On Tue, Jul 11, 2017 at 10:25 PM, Amit Kapila <amit.kapila16@gmail.com>
>> > wrote:
>> >>
>> >> On Wed, Jul 12, 2017 at 1:50 AM, Jeff Janes <jeff.janes@gmail.com>
>> >> wrote:
>> >> > On Mon, Jul 10, 2017 at 9:51 PM, Dilip Kumar <dilipbalaut@gmail.com>
>> >> > wrote:
>> >> >>
>> >> >> So because of this high projection cost the seqpath and parallel
>> >> >> path
>> >> >> both have fuzzily same cost but seqpath is winning because it's
>> >> >> parallel safe.
>> >> >
>> >> >
>> >> > I think you are correct.  However, unless parallel_tuple_cost is set
>> >> > very
>> >> > low, apply_projection_to_path never gets called with the Gather path
>> >> > as
>> >> > an
>> >> > argument.  It gets ruled out at some earlier stage, presumably
>> >> > because
>> >> > it
>> >> > assumes the projection step cannot make it win if it is already
>> >> > behind
>> >> > by
>> >> > enough.
>> >> >
>> >>
>> >> I think that is genuine because tuple communication cost is very high.
>> >
>> >
>> > Sorry, I don't know which you think is genuine, the early pruning or my
>> > complaint about the early pruning.
>> >
>>
>> Early pruning.  See, currently, we don't have a way to maintain both
>> parallel and non-parallel paths till later stage and then decide which
>> one is better. If we want to maintain both parallel and non-parallel
>> paths, it can increase planning cost substantially in the case of
>> joins.  Now, surely it can have benefit in many cases, so it is a
>> worthwhile direction to pursue.
>
>
> If I understand it correctly, we have a way, it just can lead to exponential
> explosion problem, so we are afraid to use it, correct?  If I just
> lobotomize the path domination code (make pathnode.c line 466 always test
> false)
>
>                 if (JJ_all_paths==0 && costcmp != COSTS_DIFFERENT)
>
> Then it keeps the parallel plan and later chooses to use it (after applying
> your other patch in this thread) as the overall best plan.  It even doesn't
> slow down "make installcheck-parallel" by very much, which I guess just
> means the regression tests don't have a lot of complex joins.
>
> But what is an acceptable solution?  Is there a heuristic for when retaining
> a parallel path could be helpful, the same way there is for fast-start
> paths?  It seems like the best thing would be to include the evaluation
> costs in the first place at this step.
>
> Why is the path-cost domination code run before the cost of the function
> evaluation is included?

Because the function evaluation is part of the target list, and we create
the path target after the creation of the base paths (see the call to
create_pathtarget @ planner.c:1696).

>  Is that because the information needed to compute
> it is not available at that point,

Right.

I see two ways to include the cost of the target list for parallel
paths before rejecting them: (a) Don't reject parallel paths
(Gather/GatherMerge) during add_path; this has the danger of path
explosion.  (b) In the case of parallel paths, somehow try to identify
that the path has a costly target list (maybe just check whether the
target list has anything other than Vars) and use that as a heuristic
to decide whether a parallel path can be retained.

I think the preference will be to do something along the lines of
approach (b), but I am not sure whether we can easily do that.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Tue, Aug 8, 2017 at 3:50 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> Right.
>
> I see two ways to include the cost of the target list for parallel
> paths before rejecting them (a) Don't reject parallel paths
> (Gather/GatherMerge) during add_path.  This has the danger of path
> explosion. (b)  In the case of parallel paths, somehow try to identify
> that path has a costly target list (maybe just check if the target
> list has anything other than vars) and use it as a heuristic to decide
> that whether a parallel path can be retained.

I think the right approach to this problem is to get the cost of the
GatherPath correct when it's initially created.  The proposed patch
changes the cost after-the-fact, but that (1) doesn't prevent a
promising path from being rejected before we reach this point and (2)
is probably unsafe, because it might confuse code that reaches the
modified-in-place path through some other pointer (e.g. code which
expects the RelOptInfo's paths to still be sorted by cost).  Perhaps
the way to do that is to skip generate_gather_paths() for the toplevel
scan/join node and do something similar later, after we know what
target list we want.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Thu, Aug 10, 2017 at 1:07 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Tue, Aug 8, 2017 at 3:50 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> Right.
>>
>> I see two ways to include the cost of the target list for parallel
>> paths before rejecting them (a) Don't reject parallel paths
>> (Gather/GatherMerge) during add_path.  This has the danger of path
>> explosion. (b)  In the case of parallel paths, somehow try to identify
>> that path has a costly target list (maybe just check if the target
>> list has anything other than vars) and use it as a heuristic to decide
>> that whether a parallel path can be retained.
>
> I think the right approach to this problem is to get the cost of the
> GatherPath correct when it's initially created.  The proposed patch
> changes the cost after-the-fact, but that (1) doesn't prevent a
> promising path from being rejected before we reach this point and (2)
> is probably unsafe, because it might confuse code that reaches the
> modified-in-place path through some other pointer (e.g. code which
> expects the RelOptInfo's paths to still be sorted by cost).  Perhaps
> the way to do that is to skip generate_gather_paths() for the toplevel
> scan/join node and do something similar later, after we know what
> target list we want.
>

I think skipping the generation of gather paths for a scan node or a
top-level join node generated via standard_join_search seems
straightforward, but skipping it for paths generated via geqo seems to
be tricky (see the use of generate_gather_paths in merge_clump).
Assuming we find some way to skip it for the top-level scan/join node,
I don't think that will be sufficient: we have a special way to push
the target list below the Gather node in apply_projection_to_path, and
we need to move that part into generate_gather_paths as well.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Sat, Aug 12, 2017 at 9:18 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> I think skipping a generation of gather paths for scan node or top
> level join node generated via standard_join_search seems straight
> forward, but skipping for paths generated via geqo seems to be tricky
> (See use of generate_gather_paths in merge_clump).  Assuming, we find
> some way to skip it for top level scan/join node, I don't think that
> will be sufficient, we have some special way to push target list below
> Gather node in apply_projection_to_path, we need to move that part as
> well in generate_gather_paths.

I don't think that can work, because at that point we don't know what
target list the upper node wants to impose.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Tue, Aug 15, 2017 at 7:15 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Sat, Aug 12, 2017 at 9:18 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> I think skipping a generation of gather paths for scan node or top
>> level join node generated via standard_join_search seems straight
>> forward, but skipping for paths generated via geqo seems to be tricky
>> (See use of generate_gather_paths in merge_clump).  Assuming, we find
>> some way to skip it for top level scan/join node, I don't think that
>> will be sufficient, we have some special way to push target list below
>> Gather node in apply_projection_to_path, we need to move that part as
>> well in generate_gather_paths.
>
> I don't think that can work, because at that point we don't know what
> target list the upper node wants to impose.
>

I am suggesting calling generate_gather_paths just before we try to
apply the projection to paths in grouping_planner (file:planner.c;
line:1787; commit:004a9702).  Won't the target list for upper nodes be
available at that point?


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Wed, Aug 16, 2017 at 7:23 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Tue, Aug 15, 2017 at 7:15 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> On Sat, Aug 12, 2017 at 9:18 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>> I think skipping a generation of gather paths for scan node or top
>>> level join node generated via standard_join_search seems straight
>>> forward, but skipping for paths generated via geqo seems to be tricky
>>> (See use of generate_gather_paths in merge_clump).  Assuming, we find
>>> some way to skip it for top level scan/join node, I don't think that
>>> will be sufficient, we have some special way to push target list below
>>> Gather node in apply_projection_to_path, we need to move that part as
>>> well in generate_gather_paths.
>>
>> I don't think that can work, because at that point we don't know what
>> target list the upper node wants to impose.
>>
>
> I am suggesting to call generate_gather_paths just before we try to
> apply projection on paths in grouping_planner (file:planner.c;
> line:1787; commit:004a9702).  Won't the target list for upper nodes be
> available at that point?

Oh, yes.  Apparently I misunderstood your proposal.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] why not parallel seq scan for slow functions

From
Dilip Kumar
Date:
On Sat, Aug 12, 2017 at 6:48 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Thu, Aug 10, 2017 at 1:07 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>> On Tue, Aug 8, 2017 at 3:50 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>> Right.
>>>
>
> I think skipping a generation of gather paths for scan node or top
> level join node generated via standard_join_search seems straight
> forward, but skipping for paths generated via geqo seems to be tricky
> (See use of generate_gather_paths in merge_clump).

Either we can pass "num_gene" to merge_clump or we can store num_gene
in the root. And inside merge_clump we can check. Do you see some more
complexity?

if (joinrel)
{
    /* Create GatherPaths for any useful partial paths for rel */
    if (old_clump->size + new_clump->size < num_gene)
        generate_gather_paths(root, joinrel);
}
> Assuming, we find
> some way to skip it for top level scan/join node, I don't think that
> will be sufficient, we have some special way to push target list below
> Gather node in apply_projection_to_path, we need to move that part as
> well in generate_gather_paths.
>

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] why not parallel seq scan for slow functions

From
Dilip Kumar
Date:
On Thu, Aug 17, 2017 at 2:09 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:
>
> Either we can pass "num_gene" to merge_clump or we can store num_gene
> in the root. And inside merge_clump we can check. Do you see some more
> complexity?
>
After putting some more thought into it, I see one more problem, but I'm
not sure whether we can solve it easily.  Now, if we skip generating the
gather path at the top-level node, then our cost comparison while adding
an element to the pool will not be correct, as we are skipping some of
the paths (gather paths).  And it's very much possible that path1 is
cheaper than path2 without adding a gather on top of it, but with a
gather, path2 can be cheaper.  However, there is no easy way to handle
this, because without the target list we cannot calculate the cost of
the gather (which is the actual problem).

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Thu, Aug 17, 2017 at 2:45 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:
> On Thu, Aug 17, 2017 at 2:09 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:
>>
>> Either we can pass "num_gene" to merge_clump or we can store num_gene
>> in the root. And inside merge_clump we can check. Do you see some more
>> complexity?
>>

I think something like that should work.

> After putting some more thought I see one more problem but not sure
> whether we can solve it easily. Now, if we skip generating the gather
> path at top level node then our cost comparison while adding the
> element to pool will not be correct as we are skipping some of the
> paths (gather path).  And, it's very much possible that the path1 is
> cheaper than path2 without adding gather on top of it but with gather,
> path2 can be cheaper.
>

I think that should not matter, because the costing of gather is mainly
based on the number of rows, and that should be the same for both path1
and path2 in this case.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] why not parallel seq scan for slow functions

From
Dilip Kumar
Date:

On Fri, 18 Aug 2017 at 4:48 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
On Thu, Aug 17, 2017 at 2:45 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:
> On Thu, Aug 17, 2017 at 2:09 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:
>>
>> Either we can pass "num_gene" to merge_clump or we can store num_gene
>> in the root. And inside merge_clump we can check. Do you see some more
>> complexity?
>>

I think something like that should work.
Ok


> After putting some more thought I see one more problem but not sure
> whether we can solve it easily. Now, if we skip generating the gather
> path at top level node then our cost comparison while adding the
> element to pool will not be correct as we are skipping some of the
> paths (gather path).  And, it's very much possible that the path1 is
> cheaper than path2 without adding gather on top of it but with gather,
> path2 can be cheaper.
>

I think that should not matter because the costing of gather is mainly
based on a number of rows and that should be same for both path1 and
path2 in this case.

Yeah, I think you are right.



--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Wed, Aug 16, 2017 at 5:04 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Wed, Aug 16, 2017 at 7:23 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> On Tue, Aug 15, 2017 at 7:15 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>>> On Sat, Aug 12, 2017 at 9:18 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>>> I think skipping a generation of gather paths for scan node or top
>>>> level join node generated via standard_join_search seems straight
>>>> forward, but skipping for paths generated via geqo seems to be tricky
>>>> (See use of generate_gather_paths in merge_clump).  Assuming, we find
>>>> some way to skip it for top level scan/join node, I don't think that
>>>> will be sufficient, we have some special way to push target list below
>>>> Gather node in apply_projection_to_path, we need to move that part as
>>>> well in generate_gather_paths.
>>>
>>> I don't think that can work, because at that point we don't know what
>>> target list the upper node wants to impose.
>>>
>>
>> I am suggesting to call generate_gather_paths just before we try to
>> apply projection on paths in grouping_planner (file:planner.c;
>> line:1787; commit:004a9702).  Won't the target list for upper nodes be
>> available at that point?
>
> Oh, yes.  Apparently I misunderstood your proposal.
>

Thanks for acknowledging the idea.  I have written a patch which
implements the above idea.  At this stage, it is merely to move the
discussion forward.  A few things I am not entirely happy about in
this patch:

(a) To skip generating a gather path for the top-level scan node, I
have used the number of relations which have a RelOptInfo, basically
simple_rel_array_size.  Is there any problem with that, or do you see
a better way?
(b) I have changed the costing of the gather path for the path target
in generate_gather_paths, which I am not sure is the best way.  Another
possibility could have been to change the code in
apply_projection_to_path as done in the previous patch and just call
it from generate_gather_paths.  I have not done that because of your
comment upthread ("is probably unsafe, because it might confuse
code that reaches the modified-in-place path through some other
pointer (e.g. code which expects the RelOptInfo's paths to still be
sorted by cost).").  It is not clear to me what exactly is bothering
you if we directly change the costing in apply_projection_to_path.

Thoughts?

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Attachment

Re: [HACKERS] why not parallel seq scan for slow functions

From
Simon Riggs
Date:
On 21 August 2017 at 10:08, Amit Kapila <amit.kapila16@gmail.com> wrote:

> Thoughts?

This seems like a very basic problem for parallel queries.

The problem seems to be that we are calculating the cost of the plan
rather than the speed of the plan.

Clearly, a parallel task has a higher overall cost but a lower time to
complete if resources are available.

We have the choice of 1) adding a new optimizable quantity, or of 2)
treating cost = speed, so we actually reduce the cost of a parallel
plan rather than increasing it so it is more likely to be picked.

-- 
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Mon, Aug 21, 2017 at 3:15 PM, Simon Riggs <simon@2ndquadrant.com> wrote:
> On 21 August 2017 at 10:08, Amit Kapila <amit.kapila16@gmail.com> wrote:
>
>> Thoughts?
>
> This seems like a very basic problem for parallel queries.
>
> The problem seems to be that we are calculating the cost of the plan
> rather than the speed of the plan.
>
> Clearly, a parallel task has a higher overall cost but a lower time to
> complete if resources are available.
>
> We have the choice of 1) adding a new optimizable quantity,
>

I think this has the potential to make costing decisions difficult.
I mean to say, if we include any such new parameter, then we need to
consider it along with cost, as we can't completely ignore the cost.

> or of 2)
> treating cost = speed, so we actually reduce the cost of a parallel
> plan rather than increasing it so it is more likely to be picked.
>

Yeah, this is what is currently followed for costing parallel plans,
and this patch also tries to follow the same approach.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] why not parallel seq scan for slow functions

From
Simon Riggs
Date:
On 21 August 2017 at 11:42, Amit Kapila <amit.kapila16@gmail.com> wrote:

>> or of 2)
>> treating cost = speed, so we actually reduce the cost of a parallel
>> plan rather than increasing it so it is more likely to be picked.
>>
>
> Yeah, this is what is being currently followed for costing of parallel
> plans and this patch also tries to follow the same.

OK, I understand this better now, thanks.

-- 
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Mon, Aug 21, 2017 at 5:08 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> Thanks for acknowledging the idea.  I have written a patch which
> implements the above idea.  At this stage, it is merely to move the
> discussion forward. Few things which I am not entirely happy about
> this patch are:
>
> (a) To skip generating gather path for top level scan node, I have
> used the number of relations which has RelOptInfo, basically
> simple_rel_array_size. Is there any problem with it or do you see any
> better way?

I'm not sure.

> (b) I have changed the costing of gather path for path target in
> generate_gather_paths which I am not sure is the best way. Another
> possibility could have been that I change the code in
> apply_projection_to_path as done in the previous patch and just call
> it from generate_gather_paths.  I have not done that because of your
> comment above thread ("is probably unsafe, because it might confuse
> code that reaches the modified-in-place path through some other
> pointer (e.g. code which expects the RelOptInfo's paths to still be
> sorted by cost).").  It is not clear to me what exactly is bothering
> you if we directly change costing in apply_projection_to_path.

The point I was trying to make is that if you retroactively change the
cost of a path after you've already done add_path(), it's too late to
change your mind about whether to keep the path.  At least according
to my current understanding, that's the root of this problem in the
first place.  On top of that, add_path() and other routines that deal
with RelOptInfo path lists expect surviving paths to be ordered by
descending cost; if you frob the cost, they might not be properly
ordered any more.
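A minimal sketch of the invariant Robert describes (simplified stand-ins, not the real add_path() code): routines that walk a RelOptInfo's path list assume it stays sorted by cost, and mutating a path's cost in place after insertion silently breaks that ordering.

```python
# Toy model: add_path keeps the list sorted by cost; retroactively
# changing a stored path's cost leaves the list out of order, so code
# that assumes sortedness may pick the wrong "cheapest" path.

def add_path(pathlist, path):
    """Insert keeping the list sorted by cost (simplified stand-in)."""
    pathlist.append(path)
    pathlist.sort(key=lambda p: p["cost"])

def is_sorted(pathlist):
    return all(a["cost"] <= b["cost"] for a, b in zip(pathlist, pathlist[1:]))

paths = []
add_path(paths, {"name": "seqscan", "cost": 120.0})
add_path(paths, {"name": "gather", "cost": 150.0})
assert is_sorted(paths)

# Retroactively cheapening the Gather (e.g. after pushing an expensive
# target list below it) breaks the ordering invariant.
paths[1]["cost"] = 90.0
assert not is_sorted(paths)
```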

I don't really have time right now to give this patch the attention
which it deserves; I can possibly come back to it at some future
point, or maybe somebody else will have time to give it a look.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] why not parallel seq scan for slow functions

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> The point I was trying to make is that if you retroactively change the
> cost of a path after you've already done add_path(), it's too late to
> change your mind about whether to keep the path.  At least according
> to my current understanding, that's the root of this problem in the
> first place.  On top of that, add_path() and other routines that deal
> with RelOptInfo path lists expect surviving paths to be ordered by
> descending cost; if you frob the cost, they might not be properly
> ordered any more.

Hadn't been paying attention to this thread, but I happened to notice
Robert's comment here, and I strongly agree: it is *not* cool to be
changing a path's cost (or, really, anything else about it) after
it's already been given to add_path.  add_path has already made
irreversible choices on the basis of what it was given.
        regards, tom lane



Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Fri, Aug 25, 2017 at 10:08 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Mon, Aug 21, 2017 at 5:08 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> (b) I have changed the costing of gather path for path target in
>> generate_gather_paths which I am not sure is the best way. Another
>> possibility could have been that I change the code in
>> apply_projection_to_path as done in the previous patch and just call
>> it from generate_gather_paths.  I have not done that because of your
>> comment above thread ("is probably unsafe, because it might confuse
>> code that reaches the modified-in-place path through some other
>> pointer (e.g. code which expects the RelOptInfo's paths to still be
>> sorted by cost).").  It is not clear to me what exactly is bothering
>> you if we directly change costing in apply_projection_to_path.
>
> The point I was trying to make is that if you retroactively change the
> cost of a path after you've already done add_path(), it's too late to
> change your mind about whether to keep the path.  At least according
> to my current understanding, that's the root of this problem in the
> first place.  On top of that, add_path() and other routines that deal
> with RelOptInfo path lists expect surviving paths to be ordered by
> descending cost; if you frob the cost, they might not be properly
> ordered any more.
>

Okay, now I understand your point, but I think we already change the
cost of paths in apply_projection_to_path, which is done after add_path
for top-level scan/join paths.  I think this matters a lot in the case
of Gather because the cost of computing the target list can be divided
among workers.  I have changed the patch such that parallel paths for
the top-level scan/join node are generated after the pathtarget is
ready.  I have kept the costing of path targets local to
apply_projection_to_path(), as that keeps the patch simple.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Attachment

Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Tue, Sep 5, 2017 at 4:34 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> Okay, now I understand your point, but I think we already change the
> cost of paths in apply_projection_to_path which is done after add_path
> for top level scan/join paths.

Yeah. I think that's a nasty hack, and I think it's Tom's fault.  :-)

It's used in various places with comments like this:
    /*
     * The path might not return exactly what we want, so fix that.  (We
     * assume that this won't change any conclusions about which was the
     * cheapest path.)
     */

And in another place:
         * In principle we should re-run set_cheapest() here to identify the
         * cheapest path, but it seems unlikely that adding the same tlist
         * eval costs to all the paths would change that, so we don't bother.

I think these assumptions were a little shaky even before parallel
query came along, but they're now outright false, because we're not
adding the *same* tlist eval costs to all paths any more.  The
parallel paths are getting smaller costs.  That probably doesn't
matter much if the expressions in questions are things like a + b, but
when as in Jeff's example it's slow(a), then it matters a lot.
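Rough arithmetic behind that point (hypothetical numbers): pushing target-list evaluation below a Gather divides its cost across workers, which is negligible for a cheap expression like a + b but decisive for slow(a).

```python
# Toy comparison: serial vs worker-divided target-list evaluation cost.
# Per-tuple costs and worker count are made up for illustration.

ROWS = 10_000
WORKERS = 4

def tlist_cost(per_tuple_cost, parallel):
    """Total target-list eval cost, divided across workers if parallel."""
    divisor = WORKERS if parallel else 1
    return per_tuple_cost * ROWS / divisor

cheap = 0.0025     # something like a + b
slow = 100.0       # something like Jeff's slow() function

# For the cheap expression, serial and parallel evaluation barely differ.
assert tlist_cost(cheap, False) - tlist_cost(cheap, True) < 20

# For slow(a), the gap is huge: 1,000,000 vs 250,000 cost units,
# easily enough to change which plan is cheapest overall.
assert tlist_cost(slow, False) == 1_000_000
assert tlist_cost(slow, True) == 250_000
```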

I'd feel a lot happier if Tom were to decide how this ought to be
fixed, because - in spite of some modifications by various parallel
query code - this is basically all his design and mostly his code.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] why not parallel seq scan for slow functions

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Tue, Sep 5, 2017 at 4:34 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> Okay, now I understand your point, but I think we already change the
>> cost of paths in apply_projection_to_path which is done after add_path
>> for top level scan/join paths.

> Yeah. I think that's a nasty hack, and I think it's Tom's fault.  :-)

Yeah, and it's also documented:
 * This has the same net effect as create_projection_path(), except that if
 * a separate Result plan node isn't needed, we just replace the given path's
 * pathtarget with the desired one.  This must be used only when the caller
 * knows that the given path isn't referenced elsewhere and so can be modified
 * in-place.

If somebody's applying apply_projection_to_path to a path that's already
been add_path'd, that's a violation of the documented restriction.

It might be that we should just get rid of apply_projection_to_path and
use create_projection_path, which is less mistake-prone at the cost of
manufacturing another level of Path node.  Now that that has the dummypp
flag, it really shouldn't make any difference in terms of the accuracy of
the cost estimates.

> I'd feel a lot happier if Tom were to decide how this ought to be
> fixed, because - in spite of some modifications by various parallel
> query code - this is basically all his design and mostly his code.

I can take a look, but not right away.
        regards, tom lane



Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Wed, Sep 6, 2017 at 1:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Tue, Sep 5, 2017 at 4:34 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>> Okay, now I understand your point, but I think we already change the
>>> cost of paths in apply_projection_to_path which is done after add_path
>>> for top level scan/join paths.
>
>> Yeah. I think that's a nasty hack, and I think it's Tom's fault.  :-)
>
> Yeah, and it's also documented:
>
>  * This has the same net effect as create_projection_path(), except that if
>  * a separate Result plan node isn't needed, we just replace the given path's
>  * pathtarget with the desired one.  This must be used only when the caller
>  * knows that the given path isn't referenced elsewhere and so can be modified
>  * in-place.
>
> If somebody's applying apply_projection_to_path to a path that's already
> been add_path'd, that's a violation of the documented restriction.

/me is confused.  Isn't that exactly what grouping_planner() is doing,
and has done ever since your original pathification commit
(3fc6e2d7f5b652b417fa6937c34de2438d60fa9f)?  It's iterating over
current_rel->pathlist, so surely everything in there has been
add_path()'d.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] why not parallel seq scan for slow functions

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Wed, Sep 6, 2017 at 1:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> If somebody's applying apply_projection_to_path to a path that's already
>> been add_path'd, that's a violation of the documented restriction.

> /me is confused.  Isn't that exactly what grouping_planner() is doing,
> and has done ever since your original pathification commit
> (3fc6e2d7f5b652b417fa6937c34de2438d60fa9f)?  It's iterating over
> current_rel->pathlist, so surely everything in there has been
> add_path()'d.

I think the assumption there is that we no longer care about validity of
the input Relation, since we won't be looking at it any more (and
certainly not adding more paths to it).  If there's some reason why
that's not true, then maybe grouping_planner has a bug there.
        regards, tom lane



Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Wed, Sep 6, 2017 at 3:18 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Wed, Sep 6, 2017 at 1:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> If somebody's applying apply_projection_to_path to a path that's already
>>> been add_path'd, that's a violation of the documented restriction.
>
>> /me is confused.  Isn't that exactly what grouping_planner() is doing,
>> and has done ever since your original pathification commit
>> (3fc6e2d7f5b652b417fa6937c34de2438d60fa9f)?  It's iterating over
>> current_rel->pathlist, so surely everything in there has been
>> add_path()'d.
>
> I think the assumption there is that we no longer care about validity of
> the input Relation, since we won't be looking at it any more (and
> certainly not adding more paths to it).  If there's some reason why
> that's not true, then maybe grouping_planner has a bug there.

Right, that's sorta what I assumed.  But I think that thinking is
flawed in the face of parallel query, because of the fact that
apply_projection_to_path() pushes down target list projection below
Gather when possible.  In particular, as Jeff and Amit point out, it
may well be that (a) before apply_projection_to_path(), the cheapest
plan is non-parallel and (b) after apply_projection_to_path(), the
cheapest plan would be a Gather plan, except that it's too late
because we've already thrown that path out.

What we ought to do, I think, is avoid generating gather paths until
after we've applied the target list (and the associated costing
changes) to both the regular path list and the partial path list.
Then the cost comparison is apples-to-apples.  The use of
apply_projection_to_path() on every path in the pathlist would be fine
if it were adjusting all the costs by a uniform amount, but it isn't.
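The proposed ordering can be sketched as follows (hypothetical helper and numbers, not the planner's real code): charge the final target list to both the regular and partial path lists first, and only then build Gather paths, so every candidate carries the same target-list cost.

```python
# Toy sketch of the "apples-to-apples" ordering: apply target-list
# costs to all paths (serial and partial) before generating Gathers.

WORKERS = 4
GATHER_OVERHEAD = 10.0   # flat made-up cost for the Gather node itself

def plan(paths, partial_paths, tlist_cost):
    # 1. Charge target-list evaluation to every path first.
    costed = [c + tlist_cost for c in paths]
    costed_partial = [c + tlist_cost / WORKERS for c in partial_paths]
    # 2. Only now build Gather paths, so the comparison is fair.
    gathers = [c + GATHER_OVERHEAD for c in costed_partial]
    return min(costed + gathers)

# With an expensive target list, the Gather-based plan wins:
# serial: 100 + 1000 = 1100; gather: 120 + 250 + 10 = 380.
assert plan([100.0], [120.0], tlist_cost=1000.0) == 380.0
```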

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] why not parallel seq scan for slow functions

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> In particular, as Jeff and Amit point out, it
> may well be that (a) before apply_projection_to_path(), the cheapest
> plan is non-parallel and (b) after apply_projection_to_path(), the
> cheapest plan would be a Gather plan, except that it's too late
> because we've already thrown that path out.

I'm not entirely following.  I thought that add_path was set up to treat
"can be parallelized" as an independent dimension of merit, so that
parallel paths would always survive.

> What we ought to do, I think, is avoid generating gather paths until
> after we've applied the target list (and the associated costing
> changes) to both the regular path list and the partial path list.

Might be a tad messy to rearrange things that way.
        regards, tom lane



Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Wed, Sep 6, 2017 at 3:41 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> In particular, as Jeff and Amit point out, it
>> may well be that (a) before apply_projection_to_path(), the cheapest
>> plan is non-parallel and (b) after apply_projection_to_path(), the
>> cheapest plan would be a Gather plan, except that it's too late
>> because we've already thrown that path out.
>
> I'm not entirely following.  I thought that add_path was set up to treat
> "can be parallelized" as an independent dimension of merit, so that
> parallel paths would always survive.

It treats parallel-safety as an independent dimension of merit; a
parallel-safe plan is more meritorious than one of equal cost which is
not.  We need that because, for example, forming a partial
path for a join means joining a partial path to a parallel-safe path.
But that doesn't help us here; that's to make sure we can build the
necessary stuff *below* the Gather.  IOW, if we threw away
parallel-safe paths because there was a cheaper parallel-restricted
path, we might be unable to build a partial path for the join *at
all*.

Here, the Gather path is not parallel-safe, but rather
parallel-restricted: it's OK for it to exist in a plan that uses
parallelism (duh), but it can't be nested under another Gather (also
duh, kinda).  So before accounting for the differing projection cost,
the Gather path is doubly inferior: it is more expensive AND not
parallel-safe, whereas the competing non-parallel plan is both cheaper
AND parallel-safe.  After applying the expensive target list, the
parallel-safe plan gets a lot more expensive, but the Gather path gets
more expensive to a lesser degree because the projection step ends up
below the Gather and thus happens in parallel, so now the Gather plan,
still a loser on parallel-safety, is a winner on total cost and thus
ought to have been retained and, in fact, ought to have won.  Instead,
we threw it out too early.
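A small sketch of the dominance test being described (simplified, hypothetical numbers): a path survives unless another path is at least as good on every dimension of merit. Before projection, the Gather loses on both cost and parallel-safety and is discarded, even though the expensive target list applied later would have made it the overall winner.

```python
# Toy dominance check over two dimensions: cost and parallel-safety.

def dominates(a, b):
    """a dominates b if a is no worse on cost and on parallel-safety."""
    return (a["cost"] <= b["cost"]
            and (a["parallel_safe"] or not b["parallel_safe"]))

seqscan = {"cost": 100.0, "parallel_safe": True}
gather  = {"cost": 130.0, "parallel_safe": False}

# Before applying the target list: Gather is dominated and thrown away.
assert dominates(seqscan, gather)

# After adding slow(a)'s eval cost (serial above the seqscan, divided
# across 4 workers below the Gather), the comparison would have flipped:
seqscan_after = seqscan["cost"] + 1000.0        # 1100
gather_after  = gather["cost"] + 1000.0 / 4     # 380
assert gather_after < seqscan_after   # too late: the path is already gone
```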

>> What we ought to do, I think, is avoid generating gather paths until
>> after we've applied the target list (and the associated costing
>> changes) to both the regular path list and the partial path list.
>
> Might be a tad messy to rearrange things that way.

Why do you think I wanted you to do it?  :-)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] why not parallel seq scan for slow functions

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Wed, Sep 6, 2017 at 3:41 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I'm not entirely following.  I thought that add_path was set up to treat
>> "can be parallelized" as an independent dimension of merit, so that
>> parallel paths would always survive.

> Here, the Gather path is not parallel-safe, but rather
> parallel-restricted:

Ah, right, the problem is with the Gather not its sub-paths.

>> Might be a tad messy to rearrange things that way.

> Why do you think I wanted you to do it?  :-)

I'll think about it.
        regards, tom lane



Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Khandekar
Date:
On 5 September 2017 at 14:04, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Fri, Aug 25, 2017 at 10:08 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> On Mon, Aug 21, 2017 at 5:08 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>> (b) I have changed the costing of gather path for path target in
>>> generate_gather_paths which I am not sure is the best way. Another
>>> possibility could have been that I change the code in
>>> apply_projection_to_path as done in the previous patch and just call
>>> it from generate_gather_paths.  I have not done that because of your
>>> comment above thread ("is probably unsafe, because it might confuse
>>> code that reaches the modified-in-place path through some other
>>> pointer (e.g. code which expects the RelOptInfo's paths to still be
>>> sorted by cost).").  It is not clear to me what exactly is bothering
>>> you if we directly change costing in apply_projection_to_path.
>>
>> The point I was trying to make is that if you retroactively change the
>> cost of a path after you've already done add_path(), it's too late to
>> change your mind about whether to keep the path.  At least according
>> to my current understanding, that's the root of this problem in the
>> first place.  On top of that, add_path() and other routines that deal
>> with RelOptInfo path lists expect surviving paths to be ordered by
>> descending cost; if you frob the cost, they might not be properly
>> ordered any more.
>>
>
> Okay, now I understand your point, but I think we already change the
> cost of paths in apply_projection_to_path which is done after add_path
> for top level scan/join paths.  I think this matters a lot in case of
> Gather because the cost of computing target list can be divided among
> workers.  I have changed the patch such that parallel paths for top
> level scan/join node will be generated after pathtarget is ready.  I
> had kept the costing of path targets local to
> apply_projection_to_path() as that makes the patch simple.

I started with a quick review ... a couple of comments below :

- * If this is a baserel, consider gathering any partial paths we may have
- * created for it.  (If we tried to gather inheritance children, we could
+ * If this is a baserel and not the only rel, consider gathering any
+ * partial paths we may have created for it.  (If we tried to gather
 /* Create GatherPaths for any useful partial paths for rel */
-  generate_gather_paths(root, rel);
+  if (lev < levels_needed)
+     generate_gather_paths(root, rel, NULL);

I think at the above two places, and maybe in other places also, it's
better to mention the reason why we should generate the gather path
only if it's not the only rel.

----------

-       if (rel->reloptkind == RELOPT_BASEREL)
-               generate_gather_paths(root, rel);
+       if (rel->reloptkind == RELOPT_BASEREL &&
root->simple_rel_array_size > 2)
+               generate_gather_paths(root, rel, NULL);

Above, in the case of a partitioned table, root->simple_rel_array_size
includes the child rels. So even for a simple select without a join
rel, simple_rel_array_size would be > 2, and so a gather path would be
generated here for the root table, and again in grouping_planner().
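A toy model of this point (illustrative only; the real accounting lives in the planner's simple_rel_array setup): because slot 0 of simple_rel_array is unused and child rels of a partitioned table get their own slots, a "> 2" test does not reliably mean "there is a join".

```python
# Toy stand-in for the array sizing: 1 reserved slot + base rels +
# inheritance children.  Numbers are illustrative, not exact planner
# behavior.

def simple_rel_array_size(n_baserels, n_children):
    return 1 + n_baserels + n_children   # slot 0 is reserved

# Plain single-table select: size 2, so the "> 2" check correctly
# suppresses the early gather-path generation.
assert not simple_rel_array_size(1, 0) > 2

# Single partitioned table with 3 partitions: still no join, but
# size 5, so the check would wrongly generate gather paths both here
# and again in grouping_planner().
assert simple_rel_array_size(1, 3) > 2
```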



-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Tue, Sep 12, 2017 at 5:47 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> On 5 September 2017 at 14:04, Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> I started with a quick review ... a couple of comments below :
>
> - * If this is a baserel, consider gathering any partial paths we may have
> - * created for it.  (If we tried to gather inheritance children, we could
> + * If this is a baserel and not the only rel, consider gathering any
> + * partial paths we may have created for it.  (If we tried to gather
>
>   /* Create GatherPaths for any useful partial paths for rel */
> -  generate_gather_paths(root, rel);
> +  if (lev < levels_needed)
> +     generate_gather_paths(root, rel, NULL);
>
> I think at the above two places, and may be in other place also, it's
> better to mention the reason why we should generate the gather path
> only if it's not the only rel.
>

I think the comment you are looking for is present where we call
generate_gather_paths in grouping_planner. Instead of adding the same
or a similar comment in multiple places, how about if we just say
something like "See grouping_planner where we generate gather paths"
at all the other places?

> ----------
>
> -       if (rel->reloptkind == RELOPT_BASEREL)
> -               generate_gather_paths(root, rel);
> +       if (rel->reloptkind == RELOPT_BASEREL &&
> root->simple_rel_array_size > 2)
> +               generate_gather_paths(root, rel, NULL);
>
> Above, in case it's a partitioned table, root->simple_rel_array_size
> includes the child rels. So even if it's a simple select without a
> join rel, simple_rel_array_size would be > 2, and so gather path would
> be generated here for the root table, and again in grouping_planner().
>

Yeah, that could be a problem.  I think we should ensure that there is
no append rel list by checking root->append_rel_list.  Can you think
of a better way to handle it?

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Wed, Sep 13, 2017 at 9:39 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Tue, Sep 12, 2017 at 5:47 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> On 5 September 2017 at 14:04, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>
>> I started with a quick review ... a couple of comments below :
>>
>> - * If this is a baserel, consider gathering any partial paths we may have
>> - * created for it.  (If we tried to gather inheritance children, we could
>> + * If this is a baserel and not the only rel, consider gathering any
>> + * partial paths we may have created for it.  (If we tried to gather
>>
>>   /* Create GatherPaths for any useful partial paths for rel */
>> -  generate_gather_paths(root, rel);
>> +  if (lev < levels_needed)
>> +     generate_gather_paths(root, rel, NULL);
>>
>> I think at the above two places, and may be in other place also, it's
>> better to mention the reason why we should generate the gather path
>> only if it's not the only rel.
>>
>
> I think the comment you are looking is present where we are calling
> generate_gather_paths in grouping_planner. Instead of adding same or
> similar comment at multiple places, how about if we just say something
> like "See in grouping_planner where we generate gather paths" at all
> other places?
>
>> ----------
>>
>> -       if (rel->reloptkind == RELOPT_BASEREL)
>> -               generate_gather_paths(root, rel);
>> +       if (rel->reloptkind == RELOPT_BASEREL &&
>> root->simple_rel_array_size > 2)
>> +               generate_gather_paths(root, rel, NULL);
>>
>> Above, in case it's a partitioned table, root->simple_rel_array_size
>> includes the child rels. So even if it's a simple select without a
>> join rel, simple_rel_array_size would be > 2, and so gather path would
>> be generated here for the root table, and again in grouping_planner().
>>
>
> Yeah, that could be a problem.  I think we should ensure that there is
> no append rel list by checking root->append_rel_list.  Can you think
> of a better way to handle it?
>

The attached patch fixes both the review comments as discussed above.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Attachment

Re: [HACKERS] why not parallel seq scan for slow functions

From
Thomas Munro
Date:
On Thu, Sep 14, 2017 at 3:19 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> The attached patch fixes both the review comments as discussed above.

This cost stuff looks unstable:

test select_parallel          ... FAILED

!  Gather  (cost=0.00..623882.94 rows=9976 width=8)
     Workers Planned: 4
!    ->  Parallel Seq Scan on tenk1  (cost=0.00..623882.94 rows=2494 width=8)
 (3 rows)
 drop function costly_func(var1 integer);
--- 112,120 ----
 explain select ten, costly_func(ten) from tenk1;
                                  QUERY PLAN
 ----------------------------------------------------------------------------
!  Gather  (cost=0.00..625383.00 rows=10000 width=8)
     Workers Planned: 4
!    ->  Parallel Seq Scan on tenk1  (cost=0.00..625383.00 rows=2500 width=8)
 (3 rows)
 drop function costly_func(var1 integer);

From https://travis-ci.org/postgresql-cfbot/postgresql/builds/277376953 .

-- 
Thomas Munro
http://www.enterprisedb.com



Re: [HACKERS] why not parallel seq scan for slow functions

From
Jeff Janes
Date:
On Tue, Sep 19, 2017 at 1:17 PM, Thomas Munro <thomas.munro@enterprisedb.com> wrote:
On Thu, Sep 14, 2017 at 3:19 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> The attached patch fixes both the review comments as discussed above.

This cost stuff looks unstable:

test select_parallel          ... FAILED

!  Gather  (cost=0.00..623882.94 rows=9976 width=8)
     Workers Planned: 4
!    ->  Parallel Seq Scan on tenk1  (cost=0.00..623882.94 rows=2494 width=8)
  (3 rows)

  drop function costly_func(var1 integer);
--- 112,120 ----
  explain select ten, costly_func(ten) from tenk1;
                                   QUERY PLAN
  ----------------------------------------------------------------------------
!  Gather  (cost=0.00..625383.00 rows=10000 width=8)
     Workers Planned: 4
!    ->  Parallel Seq Scan on tenk1  (cost=0.00..625383.00 rows=2500 width=8)
  (3 rows)

that should be fixed by turning costs off in the explain, as is the tradition.


See attached.

Cheers,

Jeff
Attachment

Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Wed, Sep 20, 2017 at 3:05 AM, Jeff Janes <jeff.janes@gmail.com> wrote:
> On Tue, Sep 19, 2017 at 1:17 PM, Thomas Munro
> <thomas.munro@enterprisedb.com> wrote:
>>
>> On Thu, Sep 14, 2017 at 3:19 PM, Amit Kapila <amit.kapila16@gmail.com>
>> wrote:
>> > The attached patch fixes both the review comments as discussed above.
>
>
> that should be fixed by turning costs off in the explain, as is the tradition.
>

Right.  BTW, did you get a chance to run the original test (for which
you have reported the problem) with this patch?

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] why not parallel seq scan for slow functions

From
Jeff Janes
Date:
On Tue, Sep 19, 2017 at 9:15 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
On Wed, Sep 20, 2017 at 3:05 AM, Jeff Janes <jeff.janes@gmail.com> wrote:
> On Tue, Sep 19, 2017 at 1:17 PM, Thomas Munro
> <thomas.munro@enterprisedb.com> wrote:
>>
>> On Thu, Sep 14, 2017 at 3:19 PM, Amit Kapila <amit.kapila16@gmail.com>
>> wrote:
>> > The attached patch fixes both the review comments as discussed above.
>
>
> that should be fixed by turning costs off in the explain, as is the tradition.
>

Right.  BTW, did you get a chance to run the original test (for which
you have reported the problem) with this patch?

Yes, this patch makes it use a parallel scan, with great improvement.  No more having to \copy the data out, then run GNU split, then run my perl or python program with GNU parallel on each file.  Instead I just have to put a pl/perl wrapper around the function.

(next up, how to put a "create temp table alsdkfjaslfdj as" in front of it and keep it running in parallel)

Thanks,

Jeff


Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Thu, Sep 21, 2017 at 2:35 AM, Jeff Janes <jeff.janes@gmail.com> wrote:
> On Tue, Sep 19, 2017 at 9:15 PM, Amit Kapila <amit.kapila16@gmail.com>
> wrote:
>>
>> On Wed, Sep 20, 2017 at 3:05 AM, Jeff Janes <jeff.janes@gmail.com> wrote:
>> > On Tue, Sep 19, 2017 at 1:17 PM, Thomas Munro
>> > <thomas.munro@enterprisedb.com> wrote:
>> >>
>> >> On Thu, Sep 14, 2017 at 3:19 PM, Amit Kapila <amit.kapila16@gmail.com>
>> >> wrote:
>> >> > The attached patch fixes both the review comments as discussed above.
>> >
>> >
>> > that should be fixed by turning costs off in the explain, as is the
>> > tradition.
>> >
>>
>> Right.  BTW, did you get a chance to run the original test (for which
>> you have reported the problem) with this patch?
>
>
> Yes, this patch makes it use a parallel scan, with great improvement.
>

Thanks for the confirmation.  Find rebased patch attached.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Attachment

Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Sun, Nov 5, 2017 at 12:57 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> Thanks for the confirmation.  Find rebased patch attached.

This looks like it's on the right track to me.  I hope Tom will look
into it, but if he doesn't I may try to get it committed myself.

-    if (rel->reloptkind == RELOPT_BASEREL)
-        generate_gather_paths(root, rel);
+    if (rel->reloptkind == RELOPT_BASEREL &&
+        root->simple_rel_array_size > 2 &&
+        !root->append_rel_list)

This test doesn't look correct to me.  Actually, it doesn't look
anywhere close to correct to me.  So, one of us is very confused...
not sure whether it's you or me.
    simple_gather_path = (Path *)
        create_gather_path(root, rel, cheapest_partial_path, rel->reltarget,
                           NULL, NULL);
 
+
+    /* Add projection step if needed */
+    if (target && simple_gather_path->pathtarget != target)
+        simple_gather_path = apply_projection_to_path(root, rel,
+                                                      simple_gather_path, target);

Instead of using apply_projection_to_path, why not pass the correct
reltarget to create_gather_path?

+        /* Set or update cheapest_total_path and related fields */
+        set_cheapest(current_rel);

I wonder if it's really OK to call set_cheapest() a second time for
the same relation...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] why not parallel seq scan for slow functions

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> This looks like it's on the right track to me.  I hope Tom will look
> into it, but if he doesn't I may try to get it committed myself.

I do plan to take a look at it during this CF.

> +        /* Set or update cheapest_total_path and related fields */
> +        set_cheapest(current_rel);

> I wonder if it's really OK to call set_cheapest() a second time for
> the same relation...

It's safe enough, we do it in some places already when converting
a relation to dummy.  But having to do that in a normal code path
suggests that something's not right about the design ...
        regards, tom lane



Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Mon, Nov 6, 2017 at 3:51 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Sun, Nov 5, 2017 at 12:57 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> Thanks for the confirmation.  Find rebased patch attached.
>
> This looks like it's on the right track to me.  I hope Tom will look
> into it, but if he doesn't I may try to get it committed myself.
>
> -    if (rel->reloptkind == RELOPT_BASEREL)
> -        generate_gather_paths(root, rel);
> +    if (rel->reloptkind == RELOPT_BASEREL &&
> +        root->simple_rel_array_size > 2 &&
> +        !root->append_rel_list)
>
> This test doesn't look correct to me.  Actually, it doesn't look
> anywhere close to correct to me.  So, one of us is very confused...
> not sure whether it's you or me.
>

It is quite possible that I haven't got it right, but it shouldn't be
completely bogus, as it passes the regression tests and some manual
verification.  Can you explain your concern about this test?

>      simple_gather_path = (Path *)
>          create_gather_path(root, rel, cheapest_partial_path, rel->reltarget,
>                             NULL, NULL);
> +
> +    /* Add projection step if needed */
> +    if (target && simple_gather_path->pathtarget != target)
> +        simple_gather_path = apply_projection_to_path(root, rel,
> simple_gather_path, target);
>
> Instead of using apply_projection_to_path, why not pass the correct
> reltarget to create_gather_path?
>

We need to push it to the gather's subpath, as is done in
apply_projection_to_path, and then cost it accordingly.  I think if we
don't use apply_projection_to_path then we would end up duplicating
much of its code in generate_gather_paths.  In fact, I tried something
similar to what you are suggesting in the first version of the patch
[1] and it didn't turn out to be clean.  Also, I think we already do
something similar in create_ordered_paths.


> +        /* Set or update cheapest_total_path and related fields */
> +        set_cheapest(current_rel);
>
> I wonder if it's really OK to call set_cheapest() a second time for
> the same relation...
>

I think if we want we can avoid it by checking whether we have
generated any gather path for the relation (basically, check if it has
partial path list).  Another idea could be that we consider the
generation of gather/gathermerge for top-level scan/join relation as a
separate step and generate a new kind of upper rel for it which will
be a mostly dummy but will have paths for gather/gathermerge.

[1] - https://www.postgresql.org/message-id/CAA4eK1JUvL9WS9z%3D5hjSuSMNCo3TdBxFa0pA%3DE__E%3Dp6iUffUQ%40mail.gmail.com

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Mon, Nov 6, 2017 at 11:20 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Mon, Nov 6, 2017 at 3:51 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>> This looks like it's on the right track to me.  I hope Tom will look
>> into it, but if he doesn't I may try to get it committed myself.
>>
>> -    if (rel->reloptkind == RELOPT_BASEREL)
>> -        generate_gather_paths(root, rel);
>> +    if (rel->reloptkind == RELOPT_BASEREL &&
>> +        root->simple_rel_array_size > 2 &&
>> +        !root->append_rel_list)
>>
>> This test doesn't look correct to me.  Actually, it doesn't look
>> anywhere close to correct to me.  So, one of us is very confused...
>> not sure whether it's you or me.
>>
> It is quite possible that I haven't got it right, but it shouldn't be
> completely bogus as it stands the regression tests and some manual
> verification.  Can you explain what is your concern about this test?

Well, I suppose that test will fire for a baserel when the total
number of baserels is at least 3 and there's no inheritance involved.
But if there are 2 baserels, we're still not the topmost scan/join
target.  Also, even if inheritance is used, we might still be the
topmost scan/join target.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Mon, Nov 6, 2017 at 7:05 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Mon, Nov 6, 2017 at 11:20 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> On Mon, Nov 6, 2017 at 3:51 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>>> This looks like it's on the right track to me.  I hope Tom will look
>>> into it, but if he doesn't I may try to get it committed myself.
>>>
>>> -    if (rel->reloptkind == RELOPT_BASEREL)
>>> -        generate_gather_paths(root, rel);
>>> +    if (rel->reloptkind == RELOPT_BASEREL &&
>>> +        root->simple_rel_array_size > 2 &&
>>> +        !root->append_rel_list)
>>>
>>> This test doesn't look correct to me.  Actually, it doesn't look
>>> anywhere close to correct to me.  So, one of us is very confused...
>>> not sure whether it's you or me.
>>>
>> It is quite possible that I haven't got it right, but it shouldn't be
>> completely bogus as it stands the regression tests and some manual
>> verification.  Can you explain what is your concern about this test?
>
> Well, I suppose that test will fire for a baserel when the total
> number of baserels is at least 3 and there's no inheritance involved.
> But if there are 2 baserels, we're still not the topmost scan/join
> target.
>

No, if there are 2 baserels, then simple_rel_array_size will be 3.
The simple_rel_array_size is always the "number of relations" plus
"one".  See setup_simple_rel_arrays.

>  Also, even if inheritance is used, we might still be the
> topmost scan/join target.
>

Sure, but in that case, it won't generate the gather path here (due to
the "!root->append_rel_list" part of the check).  I am not sure I have
understood the second part of your question, so if my answer appears
inadequate, can you provide more details on what you are concerned
about?


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Mon, Nov 6, 2017 at 9:57 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> Well, I suppose that test will fire for a baserel when the total
>> number of baserels is at least 3 and there's no inheritance involved.
>> But if there are 2 baserels, we're still not the topmost scan/join
>> target.
>
> No, if there are 2 baserels, then simple_rel_array_size will be 3.
> The simple_rel_array_size is always the "number of relations" plus
> "one".  See setup_simple_rel_arrays.

Oh, wow.  That's surprising.

>>  Also, even if inheritance is used, we might still be the
>> topmost scan/join target.
>
> Sure, but in that case, it won't generate the gather path here (due to
> this part of check "!root->append_rel_list").  I am not sure whether I
> have understood the second part of your question, so if my answer
> appears inadequate, then can you provide more details on what you are
> concerned about?

I don't see why whether root->append_rel_list is empty is the relevant
thing here for deciding whether to generate gather paths at this
point.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Wed, Nov 8, 2017 at 2:51 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Mon, Nov 6, 2017 at 9:57 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>
>>>  Also, even if inheritance is used, we might still be the
>>> topmost scan/join target.
>>
>> Sure, but in that case, it won't generate the gather path here (due to
>> this part of check "!root->append_rel_list").  I am not sure whether I
>> have understood the second part of your question, so if my answer
>> appears inadequate, then can you provide more details on what you are
>> concerned about?
>
> I don't know why the question of why root->append_rel_list is empty is
> the relevant thing here for deciding whether to generate gather paths
> at this point.
>

This is required to prohibit generating a gather path for the top rel in
case of inheritance (Append node) at this place (we want to generate
it later, when the scan/join target is available).  For such a case, the
reloptkind will be RELOPT_BASEREL and simple_rel_array_size will be
greater than two, as it includes child rels as well.  So, the check for
root->append_rel_list will prohibit generating a gather path for such a
rel.  Now, for all the child rels of the Append, the reloptkind will be
RELOPT_OTHER_MEMBER_REL, so it won't generate a gather path here because
of the first part of the check (rel->reloptkind == RELOPT_BASEREL).


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Tue, Nov 7, 2017 at 9:41 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> This is required to prohibit generating a gather path for the top rel in
> case of inheritance (Append node) at this place (we want to generate
> it later, when the scan/join target is available).

OK, but why don't we want to generate it later when there isn't
inheritance involved?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Wed, Nov 8, 2017 at 4:34 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Tue, Nov 7, 2017 at 9:41 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> This is required to prohibit generating a gather path for the top rel in
>> case of inheritance (Append node) at this place (we want to generate
>> it later, when the scan/join target is available).
>
> OK, but why don't we want to generate it later when there isn't
> inheritance involved?
>

We do want to generate it later when there isn't inheritance involved,
but only if there is a single rel involved (simple_rel_array_size
<= 2).  The rule is something like this: we will generate the gather
paths at this stage only if more than one rel is involved and there
isn't inheritance involved.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Wed, Nov 8, 2017 at 7:26 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> We do want to generate it later when there isn't inheritance involved,
> but only if there is a single rel involved (simple_rel_array_size
> <= 2).  The rule is something like this: we will generate the gather
> paths at this stage only if more than one rel is involved and there
> isn't inheritance involved.

Why is that the correct rule?

Sorry if I'm being dense here.  I would have thought we'd want to skip
it for the topmost scan/join rel regardless of the presence or absence
of inheritance.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Wed, Nov 8, 2017 at 6:48 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Wed, Nov 8, 2017 at 7:26 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> We do want to generate it later when there isn't inheritance involved,
>> but only if there is a single rel involved (simple_rel_array_size
>> <= 2).  The rule is something like this: we will generate the gather
>> paths at this stage only if more than one rel is involved and there
>> isn't inheritance involved.
>
> Why is that the correct rule?
>
> Sorry if I'm being dense here.  I would have thought we'd want to skip
> it for the topmost scan/join rel regardless of the presence or absence
> of inheritance.
>

I think I understood your concern after some offlist discussion: it is
primarily about the inheritance-related check, which can skip the
generation of gather paths when it shouldn't.  So what might fit
better here is a straight check on the number of base rels, such that
we allow generating a gather path in set_rel_pathlist only if there
are multiple baserels involved.  I have used all_baserels, which I
think will work better for this purpose.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Attachment

Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Thu, Nov 9, 2017 at 3:47 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> I think I understood your concern after some offlist discussion and it
> is primarily due to the inheritance related check which can skip the
> generation of gather paths when it shouldn't.  So what might fit
> better here is a straight check on the number of base rels such that
> allow generating gather path in set_rel_pathlist, if there are
> multiple baserels involved.  I have used all_baserels which I think
> will work better for this purpose.

Yes, that looks a lot more likely to be correct.

Let's see what Tom thinks.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] why not parallel seq scan for slow functions

From
Michael Paquier
Date:
On Fri, Nov 10, 2017 at 4:42 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Thu, Nov 9, 2017 at 3:47 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> I think I understood your concern after some offlist discussion and it
>> is primarily due to the inheritance related check which can skip the
>> generation of gather paths when it shouldn't.  So what might fit
>> better here is a straight check on the number of base rels such that
>> allow generating gather path in set_rel_pathlist, if there are
>> multiple baserels involved.  I have used all_baserels which I think
>> will work better for this purpose.
>
> Yes, that looks a lot more likely to be correct.
>
> Let's see what Tom thinks.

Moved to next CF for extra reviews.
-- 
Michael


Re: [HACKERS] why not parallel seq scan for slow functions

From
Marina Polyakova
Date:
Hello everyone in this thread!

On 29-11-2017 8:01, Michael Paquier wrote:
> Moved to next CF for extra reviews.

Amit, I would like to ask some questions about your patch (and can you 
please rebase it on top of master?):

1)
+            path->total_cost -= (target->cost.per_tuple - oldcost.per_tuple) * path->rows;

Here we undo the changes that we make in this function earlier:

path->total_cost += target->cost.startup - oldcost.startup +
    (target->cost.per_tuple - oldcost.per_tuple) * path->rows;

Perhaps we should avoid making these "reversible" changes in this case 
in the first place?

2)
          gpath->subpath = (Path *)
              create_projection_path(root,
                                     gpath->subpath->parent,
                                     gpath->subpath,
                                     target);
...
+        if (((ProjectionPath *) gpath->subpath)->dummypp)
+        {
...
+            path->total_cost += (target->cost.per_tuple - oldcost.per_tuple) * gpath->subpath->rows;
+        }
+        else
+        {
...
+            path->total_cost += (cpu_tuple_cost + target->cost.per_tuple) * gpath->subpath->rows;
+        }

As I understand it, here in the if-else block we change the run costs of 
gpath in the same way as they were changed for its subpath in the 
function create_projection_path earlier. But for the startup costs we 
always subtract the cost of the old target:

path->startup_cost += target->cost.startup - oldcost.startup;
path->total_cost += target->cost.startup - oldcost.startup +
    (target->cost.per_tuple - oldcost.per_tuple) * path->rows;

Should we change the startup costs of gpath in this way if 
((ProjectionPath *) gpath->subpath)->dummypp is false?

3)
      simple_gather_path = (Path *)
          create_gather_path(root, rel, cheapest_partial_path, rel->reltarget,
                             NULL, NULL);
+    /* Add projection step if needed */
+    if (target && simple_gather_path->pathtarget != target)
+        simple_gather_path = apply_projection_to_path(root, rel, simple_gather_path, target);
...
+        path = (Path *) create_gather_merge_path(root, rel, subpath, rel->reltarget,
+                                                 subpath->pathkeys, NULL, NULL);
+        /* Add projection step if needed */
+        if (target && path->pathtarget != target)
+            path = apply_projection_to_path(root, rel, path, target);
...
@@ -2422,16 +2422,27 @@ apply_projection_to_path(PlannerInfo *root,
...
          gpath->subpath = (Path *)
              create_projection_path(root,
                                     gpath->subpath->parent,
                                     gpath->subpath,
                                     target);

The target is changing, so we change it for the gather(merge) node and 
for its subpath.  Don't we have to do this work (replace the subpath by 
calling the function create_projection_path if the target is different) 
in the functions create_gather(_merge)_path too?  I suppose that the 
target of the subpath affects its costs => the costs of the 
gather(merge) node in the functions cost_gather(_merge) (=> the costs of 
the gather(merge) node in the function apply_projection_to_path).

4)
+         * Consider ways to implement parallel paths.  We always skip
+         * generating parallel path for top level scan/join nodes till the
+         * pathtarget is computed.  This is to ensure that we can account for
+         * the fact that most of the target evaluation work will be performed
+         * in workers.
+         */
+        generate_gather_paths(root, current_rel, scanjoin_target);
+
+        /* Set or update cheapest_total_path and related fields */
+        set_cheapest(current_rel);

As I understand it (if I understand correctly, thanks to the comments 
of Robert Haas and Tom Lane :), after that we cannot use the function 
apply_projection_to_path for paths in the current_rel->pathlist without 
risking that the cheapest path will change.  And we have several calls 
to the function adjust_paths_for_srfs (which uses 
apply_projection_to_path for paths in the current_rel->pathlist) in 
grouping_planner after generating the gather paths:

        /* Now fix things up if scan/join target contains SRFs */
        if (parse->hasTargetSRFs)
            adjust_paths_for_srfs(root, current_rel,
                                  scanjoin_targets,
                                  scanjoin_targets_contain_srfs);
...
            /* Fix things up if grouping_target contains SRFs */
            if (parse->hasTargetSRFs)
                adjust_paths_for_srfs(root, current_rel,
                                      grouping_targets,
                                      grouping_targets_contain_srfs);
...
            /* Fix things up if sort_input_target contains SRFs */
            if (parse->hasTargetSRFs)
                adjust_paths_for_srfs(root, current_rel,
                                      sort_input_targets,
                                      sort_input_targets_contain_srfs);
...
        /* Fix things up if final_target contains SRFs */
        if (parse->hasTargetSRFs)
            adjust_paths_for_srfs(root, current_rel,
                                  final_targets,
                                  final_targets_contain_srfs);

Maybe we should add the appropriate call to the function set_cheapest() 
after this if parse->hasTargetSRFs is true?

5)
+     * If this is a baserel and not the top-level rel, consider gathering any
+     * partial paths we may have created for it.  (If we tried to gather
+     * inheritance children, we could end up with a very large number of
+     * gather nodes, each trying to grab its own pool of workers, so don't do
+     * this for otherrels.  Instead, we'll consider gathering partial paths
+     * for the parent appendrel.)  We can check for joins by counting the
+     * membership of all_baserels (note that this correctly counts inheritance
+     * trees as single rels).  The gather path for top-level rel is generated
+     * once path target is available.  See grouping_planner.
       */
-    if (rel->reloptkind == RELOPT_BASEREL)
-        generate_gather_paths(root, rel);
+    if (rel->reloptkind == RELOPT_BASEREL &&
+        bms_membership(root->all_baserels) != BMS_SINGLETON)
+        generate_gather_paths(root, rel, NULL);

Do I understand correctly that here there's an assumption about 
root->query_level (which we don't need to check)?

-- 
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Fri, Dec 29, 2017 at 7:56 PM, Marina Polyakova
<m.polyakova@postgrespro.ru> wrote:
> Hello everyone in this thread!
>
> On 29-11-2017 8:01, Michael Paquier wrote:
>>
>> Moved to next CF for extra reviews.
>
>
> Amit, I would like to ask some questions about your patch (and can you
> please rebase it on top of master?):
>

Thanks for looking into the patch.

> 1)
> +                       path->total_cost -= (target->cost.per_tuple - oldcost.per_tuple) * path->rows;
>
> Here we undo the changes that we make in this function earlier:
>
> path->total_cost += target->cost.startup - oldcost.startup +
>         (target->cost.per_tuple - oldcost.per_tuple) * path->rows;
>
> Perhaps we should not start these "reversible" changes in this case from the
> very beginning?
>

We can do it that way as well; however, when the patch was written we
didn't have GatherMerge handling in that function, due to which it
appeared to me that changing the code the way you are suggesting would
complicate it.  But now it looks saner to do it that way.  I have
changed the code accordingly.


> 2)
>                 gpath->subpath = (Path *)
>                         create_projection_path(root,
>                                                gpath->subpath->parent,
>                                                gpath->subpath,
>                                                target);
> ...
> +               if (((ProjectionPath *) gpath->subpath)->dummypp)
> +               {
> ...
> +                       path->total_cost += (target->cost.per_tuple - oldcost.per_tuple) * gpath->subpath->rows;
> +               }
> +               else
> +               {
> ...
> +                       path->total_cost += (cpu_tuple_cost + target->cost.per_tuple) * gpath->subpath->rows;
> +               }
>
> As I understand it, here in the if-else block we change the run costs of
> gpath in the same way as they were changed for its subpath in the function
> create_projection_path earlier. But for the startup costs we always subtract
> the cost of the old target:
>
> path->startup_cost += target->cost.startup - oldcost.startup;
> path->total_cost += target->cost.startup - oldcost.startup +
>         (target->cost.per_tuple - oldcost.per_tuple) * path->rows;
>
> Should we change the startup costs of gpath in this way if ((ProjectionPath
> *) gpath->subpath)->dummypp is false?
>

The startup costs will be computed in the way you mentioned for both
gpath and gmpath as that code is executed before we do any handling
for Gather or GatherMerge path.

> 3)
>         simple_gather_path = (Path *)
>                 create_gather_path(root, rel, cheapest_partial_path, rel->reltarget,
>                                    NULL, NULL);
> +       /* Add projection step if needed */
> +       if (target && simple_gather_path->pathtarget != target)
> +               simple_gather_path = apply_projection_to_path(root, rel,
> +                                                             simple_gather_path, target);
> ...
> +               path = (Path *) create_gather_merge_path(root, rel, subpath, rel->reltarget,
> +                                                        subpath->pathkeys, NULL, NULL);
> +               /* Add projection step if needed */
> +               if (target && path->pathtarget != target)
> +                       path = apply_projection_to_path(root, rel, path, target);
> ...
> @@ -2422,16 +2422,27 @@ apply_projection_to_path(PlannerInfo *root,
> ...
>                 gpath->subpath = (Path *)
>                         create_projection_path(root,
>                                                gpath->subpath->parent,
>                                                gpath->subpath,
>                                                target);
>
> The target is changing so we change it for the gather(merge) node and for
> its subpath. Do not we have to do this work (replace the subpath by calling
> the function create_projection_path if the target is different) in the
> functions create_gather(_merge)_path too? I suppose that the target of the
> subpath affects its costs => the costs of the gather(merge) node in the
> functions cost_gather(_merge) (=> the costs of the gather(merge) node in the
> function apply_projection_to_path).
>

The cost impact due to a different target will be taken care of when
we call apply_projection_to_path.  I am not sure I understand your
question completely, but do you see anything suspicious in
cost_gather(_merge) that the target-list costing could impact?
Generally, we compute the target-list cost at a later stage of
planning, once the path target is formed.

> 4)
> +                * Consider ways to implement parallel paths.  We always skip
> +                * generating parallel path for top level scan/join nodes till the
> +                * pathtarget is computed.  This is to ensure that we can account for
> +                * the fact that most of the target evaluation work will be performed
> +                * in workers.
> +                */
> +               generate_gather_paths(root, current_rel, scanjoin_target);
> +
> +               /* Set or update cheapest_total_path and related fields */
> +               set_cheapest(current_rel);
>
> As I understand it (if correctly, thank the comments of Robert Haas and Tom
> Lane :), after that we cannot use the function apply_projection_to_path for
> paths in the current_rel->pathlist without risking that the cheapest path
> will change. And we have several calls to the function adjust_paths_for_srfs
> (which uses apply_projection_to_path for paths in the current_rel->pathlist)
> in grouping_planner after generating the gather paths:
>

I think that is true in general, even without this patch and without
having parallel paths.  The main thing we are trying to achieve with
this patch is to generate parallel paths for the top-level rel after
the path target is computed, so that the costing can take into account
the fact that the bulk of target-list evaluation can be done in
workers.  I think adjust_paths_for_srfs could impact costing for
parallel paths if ProjectSet nodes were allowed to be pushed to
workers, but I don't see that happening (see
create_set_projection_path).  If by any chance you have an example
test that shows the problem you have mentioned, it would be easier to
see whether it can impact the selection of parallel paths.


> 5)
> +        * If this is a baserel and not the top-level rel, consider gathering any
> +        * partial paths we may have created for it.  (If we tried to gather
> +        * inheritance children, we could end up with a very large number of
> +        * gather nodes, each trying to grab its own pool of workers, so don't do
> +        * this for otherrels.  Instead, we'll consider gathering partial paths
> +        * for the parent appendrel.).  We can check for joins by counting the
> +        * membership of all_baserels (note that this correctly counts inheritance
> +        * trees as single rels).  The gather path for top-level rel is generated
> +        * once path target is available.  See grouping_planner.
>          */
> -       if (rel->reloptkind == RELOPT_BASEREL)
> -               generate_gather_paths(root, rel);
> +       if (rel->reloptkind == RELOPT_BASEREL &&
> +               bms_membership(root->all_baserels) != BMS_SINGLETON)
> +               generate_gather_paths(root, rel, NULL);
>
> Do I understand correctly that here there's an assumption about
> root->query_level (which we don't need to check)?
>

I don't think we need to check query_level, but can you please be more
explicit about the point you have in mind?  Remember that we never
generate a gather path atop another gather path; if that is the
assumption you have in mind, then we don't need any extra check for
it.
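As a side note on the bms_membership() test in the hunk above: it classifies a set as empty, a singleton, or multi-membered.  A simplified single-word model of that semantics (not the real bitmapset code, which handles multi-word sets) is:

```c
#include <assert.h>

typedef enum { BMS_EMPTY_SET, BMS_SINGLETON, BMS_MULTIPLE } BMS_Membership;

/* Model a bitmapset as a single machine word of bits. */
static BMS_Membership bms_membership_model(unsigned long word)
{
    if (word == 0)
        return BMS_EMPTY_SET;
    if ((word & (word - 1)) == 0)   /* exactly one bit set */
        return BMS_SINGLETON;
    return BMS_MULTIPLE;
}
```

So `bms_membership(root->all_baserels) != BMS_SINGLETON` is true exactly when more than one base rel is involved; the singleton case is the topmost plain scan, whose gather path is deferred to grouping_planner.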


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Attachment

Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Tue, Jan 2, 2018 at 6:38 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>  [ new patch ]

I think that grouping_planner() could benefit from a slightly more
extensive rearrangement.  With your patch applied, the order of
operations is:

1. compute the scan/join target
2. apply the scan/join target to all paths in current_rel's pathlist
3. generate gather paths, possibly adding more stuff to current_rel's pathlist
4. rerun set_cheapest
5. apply the scan/join target, if parallel safe, to all paths in the
current rel's partial_pathlist, for the benefit of upper planning
steps
6. clear the partial pathlist if the target list is not parallel safe

I at first thought this was outright broken because step #3 imposes
the scan/join target without testing it for parallel-safety, but then
I realized that generate_gather_paths will apply that target list by
using apply_projection_to_path, which makes an is_parallel_safe test
of its own.  But it doesn't seem good for step 3 to test the
parallel-safety of the target list separately for each path and then
have grouping_planner do it one more time for the benefit of upper
planning steps.  Instead, I suggest that we try to get rid of the
logic in apply_projection_to_path that knows about Gather and Gather
Merge specifically.  I think we can do that if grouping_planner does
this:

1. compute the scan/join target
2. apply the scan/join target, if parallel safe, to all paths in the
current rel's partial_pathlist
3. generate gather paths
4. clear the partial pathlist if the target list is not parallel safe
5. apply the scan/join target to all paths in current_rel's pathlist
6. rerun set_cheapest

That seems like a considerably more logical order of operations.  It
avoids not only the expense of testing the scanjoin_target for
parallel-safety multiple times, but the ugliness of having
apply_projection_to_path know about Gather and Gather Merge as a
special case.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Sat, Jan 27, 2018 at 2:50 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Tue, Jan 2, 2018 at 6:38 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>  [ new patch ]
>
> I think that grouping_planner() could benefit from a slightly more
> extensive rearrangement.  With your patch applied, the order of
> operations is:
>
> 1. compute the scan/join target
> 2. apply the scan/join target to all paths in current_rel's pathlist
> 3. generate gather paths, possibly adding more stuff to current_rel's pathlist
> 4. rerun set_cheapest
> 5. apply the scan/join target, if parallel safe, to all paths in the
> current rel's partial_pathlist, for the benefit of upper planning
> steps
> 6. clear the partial pathlist if the target list is not parallel safe
>
> I at first thought this was outright broken because step #3 imposes
> the scan/join target without testing it for parallel-safety, but then
> I realized that generate_gather_paths will apply that target list by
> using apply_projection_to_path, which makes an is_parallel_safe test
> of its own.  But it doesn't seem good for step 3 to test the
> parallel-safety of the target list separately for each path and then
> have grouping_planner do it one more time for the benefit of upper
> planning steps.  Instead, I suggest that we try to get rid of the
> logic in apply_projection_to_path that knows about Gather and Gather
> Merge specifically.  I think we can do that if grouping_planner does
> this:
>
> 1. compute the scan/join target
> 2. apply the scan/join target, if parallel safe, to all paths in the
> current rel's partial_pathlist
> 3. generate gather paths
> 4. clear the partial pathlist if the target list is not parallel safe
> 5. apply the scan/join target to all paths in current_rel's pathlist
> 6. rerun set_cheapest
>
> That seems like a considerably more logical order of operations.  It
> avoids not only the expense of testing the scanjoin_target for
> parallel-safety multiple times, but the ugliness of having
> apply_projection_to_path know about Gather and Gather Merge as a
> special case.
>

If we want to get rid of the Gather (Merge) checks in
apply_projection_to_path(), then we need some way to add a projection
path to the subpath of the gather node even if that subpath is capable
of projection, as we do now.  I think changing the order of applying
the scan/join target won't address that unless we decide to do it for
every partial path.  Another way could be to handle that in
generate_gather_paths, but I think that won't be the ideal place to
add a projection.

If we want, we can compute the parallel-safety of scan/join target
once in grouping_planner and then pass it in apply_projection_to_path
to address your main concern.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Sun, Jan 28, 2018 at 10:13 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> If we want to get rid of the Gather (Merge) checks in
> apply_projection_to_path(), then we need some way to add a projection
> path to the subpath of the gather node even if that subpath is capable
> of projection, as we do now.  I think changing the order of applying
> the scan/join target won't address that unless we decide to do it for
> every partial path.  Another way could be to handle that in
> generate_gather_paths, but I think that won't be the ideal place to
> add a projection.
>
> If we want, we can compute the parallel-safety of scan/join target
> once in grouping_planner and then pass it in apply_projection_to_path
> to address your main concern.

I spent some time today hacking on this; see attached.  It needs more
work, but you can see what I have in mind.  It's not quite the same as
what I outlined before because that turned out to not quite work, but
it does remove most of the logic from apply_projection_to_path().

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment

Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Tue, Jan 30, 2018 at 3:30 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Sun, Jan 28, 2018 at 10:13 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> If we want to get rid of the Gather (Merge) checks in
>> apply_projection_to_path(), then we need some way to add a projection
>> path to the subpath of the gather node even if that subpath is capable
>> of projection, as we do now.  I think changing the order of applying
>> the scan/join target won't address that unless we decide to do it for
>> every partial path.  Another way could be to handle that in
>> generate_gather_paths, but I think that won't be the ideal place to
>> add a projection.
>>
>> If we want, we can compute the parallel-safety of scan/join target
>> once in grouping_planner and then pass it in apply_projection_to_path
>> to address your main concern.
>
> I spent some time today hacking on this; see attached.  It needs more
> work, but you can see what I have in mind.
>

I can see what you have in mind, but I think we still need to change
the parallel_safe flag of the path if the *_target is not parallel
safe, either inside apply_projection_to_path or maybe outside where it
is called.  Basically, I am talking about the below code:

@@ -2473,57 +2469,6 @@ apply_projection_to_path(PlannerInfo *root,
{
..
- else if (path->parallel_safe &&
- !is_parallel_safe(root, (Node *) target->exprs))
- {
- /*
- * We're inserting a parallel-restricted target list into a path
- * currently marked parallel-safe, so we have to mark it as no longer
- * safe.
- */
- path->parallel_safe = false;
- }
-
..
}

I can take care of dealing with this unless you think otherwise.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Wed, Jan 31, 2018 at 11:48 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> I can see what you have in mind, but I think we still need to change
> the parallel_safe flag of the path if the *_target is not parallel
> safe, either inside apply_projection_to_path or maybe outside where
> it is called.

Hmm.  You have a point.

That's not the only problem, though.  With these changes, we need to
check every place where  apply_projection_to_path is used to see
whether we're losing the ability to push down the target list below
Gather (Merge) in some situations.  If we are, then we need to see if
we can supply the correct target list to the code that generates the
partial path, before Gather (Merge) is added.

There are the following call sites:

* grouping_planner.
* create_ordered_paths (x2).
* adjust_paths_for_srfs.
* build_minmax_path.
* recurse_set_operations (x2).

The cases in recurse_set_operations() don't matter, because the path
whose target list we're adjusting is known not to be a Gather path.
In the first call, it's definitely a Subquery Scan, and in the second
case it'll be a path implementing some set operation.
grouping_planner() is the core thing the patch is trying to fix, and
as far as I can tell the logic changes there are adequate.  Also, the
second case in create_ordered_paths() is fixed: the patch changes
things so that it inserts a projection path before Gather Merge if
that's safe to do so.

The other cases aren't so clear.  In the case of the first call within
create_ordered_paths, there's no loss in the !is_sorted case because
apply_projection_to_path will be getting called on a Sort path.  But
when is_sorted is true, the current code can push a target list into a
Gather or Gather Merge that was created with some other target list,
and with the patch it can't.  I'm not quite sure what sort of query
would trigger that problem, but it seems like there's something to
worry about there.  Similarly, I can't see any obvious reason why this
isn't a problem for adjust_paths_for_srfs and build_minmax_path as
well, although I haven't tried to construct queries that hit those
cases, either.

Now, we could just give up on this approach and leave that code in
apply_projection_to_path, but what's bothering me is that, presumably,
any place where that code is actually getting used has the same
problem that we're trying to fix in grouping_planner: the tlist eval
costs are not being factored into the decision as to which path we
should choose, which might make a good parallel path lose to an
inferior non-parallel path.  It would be best to fix that throughout
the code base rather than only fixing the more common paths -- if we
can do so with a reasonable amount of work.

Here's a new version which (a) restores the code that you pointed out,
(b) always passes the target to generate_gather_paths() instead of
treating NULL as a special case, and (c) reworks some of the comments
you added.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment

Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Fri, Feb 2, 2018 at 12:15 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Wed, Jan 31, 2018 at 11:48 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> The other cases aren't so clear.  In the case of the first call within
> create_ordered_paths, there's no loss in the !is_sorted case because
> apply_projection_to_path will be getting called on a Sort path.  But
> when is_sorted is true, the current code can push a target list into a
> Gather or Gather Merge that was created with some other target list,
> and with the patch it can't.  I'm not quite sure what sort of query
> would trigger that problem, but it seems like there's something to
> worry about there.
>

I think query plans that involve Gather Merge -> Parallel Index
Scan can be impacted.  For example, consider the below case:

With parallel-paths-tlist-cost-rmh-v2.patch
postgres=# set cpu_operator_cost=0;
SET
postgres=# set parallel_setup_cost=0;set parallel_tuple_cost=0;set
min_parallel_table_scan_size=0;set
max_parallel_workers_per_gather=4;
SET
SET
SET
SET
postgres=# explain (costs off, verbose) select simple_func(aid) from
pgbench_accounts where aid > 1000 order by aid;
                                      QUERY PLAN
---------------------------------------------------------------------------------------
 Gather Merge
   Output: simple_func(aid), aid
   Workers Planned: 4
   ->  Parallel Index Only Scan using pgbench_accounts_pkey on
public.pgbench_accounts
         Output: aid
         Index Cond: (pgbench_accounts.aid > 1000)
(6 rows)

HEAD and parallel_paths_include_tlist_cost_v7
postgres=# explain (costs off, verbose) select simple_func(aid) from
pgbench_accounts where aid > 1000 order by aid;
                                      QUERY PLAN
---------------------------------------------------------------------------------------
 Gather Merge
   Output: (simple_func(aid)), aid
   Workers Planned: 4
   ->  Parallel Index Only Scan using pgbench_accounts_pkey on
public.pgbench_accounts
         Output: simple_func(aid), aid
         Index Cond: (pgbench_accounts.aid > 1000)
(6 rows)

For the above test, I have initialized pgbench with 100 scale factor.

It shows that with the patch parallel-paths-tlist-cost-rmh-v2.patch,
we will lose the capability to push the target list down in some
cases.  One lame idea could be to detect, at the call location,
whether it is a Gather (Merge) and then push the target list, but
doing so at all the locations doesn't sound like a good alternative
compared to the other approach, where we push the target list in
apply_projection_to_path.

>  Similarly I can't see any obvious reason why this
> isn't a problem for adjust_paths_for_srfs and build_minmax_path as
> well, although I haven't tried to construct queries that hit those
> cases, either.
>

Neither have I, but I can give it a try if we expect something
different from the results of the above example.

> Now, we could just give up on this approach and leave that code in
> apply_projection_to_path, but what's bothering me is that, presumably,
> any place where that code is actually getting used has the same
> problem that we're trying to fix in grouping_planner: the tlist evals
> costs are not being factored into the decision as to which path we
> should choose, which might make a good parallel path lose to an
> inferior non-parallel path.
>

Your concern is valid, but doesn't the same problem exist in the other
approach as well?  There, too, we can call adjust_paths_for_srfs after
generating the gather path, which means that we might lose some
opportunity to reduce the relative cost of parallel paths due to
tlists having SRFs.  Also, a similar problem can happen in
create_ordered_paths for cases like the one in the example above.

>  It would be best to fix that throughout
> the code base rather than only fixing the more common paths -- if we
> can do so with a reasonable amount of work.
>

Agreed.  I think one way to achieve that is, instead of discarding
parallel paths based on cost, to retain them till a later phase of
planning, something like what we do for ordered paths.  In that case,
the way the changes have been done in the patch
parallel_paths_include_tlist_cost_v7 will work.  I think it will be
helpful for other cases as well if we keep the parallel paths as
alternative paths till a later stage of planning (we discussed this
while parallelizing subplans as well); however, that is a bigger
project on its own.  I don't think it will be too bad to go with what
we have in parallel_paths_include_tlist_cost_v7, with the intent that
when we do the other project in the future, the other cases will be
dealt with automatically.

I have updated the patch parallel_paths_include_tlist_cost_v7 by
changing some of the comments based on
parallel-paths-tlist-cost-rmh-v2.patch.

I have one additional comment on parallel-paths-tlist-cost-rmh-v2.patch
@@ -374,6 +374,14 @@ cost_gather(GatherPath *path, PlannerInfo *root,
  startup_cost += parallel_setup_cost;
  run_cost += parallel_tuple_cost * path->path.rows;

+	/* add tlist eval costs only if projecting */
+	if (path->path.pathtarget != path->subpath->pathtarget)
+	{
+		/* tlist eval costs are paid per output row, not per tuple scanned */
+		startup_cost += path->path.pathtarget->cost.startup;
+		run_cost += path->path.pathtarget->cost.per_tuple * path->path.rows;
+	}

I think when the tlist eval costs are added, we should subtract the
cost previously added for the subpath's target.
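To illustrate the point, here is a tiny self-contained model (hypothetical names; not the actual cost_gather() code) of charging tlist eval costs at the Gather while crediting back what the subpath's own target already contributed:

```c
#include <assert.h>

/* Per-tuple target-list eval cost of a pathtarget. */
typedef struct { double startup; double per_tuple; } TargetCost;

/* Run-cost contribution of tlist evaluation at the Gather node, net of the
 * per-tuple cost the subpath's own target already contributed. */
static double gather_tlist_run_cost(TargetCost gather_tgt, TargetCost sub_tgt,
                                    double rows, int projecting)
{
    if (!projecting)
        return 0.0;     /* same pathtarget: nothing extra to charge */
    return (gather_tgt.per_tuple - sub_tgt.per_tuple) * rows;
}
```

The hunk quoted above charges `pathtarget->cost.per_tuple * rows` in full; the review comment suggests subtracting the subpath's per-tuple contribution, which is what the net formula here models.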

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Attachment

Re: [HACKERS] why not parallel seq scan for slow functions

From
Ashutosh Bapat
Date:
I happened to look at the patch for something else. But here are some
comments. If any of those have been already discussed, please feel
free to ignore. I have gone through the thread cursorily, so I might
have missed something.

In grouping_planner() we call query_planner() first which builds the
join tree and creates paths, then calculate the target for scan/join
rel which is applied on the topmost scan join rel. I am wondering
whether we can reverse this order to calculate the target list of
scan/join relation and pass it to standard_join_search() (or the hook)
through query_planner().  standard_join_search() would then set this
as the reltarget of the topmost relation, and every path created for
it would have that target, applying projection if needed.  This way we
avoid calling generate_gather_paths() at two places.  Right now
generate_gather_paths() seems to be the only thing benefiting from
this, but what about FDWs and custom paths whose costs may change when
the targetlist changes?  For custom paths I am considering GPU
optimization paths.  Also, this might address Tom's worry, "But having
to do that in a normal code path suggests that something's not right
about the design ..."

Here are some comments on the patch.

+                /*
+                 * Except for the topmost scan/join rel, consider gathering
+                 * partial paths.  We'll do the same for the topmost scan/join
This function only works on join relations. Mentioning scan rel is confusing.

index 6e842f9..5206da7 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -481,14 +481,21 @@ set_rel_pathlist(PlannerInfo *root, RelOptInfo *rel,
     }

+     *
+     * Also, if this is the topmost scan/join rel (that is, the only baserel),
+     * we postpone this until the final scan/join targetlist is available (see

Mentioning join rel here is confusing since we deal with base relations here.

+        bms_membership(root->all_baserels) != BMS_SINGLETON)

set_tablesample_rel_pathlist() is also using this method to decide whether
there are any joins in the query.  Maybe macro-ize this and use that macro
in these two places?

- * for the specified relation.  (Otherwise, add_partial_path might delete a
+ * for the specified relation. (Otherwise, add_partial_path might delete a

Unrelated change?

+    /* Add projection step if needed */
+    if (target && simple_gather_path->pathtarget != target)

If the target was copied someplace, this test will fail. Probably we want to
check the contents of the PathTarget structure? Right now copy_pathtarget() is
not called from many places, and all those places modify the copied target, so
this isn't a problem. But we can't guarantee that in future. Similar comment
for gather_merge path creation.

+        simple_gather_path = apply_projection_to_path(root,
+                                                      rel,
+                                                      simple_gather_path,
+                                                      target);
+

Why don't we incorporate those changes in create_gather_path() by passing it
the projection target instead of rel->reltarget? Similar comment for
gather_merge path creation.

+            /*
+             * Except for the topmost scan/join rel, consider gathering
+             * partial paths.  We'll do the same for the topmost scan/join rel

Mentioning scan rel is confusing here.


 deallocate tenk1_count;
+explain (costs off) select ten, costly_func(ten) from tenk1;

verbose output will show that the parallel seq scan's targetlist has
costly_func() in it. Isn't that what we want to test here?


-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Thu, Feb 15, 2018 at 4:18 PM, Ashutosh Bapat
<ashutosh.bapat@enterprisedb.com> wrote:
> I happened to look at the patch for something else. But here are some
> comments. If any of those have been already discussed, please feel
> free to ignore. I have gone through the thread cursorily, so I might
> have missed something.
>
> In grouping_planner() we call query_planner() first which builds the
> join tree and creates paths, then calculate the target for scan/join
> rel which is applied on the topmost scan join rel. I am wondering
> whether we can reverse this order to calculate the target list of
> scan/join relation and pass it to standard_join_search() (or the hook)
> through query_planner().
>

I think the reason for doing it in that order is that we can't compute
the target's width till after query_planner().  See the note below in
the code:

/*
* Convert the query's result tlist into PathTarget format.
*
* Note: it's desirable to not do this till after query_planner(),
* because the target width estimates can use per-Var width numbers
* that were obtained within query_planner().
*/
final_target = create_pathtarget(root, tlist);

Now, I think we can try to juggle the code in a way that the width is
computed later but the rest is done earlier.  However, I think that
will be a somewhat major change and still won't address all kinds of
cases (like for ordered paths) unless we can try to get all kinds of
targets pushed down the call stack.  Also, I am not sure if that is
the only reason, or whether there are some other assumptions about
this calling order as well.

>
> Here are some comments on the patch.
>

Thanks for looking into the patch.  As of now, we are evaluating the
right approach for this patch, so let's wait for Robert's reply.
After we agree on the approach, I will address your comments.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] why not parallel seq scan for slow functions

From
Ashutosh Bapat
Date:
On Thu, Feb 15, 2018 at 7:47 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Thu, Feb 15, 2018 at 4:18 PM, Ashutosh Bapat
> <ashutosh.bapat@enterprisedb.com> wrote:
>> I happened to look at the patch for something else. But here are some
>> comments. If any of those have been already discussed, please feel
>> free to ignore. I have gone through the thread cursorily, so I might
>> have missed something.
>>
>> In grouping_planner() we call query_planner() first which builds the
>> join tree and creates paths, then calculate the target for scan/join
>> rel which is applied on the topmost scan join rel. I am wondering
>> whether we can reverse this order to calculate the target list of
>> scan/join relation and pass it to standard_join_search() (or the hook)
>> through query_planner().
>>
>
> I think the reason for doing it in that order is that we can't compute
> the target's width till after query_planner().  See the note below in
> the code:
>
> /*
> * Convert the query's result tlist into PathTarget format.
> *
> * Note: it's desirable to not do this till after query_planner(),
> * because the target width estimates can use per-Var width numbers
> * that were obtained within query_planner().
> */
> final_target = create_pathtarget(root, tlist);
>
> Now, I think we can try to juggle the code in a way that the width can
> be computed later, but the rest can be done earlier.

/* Convenience macro to get a PathTarget with valid cost/width fields */
#define create_pathtarget(root, tlist) \
    set_pathtarget_cost_width(root, make_pathtarget_from_tlist(tlist))
create_pathtarget already works that way. We will need to split it.

Create the PathTarget without widths. Apply width estimates once we
know the widths of Vars, somewhere here in query_planner():
    /*
     * We should now have size estimates for every actual table involved in
     * the query, and we also know which if any have been deleted from the
     * query by join removal; so we can compute total_table_pages.
     *
     * Note that appendrels are not double-counted here, even though we don't
     * bother to distinguish RelOptInfos for appendrel parents, because the
     * parents will still have size zero.
     *
The next step is building the join tree. Set the pathtarget before that.
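The two-phase idea can be sketched with a minimal, self-contained model (purely illustrative names; the real pieces are `make_pathtarget_from_tlist()` run early and `set_pathtarget_cost_width()` run later, once query_planner() knows per-Var widths):

```c
#include <assert.h>

/* Hypothetical two-phase construction of a path target. */
typedef struct {
    int nexprs;          /* number of target expressions */
    double width;        /* estimated output width, filled in later */
    int width_valid;     /* has phase 2 run yet? */
} ModelPathTarget;

/* Phase 1: build the structure before join search, no size estimates yet. */
static ModelPathTarget make_target(int nexprs)
{
    ModelPathTarget t = { nexprs, 0.0, 0 };
    return t;
}

/* Phase 2: after base-rel sizing, apply per-Var width estimates. */
static void set_cost_width(ModelPathTarget *t, double avg_var_width)
{
    t->width = t->nexprs * avg_var_width;
    t->width_valid = 1;
}
```

This mirrors splitting the `create_pathtarget(root, tlist)` convenience macro so the structure exists before the join tree is built, while the cost/width fields are filled in only once the size estimates are available.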

> However, I think
> that will be somewhat major change

I agree.

>  still won't address all kinds of
> cases (like for ordered paths) unless we can try to get all kinds of
> targets pushed down the call stack.

I didn't understand that.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Fri, Feb 16, 2018 at 9:29 AM, Ashutosh Bapat
<ashutosh.bapat@enterprisedb.com> wrote:
> On Thu, Feb 15, 2018 at 7:47 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> On Thu, Feb 15, 2018 at 4:18 PM, Ashutosh Bapat
>> <ashutosh.bapat@enterprisedb.com> wrote:
>>> I happened to look at the patch for something else. But here are some
>>> comments. If any of those have been already discussed, please feel
>>> free to ignore. I have gone through the thread cursorily, so I might
>>> have missed something.
>>>
>>> In grouping_planner() we call query_planner() first which builds the
>>> join tree and creates paths, then calculate the target for scan/join
>>> rel which is applied on the topmost scan join rel. I am wondering
>>> whether we can reverse this order to calculate the target list of
>>> scan/join relation and pass it to standard_join_search() (or the hook)
>>> through query_planner().
>>>
>>
>> I think the reason for doing in that order is that we can't compute
>> target's width till after query_planner().  See the below note in
>> code:
>>
>> /*
>> * Convert the query's result tlist into PathTarget format.
>> *
>> * Note: it's desirable to not do this till after query_planner(),
>> * because the target width estimates can use per-Var width numbers
>> * that were obtained within query_planner().
>> */
>> final_target = create_pathtarget(root, tlist);
>>
>> Now, I think we can try to juggle the code in a way that the width can
>> be computed later, but the rest can be done earlier.
>
> /* Convenience macro to get a PathTarget with valid cost/width fields */
> #define create_pathtarget(root, tlist) \
>     set_pathtarget_cost_width(root, make_pathtarget_from_tlist(tlist))
> create_pathtarget already works that way. We will need to split it.
>
> Create the Pathtarget without widths. Apply width estimates once we
> know the width of Vars somewhere here in query_planner()
>     /*
>      * We should now have size estimates for every actual table involved in
>      * the query, and we also know which if any have been deleted from the
>      * query by join removal; so we can compute total_table_pages.
>      *
>      * Note that appendrels are not double-counted here, even though we don't
>      * bother to distinguish RelOptInfos for appendrel parents, because the
>      * parents will still have size zero.
>      *
> The next step is building the join tree. Set the pathtarget before that.
>
>> However, I think
>> that will be somewhat major change
>
> I agree.
>
>>  still won't address all kind of
>> cases (like for ordered paths) unless we can try to get all kind of
>> targets pushed down in the call stack.
>
> I didn't understand that.
>

The places where we use a target different than the target which is
pushed via query planner can cause a similar problem (ex. see the
usage of adjust_paths_for_srfs) because the cost of that target
wouldn't be taken into consideration for Gather's costing.
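To make the costing issue concrete, here is a toy sketch (not the real
cost model; the function names and numbers are illustrative only) of
why an expensive target list must be costed below the Gather for the
parallel plan to win:

```c
#include <assert.h>

/* Toy cost model: a serial scan evaluates the target once per tuple;
 * a Gather divides per-tuple CPU and target cost across participants
 * but pays parallel_tuple_cost per transferred tuple.  If the target
 * is costed above the Gather (evaluated serially), the parallel plan
 * can never look cheaper.  Illustrative arithmetic only. */
static double
toy_serial_cost(double ntuples, double cpu_tuple_cost, double target_cost)
{
    return ntuples * (cpu_tuple_cost + target_cost);
}

static double
toy_gather_cost(double ntuples, int workers,
                double cpu_tuple_cost, double parallel_tuple_cost,
                double target_cost, int target_below_gather)
{
    double per_worker = ntuples / (workers + 1);    /* leader helps too */
    double cost = per_worker * cpu_tuple_cost       /* partial scan */
        + ntuples * parallel_tuple_cost;            /* tuple transfer */

    if (target_below_gather)
        cost += per_worker * target_cost;           /* shared across workers */
    else
        cost += ntuples * target_cost;              /* evaluated serially */
    return cost;
}
```

With, say, 1000 tuples, 3 workers, cpu_tuple_cost = 0.01,
parallel_tuple_cost = 0.1 and a per-tuple target cost of 100, the
Gather path beats the serial scan only when the target is pushed below
the Gather; otherwise the transfer cost makes it strictly worse.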

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] why not parallel seq scan for slow functions

From
Ashutosh Bapat
Date:
On Sat, Feb 17, 2018 at 8:17 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Fri, Feb 16, 2018 at 9:29 AM, Ashutosh Bapat
> <ashutosh.bapat@enterprisedb.com> wrote:
>> On Thu, Feb 15, 2018 at 7:47 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>> On Thu, Feb 15, 2018 at 4:18 PM, Ashutosh Bapat
>>> <ashutosh.bapat@enterprisedb.com> wrote:
>>>> I happened to look at the patch for something else. But here are some
>>>> comments. If any of those have been already discussed, please feel
>>>> free to ignore. I have gone through the thread cursorily, so I might
>>>> have missed something.
>>>>
>>>> In grouping_planner() we call query_planner() first which builds the
>>>> join tree and creates paths, then calculate the target for scan/join
>>>> rel which is applied on the topmost scan join rel. I am wondering
>>>> whether we can reverse this order to calculate the target list of
>>>> scan/join relation and pass it to standard_join_search() (or the hook)
>>>> through query_planner().
>>>>
>>>
>>> I think the reason for doing in that order is that we can't compute
>>> target's width till after query_planner().  See the below note in
>>> code:
>>>
>>> /*
>>> * Convert the query's result tlist into PathTarget format.
>>> *
>>> * Note: it's desirable to not do this till after query_planner(),
>>> * because the target width estimates can use per-Var width numbers
>>> * that were obtained within query_planner().
>>> */
>>> final_target = create_pathtarget(root, tlist);
>>>
>>> Now, I think we can try to juggle the code in a way that the width can
>>> be computed later, but the rest can be done earlier.
>>
>> /* Convenience macro to get a PathTarget with valid cost/width fields */
>> #define create_pathtarget(root, tlist) \
>>     set_pathtarget_cost_width(root, make_pathtarget_from_tlist(tlist))
>> create_pathtarget already works that way. We will need to split it.
>>
>> Create the Pathtarget without widths. Apply width estimates once we
>> know the width of Vars somewhere here in query_planner()
>>     /*
>>      * We should now have size estimates for every actual table involved in
>>      * the query, and we also know which if any have been deleted from the
>>      * query by join removal; so we can compute total_table_pages.
>>      *
>>      * Note that appendrels are not double-counted here, even though we don't
>>      * bother to distinguish RelOptInfos for appendrel parents, because the
>>      * parents will still have size zero.
>>      *
>> The next step is building the join tree. Set the pathtarget before that.
>>
>>> However, I think
>>> that will be somewhat major change
>>
>> I agree.
>>
>>>  still won't address all kind of
>>> cases (like for ordered paths) unless we can try to get all kind of
>>> targets pushed down in the call stack.
>>
>> I didn't understand that.
>>
>
> The places where we use a target different than the target which is
> pushed via query planner can cause a similar problem (ex. see the
> usage of adjust_paths_for_srfs) because the cost of that target
> wouldn't be taken into consideration for Gather's costing.
>

Right. Right now apply_projection_to_path() and adjust_paths_for_srfs()
do not take into consideration the type of path whose cost is being
adjusted for the new targetlist. That works for most paths, but not
for all of them, e.g. custom, FDW or parallel paths. The idea I am
proposing is to compute the final targetlist before query_planner() so
that it is available when we create paths for the topmost scan/join
relation. That way, any path creation logic can take advantage of this
list to compute costs, not just parallel paths.

--
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


Re: [HACKERS] why not parallel seq scan for slow functions

From
Ashutosh Bapat
Date:
On Mon, Feb 19, 2018 at 9:35 AM, Ashutosh Bapat
<ashutosh.bapat@enterprisedb.com> wrote:
> On Sat, Feb 17, 2018 at 8:17 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> On Fri, Feb 16, 2018 at 9:29 AM, Ashutosh Bapat
>> <ashutosh.bapat@enterprisedb.com> wrote:
>>> On Thu, Feb 15, 2018 at 7:47 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>>> On Thu, Feb 15, 2018 at 4:18 PM, Ashutosh Bapat
>>>> <ashutosh.bapat@enterprisedb.com> wrote:
>>>>> I happened to look at the patch for something else. But here are some
>>>>> comments. If any of those have been already discussed, please feel
>>>>> free to ignore. I have gone through the thread cursorily, so I might
>>>>> have missed something.
>>>>>
>>>>> In grouping_planner() we call query_planner() first which builds the
>>>>> join tree and creates paths, then calculate the target for scan/join
>>>>> rel which is applied on the topmost scan join rel. I am wondering
>>>>> whether we can reverse this order to calculate the target list of
>>>>> scan/join relation and pass it to standard_join_search() (or the hook)
>>>>> through query_planner().
>>>>>
>>>>
>>>> I think the reason for doing in that order is that we can't compute
>>>> target's width till after query_planner().  See the below note in
>>>> code:
>>>>
>>>> /*
>>>> * Convert the query's result tlist into PathTarget format.
>>>> *
>>>> * Note: it's desirable to not do this till after query_planner(),
>>>> * because the target width estimates can use per-Var width numbers
>>>> * that were obtained within query_planner().
>>>> */
>>>> final_target = create_pathtarget(root, tlist);
>>>>
>>>> Now, I think we can try to juggle the code in a way that the width can
>>>> be computed later, but the rest can be done earlier.
>>>
>>> /* Convenience macro to get a PathTarget with valid cost/width fields */
>>> #define create_pathtarget(root, tlist) \
>>>     set_pathtarget_cost_width(root, make_pathtarget_from_tlist(tlist))
>>> create_pathtarget already works that way. We will need to split it.
>>>
>>> Create the Pathtarget without widths. Apply width estimates once we
>>> know the width of Vars somewhere here in query_planner()
>>>     /*
>>>      * We should now have size estimates for every actual table involved in
>>>      * the query, and we also know which if any have been deleted from the
>>>      * query by join removal; so we can compute total_table_pages.
>>>      *
>>>      * Note that appendrels are not double-counted here, even though we don't
>>>      * bother to distinguish RelOptInfos for appendrel parents, because the
>>>      * parents will still have size zero.
>>>      *
>>> The next step is building the join tree. Set the pathtarget before that.
>>>
>>>> However, I think
>>>> that will be somewhat major change
>>>
>>> I agree.
>>>
>>>>  still won't address all kind of
>>>> cases (like for ordered paths) unless we can try to get all kind of
>>>> targets pushed down in the call stack.
>>>
>>> I didn't understand that.
>>>
>>
>> The places where we use a target different than the target which is
>> pushed via query planner can cause a similar problem (ex. see the
>> usage of adjust_paths_for_srfs) because the cost of that target
>> wouldn't be taken into consideration for Gather's costing.
>>
>
> Right. Right now apply_projection_to_path() or adjust_paths_for_srfs()
> do not take into consideration the type of path whose cost is being
> adjusted for the new targetlist. That works for most of the paths but
> not all the paths like custom, FDW or parallel paths. The idea I am
> proposing is to compute final targetlist before query planner so that
> it's available when we create paths for the topmost scan/join
> relation. That way, any path creation logic can then take advantage of
> this list to compute costs, not just parallel paths.

In fact, we should do this not just for scan/join relations, but for
all the upper relations as well. Upper relations too create parallel
paths, whose costs need to be adjusted after their targetlists are
updated by adjust_paths_for_srfs(). Similar adjustments are needed for
any FDW or custom paths which cost targetlists differently.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Mon, Feb 19, 2018 at 9:56 AM, Ashutosh Bapat
<ashutosh.bapat@enterprisedb.com> wrote:
> On Mon, Feb 19, 2018 at 9:35 AM, Ashutosh Bapat
> <ashutosh.bapat@enterprisedb.com> wrote:
>> On Sat, Feb 17, 2018 at 8:17 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>> On Fri, Feb 16, 2018 at 9:29 AM, Ashutosh Bapat
>>> <ashutosh.bapat@enterprisedb.com> wrote:
>>>> On Thu, Feb 15, 2018 at 7:47 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>>>> On Thu, Feb 15, 2018 at 4:18 PM, Ashutosh Bapat
>>>>> <ashutosh.bapat@enterprisedb.com> wrote:
>>>>>> I happened to look at the patch for something else. But here are some
>>>>>> comments. If any of those have been already discussed, please feel
>>>>>> free to ignore. I have gone through the thread cursorily, so I might
>>>>>> have missed something.
>>>>>>
>>>>>> In grouping_planner() we call query_planner() first which builds the
>>>>>> join tree and creates paths, then calculate the target for scan/join
>>>>>> rel which is applied on the topmost scan join rel. I am wondering
>>>>>> whether we can reverse this order to calculate the target list of
>>>>>> scan/join relation and pass it to standard_join_search() (or the hook)
>>>>>> through query_planner().
>>>>>>
>>>>>
>>>>> I think the reason for doing in that order is that we can't compute
>>>>> target's width till after query_planner().  See the below note in
>>>>> code:
>>>>>
>>>>> /*
>>>>> * Convert the query's result tlist into PathTarget format.
>>>>> *
>>>>> * Note: it's desirable to not do this till after query_planner(),
>>>>> * because the target width estimates can use per-Var width numbers
>>>>> * that were obtained within query_planner().
>>>>> */
>>>>> final_target = create_pathtarget(root, tlist);
>>>>>
>>>>> Now, I think we can try to juggle the code in a way that the width can
>>>>> be computed later, but the rest can be done earlier.
>>>>
>>>> /* Convenience macro to get a PathTarget with valid cost/width fields */
>>>> #define create_pathtarget(root, tlist) \
>>>>     set_pathtarget_cost_width(root, make_pathtarget_from_tlist(tlist))
>>>> create_pathtarget already works that way. We will need to split it.
>>>>
>>>> Create the Pathtarget without widths. Apply width estimates once we
>>>> know the width of Vars somewhere here in query_planner()
>>>>     /*
>>>>      * We should now have size estimates for every actual table involved in
>>>>      * the query, and we also know which if any have been deleted from the
>>>>      * query by join removal; so we can compute total_table_pages.
>>>>      *
>>>>      * Note that appendrels are not double-counted here, even though we don't
>>>>      * bother to distinguish RelOptInfos for appendrel parents, because the
>>>>      * parents will still have size zero.
>>>>      *
>>>> The next step is building the join tree. Set the pathtarget before that.
>>>>
>>>>> However, I think
>>>>> that will be somewhat major change
>>>>
>>>> I agree.
>>>>
>>>>>  still won't address all kind of
>>>>> cases (like for ordered paths) unless we can try to get all kind of
>>>>> targets pushed down in the call stack.
>>>>
>>>> I didn't understand that.
>>>>
>>>
>>> The places where we use a target different than the target which is
>>> pushed via query planner can cause a similar problem (ex. see the
>>> usage of adjust_paths_for_srfs) because the cost of that target
>>> wouldn't be taken into consideration for Gather's costing.
>>>
>>
>> Right. Right now apply_projection_to_path() or adjust_paths_for_srfs()
>> do not take into consideration the type of path whose cost is being
>> adjusted for the new targetlist. That works for most of the paths but
>> not all the paths like custom, FDW or parallel paths. The idea I am
>> proposing is to compute final targetlist before query planner so that
>> it's available when we create paths for the topmost scan/join
>> relation. That way, any path creation logic can then take advantage of
>> this list to compute costs, not just parallel paths.
>
> In fact, we should do this not just for scan/join relations, but all
> the upper relations as well. Upper relations too create parallel
> paths, whose costs need to be adjusted after their targetlists are
> updated by adjust_paths_for_srfs(). Similar adjustments are needed for
> any FDWs, custom paths which cost targetlists differently.
>

I think any such change in the planner can be quite tricky and can
lead to a lot of work.  I am not denying that it is possible to think
along the lines you are suggesting, but OTOH, I don't see it as a
realistic approach for this patch, where we can deal with the majority
of cases with a much smaller patch.  In future, if you or someone else
has a patch along those lines for some other purpose (considering it
is feasible to do so, which I am not completely sure of), then we can
adjust the things for parallel paths as well.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Thu, Feb 15, 2018 at 4:18 PM, Ashutosh Bapat
<ashutosh.bapat@enterprisedb.com> wrote:
>
>

After recent commits, the patch doesn't get applied cleanly, so
rebased it and along the way addressed the comments raised by you.

> Here are some comments on the patch.
>
> +                /*
> +                 * Except for the topmost scan/join rel, consider gathering
> +                 * partial paths.  We'll do the same for the topmost scan/join
> This function only works on join relations. Mentioning scan rel is confusing.
>

Okay, removed the 'scan' word from the comment.


> index 6e842f9..5206da7 100644
> --- a/src/backend/optimizer/path/allpaths.c
> +++ b/src/backend/optimizer/path/allpaths.c
> @@ -481,14 +481,21 @@ set_rel_pathlist(PlannerInfo *root, RelOptInfo *rel,
>      }
>
> +     *
> +     * Also, if this is the topmost scan/join rel (that is, the only baserel),
> +     * we postpone this until the final scan/join targelist is available (see
>
> Mentioning join rel here is confusing since we deal with base relations here.
>

Okay, removed the 'join' word from the comment.

> +        bms_membership(root->all_baserels) != BMS_SINGLETON)
>
> set_tablesample_rel_pathlist() is also using this method to decide whether
> there are any joins in the query. May be macro-ize this and use that macro at
> these two places?
>

Maybe, but I am not sure it improves the readability.  I am open to
changing it if somebody else also feels it is better to macro-ize this
usage.

> - * for the specified relation.  (Otherwise, add_partial_path might delete a
> + * for the specified relation. (Otherwise, add_partial_path might delete a
>
> Unrelated change?
>

oops, removed.

> +    /* Add projection step if needed */
> +    if (target && simple_gather_path->pathtarget != target)
>
> If the target was copied someplace, this test will fail. Probably we want to
> check containts of the PathTarget structure? Right now copy_pathtarget() is not
> called from many places and all those places modify the copied target. So this
> isn't a problem. But we can't guarantee that in future. Similar comment for
> gather_merge path creation.
>

I am not sure there is any use in copying the path target unless you
want to modify it, but anyway we use a check similar to the one used
in the patch in multiple places in the code.  See
create_ordered_paths.  So, we would need to change all those places
first if we sense any such danger.

> +        simple_gather_path = apply_projection_to_path(root,
> +                                                      rel,
> +                                                      simple_gather_path,
> +                                                      target);
> +
>
> Why don't we incorporate those changes in create_gather_path() by passing it
> the projection target instead of rel->reltarget? Similar comment for
> gather_merge path creation.
>

Nothing important, just for the sake of code consistency; some other
places in the code use it this way.  See create_ordered_paths.  Also,
I am not sure there is any advantage to doing it inside
create_gather_path.

> +            /*
> +             * Except for the topmost scan/join rel, consider gathering
> +             * partial paths.  We'll do the same for the topmost scan/join rel
>
> Mentioning scan rel is confusing here.
>

Okay, changed.

>
>  deallocate tenk1_count;
> +explain (costs off) select ten, costly_func(ten) from tenk1;
>
> verbose output will show that the parallel seq scan's targetlist has
> costly_func() in it. Isn't that what we want to test here?
>

Not exactly; we just want to test whether the parallel plan is
selected when the costly function is used in the target list.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Attachment

Re: [HACKERS] why not parallel seq scan for slow functions

From
Ashutosh Bapat
Date:
On Sun, Mar 11, 2018 at 5:49 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:

>
>> +    /* Add projection step if needed */
>> +    if (target && simple_gather_path->pathtarget != target)
>>
>> If the target was copied someplace, this test will fail. Probably we want to
>> check the contents of the PathTarget structure? Right now copy_pathtarget() is not
>> called from many places and all those places modify the copied target. So this
>> isn't a problem. But we can't guarantee that in future. Similar comment for
>> gather_merge path creation.
>>
>
> I am not sure if there is any use of copying the path target unless
> you want to modify it , but anyway we use the check similar to what is
> used in patch in the multiple places in code.  See
> create_ordered_paths.  So, we need to change all those places first if
> we sense any such danger.

Even if the test fails and we add a projection path here, while
creating the plan we avoid adding a Result node when the projection
target and the underlying plan's target look the same
(create_projection_plan()), so this works. An advantage of this
simple check (although it miscosts the projection) is that we don't do
an expensive target equality check for every path; the expensive check
happens only on the chosen path.
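The two-level scheme can be sketched with a toy structure (invented
names; the real PathTarget comparison in PostgreSQL is more involved
than this):

```c
#include <assert.h>

/* Toy model: a cheap pointer-identity test at path-creation time may
 * add a redundant projection path, but an exact field-by-field
 * comparison at plan-creation time still elides the useless Result
 * node.  Names are invented for illustration. */
typedef struct ToyTarget
{
    int    nexprs;
    double cost;
} ToyTarget;

static int
toy_targets_ptr_differ(const ToyTarget *a, const ToyTarget *b)
{
    return a != b;              /* cheap, may give false positives */
}

static int
toy_targets_content_differ(const ToyTarget *a, const ToyTarget *b)
{
    return a->nexprs != b->nexprs || a->cost != b->cost;    /* exact */
}

/* scenario from the discussion: a copied target fools the pointer
 * test, but the exact check at plan time avoids the Result node */
static int
toy_copied_target_demo(void)
{
    ToyTarget orig = { 2, 1.5 };
    ToyTarget copy = orig;      /* as if copy_pathtarget() were used */

    return toy_targets_ptr_differ(&orig, &copy)
        && !toy_targets_content_differ(&orig, &copy);
}
```

The design trade-off is exactly the one described above: pay for the
exact comparison only once, on the chosen path.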


>
>>
>>  deallocate tenk1_count;
>> +explain (costs off) select ten, costly_func(ten) from tenk1;
>>
>> verbose output will show that the parallel seq scan's targetlist has
>> costly_func() in it. Isn't that what we want to test here?
>>
>
> Not exactly, we want to just test whether the parallel plan is
> selected when the costly function is used in the target list.

A parallel plan may be selected whether or not a costly function
exists in the targetlist, if the underlying scan is optimal as a
parallel scan. AFAIU, this patch is about pushing down the costly
functions into the parallel scan's targetlist. I think that can be
verified only by looking at the targetlist of the parallel seq scan
plan.

The solution here addresses only the parallel scan requirement. In
future, if we implement a solution which also addresses the
requirements of FDWs and custom plans (i.e. the ability to handle
targetlists by FDWs and custom plans), the changes made here will need
to be reverted. That would be a painful exercise.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Wed, Feb 14, 2018 at 5:37 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> Your concern is valid, but isn't the same problem exists in another
> approach as well, because in that also we can call
> adjust_paths_for_srfs after generating gather path which means that we
> might lose some opportunity to reduce the relative cost of parallel
> paths due to tlists having SRFs.  Also, a similar problem can happen
> in create_order_paths  for the cases as described in the example
> above.

You're right.  I think cleaning all of this up for v11 is too much to
consider, but please tell me your opinion of the attached patch set.
Here, instead of ripping the problematic logic out of
apply_projection_to_path, what I've done is just removed several of
the callers to apply_projection_to_path.  I think that the work of
removing other callers to that function could be postponed to a future
release, but we'd still get some benefit now, and this shows the
direction I have in mind.  I'm going to explain what the patches do
one by one, but out of order, because I backed into the need for the
earlier patches as a result of troubleshooting the later ones in the
series.  Note that the patches need to be applied in order even though
I'm explaining them out of order.

0003 introduces a new upper relation to represent the result of
applying the scan/join target to the topmost scan/join relation.  I'll
explain further down why this seems to be needed.  Since each path
must belong to only one relation, this involves using
create_projection_path() for the non-partial pathlist as we already do
for the partial pathlist, rather than apply_projection_to_path().
This is probably marginally more expensive but I'm hoping it doesn't
matter.  (However, I have not tested.)  Each non-partial path in the
topmost scan/join rel produces a corresponding path in the new upper
rel.  The same is done for partial paths if the scan/join target is
parallel-safe; otherwise we can't.

0004 causes the main part of the planner to skip calling
generate_gather_paths() from the topmost scan/join rel.  This logic is
mostly stolen from your patch.  If the scan/join target is NOT
parallel-safe, we perform generate_gather_paths() on the topmost
scan/join rel.  If the scan/join target IS parallel-safe, we do it on
the post-projection rel introduced by 0003 instead.  This is the patch
that actually fixes Jeff's original complaint.

0005 goes a bit further and removes a bunch of logic from
create_ordered_paths().  The removed logic tries to satisfy the
query's ordering requirement via cheapest_partial_path + Sort + Gather
Merge.  Instead, it adds an additional path to the new upper rel added
in 0003.  This again avoids a call to apply_projection_to_path() which
could cause projection to be pushed down after costing has already
been fixed.  Therefore, it gains the same advantages as 0004 for
queries that would use this sort of plan.  Currently, this loses the
ability to set limit_tuples for the Sort path; that doesn't look too
hard to fix but I haven't done it.  If we decide to proceed with this
overall approach I'll see about getting it sorted out.

Unfortunately, when I initially tried this approach, a number of
things broke due to the fact that create_projection_path() was not
exactly equivalent to apply_projection_to_path().  This initially
surprised me, because create_projection_plan() contains logic to
eliminate the Result node that is very similar to the logic in
apply_projection_to_path().  If apply_projection_to_path() sees that the
subordinate node is projection-capable, it applies the revised target
list to the child; if not, it adds a Result node.
create_projection_plan() does the same thing.  However,
create_projection_plan() can lose the physical tlist optimization for
the subordinate node; it forces an exact projection even if the parent
node doesn't require this.  0001 fixes this, so that we get the same
plans with create_projection_path() that we would with
apply_projection_to_path().  I think this patch is a good idea
regardless of what we decide to do about the rest of this, because it
can potentially avoid losing the physical-tlist optimization in any
place where create_projection_path() is used.

It turns out that 0001 doesn't manage to preserve the physical-tlist
optimization when the projection path is attached to an upper
relation.  0002 fixes this.

If we decide to go forward with this approach, it may make sense to
merge some of these when actually committing, but I found it useful to
separate them for development and testing purposes, and for clarity
about what was getting changed at each stage.  Please let me know your
thoughts.

Thanks,

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment

Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Wed, Mar 14, 2018 at 12:01 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>
> 0003 introduces a new upper relation to represent the result of
> applying the scan/join target to the topmost scan/join relation.  I'll
> explain further down why this seems to be needed.  Since each path
> must belong to only one relation, this involves using
> create_projection_path() for the non-partial pathlist as we already do
> for the partial pathlist, rather than apply_projection_to_path().
> This is probably marginally more expensive but I'm hoping it doesn't
> matter.  (However, I have not tested.)
>

I think in the patch series this is the questionable patch, wherein it
will always add an additional projection path (whether or not it is
required) to all Paths (partial and non-partial) for the scanjoin rel
and then later remove it (if not required) in create_projection_plan.
As you are saying, I also think it might not matter much in the grand
scheme of things, and if required we can test it as well; however, I
think it is better if some other people can also share their opinion
on this matter.

Tom, do you have anything to say?


> Each non-partial path in the
> topmost scan/join rel produces a corresponding path in the new upper
> rel.  The same is done for partial paths if the scan/join target is
> parallel-safe; otherwise we can't.
>
> 0004 causes the main part of the planner to skip calling
> generate_gather_paths() from the topmost scan/join rel.  This logic is
> mostly stolen from your patch.  If the scan/join target is NOT
> parallel-safe, we perform generate_gather_paths() on the topmost
> scan/join rel.  If the scan/join target IS parallel-safe, we do it on
> the post-projection rel introduced by 0003 instead.  This is the patch
> that actually fixes Jeff's original complaint.
>

Looks good, I feel you can include the test from my patch as well.

> 0005 goes a bit further and removes a bunch of logic from
> create_ordered_paths().  The removed logic tries to satisfy the
> query's ordering requirement via cheapest_partial_path + Sort + Gather
> Merge.  Instead, it adds an additional path to the new upper rel added
> in 0003.  This again avoids a call to apply_projection_to_path() which
> could cause projection to be pushed down after costing has already
> been fixed.  Therefore, it gains the same advantages as 0004 for
> queries that would this sort of plan.
>

After this patch, part of the sort-related work will be done in
create_ordered_paths and the other part in grouping_planner, which
looks a bit odd; otherwise, I don't see any problem with it.

>  Currently, this loses the
> ability to set limit_tuples for the Sort path; that doesn't look too
> hard to fix but I haven't done it.  If we decide to proceed with this
> overall approach I'll see about getting it sorted out.
>

Okay, that makes sense.

> Unfortunately, when I initially tried this approach, a number of
> things broke due to the fact that create_projection_path() was not
> exactly equivalent to apply_projection_to_path().  This initially
> surprised me, because create_projection_plan() contains logic to
> eliminate the Result node that is very similar to the logic in
> apply_projection_to_path().  If apply_projection_path() sees that the
> subordinate node is projection-capable, it applies the revised target
> list to the child; if not, it adds a Result node.
> create_projection_plan() does the same thing.  However,
> create_projection_plan() can lose the physical tlist optimization for
> the subordinate node; it forces an exact projection even if the parent
> node doesn't require this.  0001 fixes this, so that we get the same
> plans with create_projection_path() that we would with
> apply_projection_to_path().  I think this patch is a good idea
> regardless of what we decide to do about the rest of this, because it
> can potentially avoid losing the physical-tlist optimization in any
> place where create_projection_path() is used.
>
> It turns out that 0001 doesn't manage to preserve the physical-tlist
> optimization when the projection path is attached to an upper
> relation.  0002 fixes this.
>

I have done some basic verification of 0001 and 0002, will do further
review/tests, if I don't see any objection from anyone else about the
overall approach.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Fri, Mar 16, 2018 at 6:06 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> I think in the patch series this is the questionable patch wherein it
> will always add an additional projection path (whether or not it is
> required) to all Paths (partial and non-partial) for the scanjoin rel
> and then later remove it (if not required) in create_projection_plan.
>  As you are saying, I also think it might not matter much in the grand
> scheme of things and if required we can test it as well, however, I
> think it is better if some other people can also share their opinion
> on this matter.
>
> Tom, do you have anything to say?

I forgot to include part of the explanation in my previous email.  The
reason it has to work this way is that, of course, you can't include
the same path in the path list of relation B as you put into the path
list of relation A; if you do, then you will be in trouble if a later
addition to the path list of relation B kicks that path out, because
it will get pfree'd, leaving a garbage pointer in the list for A,
which may subsequently be referenced.

At one point, I tried to solve the problem by sorting the cheapest
partial path from the scan/join rel and putting it back into the same
pathlist, but that also fails: in some cases, the new path dominates
all of the existing paths because it is better-sorted and only very
slightly more expensive.  So, when you call add_partial_path() for the
sort path, all of the previous paths -- including the one the sort
path is pointing to as its subpath! -- get pfree'd.

So, without another relation and the projection paths, I couldn't get
to a point where I didn't have to modify paths after creating them.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Fri, Mar 16, 2018 at 3:36 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Wed, Mar 14, 2018 at 12:01 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>>
>> 0003 introduces a new upper relation to represent the result of
>> applying the scan/join target to the topmost scan/join relation.  I'll
>> explain further down why this seems to be needed.  Since each path
>> must belong to only one relation, this involves using
>> create_projection_path() for the non-partial pathlist as we already do
>> for the partial pathlist, rather than apply_projection_to_path().
>> This is probably marginally more expensive but I'm hoping it doesn't
>> matter.  (However, I have not tested.)
>>
>
> I think in the patch series this is the questionable patch wherein it
> will always add an additional projection path (whether or not it is
> required) to all Paths (partial and non-partial) for the scanjoin rel
> and then later remove it (if not required) in create_projection_plan.
>  As you are saying, I also think it might not matter much in the grand
> scheme of things and if required we can test it as well,
>

I have done some tests to see the impact of this patch on planning
time.  I took some simple statements and measured how long they took
to plan.

Test-1
----------
DO $$
DECLARE count integer;
BEGIN
For count In 1..1000000 Loop
Execute 'explain Select ten from tenk1';
END LOOP;
END;
$$;

In the above block, I am explaining a simple statement that will have
just one path, so there will be one additional projection-path
addition and removal cycle for this statement.  I executed the above
block in psql with the \timing option on; the average timing for ten
runs on HEAD is 21292.388 ms, with patches (0001.* ~ 0003.*) it is
22405.2466 ms, and with patches (0001.* ~ 0005.*) it is 22537.1362 ms.
These results indicate an approximately 5~6% increase in planning
time.

Test-2
----------
DO $$
DECLARE count integer;
BEGIN
For count In 1..1000000 Loop
Execute 'explain select hundred,ten from tenk1 order by hundred';
END LOOP;
END;
$$;

In the above block, I am explaining a statement that will have two
paths, so there will be two additional path projections and one
removal cycle for one of the selected paths.  The average timing for
ten runs on HEAD is 32869.8343 ms, with patches (0001.* ~ 0003.*) it
is 34068.0608 ms, and with patches (0001.* ~ 0005.*) it is 34097.4899
ms.  These results indicate an approximately 3~4% increase in
optimizer time.  Ideally, this test should have shown more impact, as
we are adding an additional projection path for two paths, but I think
because the overall planning time is higher, the impact of the
additional work is less visible.

I have done these tests on a CentOS VM, so there is some variation in
the test results.  Please find attached the detailed results of all
the tests.  I have not changed any configuration for these tests.  I
think before reaching any conclusion, it would be better if someone
repeats these tests and sees whether they make a similar observation.
The reason for testing the first three patches (0001.* ~ 0003.*)
separately is to see the impact of the changes without anything
related to parallelism.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Attachment

Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Sat, Mar 17, 2018 at 1:16 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> Test-1
> ----------
> DO $$
> DECLARE count integer;
> BEGIN
> For count In 1..1000000 Loop
> Execute 'explain Select ten from tenk1';
> END LOOP;
> END;
> $$;
>
> In the above block, I am explaining the simple statement which will
> have just one path, so there will be one additional path projection
> and removal cycle for this statement.  I have just executed the above
> block in psql by having \timing option 'on' and the average timing for
> ten runs on HEAD is  21292.388 ms, with patches (0001.* ~ 0003) is
> 22405.2466 ms and with patches (0001.* ~ 0005.*) is 22537.1362.  These
> results indicate that there is approximately 5~6% of the increase in
> planning time.

Ugh.  I'm able to reproduce this, more or less -- with master, this
test took 42089.484 ms, 41935.849 ms, 42519.336 ms on my laptop, but
with 0001-0003 applied, 43925.959 ms, 43619.004 ms, 43648.426 ms.
However, I have a feeling there's more going on here, because the
following patch on top of 0001-0003 made the time go back down to
42353.548 ms, 41797.757 ms, 41891.194 ms.

diff --git a/src/backend/optimizer/plan/planner.c
b/src/backend/optimizer/plan/planner.c
index bf0b3e75ea..0542b3e40b 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -1947,12 +1947,19 @@ grouping_planner(PlannerInfo *root, bool
inheritance_update,
         {
             Path       *subpath = (Path *) lfirst(lc);
             Path       *path;
+            Path       *path2;

             Assert(subpath->param_info == NULL);
-            path = (Path *) create_projection_path(root, tlist_rel,
+            path2 = (Path *) create_projection_path(root, tlist_rel,
                                                    subpath, scanjoin_target);
-            add_path(tlist_rel, path);
+            path = (Path *) apply_projection_to_path(root, tlist_rel,
+                                                   subpath, scanjoin_target);
+            if (path == path2)
+                elog(ERROR, "oops");
+            lfirst(lc) = path;
         }
+        tlist_rel->pathlist = current_rel->pathlist;
+        current_rel->pathlist = NIL;

         /*
          * If we can produce partial paths for the tlist rel, for possible use

It seems pretty obvious that creating an extra projection path that is
just thrown away can't "really" be making this faster, so there's
evidently some other effect here involving how the code is laid out,
or CPU cache effects, or, uh, something.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Tue, Mar 20, 2018 at 1:23 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Sat, Mar 17, 2018 at 1:16 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> Test-1
>> ----------
>> DO $$
>> DECLARE count integer;
>> BEGIN
>> For count In 1..1000000 Loop
>> Execute 'explain Select ten from tenk1';
>> END LOOP;
>> END;
>> $$;
>>
>> In the above block, I am explaining the simple statement which will
>> have just one path, so there will be one additional path projection
>> and removal cycle for this statement.  I have just executed the above
>> block in psql by having \timing option 'on' and the average timing for
>> ten runs on HEAD is  21292.388 ms, with patches (0001.* ~ 0003) is
>> 22405.2466 ms and with patches (0001.* ~ 0005.*) is 22537.1362.  These
>> results indicate that there is approximately 5~6% of the increase in
>> planning time.
>
> Ugh.  I'm able to reproduce this, more or less -- with master, this
> test took 42089.484 ms, 41935.849 ms, 42519.336 ms on my laptop, but
> with 0001-0003 applied, 43925.959 ms, 43619.004 ms, 43648.426 ms.
> However I have a feeling there's more going on here, because the
> following patch on top of 0001-0003 made the time go back down to
> 42353.548, 41797.757 ms, 41891.194 ms.
>
..
>
> It seems pretty obvious that creating an extra projection path that is
> just thrown away can't "really" be making this faster, so there's
> evidently some other effect here involving how the code is laid out,
> or CPU cache effects, or, uh, something.
>

Yeah, sometimes that kind of stuff changes performance
characteristics, but I think what is going on here is that
create_projection_plan is causing the lower node to build a physical
tlist, which takes some additional time.  I have tried the below
change on top of the patch series, and it brings back the performance
for me.

@@ -1580,7 +1580,7 @@ create_projection_plan(PlannerInfo *root,
ProjectionPath *best_path, int flags)
        List       *tlist;

        /* Since we intend to project, we don't need to constrain child tlist */
-       subplan = create_plan_recurse(root, best_path->subpath, 0);
+       subplan = create_plan_recurse(root, best_path->subpath, flags);

Another point I have noticed in
0001-Teach-create_projection_plan-to-omit-projection-wher  patch:

-create_projection_plan(PlannerInfo *root, ProjectionPath *best_path)
+create_projection_plan(PlannerInfo *root, ProjectionPath *best_path, int flags)
{
..
+ else if ((flags & CP_LABEL_TLIST) != 0)
+ {
+ tlist = copyObject(subplan->targetlist);
+ apply_pathtarget_labeling_to_tlist(tlist, best_path->path.pathtarget);
+ }
+ else
+ return subplan;
..
}

Before returning the subplan, don't we need to copy the cost estimates
from best_path, as is done a few lines later in the same function?

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Fri, Mar 23, 2018 at 12:12 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> Yeah, sometimes that kind of stuff change performance characteristics,
> but I think what is going on here is that create_projection_plan is
> causing the lower node to build physical tlist which takes some
> additional time.  I have tried below change on top of the patch series
> and it brings back the performance for me.

I tried another approach inspired by this, which is to altogether skip
building the child scan tlist if it will just be replaced.  See 0006.
In testing here, that seems to be a bit better than your proposal, but
I wonder what your results will be.

> Before returning subplan, don't we need to copy the cost estimates
> from best_path as is done in the same function after few lines.

The new 0006 takes care of this, too.  Really, the new 0006 should be
merged into 0001, but I kept it separate for now.

So, rebased:

0001 - More or less as before.

0002 - More or less as before.

0003 - Rewritten in the wake of partitionwise aggregate, as the
tlist_rel must be partitioned in order for partitionwise aggregate to
work.  Quite pleasingly, this eliminates a bunch of Result nodes from
the partitionwise join test results.  Overall, I find this quite a bit
cleaner than the present code (leaving performance aside for the
moment).  In the process of writing this I realized that the
partitionwise aggregate code doesn't look like it handles SRFs
properly, so if this doesn't get committed we'll have to fix that
problem some other way.

0004 - A little different than before as a result of the extensive
changes in 0003.

0005 - Also different, and revealing another defect in partitionwise
aggregate, as noted in the commit message.

0006 - Introduce CP_IGNORE_TLIST; optimization of 0001.

0007 - Use NULL relids set for the toplevel tlist upper rel.  This
seems to be slightly faster than the other way.  This is an
optimization of 0003.

It looks in my testing like this still underperforms master on your
test case.  Do you get the same result?  Any other ideas?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment

Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Sat, Mar 24, 2018 at 8:41 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Fri, Mar 23, 2018 at 12:12 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> Yeah, sometimes that kind of stuff change performance characteristics,
>> but I think what is going on here is that create_projection_plan is
>> causing the lower node to build physical tlist which takes some
>> additional time.  I have tried below change on top of the patch series
>> and it brings back the performance for me.
>
> I tried another approach inspired by this, which is to altogether skip
> building the child scan tlist if it will just be replaced.  See 0006.
> In testing here, that seems to be a bit better than your proposal, but
> I wonder what your results will be.
>
..
>
> It looks in my testing like this still underperforms master on your
> test case.  Do you get the same result?
>

For me, it is equivalent to the master.  The average of ten runs on
the master is 20664.3683 ms, and with all the patches applied it is
20590.4734 ms.  There is some run-to-run variation, but more or less
there is no visible degradation.  I think we have found the root cause
and eliminated it.  OTOH, I have found another case where the new
patch series seems to degrade.

Test case
--------------
DO $$
DECLARE count integer;
BEGIN
For count In 1..1000000 Loop
Execute 'explain Select count(ten) from tenk1';
END LOOP;
END;
$$;

The average of ten runs on the master is 31593.9533 ms, and with all
the patches applied it is 34008.7341 ms.  The patch takes
approximately 7.6% more time.  I think this patch series is doing
something costly in a common code path.  I am also worried that the
new code proposed in your 0003* patch might degrade planner
performance for partitioned rels, though I have not tested that yet.
It is difficult to say without testing, but before going there, I
think we should first investigate what is happening in the
non-partitioned case.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Sat, Mar 24, 2018 at 9:40 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> For me, it is equivalent to the master.  The average of ten runs on
> the master is 20664.3683 and with all the patches applied it is
> 20590.4734.  I think there is some run-to-run variation, but more or
> less there is no visible degradation.  I think we have found the root
> cause and eliminated it.  OTOH, I have found another case where new
> patch series seems to degrade.

All right, I have scaled my ambitions back further.  Here is a revised
and slimmed-down version of the patch series.  If we forget about
"Remove explicit path construction logic in create_ordered_paths" for
now, then we don't really need a new upperrel.  So this patch just
modifies the toplevel scan/join rel in place, which should avoid a
bunch of overhead in add_path() and other places, while hopefully
still fixing the originally-reported problem.  I haven't tested this
beyond verifying that it passes the regression test, but I've run out
of time for today.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment

Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Tue, Mar 27, 2018 at 3:08 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Sat, Mar 24, 2018 at 9:40 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> For me, it is equivalent to the master.  The average of ten runs on
>> the master is 20664.3683 and with all the patches applied it is
>> 20590.4734.  I think there is some run-to-run variation, but more or
>> less there is no visible degradation.  I think we have found the root
>> cause and eliminated it.  OTOH, I have found another case where new
>> patch series seems to degrade.
>
> All right, I have scaled my ambitions back further.  Here is a revised
> and slimmed-down version of the patch series.
>

It still didn't help much.  I am seeing a similar regression in one of
the tests [1] posted previously.

>  If we forget about
> "Remove explicit path construction logic in create_ordered_paths" for
> now, then we don't really need a new upperrel.  So this patch just
> modifies the toplevel scan/join rel in place, which should avoid a
> bunch of overhead in add_path() and other places, while hopefully
> still fixing the originally-reported problem.
>

If we don't want to go with the upperrel logic, then maybe we should
consider just merging some of the other changes from my previous patch
into the 0003* patch you have posted, and then see if that gets rid of
all the cases where we are seeing a regression with this new approach.
I think with this approach you want to target the problem of
partitionwise aggregate, but maybe we can deal with that in a separate
patch.

[1] -
Test case
--------------
DO $$
DECLARE count integer;
BEGIN
For count In 1..1000000 Loop
Execute 'explain Select count(ten) from tenk1';
END LOOP;
END;
$$;

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Tue, Mar 27, 2018 at 1:45 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> If we don't want to go with the upperrel logic, then maybe we should
> consider just merging some of the other changes from my previous patch
> in 0003* patch you have posted and then see if it gets rid of all the
> cases where we are seeing a regression with this new approach.

Which changes are you talking about?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Tue, Mar 27, 2018 at 7:42 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Tue, Mar 27, 2018 at 1:45 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> If we don't want to go with the upperrel logic, then maybe we should
>> consider just merging some of the other changes from my previous patch
>> in 0003* patch you have posted and then see if it gets rid of all the
>> cases where we are seeing a regression with this new approach.
>
> Which changes are you talking about?

I realized that this version could be optimized in the case where the
scanjoin_target and the topmost scan/join rel have the same
expressions in the target list.  Here's a revised patch series that
does that.  For me, this is faster than master on your last test case.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment

Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Tue, Mar 27, 2018 at 10:59 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Tue, Mar 27, 2018 at 7:42 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>> On Tue, Mar 27, 2018 at 1:45 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>> If we don't want to go with the upperrel logic, then maybe we should
>>> consider just merging some of the other changes from my previous patch
>>> in 0003* patch you have posted and then see if it gets rid of all the
>>> cases where we are seeing a regression with this new approach.
>>
>> Which changes are you talking about?
>
> I realized that this version could be optimized in the case where the
> scanjoin_target and the topmost scan/join rel have the same
> expressions in the target list.
>

Good idea; such an optimization will ensure that the cases reported
above will not regress.  However, isn't it somewhat defeating the idea
you are using in the patch, which is to avoid modifying the path in
place?  In any case, I think one will still see a regression in cases
where this optimization doesn't apply.  For example,

DO $$
DECLARE count integer;
BEGIN
For count In 1..1000000 Loop
Execute 'explain Select sum(thousand)from tenk1 group by ten';
END LOOP;
END;
$$;

The above block takes 43700.0289 ms on HEAD and 45025.3779 ms with the
patch, which is approximately a 3% regression.  In this case, the
regression is smaller than in the previously reported cases, but I
guess we might see a bigger regression if the number of columns is
higher or with a different set of queries.  I think the cases that can
slow down are those where we need to use a physical tlist in
create_projection_plan and the caller has requested CP_LABEL_TLIST.  I
have checked the regression tests, and there seem to be more cases
that will be impacted.  Another such example from the regression tests
is:
select count(*) >= 0 as ok from pg_available_extension_versions;


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Wed, Mar 28, 2018 at 3:06 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> Good idea, such an optimization will ensure that the cases reported
> above will not have regression.  However isn't it somewhat beating the
> idea you are using in the patch which is to avoid modifying the path
> in-place?

Sure, but you can't have everything.  I don't think modifying the
sortgroupref data in place is really quite the same thing as changing
the pathtarget in place; the sortgroupref stuff is some extra
information about the targets being computed, not really a change in
targets per se.  But in any case if you want to eliminate extra work
then we've gotta eliminate it...

> In any case, I think one will still see regression in cases
> where this optimization doesn't apply.  For example,
>
> DO $$
> DECLARE count integer;
> BEGIN
> For count In 1..1000000 Loop
> Execute 'explain Select sum(thousand)from tenk1 group by ten';
> END LOOP;
> END;
> $$;
>
> The above block takes 43700.0289 ms on Head and 45025.3779 ms with the
> patch which is approximately 3% regression.

Thanks for the analysis -- the observation that this seemed to affect
cases where CP_LABEL_TLIST gets passed to create_projection_plan
allowed me to recognize that I was doing an unnecessary copyObject()
call.  Removing that seems to have reduced this regression below 1% in
my testing.

Also, keep in mind that we're talking about extremely small amounts of
time here.  On a trivial query that you're not even executing, you're
seeing a difference of (45025.3779 - 43700.0289)/1000000 = 0.00132 ms
per execution.  Sure, it's still 3%, but it's 3% of the time in an
artificial case where you don't actually run the query.  In real life,
nobody runs EXPLAIN in a tight loop a million times without ever
running the query, because that's not a useful thing to do.  The
overhead on a realistic test case will be smaller.  Furthermore, at
least in my testing, there are now cases where this is faster than
master.  Now, I welcome further ideas for optimization, but a patch
that makes some cases slightly slower while making others slightly
faster, and also improving the quality of plans in some cases, is not
to my mind an unreasonable thing.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment

Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Thu, Mar 29, 2018 at 2:31 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Wed, Mar 28, 2018 at 3:06 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>
>> The above block takes 43700.0289 ms on Head and 45025.3779 ms with the
>> patch which is approximately 3% regression.
>
> Thanks for the analysis -- the observation that this seemed to affect
> cases where CP_LABEL_TLIST gets passed to create_projection_plan
> allowed me to recognize that I was doing an unnecessary copyObject()
> call.  Removing that seems to have reduced this regression below 1% in
> my testing.
>

I think that is within the acceptable range.  I am seeing a few
regression failures with the patch series.  The order of the
targetlist seems to have changed for the Remote SQL.  Kindly find the
failure report attached.  I have requested my colleague Ashutosh
Sharma to cross-verify this, and he is also seeing the same failures.

Few comments/suggestions:

1.
typedef enum UpperRelationKind
 {
  UPPERREL_SETOP, /* result of UNION/INTERSECT/EXCEPT, if any */
+ UPPERREL_TLIST, /* result of projecting final scan/join rel */
  UPPERREL_PARTIAL_GROUP_AGG, /* result of partial grouping/aggregation, if
  * any */
  UPPERREL_GROUP_AGG, /* result of grouping/aggregation, if any */
...
...
  /*
  * Save the various upper-rel PathTargets we just computed into
@@ -2003,6 +2004,7 @@ grouping_planner(PlannerInfo *root, bool
inheritance_update,
  root->upper_targets[UPPERREL_FINAL] = final_target;
  root->upper_targets[UPPERREL_WINDOW] = sort_input_target;
  root->upper_targets[UPPERREL_GROUP_AGG] = grouping_target;
+ root->upper_targets[UPPERREL_TLIST] = scanjoin_target;

It seems UPPERREL_TLIST is redundant in the patch now.  I think we can
remove it unless you have something else in mind.

2.
+ /*
+ * If the relation is partitioned, recursively apply the same changes to
+ * all partitions and generate new Append paths. Since Append is not
+ * projection-capable, that might save a separate Result node, and it also
+ * is important for partitionwise aggregate.
+ */
+ if (rel->part_scheme && rel->part_rels)
  {


I think the handling of partitioned rels looks okay, but we might want
to check its overhead once, unless you are sure that it shouldn't be a
problem.  If you think we should check it, then is it possible to do
it as a separate patch, as it doesn't look to be directly linked to
the main patch?  It can be treated as an optimization for
partitionwise aggregates.  I think we can treat it along with the main
patch as well, but it might be somewhat simpler to verify if we do it
separately.


> Also, keep in mind that we're talking about extremely small amounts of
> time here.  On a trivial query that you're not even executing, you're
> seeing a difference of (45025.3779 - 43700.0289)/1000000 = 0.00132 ms
> per execution.  Sure, it's still 3%, but it's 3% of the time in an
> artificial case where you don't actually run the query.  In real life,
> nobody runs EXPLAIN in a tight loop a million times without ever
> running the query, because that's not a useful thing to do.
>

Agreed, but this was to ensure that we do not do this optimization at
the cost of adding significant cycles to the planning time.

>  The
> overhead on a realistic test case will be smaller.  Furthermore, at
> least in my testing, there are now cases where this is faster than
> master.  Now, I welcome further ideas for optimization, but a patch
> that makes some cases slightly slower while making others slightly
> faster, and also improving the quality of plans in some cases, is not
> to my mind an unreasonable thing.
>

Agreed.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Attachment

Re: [HACKERS] why not parallel seq scan for slow functions

From
Robert Haas
Date:
On Thu, Mar 29, 2018 at 12:55 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> I think that is under acceptable range.  I am seeing few regression
> failures with the patch series.  The order of targetlist seems to have
> changed for Remote SQL.  Kindly find the failure report attached.  I
> have requested my colleague Ashutosh Sharma to cross-verify this and
> he is also seeing the same failures.

Oops.  Those just require an expected output change.

> It seems UPPERREL_TLIST is redundant in the patch now.  I think we can
> remove it unless you have something else in mind.

Yes.

> I think the handling of partitioned rels looks okay, but we might want
> to once check the overhead of the same unless you are sure that this
> shouldn't be a problem.  If you think, we should check it once, then
> is it possible that we can do it as a separate patch as this doesn't
> look to be directly linked to the main patch.  It can be treated as an
> optimization for partitionwise aggregates.  I think we can treat it
> along with the main patch as well, but it might be somewhat simpler to
> verify it if we do it separately.

I don't think it should be a problem, although you're welcome to test
it if you're concerned about it.  I think it would probably be
penny-wise and pound-foolish to worry about the overhead of
eliminating the Result nodes, which can occur not only with
partition-wise aggregate but also with partition-wise join and, I
think, really any case where the top scan/join plan would be an Append
node.  We're talking about a very small amount of additional planning
time to potentially get a better plan.

I've committed all of these now.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [HACKERS] why not parallel seq scan for slow functions

From
Amit Kapila
Date:
On Fri, Mar 30, 2018 at 1:41 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Thu, Mar 29, 2018 at 12:55 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> I think that is under acceptable range.  I am seeing few regression
>> failures with the patch series.  The order of targetlist seems to have
>> changed for Remote SQL.  Kindly find the failure report attached.  I
>> have requested my colleague Ashutosh Sharma to cross-verify this and
>> he is also seeing the same failures.
>
> Oops.  Those just require an expected output change.
>
>> It seems UPPERREL_TLIST is redundant in the patch now.  I think we can
>> remove it unless you have something else in mind.
>
> Yes.
>
>> I think the handling of partitioned rels looks okay, but we might want
>> to once check the overhead of the same unless you are sure that this
>> shouldn't be a problem.  If you think, we should check it once, then
>> is it possible that we can do it as a separate patch as this doesn't
>> look to be directly linked to the main patch.  It can be treated as an
>> optimization for partitionwise aggregates.  I think we can treat it
>> along with the main patch as well, but it might be somewhat simpler to
>> verify it if we do it separately.
>
> I don't think it should be a problem, although you're welcome to test
> it if you're concerned about it.  I think it would probably be
> penny-wise and pound-foolish to worry about the overhead of
> eliminating the Result nodes, which can occur not only with
> partition-wise aggregate but also with partition-wise join and, I
> think, really any case where the top scan/join plan would be an Append
> node.  We're talking about a very small amount of additional planning
> time to potentially get a better plan.
>
> I've committed all of these now.
>

Cool,  I have closed the corresponding CF entry.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com