Consider low startup cost in add_partial_path

From
James Coleman
Date:
Over in the incremental sort patch discussion we found [1] a case
where a higher cost plan ends up being chosen because a low startup
cost partial path is ignored in favor of a lower total cost partial
path, and a limit is applied on top of that which would normally favor
the lower startup cost plan.
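
To illustrate why that matters, here's a toy standalone model (made-up
numbers, not the planner's actual costing code): fetching only the first
k of N rows costs roughly the startup cost plus a k/N share of the
remaining run cost, so a path with a tiny startup cost can win under a
limit even though its total cost is much higher.

#include <stdio.h>

static double
limited_cost(double startup_cost, double total_cost, double rows, double limit)
{
    double      fraction = (limit < rows) ? limit / rows : 1.0;

    return startup_cost + fraction * (total_cost - startup_cost);
}

int
main(void)
{
    /* low startup cost, high total cost -- e.g. something index-scan-like */
    double      a = limited_cost(1.0, 1000.0, 100000.0, 10.0);
    /* high startup cost, lower total cost -- e.g. something that sorts first */
    double      b = limited_cost(500.0, 800.0, 100000.0, 10.0);

    printf("low-startup path under LIMIT 10: %.2f\n", a);   /* ~1.10 */
    printf("low-total path under LIMIT 10:   %.2f\n", b);   /* ~500.03 */
    return 0;
}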

45be99f8cd5d606086e0a458c9c72910ba8a613d originally added
`add_partial_path` with the comment:

> Neither do we need to consider startup costs:
> parallelism is only used for plans that will be run to completion.
> Therefore, this routine is much simpler than add_path: it needs to
> consider only pathkeys and total cost.

I'm not entirely sure if that is still true or not--I can't easily
come up with a scenario in which it's not, but I also can't come up
with an inherent reason why such a scenario cannot exist.

We could just continue to include this change as part of the
incremental sort patch itself, but it seemed worth breaking it out for
some more targeted discussion, and also including Robert, the initial
author of add_partial_path, in the hopes that we can retrieve some
almost 4-year-old memories of why this was inherently true then, which
might shed some light on whether it's still inherently true.

I've attached a patch (by Tomas Vondra, also cc'd) to consider startup
cost in add_partial_path; if we apply it, we'll likely also need to
make the same kind of change to add_partial_path_precheck.
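
To make the shape of the change concrete, here's a simplified standalone
model of the keep-or-discard test (this is not the actual pathnode.c
code; the struct and the integer pathkey comparison are stand-ins): with
the patch, an existing partial path only makes a new one redundant if it
is at least as good on pathkeys, total cost, and startup cost, instead
of just pathkeys and total cost.

#include <stdbool.h>
#include <stdio.h>

typedef struct ModelPath
{
    double      startup_cost;
    double      total_cost;
    int         pathkeys;       /* stand-in for a real pathkey list */
} ModelPath;

/* Old rule: pathkeys and total cost only. */
static bool
old_rule_dominates(const ModelPath *old_path, const ModelPath *new_path)
{
    return old_path->pathkeys == new_path->pathkeys &&
           old_path->total_cost <= new_path->total_cost;
}

/* New rule: a cheaper startup cost keeps the new path alive. */
static bool
new_rule_dominates(const ModelPath *old_path, const ModelPath *new_path)
{
    return old_path->pathkeys == new_path->pathkeys &&
           old_path->total_cost <= new_path->total_cost &&
           old_path->startup_cost <= new_path->startup_cost;
}

int
main(void)
{
    ModelPath   low_startup = {1.0, 1000.0, 1};     /* cheap to start */
    ModelPath   low_total = {500.0, 800.0, 1};      /* cheap overall */

    printf("old rule discards the low-startup path: %s\n",
           old_rule_dominates(&low_total, &low_startup) ? "yes" : "no");
    printf("new rule discards the low-startup path: %s\n",
           new_rule_dominates(&low_total, &low_startup) ? "yes" : "no");
    return 0;
}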

James Coleman

[1]: https://www.postgresql.org/message-id/20190720132244.3vgg2uynfpxh3me5%40development

Attachment

Re: Consider low startup cost in add_partial_path

From
Robert Haas
Date:
On Fri, Sep 27, 2019 at 2:24 PM James Coleman <jtc331@gmail.com> wrote:
> Over in the incremental sort patch discussion we found [1] a case
> where a higher cost plan ends up being chosen because a low startup
> cost partial path is ignored in favor of a lower total cost partial
> path, and a limit is applied on top of that which would normally favor
> the lower startup cost plan.
>
> 45be99f8cd5d606086e0a458c9c72910ba8a613d originally added
> `add_partial_path` with the comment:
>
> > Neither do we need to consider startup costs:
> > parallelism is only used for plans that will be run to completion.
> > Therefore, this routine is much simpler than add_path: it needs to
> > consider only pathkeys and total cost.
>
> I'm not entirely sure if that is still true or not--I can't easily
> come up with a scenario in which it's not, but I also can't come up
> with an inherent reason why such a scenario cannot exist.

I think I just didn't think carefully about the Limit case.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Consider low startup cost in add_partial_path

From
Tomas Vondra
Date:
On Sat, Sep 28, 2019 at 12:16:05AM -0400, Robert Haas wrote:
>On Fri, Sep 27, 2019 at 2:24 PM James Coleman <jtc331@gmail.com> wrote:
>> Over in the incremental sort patch discussion we found [1] a case
>> where a higher cost plan ends up being chosen because a low startup
>> cost partial path is ignored in favor of a lower total cost partial
>> path, and a limit is applied on top of that which would normally favor
>> the lower startup cost plan.
>>
>> 45be99f8cd5d606086e0a458c9c72910ba8a613d originally added
>> `add_partial_path` with the comment:
>>
>> > Neither do we need to consider startup costs:
>> > parallelism is only used for plans that will be run to completion.
>> > Therefore, this routine is much simpler than add_path: it needs to
>> > consider only pathkeys and total cost.
>>
>> I'm not entirely sure if that is still true or not--I can't easily
>> come up with a scenario in which it's not, but I also can't come up
>> with an inherent reason why such a scenario cannot exist.
>
>I think I just didn't think carefully about the Limit case.
>

Thanks! In that case I suggest we treat it as a separate patch/fix,
independent of the incremental sort patch. I don't want to bury it in
that patch series; it's already pretty large.

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Consider low startup cost in add_partial_path

From
James Coleman
Date:
On Saturday, September 28, 2019, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
> On Sat, Sep 28, 2019 at 12:16:05AM -0400, Robert Haas wrote:
>> On Fri, Sep 27, 2019 at 2:24 PM James Coleman <jtc331@gmail.com> wrote:
>>> Over in the incremental sort patch discussion we found [1] a case
>>> where a higher cost plan ends up being chosen because a low startup
>>> cost partial path is ignored in favor of a lower total cost partial
>>> path, and a limit is applied on top of that which would normally favor
>>> the lower startup cost plan.
>>>
>>> 45be99f8cd5d606086e0a458c9c72910ba8a613d originally added
>>> `add_partial_path` with the comment:
>>>
>>> > Neither do we need to consider startup costs:
>>> > parallelism is only used for plans that will be run to completion.
>>> > Therefore, this routine is much simpler than add_path: it needs to
>>> > consider only pathkeys and total cost.
>>>
>>> I'm not entirely sure if that is still true or not--I can't easily
>>> come up with a scenario in which it's not, but I also can't come up
>>> with an inherent reason why such a scenario cannot exist.
>>
>> I think I just didn't think carefully about the Limit case.
>
> Thanks! In that case I suggest we treat it as a separate patch/fix,
> independent of the incremental sort patch. I don't want to bury it in
> that patch series; it's already pretty large.

Now the trick is to figure out a way to demonstrate it in test :)

Basically we need:
Path A: Can short circuit with LIMIT but has high total cost
Path B: Can’t short circuit with LIMIT but has lower total cost

(Both must be parallel aware of course.)

Maybe the ordering in B can come from a sort node and A can be an index scan (perhaps with a very high random page cost?), and we force choosing a parallel plan?

I’m trying to describe this to jog my thoughts (not in front of my laptop right now so can’t try it out).

Any other ideas?

James

Re: Consider low startup cost in add_partial_path

From
James Coleman
Date:
On Sat, Sep 28, 2019 at 7:21 PM James Coleman <jtc331@gmail.com> wrote:
> Now the trick is to figure out a way to demonstrate it in test :)
>
> Basically we need:
> Path A: Can short circuit with LIMIT but has high total cost
> Path B: Can’t short circuit with LIMIT but has lower total cost
>
> (Both must be parallel aware of course.)

I'm adding one requirement, or clarifying it anyway: the above paths
must be partial paths, and can't just apply at the top level of the
parallel part of the plan. I.e., the lower startup cost has to matter
at a subtree of the parallel portion of the plan.

> Maybe the ordering in B can come from a sort node and A can be an index scan (perhaps with a very high random
> page cost?), and we force choosing a parallel plan?
>
> I’m trying to describe this to jog my thoughts (not in front of my laptop right now so can’t try it out).
>
> Any other ideas?

I've been playing with this a good bit, and I'm struggling to come up
with a test case. Because the issue only manifests in a subtree of the
parallel portion of the plan, a scan on a single relation won't do.
Merge join seems like a good area to look at because it requires
ordering, and that ordering can be either the result of an index scan
(short-circuit-able) or an explicit sort (not short-circuit-able). But
I've been unable to make that result in any different plans with
either 2 or 3 relations joined together, ordered, and a limit applied.

In all cases I've been starting with:

set enable_hashjoin = off;
set enable_nestloop = off;
set max_parallel_workers_per_gather = 4;
set min_parallel_index_scan_size = 0;
set min_parallel_table_scan_size = 0;
set parallel_setup_cost = 0;
set parallel_tuple_cost = 0;

I've also tried various combinations of random_page_cost,
cpu_index_tuple_cost, cpu_tuple_cost.

Interestingly I've noticed plans joining two relations that look like:

 Limit
   ->  Merge Join
         Merge Cond: (t1.pk = t2.pk)
         ->  Gather Merge
               Workers Planned: 4
               ->  Parallel Index Scan using t_pkey on t t1
         ->  Gather Merge
               Workers Planned: 4
               ->  Parallel Index Scan using t_pkey on t t2

Where I would have expected a Gather Merge above a parallelized merge
join. Is that reasonable to expect?

If there doesn't seem to be an obvious way to reproduce the issue
currently, but we know we have a reproduction example along with
incremental sort, what is the path forward for this? Is it reasonable
to try to commit it anyway, knowing that it's a "correct" change and
has been demonstrated elsewhere?

James



Re: Consider low startup cost in add_partial_path

From
Robert Haas
Date:
On Wed, Oct 2, 2019 at 10:22 AM James Coleman <jtc331@gmail.com> wrote:
> In all cases I've been starting with:
>
> set enable_hashjoin = off;
> set enable_nestloop = off;
> set max_parallel_workers_per_gather = 4;
> set min_parallel_index_scan_size = 0;
> set min_parallel_table_scan_size = 0;
> set parallel_setup_cost = 0;
> set parallel_tuple_cost = 0;
>
> I've also tried various combinations of random_page_cost,
> cpu_index_tuple_cost, cpu_tuple_cost.
>
> Interestingly I've noticed plans joining two relations that look like:
>
>  Limit
>    ->  Merge Join
>          Merge Cond: (t1.pk = t2.pk)
>          ->  Gather Merge
>                Workers Planned: 4
>                ->  Parallel Index Scan using t_pkey on t t1
>          ->  Gather Merge
>                Workers Planned: 4
>                ->  Parallel Index Scan using t_pkey on t t2
>
> Where I would have expected a Gather Merge above a parallelized merge
> join. Is that reasonable to expect?

Well, you told the planner that parallel_setup_cost = 0, so starting
workers is free. And you told the planner that parallel_tuple_cost =
0, so shipping tuples from the worker to the leader is also free. So
it is unclear why it should prefer a single Gather Merge over two
Gather Merges: after all, the Gather Merge is free!

If you give those things some positive cost, even if it's smaller
than the default, you'll probably get a saner-looking plan choice.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Consider low startup cost in add_partial_path

From
James Coleman
Date:
On Fri, Oct 4, 2019 at 8:36 AM Robert Haas <robertmhaas@gmail.com> wrote:
>
> On Wed, Oct 2, 2019 at 10:22 AM James Coleman <jtc331@gmail.com> wrote:
> > In all cases I've been starting with:
> >
> > set enable_hashjoin = off;
> > set enable_nestloop = off;
> > set max_parallel_workers_per_gather = 4;
> > set min_parallel_index_scan_size = 0;
> > set min_parallel_table_scan_size = 0;
> > set parallel_setup_cost = 0;
> > set parallel_tuple_cost = 0;
> >
> > I've also tried various combinations of random_page_cost,
> > cpu_index_tuple_cost, cpu_tuple_cost.
> >
> > Interestingly I've noticed plans joining two relations that look like:
> >
> >  Limit
> >    ->  Merge Join
> >          Merge Cond: (t1.pk = t2.pk)
> >          ->  Gather Merge
> >                Workers Planned: 4
> >                ->  Parallel Index Scan using t_pkey on t t1
> >          ->  Gather Merge
> >                Workers Planned: 4
> >                ->  Parallel Index Scan using t_pkey on t t2
> >
> > Where I would have expected a Gather Merge above a parallelized merge
> > join. Is that reasonable to expect?
>
> Well, you told the planner that parallel_setup_cost = 0, so starting
> workers is free. And you told the planner that parallel_tuple_cost =
> 0, so shipping tuples from the worker to the leader is also free. So
> it is unclear why it should prefer a single Gather Merge over two
> Gather Merges: after all, the Gather Merge is free!
>
> If you give those things some positive cost, even if it's smaller
> than the default, you'll probably get a saner-looking plan choice.

That makes sense.

Right now, trying to get a separate test for this feels a bit like a
distraction.

Given that there doesn't seem to be an obvious way to reproduce the
issue currently, but we know we have a reproduction example along with
incremental sort, what is the path forward for this? Is it reasonable
to try to commit it anyway, knowing that it's a "correct" change and
has been demonstrated elsewhere?

James



Re: Consider low startup cost in add_partial_path

From
Tomas Vondra
Date:
Hi,

For the record, here is the relevant part of the Incremental Sort patch
series, updating add_partial_path and add_partial_path_precheck to also
consider startup cost.

The changes in the first two patches are pretty straightforward, plus
there's a proposed optimization in the precheck function to only run
compare_pathkeys when it's really necessary. I'm currently evaluating
those changes and I'll post the results to the incremental sort thread.
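
To sketch the idea behind that optimization (a simplified standalone
model, not the actual precheck code; the pathkey comparison is a
stand-in): the cost fields are plain doubles and cheap to compare,
while comparing pathkey lists is relatively expensive, so the precheck
can decide on costs first and only pay for the pathkey comparison when
the costs alone can't settle it.

#include <stdbool.h>
#include <stdio.h>

static int  pathkey_compares = 0;   /* count how often we pay for it */

/* Stand-in for compare_pathkeys(); here it just compares two ints. */
static bool
model_pathkeys_contained(int old_keys, int new_keys)
{
    pathkey_compares++;
    return old_keys == new_keys;    /* "old ordering is at least as good" */
}

/* Could the new partial path still be worth building, given one old path? */
static bool
precheck_keeps_new_path(double old_startup, double old_total, int old_keys,
                        double new_startup, double new_total, int new_keys)
{
    /*
     * Cheap tests first: if the new path wins on either cost, the old path
     * can't dominate it, so skip the pathkey comparison entirely.
     */
    if (new_total < old_total || new_startup < old_startup)
        return true;

    /* Only now pay for the (model) pathkey comparison. */
    return !model_pathkeys_contained(old_keys, new_keys);
}

int
main(void)
{
    /*
     * New path loses on total cost but wins on startup cost: it survives,
     * and we never had to compare pathkeys.
     */
    bool        kept = precheck_keeps_new_path(500.0, 800.0, 1,
                                               1.0, 1000.0, 1);

    printf("kept: %s, pathkey comparisons: %d\n",
           kept ? "yes" : "no", pathkey_compares);
    return 0;
}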


regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachment