Thread: Planning time of Generic plan for a table partitioned into a lot

Planning time of Generic plan for a table partitioned into a lot

From
"Kato, Sho"
Date:
Hi,
I found that making a generic plan of SELECT/UPDATE/DELETE for a table partitioned into thousands of partitions is slow.
UPDATE/DELETE statements are especially slow.

I'm afraid I could not come up with a good idea myself; how can I shorten the creation time of a generic plan?

The results are as follows.

*setup*

postgres=# create table t(id int) partition by range(id);
CREATE TABLE
postgres=# \o /dev/null
postgres=# select 'create table t_' || x || ' partition of t for values from (' || x || ') to (' || x+1 || ')' from generate_series(1,8192) x;
postgres=# \gexec
postgres=# analyze;
ANALYZE

*explain analyze results*

postgres=# set plan_cache_mode = force_generic_plan;
SET
postgres=# prepare select_stmt(int) as select * from t where id = $1;
PREPARE
postgres=# explain analyze execute select_stmt(8192);
                                               QUERY PLAN                                               
--------------------------------------------------------------------------------------------------------
 Append  (cost=0.00..343572.48 rows=106496 width=4) (actual time=0.015..0.015 rows=0 loops=1)
   Subplans Removed: 8191
   ->  Seq Scan on t_8192  (cost=0.00..41.88 rows=13 width=4) (actual time=0.013..0.013 rows=0 loops=1)
         Filter: (id = $1)
 Planning Time: 206.415 ms
 Execution Time: 0.742 ms
(6 rows)

postgres=# prepare update_stmt(int) as update t set id = id + 1 where id = $1;
PREPARE
postgres=# explain analyze execute update_stmt(8192);
                                               QUERY PLAN
---------------------------------------------------------------------------------------------------------
 Update on t  (cost=0.00..343306.24 rows=106496 width=10) (actual time=39.502..39.503 rows=0 loops=1)
   Update on t_1
   Update on t_2
   ...
   ->  Seq Scan on t_1  (cost=0.00..41.91 rows=13 width=10) (actual time=0.025..0.026 rows=0 loops=1)
         Filter: (id = $1)
   ->  Seq Scan on t_2  (cost=0.00..41.91 rows=13 width=10) (actual time=0.004..0.005 rows=0 loops=1)
         Filter: (id = $1)
   ...
 Planning Time: 14357.504 ms
 Execution Time: 397.652 ms
(24579 rows)

postgres=# prepare delete_stmt(int) as delete from t where id = $1;
PREPARE
postgres=# explain analyze execute delete_stmt(8192);
                                               QUERY PLAN
--------------------------------------------------------------------------------------------------------
 Delete on t  (cost=0.00..343040.00 rows=106496 width=6) (actual time=51.628..51.628 rows=0 loops=1)
   Delete on t_1
   Delete on t_2
   ...
   ->  Seq Scan on t_1  (cost=0.00..41.88 rows=13 width=6) (actual time=0.025..0.026 rows=0 loops=1)
         Filter: (id = $1)
   ->  Seq Scan on t_2  (cost=0.00..41.88 rows=13 width=6) (actual time=0.014..0.015 rows=0 loops=1)
         Filter: (id = $1)
   ...
 Planning Time: 14225.908 ms
 Execution Time: 405.605 ms
(24579 rows)

Of course, in the case of plan_cache_mode = force_custom_plan, it is not a problem, because unnecessary paths are pruned by the "speeding up planning with partitions" patch [1].

However, if plan_cache_mode is force_generic_plan, a generic plan is made at the first execution of the prepared statement.
If plan_cache_mode is auto (the default), a generic plan is made at the sixth execution.
So, with the default setting, performance drops at the sixth execution.
Even if creation of the generic plan is not improved, it would be better to recommend force_custom_plan when the number of partitions is large, as sketched below.
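For reference, a minimal sketch of that recommendation, reusing the statements prepared above (the comments are mine):

postgres=# set plan_cache_mode = force_custom_plan;  -- never build a generic plan
SET
postgres=# execute update_stmt(8192);  -- re-planned every time, but pruned to one partition
UPDATE 0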
Thoughts?

[1]: https://www.postgresql.org/message-id/9d7c5112-cb99-6a47-d3be-cf1ee6862a1d@lab.ntt.co.jp

Regards,
Sho Kato



Re: Planning time of Generic plan for a table partitioned into a lot

From
David Rowley
Date:
On Tue, 27 Nov 2018 at 23:05, Kato, Sho <kato-sho@jp.fujitsu.com> wrote:
> I found that making a generic plan of SELECT/UPDATE/DELETE for a table partitioned into thousands of partitions is slow.
> UPDATE/DELETE statements are especially slow.

It's quite well known and also documented [1] that this is slow. The
manual reads:

"Currently, pruning of partitions during the planning of an UPDATE or
DELETE command is implemented using the constraint exclusion method
(however, it is controlled by the enable_partition_pruning rather than
constraint_exclusion) — see the following section for details and
caveats that apply."

and later on the same page:

"All constraints on all children of the parent table are examined
during constraint exclusion, so large numbers of children are likely
to increase query planning time considerably. So the legacy
inheritance based partitioning will work well with up to perhaps a
hundred child tables; don't try to use many thousands of children."

That documentation should be getting adjusted by [2] as that patch
aims to improve the performance of UPDATE/DELETE planning, and also
improve planning performance for when partitions are pruned. Although
I'm not sure this will do much for your SELECT case, since you're not
pruning any partitions during planning.  There's been a discussion in
[3] about improving the performance of determining the relation's
size, which is known to be quite a bottleneck when generating a plan
which includes a partitioned table with a large number of partitions.
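
To make the plan-time vs run-time pruning point concrete, here's a sketch
against the table from the first message (the comments are mine): with a
constant the planner itself prunes, while with a parameter all 8192
partitions must be planned and pruning only happens at executor startup,
which is what the "Subplans Removed: 8191" line indicates.

postgres=# explain select * from t where id = 8192;    -- plan-time pruning: only t_8192 is planned
postgres=# explain analyze execute select_stmt(8192);  -- run-time pruning: all partitions planned first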

> I'm afraid I could not come up with a good idea myself; how can I shorten the creation time of a generic plan?

Since you mentioned the plan_cache_mode GUC, then I assume you're not
talking about any version of PostgreSQL that's been released, so if
you're looking for a way to make it faster in master then I'd suggest
helping with the review of [2]. If that patch does not meet your needs
then also help Thomas with [3]. If that's still not good enough then
you might need to do some research yourself. perf is your friend
there.

[1] https://www.postgresql.org/docs/devel/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE
[2] https://commitfest.postgresql.org/20/1778/
[3] https://www.postgresql.org/message-id/flat/CAMyN-kCPin_stCMoXCVCq5J557e9-WEFPZTqdpO3j8wzoNVwNQ%40mail.gmail.com#e085c43b597b2775326afd9f3a2b6591

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: Planning time of Generic plan for a table partitioned into a lot

From
Amit Langote
Date:
Hi Kato-san,

On 2018/11/27 19:05, Kato, Sho wrote:
> Of course, in the case of plan_cache_mode = force_custom_plan, it is not a problem, because unnecessary paths are pruned by the "speeding up planning with partitions" patch [1].
>
> However, if plan_cache_mode is force_generic_plan, a generic plan is made at the first execution of the prepared statement.
> If plan_cache_mode is auto (the default), a generic plan is made at the sixth execution.
> So, with the default setting, performance drops at the sixth execution.

Keeping aside the fact that making a generic plan gets increasingly more
expensive as the number of partitions increases, I'm a bit surprised that
you get a generic plan with plan_cache_mode = auto.  Isn't a generic plan
way too expensive in this case?

When I try your example, I always get a custom plan, because its cost is
pretty low (obviously so, since it contains only 1 partition), and even
adding the cost of planning doesn't make it grow beyond the generic
plan's cost, which covers all 8192 partitions.  The following formula is
used to calculate the planning cost:

planning cost = 1000.0 * cpu_operator_cost * (nrelations + 1)

where nrelations is the number of relations in the range table.
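
As a quick sanity check of that formula, here's a sketch (nrelations =
8194 is an assumption picked to match the arithmetic shown below;
cpu_operator_cost is at its default of 0.0025):

select 41.94 + 1000.0 * current_setting('cpu_operator_cost')::float8 * (8194 + 1) as custom_plan_cost;
 custom_plan_cost
──────────────────
         20529.44
(1 row)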

Here's what I get with various settings of plan caching.

-- force generic plan to see its cost

set plan_cache_mode = 'force_generic_plan';
set max_parallel_workers_per_gather = 0;
explain (timing off, analyze) execute select_stmt(8192);
                                 QUERY PLAN
────────────────────────────────────────────────────────────────────────────
 Append  (cost=0.00..343572.48 rows=106496 width=4) (actual rows=0 loops=1)
   Subplans Removed: 8191
   ->  Seq Scan on t_8192  (cost=0.00..41.88 rows=13 width=4) (actual rows=0 loops=1)
         Filter: (id = $1)
 Planning Time: 1217.543 ms
 Execution Time: 1.340 ms
(6 rows)

-- now look at the custom plan's cost

reset plan_cache_mode; -- resets to 'auto'

explain (timing off, analyze) execute select_stmt(8192);
                                 QUERY PLAN
────────────────────────────────────────────────────────────────────────────
 Append  (cost=0.00..41.94 rows=13 width=4) (actual rows=0 loops=1)
   ->  Seq Scan on t_8192  (cost=0.00..41.88 rows=13 width=4) (actual rows=0 loops=1)
         Filter: (id = 8192)
 Planning Time: 525.501 ms
 Execution Time: 1.104 ms
(5 rows)

So, the cost of the custom plan is 41.94 + 1000 * 0.0025 * 8195 = 20529.44,
which is way less than 343572 (the generic plan cost).

-- force it to generic plan again to use the cached plan (no re-planning!)

set plan_cache_mode = 'force_generic_plan';

explain (timing off, analyze) execute select_stmt(8192);
                                 QUERY PLAN
────────────────────────────────────────────────────────────────────────────
 Append  (cost=0.00..343572.48 rows=106496 width=4) (actual rows=0 loops=1)
   Subplans Removed: 8191
   ->  Seq Scan on t_8192  (cost=0.00..41.88 rows=13 width=4) (actual rows=0 loops=1)
         Filter: (id = $1)
 Planning Time: 14.202 ms
 Execution Time: 1.841 ms
(6 rows)

You can see that the total time is the least when a cached plan is used,
which is only possible if a generic plan can be used (even if creating it
for the first time is very expensive).  But its cost prevents it from
being automatically selected (plan_cache_mode = 'auto').  That may be one
thing we could fix in the future by considering run-time pruning in the
equation of Append costing, so that its cost is more or less the same as
the custom plan's cost.


Just as one more data point, if you apply the patch that you mentioned
[1], you can see that custom planning costs even less than that.

reset plan_cache_mode;

explain (timing off, analyze) execute select_stmt(8192);
                                 QUERY PLAN
────────────────────────────────────────────────────────────────────────────
 Append  (cost=0.00..41.94 rows=13 width=4) (actual rows=0 loops=1)
   ->  Seq Scan on t_8192  (cost=0.00..41.88 rows=13 width=4) (actual rows=0 loops=1)
         Filter: (id = 8192)
 Planning Time: 0.438 ms
 Execution Time: 0.121 ms
(5 rows)

It's cheaper than using a cached generic plan (without re-planning),
because the latter has to pay the cost of AcquireExecutorLocks which takes
longer as the number of partitions increases.  Perhaps something to try
fixing too.  Not planning should cost less than planning! :)

Thanks,
Amit

[1] https://www.postgresql.org/message-id/9d7c5112-cb99-6a47-d3be-cf1ee6862a1d@lab.ntt.co.jp



Re: Planning time of Generic plan for a table partitioned into a lot

From
Amit Langote
Date:
On 2018/11/28 13:46, Amit Langote wrote:
> It's cheaper than using a cached generic plan (without re-planning),
> because the latter has to pay the cost of AcquireExecutorLocks which takes
> longer as the number of partitions increases.  Perhaps something to try
> fixing too.  Not planning should cost less than planning! :)

Ah, I see that David has already thought about this issue.

(last paragraph of this email)
https://www.postgresql.org/message-id/CAKJS1f-ibmyn1W_UsdSmygjKOL6YgPyX0Mz54V_iD0HWWL_h%3Dg%40mail.gmail.com

Thanks,
Amit



RE: Planning time of Generic plan for a table partitioned into a lot

From
"Kato, Sho"
Date:
Hi, David

On Wednesday, November 28, 2018 1:23 PM David Rowley wrote:
> > I found that making a generic plan of SELECT/UPDATE/DELETE for a table
> > partitioned into thousands of partitions is slow.
> > UPDATE/DELETE statements are especially slow.
> 
> It's quite well known and also documented [1] that this is slow. The manual
> reads:

Thanks for the detailed explanation, and sorry for my lack of explanation.
What I am worried about is that if users don't know the flow of PREPARE/EXECUTE, query execution seems to be suddenly slow and they will be in trouble.

Query execution itself seems to be no problem, because choose_custom_plan() compares the costs of custom plans and generic plans and selects the lower-cost plan.

Actually, the selected custom plan is faster than the generic plan.
However, since a generic plan is made at the sixth execution, that execution looks slow as a whole.

postgres=# prepare update_stmt(int) as update t set id = id + 1 where id = $1;
PREPARE
postgres=# \timing
Timing is on.
postgres=# execute update_stmt(8192);                                       
UPDATE 0
Time: 25.702 ms
postgres=# execute update_stmt(8192);
UPDATE 0
Time: 13.641 ms
postgres=# execute update_stmt(8192);
UPDATE 0
Time: 13.380 ms
postgres=# execute update_stmt(8192);
UPDATE 0
Time: 8.508 ms
postgres=# execute update_stmt(8192);
UPDATE 0
Time: 12.162 ms
postgres=# execute update_stmt(8192);
UPDATE 0
Time: 26517.487 ms (00:26.517)
postgres=# execute update_stmt(8192);
UPDATE 0
Time: 13.208 ms

> Although I'm not sure this will do much for your SELECT case, since
> you're not pruning any partitions during planning.

Umm. enable_partition_pruning is on (the default). Is there any setting I'm missing?

> Since you mentioned the plan_cache_mode GUC, then I assume you're not
> talking about any version of PostgreSQL that's been released, so if you're
> looking for a way to make it faster in master then I'd suggest helping
> with the review of [2]. If that patch does not meet your needs then also
> help Thomas with [3]. If that's still not good enough then you might need
> to do some research yourself. perf is your friend there.

Yes, I'm looking for a way to make it faster in master.
Thanks for your advice. I'll take a look.
I understand that since there is no run-time partition pruning for UPDATE/DELETE yet, the planner creates plans for all partitions with force_generic_plan, which takes time to plan.

Regards,
Sho Kato

RE: Planning time of Generic plan for a table partitioned into a lot

From
"Kato, Sho"
Date:
Hi, Amit

Thanks for your reply.
Your explanation is very easy to understand :)

On Wednesday, November 28, 2018 2:52 PM, Amit Langote wrote:
> Keeping aside the fact that making a generic plan gets increasingly more
> expensive as the number of partitions increases, I'm a bit surprised that
> you get a generic plan with plan_cache_mode = auto.  Isn't a generic plan
> way too expensive in this case?

Sorry for my lack of explanation. I didn't get a generic plan with plan_cache_mode = auto.
What I am worried about is that if users don't know the flow of PREPARE/EXECUTE, query execution seems to be suddenly slow and they will be in trouble.

Just as you said, the generic plan is only built, and a custom plan is chosen.
But, as the time to build the generic plan is added, it becomes slow as a whole.

>Ah, I see that David has already thought about this issue.
>
>(last paragraph of this email)
>https://www.postgresql.org/message-id/CAKJS1f-ibmyn1W_UsdSmygjKOL6YgPyX0Mz54V_iD0HWWL_h%3Dg%40mail.gmail.com

Oh great! I am also concerned about AcquireExecutorLocks.
Ok, I'll take a look.

regards,
Sho Kato




Re: Planning time of Generic plan for a table partitioned into a lot

From
David Rowley
Date:
On Thu, 29 Nov 2018 at 20:40, Kato, Sho <kato-sho@jp.fujitsu.com> wrote:
> Sorry for my lack of explanation. I didn't get a generic plan with plan_cache_mode = auto.
> What I am worried about is that if users don't know the flow of PREPARE/EXECUTE, query execution seems to be suddenly slow and they will be in trouble.
> Just as you said, the generic plan is only built, and a custom plan is chosen.
> But, as the time to build the generic plan is added, it becomes slow as a whole.

Like Amit, I also had in mind that you'd never get a generic plan due
to the cost appearing much higher, but of course, the planner must
actually attempt to build a generic plan before it realises that the
cost of it is unfavourable, which would only occur on the 6th
execution; any subsequent executions would realise the generic plan is
no good.  I don't quite see any way around that other than ensuring
you have plan_cache_mode as force_custom_plan, but then you're still
going against the manual's recommendations about not having thousands
of partitions.  The problem is only made worse in PG11 from PG10
because generating the custom plan has become faster than it
previously was due to the new partition pruning code which might make
it appear we can handle more partitions than we could previously, but
generating a generic plan for that many partitions being slow kinda
proves that's not the case.  The fact that we still have the warning
in the manual about not having thousands of partitions makes me not so
worried about this.

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: Planning time of Generic plan for a table partitioned into a lot

From
Amit Langote
Date:
On 2018/11/29 19:54, David Rowley wrote:
> The problem is only made worse in PG11 from PG10
> because generating the custom plan has become faster than it
> previously was due to the new partition pruning code which might make
> it appear we can handle more partitions than we could previously,

Actually, PG 11's pruning improvements don't change plancache.c's equation
of custom plan cost, that is, even if pruning may have gotten faster it
doesn't change the value cached_plan_cost comes up with.

You're certainly right, though, that users are well advised to trust the
documentation and not go beyond hundreds of partitions, even if they may
not care about all the internal details that make partitioning slow.

Thanks,
Amit



RE: Planning time of Generic plan for a table partitioned into a lot

From
"Kato, Sho"
Date:
On Thursday, November 29, 2018 7:55 PM, David Rowley wrote:
> I don't quite see any way around that other than ensuring you have
> plan_cache_mode as force_custom_plan, but then you're still going against
> the manual's recommendations about not having thousands of partitions.
 

Ok, I got it.

Regards,
Sho Kato


Re: Planning time of Generic plan for a table partitioned into a lot

From
David Rowley
Date:
On Fri, 30 Nov 2018 at 15:04, Amit Langote
<Langote_Amit_f8@lab.ntt.co.jp> wrote:
>
> On 2018/11/29 19:54, David Rowley wrote:
> > The problem is only made worse in PG11 from PG10
> > because generating the custom plan has become faster than it
> > previously was due to the new partition pruning code which might make
> > it appear we can handle more partitions than we could previously,
>
> Actually, PG 11's pruning improvements don't change plancache.c's equation
> of custom plan cost, that is, even if pruning may have gotten faster it
> doesn't change the value cached_plan_cost comes up with.

Unsure why you think I was implying that the plancache code had
changed. What I meant was, the faster pruning code means that PG11
appears more capable of handling more partitions than PG10 could
handle, but this really only goes as far as custom plans where many
partitions get pruned. When no pruning takes place, say, in a generic
plan where the partition key is being compared to some parameter, then
we've done nothing to improve the performance of planning for that.
This may result in someone doing some light testing and thinking PG11
can handle a higher number of partitions than we might advise them to
use, only to find themselves stumbling later when trying to build a
generic plan for that number of partitions.  It appears to me that
this is what's happened in this case.

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: Planning time of Generic plan for a table partitioned into a lot

From
Amit Langote
Date:
On 2018/11/30 14:58, David Rowley wrote:
> On Fri, 30 Nov 2018 at 15:04, Amit Langote
> <Langote_Amit_f8@lab.ntt.co.jp> wrote:
>>
>> On 2018/11/29 19:54, David Rowley wrote:
>>> The problem is only made worse in PG11 from PG10
>>> because generating the custom plan has become faster than it
>>> previously was due to the new partition pruning code which might make
>>> it appear we can handle more partitions than we could previously,
>>
>> Actually, PG 11's pruning improvements don't change plancache.c's equation
>> of custom plan cost, that is, even if pruning may have gotten faster it
>> doesn't change the value cached_plan_cost comes up with.
> 
> Unsure why you think I was implying that the plancache code had
> changed.

Sorry, I misinterpreted your sentence "...which might make it appear we can
handle more partitions than we could previously".  I thought you were saying
that *plancache* now thinks custom plans are better because they're
slightly faster.

> What I meant was, the faster pruning code means that PG11
> appears more capable of handling more partitions than PG10 could
> handle, but this really only goes as far as custom plans where many
> partitions get pruned.

Right.

> When no pruning takes place, say, in a generic
> plan where the partition key is being compared to some parameter, then
> we've done nothing to improve the performance of planning for that.

Yeah.  Even with patches for PG 12, this case will be only slightly faster.

> This may result in someone doing some light testing and thinking PG11
> can handle a higher number of partitions than we might advise them to
> use, only to find themselves stumbling later when trying to build a
> generic plan for that number of partitions.  It appears to me that
> this is what's happened in this case.

Yeah, maybe we haven't explained, in the documentation where generic plans
are described, that making them for a partitioned table is an expensive
affair.  Although, by definition, they are built once for a given query,
and PG 11 with its execution-time pruning can execute these plans pretty
quickly, which is an overall improvement.  But you'd obviously know that
much. :)

Thanks,
Amit



Re: Planning time of Generic plan for a table partitioned into a lot

From
David Rowley
Date:
On Fri, 30 Nov 2018 at 19:44, Amit Langote
<Langote_Amit_f8@lab.ntt.co.jp> wrote:
> Yeah, maybe we haven't explained, in the documentation where generic plans
> are described, that making them for a partitioned table is an expensive
> affair.  Although, by definition, they are built once for a given query,
> and PG 11 with its execution-time pruning can execute these plans pretty
> quickly, which is an overall improvement.  But you'd obviously know that
> much. :)

Maybe a documentation patch is a good idea.  Your planner patches for
partitioning are only going to widen the performance gap between
generating a plan where many partitions are pruned and generating a
generic plan where no partition could be pruned.  I can only imagine
this will cause problems for more people as that gap widens, since it
will likely lead to more and more people creating partitioned tables
with high numbers of partitions.

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


RE: Planning time of Generic plan for a table partitioned into a lot

From
"Kato, Sho"
Date:
On Friday, November 30, 2018 3:44 PM, Amit Langote wrote:
> Yeah, maybe we haven't explained, in the documentation where generic plans
> are described, that making them for a partitioned table is an expensive
> affair.

Will we improve creating the generic plan in the future?
For example, if there were UPDATE/DELETE run-time partition pruning, creating the generic plan would also be faster.

Although it may not be possible for PG 12, I think it is necessary to improve this in the future.

Regards,
Sho Kato


Re: Planning time of Generic plan for a table partitioned into a lot

From
David Rowley
Date:
On Fri, 30 Nov 2018 at 20:15, Kato, Sho <kato-sho@jp.fujitsu.com> wrote:
> Will we improve creating the generic plan in the future?

I think Amit's patch will improve the UPDATE/DELETE planning
performance even when nothing can be pruned.  Currently, planning those
is quite inefficient, as a separate grouping planner call is made for
each partition.

> For example, if there were UPDATE/DELETE run-time partition pruning, creating the generic plan would also be faster.

Adding run-time pruning code for that won't make the planner any
faster. There will be a very slight overhead to build the required
PartitionPruneInfo struct to attach to the plan in order to allow the
run-time pruning to work. This would mean that there'd only be a hope
of reducing the execution time of such a plan, not the time to build
the plan in the first place.

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services