
From: Justin Pryzby
Subject: Re: Occasional performance issue after changing table partitions
Msg-id: 20220712154310.GA13040@telsasoft.com
In response to: Re: Occasional performance issue after changing table partitions (Nathan Ward <lists+postgresql@daork.net>)
List: pgsql-performance
On Wed, Jul 13, 2022 at 03:13:46AM +1200, Nathan Ward wrote:
> I have been stepping through the various statements which are different between the two functions, and note that when
> I do math on a timestamp in a SELECT statement (i.e. _event_timestamp - INTERVAL ‘1 hour’), the planner takes 50ms or
> so - note that the result of the timestamp is used to search the partition key.
> If I declare a function which does the math in advance, stores it in a variable and then runs the SELECT, the planner
> takes less than 1ms.
> Does this mean it’s calculating the timestamp for each partition, or something like that?

I'm not sure I understand what you're doing - the relevant parts of your
function text and query plan would help here.

Maybe auto_explain.log_nested_statements would be useful?
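
For example, roughly (session-level, needs superuser; exact settings to taste, and log_analyze adds overhead):
|LOAD 'auto_explain';
|SET auto_explain.log_min_duration = 0;        -- log plans for every statement
|SET auto_explain.log_nested_statements = on;  -- include statements run inside functions
|SET auto_explain.log_analyze = on;            -- optional: actual run times, not just plans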

Note that "partition pruning" can happen even if you don't have a literal
constant.  For example:
|explain(costs off) SELECT * FROM metrics WHERE start_time > now()::timestamp - '1 days'::interval;
| Append
|   Subplans Removed: 36
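
The "Subplans Removed" line means the pruning happened at executor startup rather than in the planner.  With a
literal constant, pruning happens at plan time and the other partitions never show up in the plan at all -
something like this, sketched with a hypothetical partition name:
|explain(costs off) SELECT * FROM metrics WHERE start_time > '2022-07-11'::timestamp;
| Seq Scan on metrics_2022_07 metrics
|   Filter: (start_time > '2022-07-11 00:00:00'::timestamp without time zone)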

> I see Postgres 14 release notes has information about performance improvements in the planner for updates on tables
> with “many partitions”. Is 444 partitions “many”?
> My updates are all impacting a single partition only.

It sounds like that'll certainly help you.  Another option is to update the
partition directly (which is what we do, to be able to use "ON CONFLICT").
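
Something along these lines, with made-up table/column names, and assuming the leaf partition has a suitable
unique index to act as the arbiter:
|INSERT INTO metrics_2022_07 AS m (start_time, device_id, value)
|VALUES (now(), 1, 42)
|ON CONFLICT (start_time, device_id)
|DO UPDATE SET value = EXCLUDED.value;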

I think with "old partitioning with inheritance", more than a few hundred
partitions was considered unreasonable, and plan-time suffered.

With relkind=p native/declarative partitioning, a few hundred partitions is considered
reasonable, and a few thousand is still considered excessive - even if planning time is no
issue, you'll still run into problems like work_mem being per-node, which works poorly when
the plan might have 10x more nodes.

TBH, this doesn't sound related to your original issue.

-- 
Justin


