If it were implemented in such a way that, when the top-level pruning
happens, a set of 3 sub-partitions is selected from, say, 18 total, and then
at the next level it selects the 3 matching sub-partitions from each matched
group of 30, then you are only looking at 18 + 3*30 = 108 checks instead of
548 <example assumes monthly first-level partitioning and daily
sub-partitioning>. If this is not supported, then we will need to solve the
problem a different way - probably weekly partitions, plus refactoring the
code to decrease updates by at least an order of magnitude.

While we are in the process of doing this, is there a way to make updates
faster? PostgreSQL is spending a lot of CPU cycles on each HOT update. We
have synchronous_commit turned off, commit_siblings set to 5, and
commit_delay set to 50,000. With synchronous_commit off, does it make any
sense to be grouping commits? Buffers written by the bgwriter vs. by
checkpoints is 6 to 1; buffers written by clients vs. by checkpoints is
1 to 6. Is there anything obvious here?
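For concreteness, here is a sketch of the two-level layout I have in mind,
using the 8.x inheritance-plus-CHECK-constraint style of partitioning (table
and column names are made up for illustration):

```sql
-- Hypothetical root table.
CREATE TABLE measurements (
    reading_time timestamptz NOT NULL,
    value        numeric
);

-- First level: monthly partition inheriting from the root.
CREATE TABLE measurements_2008_08 (
    CHECK (reading_time >= DATE '2008-08-01'
       AND reading_time <  DATE '2008-09-01')
) INHERITS (measurements);

-- Second level: daily sub-partition inheriting from the monthly parent.
CREATE TABLE measurements_2008_08_27 (
    CHECK (reading_time >= DATE '2008-08-27'
       AND reading_time <  DATE '2008-08-28')
) INHERITS (measurements_2008_08);
```

The hope was that constraint exclusion would prune level by level; as noted
below, it instead flattens the inheritance tree and tests every leaf's CHECK
constraint, so the hierarchy does not reduce the number of checks.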
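For reference, the settings mentioned above correspond to this
postgresql.conf fragment (values taken verbatim from the description;
commit_delay is in microseconds):

```
synchronous_commit = off
commit_siblings    = 5
commit_delay       = 50000    # 50 ms, in microseconds
```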
-Jerry
-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Wednesday, August 27, 2008 8:02 AM
To: Jerry Champlin
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Is there a way to SubPartition?
Jerry Champlin <jchamplin@absolute-performance.com> writes:
> Is there a way to use multi-level inheritance to achieve sub
> partitioning that the query optimizer will recognize?
No, I don't think so. How would that make things any better anyway?
You're still going to end up with the same very large number of
partitions.
regards, tom lane