Thread: Large number of partitions of a table

Large number of partitions of a table

From
Victor Sudakov
Date:
Dear Colleagues,

Do you have success (or disaster) stories for having a large number of
partitions in a table (like maybe 10000) in a production database?

I've found a great article
https://www.depesz.com/2021/01/17/are-there-limits-to-partition-counts/
and basically it says 70000 partitions are OK, but I would like to hear
more opinions, especially from production experience.

If a table itself has e.g. 50 indexes, partitioning it into 10000
partitions will create 10000 extra tables and 50*10000=500000 indexes.
Isn't that a heavy burden on the system catalogs (pg_statistic etc.)?
It may slow down ANALYZE or have
some other adverse effects.
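
For scale: every child table and every index gets a pg_class row, and
every analyzed column of every child adds rows to pg_statistic, so it
is easy to check how big those catalogs already are, roughly:

  SELECT count(*) FROM pg_class;
  SELECT pg_size_pretty(pg_total_relation_size('pg_class'));
  SELECT pg_size_pretty(pg_total_relation_size('pg_statistic'));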

Any comments welcome.

-- 
Victor Sudakov VAS4-RIPE
http://vas.tomsk.ru/
2:5005/49@fidonet



Re: Large number of partitions of a table

From
Tom Lane
Date:
Victor Sudakov <vas@sibptus.ru> writes:
> Do you have success (or disaster) stories for having a large number of
> partitions in a table (like maybe 10000) in a production database?
> I've found a great article
> https://www.depesz.com/2021/01/17/are-there-limits-to-partition-counts/
> and basically it says 70000 partitions are OK but would like to hear
> more opinions especially from production experience.

One thing that cannot be said too strongly is that it depends A LOT
on which Postgres major version you are working with.  The earliest
releases that had partitions had a lot of trouble with more than a
few dozen partitions.  It's gotten better with each release since
then, because we've been sanding down various rough edges over
time (and will continue to do so).  But it's still possible to burn
yourself, and always will be, for example if you issue a query that
requires scanning all the partitions not just a few.  Thus, your
results will also vary A LOT depending on your exact workload.

Since you've given no details about either of those points,
it's impossible to give any confident answer about how well
10K partitions would work for you.

The specific point that depesz was responding to in that blog
was the 64K-ish limit on rangetable entries in a query.  That is
a thing, as he could have shown by using queries that weren't
amenable to plan-time pruning.  (It's also an ex-thing, having
been fixed for v15 [1]; but that doesn't help you today.)
Now, if you use no queries that can't be pruned to a few
partitions, then it's academic for you.  But you can pretty
much guarantee that you will run into it some of the time
if you try to do real work with a 70K-partition table.
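
As a quick illustration (with a hypothetical table "t" partitioned on a
column "k"), EXPLAIN makes the difference visible:

  -- the constant is known at plan time, so the pruned partitions
  -- never even make it into the plan
  EXPLAIN SELECT * FROM t WHERE k = 42;

  -- the partition key is not constrained, so every partition is
  -- scanned and every one of them shows up in the plan
  EXPLAIN SELECT count(*) FROM t;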

> If a table itself has e.g. 50 indexes, partitioning it will create 10000
> extra tables and 50*10000=500000 indexes, isn't it a heavy burden on the
> system catalogs (pg_statistic etc). It may slow down ANALYZE or have
> some other adverse effects.

Right.  Although there are not *hard* limits on the number of
database objects involved (mumble 4G OID limit mumble), you're
definitely risking performance issues when you get into thousands
of partitions.  If you need such a setup badly enough to deal
with restrictions on what you can do with the table, then do what
you gotta do ... but I suspect that overall, most people will be
happier with fewer partitions not more.

            regards, tom lane

[1] https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=e3ec3c00d



Re: Large number of partitions of a table

From
Ron
Date:
On 1/16/22 8:44 PM, Victor Sudakov wrote:
> Dear Colleagues,
>
> Do you have success (or disaster) stories for having a large number of
> partitions in a table (like maybe 10000) in a production database?
>
> I've found a great article
> https://www.depesz.com/2021/01/17/are-there-limits-to-partition-counts/
> and basically it says 70000 partitions are OK but would like to hear
> more opinions especially from production experience.
>
> If a table itself has e.g. 50 indexes, partitioning it will create 10000
> extra tables and 50*10000=500000 indexes, isn't it a heavy burden on the
> system catalogs (pg_statistic etc). It may slow down ANALYZE

My experience is with range partitions on v12.

It will speed up ANALYZE and VACUUM, since those operations are performed against the (smaller) child tables, not the (empty) parent table.  If the table is partitioned chronologically and the older children never get modified, you won't need to analyze or vacuum them at all.
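
One way to confirm that the older children really are being left alone
(the partition naming here is made up) is to look at the per-child
statistics:

  SELECT relname, last_autovacuum, last_autoanalyze, n_dead_tup
  FROM pg_stat_user_tables
  WHERE relname LIKE 't_2021_%'
  ORDER BY relname;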

> or have some other adverse effects.

Queries can easily be much much slower if the partition key is not in the WHERE clause.

--
Angular momentum makes the world go 'round.

Re: Large number of partitions of a table

From
Victor Sudakov
Date:
Tom Lane wrote:
> Victor Sudakov <vas@sibptus.ru> writes:
> > Do you have success (or disaster) stories for having a large number of
> > partitions in a table (like maybe 10000) in a production database?
> > I've found a great article
> > https://www.depesz.com/2021/01/17/are-there-limits-to-partition-counts/
> > and basically it says 70000 partitions are OK but would like to hear
> > more opinions especially from production experience.
> 
> One thing that cannot be said too strongly is that it depends A LOT
> on which Postgres major version you are working with.  

Sorry, I should have mentioned this. I'm talking about 13.5.

-- 
Victor Sudakov VAS4-RIPE
http://vas.tomsk.ru/
2:5005/49@fidonet



Re: Large number of partitions of a table

From
Victor Sudakov
Date:
Tom Lane wrote:
[dd]

> The specific point that depesz was responding to in that blog
> was the 64K-ish limit on rangetable entries in a query.  That is
> a thing, as he could have shown by using queries that weren't
> amenable to plan-time pruning.  (It's also an ex-thing, having
> been fixed for v15 [1]; but that doesn't help you today.)
> Now, if you use no queries that can't be pruned to a few
> partitions, then it's academic for you.  

The table will be partitioned `BY LIST (customer_id)`, and customer_id
is covered by a unique index. All queries will be using this index, so
no query should ever
have to use more than 1 partition.
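
For context, the layout is roughly this (names are illustrative):

  CREATE TABLE t (
      customer_id bigint NOT NULL,
      payload     text
  ) PARTITION BY LIST (customer_id);

  CREATE TABLE t_1 PARTITION OF t FOR VALUES IN (1);
  CREATE TABLE t_2 PARTITION OF t FOR VALUES IN (2);
  -- ... one partition per customer, about 10000 in total
  -- plus a unique index whose leading column is customer_id

  -- every query carries the partition key, e.g.
  SELECT * FROM t WHERE customer_id = 1;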

This means basically I'm OK?

-- 
Victor Sudakov VAS4-RIPE
http://vas.tomsk.ru/
2:5005/49@fidonet



Re: Large number of partitions of a table

From
Ron
Date:
On 1/16/22 11:38 PM, Victor Sudakov wrote:
> Tom Lane wrote:
> [dd]
>
>> The specific point that depesz was responding to in that blog
>> was the 64K-ish limit on rangetable entries in a query.  That is
>> a thing, as he could have shown by using queries that weren't
>> amenable to plan-time pruning.  (It's also an ex-thing, having
>> been fixed for v15 [1]; but that doesn't help you today.)
>> Now, if you use no queries that can't be pruned to a few
>> partitions, then it's academic for you.
> The table will be partitioned `BY LIST (customer_id)` which is a unique
> index. All queries will be using this index

Good.

> so no query should ever have to use more than 1 partition.

I find it hard to believe that you'll never run a report against more customers than are in a single partition.


> This means basically I'm OK?


--
Angular momentum makes the world go 'round.

Re: Large number of partitions of a table

From
Tom Lane
Date:
Ron <ronljohnsonjr@gmail.com> writes:
> On 1/16/22 11:38 PM, Victor Sudakov wrote:
>> so no query should ever have to use more than 1 partition.

> I find it hard to believe that you'll *never* run a report against more 
> customers than are in a single partition.

Yeah, I'm a little suspicious of that.  The other thing I'd be worried
about, given that the OP is using v13, is that UPDATE/DELETE planning
performance can be pretty awful with lots of partitions.  There were
some O(N^2) behaviors in there that we didn't get rid of till v14.
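
If you want to gauge that on your own 13.x instance, EXPLAIN with the
SUMMARY option reports planning time without executing anything (table
and column names below are made up):

  EXPLAIN (SUMMARY) SELECT * FROM t WHERE some_other_column = 0;
  EXPLAIN (SUMMARY) UPDATE t SET some_other_column = 0
      WHERE some_other_column = 1;

Compare the reported "Planning Time" of the two when neither can be
pruned to a single partition.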

            regards, tom lane



Re: Large number of partitions of a table

From
"David G. Johnston"
Date:
On Mon, Jan 17, 2022 at 7:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Ron <ronljohnsonjr@gmail.com> writes:
>> On 1/16/22 11:38 PM, Victor Sudakov wrote:
>>> so no query should ever have to use more than 1 partition.
>
>> I find it hard to believe that you'll *never* run a report against more
>> customers than are in a single partition.
>
> Yeah, I'm a little suspicious of that.

Is "never" really the correct threshold here?  Sure, there may be a few queries that run over multiple customers, but it's also likely that, so long as they don't actually crash the system, the performance will be acceptable.  Plus, not every table would have to be partitioned on customer; I can imagine that for queries where I do care about multiple customers I'd set up triggers on the individual customer tables to update a consolidated table, and then run the multi-customer query against that.

I say this having written a system, that has been running for over a decade, whose usage patterns for multi-tenancy are basically that (it doesn't use partitions though it's probably worse for not having done so).  In my case I would need multiple thousands of partitions, but indeed every main operational query would have the tenantID as a predicate.  That seems like a fundamental property of a multi-tenant system...
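
Very roughly, the trigger idea looks like this (all names are made up,
and it's simplified to a single per-customer table):

  -- consolidated table used for the multi-customer reporting
  CREATE TABLE orders_all (
      customer_id bigint NOT NULL,
      order_id    bigint NOT NULL,
      total       numeric,
      PRIMARY KEY (customer_id, order_id)
  );

  -- one of the many per-customer tables
  CREATE TABLE orders_customer_42 (
      order_id bigint PRIMARY KEY,
      total    numeric
  );

  CREATE FUNCTION mirror_orders_42() RETURNS trigger AS $$
  BEGIN
      INSERT INTO orders_all (customer_id, order_id, total)
      VALUES (42, NEW.order_id, NEW.total)
      ON CONFLICT (customer_id, order_id)
      DO UPDATE SET total = EXCLUDED.total;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER mirror_orders_42
  AFTER INSERT OR UPDATE ON orders_customer_42
  FOR EACH ROW EXECUTE FUNCTION mirror_orders_42();

The occasional multi-customer report then only ever touches orders_all.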

David J.

Re: Large number of partitions of a table

From
Tom Lane
Date:
"David G. Johnston" <david.g.johnston@gmail.com> writes:
> On Mon, Jan 17, 2022 at 7:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Ron <ronljohnsonjr@gmail.com> writes:
>>> I find it hard to believe that you'll *never* run a report against more
>>> customers than are in a single partition.

>> Yeah, I'm a little suspicious of that.

> Is "never" really the correct threshold here?

It is if you've pushed things to the point where, say, "select count(*)
from partitioned_table" will fail outright.  I don't have a lot of faith
in system designs that assume there will never be any ad-hoc queries
that are outside your normal pattern.

            regards, tom lane



Re: Large number of partitions of a table

From
Ron
Date:
On 1/17/22 9:14 AM, David G. Johnston wrote:
> On Mon, Jan 17, 2022 at 7:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Ron <ronljohnsonjr@gmail.com> writes:
>>> On 1/16/22 11:38 PM, Victor Sudakov wrote:
>>>> so no query should ever have to use more than 1 partition.
>>
>>> I find it hard to believe that you'll *never* run a report against more
>>> customers than are in a single partition.
>>
>> Yeah, I'm a little suspicious of that.
>
> Is "never" really the correct threshold here?

That's what the OP wrote, and that's what we're dubious about.  Users who know just enough to be dangerous write a scary amount of unexpected queries.

--
Angular momentum makes the world go 'round.

Re: Large number of partitions of a table

From
Scott Ribe
Date:
What about schema per tenant instead of partition per tenant?
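
Roughly (names made up):

  -- one schema per tenant, identical table definitions in each
  CREATE SCHEMA tenant_42;
  CREATE TABLE tenant_42.orders (order_id bigint PRIMARY KEY, total numeric);

  -- the application selects the tenant by setting the search path
  SET search_path = tenant_42;
  SELECT count(*) FROM orders;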





Re: Large number of partitions of a table

From
Victor Sudakov
Date:
Ron wrote:
> On 1/16/22 11:38 PM, Victor Sudakov wrote:
> > Tom Lane wrote:
> > [dd]
> >
> >> The specific point that depesz was responding to in that blog
> >> was the 64K-ish limit on rangetable entries in a query.  That is
> >> a thing, as he could have shown by using queries that weren't
> >> amenable to plan-time pruning.  (It's also an ex-thing, having
> >> been fixed for v15 [1]; but that doesn't help you today.)
> >> Now, if you use no queries that can't be pruned to a few
> >> partitions, then it's academic for you.
> > The table will be partitioned `BY LIST (customer_id)` which is a unique
> > index. All queries will be using this index
> 
> Good.
> 
> > so no query should ever have to use more than 1 partition.
> 
> I find it hard to believe that you'll *never* run a report against more 
> customers than are in a single partition.

Thank you for raising this question, it can be of great interest.

What's the worst thing to happen if someone runs "SELECT COUNT(*) FROM t" where t has 10000 partitions?

1. The backend will crash?

2. The whole cluster will crash?

3. Only this particular query (spanning multiple partitions) will be very slow?

4. ?

Also, what if it is not a SELECT but an UPDATE query spanning multiple partitions? Does it make any difference?

-- 
Victor Sudakov VAS4-RIPE
http://vas.tomsk.ru/
2:5005/49@fidonet



Re: Large number of partitions of a table

From
Tom Lane
Date:
Victor Sudakov <vas@sibptus.ru> writes:
> What's the worst thing to happen if someone runs "SELECT COUNT(*) FROM t" where t has 10000 partitions?

> 1. The backend will crash?

Well, yeah, actually.  An ill-advised query will blow out your backend's
memory consumption, potentially leading to a SIGKILL from the dreaded OOM
killer[1] (if you're on Linux), resulting in a backend crash and cluster
restart.  On other platforms, you will likely end up in swap purgatory,
which is actually worse because it destroys the whole platform's
responsiveness and there may not be any easy way to get out of it.

> 2. The whole cluster will crash?

Not unless the OOM killer decides to kill the postmaster.  That's
been seen to happen :-( but I think recent kernels have gotten
better about targeting the misbehaving backend not its parent process.
(Once upon a time they deliberately targeted the parent, which is
the main reason for the advice in [1] to disable the OOM killer.)

> 3. Only this particular query (spanning multiple partitions) will be very slow?

Guaranteed.

> 4. ?

You could run the shared lock table to full or nearly full, causing
concurrent queries to fail all the while your SELECT COUNT(*) is
leisurely proceeding towards completion.
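
For reference, the shared lock table has room for roughly

  max_locks_per_transaction * (max_connections + max_prepared_transactions)

entries; with the defaults (64, 100 and 0) that is 6400, while a query
touching 10000 partitions plus their indexes needs a lock on every one
of them.  You can check your own settings with

  SHOW max_locks_per_transaction;
  SHOW max_connections;
  SHOW max_prepared_transactions;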

> Also, what if it is not a SELECT but an UPDATE query spanning multiple partitions? Does it make any difference?

On v14, no.  Before that, you have the O(N^2) planner issues
I mentioned upthread.

All of these things can be worked around, but you have to ask
yourself if the benefits you get from using more rather than
fewer partitions are worth fighting these sorts of fires.

            regards, tom lane

[1] https://www.postgresql.org/docs/current/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT



Re: Large number of partitions of a table

From
Victor Sudakov
Date:
Tom Lane wrote:
> Victor Sudakov <vas@sibptus.ru> writes:
> > What's the worst thing to happen if someone runs "SELECT COUNT(*) FROM t" where t has 10000 partitions?
> 
> > 1. The backend will crash?
> 
> Well, yeah, actually.  An ill-advised query will blow out your backend's
> memory consumption, potentially leading to a SIGKILL from the dreaded OOM
> killer[1] (if you're on Linux), resulting in a backend crash and cluster
> restart.  

Why should "SELECT COUNT(*) FROM t" ever consume more than work_mem
even if t has 10000 partitions? I've just run it with 3000 partitions
and it does not even create temporary files, let alone consume any
significant amount of RAM.
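
For anyone who wants to reproduce the test, generating such a table is
trivial, e.g.:

  CREATE TABLE t (customer_id int NOT NULL) PARTITION BY LIST (customer_id);

  DO $$
  BEGIN
      FOR i IN 1..3000 LOOP
          EXECUTE format(
              'CREATE TABLE t_%s PARTITION OF t FOR VALUES IN (%s)', i, i);
      END LOOP;
  END $$;

  SELECT count(*) FROM t;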

It is very difficult for me to even imagine a query that would consume
work_mem*N where N>4 or so.  It would have to be a very sophisticated
query, though the documentation says this is possible.  And a reasonable
amount of work_mem could be around 32MB.

Even considering parallelism, how would a query against a table with
10000 partitions consume enough RAM to cause an OOM?  What is the
mechanism?

> 
> > 3. Only this particular query (spanning multiple partitions) will be very slow?
> 
> Guaranteed.
> 
> > 4. ?
> 
> You could run the shared lock table to full or nearly full, causing
> concurrent queries to fail all the while your SELECT COUNT(*) is
> leisurely proceeding towards completion.

What's the limit on the shared lock table? 

> > Also, what if it is not a SELECT but an UPDATE query spanning multiple partitions? Does it make any difference?
> 
> On v14, no.  Before that, you have the O(N^2) planner issues
> I mentioned upthread.
> 
> All of these things can be worked around, but you have to ask
> yourself if the benefits you get from using more rather than
> fewer partitions are worth fighting these sorts of fires.

Thank you for the valuable comments!

> 
> [1] https://www.postgresql.org/docs/current/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT

Reading this, I'm not sure if getting a "Cannot allocate memory" error
is really any better for Postgres than being killed by the OOM killer.
I'm not arguing here but just stating that I genuinely don't know.

-- 
Victor Sudakov VAS4-RIPE
http://vas.tomsk.ru/
2:5005/49@fidonet



Re: Large number of partitions of a table

From
Tom Lane
Date:
Victor Sudakov <vas@sibptus.ru> writes:
> Tom Lane wrote:
>> Well, yeah, actually.  An ill-advised query will blow out your backend's
>> memory consumption, potentially leading to a SIGKILL from the dreaded OOM
>> killer[1] (if you're on Linux), resulting in a backend crash and cluster
>> restart.  

> Why should "SELECT COUNT(*) FROM t" ever consume more than work_mem
> even if t has 10000 partitions?

Sure, COUNT(*)'s runtime memory consumption is negligible.
But you're not thinking about overhead --- specifically,

1. 10000 relcache entries for the base tables.

2. If you have N indexes per table, N*10000 relcache entries for
the indexes.  (The planner will probably have looked at all those
indexes, even if it didn't find any use for them.)

3. 10000 SeqScan plan nodes and associated rangetable entries,

4. Likewise, 10000 instances of executor per-node state.

5. 10000 lock table entries (both shared and local lock tables).

6. Probably a few per-relation things I didn't think of.

Admittedly, each of those individual data structures is going
to be at most in the few-kilobytes range, so this is going to total
somewhere in the hundreds of megabytes, which you can probably
tolerate on modern hardware --- unless some dozens of sessions
are all doing the same kind of thing.
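
If you want to see some of that for yourself, a quick sketch (assuming
the partitioned table is called "t"):

  BEGIN;
  EXPLAIN SELECT count(*) FROM t;
  -- expect roughly one lock entry per partition, plus one per index
  SELECT count(*) FROM pg_locks WHERE pid = pg_backend_pid();
  COMMIT;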

Basically, this is likely to work until it doesn't ... and
when it doesn't, you're going to be in the office at 3AM
trying to wrangle a recalcitrant system.  Just sayin'.

            regards, tom lane



Re: Large number of partitions of a table

From
Victor Sudakov
Date:
Tom Lane wrote:
> Victor Sudakov <vas@sibptus.ru> writes:
> > Tom Lane wrote:
> >> Well, yeah, actually.  An ill-advised query will blow out your backend's
> >> memory consumption, potentially leading to a SIGKILL from the dreaded OOM
> >> killer[1] (if you're on Linux), resulting in a backend crash and cluster
> >> restart.  
> 
> > Why should "SELECT COUNT(*) FROM t" ever consume more than work_mem
> > even if t has 10000 partitions?
> 
> Sure, COUNT(*)'s runtime memory consumption is negligible.
> But you're not thinking about overhead --- specifically,
> 
> 1. 10000 relcache entries for the base tables.
> 
> 2. If you have N indexes per table, N*10000 relcache entries for
> the indexes.  (The planner will probably have looked at all those
> indexes, even if it didn't find any use for them.)
> 
> 3. 10000 SeqScan plan nodes and associated rangetable entries,
> 
> 4. Likewise, 10000 instances of executor per-node state.
> 
> 5. 10000 lock table entries (both shared and local lock tables).
> 
> 6. Probably a few per-relation things I didn't think of.

I see your point about all that query-related stuff. I hope that
testing queries in a staging environment will help to detect such
situations.

What about the system catalogs however? Will the extra 10000
tables and 500000 indexes negatively impact the performance of the
system catalogs? Are there any caveats you could think of?

> 
> Basically, this is likely to work until it doesn't ... 

Oh, this can be said about many things beyond the very trivial.
Unless there are immediately visible and well-known limitations, I
would risk it.

-- 
Victor Sudakov VAS4-RIPE
http://vas.tomsk.ru/
2:5005/49@fidonet



Re: Large number of partitions of a table

From
Ron
Date:
On 1/18/22 2:19 AM, Victor Sudakov wrote:
> Tom Lane wrote:
>> Victor Sudakov <vas@sibptus.ru> writes:
>>> Tom Lane wrote:
>>>> Well, yeah, actually.  An ill-advised query will blow out your backend's
>>>> memory consumption, potentially leading to a SIGKILL from the dreaded OOM
>>>> killer[1] (if you're on Linux), resulting in a backend crash and cluster
>>>> restart.
>>> Why should "SELECT COUNT(*) FROM t" ever consume more than work_mem
>>> even if t has 10000 partitions?
>> Sure, COUNT(*)'s runtime memory consumption is negligible.
>> But you're not thinking about overhead --- specifically,
>>
>> 1. 10000 relcache entries for the base tables.
>>
>> 2. If you have N indexes per table, N*10000 relcache entries for
>> the indexes.  (The planner will probably have looked at all those
>> indexes, even if it didn't find any use for them.)
>>
>> 3. 10000 SeqScan plan nodes and associated rangetable entries,
>>
>> 4. Likewise, 10000 instances of executor per-node state.
>>
>> 5. 10000 lock table entries (both shared and local lock tables).
>>
>> 6. Probably a few per-relation things I didn't think of.
> I see your point about all that query-related stuff. I hope the
> testing of queries in a staging environment should help to detect such
> situations.
>
> What about the system catalogs however? Will the extra 10000
> tables and 500000 indexes negatively impact the performance of the
> system catalogs? Are there any caveats you could think of?

EXPLAIN plans are going to be hilariously gigantic, which means that query
planning would take a loooong time.  And the query planner (in v12, at
least) can generate some pretty bad plans for partitioned tables; I bet there
are edge cases in the QP code that don't work well with 10000 partitions and
50000 indices.

-- 
Angular momentum makes the world go 'round.



Re: Large number of partitions of a table

From
Mladen Gogala
Date:
On 1/18/22 09:47, Ron wrote:
> EXPLAIN plans are going to be hilariously gigantic, which means that query planning would take a loooong time,  And the query planner (in v12, at least) can generate some pretty bad plans in partitioned tables; I bet there are edge cases in the QP code that don't work well with 10000 partitions and 50000 indices.

These things sometimes happen. On the oracle-l, there is a thread in progress about a generated SQL with 130,000 lines which fails parsing because parsing exhausts the available memory. In such cases, I am less inclined to look for a fault with the database software and more inclined to inflict an unusual and cruel punishment on whoever has designed the data model.

-- 
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
https://dbwhisperer.wordpress.com

Re: Large number of partitions of a table

From
Ron
Date:
On 1/18/22 6:09 PM, Mladen Gogala wrote:
> On 1/18/22 09:47, Ron wrote:
>> EXPLAIN plans are going to be hilariously gigantic, which means that 
>> query planning would take a loooong time,  And the query planner (in v12, 
>> at least) can generate some pretty bad plans in partitioned tables; I bet 
>> there are edge cases in the QP code that don't work well with 10000 
>> partitions and 50000 indices.
>
> These things sometimes happen. On the oracle-l, there is a thread in 
> progress about a generated SQL with 130,000 lines which fails parsing 
> because parsing exhausts the available memory.
>

But there wasn't any programming.  The company saved so much money!!!

> In such cases, I am less inclined to look for a fault with the database 
> software and more inclined to inflict an unusual and cruel punishment on 
> whoever has designed the data model.
>

OOP was definitely involved, in some way or form...

-- 
Angular momentum makes the world go 'round.



Re: Large number of partitions of a table

From
Ketan Popat
Date:
Hi Victor,

Thanks for starting a thread on a large number of partitions with PostgreSQL.

If you are okay with it, I would like to start by understanding the problem rather than the solution.
What I have understood so far is that you have a multi-tenant application and you are using PostgreSQL 13.5 as its data store.

A few questions there:
  • How many tenants are you going to start with? What do your 3-5 year projections look like in terms of number of tenants?
  • What is the per-tenant data volume (both average and max)?
  • What is going to be your table size range over 3-5 years?
    • How many small (< 100 GB), medium (100-300 GB) and large (> 300 GB) tables?
  • What is going to be the maximum table size?
  • Have you considered HASH partitioning on your tenant id instead of LIST?
  • What is going to be your query SLA (average and P99)?
  • Are you using the community edition of PostgreSQL or Aurora?
  • So far my understanding is that there are no cross-tenant queries from end-user/OLTP traffic; feel free to correct me, as this is a key design consideration.
  • Do you have a separate data lake/data mart for cross-tenant internal/analytical needs?
  • What does the breakdown of reads vs writes look like?
    • Within writes, can you share the breakdown of inserts, updates and deletes?
    • How complex would your SELECT queries be, in terms of the number of tables being joined, the working data set (rows and volume), and data aggregation?
Once we have the above input, we can have a better, data-point-oriented discussion around (a) partition pruning for selects, updates and deletes, and plan-time overhead, (b) data clustering aspects, (c) vacuum aspects of medium and large tables, and (d) depending on the scale of your operation, leveraging multiple schemas and/or multiple databases to limit the number of partitions per table, provided your application can redirect a tenant to the appropriate schema/database.

Thanks,
Ketan




On Tue, Jan 18, 2022 at 5:49 PM Ron <ronljohnsonjr@gmail.com> wrote:
On 1/18/22 6:09 PM, Mladen Gogala wrote:
> On 1/18/22 09:47, Ron wrote:
>> EXPLAIN plans are going to be hilariously gigantic, which means that
>> query planning would take a loooong time,  And the query planner (in v12,
>> at least) can generate some pretty bad plans in partitioned tables; I bet
>> there are edge cases in the QP code that don't work well with 10000
>> partitions and 50000 indices.
>
> These things sometimes happen. On the oracle-l, there is a thread in
> progress about a generated SQL with 130,000 lines which fails parsing
> because parsing exhausts the available memory.
>

But there wasn't any programming.  The company saved so much money!!!

> In such cases, I am less inclined to look for a fault with the database
> software and more inclined to inflict an unusual and cruel punishment on
> whoever has designed the data model.
>

OOP was definitely involved, in some way or form...

--
Angular momentum makes the world go 'round.


Re: Large number of partitions of a table

From
Victor Sudakov
Date:
Ron wrote:

[dd]

> >
> > What about the system catalogs however? Will the extra 10000
> > tables and 500000 indexes negatively impact the performance of the
> > system catalogs? Are there any caveats you could think of?
> 
> EXPLAIN plans are going to be hilariously gigantic, which means that query 
> planning would take a loooong time,  

No, I'm not observing this for my test queries. With
enable_partition_pruning=on (which is the default) all the queries
I've tested look neat and short.

> And the query planner (in v12, at 
> least) can generate some pretty bad plans in partitioned tables; I bet there 
> are edge cases in the QP code that don't work well with 10000 partitions and 
> 50000 indices.

Actually "5.11.6. Best Practices for Declarative Partitioning" says
"The query planner is generally able to handle partition hierarchies
with up to a few thousand partitions fairly well, provided that
typical queries allow the query planner to prune all but a small
number of partitions."

I admit 10000 partitions is slightly more than "a few thousand". "A
few" sounds to me like 3-5.

-- 
Victor Sudakov VAS4-RIPE
http://vas.tomsk.ru/
2:5005/49@fidonet



Re: Large number of partitions of a table

From
Victor Sudakov
Date:
Ketan Popat wrote:
> Hi Victor,
> 
> Thanks for starting a thread on a large number of partitions with
> PostgreSQL.
> 
> If you are okay, I would like to start with understanding the problem
> rather than the solution.
> What I have understood so far is, you have a multi-tenant application and
> you are using PostgreSQL 13.5 as a data store for the same.
> 
> Few questions there:

Hello Ketan,

I'm sorry this information is too private to share.

-- 
Victor Sudakov VAS4-RIPE
http://vas.tomsk.ru/
2:5005/49@fidonet