Thread: Postgresql partitioning limit - possible bottlenecks


From: David Susa
Date:
Hi,

We have a new customer with an application written in JEE on PostgreSQL 8.4. The original database designer decided that the best way to handle multitenancy was to create a partition for each new "client" in two critical tables (table_a and table_b).

So, say 500 clients have been created: that means 1000 partitions (table_a_1 ... table_a_500, table_b_1 ... table_b_500).
Each of those partitions holds around 8 million records.
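For concreteness, the layout looks roughly like this. This is a minimal sketch of 8.4-style inheritance partitioning; the column names and CHECK constraints are my assumption about the design, not the actual schema:

```sql
-- Parent table; holds no rows itself
CREATE TABLE table_a (
    client_id integer NOT NULL,
    payload   text
);

-- One child table per client, each with a CHECK constraint
-- that the planner can use to exclude it from scans
CREATE TABLE table_a_1 (
    CHECK (client_id = 1)
) INHERITS (table_a);

CREATE TABLE table_a_2 (
    CHECK (client_id = 2)
) INHERITS (table_a);

-- ... and so on up to table_a_500, and likewise for table_b
```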

Reading the Postgres documentation, I found that the suggested limit for partitioning is around 100 partitions per table, so I guess this must be affecting performance somehow. (here)

The good thing is that every query against these two tables can actually be directed to the exact partition by client ID. So I guess that helps with query planning time?
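Since the application always knows the client ID, there are two ways a query can reach the right partition. A sketch, assuming the CHECK-constraint scheme above and 8.4's default constraint_exclusion = partition setting:

```sql
-- Query the parent: with constraint_exclusion enabled, the planner
-- compares the literal against each child's CHECK constraint and
-- prunes the non-matching children (but it still has to examine
-- all 500 constraints at plan time)
SELECT * FROM table_a WHERE client_id = 42;

-- Or bypass the parent and hit the child table directly, which
-- avoids the per-child constraint checks during planning entirely
SELECT * FROM table_a_42;
```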

The question, then, is: how can we mitigate performance issues in the meantime? Are there configuration parameters we can tune? Would more computational power help?


Thank you.