Postgresql partitioning limit - possible bottlenecks - Mailing list pgsql-admin

From: David Susa
Subject: Postgresql partitioning limit - possible bottlenecks
Date:
Msg-id: CAOoE0Pm8MLshfuT048+Ec9ZgGLe+p9-XC3368cxRET8THZP01A@mail.gmail.com
List: pgsql-admin
Hi,

We have a new customer whose application is written in JEE and runs on PostgreSQL 8.4. The database designer originally decided that the best way to handle multitenancy was to create a partition for each new "client" in two critical tables (table_a and table_b).

So, let's say 500 clients have been created: that means 1000 partitions (table_a_1 ... table_a_500, table_b_1 ... table_b_500).
Each of those tables holds around 8 million records.
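
For context, each partition is created with the inheritance approach from the 8.4 docs, roughly like this (the column names below are just placeholders, not our real schema):

  -- parent table, never populated directly
  CREATE TABLE table_a (
      client_id  integer NOT NULL,
      payload    text
  );

  -- one child per client, constrained to that client's id
  CREATE TABLE table_a_1 (
      CHECK (client_id = 1)
  ) INHERITS (table_a);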

Reading the Postgres documentation, I found that the suggested limit for this kind of partitioning is around 100 partitions, so I guess this must be affecting performance somehow. (here)

The good thing is that every query that uses these two tables can be directed to the exact partition by client ID, so I guess that helps with query planning time?
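
For example (simplified, and assuming the client id arrives as a literal in the SQL), this is the kind of plan comparison I had in mind:

  -- through the parent, relying on constraint exclusion:
  EXPLAIN SELECT count(*) FROM table_a WHERE client_id = 42;

  -- or hit the child table directly and skip the parent entirely:
  EXPLAIN SELECT count(*) FROM table_a_42;

I am not sure how well the first form works with prepared statements from the JEE side, since I believe constraint exclusion in 8.4 only happens at plan time with constant values.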

The question, then, is: how can we mitigate performance issues in the meantime? Are there configuration parameters we can tune? Would more computational power help?
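
For instance, is constraint_exclusion the main knob here? This is roughly what I planned to check first (just a sketch):

  -- 'partition' should be the default in 8.4, but confirm it:
  SHOW constraint_exclusion;

  -- it can also be set per session to compare plans:
  SET constraint_exclusion = partition;
  EXPLAIN SELECT count(*) FROM table_a WHERE client_id = 42;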


Thank you.
