Thread: Partition number limitation?
Hi!

To improve our software's data insert performance, we have decided to use a partitioned architecture in our database. It works well for the following volume: 45 main tables with 288 inherited child tables each, that is a total of 12,960 partitions.

I now have some trouble with another customer's database with the following volume: 87 main tables with 288 child tables each, that is 25,056 partitions.

Is there some kind of limit in PostgreSQL on the number of partitions? Do you know of any tuning in the config files to improve how PostgreSQL manages so many tables? I have already used different tablespaces, one for each main table and its 288 partitions.

I have a lot of data to insert into these tables (tens or hundreds of thousands of rows every five minutes for each main group). Do you think this can be done with a hardware RAID 0 SATA array (that's what we have today...)?

Thank you all for your help.

Best regards,
Sylvain Caillet
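For reference, each main group is laid out roughly like this (a simplified sketch, with illustrative table and column names rather than our real schema): a parent table plus 288 inherited child tables, each carrying a CHECK constraint on its slot.

    -- parent ("main") table; rows are never stored here directly
    CREATE TABLE measurements (
        sensor_id   integer     NOT NULL,
        recorded_at timestamptz NOT NULL,
        slot        smallint    NOT NULL,   -- illustrative partition key, 1..288
        value       double precision
    );

    -- one of the 288 inherited child tables
    CREATE TABLE measurements_slot_001 (
        CHECK (slot = 1)
    ) INHERITS (measurements);

    -- repeated 288 times, one child per slot, each in its own tablespace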
On Wed, Sep 3, 2008 at 4:00 AM, <s.caillet@free.fr> wrote:
> Hi!
>
> To improve our software's data insert performance, we have decided to use a
> partitioned architecture in our database. It works well for the following
> volume: 45 main tables with 288 inherited child tables each, that is a total
> of 12,960 partitions.
>
> I now have some trouble with another customer's database with the following
> volume: 87 main tables with 288 child tables each, that is 25,056 partitions.
>
> Is there some kind of limit in PostgreSQL on the number of partitions? Do
> you know of any tuning in the config files to improve how PostgreSQL manages
> so many tables? I have already used different tablespaces, one for each main
> table and its 288 partitions.

What do you mean by PostgreSQL management of the partitions: triggers, rules, or application-based partitioning? Rules aren't fast enough with this many partitions; if you can easily add the partitioning logic to your application and write directly to the right child table, it may be much faster.

How big a disk array you need depends very much on how quickly you saturate your CPUs maintaining the partitioning, whether in the application layer or in the database. If an app needs to insert 100,000 rows into ONE partition, and it knows which one, it's likely to be way faster to have the app do it instead of pgsql. The app has to think once, the database 100,000 times.
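To make that concrete, here's a rough sketch of the two approaches (table and column names are just illustrative, following the layout described above):

    -- trigger-based routing: this function runs once for EVERY inserted row
    CREATE OR REPLACE FUNCTION measurements_insert_trigger() RETURNS trigger AS $$
    BEGIN
        IF NEW.slot = 1 THEN
            INSERT INTO measurements_slot_001 VALUES (NEW.*);
        ELSIF NEW.slot = 2 THEN
            INSERT INTO measurements_slot_002 VALUES (NEW.*);
        -- ... 286 more branches ...
        END IF;
        RETURN NULL;   -- skip the insert into the parent table
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER route_measurements
        BEFORE INSERT ON measurements
        FOR EACH ROW EXECUTE PROCEDURE measurements_insert_trigger();

    -- application-based routing: the app already knows the slot, so it
    -- targets the child table directly and skips all of the above
    INSERT INTO measurements_slot_001 (sensor_id, recorded_at, slot, value)
        VALUES (42, now(), 1, 3.14);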
s.caillet@free.fr wrote:
> Is there some kind of limit in PostgreSQL on the number of partitions? Do
> you know of any tuning in the config files to improve how PostgreSQL manages
> so many tables? I have already used different tablespaces, one for each main
> table and its 288 partitions.

Postgres is not really designed for performance with this many partitions, so you have to manage that yourself. I am working on a project with a similar design and found that the super table has its limitations: at some point the db just aborts a query if there are too many partitions. I seem to remember having worked with up to 100K partitions, but I managed them individually instead of through the super table.

Just a tip: if a table gets its data inserted once and is mainly read after that, it's faster to create the indexes on the partition after the insert.

Another tip: use COPY to insert data instead of INSERT; it's about 3-5 times faster. It is supported by the C driver and a patched JDBC driver.

regards
tom
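A minimal sketch of what I mean, with illustrative table names and a placeholder file path (adapt both to your own setup; \copy in psql or the driver's COPY support works for client-side files):

    -- bulk-load a freshly created partition first ...
    COPY measurements_slot_001 (sensor_id, recorded_at, slot, value)
        FROM '/path/to/slot_001.csv' WITH CSV;

    -- ... and only then build the index, which is much cheaper than
    -- maintaining it row by row during the load
    CREATE INDEX measurements_slot_001_recorded_at_idx
        ON measurements_slot_001 (recorded_at);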