Re: Re: Need help in reclaiming disk space by deleting the selected records - Mailing list pgsql-general

From Yelai, Ramkumar IN BLR STS
Subject Re: Re: Need help in reclaiming disk space by deleting the selected records
Msg-id 13D0F6C9B3073A4999E61CAAD61AE7ECC45635C125@INBLRK77M2MSX.in002.siemens.net
In response to Re: Re: Need help in reclaiming disk space by deleting the selected records  ("Albe Laurenz" <laurenz.albe@wien.gv.at>)
List pgsql-general

-----Original Message-----
From: Albe Laurenz [mailto:laurenz.albe@wien.gv.at]
Sent: Friday, September 28, 2012 1:07 PM
To: Yelai, Ramkumar IN BLR STS; pgsql-general@postgresql.org
Cc: scrawford@pinpointresearch.com; andres@2ndquadrant.com
Subject: RE: [GENERAL] Re: Need help in reclaiming disk space by deleting the selected records

Yelai, Ramkumar wrote:
>> Anything exceeding a few hundred partitions is not considered a good idea.
>> The system needs to keep track of all the tables, and query planning for
>> such a partitioned table might be expensive.
>>
>> 1440 is probably pushing the limits, but maybe somebody with more
>> experience can say more.

> By mistake I wrote 1440 tables, but that is incorrect; the total number
> of tables is:
>
> 7 base tables X 120 months = 840 child tables.
>
> As per your statement, creating this many tables will affect performance.
> But as per the document
> (http://www.postgresql.org/docs/9.1/static/ddl-partitioning.html),
> constraint_exclusion will improve query performance. Please clarify how
> query planning will be expensive.

The planner will have to decide which of the 840 tables to access.
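
For illustration, here is a minimal 9.1-style inheritance setup (table and column names are hypothetical) showing both sides of this: constraint_exclusion lets the planner skip children whose CHECK constraint contradicts the WHERE clause, but it must still examine every child's constraint to decide.

```sql
-- Parent plus two monthly children (inheritance partitioning, as in the
-- 9.1 documentation; names here are made up for the example).
CREATE TABLE archive_log (
    id        bigint      NOT NULL,
    logged_at timestamptz NOT NULL
);

CREATE TABLE archive_log_2012_09 (
    CHECK (logged_at >= '2012-09-01' AND logged_at < '2012-10-01')
) INHERITS (archive_log);

CREATE TABLE archive_log_2012_10 (
    CHECK (logged_at >= '2012-10-01' AND logged_at < '2012-11-01')
) INHERITS (archive_log);

-- With constraint_exclusion = partition (the default), a query like
--   SELECT * FROM archive_log
--   WHERE logged_at >= '2012-10-01' AND logged_at < '2012-10-08';
-- scans only archive_log_2012_10 at execution time, but the planner
-- still has to check each child's CHECK constraint first -- and that
-- planning work grows with the number of children (840 in your case).
```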

> I have one more strategy: instead of creating 10 years of tables up
> front, I'd like to use batch processing, i.e. create 2 years of tables
> (240 tables), and when we go beyond 2 years, create the next 2 years of
> tables and update the trigger -- or use 5 years (480 tables) instead of
> 2 years.
>
> This approach will not create as many partitions up front, and if the
> user wants space back, they can truncate the old tables. Please let me
> know whether this is a good approach.

I don't understand that in detail.

I would recommend that you prototype some of these variants and run some performance tests.  That's the only good way
to know what will perform well in your environment.

Yours,
Laurenz Albe

Thanks, Laurenz Albe.

After going through the articles below, I now understand how query planning works for partitioned tables and where the practical limits lie.

http://stackoverflow.com/questions/6104774/how-many-table-partitions-is-too-many-in-postgres
http://postgresql.1045698.n5.nabble.com/Table-partitioning-td3410542.html


As per our design (the 7 base tables represent 7 unique archiving logs), we will never query all 840 tables, or even all 7
base tables, at the same time. Each archiving log has only 120 child tables, so the planner will only have
to decide which of 120 tables to access.

In addition, at any given time we will read only one partition out of those 120, as determined by our computation.
This computation does not use joins or combine partition tables.
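
To confirm that only one child is touched, EXPLAIN on the parent with a WHERE clause matching a single partition's CHECK constraint should list just that child in the plan. A sketch, assuming a month-ranged CHECK on a hypothetical logged_at column:

```sql
-- 'partition' is the default; shown explicitly for clarity.
SET constraint_exclusion = partition;

EXPLAIN
SELECT *
FROM   archive_log
WHERE  logged_at >= '2012-10-01'
AND    logged_at <  '2012-11-01';
-- Expected: the plan shows the (empty) parent plus only the child whose
-- CHECK constraint covers October 2012; the other 119 children are
-- excluded at plan time.
```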

As Laurenz suggested, I will build a prototype and check the query plans for our queries.
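
A minimal sketch of the rolling-window idea from my earlier mail (function and table names hypothetical): when a new batch of children is created, re-create the routing trigger function to cover them, and TRUNCATE expired children to reclaim space immediately, with no VACUUM needed.

```sql
-- Routing trigger on the parent; the function body is regenerated
-- whenever a new batch of monthly children is added.
CREATE OR REPLACE FUNCTION archive_log_insert() RETURNS trigger AS $$
BEGIN
    IF NEW.logged_at >= '2012-10-01' AND NEW.logged_at < '2012-11-01' THEN
        INSERT INTO archive_log_2012_10 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for logged_at = %', NEW.logged_at;
    END IF;
    RETURN NULL;  -- the row was routed to a child; don't insert into parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER archive_log_route
    BEFORE INSERT ON archive_log
    FOR EACH ROW EXECUTE PROCEDURE archive_log_insert();

-- Reclaiming the space of an expired month is immediate:
TRUNCATE archive_log_2012_09;
```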

Please let me know if you have any points or suggestions.

Thanks & regards,
Ramkumar





