Re: Partitioning an existing table - Mailing list pgsql-general

From Raghavendra
Subject Re: Partitioning an existing table
Date
Msg-id BANLkTikxo-S3o06hx_mMjcC40CVOHE-45A@mail.gmail.com
Whole thread Raw
In response to Re: Partitioning an existing table  (Vick Khera <vivek@khera.org>)
List pgsql-general
So now, when your running application goes to query the table, it gets duplicates? And if you do it in transactions, how long are you going to keep the master table locked while doing such a bulk delete?

My point is to minimize service interruption, and that means moving small chunks at a time to minimize the locks needed.


Agreed, if you are referring to the application.

The partitioning documentation in PG is very clear on how to partition
a new table: create child tables, and have triggers that manage
INSERT, UPDATE and DELETE commands.
How about doing this with an existing massive table (over 120 million rows)?
I could create a new parent table with child tables, and then INSERT
all these millions of rows to put them into the right partitions. But
is that recommended?
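For reference, the setup the documentation describes can be sketched roughly like this. The table and column names (measurements, logged_at) and the quarterly date ranges are hypothetical, just to illustrate the inheritance-plus-trigger scheme:

```sql
-- Hypothetical parent table for the inheritance-based scheme.
CREATE TABLE measurements (
    id         bigint      NOT NULL,
    logged_at  timestamptz NOT NULL,
    payload    text
);

-- Child tables with non-overlapping CHECK constraints.
CREATE TABLE measurements_2011_q1 (
    CHECK (logged_at >= '2011-01-01' AND logged_at < '2011-04-01')
) INHERITS (measurements);

CREATE TABLE measurements_2011_q2 (
    CHECK (logged_at >= '2011-04-01' AND logged_at < '2011-07-01')
) INHERITS (measurements);

-- Trigger function that routes each INSERT to the matching child.
CREATE OR REPLACE FUNCTION measurements_insert_trigger()
RETURNS trigger AS $$
BEGIN
    IF NEW.logged_at >= '2011-01-01' AND NEW.logged_at < '2011-04-01' THEN
        INSERT INTO measurements_2011_q1 VALUES (NEW.*);
    ELSIF NEW.logged_at >= '2011-04-01' AND NEW.logged_at < '2011-07-01' THEN
        INSERT INTO measurements_2011_q2 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'logged_at out of range for partitioning';
    END IF;
    RETURN NULL;  -- the row is stored in a child, not in the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_measurements
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert_trigger();
```

With constraint_exclusion enabled, queries that filter on logged_at will then skip the child tables whose CHECK constraints cannot match.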

Here I would go with the COPY command rather than INSERT. First, set up the partition (child) tables with the relevant triggers and the function they call. Then use COPY ... FROM against the parent table, reading the .csv file created from the MASSIVE table. The triggers will push the data into the respective child tables. It is a faster and more efficient way.
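That load could look roughly like this. The file path and table names are hypothetical, and it assumes the parent table already has a row-level INSERT trigger installed (COPY FROM does fire row-level triggers):

```sql
-- Dump the existing massive table to CSV (path is hypothetical).
COPY old_measurements TO '/tmp/measurements.csv' WITH CSV;

-- Load through the parent table; the INSERT trigger routes each
-- row into the matching child partition.
COPY measurements FROM '/tmp/measurements.csv' WITH CSV;

-- Sanity check: with the trigger returning NULL, no rows should
-- remain in the parent itself.
SELECT count(*) FROM ONLY measurements;
```

Once the row counts in the children add up to the original table's count, the old table can be renamed or dropped.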

Best Regards,
Raghavendra
EnterpriseDB Corporation
The Enterprise Postgres Company

pgsql-general by date:

Previous
From: Greg Smith
Date:
Subject: Re: 10 missing features
Next
From: Andrew Sullivan
Date:
Subject: Re: 10 missing features