So now when your running application goes to query the table, it gets duplicates? And if you do it in transactions, how long will the master table stay locked by such a bulk delete?
My point is to minimize service interruption, and that means moving small chunks at a time so that locks are held as briefly as possible.
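To make that concrete, here is a minimal sketch of what I mean, assuming a hypothetical old table "measurements_old" with a primary key "id" and a date column "logdate", and a target partition "measurements_2012" for that year's rows (the writable CTE needs PostgreSQL 9.1 or later). Each batch is its own short transaction, so locks are only held for the few thousand rows currently being moved:

BEGIN;
WITH batch AS (
    -- pick a small slice of the rows that belong in the 2012 partition
    SELECT id
    FROM measurements_old
    WHERE logdate >= DATE '2012-01-01'
      AND logdate <  DATE '2013-01-01'
    LIMIT 10000
),
moved AS (
    -- delete exactly that slice and hand the rows on
    DELETE FROM measurements_old m
    USING batch b
    WHERE m.id = b.id
    RETURNING m.*
)
-- insert the deleted rows into the partition within the same transaction
INSERT INTO measurements_2012 SELECT * FROM moved;
COMMIT;
-- repeat until the INSERT reports 0 rows

Because each batch commits on its own, cancelling in the middle just leaves you with fewer rows still to move, not a half-rolled-back bulk delete.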
Agreed, if you are referring to the application side.
The partitioning documentation in PG is very clear on how to partition a new table: create child tables, and have triggers that manage the INSERT, UPDATE and DELETE commands. But how about doing this with an existing massive table (over 120 million rows)? I could create a new parent table with child tables, and then INSERT all these millions of rows to move them into the right partitions. But is that recommended?
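To be concrete, the documented setup I am talking about looks roughly like this (table and column names are made up for illustration), followed by the big migration INSERT I am unsure about:

CREATE TABLE measurements (
    id       bigserial,
    logdate  date NOT NULL,
    payload  text
);

CREATE TABLE measurements_2012 (
    CHECK (logdate >= DATE '2012-01-01' AND logdate < DATE '2013-01-01')
) INHERITS (measurements);

CREATE TABLE measurements_2013 (
    CHECK (logdate >= DATE '2013-01-01' AND logdate < DATE '2014-01-01')
) INHERITS (measurements);

CREATE OR REPLACE FUNCTION measurements_insert_trigger()
RETURNS trigger AS $$
BEGIN
    IF NEW.logdate >= DATE '2012-01-01' AND NEW.logdate < DATE '2013-01-01' THEN
        INSERT INTO measurements_2012 VALUES (NEW.*);
    ELSIF NEW.logdate >= DATE '2013-01-01' AND NEW.logdate < DATE '2014-01-01' THEN
        INSERT INTO measurements_2013 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'logdate out of range in measurements_insert_trigger()';
    END IF;
    RETURN NULL;   -- the row has gone to a child, keep it out of the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_measurement_trigger
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert_trigger();

-- and the migration I am asking about: one big INSERT through the trigger
INSERT INTO measurements SELECT * FROM measurements_old;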
Here, I would go with the COPY command rather than INSERT. First, set up the partition/child tables with the relevant triggers and the trigger function they call. Then use COPY FROM against the parent table, reading a .csv file dumped from the massive table; the triggers will push the data into the respective child tables. It is a faster and more efficient way.
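A quick sketch of what I mean, reusing the made-up table names from the earlier example and a file path the server can read (adjust to your environment):

-- 1) dump the existing massive table to a CSV file
COPY measurements_old TO '/tmp/measurements.csv' WITH (FORMAT csv);

-- 2) load it back through the new parent table; the BEFORE INSERT
--    row trigger fires for each row and routes it to the right child
COPY measurements FROM '/tmp/measurements.csv' WITH (FORMAT csv);

Note that server-side COPY reads and writes files as the database server process (and normally requires superuser rights); psql's \copy is the client-side alternative. The COPY FROM still runs as a single transaction, so it is one long load, just with less per-row overhead than individual INSERTs.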