> I would first recommend looking into partitioning for managing data retention like this.
I don’t think you need partitioning. There are many ways to slice this problem.

What you need is some form of logical replication. I’ve used both of the following methods in the past to solve exactly this problem; they are not the only solutions.
1. Queue tables: capture changes with triggers into a queue table, then asynchronously replicate those transactions to the archive database in near real time.
2. Message queues: modern applications make heavy use of message queues; just attach the archive database as another consumer on the same queue.
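A minimal sketch of option 1, using SQLite in Python so it is self-contained. All names here (`events`, `events_queue`, the trigger name) are illustrative, not from any particular system: a trigger copies every new row into a queue table, and a separate job later drains that queue into the archive database.

```python
import sqlite3

# Live database: the trigger copies every new row into a queue table
# so a background job can ship it to the archive asynchronously.
main = sqlite3.connect(":memory:")
main.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
main.execute("CREATE TABLE events_queue (id INTEGER, payload TEXT)")
main.execute("""
    CREATE TRIGGER queue_new_events AFTER INSERT ON events
    BEGIN
        INSERT INTO events_queue (id, payload) VALUES (NEW.id, NEW.payload);
    END
""")

# Normal application write; the trigger fires transparently.
main.execute("INSERT INTO events (payload) VALUES ('order placed')")
main.commit()

# Asynchronous replication step: drain the queue into the archive
# database, then clear the queued rows.
archive = sqlite3.connect(":memory:")
archive.execute("CREATE TABLE events (id INTEGER, payload TEXT)")
rows = main.execute("SELECT id, payload FROM events_queue").fetchall()
archive.executemany("INSERT INTO events (id, payload) VALUES (?, ?)", rows)
main.execute("DELETE FROM events_queue")
main.commit()
archive.commit()
```

Because the queue table is written in the same transaction as the original insert, nothing is lost if the replication job lags or restarts; it simply picks up whatever is still queued.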
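And a sketch of option 2, again as an assumption-laden toy rather than any real broker: a fan-out publisher where each subscriber gets its own copy of every message, so the archive database is just one more consumer alongside the live one.

```python
import queue

class Broker:
    """Toy fan-out broker: every subscriber receives a copy of each message."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self):
        q = queue.Queue()
        self.subscribers.append(q)
        return q

    def publish(self, message):
        for q in self.subscribers:
            q.put(message)

broker = Broker()
primary_db = broker.subscribe()   # existing consumer feeding the live database
archive_db = broker.subscribe()   # new consumer feeding the archive database

broker.publish({"event": "order_created", "id": 42})
```

With a real broker (RabbitMQ, Kafka, etc.) the archive consumer can fall behind or be taken offline for maintenance without affecting the live write path, which is the main operational appeal of this approach.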
I don’t see a need for partitioning. I managed a multi-terabyte content delivery platform where the data was only licensed for 30 to 90 days. Records were being purged more or less as fast as new records were being created; in essence, the database was completely reloaded with new data every 30-90 days, depending on the data stream. We did not use any form of partitioned tables and had no performance issues.