Re: Exporting TBs of data in Oracle schema/database into CSV format to migrate into PostgreSQL - Mailing list pgsql-admin

From Marlene Villanueva
Subject Re: Exporting TBs of data in Oracle schema/database into CSV format to migrate into PostgreSQL
Date
Msg-id CAA0OsZJC+3Dwnmdsif4agqmxaVtwu0MRrkv-2cKw6FDEMenY4A@mail.gmail.com
In response to Re: Exporting TBs of data in Oracle schema/database into CSV format to migrate into PostgreSQL  (Avinash Kumar <avinash.vallarapu@gmail.com>)
List pgsql-admin

On Mon, Apr 20, 2020, 8:39 AM Avinash Kumar <avinash.vallarapu@gmail.com> wrote:
Hi,
On Mon, Apr 20, 2020 at 12:31 PM Scott Ribe <scott_ribe@elevated-dev.com> wrote:
So, the big question is: how much downtime can you afford on the current Oracle DB?

Yes, you should investigate the most efficient way to transfer data, but with TBs it will still take significant time. If you can afford the downtime, great--kill all other connections to Oracle and go for it. If not, it's more complicated, since you'll have to figure out a way to track and apply changes that happen during the copy...
If downtime is a concern, then start a transaction on PG using isolation level REPEATABLE READ, get the SCN number from Oracle, and copy the existing data from each table (table by table) using FDW for the initial data load. Then, if you have an Oracle GoldenGate (GG) license, start the Replicat from that SCN so that the data eventually gets in sync.
Then, when all the tables are in sync, go ahead and switch your apps. If you don't have a GG license, you need to look for some other extension, or Kafka, or something else that can do this job. Remember, migration is always challenging when lots of data is involved, and there is never a direct magic solution.
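For illustration only, here is a minimal sketch of that initial load using oracle_fdw. The server name, connection string, credentials, schema, and table names (oradb, ora_stage, APP_OWNER, app.customers, app.orders) are all made up and would differ in a real migration; also note that the REPEATABLE READ transaction only governs the PostgreSQL side, so the consistency of what Oracle returns still depends on how the FDW connection reads the data relative to the recorded SCN.

-- On Oracle (e.g. via SQL*Plus): record the SCN at which the initial
-- load starts; the GG Replicat would later be started from this SCN.
SELECT current_scn FROM v$database;

-- On PostgreSQL: expose the Oracle tables through oracle_fdw.
CREATE EXTENSION IF NOT EXISTS oracle_fdw;

CREATE SERVER oradb FOREIGN DATA WRAPPER oracle_fdw
    OPTIONS (dbserver '//oracle-host:1521/ORCLPDB1');   -- hypothetical host/service

CREATE USER MAPPING FOR CURRENT_USER SERVER oradb
    OPTIONS (user 'APP_OWNER', password 'secret');      -- hypothetical credentials

CREATE SCHEMA IF NOT EXISTS ora_stage;
IMPORT FOREIGN SCHEMA "APP_OWNER" FROM SERVER oradb INTO ora_stage;

-- Copy table by table inside one REPEATABLE READ transaction, as
-- described above, so the local load runs under a single snapshot.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
INSERT INTO app.customers SELECT * FROM ora_stage.customers;
INSERT INTO app.orders    SELECT * FROM ora_stage.orders;
-- ... one INSERT ... SELECT per table ...
COMMIT;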


--
Regards,
Avinash Vallarapu
+1-902-221-5976
