Re: migration of 100+ tables - Mailing list pgsql-general

From Adrian Klaver
Subject Re: migration of 100+ tables
Date
Msg-id 06e27bdf-e046-32f8-af1c-55a60c21a88b@aklaver.com
In response to migration of 100+ tables  (Julie Nishimura <juliezain@hotmail.com>)
Responses Re: migration of 100+ tables  (Julie Nishimura <juliezain@hotmail.com>)
List pgsql-general
On 3/10/19 5:53 PM, Julie Nishimura wrote:
> Hello friends, I will need to migrate 500+ tables  from one server (8.3) 
> to another (9.3). I cannot dump and load the entire database due to 
> storage limitations (because the source is > 20 TB, and the target is 
> about 1.5 TB).
> 
> I was thinking about using pg_dump with customized -t flag, then use 
> restore. The table names will be in the list, or I could dump their 
> names in a table.  What would be your suggestions on how to do it more 
> efficiently?

The sizes you mention above, are they for the uncompressed raw data?

Are the tables all in one schema or multiple?

Where I am going with this is pg_dump -Fc --schema=<schema_name>.

See:
https://www.postgresql.org/docs/10/app-pgrestore.html

Then pg_restore -l to get a TOC (Table of Contents).

Comment out the items you do not want in the TOC.

Then pg_restore --use-list with the edited TOC.

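The steps above can be sketched roughly as follows; this is only an illustration, and the schema name, database names, and the table name "big_table" are placeholders, not anything from the thread:

```shell
# Sketch of the dump/edit-TOC/restore workflow, assuming a schema
# named "public" and placeholder source/target database names.

# 1. Custom-format dump of just the wanted schema
#    (add -t flags to narrow it to specific tables)
pg_dump -Fc --schema=public -f mydb.dump sourcedb

# 2. Extract the table of contents from the archive
pg_restore -l mydb.dump > toc.list

# 3. Comment out unwanted entries by prefixing them with ';'
#    (here: a hypothetical table "big_table")
sed -i 's/^.*TABLE DATA public big_table.*/;&/' toc.list

# 4. Restore only the entries still active in the edited TOC
pg_restore --use-list=toc.list -d targetdb mydb.dump
```

Lines in the TOC that begin with ';' are skipped by pg_restore, so commenting out the TABLE DATA entries you do not want keeps the restore within the target's storage limits.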

> 
> Thank you for your ideas, this is great to have you around, guys!
> 
> 


-- 
Adrian Klaver
adrian.klaver@aklaver.com

