Re: pg_dump with 1100 schemas being a bit slow - Mailing list pgsql-general

From Loic d'Anterroches
Subject Re: pg_dump with 1100 schemas being a bit slow
Date
Msg-id 8e2f2cb20910070925j55700594n2e37e818f5053348@mail.gmail.com
Whole thread Raw
In response to Re: pg_dump with 1100 schemas being a bit slow  ("Massa, Harald Armin" <chef@ghum.de>)
List pgsql-general
Harald,

>>setting up each time. The added benefit of doing a per-schema dump is
>>that I provide it to the users directly, that way they have a full
>>export of their data.
>
> you should try the timing with
>
> pg_dump --format=c --file=completedatabase.dmp
>
> and then generating the separate schemas in an extra step like
>
> pg_restore --schema=%s --file=outputfilename.sql completedatabase.dmp
>
> I found that even with maximum compression
>
> pg_dump --format=c --compress=9
>
> the pg_dump compression was quicker than a plain dump plus
> gzip/bzip2/7z compression afterwards.
>
> And after the dumpfile is created, pg_restore will leave your database
> alone.
> (make sure to put completedatabase.dmp on a separate filesystem). You can
> even try to run more than one pg_restore --file in parallel.

Yummy! The speed of a full dump and the benefits of the per schema
dump for the users. I will try this one tonight when the load is low.
I will keep you informed of the results.
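For the archives, the approach Harald describes could be sketched roughly as below. The database name (maindb) and the schema list are placeholders; in practice the schema names could come from a query against pg_namespace instead of a hand-maintained file.

```shell
#!/bin/sh
# Sketch only: "maindb" and schemas.txt are hypothetical stand-ins.

# One full custom-format dump; pg_dump applies its own compression,
# which Harald found faster than dump + external gzip/bzip2/7z.
pg_dump --format=c --compress=9 --file=completedatabase.dmp maindb

# Extract a plain-SQL script per schema from the archive. pg_restore
# reads only the dump file here, so the live database is left alone,
# and several of these can run in parallel.
while read -r schema; do
    pg_restore --schema="$schema" --file="${schema}.sql" completedatabase.dmp
done < schemas.txt
```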

Thanks a lot for all the good ideas and pointers!
loïc

--
Loïc d'Anterroches - Céondo Ltd - http://www.ceondo.com
