Re: pg_dump with 1100 schemas being a bit slow - Mailing list pgsql-general

From Massa, Harald Armin
Subject Re: pg_dump with 1100 schemas being a bit slow
Date
Msg-id e3e180dc0910070900w754c63d1wf80032f1cceb2700@mail.gmail.com
Whole thread Raw
In response to Re: pg_dump with 1100 schemas being a bit slow  ("Loic d'Anterroches" <diaeresis@gmail.com>)
Responses Re: pg_dump with 1100 schemas being a bit slow
List pgsql-general
Loic,

>settings up each time. The added benefit of doing a per schema dump is
>that I provide it to the users directly, that way they have a full
>export of their data.

you should try the timing with

pg_dump --format=c --file=completedatabase.dmp yourdatabase

and then generate the separate per-schema dumps in an extra step like

pg_restore --schema=%s --file=outputfilename.sql completedatabase.dmp
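As a runnable sketch of that two-step idea (the database name and the tenant_* schema names below are made up for illustration; the loop only echoes the pg_restore commands it would run, so you can inspect them before executing anything):

```shell
#!/bin/sh
# Sketch of the dump-once, extract-per-schema approach.
# completedatabase.dmp is assumed to come from something like:
#   pg_dump --format=c --compress=9 --file=completedatabase.dmp yourdatabase
# tenant_a/tenant_b/tenant_c stand in for the real schema list, which you
# would normally fetch from pg_namespace via psql.
DUMP=completedatabase.dmp

cmds=$(
    for schema in tenant_a tenant_b tenant_c; do
        # "echo" makes this a dry run: drop it to actually extract
        # the schema from the dump file as plain SQL.
        echo pg_restore --schema="$schema" --file="$schema.sql" "$DUMP"
    done
)
printf '%s\n' "$cmds"
```

Because pg_restore reads only the dump file, this extraction step never touches the live database.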

I found that even with maximum compression

pg_dump --format=c --compress=9

the built-in pg_dump compression was quicker than a plain dump followed by gzip/bzip2/7z compression.

And after the dumpfile is created, pg_restore will leave your database alone.
(make sure to put completedatabase.dmp on a separate filesystem). You can even try to run more than one pg_restore --file=... in parallel.
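A sketch of that parallel variant using xargs -P (again with placeholder schema names, and "echo" keeping it a dry run):

```shell
#!/bin/sh
# Run up to 3 pg_restore extractions at once with xargs -P.
# tenant_a..tenant_c are placeholder schema names; remove "echo" to
# really run the extractions against completedatabase.dmp.
parallel_cmds=$(
    printf '%s\n' tenant_a tenant_b tenant_c |
        xargs -I{} -P 3 echo pg_restore --schema={} --file={}.sql completedatabase.dmp
)
printf '%s\n' "$parallel_cmds"
```

This parallelizes safely because each pg_restore only reads the shared dump file and writes its own output file.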

Best wishes,

Harald

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
no fx, no carrier pigeon
-
%s is too gigantic of an industry to bend to the whims of reality
