Re: pg_dump with 1100 schemas being a bit slow - Mailing list pgsql-general

From Bill Moran
Subject Re: pg_dump with 1100 schemas being a bit slow
Date
Msg-id 20091007115454.5b5e369a.wmoran@potentialtech.com
In response to Re: pg_dump with 1100 schemas being a bit slow  ("Loic d'Anterroches" <diaeresis@gmail.com>)
Responses Re: pg_dump with 1100 schemas being a bit slow
List pgsql-general
In response to "Loic d'Anterroches" <diaeresis@gmail.com>:

> On Wed, Oct 7, 2009 at 4:23 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > "Loic d'Anterroches" <diaeresis@gmail.com> writes:
> >> Each night I am running:
> >> pg_dump --blobs --schema=%s --no-acl -U postgres indefero | gzip > /path/to/backups/%s/%s-%s.sql.gz
> >> this runs for each installation, so 1100 times. The substitution
> >> strings insert the timestamp and select the right schema.

Have you tested the speed without the gzip?

We found that compressing the dump takes considerably longer than pg_dump
itself, and because of the pipe, pg_dump can't release its locks until
gzip has finished processing all of the data.

By running pg_dump in a separate step from the compression, we were
able to eliminate our table locking issues, i.e.:

pg_dump --blobs --schema=%s --no-acl -U postgres indefero > /path/to/backups/%s/%s-%s.sql && gzip /path/to/backups/%s/%s-%s.sql

Of course, you'll need enough disk space to store the uncompressed
dump while gzip works.
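
For the 1100-schema case, a minimal sketch of that dump-then-compress
pattern might look like the script below. The schema-list query and the
backup paths are assumptions for illustration; adjust both to match
your setup.

#!/bin/sh
# Dump each schema first, then compress in a separate step, so pg_dump
# releases its locks before gzip starts chewing on the data.
# Assumption: one directory per installation already exists under
# /path/to/backups, and non-system schemas map 1:1 to installations.

STAMP=$(date +%Y%m%d)
BACKUP_ROOT=/path/to/backups

for SCHEMA in $(psql -U postgres -At -d indefero \
    -c "SELECT nspname FROM pg_namespace
        WHERE nspname NOT LIKE 'pg_%'
          AND nspname <> 'information_schema'")
do
    OUT="$BACKUP_ROOT/$SCHEMA/$SCHEMA-$STAMP.sql"
    pg_dump --blobs --schema="$SCHEMA" --no-acl -U postgres indefero > "$OUT" \
        && gzip "$OUT"
done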

--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/
