Re: pg_dump with 1100 schemas being a bit slow - Mailing list pgsql-general

From Joshua D. Drake
Subject Re: pg_dump with 1100 schemas being a bit slow
Date
Msg-id 1254932961.11374.24.camel@jd-desktop.iso-8859-1.charter.com
In response to pg_dump with 1100 schemas being a bit slow  ("Loic d'Anterroches" <diaeresis@gmail.com>)
List pgsql-general
On Wed, 2009-10-07 at 12:51 +0200, Loic d'Anterroches wrote:
> Hello,

> My problem is that the dump increased steadily with the number of
> schemas (now about 20s from about 12s with 850 schemas) and pg_dump is
> now ballooning at 120MB of memory usage when running the dump.
>

And it will continue to. The number of locks that need to be acquired
will steadily increase the time it takes to back up the database as you
add schemas and objects. This applies whether you run a single dump per
schema or a global dump with -Fc.

I agree with the other participants in this thread that it makes more
sense for you to use -Fc, but your overall speed isn't going to change
much.
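For illustration, a minimal sketch of the two approaches being compared
(the database name "mydb" and the schema names are assumptions, and the
commands are built as strings rather than executed, since no live server
is assumed):

```shell
#!/bin/sh
# Build the pg_dump invocation for a single-schema, custom-format dump.
per_schema_cmd() {
    printf 'pg_dump -n %s -Fc -f backup_%s.dump mydb\n' "$1" "$1"
}

# Approach 1: one pg_dump run per schema. Every run must re-acquire
# its table locks, so total lock overhead grows with the schema count.
for schema in app_0001 app_0002 app_0003; do
    per_schema_cmd "$schema"
done

# Approach 2: one global custom-format (-Fc) dump. Locks are taken in
# a single pass, and individual schemas can still be restored later
# with: pg_restore --schema=<name> backup_all.dump
printf 'pg_dump -Fc -f backup_all.dump mydb\n'
```

Either way, every table in every schema gets locked during the dump,
which is why the runtime keeps growing as schemas are added.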

Joshua D. Drake

--
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
If the world pushes look it in the eye and GRR. Then push back harder. - Salamander

