Jeff Janes wrote
> On Thu, Nov 8, 2012 at 1:04 AM, Denis <
> socsam@
> > wrote:
>>
>> Still I can't understand why pg_dump has to know about all the tables?
>
> Strictly speaking it probably doesn't need to. But it is primarily
> designed for dumping entire databases, and the efficient way to do
> that is to read it all into memory in a few queries and then sort out
> the dependencies, rather than tracking down every dependency
> individually with one or more trips back to the database. (Although
> it still does make plenty of trips back to the database per
> table/sequence, for acls, defaults, attributes.)
>
> If you were to rewrite pg_dump from the ground up to achieve your
> specific needs (dumping one schema, with no dependencies to other
> schemata) you could probably make it much more efficient. But then it
> wouldn't be pg_dump, it would be something else.
>
> Cheers,
>
> Jeff
>
>
Please don't think I'm nitpicking here, but pg_dump has options for dumping
individual schemas and even individual tables, and that isn't really
consistent with the idea that "pg_dump is primarily designed for dumping
entire databases".
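For reference, these are the selective-dump options I mean (the database and
schema names below are placeholders, not anything from this thread):

```shell
# Dump only one schema from database "mydb";
# "myschema" and "mydb" are hypothetical names.
pg_dump --schema=myschema mydb > myschema.sql

# Dump only one table; -t/--table accepts a pattern,
# so quote it if the shell might expand wildcards.
pg_dump --table='myschema.mytable' mydb > mytable.sql
```

Even with -n/-t, as Jeff describes, pg_dump still reads catalog metadata for
the whole database up front to sort out dependencies, which is presumably why
it slows down with thousands of schemas.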
--
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.