Denis <socsam@gmail.com> writes:
> Here is the output of EXPLAIN ANALYZE. It took 5 seconds but usually it
> takes from 10 to 15 seconds when I am doing backup.
> Sort (cost=853562.04..854020.73 rows=183478 width=219) (actual
> time=5340.477..5405.604 rows=183924 loops=1)

Hmmm ... so the problem here isn't that you've got 2600 schemas, it's
that you've got 183924 tables. That's going to take some time no matter
what.

It does seem like we could make some small changes to optimize that
query a little bit, but they're not going to result in any amazing
improvement overall, because pg_dump still has to deal with all the
tables it's getting back. Fundamentally, I would ask whether you really
need so many tables. It seems pretty likely that you have lots and lots
of basically-identical tables. Usually it would be better to redesign
such a structure into fewer tables with more index columns.

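As a sketch of that kind of redesign (the table and column names here are hypothetical, not taken from your schema): instead of one identical table per customer schema, fold them into a single table keyed by an extra column.

```sql
-- Hypothetical before: one "orders" table per customer schema, e.g.
--   CREATE TABLE customer_1.orders (order_id integer, total numeric);
--   CREATE TABLE customer_2.orders (order_id integer, total numeric);
--   ... repeated thousands of times ...

-- After: one shared table with the customer as a leading key column.
CREATE TABLE orders (
    customer_id integer NOT NULL,
    order_id    integer NOT NULL,
    total       numeric,
    PRIMARY KEY (customer_id, order_id)
);

-- Per-customer queries then filter on the key column instead of
-- selecting a different table:
--   SELECT * FROM orders WHERE customer_id = 42;
```

With the customer id leading the primary key, per-customer scans stay index-driven, and pg_dump only has to track one table instead of thousands.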
> Here is the output of "pg_dump -s" test.dump
> <http://postgresql.1045698.n5.nabble.com/file/n5730877/test.dump>

This dump contains only 1 schema and 43 tables, so I don't think it's
for the database you're having trouble with ...

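For what it's worth, a quick catalog query will tell you how many ordinary tables a given database actually contains, so you can confirm you're dumping the right one:

```sql
-- Count ordinary tables ('r' = regular table in pg_class.relkind)
-- in the database you're currently connected to:
SELECT count(*) FROM pg_class WHERE relkind = 'r';
```

Run that in the database you pass to pg_dump; for the problem database it should come back in the neighborhood of 183924.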
regards, tom lane