Hi everyone,
We have a production database (PostgreSQL 9.0) with more than 20,000 schemas and 40 GB of data. In the past we kept all of that information in a single schema, and pg_dump worked just fine (2-3 hours to dump everything). Then we decided to split the database into schemas, which makes a lot of sense for the kind of information we store and for our plans for the future.

The problem now is that pg_dump takes more than 24 hours to finish, so we can no longer take the consistent daily backups we had before. When I try to dump a single schema with almost nothing in it, it takes 12 minutes; a big schema with lots of data takes 14 minutes. So pg_dump is clearly getting lost in the middle of so many schemas.
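For reference, the single-schema test above is timed with a command roughly like the following (schema and database names are placeholders, and the output goes to /dev/null so that disk write speed is not a factor):

    time pg_dump -n some_schema our_db > /dev/null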
The load on the machine is low (it is a hot-standby replica), and memory, shared_buffers, cache and everything else are well configured. The performance of the database itself is good; it is only pg_dump that is inefficient for this task. I found an old discussion from 2007 that seems to be quite related to this problem:
http://postgresql.1045698.n5.nabble.com/5-minutes-to-pg-dump-nothing-tp1888814.html
It seems that pg_dump hasn't been tested with such a huge number of schemas. Does anyone have a solution or suggestions? Do you know of any patches that address this case?
Thanks in advance,
Hugo