Re: pg_dump and thousands of schemas - Mailing list pgsql-performance

From Hugo
Subject Re: pg_dump and thousands of schemas
Date
Msg-id 1338091933763-5710183.post@n5.nabble.com
In response to Re: pg_dump and thousands of schemas  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: pg_dump and thousands of schemas  (Jeff Janes <jeff.janes@gmail.com>)
List pgsql-performance
Here is a sample dump that takes pg_dump a long time to write:
http://postgresql.1045698.n5.nabble.com/file/n5710183/test.dump.tar.gz
(The file above is 2.4 MB; the uncompressed dump is 66 MB.)

This database has 2,311 schemas similar to those in my production database.
All schemas are empty, but pg_dump still takes 3 hours to finish on my
computer. So now you can imagine my production database with more than
20,000 schemas like that. Can you guys take a look and see if the code has
room for improvement? I generated this dump with PostgreSQL 9.1 (which is
what I have on my local computer), but my production database runs
PostgreSQL 9.0, so it would be great if any improvements could be applied to
version 9.0 as well.
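
In case it helps, here is a rough sketch of how a similar test database can
be built by hand (the database name "testdb" and the schema names are
arbitrary, and these bare schemas are simpler than the ones in the sample
dump above, so the timing will not match exactly):

    # Hypothetical reproduction sketch -- "testdb" is an arbitrary name,
    # and these bare schemas carry less per-schema detail than the dump.
    createdb testdb
    psql testdb <<'SQL'
    DO $$
    BEGIN
      FOR i IN 1 .. 2311 LOOP
        EXECUTE 'CREATE SCHEMA schema_' || i;
      END LOOP;
    END
    $$;
    SQL
    # Time a plain-format dump of the whole database.
    time pg_dump -f test.dump testdb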

Thanks a lot for all the help!

Hugo

