Re: pg_dump and thousands of schemas - Mailing list pgsql-performance

From: Hugo
Subject: Re: pg_dump and thousands of schemas
Date:
Msg-id: 1337921645663-5709975.post@n5.nabble.com
In response to: Re: pg_dump and thousands of schemas (Bruce Momjian <bruce@momjian.us>)
Responses: Re: pg_dump and thousands of schemas
           Re: pg_dump and thousands of schemas
List: pgsql-performance
Thanks for the replies. The number of relations in the database is really
high (~500,000) and I don't think we can shrink that. The truth is that
schemas bring a lot of advantages to our system, and PostgreSQL shows no
signs of stress handling them. So I believe it should also be possible for
pg_dump to handle them with the same elegance.

Dumping just one schema out of thousands was indeed an attempt to find a
faster way to back up the database. I don't mind creating a shell script or
program that dumps every schema individually (see the sketch below), as long
as each dump is fast enough to keep the total time within a few hours. But
since each dump currently takes at least 12 minutes, that just doesn't work.
I have been looking at the source of pg_dump for possible improvements, but
that will certainly take days or even weeks. For now, we will probably have
to fall back on 'tar'-ing the PostgreSQL data directory as our backup
solution, until we can fix pg_dump or until PostgreSQL 9.2 becomes the
official release (as long as I don't need a dump and restore to upgrade the
db).
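
For concreteness, this is the kind of per-schema loop I had in mind. It is
only a rough sketch: 'mydb' and '/backup' are placeholders, and it assumes
psycopg2 is installed and that the usual libpq environment variables
(PGHOST, PGUSER, PGPASSWORD, ...) take care of authentication.

    #!/usr/bin/env python
    # Rough sketch: dump every non-system schema to its own file.
    # 'mydb' and '/backup' are placeholders, not real settings.
    import subprocess
    import psycopg2

    DBNAME = "mydb"
    OUTDIR = "/backup"

    conn = psycopg2.connect("dbname=%s" % DBNAME)
    cur = conn.cursor()
    # List user schemas, skipping pg_catalog, pg_toast, etc.
    cur.execute("SELECT nspname FROM pg_namespace "
                "WHERE nspname !~ '^pg_' "
                "AND nspname <> 'information_schema'")
    schemas = [row[0] for row in cur.fetchall()]
    conn.close()

    for schema in schemas:
        # -Fc: compressed custom format, restorable with pg_restore;
        # -n:  limit the dump to this one schema (note that -n takes a
        #      pattern, so names with wildcard characters need quoting).
        subprocess.check_call(["pg_dump", "-Fc", "-n", schema,
                               "-f", "%s/%s.dump" % (OUTDIR, schema),
                               DBNAME])

Each schema could then be restored independently with pg_restore, and the
loop could be spread over a few parallel processes, but with each pg_dump
invocation taking at least 12 minutes here, the total is still far too long.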

If anyone has more suggestions, I would like to hear them. Thank you!

Regards,
Hugo


