Re: pg_dump and thousands of schemas - Mailing list pgsql-performance

From Hugo
Subject Re: pg_dump and thousands of schemas
Date
Msg-id 1343799184312-5718532.post@n5.nabble.com
In response to Re: pg_dump and thousands of schemas  (Tatsuo Ishii <ishii@postgresql.org>)
List pgsql-performance
Hi guys,

I just want to let you know that we have created our own solution to dump
and restore our databases. The tool is written in Java and the source is
now on GitHub (MIT license): https://github.com/tig100/JdbcPgBackup

The main approach is to cache all database objects - schemas, tables,
indexes, etc. - and, instead of having Postgres do joins between the
pg_catalog tables (which contain thousands of schemas in pg_namespace and
millions of columns in pg_attribute), do full table scans and then find
which schema or table an object belongs to by looking it up in a Java hash
map keyed on the schema and table OIDs (a sketch of the idea follows the
numbers below). The dump is not transactionally safe, so it should be
performed only on a replica database (with WAL replay disabled), not on a
live db. Some statistics:

Dump 11,000 schemas = 3 hours.
Dump 28,000 schemas = 8 hours.
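For anyone curious about the approach, here is a minimal sketch of the
client-side OID lookup idea, assuming a standard JDBC connection (the class
and connection parameters are illustrative, not the actual JdbcPgBackup
API): each catalog table is read with a single full scan, and ownership is
resolved in a Java HashMap instead of a server-side join.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.HashMap;
    import java.util.Map;

    public class CatalogCache {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection parameters; adjust to your environment.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/mydb", "postgres", "secret")) {

                // One full scan of pg_namespace: schema OID -> schema name.
                Map<Long, String> schemasByOid = new HashMap<>();
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(
                             "SELECT oid, nspname FROM pg_namespace")) {
                    while (rs.next()) {
                        schemasByOid.put(rs.getLong("oid"), rs.getString("nspname"));
                    }
                }

                // One full scan of pg_class for ordinary tables. No join:
                // the owning schema is resolved from the map built above.
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(
                             "SELECT oid, relname, relnamespace FROM pg_class " +
                             "WHERE relkind = 'r'")) {
                    while (rs.next()) {
                        String schema = schemasByOid.get(rs.getLong("relnamespace"));
                        System.out.println(schema + "." + rs.getString("relname"));
                    }
                }
            }
        }
    }

The point of the design is that each catalog table is fetched exactly once,
so the cost stays linear in catalog size rather than depending on how well
the planner handles joins over pg_namespace and pg_attribute at this scale.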

You can read more about the tool on the GitHub page.

Best regards,
Hugo

