Re: pg_dump and thousands of schemas - Mailing list pgsql-performance

From Tom Lane
Subject Re: pg_dump and thousands of schemas
Date
Msg-id 12147.1338306742@sss.pgh.pa.us
In response to Re: pg_dump and thousands of schemas  (Tatsuo Ishii <ishii@postgresql.org>)
Responses Re: pg_dump and thousands of schemas  (Tatsuo Ishii <ishii@postgresql.org>)
Re: pg_dump and thousands of schemas  (Tatsuo Ishii <ishii@postgresql.org>)
List pgsql-performance
Tatsuo Ishii <ishii@postgresql.org> writes:
> So I did a quick test with old PostgreSQL 9.0.2 and current (as of
> commit 2755abf386e6572bad15cb6a032e504ad32308cc). In a freshly initdb-ed
> database I created 100,000 tables, each with two integer
> attributes, one of them a primary key. Creating the tables was
> reasonably fast as expected (18-20 minutes). This created a 1.4GB
> database cluster.

> pg_dump dbname >/dev/null took 188 minutes on 9.0.2, which was a
> pretty long time, as the customer complained. And what about current? Well, it
> took 125 minutes. ps showed that most of the time was spent in the backend.
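The test setup above can be sketched roughly as follows. This is a minimal illustration, not the original test script; the table and column names (t0, id, v) are hypothetical, and the original used n = 100000:

```python
# Sketch of DDL generation for the test above: n tables, each with
# two integer columns, one of them a primary key.
def make_ddl(n):
    return "\n".join(
        f"CREATE TABLE t{i} (id integer PRIMARY KEY, v integer);"
        for i in range(n)
    )

# The output could then be fed to psql, e.g.:
#   python make_ddl.py > create_tables.sql; psql dbname -f create_tables.sql
print(make_ddl(2))
# → CREATE TABLE t0 (id integer PRIMARY KEY, v integer);
#   CREATE TABLE t1 (id integer PRIMARY KEY, v integer);
```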

Yeah, Jeff's experiments indicated that the remaining bottleneck is lock
management in the server.  What I fixed so far on the pg_dump side
should be enough to let partial dumps run at reasonable speed even if
the whole database contains many tables.  But if pg_dump is taking
AccessShareLock on lots of tables, there's still a problem.
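A partial dump of the kind mentioned above only takes AccessShareLock on the tables it actually dumps, not on every table in the database. A minimal sketch, assuming a hypothetical database mydb with per-tenant schemas:

```shell
# Restrict the dump to one schema (names here are hypothetical).
# pg_dump's --schema option limits which tables are dumped, and
# therefore which tables must be locked.
DB=mydb
SCHEMA=tenant_0001
CMD="pg_dump --schema=$SCHEMA $DB"
echo "$CMD"
# → pg_dump --schema=tenant_0001 mydb
```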

            regards, tom lane
