Re: pg_dump and thousands of schemas - Mailing list pgsql-performance

From: Jeff Janes
Subject: Re: pg_dump and thousands of schemas
Date:
Msg-id: CAMkU=1zdr7eOEcbopM6c-+zT1aTaWXsTyA_5ZkZ4rgG7EkxMPQ@mail.gmail.com
In response to: Re: pg_dump and thousands of schemas (Tatsuo Ishii <ishii@postgresql.org>)
Responses: Re: pg_dump and thousands of schemas
List: pgsql-performance

On Wed, May 30, 2012 at 2:06 AM, Tatsuo Ishii <ishii@postgresql.org> wrote:
>> Yeah, Jeff's experiments indicated that the remaining bottleneck is lock
>> management in the server.  What I fixed so far on the pg_dump side
>> should be enough to let partial dumps run at reasonable speed even if
>> the whole database contains many tables.  But if psql is taking
>> AccessShareLock on lots of tables, there's still a problem.
>
> Ok, I modified the part of pg_dump where a tremendous number of LOCK
> TABLE statements are issued. I replaced them with a single LOCK TABLE
> listing multiple tables. With 100k tables, the LOCK statements took 13
> minutes in total; now they take only 3 seconds. Comments?
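
If I follow the description, the patch consolidates pg_dump's per-table
locking into a single statement, roughly along these lines (the table
names below are made up purely for illustration; pg_dump takes
AccessShareLock on each table it is going to dump):

    -- before: one LOCK TABLE statement (and one round trip) per table,
    -- roughly 100k of them in the case above
    LOCK TABLE schema0001.t1 IN ACCESS SHARE MODE;
    LOCK TABLE schema0001.t2 IN ACCESS SHARE MODE;
    -- ...and so on for every table being dumped

    -- after: a single LOCK TABLE naming many tables at once
    LOCK TABLE schema0001.t1, schema0001.t2, schema0002.t1
        IN ACCESS SHARE MODE;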

Could you rebase this?  I tried doing it myself, but must have messed
it up because it got slower rather than faster.

Thanks,

Jeff
