Re: pg_dump and thousands of schemas - Mailing list pgsql-performance

From: Jeff Janes
Subject: Re: pg_dump and thousands of schemas
Date:
Msg-id: CAMkU=1yi6f8VKMuwbED=Ogux51Tw-DMQabt580+1vg5_wf-YVA@mail.gmail.com
In response to: Re: pg_dump and thousands of schemas (Tatsuo Ishii <ishii@postgresql.org>)
Responses: Re: pg_dump and thousands of schemas (Tatsuo Ishii <ishii@postgresql.org>)
List: pgsql-performance
On Tue, Jun 12, 2012 at 1:54 AM, Tatsuo Ishii <ishii@postgresql.org> wrote:
>> On Sun, Jun 10, 2012 at 4:47 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
>>> On Wed, May 30, 2012 at 2:06 AM, Tatsuo Ishii <ishii@postgresql.org> wrote:
>>>>> Yeah, Jeff's experiments indicated that the remaining bottleneck is lock
>>>>> management in the server.  What I fixed so far on the pg_dump side
>>>>> should be enough to let partial dumps run at reasonable speed even if
>>>>> the whole database contains many tables.  But if pg_dump is taking
>>>>> AccessShareLock on lots of tables, there's still a problem.
>>>>
>>>> Ok, I modified the part of pg_dump where a tremendous number of LOCK
>>>> TABLE statements are issued. I replaced them with a single LOCK TABLE
>>>> naming multiple tables. With 100k tables the LOCK statements took 13
>>>> minutes in total; now they take only 3 seconds. Comments?
>>>
>>> Could you rebase this?  I tried doing it myself, but must have messed
>>> it up because it got slower rather than faster.
>>
>> OK, I found the problem.  In fixing a merge conflict, I had it execute
>> the query every time it appended a table, rather than just at the end.
>>
>> With my proposed patch in place, I find that for a full default dump
>> your patch is slightly faster with < 300,000 tables, and slightly
>> slower with > 300,000.  The differences are generally <2% in either
>> direction.  When it comes to back-patching and partial dumps, I'm not
>> really sure what to test.
>>
>> For the record, there is still a quadratic performance issue on the
>> server, albeit with a much smaller constant factor than the Reassign
>> one.  It is in get_tabstat_entry.  I don't know if it is worth working
>> on that in isolation--if PG is going to try to accommodate hundreds of
>> thousands of tables, there probably needs to be a more general way to
>> limit the memory used by all aspects of the rel caches.
>
> I would like to test your patch, with and without my patch. Could you
> please give me the patches? Or do you have your own git repository?

The main patch is in the commit fest as "Resource Owner reassign Locks
for the sake of pg_dump"

My re-basing of your patch is attached.
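For anyone following the thread: the change under discussion collapses one
LOCK TABLE statement per table into a single statement naming every table,
so the client pays one round trip and one statement's overhead instead of
tens of thousands. A minimal sketch of the idea (pg_dump itself is C; the
helper name and table names below are hypothetical, for illustration only):

```python
def build_lock_statement(qualified_names):
    """Build one LOCK TABLE statement covering all the given tables,
    instead of issuing a separate statement (and round trip) per table."""
    tables = ", ".join(qualified_names)
    return "LOCK TABLE %s IN ACCESS SHARE MODE" % tables

# One statement for three tables rather than three separate statements:
stmt = build_lock_statement(["s1.t1", "s1.t2", "s2.t3"])
print(stmt)  # LOCK TABLE s1.t1, s1.t2, s2.t3 IN ACCESS SHARE MODE
```

The lock mode matches what pg_dump needs (AccessShareLock); the win comes
purely from batching, since each statement previously carried its own
parse/execute cost.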

Cheers,

Jeff

