Re: [PERFORM] pg_dump and thousands of schemas - Mailing list pgsql-hackers

From: Tatsuo Ishii
Subject: Re: [PERFORM] pg_dump and thousands of schemas
Date: 2012-05-31 14:29:01
Msg-id: 20120531.142901.341963320798831982.t-ishii@sraoss.co.jp
In response to: Re: [PERFORM] pg_dump and thousands of schemas (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
> I'm not excited by this patch.  It dodges the O(N^2) lock behavior for
> the initial phase of acquiring the locks, but it does nothing for the
> lock-related slowdown occurring in all pg_dump's subsequent commands.
> I think we really need to get in the server-side fix that Jeff Janes is
> working on, and then re-measure to see if something like this is still
> worth the trouble.

Well, even with the current backend, locking 100,000 tables took only
3 seconds in my test. So even if Jeff Janes's fix succeeds, I guess it
will save just those 3 seconds in my case, and if the number of tables
is smaller, the saving will be smaller too. This suggests that most of
the time for processing LOCK is spent in communication between pg_dump
and the backend. Of course this is just my guess, though.
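To illustrate the round-trip point (a sketch with hypothetical table
names; pg_dump really does take ACCESS SHARE locks, but it generates
the statements itself): without the patch, pg_dump issues one LOCK per
table, paying one client-server round trip each, while the patch lists
many tables in a single statement:

    -- one round trip per table: N tables cost N round trips
    LOCK TABLE s1.t1 IN ACCESS SHARE MODE;
    LOCK TABLE s1.t2 IN ACCESS SHARE MODE;
    -- ... and so on for every table being dumped

    -- one round trip for many tables (what the patch does)
    LOCK TABLE s1.t1, s1.t2, s1.t3 IN ACCESS SHARE MODE;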

> I am also a tad concerned about whether we might not
> have problems with parsing memory usage, or some such, with thousands of
> tables being listed in a single command.

That's easy to fix: just divide the one big LOCK statement into
multiple smaller LOCK statements.
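For example (hypothetical table names, and an arbitrary batch size of
1,000), rather than one statement naming all 100,000 tables, pg_dump
could send a series of bounded statements:

    LOCK TABLE t1, t2, /* ... */ t1000 IN ACCESS SHARE MODE;
    LOCK TABLE t1001, t1002, /* ... */ t2000 IN ACCESS SHARE MODE;
    -- ... 100 statements in total, instead of one huge one

Each statement then has a bounded size for the parser, while the
number of round trips stays small.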

My big concern is that even if the locking part is fixed (either by
Jeff Janes's fix or by mine), much of pg_dump's time is still spent on
SELECTs against the system catalogs. The fix would be to turn the many
per-table SELECTs into a single SELECT, probably using a big IN clause
on the table OIDs.
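As a rough sketch (hypothetical OIDs; pg_dump's real catalog queries
are considerably more involved), instead of querying pg_attribute once
per table:

    SELECT * FROM pg_attribute WHERE attrelid = 16384;
    SELECT * FROM pg_attribute WHERE attrelid = 16385;
    -- ... one query, and one round trip, per table

a single query could fetch the columns of every table of interest:

    SELECT * FROM pg_attribute
    WHERE attrelid IN (16384, 16385, 16386 /* ... */)
    ORDER BY attrelid, attnum;

pg_dump would then split the combined result set per table on the
client side.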
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
