On Thu, Nov 8, 2012 at 4:33 PM, Bruce Momjian <bruce@momjian.us> wrote:
> On Thu, Nov 8, 2012 at 03:46:09PM -0800, Jeff Janes wrote:
>> On Wed, Nov 7, 2012 at 6:17 PM, Bruce Momjian <bruce@momjian.us> wrote:
>> > As a followup to Magnus's report that pg_upgrade was slow for many
>> > tables, I did some more testing with many tables, e.g.:
>> >
>> ...
>> >
>> > Any ideas? I am attaching my test script.
>>
>> Have you reviewed the thread at:
>> http://archives.postgresql.org/pgsql-performance/2012-09/msg00003.php
>> ?
>>
>> There is a known N^2 behavior when using pg_dump against pre-9.3 servers.
>
> I am actually now dumping git head/9.3, so I assume all the problems we
> know about should be fixed.
Are you sure the server you are dumping out of is head?
Using head's pg_dump, but 9.2.1 server, it takes me 179.11 seconds to
dump 16,000 tables (schema only) like your example, and it is
definitely quadratic.
But using head's pg_dump to dump tables out of head's server, it only
took 24.95 seconds, and the quadratic term is not yet important;
things still look linear.
But even the 179.11 seconds is several times faster than your report
of 757.8, so I'm not sure what is going on there. I don't think my
laptop is particularly fast:
Intel(R) Pentium(R) CPU B960 @ 2.20GHz
Is the next value, increment, etc. for a sequence stored in a catalog,
or are they stored in the 8kB file associated with each sequence? If
they are stored in the file, then it is a shame that pg_dump goes to the
effort of extracting that info if pg_upgrade is just going to
overwrite it anyway.
Cheers,
Jeff