As a follow-up to Magnus's report that pg_upgrade was slow for many
tables, I did some more testing with many tables, e.g.:
CREATE TABLE test991 (x SERIAL);
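The schema was generated with a loop along these lines; this is just a
sketch of the idea, not the attached test script itself:

    -- Sketch only (not the attached script): create many identical tables.
    -- 16000 here; the runs below used 0, 1k, 2k, ... 16k.
    DO $$
    BEGIN
        FOR i IN 1 .. 16000 LOOP
            EXECUTE format('CREATE TABLE test%s (x SERIAL)', i);
        END LOOP;
    END $$;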
I ran it for 0, 1k, 2k, ... 16k tables, and got these results:
    tables   pg_dump   restore   pg_upgrade (increase)
         0      0.30      0.24        11.73  (-)
      1000      6.46      6.55        28.79  (2.45x)
      2000     29.82     20.96        69.75  (2.42x)
      4000     95.70    115.88       289.82  (4.16x)
      8000    405.38    505.93      1168.60  (4.03x)
     16000   1702.23   2197.56      5022.82  (4.30x)

(All times are in seconds; the parenthesized figure is the increase over
the previous row.)
Things look fine through 2k, but from 4k on, each doubling of the table
count makes pg_dump, the restore, and pg_upgrade (which is mostly a
combination of those two) take 4x as long, rather than the 2x predicted
by the growth in the number of tables. To see how bad that is: 16k
tables already takes about 1.4 hours (5022.82 seconds), and at this rate
32k tables would take about 5.6 hours (4 x 5022.82 = 20,091 seconds) by
my estimate.
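Worked out from the table above, the per-doubling ratios for the two
component operations are (my arithmetic, derived from the numbers
already shown):

    pg_dump:   95.70 /  29.82 = 3.2x   405.38 /  95.70 = 4.2x   1702.23 / 405.38 = 4.2x
    restore:  115.88 /  20.96 = 5.5x   505.93 / 115.88 = 4.4x   2197.56 / 505.93 = 4.3x

That is, the time roughly quadruples every time the table count doubles,
which is what O(N^2) behavior looks like.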
You can see that the majority of the pg_upgrade duration is made up of
pg_dump and the schema restore, so I can't really speed up pg_upgrade
without speeding those up, and the 4x increase is in _both_ of those
operations, not just one.
Also, for 16k, I had to increase max_locks_per_transaction or the dump
would fail, which kind of surprised me.
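In case anyone else hits that: pg_dump takes an AccessShareLock on every
table in a single transaction, and the shared lock table only has room
for roughly max_locks_per_transaction * (max_connections +
max_prepared_transactions) locks, so the default of 64 runs out well
before 16k tables. Something like this in postgresql.conf gets past it
(the value is illustrative, not the setting from my run):

    # postgresql.conf -- illustrative value, not the one from my test run
    # lock table capacity is roughly max_locks_per_transaction *
    # (max_connections + max_prepared_transactions)
    max_locks_per_transaction = 256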
I tested 9.2 and git head, but they produced identical numbers. I did
use synchronous_commit=off.
Any ideas? I am attaching my test script.
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ It's impossible for everything to be true. +