
From Tom Lane
Subject pgsql: Fix O(N^2) behavior in pg_dump for large numbers of owned sequences.
Date
Msg-id E1SE3GV-0000r7-AX@gemulon.postgresql.org
List pgsql-committers
Fix O(N^2) behavior in pg_dump for large numbers of owned sequences.

The loop that matched owned sequences to their owning tables required time
proportional to the number of owned sequences times the number of tables.
This work was only expended in selective-dump situations, which is probably
why the issue wasn't recognized long since.  Refactor slightly so that we
can perform this matching after the index array for findTableByOid has been
set up, reducing the time to O(M log N), where M is the number of owned
sequences and N the number of tables.
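
For illustration only: the following is a minimal standalone C sketch of the
technique, not the code from this commit.  It sorts the table entries by OID
once and then resolves each owned sequence's owner with a binary search,
analogous to what the index array behind findTableByOid makes possible.  The
TblInfo and SeqInfo structs and the find_table_by_oid helper are hypothetical
stand-ins, not pg_dump's actual data structures.

/*
 * Hypothetical sketch only: TblInfo, SeqInfo and find_table_by_oid are
 * illustrative names, not pg_dump's real structures.
 */
#include <stdio.h>
#include <stdlib.h>

typedef unsigned int Oid;

typedef struct
{
    Oid         oid;
    const char *name;
} TblInfo;

typedef struct
{
    Oid         owning_tab;     /* OID of the owning table */
    const char *name;
} SeqInfo;

/* qsort comparator: order tables by OID so they can be binary-searched */
static int
cmp_tbl_oid(const void *a, const void *b)
{
    Oid         oa = ((const TblInfo *) a)->oid;
    Oid         ob = ((const TblInfo *) b)->oid;

    return (oa > ob) - (oa < ob);
}

/* O(log N) lookup over the sorted table array, akin to findTableByOid */
static TblInfo *
find_table_by_oid(TblInfo *tbls, int ntbls, Oid oid)
{
    int         low = 0,
                high = ntbls - 1;

    while (low <= high)
    {
        int         mid = low + (high - low) / 2;

        if (tbls[mid].oid == oid)
            return &tbls[mid];
        if (tbls[mid].oid < oid)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return NULL;
}

int
main(void)
{
    TblInfo     tbls[] = {{5001, "orders"}, {4002, "users"}, {6003, "events"}};
    SeqInfo     seqs[] = {{4002, "users_id_seq"}, {6003, "events_id_seq"}};
    int         ntbls = 3;
    int         nseqs = 2;

    /* Sort the table array by OID once up front ... */
    qsort(tbls, ntbls, sizeof(TblInfo), cmp_tbl_oid);

    /* ... then each owned sequence needs only an O(log N) lookup. */
    for (int i = 0; i < nseqs; i++)
    {
        TblInfo    *owner = find_table_by_oid(tbls, ntbls, seqs[i].owning_tab);

        printf("%s is owned by %s\n", seqs[i].name,
               owner ? owner->name : "(none)");
    }
    return 0;
}

With N tables and M owned sequences, the one-time sort costs O(N log N) and
the lookups O(M log N) in total, versus O(M * N) for the original nested scan.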

Per gripe from Mike Roest.  Since this is a longstanding performance bug,
backpatch to all supported versions.

Branch
------
REL9_0_STABLE

Details
-------
http://git.postgresql.org/pg/commitdiff/b77da19930e6b6f0e8ff0f721e59713e3709eea1

Modified Files
--------------
src/bin/pg_dump/common.c  |    3 +++
src/bin/pg_dump/pg_dump.c |   41 +++++++++++++++++++++++------------------
src/bin/pg_dump/pg_dump.h |    1 +
3 files changed, 27 insertions(+), 18 deletions(-)

