Re: pg_dump on older version of postgres eating huge - Mailing list pgsql-general

From: Tom Lane
Subject: Re: pg_dump on older version of postgres eating huge
Date:
Msg-id: 29927.1079730017@sss.pgh.pa.us
In response to: Re: pg_dump on older version of postgres eating huge (Steve Krall <swalker@iglou.com>)
List: pgsql-general

Steve Krall <swalker@iglou.com> writes:
> You can get the file here (20 megs uncompressed, 130K compressed):
> http://www.papajohns.com/postgres/postgres.log.bz2
> While this dump was running, top reported that pg_dump was taking up
> around 500-550 megs.  Then the machine stopped responding.

Hmm.  The trace looks completely unexceptional --- it's just running
through your tables collecting index and trigger info (the loop in
getTables() in pg_dump.c).  You do seem to have rather a lot of
triggers, but not 500 megs worth.

Digging in the 7.1 source code, I notice that there is a small leak in
this loop: the query results from the two index-related queries are
never freed.  There should be a "PQclear(res2);" at line 2313 and
another at line 2386.  (Each at the end of the scope of the "res2" local
variables; the line numbers might be a bit different in 7.1.2 than in
the 7.1.3 code I'm looking at.)  However, the trace shows that these
queries are executed a couple hundred times apiece, and the waste from
the unfreed query results shouldn't exceed a couple of KB each, so this
doesn't explain hundreds of megs of bloat either.  Still, you might try
fixing it and see if it makes a difference.
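
In outline, the problem is just a PQexec() whose result never gets a
matching PQclear().  A sketch of the pattern (not the actual getTables()
code --- the function name and structure here are illustrative only):

    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    static void
    collect_index_info(PGconn *conn, const char *query)
    {
        PGresult *res2 = PQexec(conn, query);

        if (PQresultStatus(res2) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
            PQclear(res2);
            exit(1);
        }

        /* ... read index/trigger attributes out of res2 ... */

        PQclear(res2);      /* the missing call: without it, one result
                             * set leaks for every table scanned */
    }

The fix is just that final PQclear(res2) before res2 goes out of scope.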

The next move I can think of is to "kill -ABRT" the pg_dump run after
it's gotten to some moderate size (50Meg at most) and then manually poke
through the resulting core file to get a sense of what it's filling
memory with.
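
Something like this, as a rough sketch (the invocation and core file
name are illustrative; adjust for your setup):

    ulimit -c unlimited              # allow a core file to be written
    pg_dump mydb >/dev/null &        # illustrative invocation
    # ... wait until top shows pg_dump at ~50 megs ...
    kill -ABRT $!                    # SIGABRT forces a core dump
    # then poke at the core, e.g.
    strings core | sort | uniq -c | sort -rn | head -20

The most-repeated strings in the core are usually a decent hint at
what's filling memory; loading the core in gdb works too.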

            regards, tom lane
