Thread: Re: [HACKERS] pg_dump core dump, upgrading from 6.5b1 to 5/24 snapshot

Re: [HACKERS] pg_dump core dump, upgrading from 6.5b1 to 5/24 snapshot

From: Ari Halberstadt <ari@shore.net>
Tom Lane <tgl@sss.pgh.pa.us> noted that MAXQUERYLEN's value in pg_dump is
5000. Some of my fields are the maximum length for a text field.

Using the 5/26 snapshot, I increased MAXQUERYLEN to 16384 and the dump
completed without crashing. I also tried 8192, but it still crashed at that size.

The dumped data file is 15MB (no -d or -D option) or 22MB (with -D). The
core file is 13.8MB, which sounds like a memory leak in pg_dump.

-- Ari Halberstadt mailto:ari@shore.net <http://www.magiccookie.com/>
PGP public key available at <http://www.magiccookie.com/pgpkey.txt>


Re: [HACKERS] pg_dump core dump, upgrading from 6.5b1 to 5/24 snapshot

From: Tom Lane <tgl@sss.pgh.pa.us>
Ari Halberstadt <ari@shore.net> writes:
> Tom Lane <tgl@sss.pgh.pa.us> noted that MAXQUERYLEN's value in pg_dump is
> 5000. Some of my fields are the maximum length for a text field.

There are two bugs here: dumpClasses_dumpData() should not be making any
assumption at all about the maximum size of a tuple field, and pg_dump's
value for MAXQUERYLEN ought to match the backend's.  I hadn't realized
that it wasn't using the same query buffer size as the backend does ---
this might possibly explain some other complaints we've seen about being
unable to dump complex table or rule definitions.

Will fix both problems this evening.

> The dumped data file is 15MB (no -d or -D option) or 22MB (with -D). The
> core file is 13.8MB, which sounds like a memory leak in pg_dump.

Not necessarily --- are the large text fields in a multi-megabyte table?
When you're using -D, pg_dump just does a "SELECT * FROM table" and then
iterates through the returned result, which must hold the whole table.
(This is another reason why I prefer not to use -d/-D ... the COPY
method doesn't require buffering the whole table inside pg_dump.)
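In practice that means letting pg_dump use its default COPY-based output for large tables. A minimal sketch of the two invocation styles (the database name "mydb" is hypothetical):

```shell
# Default output uses COPY, which streams rows through pg_dump
# without buffering the whole table in client memory:
pg_dump mydb > mydb.dump

# -d/-D emit INSERT statements instead; pg_dump must first run
# "SELECT * FROM table" and hold the entire result in memory,
# so avoid these flags for multi-megabyte tables:
pg_dump -D mydb > mydb_inserts.dump
```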

Some day we should enhance libpq to allow a select result to be received
and processed in chunks smaller than the whole result.
        regards, tom lane


Re: [HACKERS] pg_dump core dump, upgrading from 6.5b1 to 5/24 snapshot

From: Ari Halberstadt <ari@shore.net>
Tom Lane <tgl@sss.pgh.pa.us> wrote:
>...
>Will fix both problems this evening.

Thanks!

>> The dumped data file is 15MB (no -d or -D option) or 22MB (with -D). The
>> core file is 13.8MB, which sounds like a memory leak in pg_dump.
>
>Not necessarily --- are the large text fields in a multi-megabyte table?

Yes, it's a 15MB file for the table.

>When you're using -D, pg_dump just does a "SELECT * FROM table" and then
>iterates through the returned result, which must hold the whole table.
>(This is another reason why I prefer not to use -d/-D ... the COPY
>method doesn't require buffering the whole table inside pg_dump.)

The -d/-D options are out now for my nightly backups. (Foolish of me to
have used them with backups in the first place!)

-- Ari Halberstadt mailto:ari@shore.net <http://www.magiccookie.com/>
PGP public key available at <http://www.magiccookie.com/pgpkey.txt>