Re: [HACKERS] pg_dump core dump, upgrading from 6.5b1 to 5/24 snapshot - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: [HACKERS] pg_dump core dump, upgrading from 6.5b1 to 5/24 snapshot
Date:
Msg-id: 13778.927744354@sss.pgh.pa.us
In response to: Re: [HACKERS] pg_dump core dump, upgrading from 6.5b1 to 5/24 snapshot (Ari Halberstadt <ari@shore.net>)
List: pgsql-hackers
Ari Halberstadt <ari@shore.net> writes:
> Tom Lane <tgl@sss.pgh.pa.us> noted that MAXQUERYLEN's value in pg_dump is
> 5000. Some of my fields are the maximum length for a text field.

There are two bugs here: dumpClasses_dumpData() should not be making any
assumption at all about the maximum size of a tuple field, and pg_dump's
value for MAXQUERYLEN ought to match the backend's.  I hadn't realized
that pg_dump wasn't using the same query buffer size as the backend ---
this might explain some other complaints we've seen about being
unable to dump complex table or rule definitions.

Will fix both problems this evening.
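
For illustration, a minimal sketch of the first fix, assuming nothing
about field sizes.  The function name and error handling are invented,
not pg_dump's actual code; it just shows sizing the buffer from
PQgetlength() instead of from MAXQUERYLEN:

    /*
     * Hedged sketch, not pg_dump's real code: allocate the output buffer
     * from the actual field length (PQgetlength) rather than assuming a
     * fixed MAXQUERYLEN, so an oversized text field cannot overrun it.
     * dump_field() is an invented name.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "libpq-fe.h"

    static void
    dump_field(PGresult *res, int tup, int fld, FILE *out)
    {
        int   len = PQgetlength(res, tup, fld); /* actual length, no guess */
        char *buf = (char *) malloc(len + 1);

        if (buf == NULL)
        {
            fprintf(stderr, "dump_field: out of memory\n");
            exit(1);
        }
        memcpy(buf, PQgetvalue(res, tup, fld), len);
        buf[len] = '\0';
        fputs(buf, out);
        free(buf);
    }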

> The dumped data file is 15MB (no -d or -D option) or 22MB (with -D). The
> core file is 13.8MB, which sounds like a memory leak in pg_dump.

Not necessarily --- are the large text fields in a multi-megabyte table?
When you're using -D, pg_dump just does a "SELECT * FROM table" and then
iterates through the returned result, which must hold the whole table.
(This is another reason why I prefer not to use -d/-D ... the COPY
method doesn't require buffering the whole table inside pg_dump.)
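
For comparison, a rough sketch of the COPY path; the function and table
names are invented and the buffer size is arbitrary.  PQgetline() hands
back one data line per call, so memory use stays flat no matter how big
the table is:

    /*
     * Hedged sketch, not pg_dump's real code: stream a table out with
     * COPY via PQgetline(), holding at most sizeof(line) bytes at a
     * time.  "bigtable" is an invented name.
     */
    #include <stdio.h>
    #include <string.h>
    #include "libpq-fe.h"

    static void
    copy_table_out(PGconn *conn, FILE *out)
    {
        char      line[8192];
        int       ret;
        PGresult *res = PQexec(conn, "COPY bigtable TO stdout");

        if (PQresultStatus(res) != PGRES_COPY_OUT)
        {
            fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
            PQclear(res);
            return;
        }
        PQclear(res);

        for (;;)
        {
            ret = PQgetline(conn, line, sizeof(line));
            if (ret == EOF || strcmp(line, "\\.") == 0)
                break;          /* EOF, or "\." marks end of COPY data */
            fputs(line, out);
            if (ret == 0)       /* 0: whole line read; 1: buffer filled */
                fputc('\n', out);
        }
        PQendcopy(conn);
    }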

Some day we should enhance libpq to allow a select result to be received
and processed in chunks smaller than the whole result.
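
Until then, a client can approximate chunked processing with a cursor.
A sketch with invented names ("c", "bigtable"), pulling 100 rows per
FETCH so only one chunk is ever in memory:

    /*
     * Hedged sketch: until libpq can hand a SELECT result back in
     * pieces, a cursor lets the client pull it in fixed-size chunks,
     * keeping at most 100 rows in memory at a time.
     */
    #include <stdio.h>
    #include "libpq-fe.h"

    static void
    fetch_in_chunks(PGconn *conn, FILE *out)
    {
        PGresult *res;
        int       i;

        PQclear(PQexec(conn, "BEGIN"));
        PQclear(PQexec(conn, "DECLARE c CURSOR FOR SELECT * FROM bigtable"));

        for (;;)
        {
            res = PQexec(conn, "FETCH 100 FROM c");
            if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0)
            {
                PQclear(res);
                break;          /* error, or cursor exhausted */
            }
            for (i = 0; i < PQntuples(res); i++)
                fprintf(out, "%s\n", PQgetvalue(res, i, 0)); /* first column only */
            PQclear(res);
        }
        PQclear(PQexec(conn, "END"));
    }
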
        regards, tom lane

