Re: pg_dump large-file support > 16GB - Mailing list pgsql-general

From Tom Lane
Subject Re: pg_dump large-file support > 16GB
Date
Msg-id 6443.1111157914@sss.pgh.pa.us
In response to Re: pg_dump large-file support > 16GB  (Rafael Martinez <r.m.guerrero@usit.uio.no>)
Responses Re: pg_dump large-file support > 16GB
Re: pg_dump large-file support > 16GB
List pgsql-general
Rafael Martinez <r.m.guerrero@usit.uio.no> writes:
> On Thu, 2005-03-17 at 10:17 -0500, Tom Lane wrote:
>> Is that a plain text, tar, or custom dump (-Ft or -Fc)?  Is the behavior
>> different if you just write to stdout instead of using --file?

> - In this example, it is a plain text (--format=p).
> - If I write to stdout and redirect to a file, the dump finishes and I
> get a dump text file over 16GB without problems.

In that case, you have a glibc or filesystem bug and you should be
reporting it to Red Hat.  The *only* difference between writing to
stdout and writing to a --file option is that in one case we use
the preopened "stdout" FILE* and in the other case we do
fopen(filename, "w").  Your report therefore is stating that there
is something broken about fopen'd files.

            regards, tom lane
