Re: pg_dump large-file support > 16GB - Mailing list pgsql-general

From Aly Dharshi
Subject Re: pg_dump large-file support > 16GB
Date
Msg-id 4239C14E.90101@telus.net
Whole thread Raw
In response to Re: pg_dump large-file support > 16GB  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: pg_dump large-file support > 16GB  (Rafael Martinez <r.m.guerrero@usit.uio.no>)
List pgsql-general
Would it help to use a different filesystem, such as SGI's XFS? Would it even be possible to implement that at your site at this stage?
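A common workaround when the filesystem (or tools on it) cannot handle files past a size limit is to pipe pg_dump's output through split so no single file crosses the limit, then reassemble with cat at restore time. The sketch below uses hypothetical names (mydb, dump.sql.part_); the pg_dump lines are shown as comments, and the split/cat round trip is demonstrated on dummy data:

```shell
# Hypothetical sketch: keep each dump piece under the limit, e.g.
#   pg_dump mydb | split -b 1000m - dump.sql.part_
# and restore by concatenating the pieces in order:
#   cat dump.sql.part_* | psql mydb
#
# The split/cat round trip itself works on any data:
printf 'line1\nline2\nline3\n' > original.txt
split -b 6 original.txt part_     # cut into 6-byte pieces: part_aa, part_ab, ...
cat part_* > rejoined.txt         # shell glob sorts lexically, so order is preserved
cmp original.txt rejoined.txt && echo "round trip OK"
```

Compressing the stream (pg_dump mydb | gzip > dump.gz) can also keep the file under the limit, but split is the only approach that caps the size regardless of how well the data compresses.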

Tom Lane wrote:
> Rafael Martinez Guerrero <r.m.guerrero@usit.uio.no> writes:
>
>>We are trying to dump a 30GB+ database using pg_dump with the --file
>>option. In the beginning everything works fine: pg_dump runs and we get
>>a dump file. But when this file reaches 16GB it disappears from the
>>filesystem, and pg_dump keeps working without reporting an error until
>>it finishes, even though the file no longer exists. The filesystem has
>>free space.
>
>
> Is that a plain text, tar, or custom dump (-Ft or -Fc)?  Is the behavior
> different if you just write to stdout instead of using --file?
>
>             regards, tom lane
>

--
Aly Dharshi
aly.dharshi@telus.net

     "A good speech is like a good dress
      that's short enough to be interesting
      and long enough to cover the subject"
