Re: large file limitation - Mailing list pgsql-general

From: Tom Lane
Subject: Re: large file limitation
Date:
Msg-id: 11359.1011405107@sss.pgh.pa.us
In response to: Re: large file limitation (Jan Wieck <janwieck@yahoo.com>)
Responses: Re: large file limitation (Jan Wieck <janwieck@yahoo.com>)
           Re: large file limitation (Andrew Sullivan <andrew@libertyrms.info>)
List: pgsql-general
Jan Wieck <janwieck@yahoo.com> writes:
>>> I suppose I need to recompile Postgres on the system now that it
>>> accepts large files.
>>
>> Yes.

>     No.  PostgreSQL is totally fine with that limit; it will just
>     segment huge tables into separate files of 1G max each.
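
(For illustration, with made-up OIDs: a table that has grown past 1 GB
shows up in the data directory as a series of numbered segment files.)

    # hypothetical database OID 16384, table relfilenode 16385
    ls $PGDATA/base/16384/16385*
    # -> 16385  16385.1  16385.2   (each segment capped at 1 GB)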

The backend is fine with it, but "pg_dump >outfile" will choke when
it gets past 2 GB of output (at least, that is true on Solaris).

I imagine "pg_dump | split" would do as a workaround, but I don't have
a Solaris box handy to verify.
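
Something along these lines ought to do it (untested here; "mydb" is just
a placeholder database name):

    # dump in 1 GB chunks so no single output file crosses the 2 GB limit
    pg_dump mydb | split -b 1000m - mydb.dump.
    # to restore, stream the pieces back into psql
    cat mydb.dump.* | psql mydb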

I can envision building 32-bit-compatible stdio packages that don't
choke on large files unless you actually try to do ftell or fseek beyond
the 2G boundary.  Solaris' implementation, however, evidently fails
hard at the boundary.
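
(If someone does want to rebuild with large-file-aware stdio, getconf can
report the transitional compile flags; a rough sketch, assuming a Solaris
or Linux box where these getconf variables exist:)

    getconf LFS_CFLAGS    # typically prints -D_FILE_OFFSET_BITS=64
    getconf LFS_LDFLAGS
    getconf LFS_LIBS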

            regards, tom lane
