Re: large file limitation - Mailing list pgsql-general

From Jan Wieck
Subject Re: large file limitation
Date
Msg-id 200201190156.g0J1u6a07441@saturn.janwieck.net
In response to Re: large file limitation  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: large file limitation  (Tom Lane <tgl@sss.pgh.pa.us>)
Re: large file limitation  (Andrew Sullivan <andrew@libertyrms.info>)
List pgsql-general
Tom Lane wrote:
> Jan Wieck <janwieck@yahoo.com> writes:
> >>> I suppose I need to recompile Postgres on the system now that it
> >>> accepts large files.
> >>
> >> Yes.
>
> >     No.  PostgreSQL is totally fine with that limit; it will just
> >     segment huge tables into separate files of 1GB max each.
>
> The backend is fine with it, but "pg_dump >outfile" will choke when
> it gets past 2GB of output (at least, that is true on Solaris).
>
> I imagine "pg_dump | split" would do as a workaround, but don't have
> a Solaris box handy to verify.
>
> I can envision building 32-bit-compatible stdio packages that don't
> choke on large files unless you actually try to do ftell or fseek beyond
> the 2GB boundary.  Solaris' implementation, however, evidently fails
> hard at the boundary.

    Meaning what?  That even if he recompiled PostgreSQL to
    support large files, "pg_dump >outfile" would still choke
    ... duh!
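
    For the archives, a sketch of the "pg_dump | split" workaround
    Tom describes (the database name, chunk size, and file prefix
    here are illustrative, not from the original thread):

        # dump in pieces well under 2GB so no single output file
        # trips the 32-bit stdio limit; "-" tells split to read stdin
        pg_dump mydb | split -b 1000m - mydb.dump.

        # restore: the shell glob expands the pieces in order
        # (mydb.dump.aa, mydb.dump.ab, ...) back into one stream
        cat mydb.dump.* | psql mydb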


Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #


