Re: large file limitation - Mailing list pgsql-general

From Andrew Sullivan
Subject Re: large file limitation
Msg-id 20020119134650.B8903@mail.libertyrms.com
In response to Re: large file limitation  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-general
On Fri, Jan 18, 2002 at 08:51:47PM -0500, Tom Lane wrote:
>
> The backend is fine with it, but "pg_dump >outfile" will choke when
> it gets past 2Gb of output (at least, that is true on Solaris).

Right.  Sorry if I wasn't clear about that; I know that Postgres
itself never writes a file bigger than 1 Gig, but pg_dump and
pg_restore can easily pass that limit.

> I imagine "pg_dump | split" would do as a workaround, but don't have
> a Solaris box handy to verify.

It will.  If you check 'man largefiles' on Solaris (7, anyway; I don't
know about other versions), it lists which basic Solaris system
binaries are large-file aware.  /usr/bin/split is one of them, as is
/usr/bin/compress.  We are working in a hosted environment, and I
didn't completely trust the hosts not to drop one of the files when
sending them to tape, or I would have used split instead of
recompiling.
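For anyone hitting this, the "pg_dump | split" idea can be sketched
roughly as below. The database name, chunk size, and file names are
placeholders, and a synthetic stream stands in for pg_dump output so the
pipeline can be tried without a database; with a real database the first
command would be something like `pg_dump mydb | split -b 1000m - mydb.dump.`:

```shell
# A stand-in stream plays the role of pg_dump output; split writes it
# into pieces (mydb.dump.aa, mydb.dump.ab, ...), each kept well under
# the 2 GB file-size limit (64 KB here just to keep the demo small).
tmpdir=$(mktemp -d)
seq 1 100000 | split -b 64k - "$tmpdir/mydb.dump."

# Restoring is the reverse: concatenate the pieces in name order.
# With a real dump this would be: cat mydb.dump.* | psql mydb
cat "$tmpdir"/mydb.dump.* > "$tmpdir/rejoined"

# Verify the split/cat round trip is lossless.
seq 1 100000 | cmp - "$tmpdir/rejoined" && echo "round trip OK"
rm -r "$tmpdir"
```

Because split names its pieces in lexicographic order, a plain shell
glob reassembles them correctly.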

A

--
----
Andrew Sullivan                               87 Mowat Avenue
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M6K 3E3
                                         +1 416 646 3304 x110

