On Fri, Jan 18, 2002 at 08:51:47PM -0500, Tom Lane wrote:
>
> The backend is fine with it, but "pg_dump >outfile" will choke when
> it gets past 2Gb of output (at least, that is true on Solaris).
Right. Sorry if I wasn't clear about that; I know that Postgres
itself never writes a single file bigger than 1 GB, but pg_dump and
pg_restore output can easily pass that limit.
> I imagine "pg_dump | split" would do as a workaround, but don't have
> a Solaris box handy to verify.
It will. If you check 'man largefiles' on Solaris (7, anyway; I
don't know about other versions), it lists which basic Solaris
system binaries are large-file aware. /usr/bin/split is one of them,
as is /usr/bin/compress. We are working in a hosted environment,
though, and I didn't completely trust the hosts not to drop one of
the files when sending them to tape, or I would have used split
instead of recompiling.
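
For the archives, here's a sketch of the pipe-through-split approach
('mydb' and the chunk size are placeholders; check your split(1) man
page to confirm it accepts -b with an 'm' suffix):

    # Dump in ~1GB chunks so no single output file crosses the 2GB
    # limit; /usr/bin/split reads stdin when given '-'.
    pg_dump mydb | split -b 1000m - mydb.dump.

    # Reload by concatenating the chunks (split names them
    # mydb.dump.aa, mydb.dump.ab, ... so the glob sorts correctly).
    cat mydb.dump.* | psql mydb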
A
--
----
Andrew Sullivan                         87 Mowat Avenue
Liberty RMS                             Toronto, Ontario Canada
<andrew@libertyrms.info>                M6K 3E3
                                        +1 416 646 3304 x110