Re: Backups WAS: 2 gig file size limit - Mailing list pgsql-hackers

From Joseph Shraibman
Subject Re: Backups WAS: 2 gig file size limit
Date
Msg-id 3B4A42D9.C448D456@selectacast.net
In response to 2 gig file size limit  (Naomi Walker <nwalker@eldocomp.com>)
List pgsql-hackers
Lamar Owen wrote:
>
> On Friday 06 July 2001 18:51, Naomi Walker wrote:
> > If PostgreSQL is run on a system that has a file size limit (2 gig?), where
> > might cause us to hit the limit?
>
> Since PostgreSQL automatically segments its internal data files to get around
> such limits, the only place you will hit this limit will be when making
> backups using pg_dump or pg_dumpall.  You may need to pipe the output of
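
(One untested way to do that piping, just for illustration: stream
pg_dumpall through split(1) so no single output file can reach 2 GB.
The 1 GB chunk size and the "backup.sql." prefix below are only
placeholders.)

    import subprocess

    # Stream pg_dumpall into split(1) so no single backup file
    # can reach the 2 GB filesystem limit.
    dump = subprocess.Popen(["pg_dumpall"], stdout=subprocess.PIPE)
    splitter = subprocess.Popen(
        ["split", "-b", "1000m", "-", "backup.sql."],
        stdin=dump.stdout)
    dump.stdout.close()   # let pg_dumpall see SIGPIPE if split exits early
    splitter.communicate()
    if dump.wait() != 0 or splitter.returncode != 0:
        raise SystemExit("backup failed")

(The pieces can later be concatenated and fed back in, e.g.
cat backup.sql.* | psql template1.)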

Speaking of which.

Doing a dumpall for a backup is taking a long time, and a restore from
the dump files doesn't leave the database in its original state.  Could
a command be added that locks all the files, quickly tars them up, and
then releases the lock?
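
(As far as I know, the only safe file-level copy today means shutting
the postmaster down first and then tarring the data directory, roughly
the untested sketch below, with the PGDATA path assumed.  A lock-based
command would avoid that downtime.)

    import subprocess, tarfile

    PGDATA = "/usr/local/pgsql/data"   # assumed data directory location

    # Cold file-level backup: the postmaster must be stopped first,
    # otherwise the copied files can be inconsistent.
    subprocess.check_call(["pg_ctl", "stop", "-D", PGDATA, "-m", "fast"])
    try:
        tar = tarfile.open("pgdata-backup.tar.gz", "w:gz")
        tar.add(PGDATA, arcname="data")
        tar.close()
    finally:
        subprocess.check_call(["pg_ctl", "start", "-D", PGDATA])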

--
Joseph Shraibman
jks@selectacast.net
Increase signal to noise ratio.  http://www.targabot.com
