Re: [GENERAL] 2 gig file size limit - Mailing list pgsql-hackers

From Neil Conway
Subject Re: [GENERAL] 2 gig file size limit
Msg-id 2585.192.168.40.6.994807025.squirrel@klamath.dyndns.org
In response to 2 gig file size limit  (Naomi Walker <nwalker@eldocomp.com>)
List pgsql-hackers
(This question was answered several days ago on this list; please check
the list archives before posting. I believe it's also in the FAQ.)

> If PostgreSQL is run on a system that has a file size limit (2
> gig?), where might we hit the limit?

Postgres will never internally use files (for tables, indexes, etc.)
larger than 1GB -- at that point, the file is split into segments.

However, you might run into problems when you export data from Pg,
for example if you pg_dump a database whose contents exceed 2GB. In
that case, filter pg_dump's output through gzip or bzip2 to reduce the
size of the dump. If that's still not enough, you can dump individual
tables (with -t) or pipe the dump through 'split' to divide it into
several files.
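The pipeline above can be sketched concretely. The database name
"mydb", the chunk size, and the output filenames are placeholders;
adjust them to your setup:

```shell
# Compress the dump as it is written, so no uncompressed file ever
# lands on disk:
pg_dump mydb | gzip > mydb.sql.gz

# If even the compressed dump would exceed the limit, split it into
# chunks smaller than the filesystem's maximum file size:
pg_dump mydb | gzip | split -b 1000m - mydb.sql.gz.part-

# To restore, reassemble the chunks and feed the decompressed SQL
# back into psql:
cat mydb.sql.gz.part-* | gunzip | psql mydb
```

The same cat-and-gunzip reassembly works regardless of how many
chunks split produced, since the parts sort in creation order.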

Cheers,

Neil

