> (This question was answered several days ago on this list; please check
> the list archives before posting. I believe it's also in the FAQ.)
>
> > If PostgreSQL is run on a system that has a file size limit (2
> > gig?), what might cause us to hit the limit?
>
> Postgres will never internally use files (e.g. for tables, indexes,
> etc) larger than 1GB -- at that point, the file is split.
>
> However, you might run into problems when you export the data from Pg
> to another source, such as if you pg_dump the contents of a database
> larger than 2GB. In that case, filter pg_dump through gzip or bzip2
> to reduce the size of the dump. If that's still not enough, you can
> dump individual tables (with -t) or use 'split' to divide the dump
> into several files.
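
For the archives, those workarounds look roughly like this ("mydb",
"bigtable", and the output file names are just placeholders):

    # compress the dump as it is written
    pg_dump mydb | gzip > mydb.sql.gz

    # restore from the compressed dump
    gunzip -c mydb.sql.gz | psql mydb

    # dump one large table by itself
    pg_dump -t bigtable mydb > bigtable.sql

    # break the dump into pieces that stay under the file size limit
    pg_dump mydb | split -b 1000m - mydb.sql.

    # restore by reassembling the pieces
    cat mydb.sql.* | psql mydb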
I just added the second part of this sentence to the FAQ to try to make
it more visible:
The maximum table size of 16TB does not require large file
support from the operating system. Large tables are stored as
multiple 1GB files, so file system size limits are not important.
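
You can also see the splitting on disk: each table lives in files named
after its relfilenode under the database's directory in $PGDATA, with
1GB segments suffixed .1, .2, and so on. (The OIDs, sizes, and dates
below are invented for illustration.)

    $ cd $PGDATA/base/16384
    $ ls -l 18693*
    -rw-------  1 postgres postgres 1073741824 Jan 10 12:00 18693
    -rw-------  1 postgres postgres 1073741824 Jan 10 12:00 18693.1
    -rw-------  1 postgres postgres  146800640 Jan 10 12:00 18693.2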
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026