
From: Bruce Momjian
Subject: Re: [GENERAL] 2 gig file size limit
Msg-id: 200107110101.f6B11nC23950@candle.pha.pa.us
List: pgsql-hackers
> (This question was answered several days ago on this list; please check
> the list archives before posting. I believe it's also in the FAQ.)
>
> > If PostgreSQL is run on a system that has a file size limit (2
> > gig?), where might we hit the limit?
>
> Postgres will never internally use files (e.g., for tables, indexes,
> etc.) larger than 1GB -- at that point, the file is split.
>
> However, you might run into problems when you export the data from Pg
> to another source, such as if you pg_dump the contents of a database
> larger than 2GB. In that case, filter pg_dump's output through gzip or
> bzip2 to reduce the size of the dump. If that's still not enough, you
> can dump individual tables (with -t) or use 'split' to divide the dump
> into several files.
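
For illustration, here is a minimal sketch of those workarounds; the
database names "mydb" and "newdb" and the table name "bigtable" are
placeholders:

    # compress the dump as it is written
    pg_dump mydb | gzip > mydb.sql.gz

    # or split the dump into pieces that stay under a 2GB limit
    pg_dump mydb | split -b 1000m - mydb.dump.

    # or dump one large table at a time
    pg_dump -t bigtable mydb > bigtable.sql

    # reassemble the split pieces and restore
    createdb newdb
    cat mydb.dump.* | psql newdb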

I just added the second part of the answer quoted above to the FAQ to
try to make it more visible:

    The maximum table size of 16TB does not require large file
    support from the operating system. Large tables are stored as
    multiple 1GB files, so file system size limits are not important.
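
For reference, the segmentation is visible directly in the data
directory: a table's heap lives under base/<database oid>/ as a base
file named by its relfilenode, plus numbered 1GB extensions. A
hypothetical listing (the OIDs shown here are made up):

    $ ls $PGDATA/base/16384/
    24576        <- first 1GB of a large table
    24576.1      <- second 1GB segment
    24576.2      <- third segment, and so on

The 16TB figure follows from the default 8KB block size: 2^31 blocks
of 8KB each is 16TB.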


--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
