On 27 Dec 2000, Michael Graff wrote:
> In the admin guide, under the section "Large Databases" is the
> following paragraph:
>
> Since Postgres allows tables larger than the maximum file size
> on your system, it can be problematic to dump the table to a
> file, since the resulting file will likely be larger than the
> maximum size allowed by your system. As pg_dump writes to the
> standard output, you can just use standard *nix tools to work
> around this possible problem.
>
> This is a generalization of what is, most likely, a failing specific to Linux.
>
> NetBSD (which I use) allows file sizes up to 2^64 bytes -- I don't think
> anyone has generated a PostgreSQL database that large yet.
>
> You might want to qualify that with "Operating systems which support
> 64-bit file sizes (such as NetBSD) will have no problem with large
> databases" or "some operating systems (such as Linux) are limited to
> 2-gigabyte files".
Actually, it's much stranger than that. On Linux the ext2fs filesystem can
store large files, but the kernel's filesystem layer on i386 will not; on
Alphas it can. Apparently the worry was that 64-bit pointers would slow the
i386 version down too much, since it is only a 32-bit CPU. However, the
various xBSD flavours do support large files, and so does Solaris, on all
platforms they support.
Also, NFSv2 is limited to 2GB files even if the client and server themselves
have no such limit; it is a protocol limitation that is fixed in NFSv3. I
doubt anyone is putting their Postgres databases on an NFS server, but you
never know.
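
Incidentally, the "standard *nix tools" the guide is alluding to boil down
to piping pg_dump's output through split (or gzip) so no single file ever
hits the limit. Something like this, where mydb is just a placeholder
database name:

    pg_dump mydb | split -b 1000m - mydb.dump.

and to restore, feed the pieces back through cat:

    cat mydb.dump.* | psql mydb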
> --Michael
Tom