Thread: Rather incorrect text in admin guide
In the admin guide, under the section "Large Databases" is the following
paragraph:

    Since Postgres allows tables larger than the maximum file size
    on your system, it can be problematic to dump the table to a
    file, since the resulting file will likely be larger than the
    maximum size allowed by your system. As pg_dump writes to the
    standard output, you can just use standard *nix tools to work
    around this possible problem.

This is a generalization of, most likely, a failing in linux.

NetBSD (which I use) will allow file sizes up to 2^64 -- I don't think
anyone has generated a postgresql database that large yet.

You might want to qualify that with "Operating systems which support
64-bit file sizes (such as NetBSD) will have no problem with large
databases" or "some operating systems are limited to 2-gigabyte files
(such as linux)".

--Michael
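For reference, the "standard *nix tools" the guide has in mind are `split` and `gzip` applied to pg_dump's standard output. A minimal sketch follows; the database names in the comments are placeholders, and the runnable part uses a stand-in stream so it works without a database:

```shell
# The real pipelines would look like (mydb/newdb are placeholder names):
#   pg_dump mydb | split -b 1000m - dump.   # 1000 MB chunks: dump.aa, dump.ab, ...
#   cat dump.* | psql newdb                 # reassemble and restore
#   pg_dump mydb | gzip > dump.gz           # or compress instead of splitting
# Demonstrate that the split/reassemble round trip is lossless, using a
# stand-in stream since no database is assumed here:
rm -f /tmp/dump.part.*                        # clear any stale chunks
printf 'CREATE TABLE t (i int);\n%.0s' 1 2 3 > /tmp/dump.sql
split -b 16 /tmp/dump.sql /tmp/dump.part.     # tiny 16-byte chunks for the demo
cat /tmp/dump.part.* > /tmp/dump.rejoined.sql # the glob expands in sorted order
cmp -s /tmp/dump.sql /tmp/dump.rejoined.sql && echo "round trip OK"
```

Because split names its output files in lexicographic order, a plain shell glob reassembles the chunks correctly.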
On 27 Dec 2000, Michael Graff wrote:

> In the admin guide, under the section "Large Databases" is the
> following paragraph:
>
>     Since Postgres allows tables larger than the maximum file size
>     on your system, it can be problematic to dump the table to a
>     file, since the resulting file will likely be larger than the
>     maximum size allowed by your system. As pg_dump writes to the
>     standard output, you can just use standard *nix tools to work
>     around this possible problem.
>
> This is a generalization of, most likely, a failing in linux.
>
> NetBSD (which I use) will allow file sizes up to 2^64 -- I don't think
> anyone has generated a postgresql database that large yet.

Actually, it's much stranger than that. The ext2fs filesystem can store
large files, but the filesystem layer on i386 will not; the filesystem
layer on Alphas can. Apparently the worry was that 64-bit pointers would
slow the i386 version down too much, since it is only a 32-bit CPU.
However, the various xBSD flavours do support large files, and so does
Solaris, on all the platforms they support.

Also, NFSv2 is limited to 2 GB even if the client and server have no
issues. It is a protocol thing that is fixed in NFSv3. I doubt that
anyone is putting their postgres databases on an NFS server, but you
never know.

Tom
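The 2 GB ceiling Tom describes is easy to probe empirically on any given OS/filesystem combination: seek just past the 2^31-byte mark and write one byte. This is a sketch, not from the thread; the file created is sparse, so it costs essentially no disk space:

```shell
# Probe for large-file support: seek 2 GB into a new file and write 1 byte.
# On a system limited to 31-bit file offsets, the seek itself fails and dd
# exits nonzero; elsewhere a sparse 2 GB + 1 byte file appears.
if dd if=/dev/zero of=/tmp/lfs-probe bs=1 count=1 seek=2147483648 2>/dev/null
then
    echo "files larger than 2 GB are supported"
else
    echo "hit the 2 GB limit"
fi
ls -l /tmp/lfs-probe 2>/dev/null   # reported size should be 2147483649
rm -f /tmp/lfs-probe
```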
Michael Graff wrote:

> You might want to qualify that with "Operating systems which support
> 64-bit file sizes (such as NetBSD) will have no problem with large
> databases" or "some operating systems are limited to 2-gigabyte files
> (such as linux)"

Or more correctly "(such as some versions of linux)".

---------
Hannu
Michael Graff writes:

> This is a generalization of, most likely, a failing in linux.

The actual maximum file size of the system at hand is irrelevant to the
correctness of the cited paragraph. Postgres will still allow tables
larger than the maximum file size, and the illustrated approaches to
dumping still apply.

> NetBSD (which I use) will allow file sizes up to 2^64 -- I don't think
> anyone has generated a postgresql database that large yet.

Possibly, but how is this relevant to what *could* be done?

-- 
Peter Eisentraut      peter_e@gmx.net       http://yi.org/peter-e/
On Sun, 31 Dec 2000, Ralf Mattes wrote:

> On Fri, Dec 29, 2000 at 07:05:41PM -0800, Tom Samplonius wrote:
> > Also NFSv2 is limited to 2GB, even if the client and server have no
> > issues. It is a protocol thing that is fixed in NFSv3. I doubt that
> > anyone is putting their postgres databases on a NFS server, but you
> > never know.
>
> It would be a bad idea to put the database on NFS but the paragraph
> talks about possible problems during pg_dump and it doesn't seem
> too unreasonable to dump a database to an NFS-mounted area.

Well, NetApp has gone through quite a bit of work getting their NFS
filers Oracle and MS-SQL certified. There are extensive tech notes on
the NetApp site about Oracle and MS-SQL on NFS servers.

> Ralf Mattes
>
> P.S: Happy new year :-)

Tom
I can only imagine how they got databases working through NFS. Having
the backend on one server and the files on another is really quite risky.

> On Sun, 31 Dec 2000, Ralf Mattes wrote:
>
> > It would be a bad idea to put the database on NFS but the paragraph
> > talks about possible problems during pg_dump and it doesn't seem
> > too unreasonable to dump a database to an NFS-mounted area.
>
> Well, NetApp has gone through quite a bit of work getting their NFS
> filers Oracle and MS-SQL certified. There are extensive tech notes on
> the NetApp site about Oracle and MS-SQL on NFS servers.
>
> Tom

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
On Mon, 1 Jan 2001, Bruce Momjian wrote:

> I can only imagine how they got databases working through NFS. Having
> the backend on one server and the files on another is really quite
> risky.

Well, both Microsoft and Oracle support NFS-mounted database device
files with NetApp filers.

Tom
> On Mon, 1 Jan 2001, Bruce Momjian wrote:
>
> > I can only imagine how they got databases working through NFS. Having
> > the backend on one server and the files on another is really quite
> > risky.
>
> Well, both Microsoft and Oracle support NFS mounted database device
> files with NetApp filers.

Quite a trick. NFS, being state-less, is really a bad platform for such
things. I know there is NFS locking, but even that is not 100%, if I
remember correctly.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
On Tue, 2 Jan 2001, Bruce Momjian wrote:

> > Well, both Microsoft and Oracle support NFS mounted database device
> > files with NetApp filers.
>
> Quite a trick. NFS, being state-less, is really a bad platform for such
> things. I know there is NFS locking, but even that is not 100%, if I
> remember correctly.

NFS is state-less if you use UDP connections ... most modern Unices
support TCP NFS as well, providing you a stateful connection ...
Peter Eisentraut <peter_e@gmx.net> writes:

> > NetBSD (which I use) will allow file sizes up to 2^64 -- I don't think
> > anyone has generated a postgresql database that large yet.
>
> Possibly, but how is this relevant to what *could* be done?

Mainly that the whole section seems targeted at older OSs which cannot
write the large files that pg_dumpall can generate. Stating that this is
only a requirement on some OSs seems to make sense.

--Michael
On Fri, Dec 29, 2000 at 07:05:41PM -0800, Tom Samplonius wrote:

> Also NFSv2 is limited to 2GB, even if the client and server have no
> issues. It is a protocol thing that is fixed in NFSv3. I doubt that
> anyone is putting their postgres databases on a NFS server, but you
> never know.

It would be a bad idea to put the database on NFS, but the paragraph
talks about possible problems during pg_dump, and it doesn't seem too
unreasonable to dump a database to an NFS-mounted area.

Ralf Mattes

P.S.: Happy new year :-)
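Dumping to an NFSv2 mount works as long as no single file crosses the 2 GB protocol cap; compressing and chunking pg_dump's output stream keeps each piece under it. A hedged sketch — the mount path and database names in the comments are placeholders, and the runnable part uses a stand-in stream so it works without a database or NFS mount:

```shell
# Over NFSv2 each file caps at 2 GB, so compress and chunk the dump:
#   pg_dump mydb | gzip | split -b 1000m - /mnt/nfs/mydb.dump.gz.
#   cat /mnt/nfs/mydb.dump.gz.* | gunzip | psql newdb
# Round trip with a stand-in stream (no database or NFS mount assumed):
rm -f /tmp/d.gz.part.*                       # clear any stale chunks
printf 'SELECT %d;\n' 1 2 3 | gzip > /tmp/d.gz
split -b 8 /tmp/d.gz /tmp/d.gz.part.         # tiny 8-byte chunks for the demo
cat /tmp/d.gz.part.* | gunzip > /tmp/d.out   # reassemble, then decompress
printf 'SELECT %d;\n' 1 2 3 | cmp -s - /tmp/d.out && echo "intact"
```

Splitting the already-compressed stream is byte-level, so gunzip sees exactly the bytes gzip produced once the chunks are concatenated back in order.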
On Tue, 2 Jan 2001, The Hermit Hacker wrote:

> > Quite a trick. NFS, being state-less, is really a bad platform for
> > such things. I know there is NFS locking, but even that is not 100%,
> > if I remember correctly.
>
> NFS is state-less if you use UDP connections ... most modern Unices
> support TCP NFS as well, providing you a stateful connection ...

A stateless protocol riding on a stateful protocol is still stateless.
Think HTTP.

Whether NFS is stateless or not does not matter for database
applications. In the supported configuration, only one server is allowed
to mount the database at one time anyway. The only issue is making sure
that NFS writes actually get written when the NFS server says they have
been written, to ensure database consistency. Many NFS implementations
ack NFS writes before they are written; NetApp uses battery-backed RAM
as a write cache.

Tom