Thread: Tablespaces and NFS
Hi,

Has anyone tried a setup combining tablespaces with NFS-mounted partitions?

I'm considering the idea as a performance booster --- our problem is that we are
renting our dedicated server from a hosting company that does not offer much
flexibility in terms of custom hardware configuration; so, the *ideal* alternative
of loading the machine with 4 or 6 hard drives and using tablespaces is off the
table (no pun intended).

We could, however, set up a few additional servers where we could configure NFS
shares, mount them on the main PostgreSQL server, and configure tablespaces to
"load balance" the access to disk.

Would you estimate that this will indeed boost performance? (Our system does lots
of writing to the DB --- in all forms: inserts, updates, and deletes.)

As a corollary question: what about the WAL and tablespaces? Is the WAL
"distributed" when we set up a tablespace and create tables in it? (That is, are
the WAL records corresponding to the tables in a tablespace stored in the
directory corresponding to the tablespace? Or is it only the data, while the WAL
remains the one and only?)

Thanks,

Carlos
--
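P.S. In case it helps clarify the question, this is roughly what I have in mind
(the tablespace name, path, and table definition are just made-up placeholders,
and I'm assuming the NFS share has already been mounted read-write at
/mnt/nfs/fs1):

    -- point a tablespace at the NFS-mounted directory
    -- (the directory must be empty and owned by the postgres user)
    CREATE TABLESPACE nfs1 LOCATION '/mnt/nfs/fs1';

    -- then create (or move) some of the busiest tables onto it
    CREATE TABLE orders (
        id      serial PRIMARY KEY,
        payload text
    ) TABLESPACE nfs1;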
Carlos Moreno wrote:
> Anyone has tried a setup combining tablespaces with NFS-mounted partitions?

There has been some discussion of this recently; you can find it in the archives
(http://archives.postgresql.org/). The word seems to be that NFS can lead to data
corruption.

Craig
On 9/19/07, Carlos Moreno <moreno_pg@mochima.com> wrote:
> Hi,
>
> Anyone has tried a setup combining tablespaces with NFS-mounted partitions?
> [...]
> We could, however, set up a few additional servers where we could configure
> NFS shares, mount them on the main PostgreSQL server, and configure
> tablespaces to "load balance" the access to disk.
>
> Would you estimate that this will indeed boost performance?? (our system
> does lots of writing to DB --- in all forms: inserts, updates, and deletes)

About 5 months ago, I did an experiment serving tablespaces out of AFS, another
shared file system.

You can read my full post at
http://archives.postgresql.org/pgsql-admin/2007-04/msg00188.php

On the whole, you're not going to see a performance improvement running
tablespaces on NFS (unless the disk system on the NFS server is a lot faster),
since you have to go through the network as well as NFS, both of which add
overhead. Usually, locking mechanisms on shared file systems don't play nice with
databases. You're better off using something else to load balance or replicate
data.

Peter

P.S. Why not just set up those servers you're planning on using as NFS shares as
your postgres server(s)?
> About 5 months ago, I did an experiment serving tablespaces out of
> AFS, another shared file system.
>
> You can read my full post at
> http://archives.postgresql.org/pgsql-admin/2007-04/msg00188.php

Thanks for the pointer! I had done a search on the archives, but didn't find this
one (strange, since I included the keywords tablespace and NFS, both of which show
up in your message). Anyway...

One detail I don't understand --- why do you claim that "You can't take advantage
of the shared file system because you can't share tablespaces among clusters or
servers"?

With NFS, I could mount, say, /mnt/nfs/fs1 to be served by NFS server #1, and then

    create tablespace nfs1 location '/mnt/nfs/fs1';

Why wouldn't that work? (Or was the comment specific to AFS?)

BTW, I'm not too worried by the lack of security with NFS, since both the "main"
postgres machine and the potential NFS servers that I would use would be
completely "private" machines (in that there are no users, and no other services
are running on them). I would set up a strict firewall policy so that the NFS
server only accepts connections from the main postgres machine.

Back to your comment:

> On the whole, you're not going to see a performance improvement
> running tablespaces on NFS (unless the disk system on the NFS server
> is a lot faster)

This seems to be the killer point --- mainly because the network connection is
100Mbps (around 10 MB/sec --- less than 1/4 of the performance we'd expect from an
internal hard drive). If it were at least a Gigabit connection, I might still be
tempted to retry the experiment. I was thinking that *maybe* the latencies and
contention due to head movements (on the order of milliseconds) would dominate,
and thus a network-distributed cluster of hard drives would end up winning.

> P.S. Why not just set up those servers you're planning on using as NFS
> shares as your postgres server(s)?

We're clear that that would be the *optimal* solution --- the problem is, there's
a lot of client-side software that we would have to change; I'm first looking for
a "transparent" solution in which I could distribute the load at a hardware level,
seeing the DB server as a single entity --- the ideal solution, of course, being
the use of tablespaces with 4 or 6 *internal* hard disks (but that's not an option
with our current web hoster).

Anyway, I'll keep working on alternative solutions --- I think I have enough
evidence to close this NFS door.

Thanks!
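P.S. For anyone checking my back-of-the-envelope numbers: 100 Mbps / 8 = 12.5
MB/sec of raw bandwidth, and with TCP and NFS overhead something closer to 10
MB/sec sustained is what I'd realistically expect; a Gigabit link would be about
125 MB/sec raw --- roughly in the same ballpark as a single local drive, rather
than a clear win.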
> Anyway... One detail I don't understand --- why do you claim that
> "You can't take advantage of the shared file system because you can't
> share tablespaces among clusters or servers"?

I say that because you can't set up two servers to point to the same tablespace
(i.e. you can't have server A and server B both point to the tablespace in
/mnt/nfs/postgres/), which defeats one of the main purposes of using a shared file
system: seeing, using, and editing files from anywhere. This is ill-advised and
probably won't work, for 2 reasons.

- Postgres tablespaces require empty directories for initialization. If you
  create a tablespace on server A, it puts files in the previously empty
  directory. If you then try to create a tablespace on server B pointing to the
  same location, it won't work since the directory is no longer empty. You can
  get around this, in theory, but you'd either have to directly mess with system
  tables or fool Postgres into thinking that each server independently created
  that tablespace (to which anyone will say, NO!!!!).

- If you do manage to fool postgres into having two servers pointing at the same
  tablespace, the servers really, REALLY won't play nice with these shared
  resources, since they have no knowledge of each other (even two clusters on the
  same server don't play nice with memory). Basically, if they compete for the
  same file, either I/O will be EXTREMELY slow because of file-locking mechanisms
  in the file system, or you open things up to race conditions and data
  corruption. In other words: BAD!!!!

I know this doesn't fully apply to you, but I thought I should explain my points
better since you asked so nicely :-)

> This seems to be the killer point --- mainly because the network
> connection is 100Mbps (around 10 MB/sec --- less than 1/4 of the
> performance we'd expect from an internal hard drive). If it were at
> least a Gigabit connection, I might still be tempted to retry the
> experiment. I was thinking that *maybe* the latencies and contention
> due to head movements (on the order of milliseconds) would dominate,
> and thus a network-distributed cluster of hard drives would end up
> winning.

If you get decently fast disks, or put some slower disks in RAID 10, you'll
easily get >100 MB/sec (and that's a conservative estimate). Even with a Gbit
network, you'll get, in theory, 128 MB/sec, and that's assuming that the NFS'd
disks aren't a bottleneck.

> We're clear that that would be the *optimal* solution --- the problem
> is, there's a lot of client-side software that we would have to
> change; I'm first looking for a "transparent" solution in which I
> could distribute the load at a hardware level, seeing the DB server
> as a single entity --- the ideal solution, of course, being the use
> of tablespaces with 4 or 6 *internal* hard disks (but that's not an
> option with our current web hoster).

I sadly don't know enough networking to tell you how to tell the client software
"no really, I'm over here." However, one of the things I'm fond of is using a
module to store connection strings, and dynamically loading said module on the
client side. For instance, with Perl I use...

    use DBI;
    use DBD::Pg;
    use My::DBs;

    my $dbh = DBI->connect($My::DBs::mydb);

Assuming that the module and its entries are kept up to date, it will "just
work." That way, there's only 1 module to change instead of n client apps. I can
have a new server with a new name up without changing any client code.
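P.S. A quick illustration of the empty-directory point (the paths and tablespace
name are made up, and the exact error text may differ between Postgres versions):

    -- on server A, with /mnt/nfs/postgres initially empty: this succeeds,
    -- and Postgres populates the directory with its own files
    CREATE TABLESPACE shared_space LOCATION '/mnt/nfs/postgres';

    -- on server B, pointing at the very same NFS directory: this now fails,
    -- roughly with: ERROR: directory "/mnt/nfs/postgres" is not empty
    CREATE TABLESPACE shared_space LOCATION '/mnt/nfs/postgres';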
> Anyway, I'll keep working on alternative solutions --- I think
> I have enough evidence to close this NFS door.

That's probably for the best.
Thanks again, Peter, for expanding on these points.

Peter Koczan wrote:
>> Anyway... One detail I don't understand --- why do you claim that
>> "You can't take advantage of the shared file system because you can't
>> share tablespaces among clusters or servers"?
>
> I say that because you can't set up two servers to point to the same
> tablespace

My bad! Definitely --- I was only looking at it through the point of view of my
current problem at hand, so I misinterpreted what you said; it is clear and
unambiguous, and I agree that there is little room for debate about it. In my
mind, since I'm talking about *one* postgres server spreading its storage across
several filesystems, I didn't understand why you seemed to be claiming that this
cannot be combined with tablespaces.

> I know this doesn't fully apply to you, but I thought I should explain
> my points better since you asked so nicely :-)

:-)  It's appreciated!

> If you get decently fast disks, or put some slower disks in RAID 10,
> you'll easily get >100 MB/sec (and that's a conservative estimate).
> Even with a Gbit network, you'll get, in theory, 128 MB/sec, and that's
> assuming that the NFS'd disks aren't a bottleneck.

Still, at 128 MB/sec (modulo some possible NFS bottlenecks), I would be a bit more
optimistic, and would actually be tempted to retry your experiment with my setup.
After all, with the setup that we have *today*, I don't think I get a sustained
transfer rate above 80 or 90 MB/sec from the hard drives (as far as I know,
they're plain vanilla enterprise-grade SATA2 drives, which I believe don't get
beyond 90 MB/sec sustained transfer rate).

> I sadly don't know enough networking to tell you how to tell the client
> software "no really, I'm over here." However, one of the things I'm
> fond of is using a module to store connection strings, and dynamically
> loading said module on the client side. For instance, with Perl I
> use...
>
>     use DBI;
>     use DBD::Pg;
>     use My::DBs;
>
>     my $dbh = DBI->connect($My::DBs::mydb);
>
> Assuming that the module and its entries are kept up to date, it will
> "just work." That way, there's only 1 module to change instead of n
> client apps.

Oh no --- the problem we'd have would be at the level of the database design and
access. For instance, some of the tables that I think are bottlenecking (the ones
I would like to spread with tablespaces) are quite interconnected with each other
--- foreign keys come and go, and on the client applications many transaction
blocks include several of those tables --- so if I were to spread those tables
across several backends, I'm not sure the changes would be easy :-(

> I can have a new server with a new name up without
> changing any client code.

But then you're talking about replicating data, so that multiple client apps can
pick one of the several available "quasi-read-only" servers, I'm guessing?

>> Anyway, I'll keep working on alternative solutions --- I think
>> I have enough evidence to close this NFS door.
>
> That's probably for the best.

Yep --- still closing that door!! The points I'm arguing in this message are just
in the spirit of discussing and better understanding the issue; I'm still
convinced by your evidence.

Thanks,

Carlos
--