Thread: postgreSQL on NAS/SAN?
Hello, is it possible to use a PostgreSQL database on a NAS or a SAN? I read somewhere that you should not install a database on RAID5, but most of the NAS and SAN systems I know use RAID5. Does anyone know anything about this? Daniel -- postgreSQL on Netware - the red elephant http://postgresql.dseichter.org Last update: 26th May 2003
On Sun, 15 Jun 2003, Daniel Seichter wrote: > Hello, > is it possible, to use a postgreSQL database on a NAS or a SAN? I > somewhere read, that you should not install a database to a RAID5 but the > most NAS and SAN I know, are using RAID5. > Does anyone know aout anything like this? RAID5 is fine for a database. It provides a fair compromise between speed, safety, and economy. If you need more speed, you might need to go to a RAID 1+0 (or 0+1). Running PostgreSQL on a NAS or SAN is quite doable, but you should test your configuration carefully. Note that many NAS units report write completion upon receipt of the data (i.e. before it's actually written), so you may have data integrity issues should the power go out in the middle of a transaction. SANs are generally more robust than NAS, but I'm not that familiar with running a database on one. One thing you CANNOT do is allow two postmasters to write to the same data store. That WILL corrupt your database and cause problems.
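Scott's warning about NAS units that acknowledge writes before committing them can be sanity-checked before trusting a storage path with a database. A rough, illustrative sketch (all names here are made up, and this is no substitute for a dedicated test tool): time repeated write+fsync cycles. A 7200 RPM disk can commit at most about 120 rotations per second, so an average far below a few milliseconds per fsync suggests something in the path is acknowledging writes it has not yet made durable.

```python
import os
import tempfile
import time

def time_fsync(path, iterations=50, block=b"x" * 8192):
    """Average the latency of write+fsync cycles in the given directory.

    An honest spinning disk is bounded by rotational latency (~8 ms per
    commit at 7200 RPM), so averages far below that hint at a cache
    acknowledging writes before they reach stable storage.
    """
    fd, name = tempfile.mkstemp(dir=path)
    try:
        start = time.monotonic()
        for _ in range(iterations):
            os.write(fd, block)
            os.fsync(fd)  # request a flush all the way to stable media
        elapsed = time.monotonic() - start
    finally:
        os.close(fd)
        os.unlink(name)
    return elapsed / iterations

if __name__ == "__main__":
    print("avg fsync latency: %.3f ms" % (time_fsync(".") * 1000))
```

Run it once on the NAS mount and once on a known-good local disk; if the NAS result is dramatically faster than the physics allows, pulling its power mid-transaction is likely to lose data the database believed was committed.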
Hello Scott, > RAID5 is fine for a database. It provides a fair compromise between > speed, safety, and economy. If you need more speed, you might need to go > to a RAID 1+0 (or 0+1). OK - I ask because a Progress person (not PostgreSQL) said that it would not be good to run a database (any database, not only Progress) on a RAID5 system. > running postgresql on a NAS or SAN is quite doable, but you should test > your configuration carefully. Note that many NAS units report write > completion upon receipt of the data (i.e. before it's actually written) so > you may have data integrity issues should the power go out in the middle > of a transaction. OK, then we should use a SAN, if we need to use one. > One thing you CANNOT do is allow two postmasters to write to the same data > store. That WILL corrupt your database and cause problems. Does this mean that PostgreSQL can't be configured as a cluster? We don't need one, but we do not know what the future brings :o( Daniel
On Tue, 17 Jun 2003, Daniel Seichter wrote: > Hello Scott, > > > RAID5 is fine for a database. It provides a fair compromise between > > speed, safety, and economy. If you need more speed, you might need to go > > to a RAID 1+0 (or 0+1). > Ok, well, because a progress-person (not postgresql) said, that it will be > not good for running a (general, not only progress) database on a RAID5 > System. It really all depends. If it's a reporting database with only a tiny percentage of accesses being write-oriented, then RAID5 is a great solution. If it's primarily transactional with lots of writing, then RAID5 starts to be a less attractive option. Generally, the more drives you throw at a RAID5 the better it will perform, whereas a simple 4-disk setup under RAID 1+0 can usually run quite fast. > > running postgresql on a NAS or SAN is quite doable, but you should test > > your configuration carefully. Note that many NAS units report write > > completion upon receipt of the data (i.e. before it's actually written) so > > you may have data integrity issues should the power go out in the middle > > of a transaction. > Ok, then we should use a SAN, if we need to use one. Or make sure that if you use a NAS, it isn't set to say it wrote the data before it actually did. > > One thing you CANNOT do is allow two postmasters to write to the same data > > store. That WILL corrupt your database and cause problems. > This means, that postgreSQL isn't for configuring clusters? We don't need > one, but we do not know what the future brings :o( Currently, any clustering / failover / replication is an add-on. If you want two PostgreSQL servers with replication and failover between them, they each need their own data store. That store could be on the same storage system; they would just have to be in different directories. Each replication solution for PostgreSQL has its advantages and disadvantages. Are you looking more for failover, load balancing, or a hot spare?
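Scott's read-mostly vs write-heavy distinction comes from RAID5's small-write penalty: each small random write typically costs four physical I/Os (read old data, read old parity, write new data, write new parity), while RAID 1+0 costs two (one per mirror side). A toy calculation, purely illustrative:

```python
def backend_ios(writes, reads, layout):
    """Physical I/O operations implied by a logical workload.

    RAID5 pays a 4-I/O penalty per small random write (read old data,
    read old parity, write new data, write new parity); RAID 1+0 pays 2
    (one write per mirror side). Reads cost 1 either way.
    """
    per_write = {"raid5": 4, "raid10": 2}[layout]
    return reads + writes * per_write

# A read-mostly reporting load: RAID5 barely loses.
print(backend_ios(writes=10, reads=990, layout="raid5"))    # -> 1030
print(backend_ios(writes=10, reads=990, layout="raid10"))   # -> 1010

# A write-heavy transactional load: the RAID5 penalty dominates.
print(backend_ios(writes=500, reads=500, layout="raid5"))   # -> 2500
print(backend_ios(writes=500, reads=500, layout="raid10"))  # -> 1500
```

Real arrays blur this with parity caches and full-stripe writes, but the direction of the comparison holds.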
Hello Scott, > Are you looking more for failover, load balancing, hot > spare? I am looking for a hot spare, so that if one server crashes, the second will "spare" for it. If this database goes down (down meaning for longer than 2 hours), more than two other databases will stop working (they could continue working, but without new data, so it would be senseless). But at the moment it is all more in my mind than in bits and bytes, because we are in the planning phase. Daniel
On 17 Jun 2003 at 15:41, Daniel Seichter wrote: > Hello Scott, > > > Are you looking more for failover, load balancing, hot > > spare? > I am looking for a hot spare, so if one server crashed, the second will > "spare" it, because if this database will be down (down is meant for longer > than 2 hours) more than two other databases will not continue working (they > could continue working, but without new data, so it will be senseless). If you have up to 2 hours to work with, maybe you could go for an asynchronous replication solution based on replicated, checkpointed WAL segments. Using such a solution plus round-robin DNS plus a heartbeat service should yield what you are looking for. Bye Shridhar -- Senate, n.: A body of elderly gentlemen charged with high duties and misdemeanors. -- Ambrose Bierce
On Tue, Jun 17, 2003 at 15:41:45 +0200, Daniel Seichter <daniel@dseichter.de> wrote: > Hello Scott, > > > Are you looking more for failover, load balancing, hot > > spare? > I am looking for a hot spare, so if one server crashed, the second will > "spare" it, because if this database will be down (down is meant for longer > than 2 hours) more than two other databases will not continue working (they > could continue working, but without new data, so it will be senseless). Once the original postmaster has stopped running (say, because its server died) you could run a different postmaster (on, say, another server) and access the same data on your storage system. But if you do this you will want some sort of safety system so that two postmasters can't accidentally run at the same time. The normal interlock won't work for you because it keeps a PID file and checks to see if the pid in that file (if any) is still running. That doesn't work across servers.
There is a system which uses your serial port so that if server B detects that server A has gone down, it will send a signal over the serial port which disconnects server A's power supply. That way, server B never "accidentally" takes over for server A when in fact server A is still running. I don't remember where these are sold, but they were mentioned in the MissionCriticalLinux system documentation. Jon On Tue, 17 Jun 2003, Bruno Wolff III wrote: > On Tue, Jun 17, 2003 at 15:41:45 +0200, > Daniel Seichter <daniel@dseichter.de> wrote: > > Hello Scott, > > > > > Are you looking more for failover, load balancing, hot > > > spare? > > I am looking for a hot spare, so if one server crashed, the second will > > "spare" it, because if this database will be down (down is meant for longer > > than 2 hours) more than two other databases will not continue working (they > > could continue working, but without new data, so it will be senseless). > > Once the orignal postmaster has stopped running (say because its server > died) you could run a different postmaster (on say another server) and > access the same data on your storage system. But if you do this you > will want some sort of safety system so that two postmasters can't > accidentally run at the same time. The normal interlock won't work for you > because it keeps a PID file and checks to see if the pid in that file (if any) > is still running. That doesn't work accross servers. > > ---------------------------(end of broadcast)--------------------------- > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org >
On Tue, Jun 17, 2003 at 06:57:15AM -0600, scott.marlowe wrote: > Currently, any clustering / failover / replication is an add on. If you > were to want to have two Postgresql servers with replication and failover > between them, they would each need their own data store. That store could > be on the same storage system, they would just have to be in different > directories. Why? This is only needed if both are active, that is, for load balancing. The usual failover case of a hot standby does not require this. You can make the backup machine start its postmaster as soon as the other one crashes. Michael -- Michael Meskes Email: Michael at Fam-Meskes dot De ICQ: 179140304, AIM: michaelmeskes, Jabber: meskes@jabber.org Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!
On Tue, Jun 17, 2003 at 03:41:45PM +0200, Daniel Seichter wrote: > I am looking for a hot spare, so if one server crashed, the second will > "spare" it, because if this database will be down (down is meant for longer > than 2 hours) more than two other databases will not continue working (they > could continue working, but without new data, so it will be senseless). Not sure what you mean. Shall the second machine take over? Since this should be hot, 2 hours is a lot of time. Using a private network you can detect failures almost immediately. I do recommend local checking, like watchdog or mon, so a restart is tried before the takeover. And I'd make sure the primary machine stays down. Has been done before. :-) Michael -- Michael Meskes Email: Michael at Fam-Meskes dot De ICQ: 179140304, AIM: michaelmeskes, Jabber: meskes@jabber.org Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!
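Michael's escalation order (retry the probe, attempt a local restart, only then take over) can be written down as a small policy over a stream of health-check results. A sketch with arbitrarily chosen thresholds:

```python
def decide(results, failures_needed=3):
    """Fold a sequence of probe results into an action.

    `results` is an iterable of booleans from a health probe (e.g. a
    trivial test query over the private link). Returns "ok" while the
    primary answers, "restart" after `failures_needed` consecutive
    misses (try the cheap local fix first), and "takeover" after twice
    that many (the restart evidently did not help).
    """
    misses = 0
    action = "ok"
    for up in results:
        misses = 0 if up else misses + 1
        if misses == failures_needed:
            action = "restart"
        elif misses >= failures_needed * 2:
            return "takeover"
    return action
```

In a real monitor this runs in a loop with a probe interval; keeping the policy as a pure function over probe results makes the thresholds easy to test before trusting them with a takeover.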
On Tue, 2003-06-17 at 12:05, Michael Meskes wrote: > On Tue, Jun 17, 2003 at 03:41:45PM +0200, Daniel Seichter wrote: > > I am looking for a hot spare, so if one server crashed, the second will > > "spare" it, because if this database will be down (down is meant for longer > > than 2 hours) more than two other databases will not continue working (they > > could continue working, but without new data, so it will be senseless). > > Not sure what you mean. Shall the second machine take over? Since this > should be hot 2 hours is a lot of time. Using a private network you can > detect failures almost immediately. > > I do recommend a a local checking like watchdog or mon, so a restart is > tried before the takeover. And I'd make sure the primary machine stays > down. This is going to sound bad to users of Open Source OSs and databases, but for all the work that has to go into clustering machines and making databases work with them... Why not use a clustered-by-design OS like VMS? It is very easy to put a couple of dual-Alpha boxen cluster-connected via fiber to SCSI devices. A cluster-aware relational database like Rdb runs on all nodes of a cluster in a totally shared-disk environment. While both nodes are working fine, half of the work goes to each node, and if one node goes down, the other node still does all the work. -- +-----------------------------------------------------------+ | Ron Johnson, Jr. Home: ron.l.johnson@cox.net | | Jefferson, LA USA http://members.cox.net/ron.l.johnson | | | | "Oh, great altar of passive entertainment, bestow upon me | | thy discordant images at such speed as to render linear | | thought impossible" (Calvin, regarding TV) | +-----------------------------------------------------------+
On Tue, Jun 17, 2003 at 01:11:49PM -0500, Ron Johnson wrote: > > I do recommend a a local checking like watchdog or mon, so a restart is > > tried before the takeover. And I'd make sure the primary machine stays > > down. > > This is going to sound bad to users of Open Source OSs and databases, > but for all work that has to go into clustering machines and making > databases work with them... > ... Which indeed is load balancing again. I thought we were talking about a simple failover solution. Yes, I know VMS can do that as well. :-) Michael -- Michael Meskes Email: Michael at Fam-Meskes dot De ICQ: 179140304, AIM: michaelmeskes, Jabber: meskes@jabber.org Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!
Ron Johnson <ron.l.johnson@cox.net> writes: > On Tue, 2003-06-17 at 12:05, Michael Meskes wrote: >> On Tue, Jun 17, 2003 at 03:41:45PM +0200, Daniel Seichter wrote: >> > I am looking for a hot spare, so if one server crashed, the >> > second will "spare" it, because if this database will be down >> > (down is meant for longer than 2 hours) more than two other >> > databases will not continue working (they could continue working, >> > but without new data, so it will be senseless). >> >> Not sure what you mean. Shall the second machine take over? Since >> this should be hot 2 hours is a lot of time. Using a private >> network you can detect failures almost immediately. >> >> I do recommend a a local checking like watchdog or mon, so a >> restart is tried before the takeover. And I'd make sure the primary >> machine stays down. > > This is going to sound bad to users of Open Source OSs and > databases, but for all work that has to go into clustering machines > and making databases work with them... > > Why not use a clustered-by-design OS like VMS? It is very easy to > put a couple of dual-Alpha boxen cluster-connected via fiber to SCSI > devices. A cluster-aware relational database like Rdb runs on all > nodes of a cluster in a totally shared-disk environment. While both > nodes are working fine, half of the work goes to either node, and if > one node goes down, the other node still does all the work. I can't speak for everyone else, but I can tell you *my* reasons for going with PostgreSQL as opposed to a fancier solution like RDB on VMS, or Oracle on Solaris, or DB2 on whatever IBM platform sounds interesting today. PostgreSQL does what I need it to do without breaking the bank. Sure, it's a little extra work getting PostgreSQL to do something like hot failover (or load balancing), but when you can't afford the other options you make do with what you have. Besides which, PostgreSQL on x86 hardware is almost certainly the best value around. 
No one touches it on a price/performance basis, and PostgreSQL has an impressive array of features. For example, can I connect my Zope application server running Linux to RDB running on VMS? I don't believe I can (Oracle would work, however). PostgreSQL plays nicely with just about any set of development tools you might care to mention. Jason
On Tue, 2003-06-17 at 13:20, Michael Meskes wrote: > On Tue, Jun 17, 2003 at 01:11:49PM -0500, Ron Johnson wrote: > > > I do recommend a a local checking like watchdog or mon, so a restart is > > > tried before the takeover. And I'd make sure the primary machine stays > > > down. > > > > This is going to sound bad to users of Open Source OSs and databases, > > but for all work that has to go into clustering machines and making > > databases work with them... > > ... > > Which indeed is load balancing again. I thought we were talking about a > simple failover solution. Yes, I know VMS can do that as well. :-) How quickly do the (h/w and manpower) costs of "simple failover" escalate? -- +-----------------------------------------------------------+ | Ron Johnson, Jr. Home: ron.l.johnson@cox.net | | Jefferson, LA USA http://members.cox.net/ron.l.johnson | | | | "Oh, great altar of passive entertainment, bestow upon me | | thy discordant images at such speed as to render linear | | thought impossible" (Calvin, regarding TV) | +-----------------------------------------------------------
On Tue, 2003-06-17 at 14:01, Jason Earl wrote: > Ron Johnson <ron.l.johnson@cox.net> writes: > > > On Tue, 2003-06-17 at 12:05, Michael Meskes wrote: > >> On Tue, Jun 17, 2003 at 03:41:45PM +0200, Daniel Seichter wrote: [snip] > > This is going to sound bad to users of Open Source OSs and > > databases, but for all work that has to go into clustering machines > > and making databases work with them... > > > > Why not use a clustered-by-design OS like VMS? It is very easy to > > put a couple of dual-Alpha boxen cluster-connected via fiber to SCSI > > devices. A cluster-aware relational database like Rdb runs on all > > nodes of a cluster in a totally shared-disk environment. While both > > nodes are working fine, half of the work goes to either node, and if > > one node goes down, the other node still does all the work. > > I can't speak for everyone else, but I can tell you *my* reasons for > going with PostgreSQL as opposed to a fancier solution like RDB on > VMS, or Oracle on Solaris, or DB2 on whatever IBM platform sounds > interesting today. PostgreSQL does what I need it to do without > breaking the bank. Sure, it's a little extra work getting PostgreSQL > to do something like hot failover (or load balancing), but when you > can't afford the other options you make do with what you have. Disregarding clustering, I agree with you completely. > Besides which, PostgreSQL on x86 hardware is almost certainly the best > value around. No one touches it on a price/performance basis, and > PostgreSQL has an impressive array of features. For example, can I > connect my Zope application server running Linux to RDB running on > VMS? I don't believe I can (Oracle would work, however). PostgreSQL > plays nicely with just about any set of development tools you might > care to mention. If it can connect via SQL*Net or ODBC, Rdb will talk to it. -- +-----------------------------------------------------------+ | Ron Johnson, Jr. 
Home: ron.l.johnson@cox.net | | Jefferson, LA USA http://members.cox.net/ron.l.johnson | | | | "Oh, great altar of passive entertainment, bestow upon me | | thy discordant images at such speed as to render linear | | thought impossible" (Calvin, regarding TV) | +-----------------------------------------------------------+
On Tue, 17 Jun 2003, Jason Earl wrote: > Ron Johnson <ron.l.johnson@cox.net> writes: > > > On Tue, 2003-06-17 at 12:05, Michael Meskes wrote: > >> On Tue, Jun 17, 2003 at 03:41:45PM +0200, Daniel Seichter wrote: > >> > I am looking for a hot spare, so if one server crashed, the > >> > second will "spare" it, because if this database will be down > >> > (down is meant for longer than 2 hours) more than two other > >> > databases will not continue working (they could continue working, > >> > but without new data, so it will be senseless). > >> > >> Not sure what you mean. Shall the second machine take over? Since > >> this should be hot 2 hours is a lot of time. Using a private > >> network you can detect failures almost immediately. > >> > >> I do recommend a a local checking like watchdog or mon, so a > >> restart is tried before the takeover. And I'd make sure the primary > >> machine stays down. > > > > This is going to sound bad to users of Open Source OSs and > > databases, but for all work that has to go into clustering machines > > and making databases work with them... > > > > Why not use a clustered-by-design OS like VMS? It is very easy to > > put a couple of dual-Alpha boxen cluster-connected via fiber to SCSI > > devices. A cluster-aware relational database like Rdb runs on all > > nodes of a cluster in a totally shared-disk environment. While both > > nodes are working fine, half of the work goes to either node, and if > > one node goes down, the other node still does all the work. > > I can't speak for everyone else, but I can tell you *my* reasons for > going with PostgreSQL as opposed to a fancier solution like RDB on > VMS, or Oracle on Solaris, or DB2 on whatever IBM platform sounds > interesting today. PostgreSQL does what I need it to do without > breaking the bank. 
Sure, it's a little extra work getting PostgreSQL > to do something like hot failover (or load balancing), but when you > can't afford the other options you make do with what you have. Keep in mind, if you need more performance than x86, you can always buy a used E10K online for ~$24,000 or so (there's one on Eb*y now with 20 400 MHz CPUs for less than that, and it's been coming up week after week with no buyers.) Older mainframes are there for $5,000 or so as well. The linux kernel is supposed to have hot swappable hardware support in it eventually for both those platforms, so you've got your 24/7 with no need for a second box. Of course, I'm sure for $5,000 you could afford to buy two mainframes and fail them over yourself on the one time every fifty years or so that one fails. :-) I'm certain the license fees for Rdb and VMS are no small amount, and with the E10K or mainframe, you own it outright.
Thread: Linux supports hot-swappable hardware? [was Re: postgreSQL on NAS/SAN?]
From: "Shridhar Daithankar"
On 18 Jun 2003 at 5:36, scott.marlowe wrote: > The linux kernel is supposed to have hot swappable hardware support in it > eventually for both those platforms, so you've got your 24/7 with no need Linux supports hot-swappable hardware? As in swapping CPU/RAM/add-on cards on the fly? That is news to me. Could you point me to more resources on this? Bye Shridhar -- work, n.: The blessed respite from screaming kids and soap operas for which you actually get paid.
At 14:41 18.06.2003, Shridhar Daithankar said: --------------------[snip]-------------------- > >Linux supports hot-swappable hardware? As in swappng CPU/RAM/Add on cards on >the fly? > >That is news to me. Could you point me to more resources on this? --------------------[snip]-------------------- AFAIK swappable hardware needs to be supported by the hardware as well. Currently hard disks and power supplies can be hot-swapped; I have never heard of the possibility of hot-swapping directly on the data bus (memory, CPU, slot cards). As for SCSI disks and power supplies, Linux supports hot swap. Check out Dell servers, for example. -- >O Ernest E. Vogelsinger (\) ICQ #13394035 ^ http://www.vogelsinger.at/
You can hotswap PCI cards that follow the CompactPCI specification. This has been in the kernel for years, and I think was originally authored by Compaq. Jon On Wed, 18 Jun 2003, Ernest E Vogelsinger wrote: > At 14:41 18.06.2003, Shridhar Daithankar said: > --------------------[snip]-------------------- > > > >Linux supports hot-swappable hardware? As in swappng CPU/RAM/Add on cards on > >the fly? > > > >That is news to me. Could you point me to more resources on this? > --------------------[snip]-------------------- > > AFAIK swappable hardware needs to be supported by the hardware as well. > Currently hard disks and power supplies can be hot-swapped, I never heard > of the possibility to hot-swap directly on the data bus (memory, CPU, slot > cards). > > As for SCSI disks and Power supplies, Linux supports hot swap. Check out > Dell servers for example. > > > -- > >O Ernest E. Vogelsinger > (\) ICQ #13394035 > ^ http://www.vogelsinger.at/
> Linux supports hot-swappable hardware? As in swappng CPU/RAM/Add on cards on > the fly? No CPUs or RAM. The problem isn't the kernel, really, it's x86 hardware. The kernel guys aren't going to bother to try to support it until there's hardware support. However for PCI cards, USB, Firewire, and SCSI devices, Linux has had hotswap capability for a long while now. Jon > > That is news to me. Could you point me to more resources on this? > > > Bye > Shridhar > > -- > work, n.: The blessed respite from screaming kids and soap operas for which you > actually get paid.
On Wed, 18 Jun 2003, Shridhar Daithankar wrote: > On 18 Jun 2003 at 5:36, scott.marlowe wrote: > > The linux kernel is supposed to have hot swappable hardware support in it > > eventually for both those platforms, so you've got your 24/7 with no need > > Linux supports hot-swappable hardware? As in swappng CPU/RAM/Add on cards on > the fly? > > That is news to me. Could you point me to more resources on this? No, it doesn't, it's supposedly being worked on. http://lwn.net/2001/0510/a/hot-swap-cpu.php3
On Wed, 18 Jun 2003, Ernest E Vogelsinger wrote: > At 14:41 18.06.2003, Shridhar Daithankar said: > --------------------[snip]-------------------- > > > >Linux supports hot-swappable hardware? As in swappng CPU/RAM/Add on cards on > >the fly? > > > >That is news to me. Could you point me to more resources on this? > --------------------[snip]-------------------- > > AFAIK swappable hardware needs to be supported by the hardware as well. > Currently hard disks and power supplies can be hot-swapped, I never heard > of the possibility to hot-swap directly on the data bus (memory, CPU, slot > cards). Mainframes and Sun E class servers have supported this for years, with their own OS in place. Running Linux in an LPAR on a mainframe allows you to do this right now, albeit requiring the linux image to be restarted to see the change. There ARE kernel patches in the works for 2.5/2.6 to allow this, and patches already released against older 2.4 kernels to allow it. http://lwn.net/2001/0510/a/hot-swap-cpu.php3 Note I didn't say that linux works right for this yet, but that it's coming. > As for SCSI disks and Power supplies, Linux supports hot swap. Check out > Dell servers for example. Linux doesn't need to do anything to allow that, only the hardware needs to. It's kept away from the kernel by the RAID controller (in the case of disks) or just not noticed in the PS department. I've got a Dual PPro-200 under my desk with hot swappable power supplies and hot swappable hard drives that I built in 1997... Intel hardware is still way behind Sun or IBM when it comes to hot swapping memory and CPU, but at least it's catching up with swappable PCI cards finally. Remember, Linux != X86 hardware only.
On Wed, 18 Jun 2003, scott.marlowe wrote: > On Wed, 18 Jun 2003, Shridhar Daithankar wrote: > > > On 18 Jun 2003 at 5:36, scott.marlowe wrote: > > > The linux kernel is supposed to have hot swappable hardware support in it > > > eventually for both those platforms, so you've got your 24/7 with no need > > > > Linux supports hot-swappable hardware? As in swappng CPU/RAM/Add on cards on > > the fly? > > > > That is news to me. Could you point me to more resources on this? > > No, it doesn't, it's supposedly being worked on. > > http://lwn.net/2001/0510/a/hot-swap-cpu.php3 I just checked and that's a dead link. I'm sure there are some live ones out there somewhere. time to google a bit more.
On Wed, 2003-06-18 at 08:03, Jonathan Bartlett wrote: > > Linux supports hot-swappable hardware? As in swappng CPU/RAM/Add on cards on > > the fly? > > No CPUs or RAM. The problem isn't the kernel, realy, it's x86 hardware. > The kernel guys aren't going to bother to try to support it until there's > hardware support. > > However for PCI cards, USB, Firewire, and SCSI devices, Linux has had > hotswap capability for a long while now. You mean that I can go into my white box PC and yank an unused PCI card from a "live" system, if it is running Linux? -- +-----------------------------------------------------------+ | Ron Johnson, Jr. Home: ron.l.johnson@cox.net | | Jefferson, LA USA http://members.cox.net/ron.l.johnson | | | | "Oh, great altar of passive entertainment, bestow upon me | | thy discordant images at such speed as to render linear | | thought impossible" (Calvin, regarding TV) | +-----------------------------------------------------------
On 18 Jun 2003 at 8:58, Ron Johnson wrote: > On Wed, 2003-06-18 at 08:03, Jonathan Bartlett wrote: > > > Linux supports hot-swappable hardware? As in swappng CPU/RAM/Add on cards on > > > the fly? > > > > No CPUs or RAM. The problem isn't the kernel, realy, it's x86 hardware. > > The kernel guys aren't going to bother to try to support it until there's > > hardware support. > > > > However for PCI cards, USB, Firewire, and SCSI devices, Linux has had > > hotswap capability for a long while now. > > You mean that I can go into my white box PC and yank an unused PCI > card from a "live" system, if it is running Linux? It should be possible. Do an lsof and kill all processes using that device. Do an rmmod, change the device, and modprobe. Not really hotswap, but at least you don't need to take down the machine. Of course, this is all theory... never tried it myself. Bye Shridhar -- broad-mindedness, n: The result of flattening high-mindedness out.
Today's best approach to hot swap for Linux is clustering, if all your applications will pay attention to the clustering. The hot swap is at the LAN connector :-)
On 18 Jun 2003, Ron Johnson wrote: > On Wed, 2003-06-18 at 08:03, Jonathan Bartlett wrote: > > > Linux supports hot-swappable hardware? As in swappng CPU/RAM/Add on cards on > > > the fly? > > > > No CPUs or RAM. The problem isn't the kernel, realy, it's x86 hardware. > > The kernel guys aren't going to bother to try to support it until there's > > hardware support. > > > > However for PCI cards, USB, Firewire, and SCSI devices, Linux has had > > hotswap capability for a long while now. > > You mean that I can go into my white box PC and yank an unused PCI > card from a "live" system, if it is running Linux? I dunno, why don't you try and let us know how it works... :-) No, you can't do it on a plain white box PC. But there IS a standard for PCI to let you do this that Linux does support. Google for it. It's something like CPCI (CompactPCI).
Hi, Shridhar Daithankar wrote: > On 18 Jun 2003 at 5:36, scott.marlowe wrote: > >>The linux kernel is supposed to have hot swappable hardware support in it >>eventually for both those platforms, so you've got your 24/7 with no need > > > Linux supports hot-swappable hardware? As in swappng CPU/RAM/Add on cards on > the fly? > > That is news to me. Could you point me to more resources on this? Yes, this is possible, for example with Compaq hotswap controllers (for PCI slots). This support has been in the kernel sources at least since 2.4.18, where I tested it and found it working. There might be hardware which also supports hotswapping of CPU or RAM, but I suspect there is no port of that ability to Linux yet. But who knows. Maybe S/390 from IBM? Ask SuSE, as they primarily support that hardware. Besides this you can of course hot-swap USB, Firewire, SCSI and PCMCIA/CardBus. Regards Tino
> > As for SCSI disks and Power supplies, Linux supports hot swap. Check out > > Dell servers for example. > > Linux doesn't need to do anything to allow that, only the hardware needs > to. It's kept away from the kernel by the RAID controller (in the case of > disks) or just not noticed in the PS department. This is true; however, Linux also, to some extent, supports generic SCSI hotswap when the controller will handle it. Jon
> You mean that I can go into my white box PC and yank an unused PCI > card from a "live" system, if it is running Linux? Yes, but if you want to keep from damaging your hardware you need to use cards that follow the CompactPCI specification. This may require a specialized PCI controller as well, although I'm not certain. Jon > > -- > +-----------------------------------------------------------+ > | Ron Johnson, Jr. Home: ron.l.johnson@cox.net | > | Jefferson, LA USA http://members.cox.net/ron.l.johnson | > | | > | "Oh, great altar of passive entertainment, bestow upon me | > | thy discordant images at such speed as to render linear | > | thought impossible" (Calvin, regarding TV) | > +-----------------------------------------------------------
On Wed, 18 Jun 2003, Jonathan Bartlett wrote:

> > > As for SCSI disks and Power supplies, Linux supports hot swap. Check out
> > > Dell servers for example.
> >
> > Linux doesn't need to do anything to allow that, only the hardware needs
> > to. It's kept away from the kernel by the RAID controller (in the case of
> > disks) or just not noticed in the PS department.
>
> This is true, however, Linux also, to some extent, support generic SCSI
> hotswap when the controller will handle it.

True. There's actually some black-art chicanery you can use to get a SCSI
driver to add or remove a drive without just rmmod / insmodding it. I've
played a bit with some of that stuff, and I don't think I'd ever do it in
production in the middle of the day. Just wait til 10:00pm when the load
is the lightest and hope it works; if it doesn't, then unmount the
partition, rmmod/insmod, remount, restart postgresql and you're gold.

I've found that while it's a little harder to hot swap individual disks in
linux using sw RAID, the ability to make the raid behave exactly as I want
is worth it. Having lost a RAID5 set to a hw controller that simply had
the cable to two drives come loose but refused to accept them back into
the RAID5 after that without formatting them first, I'm no longer as wild
about hw raid controllers as I once was.
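For the record, the "black art" of adding and removing a SCSI disk without
rmmod/insmod is usually done on 2.4-era kernels through /proc/scsi/scsi. A
minimal dry-run sketch (the host/channel/id/lun numbers 0 0 3 0 are made-up
examples; the script only prints the commands rather than writing them into
/proc, so you can review them first):

```shell
#!/bin/sh
# Sketch of hot-swapping one SCSI disk without unloading the driver,
# using the 2.4-kernel /proc/scsi/scsi interface. Commands are only
# printed here; redirect them into /proc/scsi/scsi yourself once you
# are sure of the host/channel/id/lun address of the failed drive.
scsi_swap_cmds() {
  host=$1; channel=$2; id=$3; lun=$4
  # tell the SCSI layer to forget the failed disk
  echo "scsi remove-single-device $host $channel $id $lun"
  # after physically replacing the drive, re-probe the same address
  echo "scsi add-single-device $host $channel $id $lun"
}

scsi_swap_cmds 0 0 3 0
# e.g. to actually run them (dangerous on a live box, test first):
#   scsi_swap_cmds 0 0 3 0 | while read c; do echo "$c" > /proc/scsi/scsi; done
```

With software RAID you would first mark the disk failed and remove it from
the md array before forgetting it at the SCSI layer, and re-add it afterward.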
scott.marlowe wrote:
> I've found that while it's a little harder to hot swap individual disks in
> linux using sw RAID, the ability to make the raid behave exactly as I want
> is worth it. Having lost a RAID5 set to a hw controller that simply had
> the cable to two drives come loose but refused to accept them back into
> the RAID5 after that without formatting them first, I'm no longer as wild
> about hw raid controllers as I once was.

It seems that RAID controllers are as likely a failure point as disk
drives.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
Again, if clustering can be made to work, with failover, load balancing,
or whatever config you want, the failure points become just the boxes,
not all the individual parts inside. A lot easier to hot swap too, with
good clustering software.

Bruce Momjian wrote:
> scott.marlowe wrote:
>
>> I've found that while it's a little harder to hot swap individual disks in
>> linux using sw RAID, the ability to make the raid behave exactly as I want
>> is worth it. Having lost a RAID5 set to a hw controller that simply had
>> the cable to two drives come loose but refused to accept them back into
>> the RAID5 after that without formatting them first, I'm no longer as wild
>> about hw raid controllers as I once was.
>
> It seems that RAID controllers seem to be as likely a failure point as
> disk drives.
On Wed, 18 Jun 2003, Bruce Momjian wrote:

> scott.marlowe wrote:
> > I've found that while it's a little harder to hot swap individual disks in
> > linux using sw RAID, the ability to make the raid behave exactly as I want
> > is worth it. Having lost a RAID5 set to a hw controller that simply had
> > the cable to two drives come loose but refused to accept them back into
> > the RAID5 after that without formatting them first, I'm no longer as wild
> > about hw raid controllers as I once was.
>
> It seems that RAID controllers seem to be as likely a failure point as
> disk drives.

Actually, the LSI cards can be set up to each run a RAID0 and then RAID1
them together, and if one card fails, the other keeps running. I.e. they
can run two or more cards as though they were a single device. It's
pretty slick. I'm just not happy with the way they behave when certain
things happen, like my story above.
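The same across-controller trick can be approximated in software: build one
md RAID1 whose two halves are the logical drives exported by two different
controllers, so a dead card only degrades the mirror. A dry-run sketch (the
device names /dev/sda and /dev/sdb are hypothetical; the command is printed,
not executed):

```shell
#!/bin/sh
# Sketch: software RAID1 over two hardware RAID0 sets, each sitting
# behind its own controller, so one card failing degrades rather than
# kills the array. /dev/sda and /dev/sdb stand in for the logical
# drives the two controllers export.
make_mirror_cmd() {
  echo "mdadm --create /dev/md0 --level=1 --raid-devices=2 $1 $2"
}

make_mirror_cmd /dev/sda /dev/sdb
```

The point of the layering is that RAID1 sits on top, so each half of the
mirror lives entirely on one controller and the other half survives a card
failure intact.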
At 18:12 18.06.2003, scott.marlowe said:
--------------------[snip]--------------------
>is worth it. Having lost a RAID5 set to a hw controller that simply had
>the cable to two drives come loose but refused to accept them back into
>the RAID5 after that without formatting them first, I'm no longer as wild
>about hw raid controllers as I once was.
--------------------[snip]--------------------

Say - you didn't test it before going production?
;-))

Never mind - I was that ignorant myself. However my current RAID V's are
tested above the specs - installed RH7.2, removed 2 (sic!) disks, and
remounted. The controller complained (of course) but still offered to try
to remount - bingo, worked.

I feel somehow safe now.

--
   >O     Ernest E. Vogelsinger
   (\)    ICQ #13394035
    ^     http://www.vogelsinger.at/
On Wed, 18 Jun 2003, Ernest E Vogelsinger wrote:

> At 18:12 18.06.2003, scott.marlowe said:
> --------------------[snip]--------------------
> >is worth it. Having lost a RAID5 set to a hw controller that simply had
> >the cable to two drives come loose but refused to accept them back into
> >the RAID5 after that without formatting them first, I'm no longer as wild
> >about hw raid controllers as I once was.
> --------------------[snip]--------------------
>
> Say - you didn't test it before going production?
> ;-))

Actually, that was a legacy Oracle box we had that problem on. And it was
someone else who called me RIGHT after moving it and having the cable come
loose. I asked where his backups were. "We don't need backups, we run on a
RAID5." Uh huh... :-)

It was on the older MegaRAID 428, by the way.

> Never mind - I was that ignorant myself. However my current RAID V's are
> tested above the specs - installed RH7.2, removed 2 (sic!) disks, and
> remounted. The controller complained (of course) but still offered to try
> to remount - bingo, worked.

I test my linux box's sw RAID the same way. Early flavors of sw raid were
a little goofy (RH6.2 for instance) but under 7.x they seem to be very
stable and work well at replacing drives and all.

> I feel somehow safe now.

Don't worry, something you forgot about will pop its head up soon enough
;^)
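With Linux software RAID that pull-a-disk test can also be rehearsed purely
in software, using md's fail/remove/add cycle. A dry-run sketch (the array
/dev/md0 and member partition /dev/sdb1 are made-up names; the commands are
printed rather than run so they can be reviewed first):

```shell
#!/bin/sh
# Sketch: rehearsing a disk failure on Linux software RAID.
# /dev/md0 and /dev/sdb1 are hypothetical; commands are printed,
# not executed, so the sketch is safe to run anywhere.
md_failover_cmds() {
  md=$1; part=$2
  echo "mdadm $md --fail $part"     # simulate the disk dying
  echo "mdadm $md --remove $part"   # pull it out of the array
  echo "mdadm $md --add $part"      # re-add; md rebuilds onto it
}

md_failover_cmds /dev/md0 /dev/sdb1
```

While the member is failed, `cat /proc/mdstat` shows the array degraded,
which is exactly the state you want to see it survive before trusting it
in production.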
> loose. Asked where his backups were. "We don't need backups, we run on a
> RAID5." uh huh... :-)

It amazes me how prevalent this idea is in the industry.

Jon
Jonathan Bartlett wrote:
> > loose. Asked where his backups were. "We don't need backups, we run on a
> > RAID5." uh huh... :-)
>
> It amazes me how prevalent this idea is in the industry.

Yes, it is amazing that there isn't more data loss than there already is.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
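RAID only protects against a dead disk; it does nothing for a dead
controller, fat-fingered DROP TABLE, or filesystem corruption, which is
what dumps are for. A minimal nightly pg_dump sketch (the database name
"mydb", the backup directory, and the 7-day retention are all hypothetical
choices; the dump itself is commented out so the sketch is side-effect
free):

```shell
#!/bin/sh
# Sketch: a minimal nightly pg_dump backup. "mydb" and the backup
# directory are made-up examples.
backup_file() {
  # compose the target filename from directory, db name, and datestamp
  echo "$1/$2-$3.sql.gz"
}

DB=mydb                      # hypothetical database name
DEST=/var/backups/pgsql      # hypothetical backup directory
OUT=$(backup_file "$DEST" "$DB" "$(date +%Y%m%d)")

# The actual dump would be (commented out to keep the sketch runnable
# without a live server):
#   mkdir -p "$DEST"
#   pg_dump "$DB" | gzip > "$OUT" || exit 1
#   find "$DEST" -name "$DB-*.sql.gz" -mtime +7 -exec rm {} \;
echo "$OUT"
```

Dropped into cron, something like this keeps a rolling week of dumps that
survive anything RAID can't.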
On Tue, Jun 17, 2003 at 02:38:30PM -0500, Ron Johnson wrote:
> > Which indeed is load balancing again. I thought we were talking about a
> > simple failover solution. Yes, I know VMS can do that as well. :-)
>
> How quickly do the (h/w and manpower) costs of "simple failover"
> escalate?

Two machines, a few days of setup, and that's it. Okay, the costs go up
once a machine fails, but that holds for all hardware, even hardware that
fails less often than PCs. :-)

Michael
--
Michael Meskes
Email: Michael at Fam-Meskes dot De
ICQ: 179140304, AIM: michaelmeskes, Jabber: meskes@jabber.org
Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!
At 19:39 18.06.2003, scott.marlowe said:
--------------------[snip]--------------------
>> I feel somehow safe now.
>
>Don't worry, something you forgot about will pop up it's head soon enough
>;^)
--------------------[snip]--------------------

You bet.

--
   >O     Ernest E. Vogelsinger
   (\)    ICQ #13394035
    ^     http://www.vogelsinger.at/
Maybe there's lots of data loss, but the records of the data loss are also
lost. ;)

It's just that most people aren't born with Murphy Field Intensifiers.
E.g. I just pressed F1 on a PC BIOS screen option and got the contents of
some unknown portion of memory spewed on screen - it looked like some
server logs (java etc.). It didn't happen with other BIOS options. A bug
in the BIOS. I rebooted, which somewhat cleared the memory, and it didn't
happen again.

Link.

At 01:58 PM 6/18/2003 -0400, Bruce Momjian wrote:
>Jonathan Bartlett wrote:
>> > loose. Asked where his backups were. "We don't need backups, we run
>> > on a RAID5." uh huh... :-)
>>
>> It amazes me how prevalent this idea is in the industry.
>
>Yes, it is amazing how there isn't more data loss than there already is.