Thread: embedded pgsql media-failure
Hi!

I need some advice. I'm working in a rather special field: I have to set up an embedded DB which will store logs (measured values) and provide configuration to the machine; this configuration can also be changed. The system will consist of a CF card (with wear leveling) and an Intel Atom CPU. The config data and the measured values together will take up around 2,000-3,000 rows plus a few tens of thousands of rows.

The business side wants to make it as secure as possible, meaning that the CF card will have two partitions and the DB should be mirrored or distributed somehow across these two partitions, so that in case of a single-point disk error the system stays stable. Even though I've never used them, as far as I can see, "standard" replication solutions like Slony, Heartbeat+DRBD or Postgres-R are not really able to cope with this kind of setup. Maybe I've got something wrong :)

So my problem is: without a network, on one single CF card with two partitions, with only one CPU and only one server running, how can the data be protected against media failure?

Thanks for your help!
_______________________________
Kokas Zsolt

Save a tree...please only print this e-mail if it is genuinely required.
Hi,

On a Linux system, try software RAID1 for the PG data. Also check whether PG is the right choice for your needs here; maybe flat files for config + logs would be less problematic.

Regards,
Thomas
On Tue, Feb 03, 2009 at 12:03:11PM +0100, Kokas Zsolt wrote:
> The business side wants to make it as secure as possible, meaning
> that the CF card will have two partitions and the DB should be
> mirrored or distributed somehow across these two partitions, so that
> in case of a single-point disk error the system stays stable.

As Thomas said, I'd recommend leaving this disk-level stuff to the OS. Divide the CF card in two and create a RAID1 array over the two partitions. The OS should be able to deal with disk issues much more robustly than PG can. If you were more worried about things, I suppose you could divide the disk up further to add more tolerance (i.e. protect "very important" areas with triple or quadruple redundancy, bearing in mind that write performance will be reduced by a similar factor).

I'm not sure if you're trying to solve the wrong problem though; flash file systems are used to dealing with this sort of issue and would be in a position to provide much more useful mechanisms than just duplicating everything. I've got (second-hand) recommendations of YAFFS, and have heard good things about JFFS2 as well.

Also, it sounds as though PG may be overkill for this sort of scenario; it tends to be pretty write-heavy, which is something you probably don't want to be doing too much of on a flash device. Have you looked at anything simpler, maybe SQLite?

Sam
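For concreteness, a minimal sketch of the software RAID1 setup being suggested, using Linux md. The device names (/dev/sda1, /dev/sda2) and the mount point are assumptions and would need adjusting for the actual CF card and distribution layout:

    # create a RAID1 array from the two partitions of the CF card
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sda2

    # put a filesystem on the array and mount it where the PG data directory will live
    mkfs.ext3 /dev/md0
    mount /dev/md0 /var/lib/postgresql

    # record the array so it is assembled automatically at boot
    mdadm --detail --scan >> /etc/mdadm.conf

After that, initdb (or the distribution's packaging) can point the cluster at the mirrored mount point as usual.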
> partitions. The OS should be able to deal with disk issues much more
> robustly than PG can. If you were more worried about things I

As I see it now, software RAID is really what will suit everybody here (including me) as well.

> I'm not sure if you're trying to solve the wrong problem though;
> flash file systems are used to dealing with this sort of issue and
> would be in a position to provide much more useful mechanisms than
> just duplicating everything. I've got (second-hand) recommendations
> of YAFFS, and have heard good things about JFFS2 as well.

What I see from them is that they provided wear leveling before wear leveling was built into the drives themselves. Currently I have a 16G SLC-based Swissbit CF card - practically the only high-tech part. Personally I have no experience with any flash file system, so I would stick to ext3. Well, not that I have much more expertise with ext3 and soft RAID either.

> Also, it sounds as though PG may be overkill for this sort of scenario;
> it tends to be pretty write-heavy, which is something you probably
> don't want to be doing too much of on a flash device. Have you looked
> at anything simpler, maybe SQLite?

Well, I've worked a bit with Oracle before, so PG is quite handy for me right now. Practically, this decision was made before my arrival; I was basically asked if I could learn PG too. I'm learning... PG - and a database in general - was the decision because the flat files have been maintained for over 15 years, sometimes partly rewritten and revisited, and my real job is also to adjust some of the application code for the configuration part of the DB. Guanistical software development :)

Kokas
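Since PG on the flash card is the plan, a sketch of the postgresql.conf settings that are commonly adjusted to cut down write volume on such media; the values below are illustrative assumptions for an 8.3-era configuration, not recommendations from this thread:

    # postgresql.conf - illustrative values only
    synchronous_commit = off            # fewer WAL flushes; the last few commits may be lost on power failure
    checkpoint_segments = 8             # allow checkpoints to be spaced further apart
    checkpoint_completion_target = 0.9  # spread checkpoint writes out over time
    wal_buffers = 1MB                   # batch WAL writes a little more

The synchronous_commit trade-off (a small window of recent commits can be lost, but the database stays consistent) is usually acceptable for measurement logging, less so for configuration changes.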
Kokas Zsolt wrote:
>> I've got (second-hand) recommendations of
>> YAFFS, and have heard good things about JFFS2 as well.
>
> What I see from them is that they provided wear leveling before
> wear leveling was built into the drives themselves.

AFAIK jffs2 and yaffs are really for simple (generally memory-mapped) flash media that doesn't have any wear leveling, block remapping, etc - the kind of thing usually found on small embedded systems.

Most flash storage that can be attached to PC-like machines is hidden behind a hardware translation interface (in the CF/SD/etc card itself, or in the USB/ATA/etc adapter for it) that takes care of wear leveling and block remapping. That way you can use poorly suited filesystems like FAT32, ext3, etc on it without it flaking out on you. I haven't personally seen any that let you bypass or disable that remapping and address the raw flash. If anyone knows of any, I'd be really interested, as it'd be very handy for a project I'm working on.

BTW, I'm not sure how much good OS-level RAID on a single device will do for you. Linux will try to reset the interface to the drive on I/O errors, will hang for long periods waiting for reads, etc, and I wouldn't be at all surprised if this caused `md' to think that both "devices" (partitions) were failed. It also won't help you if the card's interface fails, the CF adapter fails, etc. Surely you'd be better off with *TWO* CF cards in RAID if you really want redundancy and reliability?

--
Craig Ringer
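On the point about md possibly failing out both halves: the array state is at least easy to inspect, so a watchdog could detect and log a degraded mirror. A small sketch, assuming the array was created as /dev/md0 from /dev/sda1 and /dev/sda2 as in the earlier example:

    # one-line summary of each array; "[UU]" means both halves are active,
    # "[U_]" or "[_U]" means one partition has been failed out
    cat /proc/mdstat

    # full detail, including which partition md considers faulty
    mdadm --detail /dev/md0

    # attempt to re-add a partition after a transient error, if the card is still usable
    mdadm /dev/md0 --re-add /dev/sda2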
On Tue, Feb 03, 2009 at 03:36:15PM +0100, Kokas Zsolt wrote:
> > I've got (second-hand) recommendations of
> > YAFFS, and have heard good things about JFFS2 as well.
>
> What I see from them is that they provided wear leveling before
> wear leveling was built into the drives themselves.

Or for smaller, embedded systems where the OS doesn't assume the presence of an HDD-like device and it's more efficient to handle things in software.

> Currently I have a 16G
> SLC-based Swissbit CF card.

OK, the Atom CPU made me think it was going to be a very small and simple device, hence the file system suggestions. As Craig says, I'm not sure what help they would be if the device is doing wear leveling itself.

> > Have you looked at anything simpler, maybe SQLite?
>
> Well, I've worked a bit with Oracle before, so PG is quite handy for
> me right now. Practically, this decision was made before my arrival;
> I was basically asked if I could learn PG too. I'm learning...

Sounds more reasonable now, but I thought it best to make some suggestions in case they hadn't been considered!

--
Sam  http://samason.me.uk/
On Wed, Feb 04, 2009 at 12:35:33AM +0900, Craig Ringer wrote:
> BTW, I'm not sure how much good OS-level RAID on a single device will do
> for you. Linux will try to reset the interface to the drive on I/O
> errors, will hang for long periods waiting for reads, etc, and I wouldn't
> be at all surprised if this caused `md' to think that both "devices"
> (partitions) were failed.

Interesting, hadn't thought of that!

> It also won't help you if the card's interface
> fails, the CF adapter fails, etc. Surely you'd be better off with *TWO*
> CF cards in RAID if you really want redundancy and reliability?

But the flash storage is the part that's known to degrade with use; if you start worrying about the card's interface, then why not the PCI bus it's hung off?

--
Sam  http://samason.me.uk/