Re: OT - 2 of 4 drives in a Raid10 array failed - Any chance of recovery? - Mailing list pgsql-general

From Greg Smith
Subject Re: OT - 2 of 4 drives in a Raid10 array failed - Any chance of recovery?
Date
Msg-id alpine.GSO.2.01.0910210236400.1418@westnet.com
In response to Re: OT - 2 of 4 drives in a Raid10 array failed - Any chance of recovery?  (Scott Marlowe <scott.marlowe@gmail.com>)
Responses Re: OT - 2 of 4 drives in a Raid10 array failed - Any chance of recovery?
List pgsql-general
On Wed, 21 Oct 2009, Scott Marlowe wrote:

> Actually, later models of linux have a direct RAID-10 level built in.
> I haven't used it.  Not sure how it would look in /proc/mdstat either.

I think I actively block out memory of that one because its UI is so
cryptic and it's historically been much buggier than the simpler
RAID0/RAID1 implementations.  But you're right that it's completely
possible Ow used it.  That would also explain the trouble figuring out
what's going on.
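If you want to confirm whether that's actually what he's running, something
along these lines should show it (md0 is just a guess at the device name):

cat /proc/mdstat
     # a native array shows up as "md0 : active raid10 ...",
     # rather than a raid0 device stacked on top of raid1 pairs
mdadm --detail /dev/md0 | grep "Raid Level"
     # "Raid Level : raid10" confirms the single-level driver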

There's a good example of what the result looks like with failed drives in
one of the many bug reports related to that feature,
https://bugs.launchpad.net/ubuntu/intrepid/+source/linux/+bug/285156 , and
I liked the discussion of some of the details at
http://robbat2.livejournal.com/231207.html

The other hint I forgot to mention is that you should try:

mdadm --examine /dev/XXX

for each of the drives that still work, to help figure out where they fit
into the larger array.  That and --detail are what I find myself using
instead of /proc/mdstat, which provides an awful interface IMHO.
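
To survey the survivors, I'd do something like this (the device names here
are made up; substitute the two drives that still respond):

for d in /dev/sdc1 /dev/sdd1 ; do
     echo "== $d =="
     mdadm --examine $d     # note the UUID, raid level, and which slot
                            # this device occupied within the array
done
mdadm --detail /dev/md0     # if any of the array still assembles, this
                            # summarizes which slots are active vs. failed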

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
