Re: OT - 2 of 4 drives in a Raid10 array failed - Any chance of recovery? - Mailing list pgsql-general

From Scott Marlowe
Subject Re: OT - 2 of 4 drives in a Raid10 array failed - Any chance of recovery?
Msg-id dcc563d10910202325q363fdbc2u3249cd76ff162d63@mail.gmail.com
In response to Re: OT - 2 of 4 drives in a Raid10 array failed - Any chance of recovery?  (Greg Smith <gsmith@gregsmith.com>)
List pgsql-general
On Wed, Oct 21, 2009 at 12:10 AM, Greg Smith <gsmith@gregsmith.com> wrote:
> On Tue, 20 Oct 2009, Ow Mun Heng wrote:
>
>> RAID10 is supposed to be able to withstand up to 2 drive failures if the
>> failures are on different sides of the mirror.  Right now, I'm not sure
>> which drive belongs to which. How do I determine that? Does it depend on
>> the output of /proc/mdstat, and in that order?
>
> You build a 4-disk RAID10 array on Linux by first building two RAID1 pairs,
> then striping both of the resulting /dev/mdX devices together via RAID0.

Actually, newer Linux kernels have a native RAID10 level built into md
directly.  I haven't used it, and I'm not sure how it would look in
/proc/mdstat either.
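To make the difference concrete, here is a rough sketch of how the two layouts would be created with mdadm. The device names (/dev/sda1 through /dev/sdd1, /dev/md0-md2) are assumptions for illustration only, and these commands destroy existing data on the members:

```shell
# Hypothetical member devices -- adjust to your hardware. Destructive!

# Nested layout: two RAID1 pairs striped together with RAID0,
# leaving three /dev/mdX devices in total.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1

# Native md RAID10: one /dev/mdX device covering all four drives.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Either way, /proc/mdstat lists each array and its member drives,
# which answers the "which drive belongs to which" question.
cat /proc/mdstat
```

In the nested layout, mirror membership is visible because each RAID1 array (/dev/md0, /dev/md1) lists its own two members; in the native layout, all four members appear under a single array.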

>  You'll actually have 3 /dev/mdX devices around as a result.  I suspect
> you're trying to execute mdadm operations on the outer RAID0, when what you
> actually should be doing is fixing the bottom-level RAID1 volumes.
>  Unfortunately I'm not too optimistic about your case though, because if you
> had a repairable situation you technically shouldn't have lost the array in
> the first place--it should still be running, just in degraded mode on both
> underlying RAID1 halves.

Exactly.  Sounds like both drives in a pair failed.
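For diagnosing a case like this, one approach (again with hypothetical device names, and only --examine being safe to run casually) is to read each member's md superblock to see which array and slot it belonged to, then try to reassemble whichever RAID1 half still has a working member:

```shell
# Read-only: print each member's recorded array UUID, role, and event count.
mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Attempt to reassemble one RAID1 half from its surviving members.
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1

# Last resort if event counts disagree: force assembly despite the
# mismatch.  This can silently lose recent writes -- image the drives first.
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1
```

If --examine shows both failed drives carrying the same array UUID, that confirms both members of one mirror pair died, which matches the diagnosis above.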
