I inherited a machine running Debian with a RAID 5 array. I installed a large batch of updates (1700 or so) that the OS recommended, and after rebooting, the RAID array no longer mounts. The device /dev/md0 no longer exists, and I do not know why.
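For context, these are the non-destructive checks I understand one is supposed to start with (the partition names are my best guess, per my suspicion below):

# Show any arrays the kernel currently knows about
cat /proc/mdstat
# Dump the md superblock, if any, from each suspected member partition
mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1
# Try to assemble non-destructively from whatever superblocks exist
mdadm --assemble --scan --verbose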
The /etc/mdadm/mdadm.conf contains:
DEVICE partitions
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=138b0c65:20644731:39e394c4:192c7227
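As I understand it, that ARRAY line should let mdadm reassemble the array by UUID; the explicit invocation below is my guess at the manual equivalent:

mdadm --assemble /dev/md0 --uuid=138b0c65:20644731:39e394c4:192c7227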
I tried running mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1. This creates a device md0, but it comes up "degraded," and the last drive in the list is, for some reason, treated as a "spare." I strongly suspect, though I cannot be sure, that the drives actually involved in the RAID 5 array were sdb, sdc, and sdd.
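One way to test that suspicion, as I understand it: my --create never touched sdd1, so if it was a member, its superblock should still carry the UUID recorded in mdadm.conf:

# Should report the UUID 138b0c65:20644731:39e394c4:192c7227 if sdd1 was a member
mdadm --examine /dev/sdd1 | grep -i uuid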
I tried all six possible orderings of the devices, but the last one always came up as a spare. I also tried --spare-devices=0 --force, which did get all three drives into the array with a "clean" status, but I was still unable to mount md0. Running file -s on /dev/md0 reports GLS_BINARY_LSB_FIRST, which seems unhelpful.
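For completeness, each attempt went roughly like this (stopping the array in between; the device order varied):

mdadm --stop /dev/md0
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
# Check whether the assembled device carries a recognizable filesystem
file -s /dev/md0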
I have no reason to believe any of the drives are faulty; all of this seems to stem from the recent upgrade. How can I resurrect the old RAID 5 array? Have my --create machinations somehow messed it up further? Note that I have never successfully mounted md0.
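Since everything points at the upgrade, one thing I have not ruled out is the new kernel simply failing to load the RAID pieces; I gather these are the standard places to look:

# Confirm the raid5 personality (raid456 module) is loaded under the new kernel
lsmod | grep -i raid
# Look for md-related errors during boot
dmesg | grep -i -e md -e raid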
Please advise. I know this is always the story, but I am in big trouble if I can't resurrect this thing, so anyone who helps has my eternal gratitude, for whatever it's worth.