Thursday, August 20, 2015

ubuntu raid problem - different arrays configured, different arrays mounted



I've just finished setting up my new Ubuntu Server 10.04 machine with 2x 500 GB SATA disks, which I intended to configure in RAID1. Specifically, this is what I did during the installation process:



partitions:



disk 1 - sda:
sda1 - 500 MB, primary
sda2 - 99 GB, primary
sda3 - extended
sda5 - 399 GB, logical (inside the extended sda3)



disk 2 - sdb:
sdb1 - 500 MB, primary
sdb2 - 99 GB, primary
sdb3 - extended
sdb5 - 399 GB, logical (inside the extended sdb3)




arrays:



md0 - sda1+sdb1, raid1, ext2, /boot
md1 - sda2+sdb2, raid1, ext4, /
md2 - sda5+sdb5, raid1, not formatted, not mounted during the installation.
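
For reference, this is roughly what the intended layout would look like if it were built by hand with mdadm rather than through the installer (just a sketch that assumes the partition scheme above; the installer normally performs the equivalent steps for you):

# Sketch: creating the intended arrays manually (partitions as laid out above)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # /
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5   # data, left unformatted
mkfs.ext2 /dev/md0   # /boot filesystem
mkfs.ext4 /dev/md1   # root filesystem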



Everything went smoothly, but when my new system booted up, this is what I saw:





/etc/fstab:

# / was on /dev/md1 during installation
UUID=cc1a0b10-dd66-4c88-9022-247bff6571a6 /     ext4 errors=remount-ro 0 1
# /boot was on /dev/md0 during installation
UUID=7e37165c-ab1c-4bd4-a62b-8b98656fe1f1 /boot ext2 defaults          0 2
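
These UUIDs can be mapped back to block devices with blkid, which makes it easy to check whether / and /boot really live on the md devices you expect (just a quick diagnostic suggestion, not something the installer does for you):

blkid /dev/md0 /dev/md1               # filesystem UUIDs of the md devices
blkid | grep -E 'cc1a0b10|7e37165c'   # or look up the UUIDs referenced in fstab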




/proc/partitions:

major minor  #blocks  name

   8     0  488386584 sda
   8     1     487424 sda1
   8     2   97265664 sda2
   8     3          1 sda3
   8     5  390631424 sda5
   8    16  488386584 sdb
   8    17     487424 sdb1
   8    18   97265664 sdb2
   8    19          1 sdb3
   8    21  390631424 sdb5
   9     2  390631360 md2
 259     0     487424 md2p1
 259     1   97265664 md2p2
 259     2          1 md2p3
 259     3  292876224 md2p5
   9     1   97265600 md1
   9     0     487360 md0





/proc/mdstat:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 md2p1[0]
      487360 blocks [2/1] [U_]

md1 : active raid1 md2p2[0]
      97265600 blocks [2/1] [U_]

md2 : active raid1 sda[0] sdb[1]
      390631360 blocks [2/2] [UU]
      [============>........]  resync = 63.1% (246865856/390631360) finish=25.9min speed=92459K/sec

unused devices: <none>
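
The md2 resync shown above keeps running in the background on its own; if you want to follow its progress, something as simple as this will do:

watch -n 10 cat /proc/mdstat   # re-display the RAID status every 10 seconds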




mdadm --detail output for each md device:

/dev/md0:
Version : 00.90
Creation Time : Wed Jul 7 16:07:16 2010
Raid Level : raid1
Array Size : 487360 (476.02 MiB 499.06 MB)
Used Dev Size : 487360 (476.02 MiB 499.06 MB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Wed Jul 7 17:13:58 2010
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

UUID : fefff930:8e4d661c:665cfb90:2bbaf5ad
Events : 0.74

Number Major Minor RaidDevice State
0 259 0 0 active sync /dev/md2p1
1 0 0 1 removed




/dev/md1:
Version : 00.90
Creation Time : Wed Jul 7 16:07:23 2010
Raid Level : raid1
Array Size : 97265600 (92.76 GiB 99.60 GB)
Used Dev Size : 97265600 (92.76 GiB 99.60 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Wed Jul 7 17:38:19 2010
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

UUID : 68b86560:6150f422:6a741df7:3de5f08f
Events : 0.460

Number Major Minor RaidDevice State
0 259 1 0 active sync /dev/md2p2
1 0 0 1 removed





/dev/md2:
Version : 00.90
Creation Time : Wed Jul 7 16:07:31 2010
Raid Level : raid1
Array Size : 390631360 (372.54 GiB 400.01 GB)
Used Dev Size : 390631360 (372.54 GiB 400.01 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent

Update Time : Wed Jul 7 17:37:04 2010
State : active, resyncing
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Rebuild Status : 65% complete

UUID : fc7dadbe:2230a995:814dd292:d7c4bf75
Events : 0.33

Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb





/dev/md2p1:
Version : 00.90
Creation Time : Wed Jul 7 16:07:31 2010
Raid Level : raid1
Array Size : 487424 (476.08 MiB 499.12 MB)
Used Dev Size : 390631360 (372.54 GiB 400.01 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent

Update Time : Wed Jul 7 17:37:04 2010
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

UUID : fc7dadbe:2230a995:814dd292:d7c4bf75
Events : 0.33

Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb





/dev/md2p2:
Version : 00.90
Creation Time : Wed Jul 7 16:07:31 2010
Raid Level : raid1
Array Size : 97265664 (92.76 GiB 99.60 GB)
Used Dev Size : 390631360 (372.54 GiB 400.01 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent

Update Time : Wed Jul 7 17:37:04 2010
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

UUID : fc7dadbe:2230a995:814dd292:d7c4bf75
Events : 0.33

Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb





/dev/md2p3:
Version : 00.90
Creation Time : Wed Jul 7 16:07:31 2010
Raid Level : raid1
Array Size : 1
Used Dev Size : 390631360 (372.54 GiB 400.01 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent

Update Time : Wed Jul 7 17:37:04 2010
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

UUID : fc7dadbe:2230a995:814dd292:d7c4bf75
Events : 0.33

Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb




/dev/md2p5:
Version : 00.90
Creation Time : Wed Jul 7 16:07:31 2010
Raid Level : raid1
Array Size : 292876224 (279.31 GiB 299.91 GB)
Used Dev Size : 390631360 (372.54 GiB 400.01 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent

Update Time : Wed Jul 7 17:37:04 2010
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

UUID : fc7dadbe:2230a995:814dd292:d7c4bf75
Events : 0.33

Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb


It seems like instead of building raid1 arrays:
md0 = sda1+sdb1
md1 = sda2+sdb2



something like additional 'sub-arrays' have been built:
md2p1 = sda1+sdb1
md2p2 = sda2+sdb2



and these 'sub-arrays' are configured as parts of the md0 and md1 arrays.
Because I only have two disks (two partitions per array), mdadm correctly builds md2p1 and md2p2 from two partitions each, but then starts the main arrays md0 and md1 as degraded, because each of them consists of only one 'sub-array'.
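
One way to double-check this interpretation would be to look at where mdadm actually finds RAID superblocks and what it has assembled (a diagnostic suggestion of mine, not part of the original troubleshooting):

# Show which devices carry an md superblock - note the whole disks vs. the partitions
mdadm --examine /dev/sda /dev/sda1 /dev/sda2 /dev/sda5
mdadm --examine /dev/sdb /dev/sdb1 /dev/sdb2 /dev/sdb5
# Summarize the arrays the kernel has actually assembled
mdadm --detail --scan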




Now I'm wondering: what did I do wrong? Or maybe everything is OK and I just don't understand some part of this configuration? It really doesn't seem that way, though - md0 and md1 are clearly marked as degraded. So how do I make it right? Do I have to reinstall the system? Better now, just after the installation, than later, after I've put some effort into configuring and securing it. Or maybe there are some nice mdadm tricks to just make everything OK?
Help please :) Thanks!






cat /etc/mdadm/mdadm.conf 
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#


# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>


# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=fefff930:8e4d661c:665cfb90:2bbaf5ad
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=68b86560:6150f422:6a741df7:3de5f08f
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=fc7dadbe:2230a995:814dd292:d7c4bf75

# This file was auto-generated on Wed, 07 Jul 2010 16:18:30 +0200

# by mkconf $Id$
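
For what it's worth, once the arrays are assembled the way you want them, the ARRAY lines in this file don't have to be maintained by hand; something along these lines regenerates them (mkconf is the Debian/Ubuntu helper mentioned in the comment above):

mdadm --detail --scan                               # print ARRAY lines for the currently assembled arrays
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf     # regenerate the whole file (Debian/Ubuntu)
update-initramfs -u                                 # make sure the initramfs picks up the new config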

Answer



This seems to be a rather serious bug:



A fix will be shipped with Ubuntu 10.04.2; a workaround is possible, as described on Launchpad:



https://bugs.launchpad.net/ubuntu/+source/partman-base/+bug/569900



I suffered badly from this issue while trying to get a proper software RAID running with two 500.1 GB HDDs.




All you have to do, as a victim of this bug, is leave some free space at the end of the last partition and everything will be fine again :). So don't accept the default value, which gets calculated incorrectly by partman.
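
In practice the workaround just means not letting the last partition run all the way to the end of the disk. As an illustration only (the tool and the sizes here are my own assumption, not taken from the Launchpad page), it could look roughly like this:

# Check whether the last partition currently runs right up to the end of the disk
# (the situation that triggers the bug):
fdisk -l /dev/sda
# When (re)creating the last partition, choose an end point slightly before the end
# of the disk instead of the installer's default; with parted, for example:
parted -- /dev/sda mkpart logical ext4 101GB -2MiB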

