Friday, March 24, 2017

linux - How to create partition when growing raid5 with mdadm



I have 4 drives, 2x640GB, and 2x1TB drives.
My array is made up of four 640GB partitions, one at the beginning of each drive.
I want to replace both 640GB with 1TB drives.
I understand I need to:

1) fail a disk
2) replace with new
3) partition
4) add disk to array
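The steps above map onto mdadm roughly like this (a sketch only; the device names are taken from the fdisk output below and must be checked against your own system, and you should repeat the cycle for the second drive only after the first resync finishes):

```shell
# 1) mark the old member failed, 2) remove it, then swap the hardware
mdadm /dev/md0 --fail /dev/sdd1
mdadm /dev/md0 --remove /dev/sdd1

# 3) partition the new 1TB drive (type fd, "Linux raid autodetect"),
#    then 4) add the new partition back; the resync starts automatically
mdadm /dev/md0 --add /dev/sdd1

# watch the rebuild progress
cat /proc/mdstat
```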



My question is, when I create the new partition on the new 1TB drive, do I create a 1TB "Raid Auto Detect" partition? Or do I create another 640GB partition and grow it later?



Or perhaps the same question could be worded: after I replace the drives, how do I grow the 640GB raid partitions to fill the rest of the 1TB drives?
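For reference, the usual shape of the grow step (a sketch, not a recipe; it assumes every member partition has already been enlarged, e.g. by deleting and recreating it with the same start cylinder but a later end) looks like:

```shell
# once all member partitions are larger, let the array claim the space
mdadm --grow /dev/md0 --size=max

# then grow the filesystem sitting on the array (ext2/ext3 shown)
resize2fs /dev/md0
```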



fdisk info:



Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xe3d0900f

Device Boot Start End Blocks Id System
/dev/sdb1 1 77825 625129281 fd Linux raid autodetect
/dev/sdb2 77826 121601 351630720 83 Linux

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xc0b23adf

Device Boot Start End Blocks Id System
/dev/sdc1 1 77825 625129281 fd Linux raid autodetect
/dev/sdc2 77826 121601 351630720 83 Linux

Disk /dev/sdd: 640.1 GB, 640135028736 bytes
255 heads, 63 sectors/track, 77825 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x582c8b94

Device Boot Start End Blocks Id System
/dev/sdd1 1 77825 625129281 fd Linux raid autodetect

Disk /dev/sde: 640.1 GB, 640135028736 bytes
255 heads, 63 sectors/track, 77825 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xbc33313a

Device Boot Start End Blocks Id System
/dev/sde1 1 77825 625129281 fd Linux raid autodetect

Disk /dev/md0: 1920.4 GB, 1920396951552 bytes
2 heads, 4 sectors/track, 468846912 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Answer





My question is, when I create the new partition on the new 1TB drive, do I create a 1TB "Raid Auto Detect" partition?

You can, but you're not going to gain anything immediately from that.

Or do I create another 640GB partition and grow it later?

Yes.
RAID-on-partition has its uses, but when you're using the drives in a pseudo-storage-pool setup, you're sometimes better off using 'whole drive' RAID members instead of 'partition' members. Designating the whole drive (i.e. /dev/sdc instead of /dev/sdc1) implicitly tells the RAID mechanism that the entire drive is to be used, so no partition ever needs to be created/expanded/moved/what-have-you. This turns each hard drive into a 'storage brick' that is more-or-less interchangeable, with the caveat that the usable capacity per member is capped by the smallest drive in the set (i.e. if you have a 40GB, an 80GB, and 2x 120GB, the RAID mechanism will use 4x 40GB because it can't obtain more space from the smallest drive). Note that this answer is for Linux software RAID (mdadm) and may or may not apply to other environments.
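As an illustration of the whole-drive approach (device names are placeholders here, not a command to run against this poster's array):

```shell
# build a 4-member RAID5 directly on the raw drives, no partitions
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde
```

With this layout, swapping a member for a bigger drive is just --fail/--remove/--add with the bare device name; there is no partition table to grow afterwards.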



The downside is that you lose flexibility in your RAID configuration, because the entire drive is claimed by the array. You can, however, offset that loss by using LVM-on-RAID. Another issue with whole-drive RAID is that some recovery processes require a little more thought, as they often assume the presence of a partition table; a tool that expects one may balk at the drive.






Unsolicited Advice (and nothing more than that, if it breaks, you keep both pieces, etc.):



Your best bet is to set up your RAID array as you like, using the 'whole drive' technique, but then use LVM to manage your partitions. This gives you a measure of fault tolerance from RAID plus the flexibility of dynamically sizable partitions. An added bonus: if you use Ext3 (and possibly Ext2; see the follow-up below), you can resize the 'partitions' while they are mounted. Being able to shift the size of mountpoints around while they are 'hot' is a wonderful feature, and I recommend considering it.
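A sketch of the LVM-on-RAID layering (the volume group and logical volume names are made up for illustration; sizes are arbitrary):

```shell
# make the whole array an LVM physical volume, then carve it up
pvcreate /dev/md0
vgcreate vg_raid /dev/md0
lvcreate -L 200G -n data vg_raid
mkfs.ext3 /dev/vg_raid/data

# later, grow the 'partition' and its filesystem while mounted
lvextend -L +50G /dev/vg_raid/data
resize2fs /dev/vg_raid/data
```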







Additional follow-up:



I received a comment that Ext2 does not support hot resizing. In fact it does, but only for increases in size. You can read more at this link here. Having done it a few times myself, I can say that it is possible, it works, and it can save you time.


