Thursday, October 27, 2016

performance - SSD drives and RAID configurations vs LVM




Background:



I'm familiar with the basic RAID levels, and am curious to know if using SSD devices in a RAID0 or RAID5 would be a better deployment than adding them to a large LVM volume.



Specifically, I'm concerned about heat, sound, and power consumption in a small server room, and am planning to move from hard disks to SSDs. The servers in question have 4-6 SATA-II channels, so this is just about getting the highest performance out of the drives after the switch; I'm not looking to add new controllers or do anything else drastic beyond replacing the drives.



RAID0



With RAID0, I realize I have no recoverability from a drive loss - but in a predominantly read environment, I believe the SSDs will likely never come close to hitting their estimated 1,000,000-hour MTBF, and certainly won't hit the write-cycle limits that plagued flash memory for a long time (but now seem to be effectively a thing of the past).




RAID5



With RAID5 I'd be "losing" one drive's worth of capacity to parity, but if any one of the drives dies, I can recover by simply replacing that unit.



LVM



With LVM, I'm effectively creating a software JBOD - simple, but if a drive dies, whatever is on it is gone, just as with RAID0.
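
For reference, the kind of linear (concatenated) LVM setup I mean would look roughly like this; the device and volume names are just placeholders:

    # pool three SSDs into one volume group (no redundancy)
    pvcreate /dev/sdb /dev/sdc /dev/sdd
    vgcreate vg_pool /dev/sdb /dev/sdc /dev/sdd
    # a linear (concatenated) logical volume spanning all the space
    lvcreate -n lv_all -l 100%FREE vg_pool
    mkfs.ext4 /dev/vg_pool/lv_all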







Question:



What does the SF community suggest as the best approach for this scenario?


Answer:



First of all, LVM configuration and RAID settings should be two independent decisions. Use RAID to set up redundancy and tune performance; use LVM to build the volumes you need from the logical disks that the RAID layer provides.



RAID0 should not appear in your vocabulary. It is only acceptable as a way to build fast storage for data that nobody cares about if it blows up. The need for it is largely alleviated by the speed of SSDs (an enterprise-class SSD can do 10+ times more IOPS than the fastest SAS hard disk, so there is no longer a need to spread the load over multiple spindles), and, should you ever need it, you can achieve the same result with LVM striping, which gives you much more flexibility.
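
For illustration, a striped logical volume (the LVM analogue of RAID0) can be created like this; the device, VG, and LV names below are placeholders, not a recommendation:

    # two SSDs as physical volumes in one volume group
    pvcreate /dev/sdb /dev/sdc
    vgcreate vg_fast /dev/sdb /dev/sdc
    # -i 2 stripes across both PVs, -I 64 uses a 64 KiB stripe size
    lvcreate -n lv_scratch -L 200G -i 2 -I 64 vg_fast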



RAID1 or RAID10 doesn't make much sense with SSDs either: because they are so much faster than regular disks, you don't need to give up 50% of your space in exchange for performance.




RAID5, therefore, is the most appropriate solution. You lose a bit of space (one drive's worth, i.e. 1/4 to 1/6 of the total with 4-6 drives), but gain redundancy and peace of mind.
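
As a rough sketch, assuming Linux software RAID with mdadm, four SSDs, and placeholder device names, the array could be built like this:

    # create a 4-disk RAID5 array (device names are placeholders)
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # watch the initial resync
    cat /proc/mdstat
    # persist the array definition (config file path varies by distribution)
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf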



As for LVM, it's up to you to decide how to use the space you get after creating your RAID groups. You should use LVM as a rule, even in its simplest configuration of mapping one PV to one VG to one LV, just in case you need to make changes in the future. Besides, fdisk is so 20th century! In your specific case, since it will most likely be a single RAID group spanning all disks in the server, you won't be joining multiple PVs in a VG, so striping and concatenation don't figure in your setup; but if you later move to larger external arrays (and I have a feeling that eventually you will), you'll have those capabilities at your disposal, with minimal changes to your existing configuration.
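
A minimal sketch of that simplest layout - one PV, one VG, one LV on top of the RAID device - with placeholder names:

    # put LVM on top of the RAID device
    pvcreate /dev/md0
    vgcreate vg_data /dev/md0
    lvcreate -n lv_data -l 100%FREE vg_data
    mkfs.ext4 /dev/vg_data/lv_data
    # later, after adding another PV to the VG, the LV (and filesystem) can be grown online:
    # vgextend vg_data /dev/md1
    # lvextend -r -l +100%FREE /dev/vg_data/lv_data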

