When a hypervisor like XenServer or vSphere can run on diskless nodes (e.g. by booting from flash cards or from the network) and VM storage is handled via a SAN, is there any good use for local disks?
Would it be better to have those disks even if not used to boot the hypervisor or to hold VMs?
What are the reasons, if any, to choose completely diskless servers vs. having some local storage?
Answer
I don't have a ton of XenServer experience, but here's some information coming from a VMware background.
vSphere ESXi runs completely in memory after boot, so local storage just for the ESXi installation is generally considered overkill. Booting from an SD card, USB stick, or PXE (network) is supported under ESXi, so there are plenty of options.
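As an illustration of the network-boot route, ESXi's PXE boot typically chains the syslinux `mboot.c32` module to the installer's `boot.cfg`. A minimal sketch of a `pxelinux.cfg` entry might look like this (the `esxi/` path is a hypothetical TFTP directory, not something from the original post):

```
DEFAULT esxi
LABEL esxi
  # mboot.c32 is the syslinux multiboot module ESXi uses for PXE boot;
  # -c points it at the boot.cfg shipped with the ESXi image.
  KERNEL mboot.c32
  APPEND -c esxi/boot.cfg
```

With this in place, a diskless host can load the hypervisor entirely over the network and keep running from memory.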
Servers with no local storage have some benefits, primarily:
- Lower cost
- Lower power consumption
- Less heat generated by server
However, this doesn't mean local storage can't be useful. First and foremost, you can configure ESXi to use local storage for VM swapfiles. This reduces load on your SAN and can improve performance under some workloads. Since these swapfiles are small and temporary, you can use small (70-150 GB) 15k RPM SAS drives to get good performance at a low price.
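To see why a small drive is usually enough: a VM's swapfile size equals its configured memory minus its memory reservation, so the worst case is the sum of that difference across the VMs the host can run. A rough sizing sketch with hypothetical numbers (20 VMs, 8 GB configured, 2 GB reserved each):

```python
# Rough sizing sketch for a local swapfile datastore.
# Per-VM swapfile size = configured memory - memory reservation;
# the local datastore must cover the sum across all resident VMs.

def swapfile_gb(configured_mem_gb, reservation_gb):
    """Worst-case swapfile size for one VM, in GB."""
    return max(configured_mem_gb - reservation_gb, 0)

# Hypothetical host: 20 VMs, each 8 GB configured with 2 GB reserved.
vms = [(8, 2)] * 20
total = sum(swapfile_gb(mem, res) for mem, res in vms)
print(total)  # 120 -> fits comfortably on a small 15k SAS drive
```

Raising per-VM reservations shrinks the swapfiles further, at the cost of committing more physical RAM.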
Also, new in ESXi 5.5 is vSphere Flash Read Cache, which lets you use SSDs local to the ESXi host to intelligently cache VM data. This reduces load on the SAN and improves performance for those VMs. It isn't cheap, but it can speed up some workloads significantly.
So, hypervisor configurations with local disk can be "better" if your workload can capitalize on localized swapfiles or flash caching. If you don't think those features will help you, then there's no compelling reason to use local disk.