Our current 200-user infrastructure is based on XenServer and a StarWind iSCSI SAN. The C: drives holding each virtual server's OS live in XenServer SR volumes, which are in turn virtual disks on the StarWind SAN. The data drives for a VM (say, our file server), however, are mounted with the Microsoft iSCSI initiator from within the guest OS, so the bulk of the I/O goes over iSCSI directly (well, via the hypervisor's NIC stack) to and from the SAN. That keeps us clear of XenServer's 2TB disk limit, and thin provisioning is handled by the StarWind SAN.
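For reference, the guest-side attachment looks roughly like the sketch below. The portal address and IQN are placeholders rather than our real values, and the Microsoft iSCSI initiator cmdlets are driven from Python purely for illustration:

```python
# A sketch of the guest-side iSCSI attach (run inside the VM as admin).
# The portal address and IQN below are placeholders, not real values.
import subprocess

def ps(command: str) -> None:
    """Run one PowerShell command, raising if it exits non-zero."""
    subprocess.run(["powershell", "-NoProfile", "-Command", command],
                   check=True)

# Register the SAN's portal with the Microsoft iSCSI initiator, then
# log in persistently so the data drive survives a guest reboot.
ps("New-IscsiTargetPortal -TargetPortalAddress '10.0.0.10'")
ps("Connect-IscsiTarget "
   "-NodeAddress 'iqn.2008-08.com.starwindsoftware:san1-fileserver-data' "
   "-IsPersistent $true")
```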
We're moving to a Hyper-V 2012 environment, where the choice is less clear-cut: we could instead mount the E: drive as a second VHDX, since VHDX removes the 2TB size limit and also offers thin provisioning. But that traffic still has to cross iSCSI from the Hyper-V host to the same SAN, so to me the VHDX route feels like it must be adding an extra layer and should therefore perform worse.
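For comparison, the VHDX route would look something like the following on the Hyper-V host; the path, VM name, and 4TB size are invented for illustration:

```python
# A sketch of the VHDX route (run on the Hyper-V host). The path, VM
# name, and 4TB size are invented for illustration.
import subprocess

def ps(command: str) -> None:
    subprocess.run(["powershell", "-NoProfile", "-Command", command],
                   check=True)

vhdx = r"C:\ClusterStorage\Volume1\fileserver-data.vhdx"

# -Dynamic creates a dynamically expanding (thin-provisioned) file, and
# VHDX itself allows sizes well beyond the old 2TB VHD limit.
ps(f"New-VHD -Path '{vhdx}' -SizeBytes 4TB -Dynamic")

# Attach it to the guest, where it shows up as the E: drive.
ps(f"Add-VMHardDiskDrive -VMName 'FileServer' -Path '{vhdx}'")
```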
Any words of wisdom on whether direct iSCSI or going via a VHDX is "better" would be appreciated.
Answer
The two strategies cost about the same. The VHDX does add a very thin layer, but doing the iSCSI networking from the Hyper-V parent partition is slightly cheaper than doing it from the guest, since the iSCSI traffic doesn't pass through the network virtualization stack.
The VHDX strategy, however, is far easier to manage. Personally, I'd choose the ease of management.
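To illustrate the management difference: growing a VHDX-backed data drive is a single cmdlet on the host, versus expanding the LUN on the SAN and then rescanning and extending inside the guest. A sketch, with a placeholder path and size:

```python
# A sketch of growing a VHDX-backed data drive; path and new size are
# placeholders. On Hyper-V 2012 the VM must be off to resize; 2012 R2
# added online resize for VHDX on a virtual SCSI controller.
import subprocess

def ps(command: str) -> None:
    subprocess.run(["powershell", "-NoProfile", "-Command", command],
                   check=True)

vhdx = r"C:\ClusterStorage\Volume1\fileserver-data.vhdx"
ps(f"Resize-VHD -Path '{vhdx}' -SizeBytes 6TB")
# The guest still extends its NTFS volume afterwards (Disk Management
# or Resize-Partition inside the VM).
```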