I have an HP C3000 blade enclosure with a Flex-10 interconnect module and some blades running ESXi 4.1.
I'd like to have networking between the blades, and was wondering if the interconnect module can effectively act as a "virtual switch". Virtual Connect Manager seems to imply that blades can communicate within the Virtual Connect domain (in the description of the private networking functionality), but there doesn't seem to be any communication possible without an external uplink port to a physical switch.
I have the option of using a virtual switch within the ESXi hosts, but wouldn't that make vMotion unusable, with communication to the moved VM being interrupted?
What I had in mind was a virtual network between VMs at the enclosure level (not within the ESXi hosts), with VLAN tags to separate the traffic types. I'd also like to have vMotion on that virtual network, as sending traffic out of the enclosure just to come back in again seems strange to me.
In short, what options do I have for establishing networking between my blades/VMs without leaving the enclosure?
Answer
It can act as an L2 switch exactly as you state, and better still it's non-blocking: you'll have effectively 160 Gbps between servers, so you won't have any vMotion performance issues.
We use exactly the same setup and it's great. Just carve out at least 1 Gbps for VMkernel traffic (management, HA and vMotion); that leaves the rest for regular VM traffic, or you can split the remainder in two so that iSCSI traffic is kept separate as well. If you had FlexFabric you could do the same but also use the G7/Gen8 CNA capability for FC connections.
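If you'd rather script the ESXi side than click through the vSphere Client, a minimal sketch with pyVmomi (the Python bindings for the vSphere API) might look like the following. The host name, credentials, VLAN IDs and addresses are placeholders of my own, and you could just as well do the same thing in the client or with esxcfg-vswitch/esxcfg-vmknic on the host.

```python
# Sketch only: carve the traffic types apart with VLAN-tagged port groups on the
# existing vSwitch, then give vMotion its own VMkernel interface.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="blade-esxi-01.example.com", user="root",
                  pwd="password", sslContext=ctx)
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
net = host.configManager.networkSystem

# One port group per traffic type, separated by VLAN tag (IDs are arbitrary here).
for name, vlan in (("vMotion", 20), ("VM-Network", 30)):
    net.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name=name, vlanId=vlan, vswitchName="vSwitch0",
        policy=vim.host.NetworkPolicy()))

# Dedicated VMkernel interface on the vMotion port group, then mark it for vMotion.
vmk = net.AddVirtualNic(portgroup="vMotion", nic=vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False, ipAddress="192.168.20.11",
                         subnetMask="255.255.255.0")))
host.configManager.vmotionSystem.SelectVnic(vmk)

Disconnect(si)
```

On the Virtual Connect side you'd then define the matching networks and map them to the FlexNICs presented to each blade, so the tagged traffic never has to leave the enclosure.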
Basically you're on the right track and you'll be just fine.