We'll soon have to build a working setup out of the following:
- 1x new server with 6 ethernet interfaces
- 2x old servers with 4 ethernet interfaces
- vSphere Essentials Plus on those 3 servers
- 1x dual controller iSCSI SAN
- 1x switch
- 1x ethernet uplink to the server room backbone
I was thinking about this network setup:
- 2x ethernet ports from each server teamed up (vSwitch0) and going to the switch
- VLANs on those ports and on the switch to segregate WAN, LAN, vMotion, management
- 2x ethernet ports from each server to the two iSCSI controllers (jumbo frames)
- 2x unused ethernet ports on the new server
- one port of the switch for the uplink
This seems to me the most logical way to use the current hardware, and it keeps the configuration consistent across all three hosts. The storage path will have no single point of failure, while the network side will have to wait for a second switch...
Other than getting that second switch, is there anything else that could be improved in the current setup? Should I set aside another VLAN for vSphere Replication if I want to try that, or for anything else?
Answer
Your plan is good as-is. You won't need a dedicated VLAN for vSphere Replication.
For small installations I often go with 4-pNIC solutions. Below is a 2 x vSwitch setup with VM traffic on vSwitch0 and storage on vSwitch1.
Public, management, private, and vMotion networks sit on their respective VLANs, trunked to the vSwitch uplinks. The vSwitch teaming is set to use both adapters, but the individual port groups have their own overrides. For example, vMotion1 and vMotion2 each have one active and one standby adapter, and their configurations are the inverse of each other.
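If you prefer to script this instead of clicking through the vSphere Client, something along these lines would do it from the ESXi shell. This is only a rough sketch: vmnic0/vmnic1 as the vSwitch0 uplinks, the port group names, and the VLAN IDs (10, 20, 30) are placeholders to adapt to your own scheme.

```
# Both uplinks active at the vSwitch level
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic1
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 \
    --active-uplinks=vmnic0,vmnic1

# VLAN-tagged port groups for VM and management traffic (IDs are placeholders)
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=Public
esxcli network vswitch standard portgroup set --portgroup-name=Public --vlan-id=10
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=Management
esxcli network vswitch standard portgroup set --portgroup-name=Management --vlan-id=20

# vMotion port groups with inverse active/standby overrides
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=vMotion1
esxcli network vswitch standard portgroup set --portgroup-name=vMotion1 --vlan-id=30
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion1 \
    --active-uplinks=vmnic0 --standby-uplinks=vmnic1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=vMotion2
esxcli network vswitch standard portgroup set --portgroup-name=vMotion2 --vlan-id=30
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion2 \
    --active-uplinks=vmnic1 --standby-uplinks=vmnic0
```

The port-group-level failover commands are what give each vMotion port group its own active/standby ordering while the vSwitch itself keeps using both adapters.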
While I normally use NFS, your iSCSI setup will look similar but will use MPIO instead. Be sure to configure active-active load balancing for your iSCSI LUNs, and check the storage vendor's recommendations on round-robin tuning.
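As a sketch of the iSCSI side: vmnic2/vmnic3 as the storage uplinks, the two subnets, vmhba33, and the naa device ID are all placeholders here, and the round-robin IOPS value should come from your array vendor's documentation.

```
# Storage vSwitch with jumbo frames and one uplink per iSCSI path
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

# One port group / vmkernel port per path, each pinned to a single uplink
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-A
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-A --active-uplinks=vmnic2
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-A --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.10.10.11 --netmask=255.255.255.0 --type=static
# (repeat for iSCSI-B / vmk3 pinned to vmnic3, on the second controller's subnet)

# Enable software iSCSI and bind both vmkernel ports to it for MPIO
# (vmhba33 is a placeholder; check 'esxcli iscsi adapter list' for the real name)
esxcli iscsi software set --enabled=true
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3

# Round-robin path policy on the LUN; tune IOPS per the vendor's guidance
esxcli storage nmp device set --device=naa.<device-id> --psp=VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.<device-id> --type=iops --iops=1
```

Pinning each vmkernel port to a single uplink is what lets the software iSCSI initiator's port binding create two independent paths for the round-robin policy to spread I/O across.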