*Disclaimer: I am not an EMC expert, nor do I guarantee that nothing horrible will happen to your equipment or vSphere environment. Use this guide at your own risk.*
Note that the VNXe has a single network link from each SP to each switch. This is the simplest way to cable it. The SPs can handle NIC teaming via LACP trunks, but we're going to use vSphere's round robin path selection to get the same functionality. LACP trunks require a higher-end switch, such as a Cisco Catalyst 3560 or an HP ProCurve 2910al. Using vSphere's built-in capabilities you can get by with a lesser switch, but be careful about your throughput and switching capacity. You can get burned skimping on switches.
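As a rough sketch of the round robin side, this is how you would set the path selection policy on an ESXi 5.x host with esxcli. The `naa.` device identifier below is a placeholder; substitute the identifier of your own VNXe LUN.

```shell
# List devices presented to the host so you can find the naa identifier
# of the VNXe LUN (the naa.60060160... value below is a placeholder)
esxcli storage nmp device list

# Set the path selection policy for that LUN to Round Robin
esxcli storage nmp device set --device naa.60060160xxxxxxxx --psp VMW_PSP_RR

# Verify the change took effect
esxcli storage nmp device list --device naa.60060160xxxxxxxx
```

You can also set Round Robin per datastore through the vSphere Client under Manage Paths, but the CLI is handy when you have several LUNs to touch.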
I chose to have each NIC on the SPs set to a different IP address. In this case:
- SP A eth2 = 192.168.50.10
- SP A eth3 = 192.168.51.10
- SP B eth2 = 192.168.50.20
- SP B eth3 = 192.168.51.20
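With those four addresses in hand, you can point the software iSCSI adapter at them using dynamic (send targets) discovery. The adapter name `vmhba33` is an assumption for illustration; check yours with `esxcli iscsi adapter list`.

```shell
# Enable the software iSCSI adapter (vmhba33 is an assumption;
# verify the adapter name with: esxcli iscsi adapter list)
esxcli iscsi software set --enabled true

# Add each SP port as a send-targets discovery address
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.50.10:3260
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.51.10:3260
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.50.20:3260
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.51.20:3260

# Rescan so ESXi picks up the new targets
esxcli storage core adapter rescan --adapter vmhba33
```

After the rescan you should see paths through both SPs for each LUN.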
Moving on to the vSphere end of things. iSCSI port binding requires that each VMkernel port be tied to a single specific physical NIC. You can assign multiple physical NICs to a vSwitch, but you must override the failover order on each VMkernel port group so that exactly one physical uplink is active for that kernel. This can get confusing, and since I don't have a whole lot of vSwitches in my environment, I chose to create one VMkernel per vSwitch. VMware advises that the number of vSwitches can impact performance, so be aware of how that may affect your environment.
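Once each VMkernel port is tied to its own uplink, the ports are bound to the software iSCSI adapter. A minimal sketch, assuming ESXi 5.x and the hypothetical names `vmhba33`, `vmk1`, and `vmk2` (check yours with `esxcli iscsi adapter list` and `esxcli network ip interface list`):

```shell
# Bind each compliant iSCSI VMkernel port to the software iSCSI adapter
# (vmhba33, vmk1, and vmk2 are assumed names; substitute your own)
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2

# Confirm the bindings
esxcli iscsi networkportal list -A vmhba33
```

The `add` command will refuse a VMkernel port whose teaming policy has more than one active uplink, which is a quick sanity check that the failover override described above is actually in place.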