HP Flex-10 vs VMware vSphere Network I/O Control for VDI

I was once a huge fan of HP’s Virtual Connect Flex-10 10Gb Ethernet Modules, but with the new enhancements in VMware vSphere 5, I don’t think I would recommend them for virtual environments anymore. The ability to divide the two onboard network ports into up to eight NICs was a great feature, and it still is if you have to do physical deployments of servers. I do realize there is the HP Virtual Connect FlexFabric 10Gb/24-port Module, but I live in the land of iSCSI and NFS, so that is off the table for me.

With vSphere 5.0, VMware improved its Virtual Distributed Switch (VDS) functionality and overall networking capability, so now it’s time to recoup some of that money on the hardware side. The way I see it, most people with a chassis full of blade servers probably already have VMware Enterprise Plus licenses, so they are already entitled to VDS; however, what you may not have known is that customers with VMware View Premier licenses are also entitled to use VDS. Some of the newest features found in VDS 5 are:

· Support for NetFlow v5
· Port mirroring
· Support for LLDP (not just Cisco!)
· QoS
· Improved priority features for VM traffic
· Network I/O Control (NIOC) for NFS

The last feature is the one that makes me think I don’t need to use HP’s Flex-10 anymore. Network I/O Control (NIOC) allows you to assign shares to your traffic types, set priorities and limits, and control congestion, all in a dynamic fashion. What I particularly like about NIOC compared to Flex-10 is that you avoid the bandwidth wasted by hard limits. In the VDI world, the workload is very bursty. One example can be seen with vMotion: when I’m performing maintenance work in a virtual environment, it sure would be nice to have more than 2 Gb/s of the link to move the desktops off; when you have to move 50+ desktops per blade, you sit there and wait a while. Of course, when this is your design, you wait, because you wouldn’t want to suffer performance problems during the day from a lack of bandwidth for the other services.
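To put a rough number on why that waiting bothers me, here’s a minimal back-of-the-napkin sketch comparing how long it might take to evacuate a blade over Flex-10’s fixed 2 Gb/s vMotion carve-out versus NIOC letting vMotion burst across an otherwise idle 10 Gb link. The 50 desktops come from the example above; the 2 GB of active memory per desktop is purely an assumption for illustration.

```python
# Back-of-the-napkin math: evacuating one blade's worth of desktops.
# Assumptions for illustration only: 50 desktops per blade (from the example
# above) and roughly 2 GB of active memory moved per desktop vMotion.
DESKTOPS_PER_BLADE = 50
GB_PER_DESKTOP = 2
DATA_GBITS = DESKTOPS_PER_BLADE * GB_PER_DESKTOP * 8  # total gigabits to move

def evacuation_minutes(link_gbps: float) -> float:
    """Time to push all that memory at a given effective link speed."""
    return DATA_GBITS / link_gbps / 60

print(f"Flex-10, fixed 2 Gb/s slice: ~{evacuation_minutes(2.0):.1f} minutes")
print(f"NIOC, idle 10 Gb link      : ~{evacuation_minutes(10.0):.1f} minutes")
```

Under contention, of course, NIOC falls back to the share-based split shown later in this post.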

A typical Flex-10 configuration might break down the onboard NICs (LOMs) something like this:

Bandwidth   vmnic   NIC/Slot   Port   Function
500 Mb/s    0       LOM        0A     Management
2 Gb/s      1       LOM        0B     vMotion
3.5 Gb/s    2       LOM        0C     VM Networking
4 Gb/s      3       LOM        0D     Storage (iSCSI/NFS)
500 Mb/s    4       LOM        1A     Management
2 Gb/s      5       LOM        1B     vMotion
3.5 Gb/s    6       LOM        1C     VM Networking
4 Gb/s      7       LOM        1D     Storage (iSCSI/NFS)
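As a quick sanity check on a layout like this, here’s a tiny sketch of the table above expressed as data; the only rule is that the FlexNIC carve-outs on each LOM can’t exceed the 10 Gb port speed, and each function is stuck with its slice no matter how idle the rest of the link is.

```python
# The Flex-10 carve-outs from the table above, per LOM, in Mb/s.
flex10_split_mbps = {
    "Management": 500,
    "vMotion": 2_000,
    "VM Networking": 3_500,
    "Storage (iSCSI/NFS)": 4_000,
}

PORT_SPEED_MBPS = 10_000
assert sum(flex10_split_mbps.values()) <= PORT_SPEED_MBPS

# These are hard limits: even with the rest of the port idle,
# vMotion can never use more than its 2 Gb/s slice.
print(f"vMotion ceiling: {flex10_split_mbps['vMotion']} Mb/s "
      f"of a {PORT_SPEED_MBPS} Mb/s port")
```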

To get a similar setup with NIOC, it might look something like this:

Traffic Type       Shares
Management         5
NFS                50
Virtual Machine    40
vMotion            20
FT                 (not used)
iSCSI              (not used)
Replication        (not used)

Total shares from above would be: 5 + 50 + 40 + 20 = 115

In this example, FT, iSCSI, and Replication don’t have to be counted since they will not be used. The shares only kick in if there is contention, and they are only applied for traffic types that actually exist on the link. I think it would be best practice to put a limit on vMotion traffic, as multiple vMotions kicking off at once could easily consume the bandwidth. I think 8000 Mbps would be a reasonable limit with this sort of setup.

Management: 5 shares; (5/115) x 10,000 Mbps = 434.78 Mbps

NFS: 50 shares; (50/115) x 10,000 Mbps = 4347.83 Mbps

Virtual Machine: 40 shares; (40/115) x 10,000 Mbps = 3478.26 Mbps

vMotion: 20 shares; (20/115) x 10,000 Mbps = 1739.13 Mbps
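If you want to sanity-check those numbers or rerun them with your own values, here’s a minimal sketch of the same share arithmetic; the share values and the 8000 Mbps vMotion limit are just the ones from this example, not anything pulled from vCenter.

```python
# Worst-case bandwidth per traffic type under full contention on a 10 Gb link,
# using the same share math as the example above.
LINK_MBPS = 10_000

shares = {"Management": 5, "NFS": 50, "Virtual Machine": 40, "vMotion": 20}
limits_mbps = {"vMotion": 8_000}  # the suggested hard cap on vMotion

total_shares = sum(shares.values())  # 5 + 50 + 40 + 20 = 115

for traffic, share in shares.items():
    entitled = share / total_shares * LINK_MBPS  # slice of the link under contention
    capped = min(entitled, limits_mbps.get(traffic, LINK_MBPS))
    print(f"{traffic:15}: {capped:8.2f} Mbps")
```

Note that the 8000 Mbps cap never changes the contention numbers; it only stops vMotion from bursting past 8 Gb/s when the rest of the link happens to be idle.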

I think the benefits plus the cost savings are worth moving ahead with a 10Gb design using NIOC. Below are some list prices taken on November 28, 2011. Which one are you going to choose?

Flex-10

[List price screenshot: HP Virtual Connect Flex-10 Ethernet Module]

HP 6120G-XG

[List price screenshot: HP 6120G-XG Blade Switch]


Dwayne is the newest contributor to BladesMadeSimple.com and is the author of IT Blood Pressure (http://itbloodpressure.com/), where he provides tips on virtual desktops and advice on best practices in the IT industry, with a particular focus on healthcare. In his day job, Dwayne is an Infrastructure Specialist in the healthcare and energy sectors in Western Canada.