Emulex and IBM today announced the availability of a new Emulex expansion card for blade servers that allows up to 4 virtual NICs to be assigned to each physical port (8 per card). The “Emulex Virtual Fabric Adapter for IBM BladeCenter” (IBM part # 49Y4235) is a CFF-H expansion card based on industry-standard PCIe architecture that can operate as a “Virtual NIC Fabric Adapter” or as a dual-port 10 Gb or 1 Gb Ethernet card.
When operating in Virtual NIC (vNIC) mode, each of the 2 physical ports appears to the blade server as 4 virtual NICs, for a total of 8 virtual NICs per card. According to IBM, the default bandwidth for each vNIC is 2.5 Gbps. The cool feature about this mode is that each vNIC’s bandwidth can be configured anywhere from 100 Mbps up to 10 Gbps per virtual port. The one catch with this mode is that it ONLY operates with the BNT Virtual Fabric 10Gb Switch Module, which provides independent control for each vNIC. This means no connection to Cisco Nexus…yet. According to Emulex, firmware updates coming later (Q1 2010??) will allow this adapter to handle FCoE and iSCSI as a feature upgrade. Not sure if that means compatibility with the Cisco Nexus 5000 or not. We’ll have to wait and see.
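To make the carving model concrete, here’s a minimal sketch in plain Python (my own illustration, not an Emulex or IBM management API; all names are made up) of the numbers described above: 2 ports x 4 vNICs at a 2.5 Gbps default, with each vNIC adjustable between 100 Mbps and 10 Gbps.

```python
# Illustrative sketch only -- not an Emulex/IBM tool or API.

DEFAULT_VNIC_GBPS = 2.5   # default per-vNIC bandwidth per the announcement
MIN_VNIC_GBPS = 0.1       # 100 Mbps lower bound
MAX_VNIC_GBPS = 10.0      # 10 Gbps upper bound
VNICS_PER_PORT = 4
PHYSICAL_PORTS = 2

def new_adapter():
    """Return the default layout: 2 ports x 4 vNICs = 8 vNICs at 2.5 Gbps each."""
    return {
        (port, vnic): DEFAULT_VNIC_GBPS
        for port in range(PHYSICAL_PORTS)
        for vnic in range(VNICS_PER_PORT)
    }

def set_vnic_bandwidth(adapter, port, vnic, gbps):
    """Set one vNIC's bandwidth, enforcing the 100 Mbps - 10 Gbps range."""
    if not MIN_VNIC_GBPS <= gbps <= MAX_VNIC_GBPS:
        raise ValueError(f"vNIC bandwidth must be {MIN_VNIC_GBPS}-{MAX_VNIC_GBPS} Gbps")
    adapter[(port, vnic)] = gbps

adapter = new_adapter()
set_vnic_bandwidth(adapter, port=0, vnic=0, gbps=0.2)   # e.g. a small management vNIC
print(sum(adapter.values()), "Gbps allocated across", len(adapter), "vNICs")
```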
When used as a normal Ethernet adapter (10Gb or 1Gb), aka “pNIC mode”, the card is viewed as a standard 10 Gbps or 1 Gbps 2-port Ethernet expansion card. The big difference here is that it will work with any available 10 Gb switch or 10 Gb pass-thru module installed in I/O module bays 7 and 9.
So What?
I’ve known about this adapter since VMworld, but I haven’t blogged about it because I just don’t see a lot of value. HP has had this functionality for over a year now in its Virtual Connect Flex-10 offering, so this technology is nothing new. Yes, it would be nice to set up a NIC in VMware ESX that only uses 200 Mbps of a pipe, but what’s the difference between having a virtual NIC that “thinks” it can only use 200 Mbps and simply giving all of your I/O traffic one big fat 10Gb pipe? I’m just not sure, but I’m open to any comments or thoughts.
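To illustrate that question, here’s a rough, hypothetical Python sketch (my own numbers and names, not anything from ESX or the adapter) of the trade-off: with fixed per-vNIC caps a busy flow can’t borrow an idle neighbor’s bandwidth, while one shared 10Gb pipe lets everything contend for the full pipe.

```python
# Illustrative comparison with assumed traffic numbers: fixed vNIC caps vs. one shared pipe.
demands_gbps = {"vmotion": 6.0, "mgmt": 0.05, "vm_traffic": 5.0}   # hypothetical loads

def fixed_caps(demands, caps):
    """Each flow is limited to its own vNIC cap, even if other vNICs sit idle."""
    return {name: min(load, caps[name]) for name, load in demands.items()}

def shared_pipe(demands, pipe_gbps=10.0):
    """All flows share one 10 Gb pipe, split proportionally when oversubscribed."""
    total = sum(demands.values())
    scale = min(1.0, pipe_gbps / total)
    return {name: load * scale for name, load in demands.items()}

caps = {"vmotion": 2.5, "mgmt": 2.5, "vm_traffic": 2.5}   # default 2.5 Gbps carving (fourth vNIC unused)
print("fixed caps :", fixed_caps(demands_gbps, caps))
print("shared pipe:", shared_pipe(demands_gbps))
```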
I tend to agree with you. The problem with the HP and IBM technology to carve up the NICs is that the industry is quickly moving the opposite way, towards convergence. While the technology is nice, and I do see some advantages for VMware environments on blades by carving up the vNICs, I see the market evolving quickly past this as 10G Ethernet becomes the default “plumbing” in the datacenter.
Aaron – thanks for the comment. I really think that we are on the verge of 10G Ethernet becoming standard in all areas. When that happens, the “virtual” NICs will have little appeal. I guess we’ll see.
One more point along those lines: what I really like about convergence is the ability to run whatever protocol the environment calls for, whether that’s Ethernet, FCoE, iSCSI, or NFS…
Until these vNIC technologies do the same, I don’t see them as cutting edge.
Again, I can see a use in blade/VMware environments where you can carve up a bunch of NICs, but I would love to see 10 x 1Gb instead of 4 x 2.5Gb. Customers are used to the 1Gb pipe, so you don’t go down a rabbit hole trying to explain that it’s being carved up in non-traditional ways.
Thanks again for all your work keeping the blog up! I’m really enjoying it!