IBM Announces Emulex Virtual Fabric Adapter for BladeCenter…So?
Emulex and IBM announced today the availability of a new Emulex expansion card for blade servers that allows up to eight virtual NICs to be presented per adapter (four per physical port). The "Emulex Virtual Fabric Adapter for IBM BladeCenter" (IBM part # 49Y4235) is a CFF-H expansion card based on industry-standard PCIe architecture that can operate either as a virtual NIC fabric adapter or as a dual-port 10 Gb or 1 Gb Ethernet card.
When operating in Virtual NIC (vNIC) mode, each of the two physical ports appears to the blade server as four virtual NICs, for a total of eight virtual NICs per card. According to IBM, the default bandwidth for each vNIC is 2.5 Gbps. The cool feature of this mode is that each vNIC's bandwidth can be configured anywhere from 100 Mbps up to 10 Gbps. The one catch is that vNIC mode ONLY operates with the BNT Virtual Fabric 10Gb Switch Module, which provides independent control for each vNIC. This means no connection to Cisco Nexus…yet. According to Emulex, firmware updates coming later (Q1 2010??) will allow this adapter to handle FCoE and iSCSI as a feature upgrade. I'm not sure whether that means compatibility with the Cisco Nexus 5000 or not. We'll have to wait and see.
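To make the carving math concrete, here's a toy Python sketch (mine, not IBM's or Emulex's) of the sizing rules described above: four vNICs per 10 Gb physical port, each configurable between 100 Mbps and 10 Gbps, defaulting to 2.5 Gbps. The announcement doesn't say whether the vNICs on one port may oversubscribe the line rate, so this just reports the total.

```python
# Toy model of the vNIC sizing rules as described in the announcement.
VNIC_MIN_MBPS = 100        # smallest allowed vNIC
VNIC_MAX_MBPS = 10_000     # largest allowed vNIC
VNIC_DEFAULT_MBPS = 2_500  # IBM's stated default
PORT_LINE_RATE_MBPS = 10_000

def check_port(vnics_mbps):
    """Validate one physical port's four vNIC allocations (in Mbps)."""
    assert len(vnics_mbps) == 4, "four vNICs per physical port"
    for bw in vnics_mbps:
        assert VNIC_MIN_MBPS <= bw <= VNIC_MAX_MBPS, f"{bw} Mbps out of range"
    total = sum(vnics_mbps)
    status = "over" if total > PORT_LINE_RATE_MBPS else "within"
    print(f"vNICs {vnics_mbps} -> {total} Mbps total "
          f"({status} the {PORT_LINE_RATE_MBPS} Mbps line rate)")

check_port([VNIC_DEFAULT_MBPS] * 4)   # the defaults: 4 x 2.5 Gbps = exactly 10 Gb
check_port([200, 300, 4_500, 5_000])  # an uneven carve-up, e.g. a thin management vNIC
```

Note how the 2.5 Gbps default works out: four default vNICs consume exactly one port's 10 Gb line rate.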
When used as a normal Ethernet adapter (10Gb or 1Gb), aka "pNIC mode," the card is seen as a standard dual-port 10 Gbps or 1 Gbps Ethernet expansion card. The big difference here is that it will work with any available 10 Gb switch or 10 Gb pass-thru module installed in I/O module bays 7 and 9.
So What?
I’ve known about this adapter since VMworld, but I haven’t blogged about it because I just don’t see a lot of value. HP has had this functionality for over a year now in its Virtual Connect Flex-10 offering, so this technology is nothing new. Yes, it would be nice to set up a NIC in VMware ESX that only uses 200 Mbps of a pipe, but what’s the difference between a carved-up NIC that “thinks” it can only use 200 Mbps and a big fat 10 Gb pipe shared by all of your I/O traffic? I’m just not sure, but I’m open to any comments or thoughts.
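For comparison, ESX can already impose that kind of cap in software with vSwitch traffic shaping (egress only on a standard vSwitch). Here's a minimal pyVmomi sketch of my own, not anything from IBM or Emulex; the host name, credentials, and port group name are hypothetical placeholders:

```python
# Cap a standard vSwitch port group at ~200 Mbps via ESX traffic shaping.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esx01.example.com", user="root", pwd="password",
                  sslContext=ssl._create_unverified_context())
try:
    # First datacenter -> first compute resource -> first host (placeholder inventory).
    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    net_sys = host.configManager.networkSystem

    # Reuse the existing port group spec so vSwitch and VLAN settings are kept.
    pg = next(p for p in net_sys.networkInfo.portgroup
              if p.spec.name == "VM Network")
    spec = pg.spec
    spec.policy.shapingPolicy = vim.host.NetworkPolicy.TrafficShapingPolicy(
        enabled=True,
        averageBandwidth=200 * 1000 * 1000,  # 200 Mbps sustained, in bits/sec
        peakBandwidth=200 * 1000 * 1000,     # no bursting above the cap
        burstSize=100 * 1024 * 1024,         # burst allowance, in bytes
    )
    net_sys.UpdatePortGroup(pg.spec.name, spec)
finally:
    Disconnect(si)
```

The practical difference is where the limit lives: the vNIC approach enforces it in the adapter hardware below the hypervisor, while shaping like this is a per-port-group software policy.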