IBM is once again striving to increase market share by offering customers the chance to get a “free” IBM BladeCenter chassis. The last time it promoted a free chassis was in November, and this time the promo is effective July 5, 2011. The promotion is for a free chassis without any purchase; however, a chassis without any blades or switches is just a metal box. Regardless, this promotion is a great way to help offset some of the cost of implementing your blade server project.
You have probably heard of IBM’s ruggedized BladeCenter offerings, the BladeCenter T and HT, but did you know there is another IBM blade server offering that meets MIL-SPEC requirements and is not sold by IBM?
A few weeks ago, IBM and Emulex announced a new blade server adapter for the IBM BladeCenter and IBM System x lines, called the “Emulex Virtual Fabric Adapter for IBM BladeCenter” (IBM part # 49Y4235). Frequent readers may recall that I had a “so what” attitude when I blogged about it in October, and that was because I didn’t get it. I didn’t get what the big deal was with being able to take a 10Gb pipe and carve it up into 4 “virtual NICs”. HP has been doing this for a long time with its FlexNICs (check out VirtualKennth’s blog for great detail on this technology), so I didn’t see the value in what IBM and Emulex were trying to do. But now I understand. Before I get into this, let me remind you of what this adapter is. The Emulex Virtual Fabric Adapter (CFFh) for IBM BladeCenter is a dual-port 10 Gb Ethernet card that supports 1 Gbps or 10 Gbps traffic, or up to eight virtual NIC devices.
This adapter hopes to address three key I/O issues:
1. The need for more than two ports per server, with 6-8 recommended for virtualization
2. The need for more than 1Gb of bandwidth, for servers that can’t yet use a full 10Gb
3. The need to prepare for network convergence in the future
“1, 2, 3, 4”
I recently attended an IBM/Emulex partner event where Emulex presented a unique way to understand the value of the Emulex Virtual Fabric Adapter via the phrase “1, 2, 3, 4”. Let me explain:
“1” – Emulex uses a single-chip architecture for these adapters. (As a non-I/O guy, I’m not sure why this matters – I welcome your comments.)
"2" – Supports two platforms: rack and blade (Easy enough to understand, but this also emphasizes that a majority of the new IBM System x servers announced this week will have the Virtual Fabric Adapter "standard")
“3” – Emulex will have three product models for IBM (one for blade servers, one for rack servers and one integrated into the new eX5 servers)
“4” – There are four modes of operation:
Legacy 1Gb Ethernet
10Gb Ethernet
iSCSI…via software entitlement ($$)
Fibre Channel over Ethernet (FCoE)…via software entitlement ($$)
This last part is key to why I think this product could be of substantial value. The adapter enables a user to begin with traditional Ethernet, then grow into 10Gb, FCoE or iSCSI without any physical change – all they need to do is buy a license (for FCoE or iSCSI).
Modes of operation
The expansion card has two modes of operation: standard physical port mode (pNIC) and virtual NIC (vNIC) mode.
In vNIC mode, each physical port appears to the blade server as four virtual NICs with a default bandwidth of 2.5 Gbps per vNIC. Bandwidth for each vNIC can be configured from 100 Mbps to 10 Gbps, up to a maximum of 10 Gb per virtual port.
In pNIC mode, the expansion card can operate as a standard 10 Gbps or 1 Gbps 2-port Ethernet expansion card.
As previously mentioned, a future entitlement purchase will allow for up to two FCoE ports or two iSCSI ports. The FCoE and iSCSI ports can be used in combination with up to six Ethernet ports in vNIC mode, up to a maximum of eight total virtual ports.
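The carving rules above (four vNICs per physical port, each configurable from 100 Mbps to 10 Gbps) can be sketched as a simple validation routine. This is purely illustrative – the function and constant names are my own assumptions, not Emulex’s actual configuration interface:

```python
# Illustrative sketch of the vNIC carving rules described above.
# Not Emulex's real API -- all names here are assumptions.

PORT_SPEED_GBPS = 10.0   # each physical 10Gb port
VNICS_PER_PORT = 4       # vNIC mode: four vNICs per physical port
MIN_VNIC_GBPS = 0.1      # 100 Mbps lower bound per vNIC
MAX_VNIC_GBPS = 10.0     # 10 Gbps upper bound per virtual port

def validate_port(vnic_gbps):
    """Check one physical port's vNIC bandwidth assignments."""
    if len(vnic_gbps) > VNICS_PER_PORT:
        raise ValueError("at most %d vNICs per physical port" % VNICS_PER_PORT)
    for bw in vnic_gbps:
        if not MIN_VNIC_GBPS <= bw <= MAX_VNIC_GBPS:
            raise ValueError("each vNIC must be 100 Mbps to 10 Gbps, got %g Gb" % bw)
    return True

# Default carve-up: the 10Gb pipe split evenly = 2.5 Gbps per vNIC
default = [PORT_SPEED_GBPS / VNICS_PER_PORT] * VNICS_PER_PORT
validate_port(default)
```

Note that the per-vNIC cap is 10 Gbps, so the assignments on a port are not required to sum to the physical port speed.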
Mode – IBM Switch Compatibility
vNIC – BNT Virtual Fabric Switch
pNIC – BNT, IBM Pass-Thru, Cisco Nexus
FCoE – BNT or Cisco Nexus
iSCSI Acceleration – all IBM 10GbE switches
I really think the “one card can do it all” concept works well with the IBM BladeCenter design, and I think we’ll start seeing more and more customers move toward this single-card approach.
Comparison to HP Flex-10
I’ll be the first to admit I’m not a network or storage guy, so I’m not really qualified to compare this offering to HP’s Flex-10; however, IBM has created a very clever video that does some comparisons. Take a few minutes to watch and let me know your thoughts.
I recently heard some rumours about IBM’s BladeCenter products that I thought I would share – but FIRST let me be clear: this is purely speculation. I have no definitive information from IBM, so this may be false info, but my source is pretty credible, so…
Rumour 1: It appears IBM may call it the HS43 (not HS42, as I first thought). I’m not sure why IBM would skip the “HS42” nomenclature, but I guess it doesn’t really matter. This is rumoured to be released in March 2010.
Rumour 2: It seems that I was right in that the 4-socket offering will be a double-wide server; however, it appears IBM is working with Intel to provide a 2-socket Intel Nehalem EX blade as the foundation of the HS43. This means that you could start with a 2-socket blade, then “snap on” a second to make it a 4-socket offering – but wait, there’s more… It seems that IBM is going to enable these blade servers to grow to up to 8 sockets by snapping 4 x 2-socket servers together. If my earlier speculations (http://bladesmadesimple.com/2009/09/ibm-announces-4-socket-intel-blade-server/) are accurate, this means you could have 8 sockets, 64 cores, 96 DIMMs, and 1.5TB of RAM (using 16GB per DIMM slot) all in a single BladeCenter chassis. This, of course, would take up 4 blade server slots. Now the obvious question around this bit of news is WHY would anyone do this? The current BladeCenter H only holds 14 servers, so you would only be able to get 3 of these monster servers into a chassis. Feel free to offer up some comments on what you think about this.
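The back-of-the-napkin math behind that monster configuration is easy to verify. A quick sketch, assuming 12 DIMM slots per socket and the 8-core Nehalem EX parts (both assumptions on my part, consistent with the 96-DIMM/1.5TB figures above):

```python
# Sanity-check the rumoured 8-socket HS43 configuration.
# Assumptions: 12 DIMM slots per socket, 8 cores per Nehalem EX socket,
# 16GB DIMMs -- figures taken from the speculation above.

modules = 4                  # 4 x 2-socket blade modules snapped together
sockets = modules * 2        # 8 sockets total
cores = sockets * 8          # 8-core Nehalem EX parts -> 64 cores
dimm_slots = sockets * 12    # 12 DIMM slots per socket -> 96 DIMMs
ram_gb = dimm_slots * 16     # 16GB per DIMM slot -> 1536GB (1.5TB)

print(sockets, cores, dimm_slots, ram_gb)
```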
Rumour 3: IBM’s BladeCenter S chassis currently uses local drives that are 3.5″. The industry is obviously moving to smaller 2.5″ drives, so it’s only natural that the BladeCenter S drive cage will need to be updated to provide 2.5″ drives. Rumour is that this is coming in April 2010 and it will offer up to 24 x 2.5″ SAS or SATA drives.
Rumour 4: What’s missing from the BladeCenter S right now that HP currently offers? A tape drive. Rumour has it that IBM will be adding a “TS Family” tape drive offering to the BladeCenter S in upcoming months. This makes total sense and is much needed. Customers buying the BladeCenter S are typically smaller offices or branch offices, so a local backup device is a critical component to ensuring data protection. I’m not sure if this will take up a blade slot (like HP’s model) or be a replacement for one of the 2 drive cages. I would imagine it will be the latter, since the BladeCenter S architecture allows all servers to connect to the drive cages, but we’ll see.
That’s all I have. I’ll continue to keep you updated as I hear rumours or news.
According to IBM’s System x and BladeCenter x86 Server Blog, the IBM BladeCenter HS22 server has posted the best SPECweb2005 score ever from a blade server. With a SPECweb2005 supermetric score of 75,155, IBM has reached a benchmark no other blade has hit to date. The SPECweb2005 benchmark is designed to be a neutral, equal benchmark for evaluating the performance of web servers. According to the IBM blog, the score is derived from three different workloads measured: banking, e-commerce, and support.
The HS22 achieved these results using two Quad-Core Intel Xeon Processor X5570 (2.93GHz with 256KB L2 cache per core and 8MB L3 cache per processor—2 processors/8 cores/8 threads). The HS22 was also configured with 96GB of memory, the Red Hat Enterprise Linux® 5.4 operating system, IBM J9 Java® Virtual Machine, 64-bit Accoria Rock Web Server 1.4.9 (x86_64) HTTPS software, and Accoria Rock JSP/Servlet Container 1.3.2 (x86_64).
It’s important to note that these results have not yet been “approved” by SPEC, the group that posts the results, but as soon as they are, they’ll be published at http://www.spec.org/osg/web2005
The IBM HS22 is IBM’s most popular blade server with the following specs:
up to 2 x Intel 5500 Processors
12 memory slots for a current maximum of 96GB of RAM
2 hot swap hard drive slots capable of running RAID 1 (SAS or SATA)
Emulex and IBM announced today the availability of a new Emulex expansion card for blade servers that allows up to 8 virtual NICs per card. The “Emulex Virtual Fabric Adapter for IBM BladeCenter” (IBM part # 49Y4235) is a CFFh expansion card based on industry-standard PCIe architecture that can operate as a “Virtual NIC Fabric Adapter” or as a dual-port 10 Gb or 1 Gb Ethernet card.
When operating as a Virtual NIC (vNIC), each of the 2 physical ports appears to the blade server as 4 virtual NICs, for a total of 8 virtual NICs per card. According to IBM, the default bandwidth for each vNIC is 2.5 Gbps. The cool feature of this mode is that the bandwidth for each vNIC can be configured from 100 Mbps to 10 Gbps, up to a maximum of 10 Gb per virtual port. The one catch with this mode is that it ONLY operates with the BNT Virtual Fabric 10Gb Switch Module, which provides independent control for each vNIC. This means no connection to Cisco Nexus…yet. According to Emulex, firmware updates coming later (Q1 2010??) will allow this adapter to handle FCoE and iSCSI as a feature upgrade. Not sure if that means compatibility with the Cisco Nexus 5000 or not. We’ll have to wait and see.
When used as a normal Ethernet adapter (10Gb or 1Gb), aka “pNIC mode”, the card is viewed as a standard 10 Gbps or 1 Gbps 2-port Ethernet expansion card. The big difference here is that it will work with any available 10 Gb switch or 10 Gb pass-thru module installed in I/O module bays 7 and 9.
So What?
I’ve known about this adapter since VMworld, but I haven’t blogged about it because I just don’t see a lot of value. HP has had this functionality for over a year now in its Virtual Connect Flex-10 offering, so this technology is nothing new. Yes, it would be nice to set up a NIC in VMware ESX that only uses 200Mb of a pipe, but what’s the difference between a virtual NIC that “thinks” it can only use 200Mb and a big fat 10Gb pipe shared by all of your I/O traffic? I’m just not sure, but I’m open to any comments or thoughts.