IBM’s New Approach to Ethernet/Fibre Traffic

Okay, I’ll be the first to admit when I’m wrong – or when I provide wrong information.

A few days ago, I commented that no one has yet offered the ability to split out Ethernet and Fibre traffic at the chassis level (as opposed to using a top-of-rack switch). I quickly found out that I was wrong – IBM now has the ability to separate the Ethernet fabric and the Fibre fabric at the BladeCenter H, so if you are interested, grab a cup of coffee and enjoy this read.

First, a bit of background.  The traditional method of providing Ethernet and Fibre I/O in a blade infrastructure was to integrate 6 Ethernet switches and 2 Fibre switches into the blade chassis, which provides 6 NICs and 2 Fibre HBAs per blade server.  This is a costly method, and it limits the scalability of a blade server.

A newer, increasingly popular method is to converge the I/O traffic, using a single converged network adapter (CNA) to carry the Ethernet and the Fibre traffic over a single 10Gb connection to a top-of-rack (TOR) switch, which then sends the Ethernet traffic to the Ethernet fabric and the Fibre traffic to the Fibre fabric.  This reduces the number of physical cables coming out of the blade chassis, offers higher bandwidth and reduces the overall switching costs.  Up to now, IBM has offered two different methods to enable converged traffic:

Method #1: connect a pair of 10Gb Ethernet Pass-Thru Modules into the blade chassis, add a CNA on each blade server, then connect the pass-thru modules to a top-of-rack convergence switch from Brocade or Cisco.  This is the least expensive method; however, since pass-thru modules are being used, a connection is required on the TOR convergence switch for every blade server being connected.  This means a 14-blade infrastructure would eat up 14 ports on the convergence switch, potentially leaving the switch with very few available ports.

Method #2: connect a pair of IBM Cisco Nexus 4001i switches, add a CNA on each blade server, then connect the Nexus 4001i to a Cisco Nexus 5000 top-of-rack switch.  This method enables you to use as few as 1 uplink connection from the blade chassis to the Nexus 5000 top-of-rack switch; however, it is more costly and you have to invest in another Cisco switch.
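To put the port-count difference in perspective, here's a rough back-of-the-envelope sketch (Python, purely illustrative – the 14-blade count and single-uplink option come straight from the two methods above):

```python
# Back-of-the-envelope TOR port usage for the two converged methods above.
blades = 14                       # a fully populated BladeCenter H

# Method #1: 10Gb pass-thru -- one TOR convergence-switch port per blade,
# per pass-thru module (a redundant pair needs this on each TOR switch).
tor_ports_method1 = blades

# Method #2: Nexus 4001i -- as few as one uplink from the chassis switch.
uplinks_per_4001i = 1
tor_ports_method2 = uplinks_per_4001i

print(f"Method #1 (pass-thru):   {tor_ports_method1} TOR ports per chassis")
print(f"Method #2 (Nexus 4001i): {tor_ports_method2} TOR port(s) per chassis")
```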

The New Approach
A few weeks ago, IBM announced the “QLogic Virtual Fabric Extension Module” – a device that fits into the IBM BladeCenter H and takes the Fibre traffic from the CNA on a blade server and sends it to the Fibre fabric.  This is HUGE!  While a top-of-rack convergence switch is helpful, you can now remove the need for one altogether, because the I/O traffic is being split out into its respective fabrics at the BladeCenter H.

What’s Needed
I’ll make it simple – here’s a list of components that are needed to make this method work:

  • 2 x BNT Virtual Fabric 10 Gb Switch Module – part # 46C7191
  • 2 x QLogic Virtual Fabric Extension Module – part # 46M6172
  • a QLogic 2-port 10 Gb Converged Network Adapter per blade server – part # 42C1830
  • an IBM 8 Gb SFP+ SW Optical Transceiver for each uplink needed to your Fibre fabric – part # 44X1964 (note: the QLogic Virtual Fabric Extension Module doesn’t come with any transceivers, so you’ll need the same quantity for each module)

The CNA cards connect to the BNT Virtual Fabric 10 Gb Switch Modules in Bays 7 and 9.  These switch modules have an internal connection to the QLogic Virtual Fabric Extension Modules, located in Bays 3 and 5.  The I/O traffic moves from the CNA cards to the BNT switch, which separates the Ethernet traffic and sends it out to the Ethernet fabric, while the Fibre traffic routes internally to the QLogic Virtual Fabric Extension Modules.  From the Extension Modules, the traffic flows into the Fibre fabric.
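If it helps to visualize that flow, here's a simple sketch of the path as I understand it (illustrative Python pseudo-data, not anything from IBM):

```python
# Traffic path through the BladeCenter H with the QLogic Virtual Fabric
# Extension Modules, as described above (illustrative only).
traffic_path = [
    ("blade CNA (QLogic 2-port 10Gb)", "BNT Virtual Fabric 10Gb Switch Modules, I/O Bays 7 and 9"),
    ("Ethernet traffic",               "BNT external uplinks -> Ethernet fabric"),
    ("Fibre (FCoE) traffic",           "internal link -> QLogic Extension Modules, I/O Bays 3 and 5"),
    ("QLogic Extension Modules",       "8Gb FC SFP+ uplinks -> Fibre fabric"),
]

for hop_from, hop_to in traffic_path:
    print(f"{hop_from} -> {hop_to}")
```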

It’s important to understand the switches and how they are connected, too, as this is a new approach for IBM.  Previously, the bridge bays (I/O Bays 5 and 6) haven’t really been used, and IBM has never allowed a card in the CFF-h slot to connect to the switch bay in I/O Bay 3.


There are a few other acceptable designs that will still give you the split fabric out of the chassis; however, they are not fully redundant, so I did not cover them here.  If you want to read the full IBM Redbook on this offering, head over to IBM’s site.

A few things to note about the maximum-redundancy design I described above:

1) The CIOv slots on the HS22 and HS22v cannot be used.  This is because I/O Bay 3 is being used for the Extension Module, and since the CIOv slot is hard-wired to I/O Bays 3 and 4, that will just cause problems – so don’t do it.

2) The BladeCenter E chassis is not supported for this configuration.  It doesn’t have any “high speed bays” and, quite frankly, wasn’t designed to handle high I/O throughput like the BladeCenter H.

3) Only the parts listed above are supported.  Don’t try to slip in a Cisco Fibre Switch Module or use the Emulex Virtual Adapter on the blade server – it won’t work.  This is a QLogic design, and they don’t want anyone else’s toys in their backyard.

That’s it.  Let me know what you think by leaving a comment below.  Thanks for stopping by!

10 Things That Cisco UCS Policies Can Do (That IBM, Dell or HP Can’t)

ViewYonder.com recently posted a great write-up on some things that Cisco’s UCS can do that IBM, Dell or HP really can’t. You can go to ViewYonder.com to read the full article, but here are 10 things that Cisco’s UCS Policies can do:

  • Chassis Discovery – lets you decide how many links to use from the FEX (2104) to the FI (6100).  This affects the path from the blades to the FI and the oversubscription rate.  If you’ve cabled 4 links, you can choose to use just 2, or even 1 (see the quick oversubscription sketch after this list).
  • MAC Aging – lets you manage how long entries live in the MAC table.  This affects the ability to scale, as bigger MAC tables need more management.
  • Autoconfig – when you insert a blade, depending on its hardware config, UCS can apply a specific template for you and put the blade in an organization automatically.
  • Inheritance – when you insert a blade, allows you to automatically create a logical version (Service Profile) by copying the UUID, MAC, WWNs, etc.
  • vHBA Templates – lets you define how you want _every_ vmhba2 to look (i.e. fabric, VSAN, QoS, pinning to a border port).
  • Dynamic vNICs – lets you determine how to distribute the VIFs on a VIC.
  • Host Firmware – lets you determine what firmware to apply to the CNA, the HBA, the HBA ROM, the BIOS and the LSI controller.
  • Scrub – provides the ability to wipe the local disks on association.
  • Server Pool Qualification – lets you determine which hardware configurations live in which pool.
  • vNIC/vHBA Placement – lets you determine how to distribute VIFs over one or two CNAs.
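As a quick illustration of how the chassis-discovery link count drives oversubscription, here's a small sketch. The 8 half-width blades per chassis and the 10Gb link speeds are my assumptions about a typical UCS setup, not figures from Steve's article:

```python
# Rough oversubscription math for UCS chassis discovery (illustrative).
# Assumes 8 half-width blades per chassis, each with a 10Gb path to the
# FEX (2104), and 10Gb links from the FEX up to the FI (6100).

def oversubscription(fex_to_fi_links, blades=8, blade_gb=10, link_gb=10):
    """Return the blade-to-FI oversubscription ratio for a given link count."""
    return (blades * blade_gb) / (fex_to_fi_links * link_gb)

for links in (1, 2, 4):
    print(f"{links} link(s) to the FI: {oversubscription(links):.0f}:1 oversubscription")
```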

For more on this topic, visit Steve’s blog at ViewYonder.com.  Nice job, Steve!

HP BladeSystem Rumours

I’ve recently posted some rumours about IBM’s upcoming announcements in their blade server line; now it is time to let you know some rumours I’m hearing about HP.   NOTE: this is purely speculation – I have no definitive information from HP, so this may be false info.  That being said – here we go:

Rumour #1:  Integration of CNA-like devices on the motherboard.
As you may be aware, with the introduction of the “G6”, or Generation 6, of HP’s blade servers, HP added “FlexNICs” onto the servers’ motherboards instead of the 2 x 1Gb NICs that are standard on most of the competition’s blades.  FlexNICs allow the user to carve up a 10Gb NIC into 4 virtual NICs when using the Flex-10 modules inside the chassis.  (For a detailed description of Flex-10 technology, check out this HP video.)  The idea behind Flex-10 is that you have 10Gb connectivity that allows you to do more with fewer NICs.
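To make the FlexNIC idea concrete, here's a sketch of one way a single 10Gb FlexNIC might be carved up (the split below is just an example – with Flex-10 the administrator chooses the allocations):

```python
# One 10Gb FlexNIC carved into 4 virtual NICs (example split only --
# the allocations are chosen by the administrator in Flex-10).
flexnic_total_gb = 10
vnic_allocation_gb = {
    "vNIC1 (management)": 0.5,
    "vNIC2 (vMotion)":    2.0,
    "vNIC3 (VM traffic)": 5.5,
    "vNIC4 (backup)":     2.0,
}

assert sum(vnic_allocation_gb.values()) <= flexnic_total_gb
for name, gb in vnic_allocation_gb.items():
    print(f"{name}: {gb} Gb")
```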

SO – what’s next?  Rumour has it that the “G7” servers, expected to be announced on March 16, will have an integrated CNA, or Converged Network Adapter.  With a CNA on the motherboard, both the Ethernet and the Fibre traffic will have a single integrated device to travel over.  This is a VERY cool idea, because it could lead to a blade server that eliminates the additional daughter card or mezzanine expansion slots, therefore freeing up valuable real estate for newer Intel CPU architectures.

Rumour #2: Next generation Flex-10 Modules will separate Fibre and Network traffic.

Today, HP’s Flex-10 only handles Ethernet traffic.  There is no support for FCoE (Fibre Channel over Ethernet), so if you have a Fibre network, then you’ll also have to add a Fibre switch to your BladeSystem chassis design. If HP does put a CNA onto their next-generation blade servers to carry Fibre and Ethernet traffic, wouldn’t it make sense that there would need to be a module in the BladeSystem chassis that allows the storage and Ethernet traffic to exit?

I’m hearing that a new version of the Flex-10 module is coming, very soon, that will allow both the Ethernet AND the Fibre traffic to exit out of the switch. (The image to the right shows what it could look like.)  The switch would allow 4 of the uplink ports to go to the Ethernet fabric, with the other 4 ports of the 8-port next-generation Flex-10 switch either dedicated to a Fibre fabric OR used as 4 additional ports to the Ethernet fabric.

If this rumour is accurate, it could shake things up in the blade server world.  Cisco UCS uses 10Gb Data Center Ethernet (Ethernet plus FCoE); IBM BladeCenter has the ability to do a 10Gb plus Fibre switch fabric (like HP), or it can use 10Gb Enhanced Ethernet plus FCoE (like Cisco); however, no one currently has a device to split the Ethernet and Fibre traffic at the blade chassis.  If this rumour is true, then we should see it announced around the same time as the G7 blade servers (March 16).

That’s all for now.  As I come across more rumours, or information about new announcements, I’ll let you know.

Introducing the IBM HS22v Blade Server

IBM officially announced today a new addition to their blade server line – the HS22v.  Modeled after the HS22 blade server, the HS22v is touted by IBM as a “high density, high performance blade optimized for virtualization.”  So what makes it so great for virtualization?  Let’s take a look.

Memory
One of the big differences between the HS22v and the HS22 is more memory slots.  The HS22v comes with 18 very low profile (VLP) DDR3 memory DIMM slots, for a maximum of 144GB of RAM.  This is a key attribute for a server running virtualization, since everyone knows that VMs love memory.  It is important to note, though, that the memory will only run at 800MHz when all 18 slots are used.  In comparison, if you only had 6 DIMMs installed (3 per processor), the memory would run at 1333MHz, and 12 DIMMs installed (6 per processor) run at 1066MHz.  As a final note on the memory, this server will be able to use both 1.5v and 1.35v memory.  The 1.35v is newer memory being introduced as the Intel Westmere EP processor becomes available.  The big deal about this is that lower-voltage memory = lower overall power requirements.
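To make the DIMM-count vs. memory-speed trade-off concrete, here's a tiny lookup sketch based on the figures above (purely illustrative):

```python
# HS22v memory speed vs. DIMM population, per the figures quoted above:
# up to 6 DIMMs (3 per CPU) -> 1333MHz, up to 12 -> 1066MHz, 18 -> 800MHz.

def memory_speed_mhz(dimms_installed):
    if dimms_installed <= 6:
        return 1333
    if dimms_installed <= 12:
        return 1066
    return 800

for dimms in (6, 12, 18):
    total_gb = dimms * 8          # e.g. 8GB VLP DIMMs; 18 x 8GB = 144GB max
    print(f"{dimms} DIMMs ({total_gb}GB): memory runs at {memory_speed_mhz(dimms)}MHz")
```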

Drives
The second big difference is that the HS22v does not use hot-swap drives like the HS22 does.  Instead, it uses 2 x solid state drives (SSDs) for local storage. These drives have hardware RAID 0/1 capability as standard.  Although the picture to the right shows a 64GB SSD drive, my understanding is that only 50GB drives will be available as they start to become readily available on March 19, with larger sizes (64GB and 128GB) becoming available in the near future.  Another thing to note is that the image shows a single SSD drive; however, the 2nd drive is located directly beneath it.  As mentioned above, these drives can be set up in RAID 0 or 1 as needed.

So – why did IBM go back to using internal drives?  For a few reasons:

Reason #1: in order to get the space to add the extra memory slots, a change had to be made in the design.  IBM decided that solid state drives were the best fit.

Reason #2: the SSD design allows the server to run with lower power.  It’s well known that SSDs run at a much lower power draw than physical spinning disks, so using SSDs will help the HS22v be a more power-efficient blade server than the HS22.

Reason #3: a common trend for virtualization hosts, especially VMware ESXi, is to run on integrated USB devices.  By using an integrated USB key for your virtualization software, you can eliminate the need for spinning disks, or even SSDs, therefore reducing the overall cost of the server.

Processors
So here’s the sticky area.  IBM will be releasing the HS22v with the Intel Xeon 5500 processor first.  Later in March, as the Intel Westmere EP (Intel Xeon 5600) is announced, IBM will have models that come with it.  IBM will have both Xeon 5500 and Xeon 5600 processor offerings.  Why is this?  I think for a couple of reasons:

a) the Xeon 5500 and the Xeon 5600 use the same chipset (motherboard), so it will be easy for IBM to make one server board and plop in either the Nehalem EP or the Westmere EP

b) simple – IBM wants to get this product into the marketplace sooner rather than later.

Questions

1) Will it fit into the BladeCenter E?
YES – however there may be certain limitations, so I’d recommend you reference the IBM BladeCenter Interoperability Guide for details.

2) Is it certified to run VMware ESX 4?
YES

3) Why didn’t IBM call it HS22XM?
According to IBM, the “XM” name is feature-focused while “V” is workload-focused – a marketing strategy we’ll probably see more of from IBM in the future.

That’s it for now.  If there are any questions you have about the HS22v, let me know in the comments and I’ll try to get some answers.

For more on the IBM HS22v, check out IBM’s web site here.

Check back with me in a few weeks when I’m able to give some more info on what’s coming from IBM!

Cisco Takes Top 8 Core VMware VMmark Server Position

Cisco is getting some (more) recognition with their UCS blade server product, as they recently achieved the top position for “8 Core Server” on VMware’s VMmark benchmark.  VMmark is the industry’s first (and only credible) virtualization benchmark for x86-based computers.  According to the VMmark website, the Cisco UCS B200 blade server reached a score of 25.06 @ 17 tiles.  A “tile” is simply a collection of virtual machines (VMs) executing a set of diverse workloads designed to represent a natural work environment.   The total number of tiles that a server can handle provides a detailed measurement of that server’s consolidation capacity.
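For a sense of scale, each VMmark tile is a fixed bundle of workload VMs – six per tile in the VMmark 1.x benchmark, if I recall correctly – so a quick bit of arithmetic shows what 17 tiles means in VM count:

```python
# Rough consolidation math for the Cisco UCS B200 VMmark result above.
tiles = 17
vms_per_tile = 6    # my assumption: VMmark 1.x uses 6 workload VMs per tile
print(f"{tiles} tiles x {vms_per_tile} VMs = {tiles * vms_per_tile} VMs on one blade")
```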

Cisco’s Winning Configuration
So – how did Cisco reach the top server spot?  Here’s the configuration:

server config:

  • 2 x Intel Xeon X5570 Processors
  • 96GB of RAM (16 x 8GB)
  • 1 x Converged Network Adapter (Cisco UCS M71KR-Q)

storage config:

  • EMC CX4-240
  • Cisco MDS 9130
  • 1154.27GB Used Disk Space
  • 1024MB Array Cache
  • 41 disks used across 4 enclosures/shelves (1 with 14 disks, 3 with 9 disks)
  • 37 LUNs used:
    * 17 at 38GB (file server + mail server) over 20 x 73GB SSDs
    * 17 at 15GB (database) + 2 LUNs at 400GB (misc) over 16 x 450GB 15k disks
    * 1 LUN at 20GB (boot) over 5 x 300GB 15k disks
  • RAID 0 for VMs, RAID 5 for VMware ESX 4.0 O/S

While first place on the VMmark page (8 cores) shows Fujitsu’s RX300, it’s important to note that its score was reached using Intel’s W5590 processor – a processor designed for “workstations”, not servers.  Second place among server processors currently shows HP’s BL490 with 24.54 (@ 17 tiles).

Thanks to Omar Sultan (@omarsultan) for Tweeting about this and to Harris Sussman for blogging about it.

Mark Your Calendar – Upcoming Announcements

As I mentioned previously, the next few weeks are going to be filled with new product / technology announcements.  Here’s a list of some dates that you may want to mark on your calendar (and make sure to come back here for details):

Feb 9 – Big Blue new product announcement (hint: in the BladeCenter family)

Mar 2 – Big Blue non-product announcement (hint: it’s not the eX4 family)

Mar 16  – Intel Westmere (Intel Xeon 5600) Processor Announcement (expect HP and IBM to announce their Xeon 5600 offerings)

Mar 30 – Intel Nehalem EX (Xeon 7500) Processor Announcement (expect HP and IBM to announce their Intel Xeon 7500 offerings)

As always, you can expect me to give you coverage of the new blade server technology as it gets announced!

IBM’s 4 Processor Intel Nehalem EX Blade Server

2-2-10 CORRECTION Made Below

Okay, I’ve seen the details on IBM’s next-generation 4-processor blade server that is based on the Intel Nehalem EX CPU, and I can tell you that IBM’s about to change the way people look at workloads for blade servers.  Out of respect for IBM (and at the risk of getting in trouble), I’m not going to disclose any confidential details, but I can tell you a few things:

1) my previous post about what the server will look like is not far off.  In fact, it was VERY close.  However, IBM upped the ante and made a few additions that I didn’t expect, which will make it appealing for customers who need the ability to run large workloads.

2) the scheduled announce date for this new 4-processor IBM blade server based on the Nehalem EX (whose name I guessed correctly) will be before April 1, 2010 but after March 15, 2010.  The ship date is currently scheduled sometime after May but before July.

As a final teaser, there’s another IBM blade server announcement scheduled for tomorrow.  Once it’s officially announced on Feb 9th (corrected from Feb 3rd), I’ll let you know and give you some details.

More IBM BladeCenter Rumours…

Okay, I can’t hold back any longer – I have more rumours. The next 45 days are going to be EXTREMELY busy, with Intel announcing their Westmere EP processor, the successor to the Nehalem EP CPU, and announcing the Nehalem EX CPU, the successor to the Xeon 7400 CPU.  I’ll post more details on these processors in the future, as information becomes available, but for now I want to talk about some additional rumours that I’m hearing from IBM.  As I’ve mentioned in my previous rumour post: this is purely speculation – I have no definitive information from IBM, so this may be false info.  That being said, here we go:

Rumour #1:  As I previously posted, IBM has announced they will have a blade server based on their eX5 architecture – the next generation of the eX4 architecture found in their IBM System x3850 M2 and x3950 M2.  I’ve posted what I think this new blade server will look like (you can see it here), and I had previously speculated that the server would be called HS43 – however, it appears that IBM may be changing their nomenclature for this class of blade to “HX5”.  I can see this happening – it’s a blend of “HS” and “eX5”.  It is a new class of blade server, so it makes sense.   I like the HX5 blade server name, although if you Google HX5 right now, you’ll get a lot of details about the Sony CyberShot DSC-HX5 digital camera.  (Maybe IBM should reconsider using HS43 instead of HX5 to avoid any lawsuits.)  It also makes it very clear that it is part of their eX5 architecture, so we’ll see if it gets announced that way.

Speaking of announcements…

Rumour #2:  While it is clear that Intel is waiting until March (the 31st, I think) to announce the Nehalem EX and Westmere EP processors, I’m hearing rumours that IBM will be announcing their product offerings around the new Intel processors on March 2, 2010 in Toronto.  It will be interesting to see if this happens so soon (4 weeks away), but when it does, I’ll be sure to give you all the details!

That’s all I can talk about for now as “rumours”.  I have more information on another IBM announcement that I cannot talk about yet, but come back to my site on Feb. 9 and you’ll find out what that new announcement is.

The IBM BladeCenter S Is Going to the Super Bowl

Unless you’ve been hiding in a cave in Eastern Europe, you know by now that the New Orleans Saints are headed to the Super Bowl.  According to IBM, this is all due to the Saints having an IBM BladeCenter S running their business.  Okay, well, I’m sure there are other reasons, like having stellar talent, but let’s take a look at what IBM did for the Saints.

Other than the obvious threat of having to relocate or evacuate due to the weather, the Saints’ constant travel required them to search for a portable IT solution that would make it easier to quickly set up operations in another city.  The Saints were a long-time IBM customer, so they looked at the IBM BladeCenter S for this solution, and it worked great.  (I’m going to review the BladeCenter S below, so keep reading.)  The Saints consolidated 20 physical servers onto the BladeCenter S, virtualizing the environment with VMware.   Although the specific configuration of their blade environment is not disclosed, IBM reports that the Saints are using 1 terabyte of built-in storage, which enables the Saints to go on the road with the essential files (scouting reports, financial apps, player stats, etc.) and tools the coaches and the staff need.  In fact, in the IBM case study video, the Assistant Director of IT for the New Orleans Saints, Jody Barbier, says, “The Blade Center S definitely can make the trip with us if we go to the Super Bowl.”  I guess we’ll see.  Be looking for the IBM marketing engine to jump on this bandwagon in the next few days.

A Look at the IBM BladeCenter S
The IBM BladeCenter S is a 7U-high chassis (click the image on the left for a larger view) that can hold 6 blade servers and up to 12 disk drives, held in Disk Storage Modules located on the left and right of the blade server bays.  The chassis has the option to either assign the disk drives to an individual blade server, or to create a RAID volume and allow all of the servers to access the data.  As of this writing, the drive options for the Disk Storage Module are: 146GB, 300GB and 450GB SAS; 750GB and 1TB Near-Line SAS; and 750GB and 1TB SATA.  Depending on your application needs, you could have up to 12TB of local storage for 6 servers.  That’s pretty impressive, but wait, there’s more!  As I reported a few weeks ago, there is a substantial rumour of a forthcoming option to use 2.5″ drives.  This would enable up to 24 drives (12 per Disk Storage Module).  Although that would provide more spindles, the current capacities of 2.5″ drives don’t quite match those of 3.5″ drives.  Again, that’s just “rumour” – IBM has not disclosed whether that option is coming (but it is…)

IBM BladeCenter S – Rear View
I love pictures – so I’ve attached an image of the BladeCenter S, as seen from the back.  A few key points to make note of:
110v Capable – yes, this can run on average office power.  That’s the idea behind it.  If you have a small closet or an area near a desk, you can plug this bad boy in.  That being said, I always recommend calculating the power with IBM’s Power Configurator to make sure your design doesn’t exceed what 110v can handle.  Yes, this box will run on 220v as well.  Also, the power supplies are auto-sensing, so there’s no worry about having to buy different power supplies based on your needs.

I/O Modules – if you are familiar with the IBM BladeCenter or IBM BladeCenter H I/O architecture, you’ll know that the design is redundant, with dual paths.  With the IBM BladeCenter S, this isn’t the case.   As you can see below, the onboard network adapters (NICs) are both mapped to the I/O module in Bay #1.  The expansion card is mapped to Bays #3 and 4, and the high-speed card slot (CFF-h) is mapped to I/O Bay 2.  Yes, this design puts I/O Bays 1 and 2 as single points of failure (since both paths connect into the same module bay); however, when you look at the typical small office or branch office environment that the IBM BladeCenter S is designed for, you’ll realize that very rarely do they have redundant network fabrics – so this is no different.

Another key point here is that I/O Bays 3 and 4 are connected to the Disk Storage Modules mentioned above.  In order for a blade server to access the external disks in the Disk Storage Module bays, the blade server must:

a) have a SAS Expansion or Connectivity card installed in the expansion card slot
b) have 1 or 2 SAS Connectivity or RAID modules attached in Bays 3 and 4

This means that there is currently no way to use the local drives (in the Disk Storage Modules) and have external access to a fibre storage array.
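To summarize that I/O mapping in one place, here's a simple sketch (illustrative Python pseudo-data, not an IBM tool):

```python
# IBM BladeCenter S I/O bay mapping, as described above (illustrative only).
io_bay_map = {
    "onboard NICs (both ports)":    "I/O Bay 1 (Ethernet switch module)",
    "high-speed card slot (CFF-h)": "I/O Bay 2",
    "expansion card (SAS)":         "I/O Bays 3 and 4 (SAS Connectivity/RAID modules)",
    "Disk Storage Modules":         "reached through the modules in I/O Bays 3 and 4",
}

for adapter, bay in io_bay_map.items():
    print(f"{adapter} -> {bay}")
```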

BladeCenter S Office Enablement Kit
Finally – I wanted to show you the optional Office Enablement Kit.  This is an 11U enclosure that is based on IBM’s NetBay 11.  It has security doors and special acoustics and air filtration to suit office environments.  The kit features:
  • an acoustical module (to lower the sound of the environment) – check out this YouTube video for details
  • a locking door
  • 4U of extra space (for other devices)
  • wheels

There is also an optional Air Contaminant Filter available that uses air filters to help keep the IBM BladeCenter S functional in dusty environments (e.g. shops or production floors).

If the BladeCenter S is going to be used in an environment without a rack (e.g. a broom closet) or in a mobile environment (e.g. going to the Super Bowl), the Office Enablement Kit is a necessary addition.

So, hopefully, you can now see the value that the New Orleans Saints saw in the IBM BladeCenter S for their flexible, mobile IT needs.  Good luck in the Super Bowl, Saints.  I know that IBM will be rooting for you.

Weta Digital, Production House for AVATAR, Donates IBM Blade Servers to Schools

Weta Digital, the digital production house behind the hit movie AVATAR, recently donated about 300 IBM HS20 blade servers to Whitireia Community Polytechnic in Porirua, which will use them to help teach students how to create 3-D animations. The IBM HS20 blade servers were originally bought to produce special effects for The Lord of the Rings at a cost of more than $1 million (for more details on this, check out this November 2004 article from DigitalArtsOnline.co.uk). Weta Digital has since replaced them with more powerful HP BL2x220c G5 servers supplied by Hewlett-Packard, which were used for AVATAR.

According to the school, these older IBM blade servers will help the school expand its graphics and information technology courses and turn out students with more experience in 3-D rendering.

Thanks to Stuff.co.nz for the information mentioned above.