(UPDATED) The Best Blade Server Option Is…[Part 1 - A Look at Cisco]

Updated on 9/13/2010 with link to Sean McGee’s I/O Card Blog Post
 
One of the questions I get the most is, “which blade server option is best for me?” My honest answer is always, “it depends.” The reality is that the best blade infrastructure for YOU is going to depend on what is important to you. Based on that, I figured it would be a good exercise to do a high-level comparison of the blade chassis offerings from Cisco, Dell, HP and IBM. If you read through my past blog posts, you’ll see that my goal is to be as unbiased as possible when it comes to talking about blade servers. I’m going to attempt to be “vendor neutral” with this post as well, but I welcome your comments, thoughts and criticisms.
In part 1, I’ll focus on Cisco since they come first alphabetically.  I’ll post equivalent articles for Dell, HP and IBM over the next few weeks, then I’ll try to summarize.
 
Chassis Overview 
 
Cisco’s Unified Computing System (UCS) is unusual in that the chassis is only a small component of the overall offering. UCS is a “system” of components consisting of blade servers, blade chassis, fabric extenders and fabric interconnects. The blade chassis is called the UCS 5100. It is a 6 rack unit (6u) tall chassis that holds anywhere from 4 to 8 blade servers, depending on the blade form factor. The chassis comes with 4 front-accessible 2500W single-phase, hot-swappable power supplies that are 92 percent efficient and can be configured for non-redundant, N+1 redundant, or grid-redundant operation.
 
The rear of the UCS 5100 chassis offers 4 hot-swap blowers and 4 power plug connectors requiring 15.5A, 220-240V AC. There is also a pair of redundant fabric extenders, and this is where Cisco’s design differs from everyone else’s. These “fabric extenders”, known as the UCS 2104XP, simply extend the blade servers’ onboard 10Gb / Converged Network Adapter (CNA) I/O fabric from the blade server bays up to the fabric interconnect. I previously blogged that there were rumours at one time of an 8 port version of the fabric extender; however, to date I have not seen any proof of this. The UCS 2104XP Fabric Extender provides 4 x 10Gb uplinks, so if you have 8 blade servers, you are theoretically looking at a 2:1 oversubscription ratio (8 blade servers to 4 uplinks). There have been several comments and blog posts on the functionality of the fabric extender, including the infamous Tolly Report that drew responses from the Tolly Group, Cisco employees and HP employees – but in summary, the 4 x 10Gb uplinks are adequate for handling all the I/O that the maximum of 8 blade servers can throw at them. Yes, you can put two extenders in for redundant pathways as well. The fabric extenders connect into the brains of the solution – the fabric interconnect.
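The 2:1 figure is just arithmetic on the numbers above; here is a quick sketch (the function name is mine, the values come from the post):

```python
def oversubscription_ratio(num_blades, blade_gbps, num_uplinks, uplink_gbps):
    """Ratio of server-facing bandwidth to fabric-extender uplink bandwidth."""
    return (num_blades * blade_gbps) / (num_uplinks * uplink_gbps)

# Fully populated UCS 5100: 8 half-width blades with one 10Gb port each,
# leaving the chassis through a UCS 2104XP with 4 x 10Gb uplinks.
print(oversubscription_ratio(8, 10, 4, 10))  # 2.0, i.e. the 2:1 ratio above
```

With only 4 half-width blades in the chassis, the same math gives 1:1 – no oversubscription at all.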
 
The function of the UCS 6100 fabric interconnect is to connect ALL of the UCS 5100 chassis to the network and storage fabrics. The Cisco UCS 6100 series fabric interconnect currently comes in two flavors – a 20 port (UCS 6120XP) and a 40 port (UCS 6140XP). A 20 port model could connect anywhere from 5 x UCS 5100 chassis with every fabric extender uplink cabled (4 ports x 5 chassis = 20) all the way up to 20 x UCS 5100 chassis (1 port per fabric extender). That last example doesn’t seem ideal, as you would be running up to 8 blade servers’ 10Gb I/O traffic up a single 10Gb uplink – but who knows – I’m not a networking guy, so I’ll leave those comments to the experts. Personally, I think a lot of the back-and-forth over oversubscription is marketing fluff…
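The chassis-count trade-off above is also simple division; a small sketch (function name is mine):

```python
def max_chassis(fi_ports, uplinks_per_extender):
    """How many UCS 5100 chassis one fabric interconnect can terminate,
    given how many of each UCS 2104XP's 4 uplinks you actually cable."""
    return fi_ports // uplinks_per_extender

# UCS 6120XP (20 ports) vs. UCS 6140XP (40 ports):
print(max_chassis(20, 4))  # 5  - every uplink cabled, best bandwidth per chassis
print(max_chassis(20, 1))  # 20 - minimal cabling, 8 blades share one 10Gb uplink
print(max_chassis(40, 4))  # 10 - the 40 port model at full cabling
```

The fewer uplinks you cable per fabric extender, the more chassis one interconnect can serve – at the cost of a steeper oversubscription ratio per chassis.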
 
Server Review

When we look at sheer capacity – how many blade servers you can fit into a 42u rack – Cisco can offer a maximum of 7 of its 6u chassis per rack. The stats below provide a good comparison between the different server offerings from Cisco.

A few things to point out:

  • there are no AMD options (Intel only)
  • half-width blade servers have 1 x I/O card whereas full-width blade servers have 2 x I/O cards

I/O Card Options
Cisco offers 4 different I/O Network Card options for their blade servers:

  • Cisco UCS 82598KR-CI 10 Gigabit Ethernet Adapter – based on the Intel 82598 10 Gigabit Ethernet controller, which is designed for efficient, high-performance Ethernet transport.
  • Cisco UCS M71KR-E Emulex Converged Network Adapter – uses an Intel 82598 10 Gigabit Ethernet controller for network traffic and an Emulex 4-Gbps Fibre Channel controller for Fibre Channel traffic, all on the same mezzanine card.
  • Cisco UCS M71KR-Q QLogic Converged Network Adapter – uses an Intel 82598 10 Gigabit Ethernet controller for network traffic and a QLogic 4-Gbps Fibre Channel controller for Fibre Channel traffic, all on the same mezzanine card.
  • Cisco UCS M81KR Virtual Interface Card – a dual-port 10 Gigabit Ethernet mezzanine card that supports up to 128 virtual interfaces, each of which can be dynamically configured so that both its interface type (network interface card [NIC] or host bus adapter [HBA]) and identity (MAC address and worldwide name [WWN]) are established using just-in-time provisioning.

Fellow blogger, Sean McGee, has written up a nice post on the Cisco UCS B-Series I/O Card Options.  I recommend you go read it (after you finish this post).  You can find Sean’s article here.

  (For more details on these card options, please visit http://www.cisco.com/en/US/products/ps10280/products_data_sheets_list.html)

As you may notice, there are no Fibre Channel-only or InfiniBand card options.  This is part of Cisco’s UCS strategy – network and storage traffic travel over the same cable from the blade server through the fabric extender to the fabric interconnect, where the traffic is split out into the network and storage fabrics.  This design requires a maximum of 8 cables per blade chassis (4 from each fabric extender) and as few as 2 cables (1 per fabric extender).  Compared to a traditional server environment using multiple 1Gb Ethernet and 4Gb fibre connections per server, that is a huge savings in cables.
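To put a rough number on that savings, here is a back-of-the-envelope sketch. The UCS cable counts come from the post; the “traditional” per-server counts (4 x 1Gb Ethernet + 2 x 4Gb Fibre Channel) are my assumption for illustration only:

```python
# Cable counts for one fully populated 8-blade chassis vs. 8 rack servers.
# UCS numbers are from the post; the traditional per-server mix of
# 4 x 1Gb NICs + 2 x 4Gb FC HBAs is a hypothetical example.
servers = 8
traditional = servers * (4 + 2)  # 48 cables: NICs + FC HBAs, one set per server
ucs_max = 2 * 4                  # 8 cables: 4 uplinks on each of 2 fabric extenders
ucs_min = 2 * 1                  # 2 cables: 1 uplink per fabric extender
print(traditional, ucs_max, ucs_min)  # 48 8 2
```

Even at maximum UCS cabling, that hypothetical rack goes from 48 cables to 8 – a 6x reduction before you count power and management cabling.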

Chassis Switch Options

As I have previously mentioned, the architecture of Cisco’s UCS blade environment takes an approach of “extending” the I/O connectivity from the blades to the fabric interconnect.  With this design there are no “switches” in the chassis, and therefore no switch options.

Server Management

Cisco’s blade infrastructure management lies within the Cisco UCS 6100 fabric interconnect.  The base management software, called UCS Manager, is the central point of management for the entire UCS environment.  It manages the UCS system, including the blades, the chassis, and the network (both LAN and SAN) – configuration, environmentals, etc.  Take a few minutes to look at the UCS Manager software in this short video:

While the UCS Manager is rich in features, it does have the following limitations:

  • (Hardware) Templates can NOT be shared across systems
  • Available MAC addresses are scoped per UCS Manager instance (not across the Enterprise)
  • Available WWN addresses are scoped per UCS Manager instance (not across the Enterprise)
  • Available UUIDs are scoped per UCS Manager instance (not across the Enterprise)

Bottom line – the UCS Manager is scoped to a single UCS Manager instance (one pair of fabric interconnects and the chassis behind them), and most of its features involve manual steps.  Never fear, however.  Cisco offers BMC’s BladeLogic, which adds the following:

  • UCS template creation and editing
  • Cross-UCS template management
  • Cross-UCS MAC and WWN management
  • Local disk provisioning for UCS
  • SAN provisioning for UCS
  • ESX provisioning for UCS
  • Consolidated UCS operator and management actions
  • Management of UCS resources, VMs, guest OSes, and business applications

The catch, however, is that BMC BladeLogic comes at an extra cost.  How much – I’m not sure, but it adds something…  Cisco has a really good simulator that highlights what you can do with the UCS Manager software, so if you are interested, take a few minutes to watch.  There is no narration, just a walk-through of the UCS Manager.  I also recommend viewing it in full screen:

So let me know what you think.  Is there anything I’m missing – anything else you would like to see on this?  Let me know in the comments below.  Make sure to keep an eye on this site as I’ll be posting information on Dell, HP and IBM in the following weeks.

“Oo-rah” (that’s for @jonisick)
