Category Archives: Server Comparisons

New Cisco Blade Server: B440-M1

Cisco recently announced their first blade offering with the Intel Xeon 7500 processor, the “Cisco UCS B440 M1 High-Performance Blade Server.”  This new blade is a full-width blade that holds 2 to 4 Xeon 7500 processors and 32 memory slots, for up to 256GB of RAM, as well as 4 hot-swap drive bays.  Because the server is a full-width blade, it can handle 2 dual-port mezzanine cards for up to 40 Gbps of I/O per blade.

Each Cisco UCS 5108 Blade Server Chassis can house up to four B440 M1 servers (maximum 160 per Unified Computing System). 

How Does It Compare to the Competition?
Since I like to talk about all of the major blade server vendors, I thought I’d take a look at how the new Cisco B440 M1 compares to IBM and Dell.  (HP has not yet announced their Intel Xeon 7500 offering.)

Processor Offering
Both Cisco and Dell offer models with 2 to 4 Xeon 7500 CPUs as standard.  Each vendor has variations on speeds: Dell offers 9 processor speed options, IBM’s BladeCenter HX5 blade server will have 5 processor speed options initially, and Cisco hasn’t released their speeds yet.  Of the 3 vendors, however, IBM’s blade server is the only one designed to scale from 2 CPUs to 4 CPUs by connecting 2 x HX5 blade servers.  Along with this comes IBM’s “FlexNode” technology, which lets users split the 4-processor blade system back into 2 x 2-processor systems at specific points during the day.  Although not announced, and purely my speculation, IBM’s design also opens the door to a future capability of connecting 4 x 2-processor HX5s for an 8-way design.  Since each of the vendors offers up to 4 x Xeon 7500s, I’m giving the advantage in this category to IBM for its scalability.  WINNER: IBM

Memory Capacity
Both IBM and Cisco offer 32 DIMM slots in their blade solutions; however, they are not certifying the use of 16GB DIMMs – only 4GB and 8GB DIMMs – so their offerings scale to 256GB of RAM.  Dell claims 512GB of RAM capacity on the PowerEdge 11G M910 blade server, but that requires 16GB DIMMs.  Realistically, I think the M910 would only be used with 8GB DIMMs, in which case Dell’s design would equal IBM’s and Cisco’s.  I’m not sure who has the money to buy 16GB DIMMs, but if they do – WINNER: Dell (or a TIE)
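
The arithmetic behind those claims is simple; here is a quick sketch of how each DIMM size plays out across 32 slots:

```python
DIMM_SLOTS = 32

# Maximum RAM per blade at each DIMM size discussed above
for dimm_gb in (4, 8, 16):
    print(f"{DIMM_SLOTS} slots x {dimm_gb}GB DIMMs = {DIMM_SLOTS * dimm_gb}GB")

# 16GB DIMMs are what take Dell's M910 claim to 512GB; at 8GB DIMMs,
# all three vendors top out at the same 256GB.
```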

Server Density
As previously mentioned, Cisco’s B440 M1 blade server is a “full-width” blade, so 4 will fit into the 6U-high UCS 5108 chassis.  Theoretically, you could fit 7 x UCS 5108 blade chassis into a rack, for a total of 28 x B440 M1s per 42U rack.

Dell’s PowerEdge 11G M910 blade server is a “full-height” blade, so 8 will fit into the 10U-high M1000e chassis.  Since 4 x M1000e chassis fit into a 42U rack, 32 x Dell PowerEdge M910 blade servers should fit into a 42U rack.

IBM’s BladeCenter HX5 blade server is a single-slot blade server; however, as a 4-processor blade it takes up 2 server slots.  The BladeCenter H has 14 server slots, so the IBM solution can hold 7 x 4-processor HX5 blade servers per chassis.  Since the chassis is 9U high, only 4 fit into a 42U rack, for a total of 28 IBM HX5 (4-processor) servers per 42U rack.
WINNER: Dell
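
All three density claims reduce to the same arithmetic – whole chassis per 42U rack, times blades per chassis – so here is a minimal Python sketch that reproduces the numbers above:

```python
RACK_U = 42

# (chassis height in U, 4-socket blades per chassis), from the figures above
vendors = {
    "Cisco B440 M1 (UCS 5108, full-width)": (6, 4),
    "Dell M910 (M1000e, full-height)": (10, 8),
    "IBM HX5 4-socket (BladeCenter H)": (9, 7),
}

for name, (chassis_u, blades) in vendors.items():
    chassis_per_rack = RACK_U // chassis_u  # only whole chassis fit
    print(f"{name}: {chassis_per_rack} chassis x {blades} blades"
          f" = {chassis_per_rack * blades} blades per rack")
```

Running it yields 28 for Cisco, 32 for Dell and 28 for IBM – hence the Dell win.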

Management
The final category I’ll look at is management.  Both Dell and IBM build management controllers into their chassis, so managing the many chassis described in the maximum servers-per-rack scenarios above adds a management point per chassis.  Cisco’s design, however, performs management through the UCS 6100 Fabric Interconnect modules; in fact, up to 40 chassis can be managed by 1 pair of 6100s.  There are additional features this design offers, but for the sake of this discussion, I’m calling WINNER: Cisco.

Overall, Cisco’s new offering is a nice addition to their existing blade portfolio.  While IBM has some interesting innovation in CPU scalability and Dell appears to have the advantage in server density, Cisco leads on the management front.

Cisco’s UCS B440 M1 is expected to ship in the June time frame.  Pricing is not yet available.  For more information, please visit Cisco’s UCS web site at http://www.cisco.com/en/US/products/ps10921/index.html.

Blade Server Shoot-Out (Dell/HP/IBM) – InfoWorld.com

InfoWorld.com posted the results of a blade server shoot-out between Dell, HP, IBM and Super Micro on 3/22/2010. I’ll save you some time and summarize the results for Dell, HP and IBM.

The Contenders
Dell, HP and IBM each provided blade servers with the Intel Xeon X5670 2.93GHz CPUs and at least 24GB of RAM in each blade.

The Tests
InfoWorld designed a custom suite of VMware tests as well as several real-world performance metric tests. The VMware test suite was composed of:

  • a single large-scale custom LAMP application
  • a load-balancer running Nginx
  • four Apache Web servers
  • two MySQL servers

InfoWorld designed the VMware workloads to mimic a real-world Web app usage model: a weighted mix of static and dynamic content, with randomized database updates, inserts, and deletes, and load generated at specific concurrency levels, starting at 50 concurrent connections and ramping up to 200.  InfoWorld ran the VMware tests first on one blade server, then across two blades.  Each blade under test ran VMware ESX 4 and was controlled by a dedicated vCenter instance.

The other real-world tests included several common single-threaded tasks run simultaneously at levels that met and eclipsed the logical CPU count on each blade, running all the way up to an 8x oversubscription of physical cores (a sketch of how such a harness might be scripted follows the list). These tests included:

  • LAME MP3 conversions of 155MB WAV files
  • MP4-to-FLV video conversions of 155MB video files
  • gzip and bzip2 compression tests
  • MD5 sum tests
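
InfoWorld didn’t publish its test harness, but a minimal Python sketch of this kind of oversubscription test might look like the following (the sample file name is hypothetical, standing in for InfoWorld’s 155MB inputs):

```python
import bz2
import multiprocessing as mp
import os
import time

def compress(path: str) -> None:
    # One single-threaded task, comparable to the bzip2 compression test.
    with open(path, "rb") as f:
        bz2.compress(f.read())

if __name__ == "__main__":
    sample = "sample.wav"        # hypothetical ~155MB input file
    logical_cpus = os.cpu_count()
    for factor in (1, 2, 4, 8):  # ramp from matching the CPU count to 8x
        jobs = logical_cpus * factor
        start = time.time()
        with mp.Pool(processes=jobs) as pool:
            pool.map(compress, [sample] * jobs)
        print(f"{factor}x oversubscribed ({jobs} concurrent jobs): "
              f"{time.time() - start:.1f}s")
```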

The Results

Dell
Dell did very well, coming in 2nd in overall scoring.  The blades used in this test were Dell PowerEdge M610 units, each with two 2.93GHz Intel Westmere X5670 CPUs, 24GB of DDR3 RAM, and two Intel 10G interfaces connecting to two Dell PowerConnect 8024 10G switches in the I/O slots on the back of the chassis.

Some key points made in the article about Dell:

  • Dell does not offer a lot of “blade options.”  There are several models available, but they are the same type of blades with different CPUs.  Dell does not currently offer any storage blades or virtualization-centric blades.
  • Dell’s 10Gb design does not offer any virtualized network I/O. The 10G pipe to each blade is just that, a raw 10G interface.  No virtual NICs.
  • The new CMC (chassis management controller) is a highly functional and attractive management tool, offering new capabilities like pushing actions – such as BIOS updates and RAID controller firmware updates – to multiple blades at once.
  • Dell has implemented more efficient dynamic power and cooling features in the M1000e chassis. Such features include the ability to shut down power supplies when the power isn’t needed, or ramping the fans up and down depending on load and the location of that load.

According to the article, “Dell offers lots of punch in the M1000e and has really brushed up the embedded management tools. As the lowest-priced solution…the M1000e has the best price/performance ratio and is a great value.”

HP
Coming in at 1st place, HP continues to shine in blade leadership.  HP’s test equipment consisted of a c7000 chassis with nine BL460c blades, each running two 2.93GHz Intel Xeon X5670 (Westmere-EP) CPUs and 96GB of RAM, plus embedded 10G NICs and a dual 1G mezzanine card.  As an important note, HP was the only server vendor with 10G NICs on the motherboard.  Some key points made in the article about HP:

  • With the 10G NICs standard on the newest blade server models, InfoWorld says “it’s clear that HP sees 10G as the rule now, not the exception.”
  • HP’s embedded Onboard Administrator offers detailed information on all chassis components from end to end.  For example, HP’s management console can provide exact temperatures of every chassis or blade component.
  • HP’s console cannot perform global BIOS and firmware updates (unlike Dell’s CMC) or power more than one blade up or down at a time.
  • HP offers “multichassis management” – the ability to daisy-chain several chassis together and log into and manage any of them from the same screen.  This appears to be a feature unique to HP.
  • The HP c7000 chassis also has power-control features, like dynamic power-saving options that automatically turn off power supplies when system energy requirements are low and increase fan airflow only to the blades that need it.

InfoWorld’s final thoughts on HP: “the HP c7000 isn’t perfect, but it is a strong mix of reasonable price and high performance, and it easily has the most options among the blade systems we reviewed.”

IBM
Finally, IBM came in at 3rd place, missing a tie with Dell by a small fraction.  Surprisingly, I was unable to find the details of the configuration IBM submitted for testing.  I’m not sure if I just missed it or if InfoWorld left the information out, but I know IBM’s blade server had the same Intel Xeon X5670 CPUs that Dell and HP used.  Some of the points InfoWorld mentioned about IBM’s BladeCenter H offering:

  • IBM’s pricing is higher.
  • IBM’s chassis only holds 14 servers whereas HP can hold 32 servers (using BL2x220c servers) and Dell holds 16 servers.
  • IBM’s chassis doesn’t offer a heads-up display (unlike HP’s and Dell’s).
  • IBM’s blade was the only one with redundant internal power and I/O connectors.  It is important to note that this lack of redundant power and I/O connectors is why HP’s and Dell’s densities are higher.  If you want redundant connections on each blade from HP or Dell, you’ll need to use their “full-height” servers, which decreases their overall capacity to 8 blades per chassis.
  • IBM’s Management Module is lacking graphical features – there’s no graphical representation of the chassis or any images.  From personal experience, IBM’s management module looks like it’s stuck in the ’90s – very text based.
  • The IBM BladeCenter H lacks dynamic power and cooling capabilities.  Instead of using smaller independent regional fans for cooling, IBM uses two blowers, so the ability to reduce cooling in specific zones, as Dell and HP offer, is missing.

InfoWorld summarizes the IBM results saying, “if you don’t mind losing two blade slots per chassis but need some extra redundancy, then the IBM BladeCenter H might be just the ticket.”

Overall, each vendor has its own pros and cons.  InfoWorld does a great job summarizing the benefits of each offering, so please make sure to visit the InfoWorld article and read all of the details of their blade server shoot-out.

IBM BladeCenter H vs Cisco UCS

(From the Archives – September 2009)

News Flash: Cisco is now selling servers!

Okay – perhaps this isn’t news anymore, but the reality is Cisco has been getting a lot of press lately – from their overwhelming presence at VMworld 2009 to their ongoing cat fight with HP. Since I work for a Solutions Provider that sells HP, IBM and now Cisco blade servers, I figured it might be good to “try” and put together a comparison between Cisco and IBM. Why IBM? Simply because, at this time, they are the only blade vendor who offers a Converged Network Adapter (CNA) that will work with the Cisco Nexus 5000 line. Dell and HP do not currently offer a CNA for their blade server lines, so IBM is the closest we can come to Cisco’s offering. I don’t plan on spending time educating you on blades, because if you are interested in this topic, you’ve probably already done your homework. My goal with this post is to show the pros (+) and cons (-) that each vendor has with their blade offering – based on my personal, neutral observation.

Chassis Variety / Choice: winner in this category is IBM.
IBM currently offers 5 types of blade chassis: BladeCenter S, BladeCenter E, BladeCenter H, BladeCenter T and BladeCenter HT. Each of the IBM blade chassis has a unique focus; for example, the BladeCenter S is designed for small or remote offices with local storage capabilities, whereas the BladeCenter HT is designed for Telco environments with options for NEBS-compliant features, including DC power. At this time, Cisco offers only a single blade chassis (the UCS 5108).

[Image: IBM BladeCenter H]

[Image: Cisco UCS 5108]

Server Density and Server Offerings: winner in this category is IBM. IBM’s BladeCenter E and BladeCenter H chassis offer up to 14 blade servers, with servers using Intel, AMD and PowerPC processors. In comparison, Cisco’s UCS 5108 chassis offers up to 8 server slots and currently offers servers with Intel Xeon processors only. As an honorable mention, Cisco does offer a “full-width” blade (the Cisco UCS B250 server) that provides up to 384GB of RAM in a single blade server across 48 memory slots, making large memory configurations available at a lower price point.

Management / Scalability: winner in this category is Cisco.
This is where Cisco is changing the blade server game. The traditional blade server infrastructure calls for each blade chassis to have its own dedicated management module to access the chassis’ environmentals and to remote-control the blade servers, so as you grow your blade chassis environment, you accumulate one management point per chassis.
Beyond the ease of management, the software in the Cisco 6100 series gives users the ability to manage server service profiles, which consist of things like MAC addresses, WWN addresses, NIC firmware, BIOS firmware and HBA firmware (just to name a few).
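
To make the service-profile idea concrete, here is an illustrative Python sketch of the identity bundle a profile carries – the field names and values are my own stand-ins, not Cisco’s UCS Manager schema:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceProfile:
    """Illustrative stand-in for a UCS service profile: the server's
    identity lives in the profile, not in the physical blade."""
    name: str
    mac_addresses: list = field(default_factory=list)  # NIC identities
    wwn_addresses: list = field(default_factory=list)  # HBA identities
    bios_firmware: str = ""
    nic_firmware: str = ""
    hba_firmware: str = ""

    def associate(self, blade_slot: str) -> None:
        # Associating a profile stamps these identities onto whatever
        # physical blade occupies the slot, so a failed blade can be
        # swapped without touching SAN zoning or network configuration.
        print(f"Applying profile '{self.name}' to {blade_slot}")

profile = ServiceProfile(
    name="web-01",
    mac_addresses=["00:25:B5:00:00:01"],
    wwn_addresses=["20:00:00:25:B5:00:00:01"],
    bios_firmware="1.2.1", nic_firmware="4.0", hba_firmware="2.1",
)
profile.associate("chassis-1/blade-3")
```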

[Image: Cisco UCS 6100 Series Fabric Interconnect]

With Cisco’s UCS 6100 Series Fabric Interconnects, you are able to manage up to 40 blade chassis with a single pair of redundant UCS 6140XP interconnects (each with 40 ports).

If you are familiar with the Cisco Nexus 5000 product, then understanding the role of the Cisco UCS 6100 Fabric Interconnect should be easy. The UCS 6100 Series Fabric Interconnect does for the Cisco UCS servers what the Nexus does for other servers: it unifies the fabric. HOWEVER, it’s important to note the UCS 6100 Series Fabric Interconnect is NOT a Cisco Nexus 5000. The UCS 6100 Series Fabric Interconnect is only compatible with the UCS servers.

[Image: Cisco UCS I/O connectivity diagram (UCS 5108 chassis with 2 x 6120 Fabric Interconnects)]

If you have other servers with CNAs, you’ll need to use the Cisco Nexus 5000.

The diagram above shows a single connection from the FEX to the UCS 6120XP; however, the FEX has 4 uplinks, so if you want (or need) more throughput, you can have it. This design provides each half-width Cisco B200 server with 2 CNA ports with redundant pathways. If you are satisfied with a single FEX connection per chassis, then you have the ability to scale up to 20 blade chassis with a Cisco UCS 6120 Fabric Interconnect, or 40 chassis with the Cisco UCS 6140 Fabric Interconnect. As hinted in the previous section, the management software for all connected UCS chassis resides in the redundant Cisco UCS 6100 Series Fabric Interconnects. This design offers a highly scalable infrastructure that lets you grow simply by dropping in a chassis and connecting the FEX to the 6100 switch. (Kind of like Lego blocks.)

On the flip side, while this architecture is simple, it’s also limited. There is currently no way to add additional I/O to an individual server. You get 2 x CNA ports per Cisco B200 server or 4 x CNA ports per Cisco B250 server.

As previously mentioned, IBM has a strategy that is VERY similar to Cisco’s UCS approach, using the Cisco Nexus 5000 product line with pass-thru modules. IBM’s solution consists of:

  • IBM BladeCenter H Chassis
  • 10Gb Pass-Thru Module
  • CNAs on the blade servers

Even though IBM and Cisco designed the Cisco Nexus 4001i switch that integrates into the IBM BladeCenter H chassis, using a 10Gb pass-thru module “may” be the best option to get true Data Center Ethernet (or Converged Enhanced Ethernet) from the server to the Nexus switch – especially for users looking for the lowest cost. The performance of the IBM solution should equal the Cisco UCS design, since the module just passes the signal through; the connectivity, however, is going to cost more with the IBM solution. Passing signals through means NO cable consolidation – every server needs its own connection to the Nexus 5000. For a fully populated IBM BladeCenter H chassis, you’ll need 14 connections to the Cisco Nexus 5000. If you are using the Cisco 5010 (20 ports), you’ll eat up all but 6 ports. Add a 2nd IBM BladeCenter chassis and you’re buying more Cisco Nexus switches. Not quite the scalable design that the Cisco UCS offers.

[Image: BladeCenter H diagram with Nexus 5010 (using 10Gb pass-thru modules)]
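
To put numbers on the cabling difference, here is a small sketch comparing the pass-thru approach (one cable per blade) with the UCS approach (as few as one FEX uplink per chassis); the port counts come from the figures above:

```python
import math

NEXUS_5010_PORTS = 20  # ports on a Cisco Nexus 5010

def switches_needed(chassis: int, cables_per_chassis: int) -> int:
    """Upstream Nexus 5010s consumed by a given cabling scheme."""
    return math.ceil(chassis * cables_per_chassis / NEXUS_5010_PORTS)

# IBM BladeCenter H with 10Gb pass-thru: 14 cables per full chassis
for n in (1, 2, 3):
    print(f"{n} x BCH via pass-thru: {n * 14} ports, "
          f"{switches_needed(n, 14)} x Nexus 5010")

# Cisco UCS at a single FEX uplink per chassis: the fabric interconnect's
# port count is the ceiling (20-port 6120XP or 40-port 6140XP)
for ports, model in ((20, "6120XP"), (40, "6140XP")):
    print(f"UCS {model}: up to {ports} chassis at 1 uplink each")
```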

IBM also offers a 10Gb Ethernet switch option from BNT (Blade Networks) that will work with converged switches like the Nexus 5000, but at this time that upgrade is not available. Once it becomes available, it would reduce the connectivity requirement down to a single cable; however, adding a switch between the blade chassis and the Nexus switch could bring additional management complications. Let me know your thoughts on this.

IBM’s BladeCenter H (BCH) does offer something that Cisco doesn’t – additional I/O expansion. Since this solution uses two of the high-speed bays in the BCH, standard I/O bays 1, 2, 3 and 4 remain available. Bays 1 and 2 are mapped to the onboard NICs on each server, and bays 3 and 4 are mapped to the first expansion card on each server. This means that 2 additional NICs and 2 additional HBAs (or NICs) could be added alongside the 2 CNA ports on each server. Based on this, IBM potentially offers more I/O scalability.

And the Winner Is…

It depends. I love the concept of the Cisco UCS platform. Servers are seen as processors and memory – building blocks that are centrally managed. Easy to scale, easy to size. However, is it for the average datacenter that only needs 5 servers with high I/O? Probably not. I see the Cisco UCS as a great platform for datacenters with more than 14 servers needing high I/O bandwidth (like virtualization or database servers). If your datacenter doesn’t need that type of scalability, then perhaps IBM’s BladeCenter solution is the choice for you. Going the IBM route gives you the flexibility to choose from multiple processor types and the ability to scale into a unified solution in the future, though that route is currently more complex and potentially more expensive than the Cisco UCS solution.

Let me know what you think. I welcome any comments.

4 Socket Blade Servers Density: Vendor Comparison

IMPORTANT NOTE – I updated this blog post on Feb. 28, 2011 with better details.  To view the updated blog post, please go to:

https://bladesmadesimple.com/2011/02/4-socket-blade-servers-density-vendor-comparison-2011/

Original Post (March 10, 2010):

With the Intel Nehalem EX processor launch a couple of weeks away, I wonder what impact it will have on the blade server market.  I’ve been talking about IBM’s HX5 blade server for several months now, so it is very clear that the blade server vendors will be developing blades around some iteration of the Xeon 7500 processor.  In fact, I’ve had several people confirm on Twitter that HP, Dell and even Cisco will be offering a 4-socket blade after Intel officially announces the processor on March 30.  For today’s post, I wanted to take a look at how the 4-socket blade space will impact the overall capacity of a blade server environment.  NOTE: this is purely speculation; I have no definitive information from any of these vendors that is not already public.

Cisco
The Cisco UCS 5108 chassis holds 8 “half-width” B200 blade servers or 4 “full-width” B250 blade servers, so when guessing at what design Cisco will use for a 4-socket Intel Xeon 7500 (Nehalem EX) architecture, I have to place my bet on the full-width form factor.  Why?  Simply because there is more real estate.  The Cisco B250 M1 blade server is known for its large memory capacity, but Cisco could sacrifice some of that extra memory space for a 4-socket “Cisco B350” blade.  This would pose a bit of an issue for customers wanting to implement a complete rack full of these servers, as it would only allow for a total of 28 servers in a 42U rack (7 chassis x 4 servers per chassis).

[Image: Estimated Cisco B300 with 4 CPUs]

On the other hand, Cisco is in a unique position in that their half-width form factor also has extra real estate, because it doesn’t carry the 2 daughter card slots that competitors’ blades do.  Perhaps Cisco would create a half-width blade with 4 CPUs (a B300?).  With a 42U rack and a half-width design, you would be able to get a maximum of 56 blade servers (7 chassis x 8 servers per chassis).

Dell
The 10U M1000e chassis from Dell can currently handle 16 “half-height” blade servers or 8 “full-height” blade servers.  I don’t foresee any way that Dell could put 4 CPUs into a half-height blade – there just isn’t enough room.  To do so, they would have to sacrifice something, like memory slots or a daughter card expansion slot, which just doesn’t seem worth it.  Therefore, I predict that Dell’s 4-socket blade will be a full-height blade server, probably named the PowerEdge M910.  With this assumption, you would be able to get 32 blade servers in a 42U rack (4 chassis x 8 blades).

HP
Similar to Dell, HP’s 10U BladeSystem c7000 chassis can currently handle 16 “half-height” blade servers or 8 “full-height” blade servers, and I don’t foresee any way that HP could put 4 CPUs into a half-height blade either, for the same reasons.  Therefore, I predict that HP’s 4-socket blade will be a full-height blade server, probably named the ProLiant BL680 G7 (yes, they’ll skip G6).  With this assumption, you would be able to get 32 blade servers in a 42U rack (4 chassis x 8 blades).

IBM
Finally, IBM’s 9U BladeCenter H chassis offers up to 14 servers.  IBM has one server size, called “single-wide,” but servers can be combined to form a “double-wide,” which is what the newly announced IBM BladeCenter HX5 needs to reach 4 sockets.  A double-wide blade server reduces the BladeCenter H’s capacity to 7 servers per chassis.  This means you would be able to put 28 x 4-socket IBM HX5 blade servers into a 42U rack (4 chassis x 7 servers each).

Summary
In a tie for 1st place, at 32 blade servers in a 42U rack, Dell and HP would have the most blade server density based on their existing full-height blade server designs.  IBM and Cisco would tie for 3rd place with 28 blade servers in a 42U rack.  However, IF Cisco (or HP or Dell, for that matter) were able to magically re-design their half-height servers to hold 4 CPUs, they would take 1st place for blade density with 56 servers.

Yes, I know the chances are slim that anyone would fill up a rack with 4-socket servers, but I thought this would be a good comparison to make.  What are your thoughts?  Let me know in the comments below.

Tolly Report: HP Flex-10 vs Cisco UCS (Network Bandwidth Scalability Comparison)

Tolly.com announced on 2/25/2010 a new test report that compares the network bandwidth scalability of the HP BladeSystem c7000 with BL460 G6 servers against the Cisco UCS 5100 with B200 servers, and the results were interesting.  The report tested 6 HP blades with a single Flex-10 module vs 6 Cisco blades using their Fabric Extender plus a single Fabric Interconnect.  I’m not going to try to re-state what the report says (for that you can download it directly); instead, I’m going to highlight the results.  It is important to note that the report was “commissioned by Hewlett-Packard Dev. Co, L.P.”

Result #1: HP BladeSystem C7000 with a Flex-10 Module Tested to Have More Aggregate Server Throughput (Gbps) than the Cisco UCS with a Fabric Extender Connected to a Fabric Interconnect in a Physical-to-Physical Comparison

  • When 4 physical servers were tested, Cisco achieved an aggregate throughput of 36.59 Gbps vs HP’s 35.83 Gbps (WINNER: Cisco)

  • When 6 physical servers were tested, Cisco achieved an aggregate throughput of 27.37 Gbps vs HP’s 53.65 Gbps – a difference of 26.28 Gbps (WINNER: HP)

Result #2: HP BladeSystem C7000 with a Flex-10 Module Tested to Have More Aggregate Server Throughput (Gbps) than the Cisco UCS with a Fabric Extender Connected to a Fabric Interconnect in a Virtual-to-Virtual Comparison

  • Testing 2 servers, each running 8 Red Hat Linux VMs under VMware, showed HP achieving an aggregate throughput of 16.42 Gbps vs Cisco UCS achieving 16.70 Gbps (WINNER: Cisco)

The results above were achieved with the 2 Cisco B200 blade servers each mapped to a dedicated 10Gb uplink port on the Fabric Extender (FEX).  When the 2 Cisco B200 blade servers were configured to share the same 10Gb uplink port on the FEX, the aggregate throughput achieved on the Cisco UCS decreased to 9.10 Gbps.

A few points to note about these findings:
a) the HP Flex-10 Module has 8 x 10Gb uplinks whereas the Cisco Fabric Extender (FEX) has 4 x 10Gb uplinks

b) Cisco’s FEX design maps the 8 blade servers onto the 4 external FEX ports at a 2:1 ratio (2 blades per external FEX port).  The current Cisco UCS design requires each server to be “pinned,” or permanently assigned, to a FEX uplink.  This works well with up to 4 blade servers, but beyond 4, two servers share each uplink, which can cause bandwidth contention.

Furthermore, it’s important to understand that the UCS blade infrastructure does not allow communication to go from Server 1 to Server 2 without leaving the FEX, traveling to the Fabric Interconnect, then returning through the FEX to the destination server.  This design is the likely cause of the decrease in aggregate throughput from 16.70 Gbps to 9.10 Gbps shown above.
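
As a back-of-envelope check (my own simplification, not Tolly’s methodology), static pinning makes each 10Gb uplink a hard ceiling for the blades assigned to it, with no spill-over to idle uplinks:

```python
UPLINK_GBPS = 10

def pinned_ceiling(pinning: dict) -> int:
    """Aggregate throughput ceiling: 10Gb per uplink that actually
    carries at least one pinned blade; idle uplinks contribute nothing."""
    return sum(UPLINK_GBPS for blades in pinning.values() if blades)

# Two blades on dedicated uplinks -> 20Gbps ceiling (Tolly measured 16.70)
print(pinned_ceiling({"uplink-1": ["B200-1"], "uplink-2": ["B200-2"]}))

# Two blades pinned to one shared uplink -> 10Gbps ceiling (measured 9.10)
print(pinned_ceiling({"uplink-1": ["B200-1", "B200-2"], "uplink-2": []}))
```

Both measured figures sit below these ceilings, and the shared-uplink result is roughly halved, consistent with the contention (and the extra FEX-to-interconnect hop) described above.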


One of the “Bottom Line” conclusions from this report states that “throughput degradation on the Cisco UCS caused by bandwidth contention is a cause of concern for customers considering the use of UCS in a virtual server environment.”  However, I encourage you to take a few minutes, download the full report from the Tolly.com website, and draw your own conclusions.

Let me know your thoughts about this report – leave a comment below.

Disclaimer: This report was brought to my attention while attending the HP Tech Day event, where airfare, accommodations and meals are being provided by HP; however, the content being blogged is solely my opinion and does not in any way express the opinions of HP.

What Gartner Thinks of Cisco, HP, IBM and Dell (UPDATED)

(UPDATED 10/28/09 with new links to full article)

I received a Tweet from @HPITOps linking to Gartner’s first-ever “Magic Quadrant” for blade servers.  The Magic Quadrant is a tool that Gartner puts together to help people easily see where manufacturers rank, based on certain criteria.  As the success of blade servers grows, so does demand for them.  You can read the complete Gartner paper at http://h20195.www2.hp.com/v2/getdocument.aspx?docname=4AA3-0100ENW.pdf, but I wanted to touch on a few highlights.

[Image: Gartner Magic Quadrant – October 2009]

Key Points

  • Blades are less than 15% of the server marketplace today
  • HP and IBM make up 70% of the blade market share
  • HP, IBM and Dell are classified as “Leaders” in the blade marketplace, and Cisco is listed as a “Visionary”

What Gartner Says About Cisco, Dell, HP and IBM

Cisco
Cisco announced their entry into the blade server marketplace in early 2009 and, as of the past few weeks, began shipping their first product.  Gartner’s report says, “Cisco’s Unified Computing System (UCS) is highly innovative and is particularly targeted at highly integrated and virtualized enterprise requirements.”  Gartner currently places Cisco in the “Visionaries” quadrant.  The report lists Cisco’s strengths as:

  • a global presence in “most data centers”
  • a differentiated blade design
  • a cross-selling opportunity across their huge install base
  • strong relationships with virtualization and integration vendors

As part of the report, Gartner also mentions some negative points (aka “Cautions”) about Cisco to consider:

  • a lack of blade server install base
  • a limited blade portfolio
  • limited hardware certification by operating system and application software vendors

Obviously these Cautions are based on Cisco’s newness to the marketplace, so let’s wait 6 months and check back on what Gartner thinks.

Dell
No stranger to the blade marketplace, Dell continues to produce new servers and new designs.  While Dell has a fantastic marketing department, they are still not anywhere close to the market share that IBM and HP split.  In spite of this, Gartner still classifies Dell in the “Leaders” quadrant.  According to the report, “Dell offers Intel and AMD Opteron blade servers that are well-engineered, enterprise-class platforms that fit well alongside the rest of Dell’s x86 server portfolio, which has seen the company grow its market share steadily through the past 18 months.”

The report views Dell’s strengths as:

  • a cross-selling opportunity to sell blades to their existing server, desktop and notebook customers
  • aggressive pricing policies
  • a focus on innovation in areas like cooling and virtual I/O

Dell’s “cautions” are reported as:

  • a limited portfolio that is targeted toward enterprise needs
  • a history of “patchy commitment” to their blade platforms

It will be interesting to see where Dell takes their blade model.  It’s easy to offer a low-priced model on entry-level rack servers, but in a blade infrastructure – where standardization is key and integrated switches are a necessity – having the lowest price may get tough.

IBM
Since 2002, IBM has been in the blade server marketplace with a wide variety of server and chassis offerings.  Gartner placed IBM in the “Leaders” quadrant as well, although much higher and further to the right, signifying a “greater ability to execute” and a “more complete vision.”  While IBM once had the lead in blade server market share, they’ve since handed it over to HP.  Gartner reports, “IBM is putting new initiatives in place to regain market share, including supply chain enhancements, dedicated sales resources and new channel programs.”

The report views IBM’s strengths as:

  • strong global market share
  • cross-selling opportunities to sell into existing IBM System x, System i, System p and System z customers
  • a broad set of chassis options that address specialized needs (like DC power and NEBS compliance for Telco) as well as departmental/enterprise needs
  • blade server offerings for x86 and Power processors
  • a strong record of management tools
  • innovation around cooling and specialized workloads

Gartner only lists one “caution” for IBM and that is their loss of market share to HP since 2007.

HP
Gartner identifies HP as the farthest to the right in the October 2009 Magic Quadrant, so I’ll classify HP as the #1 “Leader.”  Gartner’s report says, “Since the 2006 introduction of its latest blade generation, HP has recaptured market leadership and now sells more blade servers than the rest of the market combined.”  Interestingly, Gartner’s list of HP’s strengths is nearly identical to IBM’s:

  • global blade market leader
  • cross-selling opportunities to sell into existing HP server, laptop and desktop customers
  • a broad set of chassis options that address departmental and enterprise needs
  • blade server offerings for x86 and Itanium processors
  • a strong record of management tools
  • innovation around cooling and virtual I/O

Gartner lists only one “caution” for HP: their portfolio, as extensive as it may be, could be considered too complex, and it may sit too close to HP’s alternative modular, rack-based offerings.

Gartner’s report goes on to discuss niche players like Fujitsu, NEC and Hitachi, so if you are interested in reading about them, check out the full report at http://h20195.www2.hp.com/v2/getdocument.aspx?docname=4AA3-0100ENW.pdf.  All in all, Gartner’s report reaffirms that HP, IBM and Dell are the market leaders, for now, with Cisco coming up behind them.

Feel free to comment on this post and let me know what you think.
