Author Archives: Kevin Houston

About Kevin Houston

Founder of BladesMadeSimple.com. Blade server evangelist.

Details on Intel’s Nehalem EX (Xeon 7500 and Xeon 6500)

Intel is scheduled to "officially" announce the details of its Nehalem EX CPU platform today. Although the details have been out for quite a while, I wanted to highlight some key points.

Intel Xeon 7500
This processor family will be the flagship replacement for the existing Xeon 7400 architecture.  Enhancements include:
• Nehalem microarchitecture
• 8 cores per CPU
• 24MB shared L3 cache
• 4 memory buffers per CPU
• 16 DIMM slots per CPU, for a total of 64 DIMM slots supporting up to 1 terabyte of memory (across 4 CPUs)
• 72 PCIe Gen2 lanes
• Scaling from 2 to 256 sockets
• Intel Virtualization Technologies

Intel Xeon 6500
Perhaps the coolest addition to the Nehalem EX announcement is the ability for certain vendors to cut the architecture in half and use the same horsepower across 2 CPUs.  The Xeon 6500 will offer 2-socket configurations, with each CPU having the same qualities as its bigger brother, the Xeon 7500.  See below for details on both offerings.

Additional Features
Since the Xeon 6500/7500 processors are modeled on the familiar Nehalem microarchitecture, certain well-known features carry over.  Both Turbo Boost and HyperThreading are included and will give users better performance in their high-end servers.


Memory
Probably the biggest win among the features Intel is bringing with the Nehalem EX announcement is the ability to have more memory and bigger memory pipes.  Each CPU will have 4 high-speed "Scalable Memory Interconnects" (SMIs) that act as the highways for memory to communicate with the CPU.  As with the existing Nehalem architecture, each CPU has a dedicated memory controller that provides access to the memory.  In the Nehalem EX design, each CPU has 4 pathways, each with a Scalable Memory Buffer (SMB) that provides access to 4 memory DIMMs.  So, in total, each CPU will have access to 16 DIMMs across 4 pathways.  Based on the simple math, a server with 4 CPUs will be able to have up to 64 memory DIMMs (a quick sketch of this math follows the list below).  Some other key facts:
• It will support up to 16GB DDR3 DIMMs
• It will support up to 1TB of memory with 16GB DIMMs
• It will support DDR3 DIMMs up to 1066MHz, in Registered, Single-Rank, Dual-Rank and Quad-Rank flavors
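
To make that DIMM arithmetic concrete, here is a minimal sketch in Python; the figures are the ones quoted above, and the script itself is purely illustrative.

```python
# Illustrative arithmetic for Nehalem EX (Xeon 7500) memory capacity,
# using the figures quoted above.
SMI_LINKS_PER_CPU = 4           # Scalable Memory Interconnects per CPU
DIMMS_PER_SMB = 4               # DIMMs behind each Scalable Memory Buffer
CPUS_PER_SERVER = 4             # 4-socket server
MAX_DIMM_SIZE_GB = 16           # largest supported DDR3 DIMM

dimms_per_cpu = SMI_LINKS_PER_CPU * DIMMS_PER_SMB            # 16
dimms_per_server = dimms_per_cpu * CPUS_PER_SERVER           # 64
max_memory_tb = dimms_per_server * MAX_DIMM_SIZE_GB / 1024   # 1.0 TB

print(f"{dimms_per_cpu} DIMMs per CPU, {dimms_per_server} DIMMs per server, "
      f"{max_memory_tb:.1f} TB maximum memory")
```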

Another important note is that the actual system memory speed will depend on specific processor capabilities (see the reference table below for max SMI link speeds per CPU, and the small lookup sketch that follows the list):
• 6.4GT/s SMI link speed capable of running memory speeds up to 1066MHz
• 5.86GT/s SMI link speed capable of running memory speeds up to 978MHz
• 4.8GT/s SMI link speed capable of running memory speeds up to 800MHz
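
Those SMI-to-memory-speed pairings map naturally onto a small lookup; this is just an illustrative sketch of the pairings above, not any Intel or vendor tool.

```python
# Max DDR3 memory speed (MHz) supported at each SMI link speed (GT/s),
# per the pairings listed above.
MAX_MEMORY_SPEED_MHZ = {
    6.4: 1066,
    5.86: 978,
    4.8: 800,
}

def max_memory_speed(smi_link_gts):
    """Return the fastest memory speed a given SMI link speed can drive."""
    if smi_link_gts not in MAX_MEMORY_SPEED_MHZ:
        raise ValueError(f"Unknown SMI link speed: {smi_link_gts} GT/s")
    return MAX_MEMORY_SPEED_MHZ[smi_link_gts]

print(max_memory_speed(5.86))  # 978
```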

Here’s a great chart to reference on the features across the individual CPU offerings, from Intel:

Finally, take a look at some comparisons between the Nehalem EX (Xeon 7500) and the previous generation, Xeon 7400:

That’s it for now.  Check back later for more specific details on Dell, HP, IBM and Cisco’s new Nehalem EX blade servers.

HP Blades Helping Make Happy Feet 2 and Mad Max 4

Chalk yet another win up for HP. 

It was reported last week on www.itnews.com.au that digital production house Dr. D. Studios is in the early stages of building a supercomputer grid cluster to render the animated feature film Happy Feet 2 and the visual effects in Fury Road, the long-anticipated fourth film in the Mad Max series.  The supercomputer grid, based on HP BL490c G6 blade servers housed within an APC HACS pod, is already running in excess of 1,000 cores and is expected to reach over 6,000 cores during peak rendering by mid-2011.

The studio's previous cluster boasted 4,096 cores, taking it into the top 100 of the Top 500 supercomputers in the world in 2007 (it now sits at 447).

According to Doctor D infrastructure engineering manager James Bourne, “High density compute clusters provide an interesting engineering exercise for all parties involved. Over the last few years the drive to virtualise is causing data centres to move down a medium density path.”

Check out the full article, including video at:
http://www.itnews.com.au/News/169048,video-building-a-supercomputer-for-happy-feet-2-mad-max-4.aspx

Blade Server Shoot-Out (Dell/HP/IBM) – InfoWorld.com

InfoWorld.com posted on 3/22/2010 the results of a blade server shoot-out between Dell, HP, IBM and Super Micro. I’ll save you some time and help summarize the results of Dell, HP and IBM.

The Contenders
Dell, HP and IBM each provided blade servers with the Intel Xeon X5670 2.93GHz CPUs and at least 24GB of RAM in each blade.

The Tests
InfoWorld designed a custom suite of VMware tests as well as several real-world performance metric tests. The VMware tests were composed of:

  • a single large-scale custom LAMP application
  • a load-balancer running Nginx
  • four Apache Web servers
  • two MySQL servers

InfoWorld designed the VMware workloads to mimic a real-world Web app usage model that included a weighted mix of static and dynamic content, randomized database updates, inserts, and deletes, with the load generated at specific concurrency levels, starting at 50 concurrent connections and ramping up to 200.  InfoWorld started the VMware tests on one blade server, then ran them across two blades. Each blade being tested was running VMware ESX 4 and was controlled by a dedicated vCenter instance.
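
InfoWorld hasn't published its actual test harness, but a stepped concurrency ramp like the one described can be sketched roughly as follows; the target URL, request count and helper function here are hypothetical placeholders, not InfoWorld's tooling.

```python
# Rough, hypothetical sketch of a stepped concurrency ramp against a web app.
# The URL and request counts are illustrative assumptions only.
import concurrent.futures
import urllib.request

TARGET_URL = "http://test-app.example.com/"   # hypothetical front end
REQUESTS_PER_LEVEL = 1000

def fetch(url):
    """Issue one HTTP GET and return the status code."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.status

for concurrency in (50, 100, 150, 200):       # ramp from 50 up to 200 connections
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        statuses = list(pool.map(fetch, [TARGET_URL] * REQUESTS_PER_LEVEL))
    ok = sum(1 for status in statuses if status == 200)
    print(f"{concurrency} concurrent connections: {ok}/{REQUESTS_PER_LEVEL} OK")
```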

The other real-world tests included several tests of common single-threaded tasks run simultaneously at levels that met and eclipsed the logical CPU count on each blade, running all the way up to an 8x oversubscription of physical cores. These tests included:

  • LAME MP3 conversions of 155MB WAV files
  • MP4-to-FLV video conversions of 155MB video files
  • gzip and bzip2 compression tests
  • MD5 sum tests

The Results

Dell
Dell did very well, coming in 2nd in overall scoring.  The blades used in this test were Dell PowerEdge M610 units, each with two 2.93GHz Intel Westmere X5670 CPUs, 24GB of DDR3 RAM, and two Intel 10G interfaces connected to two Dell PowerConnect 8024 10G switches in the I/O slots on the back of the chassis.

Some key points made in the article about Dell:

  • Dell does not offer a lot of “blade options.”  There are several models available, but they are the same type of blades with different CPUs.  Dell does not currently offer any storage blades or virtualization-centric blades.
  • Dell’s 10Gb design does not offer any virtualized network I/O. The 10G pipe to each blade is just that, a raw 10G interface.  No virtual NICs.
  • The new CMC (chassis management controller) is a highly functional and attractive management tool, offering new capabilities such as pushing actions like BIOS updates and RAID controller firmware updates to multiple blades at once.
  • Dell has implemented more efficient dynamic power and cooling features in the M1000e chassis, such as shutting down power supplies when the power isn't needed and ramping the fans up and down depending on load and the location of that load.

According to the article, “Dell offers lots of punch in the M1000e and has really brushed up the embedded management tools. As the lowest-priced solution…the M1000e has the best price/performance ratio and is a great value.”

HP
Coming in at 1st place, HP continues to shine in blade leadership.  HP's test equipment consisted of a c7000 chassis with nine BL460c blades, each running two 2.93GHz Intel Xeon X5670 (Westmere-EP) CPUs and 96GB of RAM, as well as embedded 10G NICs with a dual 1G mezzanine card.  As an important note, HP was the only server vendor with 10G NICs on the motherboard.  Some key points made in the article about HP:

  •  With the 10G NICs standard on the newest blade server models, InfoWorld says “it’s clear that HP sees 10G as the rule now, not the exception.”
  • HP’s embedded Onboard Administrator offers detailed information on all chassis components from end to end.  For example, HP’s management console can provide exact temperatures of every chassis or blade component.
  • HP's console cannot perform global BIOS and firmware updates (unlike Dell's CMC) or power up or down more than one blade at a time.
  • HP offers “multichassis management” – the ability to daisy-chain several chassis together and log into any of them from the same screen as well as manage them.  This appears to be a unique feature to HP.
  • The HP c7000 chassis also has power controlling features like dynamic power saving options that will automatically turn off power supplies when the system energy requirements are low or increasing the fan airflow to only those blades that need it.

InfoWorld’s final thoughts on HP: “the HP c7000 isn’t perfect, but it is a strong mix of reasonable price and high performance, and it easily has the most options among the blade system we reviewed.”

IBM
Finally, IBM came in at 3rd place, missing a tie with Dell by a small fraction.  Surprisingly, I was unable to find details on the configuration used for IBM's testing.  I'm not sure if I just missed it or if InfoWorld left the information out, but I know IBM's blade server had the same Intel Xeon X5670 CPUs that Dell and HP used.  Some of the points that InfoWorld mentioned about IBM's BladeCenter H offering:

  • IBM’s pricing is higher.
  • IBM’s chassis only holds 14 servers whereas HP can hold 32 servers (using BL2x220c servers) and Dell holds 16 servers.
  • IBM’s chassis doesn’t offer a heads-up display (like HP and Dell.)
  • IBM had the only redundant internal power and I/O connectors on each blade.  It is important to note that this lack of redundant power and I/O connectors is why HP's and Dell's densities are higher.  If you want redundant connections on each blade with HP or Dell, you'll need to use their "full-height" servers, which decreases their capacity to 8 blades per chassis.
  • IBM’s Management Module is lacking graphical features – there’s no graphical representation of the chassis or any images.  From personal experience, IBM’s management module looks like it’s stuck in the ’90s – very text based.
  • The IBM BladeCenter H lacks dynamic power and cooling capabilities.  Instead of using smaller, independent regional fans for cooling, IBM uses two blowers.  Because of this, it lacks the ability to reduce cooling in specific areas the way Dell and HP can.

InfoWorld summarizes the IBM results by saying, "if you don't mind losing two blade slots per chassis but need some extra redundancy, then the IBM BladeCenter H might be just the ticket."

Overall, each vendor has their own pros and cons.  InfoWorld does a great job summarizing the benefits of each offering.  Please make sure to visit the InfoWorld article and read all of the details of their blade server shoot-out.


IBM BladeCenter H vs Cisco UCS

(From the Archives – September 2009)

News Flash: Cisco is now selling servers!

Okay – perhaps this isn't news anymore, but the reality is Cisco has been getting a lot of press lately, from their overwhelming presence at VMworld 2009 to their ongoing cat fight with HP. Since I work for a solutions provider that sells HP, IBM and now Cisco blade servers, I figured it might be good to "try" and put together a comparison between Cisco and IBM. Why IBM? Simply because, at this time, they are the only blade vendor who offers a Converged Network Adapter (CNA) that will work with the Cisco Nexus 5000 line. Dell and HP do not currently offer a CNA for their blade server lines, so IBM is the closest we can come to Cisco's offering. I don't plan on spending time educating you on blades, because if you are interested in this topic, you've probably already done your homework. My goal with this post is to show the pros (+) and cons (-) that each vendor has with their blade offering, based on my personal, neutral observations.

Chassis Variety / Choice: winner in this category is IBM.
IBM currently offers 5 types of blade chassis: BladeCenter S, BladeCenter E, BladeCenter H, BladeCenter T and BladeCenter HT. Each of the IBM blade chassis has unique features; for example, the BladeCenter S is designed for small or remote offices with local storage capabilities, whereas the BladeCenter HT is designed for Telco environments with options for NEBS-compliant features including DC power. At this time, Cisco offers only a single blade chassis (the UCS 5108).

IBM BladeCenter H

Cisco UCS 5108

Server Density and Server Offerings: winner in this category is IBM. IBM's BladeCenter E and BladeCenter H chassis offer up to 14 blade servers, with servers using Intel, AMD and PowerPC processors. In comparison, Cisco's 5108 chassis offers up to 8 server slots and currently offers servers with Intel Xeon processors. As an honorable mention, Cisco does offer a "full-width" blade (the Cisco UCS B250) that provides up to 384GB of RAM in a single blade server across 48 memory slots, offering the ability to reach higher memory capacities at a lower price point.
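
As a quick, illustrative check on those numbers, the B250's memory claim works out to commodity-sized DIMMs, which is where the lower price point comes from; this sketch is just arithmetic on the figures quoted above.

```python
# Arithmetic behind the density and memory figures quoted above.
B250_MEMORY_GB = 384
B250_MEMORY_SLOTS = 48
print(B250_MEMORY_GB / B250_MEMORY_SLOTS, "GB per DIMM slot")   # 8.0 GB DIMMs

IBM_BLADES_PER_CHASSIS = 14    # BladeCenter E / H
CISCO_SLOTS_PER_CHASSIS = 8    # UCS 5108
print(IBM_BLADES_PER_CHASSIS - CISCO_SLOTS_PER_CHASSIS,
      "more blades per chassis for IBM")                        # 6
```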

Management / Scalability: winner in this category is Cisco.
This is where Cisco is changing the blade server game. The traditional blade server infrastructure calls for each blade chassis to have its own dedicated management module to gain access to the chassis' environmentals and to remote control the blade servers. As you grow your blade chassis environment, you end up managing more and more of these separate management points.
Beyond the ease of management, the software that the Cisco 6100 series offers gives users the ability to manage server service profiles, which consist of things like MAC addresses, NIC firmware, BIOS firmware, WWN addresses and HBA firmware (just to name a few).

Cisco UCS 6100 Series Fabric Interconnect

With Cisco's UCS 6100 Series Fabric Interconnects, you are able to manage up to 40 blade chassis with a single pair of redundant UCS 6140XP interconnects (each with 40 ports).

If you are familiar with the Cisco Nexus 5000 product, then understanding the role of the Cisco UCS 6100 Fabric Interconnect should be easy. The UCS 6100 Series Fabric Interconnects do for the Cisco UCS servers what the Nexus does for other servers: unify the fabric. HOWEVER, it's important to note the UCS 6100 Series Fabric Interconnect is NOT a Cisco Nexus 5000. The UCS 6100 Series Fabric Interconnect is only compatible with the UCS servers.

Cisco UCS I/O Connectivity Diagram (UCS 5108 Chassis with 2 x 6120 Fabric Interconnects)

If you have other servers, with CNAs, then you’ll need to use the Cisco Nexus 5000.

The diagram on the right shows a single connection from the FEX to the UCS 6120XP; however, the FEX has 4 uplinks, so if you want (or need) more throughput, you can have it. This design provides each half-width Cisco B200 server with 2 CNA ports with redundant pathways. If you are satisfied with using a single FEX connection per chassis, then you have the ability to scale up to 20 blade chassis with a Cisco UCS 6120 Fabric Interconnect, or 40 chassis with the Cisco UCS 6140 Fabric Interconnect. As hinted in the previous section, the management software for all connected UCS chassis resides in the redundant Cisco UCS 6100 Series Fabric Interconnects. This design offers a highly scalable infrastructure that enables you to scale simply by dropping in a chassis and connecting the FEX to the 6100 switch. (Kind of like Lego blocks.)
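
Here's a minimal sketch of that scaling math, using the port counts mentioned above; the helper function is just an illustration, not a Cisco tool.

```python
# How many UCS 5108 chassis a single fabric interconnect can support,
# given its port count and how many FEX uplinks you dedicate per chassis.
FABRIC_INTERCONNECT_PORTS = {"UCS 6120XP": 20, "UCS 6140XP": 40}

def max_chassis(interconnect, uplinks_per_chassis=1):
    return FABRIC_INTERCONNECT_PORTS[interconnect] // uplinks_per_chassis

print(max_chassis("UCS 6120XP", 1))   # 20 chassis with one FEX uplink each
print(max_chassis("UCS 6140XP", 1))   # 40 chassis
print(max_chassis("UCS 6140XP", 4))   # 10 chassis if all 4 FEX uplinks are used
```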

On the flip side, while this architecture is simple, it’s also limited. There is currently no way to add additional I/O to an individual server. You get 2 x CNA ports per Cisco B200 server or 4 x CNA ports per Cisco B250 server.

As previously mentioned, IBM has a strategy that is VERY similar to the Cisco UCS strategy using the Cisco Nexus 5000 product line with pass-thru modules. IBM’s solution consists of:

  • IBM BladeCenter H Chassis
  • 10Gb Pass-Thru Module
  • CNAs on the blade servers

Even though IBM and Cisco designed the Cisco Nexus 4001i switch that integrates into the IBM BladeCenter H chassis, using a 10Gb pass-thru module "may" be the best option to get true Data Center Ethernet (or Converged Enhanced Ethernet) from the server to the Nexus switch – especially for users looking for the lowest cost. The performance of the IBM solution should equal the Cisco UCS design, since it's just passing the signal through; however, the connectivity requirements are going to be higher with the IBM solution.

BladeCenter H Diagram with Nexus 5010 (using 10Gb Passthru Modules)

Passing signals through means NO cable consolidation – for every server you're going to need a connection to the Nexus 5000. For a fully populated IBM BladeCenter H chassis, you'll need 14 connections to the Cisco Nexus 5000. If you are using the Cisco 5010 (20 ports), you'll eat up all but 6 ports. Add a 2nd IBM BladeCenter chassis and you're buying more Cisco Nexus switches. Not quite the scalable design that the Cisco UCS offers.
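
The port math behind that statement, as a small illustrative sketch:

```python
# Port consumption on a Nexus 5010 when cabling IBM BladeCenter H chassis
# through 10Gb pass-thru modules (one cable per blade, as described above).
NEXUS_5010_PORTS = 20
BLADES_PER_BCH_CHASSIS = 14        # fully populated BladeCenter H

for chassis in (1, 2):
    ports_needed = chassis * BLADES_PER_BCH_CHASSIS
    if ports_needed <= NEXUS_5010_PORTS:
        print(f"{chassis} chassis: {ports_needed} ports used, "
              f"{NEXUS_5010_PORTS - ports_needed} ports free")
    else:
        print(f"{chassis} chassis: {ports_needed} ports needed "
              f"-> another Nexus switch required")
```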

IBM also offers a 10Gb Ethernet switch option from BNT (Blade Networks) that will work with converged switches like the Nexus 5000, but at this time that upgrade is not available. Once it does become available, it would reduce the connectivity requirements down to a single cable, but adding a switch between the blade chassis and the Nexus switch could bring additional management complications. Let me know your thoughts on this.

IBM's BladeCenter H (BCH) does offer something that Cisco doesn't: additional I/O expansion. Since this solution uses two of the high-speed bays in the BCH, bays 1, 2, 3 & 4 remain available. Bays 1 & 2 are mapped to the onboard NICs on each server, and bays 3 & 4 are mapped to the 1st expansion card on each server. This means that 2 additional NICs and 2 additional HBAs (or NICs) could be added in conjunction with the 2 CNAs on each server. Based on this, IBM potentially offers more I/O scalability.
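
To summarize that bay-to-adapter mapping at a glance, here's a simple sketch; it's just a reading aid for the paragraph above, not IBM documentation.

```python
# Simplified view of the IBM BladeCenter H I/O bay mapping described above,
# with the 10Gb pass-thru solution occupying two of the high-speed bays.
io_bay_mapping = {
    "bay 1": "onboard NIC port 1 (each blade)",
    "bay 2": "onboard NIC port 2 (each blade)",
    "bay 3": "1st expansion card, port 1 (each blade)",
    "bay 4": "1st expansion card, port 2 (each blade)",
    "high-speed bays": "10Gb pass-thru modules carrying the CNA ports",
}

for bay, use in io_bay_mapping.items():
    print(f"{bay}: {use}")
```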

And the Winner Is…

It depends. I love the concept of the Cisco UCS platform. Servers are seen as processors and memory – building blocks that are centrally managed. Easy to scale, easy to size. However, is it for the average datacenter who only needs 5 servers with high I/O? Probably not. I see the Cisco UCS as a great platform for datacenters with more than 14 servers needing high I/O bandwidth (like a virtualization server or database server.) If your datacenter doesn’t need that type of scalability, then perhaps going with IBM’s BladeCenter solution is the choice for you. Going the IBM route gives you flexibility to choose from multiple processor types and gives you the ability to scale into a unified solution in the future. While ideal for scalability, the IBM solution is currently more complex and potentially more expensive than the Cisco UCS solution.

Let me know what you think. I welcome any comments.


Cisco, IBM and HP Update Blade Portfolio with Westmere Processor

Intel officially announced today the Xeon 5600 processor, code named “Westmere.” Cisco, HP and IBM also announced their blade servers that have the new processor. The Intel Xeon 5600 offers:

  • 32nm process technology with 50% more threads and cache
  • Improved energy efficiency with support for 1.35V low power memory

There will be 4-core and 6-core offerings. This processor also provides the option of HyperThreading, so you could have up to 8 or 12 threads per processor, or 16 and 24 in a dual-CPU system. This will be a huge advantage for applications that like multiple threads, like virtualization. Here's a look at what each vendor has come out with:

Cisco
The B200 M2 provides Cisco users with the current Xeon 5600 processors. It looks like Cisco will be offering a choice of the following Xeon 5600 processors: Intel Xeon X5670, X5650, E5640, E5620, L5640 or E5506. Because Cisco's model is a "built-to-order" design, I can't really provide any part numbers, but knowing what speeds they have should help.

HP
HP is starting off with the Intel Xeon 5600 by bumping their existing G6 models to include the Xeon 5600 processor. The look, feel, and options of the blade servers will remain the same – the only difference will be the new processor. According to HP, "the HP ProLiant G6 platform, based on Intel Xeon 5600 processors, includes the HP ProLiant BL280c, BL2x220c, BL460c and BL490c server blades and HP ProLiant WS460c G6 workstation blade for organizations requiring high density and performance in a compact form factor. The latest HP ProLiant G6 platforms will be available worldwide on March 29." It appears that HP is waiting until March 29 to provide details on their Westmere blade offerings, so don't go looking for part numbers or pricing on their website.

IBM
IBM is continuing to stay ahead of the game with details about their product offerings. They’ve refreshed their HS22 and HS22v blade servers:

HS22
7870ECU – Express HS22, 2x Xeon 4C X5560 95W 2.80GHz/1333MHz/8MB L2, 4x2GB, O/Bay 2.5in SAS, SR MR10ie

7870G4U – HS22, Xeon 4C E5640 80W 2.66GHz/1066MHz/12MB, 3x2GB, O/Bay 2.5in SAS

7870GCU – HS22, Xeon 4C E5640 80W 2.66GHz/1066MHz/12MB, 3x2GB, O/Bay 2.5in SAS, Broadcom 10Gb Gen2 2-port

7870H2U -HS22, Xeon 6C X5650 95W 2.66GHz/1333MHz/12MB, 3x2GB, O/Bay 2.5in SAS

7870H4U – HS22, Xeon 6C X5670 95W 2.93GHz/1333MHz/12MB, 3x2GB, O/Bay 2.5in SAS

7870H5U – HS22, Xeon 4C X5667 95W 3.06GHz/1333MHz/12MB, 3x2GB, O/Bay 2.5in SAS

7870HAU – HS22, Xeon 6C X5650 95W 2.66GHz/1333MHz/12MB, 3x2GB, O/Bay 2.5in SAS, Emulex Virtual Fabric Adapter

7870N2U – HS22, Xeon 6C L5640 60W 2.26GHz/1333MHz/12MB, 3x2GB, O/Bay 2.5in SAS

7870EGU – Express HS22, 2x Xeon 4C E5630 80W 2.53GHz/1066MHz/12MB, 6x2GB, O/Bay 2.5in SAS

HS22V
7871G2U HS22V, Xeon 4C E5620 80W 2.40GHz/1066MHz/12MB, 3x2GB, O/Bay 1.8in SAS

7871G4U HS22V, Xeon 4C E5640 80W 2.66GHz/1066MHz/12MB, 3x2GB, O/Bay 1.8in SAS

7871GDU HS22V, Xeon 4C E5640 80W 2.66GHz/1066MHz/12MB, 3x2GB, O/Bay 1.8in SAS

7871H4U HS22V, Xeon 6C X5670 95W 2.93GHz/1333MHz/12MB, 3x2GB, O/Bay 1.8in SAS

7871H5U HS22V, Xeon 4C X5667 95W 3.06GHz/1333MHz/12MB, 3x2GB, O/Bay 1.8in SAS

7871HAU HS22V, Xeon 6C X5650 95W 2.66GHz/1333MHz/12MB, 3x2GB, O/Bay 1.8in SAS

7871N2U HS22V, Xeon 6C L5640 60W 2.26GHz/1333MHz/12MB, 3x2GB, O/Bay 1.8in SAS

7871EGU Express HS22V, 2x Xeon 4C E5640 80W 2.66GHz/1066MHz/12MB, 6x2GB, O/Bay 1.8in SAS

7871EHU Express HS22V, 2x Xeon 6C X5660 95W 2.80GHz/1333MHz/12MB, 6x4GB, O/Bay 1.8in SAS

I could not find any information on what Dell will be offering from a blade server perspective, so if you have information (that is not confidential), feel free to send it my way.

New IBM Blade Chassis? New Liquid Cooled Blade?

Okay, I'll be the first to admit: I'm a geek. I'm not an uber-geek, but I'm a geek. When things get slow, I like digging around in the U.S. Patent Archives for hints as to what might be coming next in the blade server marketplace. My latest find uncovered a couple of "interesting" patents published by the International Business Machines Corporation, also known as IBM.

Liquid Cooled Blades?

United States Patent #7552758, titled "Method for high-density packaging and cooling of high-powered compute and storage server blades" (published 6/29/2009), may be IBM's clever way of disguising a method to liquid cool blade servers. According to the patent, the invention is "A system for removing heat from server blades, comprising: a server rack enclosure, the server rack enclosure enclosing: a liquid distribution manifold; a plurality of cold blades attached to the liquid distribution manifold, wherein liquid is circulated through the liquid distribution manifold and the cold blades; and at least one server blade attached to each of the cold blades, wherein the server blade includes a base portion, the base portion is a heat-conducting aluminum plate, the base portion is positioned directly onto the cold blade, and contact blocks penetrate the aluminum plate and make contact with corresponding contact points of the cold blades."

You can read more about this patent, in detail, at http://www.freepatentsonline.com/7552758.html

New Storage Blade?

Another search revealed a patent for a "hard disk enclosure blade" (patent #7499271), published on 3/3/2009. This is a design that IBM seems to have been working on for a few years, as it stems back to 2006. It appears to be a "double-wide" enclosure that will allow for 8 disk drives to be inserted.


This is an interesting idea, if the goal were to be used inside a normal bladecenter chassis. It would be like having the local space of an IBM BladeCenter S, but in the IBM BladeCenter E or IBM BladeCenter H. On the other hand, it could have been the invention that was used for the storage modules of the IBM BladeCenter S. You can read more about this invention at http://www.freepatentsonline.com/7499271.html.

New IBM BladeCenter Chassis?
The final invention that I uncovered is very mysterious to me. Titled "Securing Blade Servers in a Data Center," patent application #20100024001 shows a new concept from IBM encompassing a blade server chassis, a router, a patch panel, a RAID array, a power strip and blade servers all inside of a single enclosure, or "Data Center." An important note is that this device is not yet approved as a patent – it's still a patent application. Filed on 7/25/2008 and published as a patent application on 1/28/2010, it lists an abstract description of, "Securing blade servers in a data center, the data center including a plurality of blade servers installed in a plurality of blade server chassis, the blade servers and chassis connected for data communications to a management module, each blade server chassis including a chassis key, where securing blade servers includes: prior to enabling user-level operation of the blade server, receiving, by a security module, from the management module, a chassis key for the blade server chassis in which the blade server is installed; determining, by the security module, whether the chassis key matches a security key stored on the blade server; if the chassis key matches the security key, enabling, by the security module, user-level operation of the blade server; and if the chassis key does not match the security key, disabling, by the security module, operation of the blade server." I've tried a few times to decipher what this patent is really for, but I've not had any luck. I encourage you to head over to http://www.freepatentsonline.com/y2010/0024001.html and take a look. If it makes sense to you, leave me a comment.

While this was nothing but a trivial attempt at finding the next big thing before it’s announced, I walk away from this amazed at the number of patents that IBM has, just for blade servers. I hope to do a similar exercise for HP, Dell and Cisco in the near future, after tomorrow’s Westmere announcements.

4 Socket Blade Servers Density: Vendor Comparison

IMPORTANT NOTE – I updated this blog post on Feb. 28, 2011 with better details.  To view the updated blog post, please go to:

https://bladesmadesimple.com/2011/02/4-socket-blade-servers-density-vendor-comparison-2011/

Original Post (March 10, 2010):

As the Intel Nehalem EX processor is a couple of weeks away, I wonder what impact it will have in the blade server market.  I’ve been talking about IBM’s HX5 blade server for several months now, so it is very clear that the blade server vendors will be developing blades that will have some iteration of the Xeon 7500 processor.  In fact, I’ve had several people confirm on Twitter that HP, Dell and even Cisco will be offering a 4 socket blade after Intel officially announces it on March 30.  For today’s post, I wanted to take a look at how the 4 socket blade space will impact the overall capacity of a blade server environment.  NOTE: this is purely speculation, I have no definitive information from any of these vendors that is not already public.

Cisco
The Cisco UCS 5108 chassis holds 8 "half-width" B-200 blade servers or 4 "full-width" B-250 blade servers, so when guessing at what design Cisco will use for a 4-socket Intel Xeon 7500 (Nehalem EX) architecture, I have to place my bet on the full-width form factor.  Why?  Simply because there is more real estate.  The Cisco B250 M1 blade server is known for its large memory capacity, but Cisco could sacrifice some of that extra memory space for a 4-socket "Cisco B350" blade.  This would present a bit of an issue for customers wanting to implement a complete rack full of these servers, as it would only allow for a total of 28 servers in a 42U rack (7 chassis x 4 servers per chassis).

Estimated Cisco B300 with 4 CPUs

On the other hand, Cisco is in a unique position in that their half-width form factor also has extra real estate because they don’t have 2 daughter card slots like their competitors.  Perhaps Cisco would create a half-width blade with 4 CPUs (a B300?)  With a 42U rack, and using a half-width design, you would be able to get a maximum of 56 blade servers (7 chassis x 8 servers per chassis.)

Dell
The 10U M1000e chassis from Dell can currently handle 16 "half-height" blade servers or 8 "full-height" blade servers.  I don't foresee any way that Dell would be able to put 4 CPUs into a half-height blade.  There just isn't enough room.  To do this, they would have to sacrifice something, like memory slots or a daughter card expansion slot, which just doesn't seem worth it.  Therefore, I predict that Dell's 4-socket blade will be a full-height blade server, probably named the PowerEdge M910.  With this assumption, you would be able to get 32 blade servers in a 42U rack (4 chassis x 8 blades).

HP
Similar to Dell, HP's 10U BladeSystem c7000 chassis can currently handle 16 "half-height" blade servers or 8 "full-height" blade servers.  I don't foresee any way that HP would be able to put 4 CPUs into a half-height blade.  There just isn't enough room.  To do this, they would have to sacrifice something, like memory slots or a daughter card expansion slot, which just doesn't seem worth it.  Therefore, I predict that HP's 4-socket blade will be a full-height blade server, probably named the ProLiant BL680 G7 (yes, they'll skip G6).  With this assumption, you would be able to get 32 blade servers in a 42U rack (4 chassis x 8 blades).

IBM
Finally, IBM's 9U BladeCenter H chassis offers up to 14 servers.  IBM has one server size, called "single-wide."  IBM will also have the ability to combine servers together to form a "double-wide," which is what is needed for the newly announced IBM BladeCenter HX5.  A double-wide blade server reduces the IBM BladeCenter's capacity to 7 servers per chassis.  This means you would be able to put 28 x 4-socket IBM HX5 blade servers into a 42U rack (4 chassis x 7 servers each).

Summary
In a tie for 1st place, at 32 blade servers in a 42U rack, Dell and HP would have the most blade server density based on their existing full-height blade server designs.  IBM and Cisco would come in at 3rd place with 28 blade servers in a 42U rack.  However, IF Cisco (or HP and Dell, for that matter) were able to magically redesign their half-height servers to hold 4 CPUs, they would take 1st place for blade density with 56 servers.  The math behind these numbers is sketched below.
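
For reference, the density figures in this post follow directly from chassis height and blades per chassis; here's a small sketch of that math, using the chassis sizes and the (speculative) form-factor assumptions discussed above.

```python
# Speculative 4-socket blade density per 42U rack, using the chassis sizes
# and form-factor assumptions discussed in this post.
RACK_U = 42

vendors = {
    # vendor / form factor: (chassis height in U, 4-socket blades per chassis)
    "Cisco (full-width blade)": (6, 4),
    "Cisco (hypothetical half-width blade)": (6, 8),
    "Dell (full-height blade)": (10, 8),
    "HP (full-height blade)": (10, 8),
    "IBM (double-wide HX5)": (9, 7),
}

for vendor, (chassis_u, blades_per_chassis) in vendors.items():
    chassis_per_rack = RACK_U // chassis_u
    total = chassis_per_rack * blades_per_chassis
    print(f"{vendor}: {chassis_per_rack} chassis x {blades_per_chassis} blades = {total} per rack")
```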

Yes, I know that there are slim chances that anyone would fill up a rack with 4 socket servers, however I thought this would be good comparison to make.  What are your thoughts?  Let me know in the comments below.

IDC Q4 2009 Report: Blade Servers STILL Growing, HP STILL Leading in Market Share

IDC reported on February 24, 2010 that blade server sales for Q4 2009 returned to quarterly revenue growth with factory revenues increasing 30.9% in Q4 2009 year over year (vs 1.2% in Q3.)  For the first time in 2009 there was an 8.3% increase in year-over-year shipments in Q4.  Overall blade servers accounted for $1.8 billion in Q4 2009 (up from $1.3 billion in Q3) which represented 13.9% of the overall server revenue.  It was also reported that more than 87% of all blade revenue in Q4 2009 was driven by x86 systems where blades now represent 21.4% of all x86 server revenue.

While the press release did not provide details of the market share for all of the top 5 blade vendors, they did provide data for the following: 

#1 market share: HP with 52.4%

#2 market share: IBM with 35.1%, up 5.7% from Q3


As an important note, according to IDC, IBM significantly outperformed the market with year-over-year revenue growth of 64.1%.  

According to Jed Scaramella, senior research analyst in IDC's Datacenter and Enterprise Server group,  "Blades remained a bright spot in the server vendors’ portfolios.  They were able to grow blade revenue throughout the year while maintaining their average selling prices. Customers recognize the benefits extend beyond consolidation and density, and are leveraging the platform to deliver a dynamic IT environment. Vendors consider blades strategic to their business due to the strong loyalty customers develop for their blade vendor as well as the higher level of pull-through revenue associated with blades."

Virtual I/O on IBM BladeCenter (IBM Virtual Fabric Adapter by Emulex)

A few weeks ago, IBM and Emulex announced a new blade server adapter for the IBM BladeCenter and IBM System x line, called the "Emulex Virtual Fabric Adapter for IBM BladeCenter" (IBM part # 49Y4235). Frequent readers may recall that I had a "so what" attitude when I blogged about it in October, and that was because I didn't get it. I didn't get what the big deal was with being able to take a 10Gb pipe and carve it up into 4 "virtual NICs." HP has been doing this for a long time with their FlexNICs (check out VirtualKenneth's blog for great detail on this technology), so I didn't see the value in what IBM and Emulex were trying to do. But now I understand. Before I get into this, let me remind you of what this adapter is. The Emulex Virtual Fabric Adapter (CFFh) for IBM BladeCenter is a dual-port 10Gb Ethernet card that supports 1 Gbps or 10 Gbps traffic, or up to eight virtual NIC devices.

This adapter hopes to address three key I/O issues:

1. Need for more than two ports per server, with 6-8 recommended for virtualization
2. Need for more than 1Gb bandwidth, but can't support full 10Gb today
3. Need to prepare for network convergence in the future

"1, 2, 3, 4"
I recently attended an IBM/Emulex partner event and Emulex presented a unique way to understand the value of the Emulex Virtual Fabric Adapter via the term, "1, 2, 3, 4" Let me explain:

"1" – Emulex uses a single chip architecture for these adapters. (As a non-I/O guy, I'm not sure of why this matters – I welcome your comments.)


"2" – Supports two platforms: rack and blade
(Easy enough to understand, but this also emphasizes that a majority of the new IBM System x servers announced this week will have the Virtual Fabric Adapter "standard")

"3" – Emulex will have three product models for IBM (one for blade servers, one for the rack servers and one intergrated into the new eX5 servers)

"4" – There are four modes of operation:

  • Legacy 1Gb Ethernet
  • 10Gb Ethernet
  • Fibre Channel over Ethernet (FCoE)…via software entitlement ($$)
  • iSCSI Hardware Acceleration…via software entitlement ($$)

This last part is the key to the reason I think this product could be of substantial value. The adapter enables a user to begin with traditional Ethernet, then grow into 10Gb, FCoE or iSCSI without any physical change – all they need to do is buy a license (for the FCoE or iSCSI).

Modes of operation

The expansion card has two modes of operation: standard physical port mode (pNIC) and virtual NIC (vNIC) mode.

In vNIC mode, each physical port appears to the blade server as four virtual NICs, with a default bandwidth of 2.5 Gbps per vNIC. Bandwidth for each vNIC can be configured from 100 Mbps to 10 Gbps, up to a maximum of 10 Gb per virtual port.

In pNIC mode, the expansion card can operate as a standard 10 Gbps or 1 Gbps 2-port Ethernet expansion card.

As previously mentioned, a future entitlement purchase will allow for up to two FCoE ports or two iSCSI ports. The FCoE and iSCSI ports can be used in combination with up to six Ethernet ports in vNIC mode, up to a maximum of eight total virtual ports.
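
To pull the port and bandwidth rules from the last few paragraphs together, here's an illustrative sanity-check sketch; the function and its names are hypothetical and are not part of any Emulex or IBM API.

```python
# Hypothetical sanity check for an Emulex Virtual Fabric Adapter vNIC layout,
# based on the rules described above (not an actual Emulex/IBM tool).
def validate_vnic_layout(ethernet_vnics, storage_ports, bandwidths_gbps):
    MAX_VIRTUAL_PORTS = 8       # 2 physical ports x 4 vNICs each
    MAX_STORAGE_PORTS = 2       # FCoE or iSCSI, enabled via software entitlement
    MAX_ETH_WITH_STORAGE = 6    # Ethernet vNICs usable alongside storage ports

    if storage_ports > MAX_STORAGE_PORTS:
        raise ValueError("At most two FCoE or iSCSI ports are supported")
    if storage_ports and ethernet_vnics > MAX_ETH_WITH_STORAGE:
        raise ValueError("Storage ports leave room for at most six Ethernet vNICs")
    if ethernet_vnics + storage_ports > MAX_VIRTUAL_PORTS:
        raise ValueError("No more than eight virtual ports in total")
    for bandwidth in bandwidths_gbps:
        if not 0.1 <= bandwidth <= 10.0:   # 100 Mbps to 10 Gbps per vNIC
            raise ValueError(f"vNIC bandwidth {bandwidth} Gbps is out of range")
    return True

# Example: six Ethernet vNICs at the 2.5 Gbps default plus two FCoE ports.
print(validate_vnic_layout(6, 2, [2.5] * 6))
```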

Mode and IBM switch compatibility:

  • vNIC – works with the BNT Virtual Fabric Switch
  • pNIC – works with BNT, IBM Pass-Thru and Cisco Nexus
  • FCoE – BNT or Cisco Nexus
  • iSCSI acceleration – all IBM 10GbE switches

I really think the "one card can do it all" concept works well for the IBM BladeCenter design, and I think we'll start seeing more and more customers move toward this single-card approach.

Comparison to HP Flex-10
I'll be the first to admit, I'm not a network or storage guy, so I'm not really qualified to compare this offering to HP's Flex-10, however IBM has created a very clever video that does some comparisons. Take a few minutes to watch and let me know your thoughts.


Announcing the IBM BladeCenter HX5 Blade Server (with detailed pics)

(UPDATED 11:29 AM EST 3/2/2010)
IBM announced today the BladeCenter® HX5 – their first 4 socket blade since the HS41 blade server. IBM calls the HX5 “a scalable, high-performance blade server with unprecedented compute and memory performance, and flexibility ideal for compute and memory-intensive enterprise workloads.”

The HX5 will have the ability to be coupled with a 2nd HX5 to scale to 4 CPU sockets, to grow beyond the base memory with the MAX5 memory expansion, and to offer hardware partitioning to split a dual-node server into 2 single-node servers and back again. I'll review each of these features in more detail below, but first, let's look at the basics of the HX5 blade server.

HX5 features:

  • Up to 2 x Intel Xeon 7500 CPUs per node
  • 16 DIMMs per node
  • 2 x Solid State Disk (SSD) slots per node
  • 1 x CIOv and 1 CFFh daughter card expansion slot per node, providing up to 8 I/O ports per node
  • 1 x scale connector per node

CPU Scalability
In the fashion of the eX5 architecture, IBM is enabling the HX5 blade server to grow from 2 CPUs to 4 CPUs (and theoretically more) by connecting servers through a "scale connector." This connector physically connects 2 HX5 servers across the tops of the servers, allowing the internal communications to extend to each other's nodes. The easiest way to think of this is like a Lego block: it allows HX5 nodes or a MAX5 to be connected together. There will be a 2-connector, a 3-connector and a 4-connector offering. This means you could have any number of combinations, from 2 x HX5 blade servers up to 2 x HX5 blade servers plus a MAX5 memory blade.

Memory Scalability
With the addition of a new 24-DIMM memory blade, called the MAX5, IBM is enabling users to grow the base memory from 16 memory DIMMs to 40 (16 + 24) memory DIMMs. The MAX5 will be connected via the scale connector mentioned above, and in fact, when coupled with a 2-node, 4-socket system, could enable the entire system to have 80 DIMMs (16 DIMMs per HX5 plus 24 DIMMs per MAX5). Granted, this will be a 4-blade-wide offering, but it will be a powerful option for database servers, or even virtualization.
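
The DIMM counts work out as follows; this is just a quick sketch of the arithmetic using the numbers above.

```python
# DIMM counts for the HX5/MAX5 combinations described above.
DIMMS_PER_HX5 = 16
DIMMS_PER_MAX5 = 24

single_node_with_max5 = DIMMS_PER_HX5 + DIMMS_PER_MAX5              # 40
two_nodes_with_two_max5 = 2 * (DIMMS_PER_HX5 + DIMMS_PER_MAX5)      # 80

print(single_node_with_max5, "DIMMs for one HX5 plus a MAX5")
print(two_nodes_with_two_max5, "DIMMs for a 2-node, 4-socket system with two MAX5s")
```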

Hardware Partitioning
The final feature, known as FlexNode partitioning, is the ability to split a combined server node into individual server nodes and back again as needed. Performed using IBM software, this feature will enable a user to automatically take a 2-node HX5 system acting as a single 4-socket system, split it into 2 x 2-socket systems, and then revert back to a single 4-socket system once the workload is completed.

For example, during the day the 4-socket HX5 server is used as a database server, but at night the database server is not being used, so the system is partitioned into 2 x 2-socket physical servers that can each run their own applications.

As I’ve mentioned previously, the pricing and part number info for the IBM BladeCenter HX5 blade server is not expected to show up until the Intel Xeon 7500 processor announcement on March 30, so when that info is released, you can find it here.

For more details, head over to IBM's RedBook site.

Let me know your thoughts – leave your comments below.