Tag Archives: blade server

IDC Q4 2009 Report: Blade Servers STILL Growing, HP STILL Leading in Share

IDC reported on February 24, 2010 that blade server sales returned to quarterly revenue growth in Q4 2009, with factory revenues increasing 30.9% year over year (vs 1.2% in Q3).  Q4 also saw shipments grow 8.3% year over year, the first such increase in 2009.  Overall, blade servers accounted for $1.8 billion in Q4 2009 (up from $1.3 billion in Q3), representing 13.9% of overall server revenue.  It was also reported that more than 87% of all blade revenue in Q4 2009 was driven by x86 systems, where blades now represent 21.4% of all x86 server revenue.

While the press release did not provide details of the market share for all of the top 5 blade vendors, they did provide data for the following: 

#1 market share: HP with 52.4%

#2 market share: IBM with 35.1%, having increased their share 5.7% from Q3

(Chart: Q4 2009 IDC blade server market share)

As an important note, according to IDC, IBM significantly outperformed the market with year-over-year revenue growth of 64.1%.  

According to Jed Scaramella, senior research analyst in IDC's Datacenter and Enterprise Server group,  "Blades remained a bright spot in the server vendors’ portfolios.  They were able to grow blade revenue throughout the year while maintaining their average selling prices. Customers recognize the benefits extend beyond consolidation and density, and are leveraging the platform to deliver a dynamic IT environment. Vendors consider blades strategic to their business due to the strong loyalty customers develop for their blade vendor as well as the higher level of pull-through revenue associated with blades."

Virtual I/O on IBM BladeCenter (IBM Virtual Fabric Adapter by Emulex)

A few weeks ago, IBM and Emulex announced a new blade server adapter for the IBM BladeCenter and IBM System x line, called the “Emulex Virtual Fabric Adapter for IBM BladeCenter" (IBM part # 49Y4235). Frequent readers may recall that I had a "so what" attitude when I blogged about it in October, and that was because I didn't get it. I didn't see what the big deal was with being able to take a 10Gb pipe and carve it up into 4 "virtual NICs". HP has been doing this for a long time with their FlexNICs (check out VirtualKenneth's blog for great detail on this technology), so I didn't see the value in what IBM and Emulex were trying to do. But now I understand. Before I get into this, let me remind you of what this adapter is. The Emulex Virtual Fabric Adapter (CFFh) for IBM BladeCenter is a dual-port 10Gb Ethernet card that supports 1 Gbps or 10 Gbps traffic, or up to eight virtual NIC devices.

This adapter hopes to address three key I/O issues:

1. Need for more than two ports per server, with 6-8 recommended for virtualization
2. Need for more than 1Gb bandwidth, but can't support full 10Gb today
3. Need to prepare for network convergence in the future

"1, 2, 3, 4"
I recently attended an IBM/Emulex partner event where Emulex presented a unique way to understand the value of the Emulex Virtual Fabric Adapter via the phrase "1, 2, 3, 4". Let me explain:

"1" – Emulex uses a single chip architecture for these adapters. (As a non-I/O guy, I'm not sure of why this matters – I welcome your comments.)


"2" – Supports two platforms: rack and blade
(Easy enough to understand, but this also emphasizes that a majority of the new IBM System x servers announced this week will have the Virtual Fabric Adapter "standard")

"3" – Emulex will have three product models for IBM (one for blade servers, one for rack servers and one integrated into the new eX5 servers)

"4" – There are four modes of operation:

  • Legacy 1Gb Ethernet
  • 10Gb Ethernet
  • Fibre Channel over Ethernet (FCoE)…via software entitlement ($$)
  • iSCSI Hardware Acceleration…via software entitlement ($$)

This last part is the key reason I think this product could be of substantial value. The adapter enables a user to begin with traditional Ethernet, then grow into 10Gb, FCoE or iSCSI without any physical change – all they need to do is buy a license (for FCoE or iSCSI).

Modes of operation

The expansion card has two modes of operation: standard physical port mode (pNIC) and virtual NIC (vNIC) mode.

In vNIC mode, each physical port appears to the blade server as four virtual NICs with a default bandwidth of 2.5 Gbps per vNIC. Bandwidth for each vNIC can be configured from 100 Mbps up to a maximum of 10 Gbps per virtual port.

In pNIC mode, the expansion card can operate as a standard 10 Gbps or 1 Gbps 2-port Ethernet expansion card.

As previously mentioned, a future entitlement purchase will allow for up to two FCoE ports or two iSCSI ports. The FCoE and iSCSI ports can be used in combination with up to six Ethernet ports in vNIC mode, up to a maximum of eight total virtual ports.
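To make the port-carving rules concrete, here is a minimal sketch that checks a vNIC plan against the limits described above. This is illustrative only – it is not Emulex or IBM tooling, and the assumption that the four vNICs on one port cannot together exceed that port's 10 Gbps is my simplification.

```python
# Illustrative sketch only -- not an Emulex or IBM tool. It checks the vNIC
# rules described above: each 10Gb physical port is carved into 4 vNICs,
# each configurable from 100 Mbps to 10 Gbps. The "no oversubscription past
# 10 Gbps per physical port" rule is an assumption of this simplified model.

def validate_vnic_plan(vnic_gbps):
    """vnic_gbps: list of 4 bandwidth values (in Gbps) for one physical port."""
    if len(vnic_gbps) != 4:
        raise ValueError("vNIC mode carves each physical port into 4 vNICs")
    for bw in vnic_gbps:
        if not 0.1 <= bw <= 10.0:
            raise ValueError(f"vNIC bandwidth {bw} Gbps is outside the 100 Mbps - 10 Gbps range")
    if sum(vnic_gbps) > 10.0:
        raise ValueError("total vNIC bandwidth exceeds the 10 Gbps physical port (model assumption)")
    return True

# Default carve-up: 4 x 2.5 Gbps per physical port
validate_vnic_plan([2.5, 2.5, 2.5, 2.5])

# A skewed allocation favouring one heavy vNIC
validate_vnic_plan([6.0, 2.0, 1.0, 1.0])
```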

Mode / IBM switch compatibility:

  • vNIC – works with the BNT Virtual Fabric Switch
  • pNIC – works with BNT, IBM Pass-Thru, Cisco Nexus
  • FCoE – BNT or Cisco Nexus
  • iSCSI Acceleration – all IBM 10GbE switches

I think the "one card can do it all" concept works really well for the IBM BladeCenter design, and I expect we'll start seeing more and more customers move toward this single-card approach.

Comparison to HP Flex-10
I'll be the first to admit, I'm not a network or storage guy, so I'm not really qualified to compare this offering to HP's Flex-10, however IBM has created a very clever video that does some comparisons. Take a few minutes to watch and let me know your thoughts.


Announcing the IBM BladeCenter HX5 Blade Server (with detailed pics)

(UPDATED 11:29 AM EST 3/2/2010)
IBM announced today the BladeCenter® HX5 – their first 4 socket blade since the HS41 blade server. IBM calls the HX5 “a scalable, high-performance blade server with unprecedented compute and memory performance, and flexibility ideal for compute and memory-intensive enterprise workloads.”

The HX5 will have the ability to be coupled with a 2nd HX5 to scale to 4 CPU sockets, to grow beyond the base memory with the MAX5 memory expansion, and to offer hardware partitioning to split a dual-node server into 2 x single-node servers and back again. I’ll review each of these features in more detail below, but first, let’s look at the basics of the HX5 blade server.

HX5 features:

  • Up to 2 x Intel Xeon 7500 CPUs per node
  • 16 DIMMs per node
  • 2 x Solid State Disk (SSD) slots per node
  • 1 x CIOv and 1 CFFh daughter card expansion slot per node, providing up to 8 I/O ports per node
  • 1 x scale connector per node

CPU Scalability
In the fashion of the eX5 architecture, IBM is enabling the HX5 blade server to grow from 2 CPUs to 4 CPUs (and theoretically more) by connecting the servers through a “scale connector“. This connector physically connects 2 HX5 servers across the tops of the servers, allowing the internal communications to extend to each other’s nodes. The easiest way to think of this is like a Lego brick: it allows HX5s and MAX5s to be connected together. There will be a 2-node connector, a 3-node connector and a 4-node connector offering. This means you could have any number of combinations, from 2 x HX5 blade servers to 2 x HX5 blade servers + a MAX5 memory blade.

Memory Scalability
With the addition of a new 24-DIMM memory blade, called the MAX5, IBM is enabling users to grow the base memory from 16 memory DIMMs to 40 (16 + 24) memory DIMMs. The MAX5 will be connected via the scale connector mentioned above, and in fact, when coupled with a 2-node, 4-socket system, could enable the entire system to have 80 DIMMs (16 DIMMs per HX5 plus 24 DIMMs per MAX5). Granted, this will be a 4-blade-wide offering, but it will be a powerful option for database servers, or even virtualization.
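For anyone double-checking the math, here is the DIMM-count arithmetic behind those numbers. This is a quick illustrative sketch only; the 8GB DIMM size used for the capacity example is my assumption, not a stated IBM configuration.

```python
# DIMM-count arithmetic for the HX5 + MAX5 combinations described above
# (illustrative only; the 8GB DIMM size is an assumed example).
HX5_DIMMS_PER_NODE = 16
MAX5_DIMMS = 24

single_node_with_max5 = HX5_DIMMS_PER_NODE + MAX5_DIMMS        # 40 DIMM slots
dual_node_with_max5s = 2 * (HX5_DIMMS_PER_NODE + MAX5_DIMMS)   # 80 DIMM slots

EXAMPLE_DIMM_GB = 8
print(single_node_with_max5, "DIMMs =", single_node_with_max5 * EXAMPLE_DIMM_GB, "GB")  # 40 DIMMs = 320 GB
print(dual_node_with_max5s, "DIMMs =", dual_node_with_max5s * EXAMPLE_DIMM_GB, "GB")    # 80 DIMMs = 640 GB
```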

Hardware Partitioning
The final feature, known as FlexNode partitioning, is the ability to split a combined server node into individual server nodes and back again as needed. Performed using IBM software, this feature will enable a user to automatically take a 2-node HX5 system acting as a single 4-socket system, split it into 2 x 2-socket systems, then revert back to a single 4-socket system once the workload is completed.

For example, during the day the 4-socket HX5 server is used as a database server, but at night the database server is not being used, so the system is partitioned off into 2 x 2-socket physical servers that can each run their own applications.

As I’ve mentioned previously, the pricing and part number info for the IBM BladeCenter HX5 blade server is not expected to show up until the Intel Xeon 7500 processor announcement on March 30, so when that info is released, you can find it here.

For more details, head over to IBM’s RedBook site.

Let me know your thoughts – leave your comments below.

Announcing IBM eX5 Portfolio and the HX5 Blade Server

UPDATED: 3/2/2010 at 12:58 PM EST
Author’s Note: I’m stretching outside of my “blades” theme today so I can capture the entire eX5 messaging.
 
Finally, all the hype is over.  IBM announced today the next evolution of their “Enterprise x-Architecture”, also known as eX5.  
Why eX5?  Simple:  e = Enterprise, X = x-Architecture, 5 = fifth generation. 

IBM’s Enterprise x-Architecture has been around for quite a while providing unique Scalability, Reliability and Flexibility in the x86 4-socket platforms.  You can check out the details of the eX4 technology here. 

Today’s announcement offered up a few facts:   

a) the existing x3850 and x3950 M2 will now be called the x3850 and x3950 X5, signifying a trend for IBM to move toward product naming designations that reflect the purpose of the server. 

b) the x3850 and x3950 X5s will use the Intel Nehalem EX, to be officially announced/released on March 30.  At that time we can expect full details, including part numbers, pricing and technical specifications. 

c) a new 2U-high, 2-socket server, the x3690 X5, was also announced.  This is probably the most exciting of the product announcements: it is based on the Intel Nehalem EX processor, but IBM’s innovation is going to enable the x3690 X5 to scale from 2 sockets to 4 sockets – but wait, there’s more.  There will also be the ability, called MAX5, to add a memory expansion unit to the x3690 X5 systems, enabling their system memory to be DOUBLED.

d) in addition to the memory drawer, IBM will be shipping packs of solid state disks, called eXFlash, that will deliver high performance to replace the limited IOPs of traditional spinning disks.  IBM is touting “significant” increases in performance for local databases with this new bundle of solid state disks.  In fact, according to IBM’s press release, eXFlash technology would eliminate the need for a client to purchase two entry-level servers and 80 JBODs to support a 240,000 IOPs database environment, saving $670,000 in server and storage acquisition costs.  The cool part is that these packs of disks will pop into the hot-swap drive bays of the x3690, x3850 and x3950 X5 servers.

e) IBM also announced a new technology, known as “FlexNode”, that offers physical partitioning capability, allowing servers to move from being a single system to 2 unique systems and back again. 

 
Blade Specific News
1) IBM will be releasing a new blade server, the BladeCenter HX5, next quarter that will also use the Intel Xeon 7500.  This blade server will scale, like all of the eX5 products, from 2 processors to 4 processors (and theoretically more) and will be ideal for database workloads.  Again, pricing and specs for this product will be released on the official Intel Nehalem EX launch date.  
IBM BladeCenter HX5 Blade Server

 

An observation from the pictures of the HX5 is that it will not have hot-swap drives like the HS22 does.  This means there will be internal drives – most likely solid state drives (SSDs).  You may recall from my previous rumour post that the lack of hot-swap drives is pretty evident – IBM needed the real estate for the memory.  Unfortunately, until memristors become available, blade vendors will need to sacrifice drive real estate for memory. 

2) As part of the MAX5 technology, IBM will also be launching a memory blade to increase the overall memory on the HX5 blade server.  Expect more details on this in the near future. 

Visit IBM’s website for their Live eX5 Event at 2 p.m. Eastern time at this site: 

http://www-03.ibm.com/systems/info/x86servers/ex5/events/index.html?CA=ex5launchteaser&ME=m&MET=exli&RE=ezvrm&Tactic=us0ab06w&cm_mmc=us0ab06w-_-m-_-ezvrm-_-ex5launchteaser-20100203 

As more information comes out on the new IBM eX5 portfolio, check back here and I’ll keep you posted.  I’d love to hear your thoughts in the comments below. 

MAX5 Memory Drawer (1U)

 

I find the x3690 X5 so interesting and exciting because it could quickly take over the server space currently occupied by the HP DL380 and the IBM x3650 when it comes to virtualization.  We all know that VMware and other hypervisors thrive on memory; however, the current 2-socket server design is limited to 12 – 16 memory sockets.  With the IBM System x3690 X5, this limitation can be overcome, as you can simply add on a memory drawer to achieve more memory capacity. 
Industry Opinions
Check out this analyst’s view of the IBM eX5 announcement here (pdf).
Here’s what VMware’s CTO, Stephen Herrod, has to say about IBM eX5:

  


Tolly Report: HP Flex-10 vs Cisco UCS (Network Bandwidth Scalability Comparison)

Tolly.com announced on 2/25/2010 a new Test Report that compares the network bandwidth scalability between the HP BladeSystem c7000 with BL460 G6 Servers and the Cisco UCS 5100 with B200 Servers, and the results were interesting.   The report tested 6 HP blades with a single Flex-10 Module vs 6 Cisco blades using their Fabric Extender plus a single Fabric Interconnect.  I’m not going to try to re-state what the report says (for that you can download it directly); instead, I’m going to highlight the results.  It is important to note that the report was “commissioned by Hewlett-Packard Dev. Co, L.P.”

Result #1:  HP BladeSystem C7000 with a Flex-10 Module Tested to have More Aggregate Server Throughput (Gbps) over the Cisco UCS with a Fabric Extender connected to a Fabric Interconnect in a Physical-to-Physical Comparison
• With 4 physical servers tested, Cisco achieved an aggregate throughput of 36.59 Gbps vs HP’s 35.83 Gbps (WINNER: Cisco)

• With 6 physical servers tested, Cisco achieved an aggregate throughput of 27.37 Gbps vs HP’s 53.65 Gbps – a difference of 26.28 Gbps (WINNER: HP)

Result #2: HP BladeSystem C7000 with a Flex-10 Module Tested to have More Aggregate Server Throughput (Gbps) over the Cisco UCS with a Fabric Extender connected to a Fabric Interconnect in a Virtual-to-Virtual Comparison

• Testing 2 servers, each running 8 Red Hat Linux virtual machines under VMware, showed that HP achieved an aggregate throughput of 16.42 Gbps vs Cisco UCS achieving 16.70 Gbps (WINNER: Cisco).

The test above was performed with the 2 x Cisco B200 blade servers each mapped to a dedicated 10Gb uplink port on the Fabric Extender (FEX).  When the 2 x Cisco B200 blade servers were configured to share the same 10Gb uplink port on the FEX, the aggregate throughput achieved on the Cisco UCS decreased to 9.10 Gbps.

A few points to note about these findings:
a) the HP Flex-10 Module has 8 x 10Gb uplinks whereas the Cisco Fabric Extender (FEX) has 4 x 10Gb uplinks

b) Cisco’s FEX design maps the 8 blade servers onto the 4 external FEX ports at a 2:1 ratio (2 blades per external FEX port). The current Cisco UCS design requires the servers to be “pinned”, or permanently assigned, to their respective FEX uplink. This works well with up to 4 blade servers, but once you go beyond 4 blade servers, each uplink’s traffic is shared between two servers, which could cause bandwidth contention. 

Furthermore, it’s important to understand that the design of the UCS blade infrastructure does not allow communication to go from Server 1 to Server 2 without leaving the FEX, connecting to the Fabric Interconnect (top of the picture), then returning to the FEX and connecting to the destination server.  This design is the potential cause of the decrease in aggregate throughput from 16.70 Gbps to 9.10 Gbps shown above.
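To make the pinning behaviour concrete, here is a minimal sketch of the 2:1 oversubscription described above. It is purely illustrative – this is not Cisco’s actual pinning algorithm, and the round-robin slot-to-uplink mapping is my assumption.

```python
# Simplified model of the static "pinning" described above (illustrative only,
# not Cisco's actual algorithm): 8 blade slots pinned round-robin to the
# 4 external FEX uplinks, so with more than 4 active blades two servers
# end up sharing one 10Gb uplink.

FEX_UPLINKS = 4
UPLINK_GBPS = 10.0

def pinned_uplink(blade_slot):
    """Blade slots 1-8 pinned to uplinks 1-4 in a fixed pattern (assumed)."""
    return (blade_slot - 1) % FEX_UPLINKS + 1

def worst_case_bandwidth(active_blades):
    """Per-blade bandwidth if every active blade drives its uplink flat out."""
    sharing = {}
    for slot in active_blades:
        up = pinned_uplink(slot)
        sharing[up] = sharing.get(up, 0) + 1
    return {slot: UPLINK_GBPS / sharing[pinned_uplink(slot)] for slot in active_blades}

print(worst_case_bandwidth([1, 2, 3, 4]))        # 10 Gbps each
print(worst_case_bandwidth([1, 2, 3, 4, 5, 6]))  # blades sharing an uplink drop to 5 Gbps
```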


One of the “Bottom Line” conclusions from this report states that “throughput degradation on the Cisco UCS caused by bandwidth contention is a cause of concern for customers considering the use of UCS in a virtual server environment”; however, I encourage you to take a few minutes, download the full report from the Tolly.com website and draw your own conclusions. 

Let me know your thoughts about this report – leave a comment below.

Disclaimer: This report was brought to my attention while attending the HP Tech Day event, where airfare, accommodations and meals are being provided by HP; however, the content being blogged is solely my opinion and does not in any way express the opinions of HP.

HP BladeSystem Rumours

I’ve recently posted some rumours about IBM’s upcoming announcements in their blade server line; now it is time to let you know some rumours I’m hearing about HP.   NOTE: this is purely speculation; I have no definitive information from HP, so this may be false info.  That being said – here we go:

Rumour #1:  Integration of “CNA” like devices on the motherboard. 
As you may be aware, with the introduction of the “G6”, or Generation 6, of HP’s blade servers, HP added “FlexNICs” onto the servers’ motherboards instead of the 2 x 1Gb NICs that are standard on most of the competition’s blades.  FlexNICs allow the user to carve up a 10Gb NIC into 4 virtual NICs when using the Flex-10 Modules inside the chassis.  (For a detailed description of Flex-10 technology, check out this HP video.)  The idea behind Flex-10 is that you have 10Gb connectivity that allows you to do more with fewer NICs. 

SO – what’s next?  Rumour has it that the “G7” servers, expected to be announced on March 16, will have an integrated CNA, or Converged Network Adapter.  With a CNA on the motherboard, both the Ethernet and the Fibre Channel traffic will have a single integrated device to travel over.  This is a VERY cool idea because it could lead to a blade server that eliminates the additional daughter card or mezzanine expansion slots, therefore freeing up valuable real estate for newer Intel CPU architectures.

Rumour #2: Next generation Flex-10 Modules will separate Fibre and Network traffic.

Today, HP’s Flex-10 ONLY handles Ethernet traffic.  There is no support for FCoE (Fibre Channel over Ethernet), so if you have a Fibre Channel network, you’ll also have to add a Fibre Channel switch to your BladeSystem chassis design. If HP does put a CNA onto their next-generation blade servers to carry Fibre Channel and Ethernet traffic, wouldn’t it make sense that there would need to be a module in the BladeSystem chassis that allows the storage and Ethernet traffic to exit? 

I’m hearing that a new version of the Flex-10 Module is coming, very soon, that will allow both the Ethernet AND the Fibre Channel traffic to exit out of the switch. (The image to the right shows what it could look like.)  The switch would allow 4 of the uplink ports to go to the Ethernet fabric, while the other 4 ports of the 8-port next-generation Flex-10 switch could either be dedicated to a Fibre Channel fabric OR used as 4 additional ports to the Ethernet fabric. 

If this rumour is accurate, it could shake things up in the blade server world.  Cisco UCS uses 10Gb Data Center Ethernet (Ethernet plus FCoE); IBM BladeCenter has the ability to do a 10Gb plus Fibre Channel switch fabric (like HP), or it can use 10Gb Enhanced Ethernet plus FCoE (like Cisco); however, no one currently has a device to split the Ethernet and Fibre Channel traffic at the blade chassis.  If this rumour is true, we should see it announced around the same time as the G7 blade servers (March 16).

That’s all for now.  As I come across more rumours, or information about new announcements, I’ll let you know.

Introducing the IBM HS22v Blade Server

IBM officially announced today a new addition to their blade server line – the HS22v.  Modeled after the HS22 blade server, the HS22v is touted by IBM as a “high density, high performance blade optimized for virtualization.”  So what makes it so great for virtualization?  Let’s take a look.

Memory
One of the big differences between the HS22v and the HS22 is more memory slots.  The HS22v comes with 18 x very low profile (VLP) DDR3 memory DIMM slots for a maximum of 144GB of RAM.  This is a key attribute for a server running virtualization, since everyone knows that VMs love memory.  It is important to note, though, that the memory will only run at 800MHz when all 18 slots are used.  In comparison, with only 6 memory DIMMs installed (3 per processor) the memory runs at 1333MHz, and with 12 DIMMs installed (6 per processor) it runs at 1066MHz.  As a final note on the memory, this server will be able to use both 1.5v and 1.35v memory.  The 1.35v memory is newer and will be introduced as the Intel Westmere EP processor becomes available.  The big deal about this is that lower voltage memory = lower overall power requirements.
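If it helps to see that DIMM-population trade-off at a glance, here is a tiny helper based on the speeds quoted above. It is my own illustrative sketch, not an IBM tool.

```python
# Illustrative helper (not an IBM tool): memory speed vs. DIMMs per processor
# on the HS22v as described above -- 3 DIMMs/CPU run at 1333MHz,
# 6 DIMMs/CPU at 1066MHz, and a full 9 DIMMs/CPU (18 total) at 800MHz.

def hs22v_memory_speed(dimms_per_cpu):
    if dimms_per_cpu <= 3:
        return 1333
    if dimms_per_cpu <= 6:
        return 1066
    return 800

for total_dimms in (6, 12, 18):
    per_cpu = total_dimms // 2          # two processors
    print(total_dimms, "DIMMs ->", hs22v_memory_speed(per_cpu), "MHz")
```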

Drives
The second big difference is that the HS22v does not use hot-swap drives like the HS22 does.  Instead, it uses 2 x solid state drives (SSDs) for local storage. These drives have hardware RAID 0/1 capabilities standard.  Although the picture to the right shows a 64GB SSD, my understanding is that only 50GB drives will be available when the server becomes readily available on March 19, with larger sizes (64GB and 128GB) coming in the near future.  Another thing to note is that the image shows a single SSD; the 2nd drive is located directly beneath it.  As mentioned above, these drives can be set up in a RAID 0 or 1 as needed.

So – why did IBM go back to using internal drives?  For a few reasons:

Reason #1: in order to get the space to add the extra memory slots, a change had to be made in the design.  IBM decided that solid state drives were the best fit.

Reason #2: the SSD design allows the server to run with lower power.  It’s well known that SSD drives run at a much lower power draw than physical spinning disks, so using SSD’s will help the HS22v be a more power efficient blade server than the HS22.

Reason #3: a common trend for virtualization hosts, especially VMware ESXi, is to run on integrated USB devices.  By using an integrated USB key for your virtualization software, you can eliminate the need for spinning disks, or even SSDs, therefore reducing the overall cost of the server.

Processors
So here’s the sticky area.  IBM will be releasing the HS22v with the Intel Xeon 5500 processor first.  Later in March, as the Intel Westmere EP (Intel Xeon 5600) is announced, IBM will add models that come with it, so there will be both Xeon 5500 and Xeon 5600 offerings.  Why is this?  I think for a couple of reasons:

a) the Xeon 5500 and the Xeon 5600 use the same chipset (motherboard), so it will be easy for IBM to make one server board and plop in either the Nehalem EP or the Westmere EP

b) simple – IBM wants to get this product into the marketplace sooner rather than later.

Questions

1) Will it fit into the BladeCenter E?
YES – however there may be certain limitations, so I’d recommend you reference the IBM BladeCenter Interoperability Guide for details.

2) Is it certified to run VMware ESX 4?
YES

3) Why didn’t IBM call it HS22XM?
According to IBM, the “XM” name is feature focused while “V” is workload focused – a marketing strategy we’ll probably see more of from IBM in the future.

That’s it for now.  If there are any questions you have about the HS22v, let me know in the comments and I’ll try to get some answers.

For more on the IBM HS22v, check out IBM’s web site here.

Check back with me in a few weeks when I’m able to give some more info on what’s coming from IBM!

Cisco Takes Top 8 Core VMware VMmark Server Position

Cisco is getting some (more) recognition for their UCS blade server product, as they recently achieved the top position for “8 Core Server” on VMware’s VMmark benchmark tool.  VMmark is the industry’s first (and only credible) virtualization benchmark for x86-based computers.  According to the VMmark website, the Cisco UCS B200 blade server reached a score of 25.06 @ 17 tiles.  A “tile” is simply a collection of virtual machines (VMs) executing a set of diverse workloads designed to represent a natural work environment.   The total number of tiles that a server can handle provides a detailed measurement of that server’s consolidation capacity.

Cisco’s Winning Configuration
So – how did Cisco reach the top server spot?  Here’s the configuration:

server config:

  • 2 x Intel Xeon X5570 Processors
  • 96GB of RAM (16 x 8GB)
  • 1 x Converged Network Adapter (Cisco UCS M71KR-Q)

storage config:

  • EMC CX4-240
  • Cisco MDS 9130
  • 1154.27GB Used Disk Space
  • 1024MB Array Cache
  • 41 disks used on 4 enclosures/shelves (1 with 14 disks, 3 with 9 disks)
  • 37 LUNs used:
     * 17 at 38GB (file server + mail server) over 20 x 73GB SSDs
     * 17 at 15GB (database) + 2 LUNs at 400GB (misc) over 16 x 450GB 15k disks
     * 1 LUN at 20GB (boot) over 5 x 300GB 15k disks
  • RAID 0 for VMs, RAID 5 for VMware ESX 4.0 O/S

While first place on the VMmark page (8 cores) shows Fujitsu’s RX300, it’s important to note that its score was reached using Intel’s W5590 processor – a processor designed for “workstations”, not servers.  Second place among server processors currently goes to HP’s BL490 with 24.54 (@ 17 tiles).

Thanks to Omar Sultan (@omarsultan) for Tweeting about this and to Harris Sussman for blogging about it.

More IBM BladeCenter Rumours…

Okay, I can’t hold back any longer – I have more rumours. The next 45 days are going to be EXTREMELY busy, with Intel announcing their Westmere EP processor, the successor to the Nehalem EP CPU, as well as the Nehalem EX CPU, the successor to the Xeon 7400 CPU.  I’ll post more details on these processors as information becomes available, but for now I want to talk about some additional rumours that I’m hearing from IBM.  As I’ve mentioned in my previous rumour post: this is purely speculation; I have no definitive information from IBM, so this may be false info.  That being said, here we go:

Rumour #1:  As I previously posted, IBM has announced they will have a blade server based on their eX5 architecture – the next generation of the eX4 architecture found in the IBM System x3850 M2 and x3950 M2.  I’ve posted what I think this new blade server will look like (you can see it here), and I had previously speculated that the server would be called the HS43 – however, it appears that IBM may be changing their nomenclature for this class of blade to “HX5“.  I can see this happening – it’s a blend of “HS” and “eX5”, and it is a new class of blade server, so it makes sense.   I like the HX5 blade server name, although if you Google HX5 right now, you’ll get a lot of details about the Sony CyberShot DSC-HX5 digital camera.  (Maybe IBM should reconsider using HS43 instead of HX5 to avoid any lawsuits.)  It also makes it very clear that it is part of the eX5 architecture, so we’ll see if it gets announced that way.

Speaking of announcements…

Rumour #2:  While it is clear that Intel is waiting until March (31, I think) to announce the Nehalem EX and Westmere EP processors, I’m hearing rumours that IBM will be announcing their product offerings around the new Intel processors on March 2, 2010 in Toronto.  It will be interesting to see if this happens so soon (4 weeks away) but when it does, I’ll be sure to give you all the details!

That’s all I can talk about for now as “rumours”.  I have more information on another IBM announcement that I cannot talk about yet, but come back to my site on Feb. 9 and you’ll find out what that new announcement is.

The IBM BladeCenter S Is Going to the Super Bowl

Unless you’ve been hiding in a cave in Eastern Europe, you know by now that the New Orleans Saints are headed to the Super Bowl.  According to IBM, this is all due to the Saints having an IBM BladeCenter S running their business.  Okay, well, I’m sure there are other reasons, like having stellar talent, but let’s take a look at what IBM did for the Saints.

Other than the obvious threat of having to relocate or evacuate due to the weather, the Saints’ constant travel required them to search for a portable IT solution that would make it easier to quickly set up operations in another city.  The Saints were a long-time IBM customer, so they looked at the IBM BladeCenter S for this solution, and it worked great.  (I’m going to review the BladeCenter S below, so keep reading.)  The Saints consolidated 20 physical servers onto the BladeCenter S, virtualizing the environment with VMware.   Although the specific configuration of their blade environment is not disclosed, IBM reports that the Saints are using 1 terabyte of built-in storage, which enables the Saints to go on the road with the essential files (scouting reports, financial apps, player stats, etc) and tools the coaches and the staff need.  In fact, in the IBM Case Study video, the Assistant Director of IT for the New Orleans Saints, Jody Barbier, says, “The Blade Center S definitely can make the trip with us if we go to the Super Bowl.”  I guess we’ll see.  Be looking for the IBM Marketing engine to jump on this bandwagon in the next few days.

A Look at the IBM BladeCenter S
The IBM BladeCenter S is a 7U-high chassis (click the image on the left for a larger view of details) that holds 6 blade servers and up to 12 disk drives in Disk Storage Modules located on the left and right of the blade server bays.  The chassis has the option to either dedicate disk drives to an individual blade server, or to create a RAID volume and allow all of the servers to access the data.  As of this writing, the drive options for the Disk Storage Module are: 146GB, 300GB and 450GB SAS; 750GB and 1TB Near-Line SAS; and 750GB and 1TB SATA.  Depending on your application needs, you could have up to 12TB of local storage for 6 servers.  That’s pretty impressive, but wait, there’s more!  As I reported a few weeks ago, there is a substantial rumour of a forthcoming option to use 2.5″ drives.  This would enable up to 24 drives (12 per Disk Storage Module).  Although that would provide more spindles, the current capacities of 2.5″ drives aren’t quite up to the capacities of the 3.5″ drives.  Again, that’s just “rumour” – IBM has not disclosed whether that option is coming (but it is…)

IBM BladeCenter – Rear View
I love pictures – so I’ve attached an image of the BladeCenter S, as seen from the back.  A few key points to make note of:
110v Capable – yes, this can run on the average office power.  That’s the idea behind it.  If you have a small closet or an area near a desk, you can plug this bad boy in.   That being said, I always recommend calculating the power with IBM’s Power Configurator to make sure your design doesn’t exceed what 110v can handle.  Yes, this box will run on 220v as well.  Also, the power supplies are auto-sensing so there’s no worry about having to buy different power supplies based on your needs.

I/O Modules – if you are familiar with the IBM BladeCenter or IBM BladeCenter H I/O architecture, you’ll know that the design is redundant, with dual paths.  With the IBM BladeCenter S, this isn’t the case.   As you can see below, the onboard network adapters (NICs) are both mapped to the I/O module in Bay 1.  The expansion card is mapped to Bays 3 and 4, and the high-speed card slot (CFF-h) is mapped to I/O Bay 2.  Yes, this design makes I/O Bays 1 and 2 single points of failure (since both paths connect into the same module bay); however, when you look at the typical small office or branch office environment that the IBM BladeCenter S is designed for, you’ll realize that very rarely do they have redundant network fabrics – so this is no different.
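To summarize that wiring in one place, here is the adapter-to-bay mapping as I understand it, expressed as a simple lookup. This is an illustrative sketch only; confirm against IBM’s interoperability documentation before designing around it.

```python
# Summary of the BladeCenter S I/O mapping described above, expressed as a
# simple lookup (illustrative only; always confirm against IBM's
# interoperability documentation).

BLADECENTER_S_IO_MAP = {
    "onboard NIC 1":          "I/O bay 1",
    "onboard NIC 2":          "I/O bay 1",        # both onboard NICs land in bay 1
    "expansion card":         "I/O bays 3 and 4", # also the path to the Disk Storage Modules
    "high-speed card (CFFh)": "I/O bay 2",
}

for adapter, bay in BLADECENTER_S_IO_MAP.items():
    print(f"{adapter:24s} -> {bay}")
```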

Another key point here is that I/O Bays 3 and 4 are connected to the Disk Storage Modules mentioned above.  In order for a blade server to access the external disks in the Disk Storage Module bays, the blade server must:

a) have a SAS Expansion or Connectivity card installed in the expansion card slot
b) have 1 or 2 SAS Connectivity or RAID modules attached in Bays 3 and 4

This means that there is currently no way to use the local drives (in the Disk Storage Modules) and also have access to an external Fibre Channel storage array.

BladeCenter S Office Enablement Kit
Finally – I wanted to show you the optional Office Enablement Kit.  This is an 11U enclosure that is based on IBM’s NetBay 11.  It has security doors and special acoustics and air filtration to suit office environments.  The Kit features:

  • an acoustical module (to lower the sound of the environment) – check out this YouTube video for details
  • a locking door
  • 4U of extra space (for other devices)
  • wheels

There is also an optional Air Contaminant Filter available that uses air filters to help keep the IBM BladeCenter S functional in dusty environments (e.g. shops or production floors).

If the BladeCenter S is going to be used in an environment without a rack (e.g. a broom closet) or in a mobile environment (e.g. going to the Super Bowl), the Office Enablement Kit is a necessary addition.

So, hopefully, you can now see the value that the New Orleans Saints saw in the IBM BladeCenter S for their flexible, mobile IT needs.  Good luck in the Super Bowl, Saints.  I know that IBM will be rooting for you.