Category Archives: IBM

More HP and IBM Blade Rumours

I wanted to post a few more rumours before I head out to HP in Houston for “HP Blades and Infrastructure Software Tech Day 2010” so it doesn’t appear that I got the info from HP. NOTE: this is purely speculation; I have no definitive information from HP, so this may be false info.

First off – the HP Rumour:
I’ve caught wind of a secret that may be truth, may be fiction, but I hope to find out for sure from the HP blade team in Houston.  The rumour is that HP’s development team has a Cisco Nexus blade switch module for the HP BladeSystem in its lab and is currently testing it out.

Now, this seems far-fetched, especially with the news of Cisco severing partner ties with HP.  However, it seems that news tidbit was only about products sold under the HP label but made by Cisco (OEM).  HP will continue to sell Cisco Catalyst switches for the HP BladeSystem and even Cisco-branded Nexus switches with HP part numbers (see this HP site for details.)  I have some doubt about this rumour of a Cisco Nexus switch that would go inside the HP BladeSystem, simply because I am 99% sure that HP is announcing a Flex10 type of BladeSystem switch that will allow converged traffic to be split out, with the Ethernet traffic going to the Ethernet fabric and the Fibre traffic going to the Fibre fabric (check out this rumour blog I posted a few days ago for details.)  Guess only time will tell.

The IBM Rumour:
A few days ago I posted a rumour blog discussing HP’s next generation of blades adding Converged Network Adapters (CNAs) to the motherboard (in lieu of the 1Gb or Flex10 NICs).  Well, now I’ve uncovered a rumour that IBM is planning to follow later this year with blades that will also have CNAs on the motherboard.  This is huge!  Let me explain why.

The design of IBM’s BladeCenter E and BladeCenter H has the 1Gb NICs onboard each blade server hard-wired to I/O Bays 1 and 2 – meaning only Ethernet modules can be used in these bays (see the image to the left for details.)  However, I/O Bays 1 and 2 are for “standard form factor I/O modules,” while I/O Bays 7–10 are for “high speed form factor I/O modules.”  This means that I/O Bays 1 and 2 cannot handle “high speed” traffic, i.e. converged traffic.

This means that IF IBM comes out with a blade server that has a CNA on the motherboard, either:

a) the blade’s CNA will have to route to I/O Bays 7-10
OR
b) IBM’s going to have to come out with a new BladeCenter chassis that allows the high speed converged traffic from the CNAs to connect to a high speed switch module in Bays 1 and 2.
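To make the constraint concrete, here is a rough sketch of the logic – purely illustrative, with the bay numbers and the standard vs. high-speed split taken from the description above; the function itself is just a hypothetical model, not any IBM tooling:

```python
# Illustrative model of the BladeCenter H bay constraint described above.
# Bay numbers come from the post; the rest is a sketch for illustration only.
STANDARD_BAYS = {1, 2}           # hard-wired to the onboard 1Gb NICs today
HIGH_SPEED_BAYS = {7, 8, 9, 10}  # required for high-speed (converged) modules

def can_carry_converged_traffic(bay: int) -> bool:
    """A CNA's converged traffic needs a high-speed form factor bay."""
    return bay in HIGH_SPEED_BAYS

print(can_carry_converged_traffic(1))  # False -> hence option (a): route to bays 7-10,
print(can_carry_converged_traffic(7))  # True      or option (b): a new chassis design
```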

So let’s think about this.  If IBM (and HP for that matter) does put CNAs on the motherboard, is there still a need for additional mezzanine/daughter cards?  If not, the blade servers could have more real estate for memory, or more processors.  And if there are no extra daughter cards, there’s no need for additional I/O module bays – which means the blade chassis could be smaller and use less power, something every customer would like to have.

I can really see the blade market moving toward this type of design (not surprisingly, very similar to Cisco’s UCS design) – one where only a pair of redundant “modules” is needed to split converged traffic out to its respective fabrics.  Maybe it’s all a pipe dream, but when it comes true in 18 months, you can say you heard it here first.

Thanks for reading.  Let me know your thoughts – leave your comments below.

Blade Networks Announces Industry’s First and Only Fully Integrated FCoE Solution Inside Blade Chassis

BLADE Network Technologies, Inc. (BLADE) today “officially” announced the delivery of the industry’s first and only fully integrated Fibre Channel over Ethernet (FCoE) solution inside a blade chassis.  This integration significantly reduces power, cost, space and complexity compared to external FCoE implementations.

You may recall that I blogged about this the other day (click here to read); however, I left off one bit of information.  The BNT (BLADE Network Technologies) Virtual Fabric 10 Gb Switch Module does not require the QLogic Virtual Fabric Extension Module to function.  It will work with an existing top-of-rack (TOR) convergence switch from Brocade or Cisco, acting as a 10Gb switch module and feeding the converged 10Gb link up to the TOR switch.  Since it is a switch module, you can connect as few as 1 uplink to your TOR switch, thereby saving connectivity costs compared to a pass-thru option (click here for details on the pass-thru option.)

Yes – this is the same architectural design the Cisco Nexus 4001i provides; however, there are a couple of differences:

BNT Virtual Fabric Switch Module (IBM part #46C7191) – 10 x 10Gb Uplinks, $11,199 list (U.S.)
Cisco Nexus 4001i Switch (IBM part #46M6071) – 6 x 10Gb Uplinks, $12,999 list (U.S.)

While BNT provides 4 extra 10Gb uplinks, I can’t really picture anyone using all 10 ports.  It does have a lower list price, but I encourage you to check your actual price with your IBM partner, as the actual pricing may be different.  Regardless of whether you choose BNT or Cisco to connect into your TOR switch, don’t forget the transceivers!  They add much more $$ to the overall cost, and without them you are hosed.
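Purely as a back-of-the-envelope comparison of the two list prices above (transceivers deliberately excluded, since they add cost to both options), here is the per-uplink math:

```python
# Rough cost per 10Gb uplink using the U.S. list prices quoted above.
# Transceiver costs are deliberately excluded -- they add real money to both options.
switches = {
    "BNT Virtual Fabric Switch Module (46C7191)": (11199, 10),  # ($ list, uplinks)
    "Cisco Nexus 4001i Switch (46M6071)":         (12999, 6),
}
for name, (list_price, uplinks) in switches.items():
    print(f"{name}: ${list_price:,} / {uplinks} uplinks = ${list_price / uplinks:,.0f} per uplink")
```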

About the BNT Virtual Fabric 10Gb Switch Module
The BNT Virtual Fabric 10Gb Switch Module includes the following features and functions:

  • Form-factor
    • Single-wide high-speed switch module (fits in IBM BladeCenter H bays #7 and 9.) 
  • Internal ports
    • 14 internal auto-negotiating ports: 1 Gb or 10 Gb to the server blades
    • Two internal full-duplex 100 Mbps ports connected to the management module
  • External ports
    • Up to ten 10 Gb SFP+ ports (also designed to support 1 Gb SFP if required, with the flexibility of mixing 1 Gb/10 Gb)
    • One 10/100/1000 Mb copper RJ-45 port used for management or data
    • An RS-232 serial port on a mini-USB connector that provides an additional means to install software and configure the switch module
  • Scalability and performance
    • Autosensing 1 Gb/10 Gb internal and external Ethernet ports for bandwidth optimization

To read the extensive list of details about this switch, please visit the IBM Redbook located here.

IBM’s New Approach to Ethernet/Fibre Traffic

Okay, I’ll be the first to admit when I’m wrong – or when I provide wrong information.

A few days ago, I commented that no one has yet offered the ability to split out Ethernet and Fibre traffic at the chassis level (as opposed to using a top of rack switch.)  I quickly found out that I was wrong – IBM now has the ability to separate the Ethernet fabric and the Fibre fabric at the BladeCenter H,  so if you are interested grab a cup of coffee and enjoy this read.

First a bit of background.  The traditional method of providing Ethernet and Fibre I/O in a blade infrastructure was to integrate 6 Ethernet switches and 2 Fibre switches into the blade chassis, which provides 6 NICs and 2 Fibre HBAs per blade server.  This is a costly method and it limits the scalability of a blade server.

A newer method that is becoming more popular is to converge the I/O traffic using a single converged network adapter (CNA) to carry the Ethernet and the Fibre traffic over a single 10Gb connection to a top-of-rack (TOR) switch, which then sends the Ethernet traffic to the Ethernet fabric and the Fibre traffic to the Fibre fabric.  This reduces the number of physical cables coming out of the blade chassis, offers higher bandwidth and reduces the overall switching costs.  Up to now, IBM has offered two different methods to enable converged traffic:

Method 1: install a pair of 10Gb Ethernet Pass-Thru modules in the blade chassis, add a CNA to each blade server, then connect the pass-thru modules to a top-of-rack convergence switch from Brocade or Cisco.  This is the least expensive method; however, since pass-thru modules are being used, a connection is required on the TOR convergence switch for every blade server being connected.  This means a 14-blade infrastructure would eat up 14 ports on the convergence switch, potentially leaving the switch with very few available ports.

Method 2: install a pair of IBM Cisco Nexus 4001i switches, add a CNA to each server, then connect the Nexus 4001i to a Cisco Nexus 5000 top-of-rack switch.  This method enables you to use as few as 1 uplink connection from the blade chassis to the Nexus 5000 top-of-rack switch; however, it is more costly and you have to invest in another Cisco switch.
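A quick illustration of the TOR-port math behind those two methods for a fully loaded 14-blade chassis (the single uplink for Method 2 is the best case from the post; real designs would add uplinks for bandwidth and redundancy):

```python
# TOR convergence-switch ports consumed per fully loaded 14-blade chassis,
# comparing the two methods above.
blades = 14

method1_tor_ports = blades  # pass-thru: one TOR port per blade server
method2_tor_ports = 1       # Nexus 4001i switch module: as few as one uplink

print(f"Method 1 (pass-thru):   {method1_tor_ports} TOR ports")
print(f"Method 2 (Nexus 4001i): as few as {method2_tor_ports} TOR port")
```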

The New Approach
A few weeks ago, IBM announced the “QLogic Virtual Fabric Extension Module” – a device that fits into the IBM BladeCenter H, takes the Fibre traffic from the CNA on a blade server and sends it to the Fibre fabric.  This is HUGE!  While having a top-of-rack convergence switch is helpful, you can now remove the need for one, because the I/O traffic is being split out into its respective fabrics at the BladeCenter H.

What’s Needed
I’ll make it simple – here’s a list of components that are needed to make this method work:

  • 2 x BNT Virtual Fabric 10 Gb Switch Module – part # 46C7191
  • 2 x QLogic Virtual Fabric Extension Module – part # 46M6172
  • a QLogic 2-port 10 Gb Converged Network Adapter per blade server – part # 42C1830
  • an IBM 8 Gb SFP+ SW Optical Transceiver for each uplink needed to your Fibre fabric – part # 44X1964 (note: the QLogic Virtual Fabric Extension Module doesn’t come with any, so you’ll need the appropriate quantity for each module.)

The CNA cards connect to the BNT Virtual Fabric 10 Gb Switch Module in Bays 7 and 9.  These switch modules have an internal connector to the QLogic Virtual Fabric Extension Module, located in Bays 3 and 5.  The I/O traffic moves from the CNA cards to the BNT switch, which separates the Ethernet traffic and sends it out to the Ethernet fabric while the Fibre traffic routes internally to the QLogic Virtual Fabric Extension Modules.  From the Extension Modules, the traffic flows into the Fibre Fabric.
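Here is that traffic flow expressed as a small data structure – purely illustrative, with component names and bay placements taken from the description above (the structure itself is mine, not IBM’s):

```python
# Illustrative trace of the converged traffic path inside the BladeCenter H,
# using the component placement described above.
traffic_path = [
    ("QLogic 2-port 10Gb CNA (42C1830)",          "on each blade server"),
    ("BNT Virtual Fabric 10Gb Switch (46C7191)",  "I/O Bays 7 and 9 - splits Ethernet from Fibre"),
    ("Ethernet traffic",                          "out the BNT uplinks to the Ethernet fabric"),
    ("Fibre traffic",                             "internal connection to the QLogic Extension Modules"),
    ("QLogic Virtual Fabric Extension (46M6172)", "I/O Bays 3 and 5 - out 8Gb FC to the Fibre fabric"),
]
for component, role in traffic_path:
    print(f"{component:45s} -> {role}")
```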

It’s important to understand the switches, and how they are connected, too, as this is a new approach for IBM.  Previously, the Bridge Bays (I/O Bays 5 and 6) hadn’t really been used, and IBM had never allowed a card in the CFF-h slot to connect to the switch bay in I/O Bay 3.

 

There are a few other acceptable designs that will still give you the split fabric out of the chassis; however, they are not “redundant,” so I did not think they were relevant.  If you want to read the full IBM Redbook on this offering, head over to IBM’s site.

A few things to note with the maximum redundancy design I mentioned above:

1) The CIOv slots on the HS22 and HS22v cannot be used.  This is because I/O Bay 3 is being used for the Extension Module, and since the CIOv slot is hard-wired to I/O Bays 3 and 4, that will just cause problems – so don’t do it.

2) The BladeCenter E chassis is not supported for this configuration.  It doesn’t have any “high speed bays” and quite frankly wasn’t designed to handle high I/O throughput like the BladeCenter H.

3) Only the parts listed above are supported.  Don’t try to slip in a Cisco Fibre Switch Module or use the Emulex Virtual Fabric Adapter on the blade server – it won’t work.  This is a QLogic design and they don’t want anyone else’s toys in their backyard.

That’s it.  Let me know what you think by leaving a comment below.  Thanks for stopping by!

Introducing the IBM HS22v Blade Server

IBM officially announced today a new addition to their blade server line – the HS22v.  Modeled after the HS22 blade server, the HS22v is touted by IBM as a “high density, high performance blade optimized for virtualization.”  So what makes it so great for virtualization?  Let’s take a look.

Memory
One of the big differences between the HS22v and the HS22 is more memory slots.  The HS22v comes with 18 very-low-profile (VLP) DDR3 memory DIMM slots, for a maximum of 144GB of RAM.  This is a key attribute for a server running virtualization, since everyone knows that VMs love memory.  It is important to note, though, that the memory will only run at 800MHz when all 18 slots are used.  In comparison, with only 6 DIMMs installed (3 per processor) the memory runs at 1333MHz, and with 12 DIMMs installed (6 per processor) it runs at 1066MHz.  As a final note on the memory, this server will be able to use both 1.5v and 1.35v memory.  The 1.35v parts are newer memory being introduced as the Intel Westmere EP processor becomes available.  The big deal about this is that lower-voltage memory = lower overall power requirements.
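As a rough rule of thumb, the population-vs-speed trade-off looks like this – a sketch of the behaviour described above for this two-socket server, not an IBM configurator:

```python
# Memory speed vs. DIMM population on the HS22v (two sockets), per the figures above.
# Only a sketch of the rule described in the post, not an official sizing tool.
def hs22v_memory_speed_mhz(dimms_installed: int) -> int:
    dimms_per_cpu = dimms_installed / 2   # two processors, DIMMs split evenly
    if dimms_per_cpu <= 3:
        return 1333                       # 1 DIMM per memory channel
    elif dimms_per_cpu <= 6:
        return 1066                       # 2 DIMMs per channel
    else:
        return 800                        # 3 DIMMs per channel (all 18 slots used)

for dimms in (6, 12, 18):
    print(f"{dimms} DIMMs -> {hs22v_memory_speed_mhz(dimms)} MHz")
```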

Drives
The second big difference is that the HS22v does not use hot-swap drives like the HS22 does.  Instead, it uses 2 x solid state drives (SSDs) for local storage.  These drives have hardware RAID 0/1 capability standard.  Although the picture to the right shows a 64GB SSD, my understanding is that only 50GB drives will be available when the server becomes generally available on March 19, with larger sizes (64GB and 128GB) becoming available in the near future.  Another thing to note is that the image shows a single SSD; the 2nd drive is located directly beneath it.  As mentioned above, these drives can be set up in a RAID 0 or 1 as needed.

So – why did IBM go back to using internal drives?  For a few reasons:

Reason #1: in order to get the space to add the extra memory slots, a change had to be made in the design.  IBM decided that solid state drives were the best fit.

Reason #2: the SSD design allows the server to run with lower power.  It’s well known that SSDs run at a much lower power draw than physical spinning disks, so using SSDs will help the HS22v be a more power-efficient blade server than the HS22.

Reason #3: a common trend for virtualization hosts, especially VMware ESXi, is to run on integrated USB devices.  By using an integrated USB key for your virtualization software, you can eliminate the need for spinning disks, or even SSDs, thereby reducing the overall cost of the server.

Processors
So here’s the sticky area.  IBM will be releasing the HS22v with the Intel Xeon 5500 processor first.  Later in March, as the Intel Westmere EP (Intel Xeon 5600) is announced, IBM will have models that come with it.  IBM will have both Xeon 5500 and Xeon 5600 processor offerings.  Why is this?  I think for a couple of reasons:

a) the Xeon 5500 and the Xeon 5600 will use the same chipset (motherboard) so it will be easy for IBM to make one server board, and plop in either the Nehalem EP or the Westmere EP

b) simple – IBM wants to get this product into the marketplace sooner than later.

Questions

1) Will it fit into the BladeCenter E?
YES – however there may be certain limitations, so I’d recommend you reference the IBM BladeCenter Interoperability Guide for details.

2) Is it certified to run VMware ESX 4?
YES

3) Why didn’t IBM call it HS22XM?
According to IBM, the “XM” name is feature focused while “V” is workload focused – a marketing strategy we’ll probably see more of from IBM in the future.

That’s it for now.  If there are any questions you have about the HS22v, let me know in the comments and I’ll try to get some answers.

For more on the IBM HS22v, check out IBM’s web site here.

Check back with me in a few weeks when I’m able to give some more info on what’s coming from IBM!

IBM’s 4 Processor Intel Nehalem EX Blade Server

2-2-10 CORRECTION Made Below

Okay, I’ve seen the details on IBM’s next generation 4 processor blade server that is based on the Intel Nehalem EX CPU, and I can tell you that IBM’s about to change the way people look at workloads for blade servers.  Out of respect for IBM (and at the risk of getting in trouble) I’m not going to disclose any confidential details, but I can tell you a few things:

1) My previous post about what the server will look like is not far off.  In fact it was VERY close.  However, IBM upped the ante and made a few additions that I didn’t expect that will make it appealing for customers who need the ability to run large workloads.

2) the scheduled announce date for this new 4 processor IBM blade server based on the Nehalem EX (whose name I guessed correctly) will be before April 1, 2010 but after March 15, 2010.  Ship date is currently scheduled sometime after May but before July.

As a final teaser, there’s another IBM blade server announcement scheduled for tomorrow.  Once it’s officially announced on Feb 9th (originally Feb 3rd – see the correction note above), I’ll let you know and give you some details.

More IBM BladeCenter Rumours…

Okay, I can’t hold back any longer – I have more rumours.  The next 45 days are going to be EXTREMELY busy, with Intel announcing the Westmere EP processor, the successor to the Nehalem EP CPU, and the Nehalem EX CPU, the successor to the Xeon 7400 CPU.  I’ll post more details on these processors as information becomes available, but for now I want to pass along some additional rumours that I’m hearing about IBM.  As I’ve mentioned in my previous rumour post: this is purely speculation, I have no definitive information from IBM, so this may be false info.  That being said, here we go:

Rumour #1:  As I previously posted, IBM has announced they will have a blade server based on their eX5 architecture – the next generation of the eX4 architecture found in the IBM System x3850 M2 and x3950 M2.  I’ve posted what I think this new blade server will look like (you can see it here), and I had previously speculated that the server would be called HS43 – however, it appears that IBM may be changing their nomenclature for this class of blade to “HX5”.  I can see this happening – it’s a blend of “HS” and “eX5”.  It is a new class of blade server, so it makes sense.  I like the HX5 blade server name, although if you Google HX5 right now, you’ll get a lot of details about the Sony CyberShot DSC-HX5 digital camera.  (Maybe IBM should reconsider using HS43 instead of HX5 to avoid any lawsuits.)  It also makes it very clear that it is part of their eX5 architecture, so we’ll see if it gets announced that way.

Speaking of announcements…

Rumour #2:  While it is clear that Intel is waiting until March (31, I think) to announce the Nehalem EX and Westmere EP processors, I’m hearing rumours that IBM will be announcing their product offerings around the new Intel processors on March 2, 2010 in Toronto.  It will be interesting to see if this happens so soon (4 weeks away) but when it does, I’ll be sure to give you all the details!

That’s all I can talk about for now as “rumours”.  I have more information on another IBM announcement that I can not talk about, but come back to my site on Feb. 9 and you’ll find out what that new announcement is.

The IBM BladeCenter S Is Going to the Super Bowl

Unless you’ve been hiding in a cave in Eastern Europe, you know by now that the New Orleans Saints are headed to the Super Bowl.  According to IBM, this is all due to the Saints having an IBM BladeCenter S running their business.  Okay, well, I’m sure there are other reasons, like having stellar talent, but let’s take a look at what IBM did for the Saints.

Beyond the obvious threat of having to relocate or evacuate due to weather, the Saints’ constant travel required them to search for a portable IT solution that would make it easy to quickly set up operations in another city.  The Saints were a long-time IBM customer, so they looked at the IBM BladeCenter S for this solution, and it worked great.  (I’m going to review the BladeCenter S below, so keep reading.)  The Saints consolidated 20 physical servers onto the BladeCenter S, virtualizing the environment with VMware.  Although the specific configuration of their blade environment is not disclosed, IBM reports that the Saints are using 1 terabyte of built-in storage, which enables the Saints to go on the road with the essential files (scouting reports, financial apps, player stats, etc.) and tools the coaches and the staff need.  In fact, in the IBM case study video, the Assistant Director of IT for the New Orleans Saints, Jody Barbier, says, “The BladeCenter S definitely can make the trip with us if we go to the Super Bowl.”  I guess we’ll see.  Be looking for the IBM marketing engine to jump on this bandwagon in the next few days.

A Look at the IBM BladeCenter S
The IBM BladeCenter S is a 7U-high chassis that holds 6 blade servers and up to 12 disk drives in Disk Storage Modules located to the left and right of the blade server bays.  The chassis has the option either to dedicate disk drives to an individual blade server, or to create a RAID volume and allow all of the servers to access the data.  As of this writing, the drive options for the Disk Storage Module are: 146GB, 300GB and 450GB SAS; 750GB and 1TB Near-Line SAS; and 750GB and 1TB SATA.  Depending on your application needs, you could have up to 12TB of local storage for 6 servers.  That’s pretty impressive, but wait, there’s more!  As I reported a few weeks ago, there is a substantial rumour of a forthcoming option to use 2.5″ drives.  This would allow up to 24 drives (12 per Disk Storage Module.)  Although that would provide more spindles, the current capacities of 2.5″ drives don’t quite match those of 3.5″ drives.  Again, that’s just “rumour” – IBM has not disclosed whether that option is coming (but it is…)
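The 12TB figure is simple arithmetic on the largest current drive option; here is a quick sanity check, along with what the rumoured 2.5″ option would need per drive just to match it (pure arithmetic, not a claim about what capacities IBM would actually offer):

```python
# Quick arithmetic behind the BladeCenter S local-storage maximums mentioned above.
drives_3_5 = 12                        # 6 per Disk Storage Module, 3.5" drives
max_capacity_tb = drives_3_5 * 1       # using the 1TB drive option -> 12 TB

rumoured_drives_2_5 = 24               # rumoured 2.5" option: 12 per Disk Storage Module
# Per-drive size the 2.5" drives would need just to match 12 TB:
per_drive_gb_to_match = max_capacity_tb * 1000 / rumoured_drives_2_5

print(max_capacity_tb)        # 12 (TB)
print(per_drive_gb_to_match)  # 500.0 (GB per 2.5" drive)
```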

IBM BladeCenter S – Rear View
I love pictures – so I’ve attached an image of the BladeCenter S, as seen from the back.  A few key points to make note of:
110v Capable – yes, this can run on the average office power.  That’s the idea behind it.  If you have a small closet or an area near a desk, you can plug this bad boy in.   That being said, I always recommend calculating the power with IBM’s Power Configurator to make sure your design doesn’t exceed what 110v can handle.  Yes, this box will run on 220v as well.  Also, the power supplies are auto-sensing so there’s no worry about having to buy different power supplies based on your needs.

I/O Modules – if you are familiar with the IBM BladeCenter or IBM BladeCenter H I/O architecture, you’ll know that the design is redundant, with dual paths.  With the IBM BladeCenter S, this isn’t the case.  As you can see below, the onboard network adapters (NICs) are both mapped to the I/O module in Bay 1.  The expansion card is mapped to Bays 3 and 4, and the high-speed card slot (CFF-h) is mapped to I/O Bay 2.  Yes, this design makes I/O Bays 1 and 2 single points of failure (since both paths connect into the same module bay); however, when you look at the typical small office or branch office environment that the IBM BladeCenter S is designed for, you’ll realize that very rarely do they have redundant network fabrics – so this is no different.
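To make the single-path design easier to see, here is the port-to-bay mapping from the paragraph above expressed as a small table – just a sketch of the wiring described, nothing more:

```python
# BladeCenter S blade-port to I/O-bay mapping, per the description above.
# Note there is only one path per port type -- no dual fabrics as on the BladeCenter H.
io_mapping = {
    "onboard NIC 1":         "I/O Bay 1",
    "onboard NIC 2":         "I/O Bay 1",         # both NICs -> one bay: single point of failure
    "CFF-h high-speed slot": "I/O Bay 2",         # also a single point of failure
    "expansion card slot":   "I/O Bays 3 and 4",  # same bays used by the SAS modules for the disks
}
for port, bay in io_mapping.items():
    print(f"{port:22s} -> {bay}")
```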

Another key point here is that I/O Bays 3 and 4 are connected to the Disk Storage Modules mentioned above.  In order for a blade server to access the external disks in the Disk Storage Module bays, the blade server must:

a) have a SAS Expansion or Connectivity card installed in the expansion card slot
b) have 1 or 2 SAS Connectivity or RAID modules attached in Bays 3 and 4

This means that there is currently no way to use the local drives (in the Disk Storage Modules) and also have external access to a Fibre Channel storage array, since the blade’s expansion card slot and I/O Bays 3 and 4 are already consumed by the SAS connectivity.

BladeCenter S Office Enablement Kit
Finally, I wanted to show you the optional Office Enablement Kit.  This is an 11U enclosure that is based on IBM’s NetBay 11.  It has security doors and special acoustics and air filtration to suit office environments.  The kit features:
  • an acoustical module (to lower the sound of the environment) – check out this YouTube video for details
  • a locking door
  • 4U of extra space (for other devices)
  • wheels

There is also an optional Air Contaminant Filter that uses air filters to help keep the IBM BladeCenter S functional in dusty environments (e.g. shops or production floors).

If the BladeCenter S is going to be used in an environment without a rack (e.g. a broom closet) or in a mobile environment (e.g. going to the Super Bowl), the Office Enablement Kit is a necessary addition.

So, hopefully, you can now see the value that the New Orleans Saints saw in the IBM BladeCenter S for their flexible, mobile IT needs.  Good luck in the Super Bowl, Saints.  I know that IBM will be rooting for you.

Weta Digital, Production House for AVATAR, Donates IBM Blade Servers to Schools

Weta Digital, the digital production house behind the hit movie AVATAR, recently donated about 300 IBM HS20 blade servers to Whitireia Community Polytechnic in Porirua, which will use them to help teach students how to create 3-D animations.  The IBM HS20 blade servers were originally bought to produce special effects for The Lord of the Rings at a cost of more than $1 million (for more details, check out this November 2004 article from DigitalArtsOnline.co.uk.)  Weta Digital has since replaced them with more powerful HP BL2x220c G5 servers supplied by Hewlett-Packard, which were used for AVATAR.

According to the school, these older IBM blade servers will help it expand its graphics and information technology courses and turn out students with more experience in 3-D rendering.

Thanks to Stuff.co.nz for the information mentioned above.

UNVEILED: First Blade Server Based on Intel Nehalem EX

The first blade server with the upcoming Intel Nehalem EX processor has finally been unveiled.  While it is known that IBM will be releasing a 2 or 4 socket blade server with the Nehalem EX, no other vendor had revealed plans until now.  SGI recently announced they will be offering the Nehalem EX on their Altix® UV platform.

Touted as “The World’s Fastest Supercomputer,” the UV line features the fifth generation of the SGI NUMAlink interconnect, which offers a whopping 15 GB/sec transfer rate, as well as direct access to up to 16 TB of shared memory.  The system can be configured with up to 2,048 Nehalem-EX cores (via 256 processors, or 128 blades) in a single federation with a single global address space.
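The core count works out cleanly if you assume 8-core Nehalem-EX parts – a quick check, with that assumption flagged in the comments:

```python
# Sanity check on the Altix UV 1000 maximums quoted above.
sockets = 256
cores_per_socket = 8   # assumes the 8-core Nehalem-EX SKU (not spelled out in the announcement)
blades = 128

print(sockets * cores_per_socket)  # 2048 cores, matching the quoted maximum
print(sockets // blades)           # 2 sockets per blade
```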

According to the SGI website, the UV will come in two flavors:

SGI Altix UV 1000

Altix UV 1000  – designed for maximum scalability, this system ships as a fully integrated cabinet-level solution with up to 256 sockets (2,048 cores) and 16TB of shared memory in four racks.

Altix UV 100 (not pictured) – same design as the UV 1000, but designed for the mid-range market;  based on an industry-standard 19″ rackmount 3U form factor. Altix UV 100 scales to 96 sockets (768 cores) and 6TB of shared memory in two racks.

SGI has provided quite a bit of technical information about these servers in this whitepaper, including details about the Nehalem EX architecture that I haven’t even seen from Intel.  SGI has also published several customer testimonials, including one from the University of Tennessee – so check it out here.

Hopefully, this is just the first of many announcements to come around the Intel Nehalem EX processor.

IBM BladeCenter Rumours

I recently heard some rumours about IBM’s BladeCenter products that I thought I would share – but FIRST let me be clear: this is purely speculation, I have no definitive information from IBM, so this may be false info, but my source is pretty credible, so…

4 Socket Nehalem EX Blade
I posted my speculation a few weeks ago about IBM’s announcement that they WILL have a 4 socket blade based on the upcoming Intel Nehalem EX processor (https://bladesmadesimple.com/2009/09/ibm-announces-4-socket-intel-blade-server/) – and today I got a bit of an update on this server.

Rumour 1:  It appears IBM may call it the HS43 (not HS42 like I first thought.) I’m not sure why IBM would skip the “HS42” nomenclature, but I guess it doesn’t really matter.  This is rumoured to be released in March 2010.

Rumour 2:  It seems that I was right that the 4 socket offering will be a double-wide server; however, it appears IBM is working with Intel to provide a 2 socket Intel Nehalem EX blade as the foundation of the HS43.  This means that you could start with a 2 socket blade, then “snap on” a second to make it a 4 socket offering – but wait, there’s more…  It seems that IBM is going to enable these blade servers to grow to up to 8 sockets by snapping 4 x 2 socket servers together.  If my earlier speculations (https://bladesmadesimple.com/2009/09/ibm-announces-4-socket-intel-blade-server/) are accurate and each 2 socket blade module has 12 DIMMs, this means you could have an 8 socket, 64 core, 96 DIMM, 1.5TB RAM (using 16GB DIMMs) configuration all in a single BladeCenter chassis.  This, of course, would take up 4 blade server slots.  Now the obvious question around this bit of news is WHY would anyone do this?  The current BladeCenter H only holds 14 servers, so you would only be able to get 3 of these monster servers into a chassis.  Feel free to offer up some comments on what you think about this.
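For what it’s worth, here is how those rumoured maximums pencil out, taking the quoted totals at face value and assuming the top-bin 8-core Nehalem EX part (both assumptions, not confirmed specs):

```python
# Back-of-the-envelope check on the rumoured 8-socket configuration above.
sockets = 8
cores_per_socket = 8      # assumption: 8-core Nehalem EX part
dimm_slots = 96           # total as quoted above (works out to 12 DIMM slots per socket)
gb_per_dimm = 16

print(sockets * cores_per_socket)        # 64 cores
print(dimm_slots * gb_per_dimm / 1024)   # 1.5 (TB of RAM)
print(14 // 4)                           # only 3 of these 4-slot systems fit in a BladeCenter H
```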

Rumour 3: IBM’s BladeCenter S chassis currently uses local drives that are 3.5″.  The industry is obviously moving to smaller 2.5″ drives, so it’s only natural that the BladeCenter S drive cage will need to be updated to provide 2.5″ drives.  Rumour is that this is coming in April 2010 and it will offer up to 24 x 2.5″ SAS or SATA drives.  

Rumour 4:  What’s missing from the BladeCenter S right now that HP currently offers?  A tape drive.  Rumour has it that IBM will be adding a “TS Family” tape drive offering to the BladeCenter S in the upcoming months.  This makes total sense and is much needed.  Customers buying the BladeCenter S are typically smaller offices or branch offices, so a local backup device is a critical component of ensuring data protection.  I’m not sure if this will take up a blade slot (like HP’s model) or be a replacement for one of the 2 drive cages.  I would imagine it will be the latter, since the BladeCenter S architecture allows all servers to connect to the drive cages, but we’ll see.

That’s all I have.  I’ll continue to keep you updated as I hear rumours or news.