Tag Archives: IBM BladeCenter E

Blade Chassis I/O Diagrams

Many people are confused about why so many I/O modules are needed in a given blade chassis.  The basic concept is simple (in most cases): for each port you need on a given blade server, you need a corresponding I/O module.  For example, if you need four NICs, you're going to need four Ethernet modules (in most cases).  In today's post, I thought I would keep it simple and publish the I/O diagrams for the Cisco, Dell, HP, and IBM chassis.  Of course, I am human and "have been known to make mistakes from time to time," so please feel free to correct me on any errors you see.  Enjoy.
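The 1:1 port-to-module rule above can be sketched in a few lines. This is a minimal illustration, not vendor-specific logic; the fabric names and counts are made-up examples, and it ignores redundant-path designs where a single module serves multiple ports.

```python
# Rule of thumb from the post: each port needed on a blade server
# maps to one corresponding I/O module bay in the chassis.
ports_needed = {"Ethernet": 4, "Fibre Channel": 2}  # illustrative counts

# One I/O module per port of each fabric type, per the 1:1 rule.
modules_needed = dict(ports_needed)

for fabric, count in modules_needed.items():
    print(f"{count} x {fabric} I/O module(s)")
```

So a blade needing 4 NICs and 2 FC HBA ports would (in most cases) call for 4 Ethernet modules and 2 Fibre Channel modules in the chassis.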

(Updated 8/3/2011 – fixed Dell M1000e Full Height I/O Diagram)


(UPDATED) How IBM Can Provide 1,800 Processing Cores on a Single Blade Server

UPDATED 11-16-2010

IBM recently announced a new addition to its BladeCenter family: the IBM BladeCenter GPU Expansion Blade.  This new offering allows a single HS22 to host up to 4 x NVIDIA Tesla M2070 or Tesla M2070Q GPUs, each with 448 processing cores.  Doing the math, that works out to as many as 4,928 processing cores in a single 9U IBM BladeCenter H chassis.  That means you could have 19,712 processing cores PER RACK.  With such astonishing numbers, let's take a deeper look at the IBM BladeCenter GPU Expansion Blade.
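The per-blade arithmetic behind the headline number can be checked quickly. This is a hedged sketch: the GPU counts and core counts come from the post, while the assumption that the ~1,800 figure adds the HS22's own CPU cores (e.g. two quad-core Xeons) is mine.

```python
# Core count per GPU-expanded blade, per the figures in the post.
gpus_per_blade = 4
cores_per_gpu = 448        # NVIDIA Tesla M2070 / M2070Q
host_cpu_cores = 8         # assumption: 2 x quad-core Xeon in the HS22

gpu_cores = gpus_per_blade * cores_per_gpu   # 1,792 GPU cores per blade
total_cores = gpu_cores + host_cpu_cores     # ~1,800 cores per blade server

print(gpu_cores, total_cores)                # 1792 1800
```

The chassis (4,928) and rack (19,712) totals then follow from how many GPU-expanded blades fit in a 9U BladeCenter H and how many chassis fit in a rack.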
