Announcing the IBM BladeCenter HX5 Blade Server (with detailed pics)

(UPDATED 11:29 AM EST 3/2/2010)
IBM announced today the BladeCenter® HX5 – their first 4-socket blade since the HS41 blade server. IBM calls the HX5 “a scalable, high-performance blade server with unprecedented compute and memory performance, and flexibility ideal for compute and memory-intensive enterprise workloads.”

The HX5 will have the ability to be coupled with a 2nd HX5 to scale to 4 CPU sockets, to grow beyond the base memory with the MAX5 memory expansion, and to offer hardware partitioning that splits a dual-node server into 2 x single-node servers and back again. I’ll review each of these features in more detail below, but first, let’s look at the basics of the HX5 blade server.

HX5 features:

  • Up to 2 x Intel Xeon 7500 CPUs per node
  • 16 DIMMs per node
  • 2 x Solid State Disk (SSD) slots per node
  • 1 x CIOv and 1 x CFFh daughter card expansion slot per node, providing up to 8 I/O ports per node
  • 1 x scale connector per node

CPU Scalability
In the fashion of the eX5 architecture, IBM is enabling the HX5 blade server to grow from 2 CPUs to 4 CPUs (and theoretically more) by connecting the servers through a “scale connector”. This connector physically attaches across the tops of 2 HX5 servers, allowing the internal communications to extend to each other’s nodes. The easiest way to think of this is like a Lego: it allows HX5 servers or a MAX5 to be snapped together. There will be a 2-node, a 3-node and a 4-node connector offering. This means you could have any number of combinations, from 2 x HX5 blade servers to 2 x HX5 blade servers + a MAX5 memory blade.

Memory Scalability
With the addition of a new 24-DIMM memory blade, called the MAX5, IBM is enabling users to grow the base memory from 16 memory DIMMs to 40 (16 + 24) memory DIMMs. The MAX5 will be connected via the scale connector mentioned above, and in fact, when coupled with a 2-node, 4-socket system, it could enable the entire system to have 80 DIMMs (16 DIMMs per HX5 plus 24 DIMMs per MAX5). Granted, this will be a 4-slot-wide offering, but it will be a powerful one for database servers, or even virtualization.
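The DIMM math above can be sanity-checked with a quick sketch. The helper below is purely illustrative (not an IBM tool); it just encodes the two stated capacities: 16 DIMMs per HX5 node and 24 per MAX5 memory blade.

```python
# Hypothetical helper to tally DIMM slots in a scaled HX5 configuration,
# based on the capacities stated above.
HX5_DIMMS_PER_NODE = 16
MAX5_DIMMS_PER_BLADE = 24

def total_dimms(hx5_nodes: int, max5_blades: int) -> int:
    """Total DIMM slots for a given mix of HX5 nodes and MAX5 blades."""
    return hx5_nodes * HX5_DIMMS_PER_NODE + max5_blades * MAX5_DIMMS_PER_BLADE

print(total_dimms(1, 1))  # single HX5 + MAX5 -> 40
print(total_dimms(2, 2))  # 2-node, 4-socket system with two MAX5s -> 80
```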

Hardware Partitioning
The final feature, known as FlexNode partitioning, is the ability to split a combined server node into individual server nodes and back again as needed. Performed using IBM software, this feature will enable a user to automatically take a 2-node HX5 system acting as a single 4-socket system, split it into 2 x 2-socket systems, then revert to a single 4-socket system once the workload is completed.

For example, during the day, the 4-socket HX5 server is used as a database server, but at night, when the database server is not being used, the system is partitioned into 2 x 2-socket physical servers that can each run their own applications.
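That day/night schedule could be driven by logic as simple as the sketch below. This is only an illustration of the decision, not IBM’s actual tooling: the mode names and the 8am–6pm business window are my assumptions, and the actual repartitioning would be done by IBM’s management software.

```python
# Illustrative sketch of a FlexNode-style day/night policy (hypothetical;
# the mode strings and business hours are assumptions, not IBM's API).
import datetime

def desired_mode(now: datetime.datetime) -> str:
    """Run as one 4-socket node during business hours, split otherwise."""
    if 8 <= now.hour < 18:
        return "single-4-socket"   # daytime: one big database server
    return "two-2-socket"          # nighttime: two independent servers

print(desired_mode(datetime.datetime(2010, 3, 2, 12)))  # single-4-socket
print(desired_mode(datetime.datetime(2010, 3, 2, 23)))  # two-2-socket
```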

As I’ve mentioned previously, the pricing and part number info for the IBM BladeCenter HX5 blade server is not expected to show up until the Intel Xeon 7500 processor announcement on March 30, so when that info is released, you can find it here.

For more details, head over to IBM’s web site.

Let me know your thoughts – leave your comments below.