VMware made very clear that all the changes were motivated by the feedback they have received:
As you have no doubt heard by now, VMware has announced a new version of vSphere along with some new and improved features; however, this post will not highlight those features. In this post, I want to talk about what did not improve – the licensing.
A reader recently commented on my article about HP’s new 32GB DIMM, “At $8039 per DIMM, HP can support 384GB in a BL460c at the cost of $96,000 per server just for the memory! If you filled just one rack with these servers, you would spend $6 million just for the memory. And the memory would run at a paltry 800MHz.”
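The reader's arithmetic holds up. Here is a quick sanity check in Python; the quoted figures come straight from the comment, while the 16-blades-per-chassis and 4-chassis-per-rack numbers for an HP c7000 environment are my own assumptions:

```python
# Checking the commenter's memory-cost math (prices from the quote;
# chassis/rack density figures are my assumptions, not from the post).
DIMM_PRICE = 8039                  # USD per 32GB DIMM (quoted)
DIMMS_PER_SERVER = 384 // 32       # 384GB in a BL460c = 12 x 32GB DIMMs

memory_cost_per_server = DIMM_PRICE * DIMMS_PER_SERVER

BLADES_PER_CHASSIS = 16            # assumption: half-height blades in a c7000
CHASSIS_PER_RACK = 4               # assumption: four 10U enclosures per 42U rack
memory_cost_per_rack = memory_cost_per_server * BLADES_PER_CHASSIS * CHASSIS_PER_RACK

print(memory_cost_per_server)      # 96468  -> roughly the $96,000 quoted
print(memory_cost_per_rack)        # 6173952 -> roughly the $6 million quoted
```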
Updated 5/24/2010 – I’ve received some comments about expandability and I’ve received a correction about the speed of Dell’s memory, so I’ve updated this post. You’ll find the corrections / additions below in GREEN.
Since I’ve received a lot of comments from my post on the Dell FlexMem Bridge technology, I thought I would do an unbiased comparison between Dell’s FlexMem Bridge technology (via the PowerEdge 11G M910 blade server) and IBM’s MAX5 + HX5 blade server offering. In summary, both offerings provide the Intel Xeon 7500 CPU plus the ability to add “extended memory,” offering value for virtualization, databases and any other workloads that benefit from large amounts of memory.
(Updated 4/22/2010 at 2:44 p.m.)
IBM officially announced the HX5 on Tuesday, so I’m going to take the liberty of digging a little deeper into the details of the blade server. I previously provided a high-level overview of the blade server in this post, so now I want to get a little more technical, courtesy of IBM. It is my understanding that the “general availability” of this server will be in the mid-June time frame, however that is subject to change without notice.
Below are the details of the actual block diagram of the HX5. There are no secrets here, as it uses the Intel Xeon 6500 and 7500 processors that I blogged about previously.
As previously mentioned, the value that the IBM HX5 blade server brings is scalability. A user has the ability to buy a single blade server with 2 CPUs and 16 DIMMs, then expand it to 40 DIMMs with a 24 DIMM MAX5 memory blade. Or, in the near future, a user could combine 2 x HX5 servers to make a 4 CPU server with 32 DIMMs, or add a MAX5 memory blade to each server and have a 4 CPU server with 80 DIMMs.
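The DIMM counts for each of those scaling options fall out of two base numbers. A minimal sketch (the per-node figures are from IBM's published specs above; the arithmetic is just mine):

```python
# DIMM totals for the HX5 scaling options described above.
DIMMS_PER_HX5 = 16    # per-node DIMM count (per IBM's specs)
DIMMS_PER_MAX5 = 24   # MAX5 memory blade DIMM count (per IBM's specs)

single_hx5 = DIMMS_PER_HX5                          # 1 x HX5: 16 DIMMs
single_with_max5 = DIMMS_PER_HX5 + DIMMS_PER_MAX5   # 1 x HX5 + MAX5: 40 DIMMs
dual_hx5 = 2 * DIMMS_PER_HX5                        # 2 x HX5 (4 CPUs): 32 DIMMs
dual_with_max5 = 2 * (DIMMS_PER_HX5 + DIMMS_PER_MAX5)  # 2 x (HX5 + MAX5): 80 DIMMs

print(single_hx5, single_with_max5, dual_hx5, dual_with_max5)
```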
The diagrams below provide a more technical view of the HX5 + MAX5 configs. Note, the “sideplanes” referenced below are actually the “scale connector”. As a reminder, this connector will physically connect 2 HX5 servers across the tops of the servers, allowing the internal communications to extend to each other's nodes. The easiest way to think of this is like a Lego brick: it will allow an HX5 or a MAX5 to be connected together. There will be 2-connector, 3-connector and 4-connector offerings.
(Updated) Since the original posting, IBM released the “eX5 Portfolio Technical Overview: IBM System x3850 X5 and IBM BladeCenter HX5,” so I encourage you to go download it and give it a good read. David’s Redbook team always does a great job answering all the questions you might have about an IBM server inside those documents.
If there’s something about the IBM BladeCenter HX5 you want to know about, let me know in the comments below and I’ll see what I can do.
Thanks for reading!
(UPDATED 11:29 AM EST 3/2/2010)
IBM announced today the BladeCenter® HX5 – their first 4 socket blade since the HS41 blade server. IBM calls the HX5 “a scalable, high-performance blade server with unprecedented compute and memory performance, and flexibility ideal for compute and memory-intensive enterprise workloads.”
The HX5 will have the ability to be coupled with a second HX5 to scale to 4 CPU sockets, to grow beyond the base memory with the MAX5 memory expansion, and to offer hardware partitioning to split a dual-node server into 2 x single-node servers and back again. I’ll review each of these features in more detail below, but first, let’s look at the basics of the HX5 blade server.
- Up to 2 x Intel Xeon 7500 CPUs per node
- 16 DIMMs per node
- 2 x Solid State Disk (SSD) slots per node
- 1 x CIOv and 1 x CFFh daughter card expansion slots per node, providing up to 8 I/O ports per node
- 1 x scale connector per node
In the fashion of the eX5 architecture, IBM is enabling the HX5 blade server to grow from 2 CPUs to 4 CPUs (and theoretically more) by connecting the servers through a “scale connector”. This connector will physically connect 2 HX5 servers across the tops of the servers, allowing the internal communications to extend to each other's nodes. The easiest way to think of this is like a Lego brick: it will allow an HX5 or a MAX5 to be connected together. There will be 2-connector, 3-connector and 4-connector offerings. This means you could have any number of combinations, from 2 x HX5 blade servers to 2 x HX5 blade servers + a MAX5 memory blade.
With the addition of a new 24 DIMM memory blade, called the MAX5, IBM is enabling users to grow the base memory from 16 memory DIMMs to 40 (16 + 24) memory DIMMs. The MAX5 will be connected via the scale connector mentioned above, and in fact, when coupled with a 2-node, 4-socket system, could enable the entire system to have 80 DIMMs (16 DIMMs per HX5 plus 24 DIMMs per MAX5). Granted, this will be an offering four blade slots wide, but it will be a powerful offering for database servers, or even virtualization.
The final feature, known as FlexNode partitioning, is the ability to split a combined server node into individual server nodes and back again as needed. Performed using IBM software, this feature will enable a user to automatically take a 2-node HX5 system acting as a single 4-socket system, split it into 2 x 2-socket systems, and then revert back to a single 4-socket system once the workload is completed.
For example, during the day, the 4-socket HX5 server is used as a database server, but at night, when the database server is not being used, the system is partitioned off into 2 x 2-socket physical servers that can each run their own applications.
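That day/night scenario amounts to a simple time-based policy. Here is a hypothetical sketch of the scheduling decision; `desired_mode()` and the business-hours window are my own inventions, and the actual partition/unify operations live in IBM's systems-management software, which I'm not showing here:

```python
# Hypothetical sketch of a FlexNode-style day/night policy (my own helper,
# not IBM's API). The real partition/unify calls would be issued by IBM's
# management software based on a decision like this one.
from datetime import time

def desired_mode(now: time) -> str:
    """Unified 4-socket database server by day, two 2-socket nodes by night."""
    business_hours = time(8, 0) <= now < time(18, 0)  # assumed 8am-6pm window
    return "unified" if business_hours else "partitioned"

print(desired_mode(time(12, 0)))   # unified: daytime database workload
print(desired_mode(time(23, 0)))   # partitioned: two independent 2-socket nodes
```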
As I’ve mentioned previously, the pricing and part number info for the IBM BladeCenter HX5 blade server is not expected to show up until the Intel Xeon 7500 processor announcement on March 30, so when that info is released, you can find it here.
For more details, head over to IBM’s website.
Let me know your thoughts – leave your comments below.
IBM’s Enterprise x-Architecture has been around for quite a while providing unique Scalability, Reliability and Flexibility in the x86 4-socket platforms. You can check out the details of the eX4 technology here.
Today’s announcement offered up a few facts:
a) the existing x3850 and x3950 M2 will be called x3850 and x3950 X5 signifying a trend for IBM to move toward product naming designations that reflect the purpose of the server.
b) the x3850 and x3950 X5s will use the Intel Nehalem EX – to be officially announced/released on March 30, at which time we can expect full details including part numbers, pricing and technical specifications.
e) IBM also announced a new technology, known as “FlexNode,” that offers up physical partitioning capability for servers to move from being a single system to 2 unique systems and back again.
An observation from the pictures of the HX5 is that it will not have hot-swap drives, like the HS22’s do. This means there will be internal drives – most likely solid-state drives (SSDs). You may recall from my previous rumour post that the reason for the lack of hot-swap drives is pretty evident – IBM needed the real estate for the memory. Unfortunately, until memristors become available, blade vendors will need to sacrifice real estate for memory.
2) As part of the MAX5 technology, IBM will also be launching a memory blade to increase the overall memory on the HX5 blade server. Expect more details on this in the near future.
Visit IBM’s website for their Live eX5 Event at 2 p.m. Eastern time at this site:
As more information comes out on the new IBM eX5 portfolio, check back here and I’ll keep you posted. I’d love to hear your thoughts in the comments below.