A reader recently commented on my article about HP’s new 32GB DIMM, “At $8039 per DIMM, HP can support 384GB in a BL460c at the cost of $96,000 per server just for the memory! If you filled just one rack with these servers, you would spend $6 million just for the memory. And the memory would run at a paltry 800MHz.
Cisco can stuff 384GB of RAM into a B250 blade for almost one fifth the cost using 8GB DIMMs… and run that memory bus at 1333MHz. I realize the Cisco B250 is twice as big as the HP BL460c, but when you’re saving $75,000+ in memory on EVERY SERVER and getting better performance out of that memory, you can afford to buy an extra slot in a blade chassis.” This reader brought up some good points, so in today’s article I figured I’d dig into this a little.
First, let’s examine the maximums (per 42U rack):
Cisco UCS B250 M2 (using 8GB DIMMs) | HP BL460 G7 (using 32GB DIMMs)
28 servers | 64 servers
56 CPUs | 128 CPUs
448 cores | 768 cores
10.752TB Max Memory | 24.576TB Max Memory
The table above shows that the HP BL460 G7 with 32GB DIMMs offers higher server density, more CPUs and cores, and more memory than the Cisco B250 M2. As the reader commented in the opening above, the Cisco UCS B250 M2 offering is much cheaper, but it tops out at half the maximum memory of the HP 32GB DIMM offering. Based on the differences in the table, I’m not sure comparing the Cisco UCS B250 M2 with the HP BL460 G7 with 32GB DIMMs is a fair comparison. Perhaps a better comparison would be to look at the HP BL460 G7 using 16GB DIMMs per 42U rack:
Cisco UCS B250 M2 (using 8GB DIMMs) | HP BL460 G7 (using 16GB DIMMs)
28 servers | 64 servers
56 CPUs | 128 CPUs
448 cores | 768 cores
10.752TB Max Memory | 12.288TB Max Memory
The full-width form factor of the Cisco B250 M2 limits the number of servers you can fit into a 42U rack, which affects the overall maximums when compared to the HP BL460 G7 using 16GB DIMMs. Of course, at $3,619 (U.S. List) each, the 16GB DIMMs aren’t cheap either. In fact, a fully loaded BL460 G7 would cost $43,428 in memory alone – which equals $2,779,392 per 42U rack. Keep in mind, this is the memory cost alone; the chassis, servers, and interconnects would add considerably more.
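For anyone who wants to check the per-rack arithmetic, here is a minimal sketch (a helper I wrote for illustration, not a vendor configuration tool) that reproduces the capacity and memory-cost figures above from the U.S. list prices quoted in this article, using the article’s 1TB = 1,000GB convention:

```python
# A minimal sketch (my own helper, not a vendor tool) that reproduces the per-rack
# memory capacity and memory cost figures used in this article. DIMM prices are the
# U.S. list prices quoted above; TB figures follow the article's 1TB = 1,000GB usage.

def rack_memory(servers_per_rack, dimm_slots, dimm_size_gb, dimm_price_usd):
    """Return (memory in TB, memory cost in USD) for a fully populated 42U rack."""
    per_server_gb = dimm_slots * dimm_size_gb
    per_server_cost = dimm_slots * dimm_price_usd
    return servers_per_rack * per_server_gb / 1000, servers_per_rack * per_server_cost

# HP BL460 G7: 64 half-height blades per 42U rack, 12 DIMM slots per blade
print(rack_memory(64, 12, 32, 8039))   # 32GB DIMMs -> (24.576 TB, $6,173,952)
print(rack_memory(64, 12, 16, 3619))   # 16GB DIMMs -> (12.288 TB, $2,779,392)

# Cisco UCS B250 M2: 28 full-width blades per 42U rack, 48 x 8GB DIMMs per blade
print(rack_memory(28, 48, 8, 0))       # price 0 = unknown; Cisco list pricing isn't published
```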
If you are trying to get the maximum amount of RAM in a 42U rack without breaking the bank, check out the maximums using HP’s BL2x220c G7 blade servers in a 42U rack:
Cisco UCS B250 M2 (using 8GB DIMMs) | HP BL2x220c G7 (using 16GB DIMMs)
28 servers | 128 servers
56 CPUs | 256 CPUs
448 cores | 1536 cores
10.752TB Max Memory | 12.288TB Max Memory
The 16GB DIMMs used in the HP BL2x220c are $999 (U.S. List), so a maxed-out BL2x220c G7 node costs $5,994 in memory. Multiplied out across the 128 server nodes you can get in a 42U rack, the HP BL2x220c G7 would cost $767,232 for all of the memory in a 42U rack. How this compares, in price, to the Cisco UCS B250 M2, I don’t know – Cisco doesn’t offer a public way to look up list prices.
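Plugging the BL2x220c G7 figures into the same illustrative rack_memory() helper from the sketch above reproduces the numbers in this paragraph:

```python
# HP BL2x220c G7: 128 server nodes per 42U rack, 6 x 16GB DIMMs per node at $999 each
print(rack_memory(128, 6, 16, 999))    # -> (12.288 TB, $767,232) per rack
```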
The reality is that it is very unlikely someone will fill a rack full of servers with maximum memory, much less with large DIMM sizes like 32GB. Even if the sole goal were maximum RAM per 42U rack, there are better ways to achieve it than using large DIMM sizes, as shown in the examples above. Yes, I used HP in the examples above, but HP isn’t necessarily the leader in max memory per rack; the other blade server vendors also have large-memory solutions. Dell’s PowerEdge M710HD server (https://bladesmadesimple.com/2010/06/dell-announces-new-blade-servers-m710hd-and-m610x/) can offer 12.28TB of maximum memory in a rack, and IBM’s HX5 (https://bladesmadesimple.com/2010/03/announcing-the-ibm-bladecenter-hx5-blade-server-with-detailed-pics/) can scale to large amounts of memory per rack using the MAX5 memory expansion.
I have to give Cisco credit. The way they get to 384GB of RAM is very impressive. The B250 blade architecture takes 4 x 8GB DIMMs and presents them to the system as a single logical 32GB DIMM. Check out this post for details on how that works:
https://bladesmadesimple.com/2010/01/384gb-ram-in-a-single-blade-server-how-ciscos-making-it-happen/. I will be curious to see whether Cisco plans to enable 768GB on the B250 M2 using 16GB DIMMs.
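As a purely conceptual sketch of the idea – not Cisco’s actual Catalina ASIC logic, which is proprietary, and with an interleaving scheme I picked only for illustration – this shows how one larger logical DIMM can be presented to the memory controller while the data actually lives on four smaller physical DIMMs:

```python
# Conceptual illustration only: presenting four physical DIMMs as one larger logical
# DIMM by interleaving addresses across them. This is NOT Cisco's actual ASIC design;
# it just sketches the idea of mapping one logical address space onto several modules.

PHYSICAL_DIMMS = 4          # e.g. 4 x 8GB behind one logical 32GB DIMM
LINE_SIZE = 64              # bytes per cache line (typical x86 cache line)

def map_logical_address(logical_addr):
    """Map a logical byte address to (physical DIMM index, address within that DIMM)."""
    line = logical_addr // LINE_SIZE          # which cache line the address falls in
    dimm = line % PHYSICAL_DIMMS              # interleave lines round-robin across DIMMs
    local_line = line // PHYSICAL_DIMMS       # line offset within the chosen DIMM
    local_addr = local_line * LINE_SIZE + (logical_addr % LINE_SIZE)
    return dimm, local_addr

# Consecutive cache lines land on different physical DIMMs:
for addr in (0, 64, 128, 192, 256):
    print(addr, map_logical_address(addr))
```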
Conclusion
Yes, the $8k per 32GB DIMM is hefty. I don’t know of many users buying even 16GB DIMMs today because the price tag is too large. Perhaps the best thing the 32GB DIMM will bring to the market, as Dell, IBM and even Cisco begin selling it, is that it will drive down the cost of 16GB DIMMs, make 8GB DIMMs “the standard,” and possibly eliminate 4GB DIMMs altogether.
You’re spot on: The 32GB DIMM won’t generally be consumed by the mainstream user. That’s not who is demanding them right now, though.
Demand comes from folks in CAD/EDA and other segments you might call specialty. The big memory footprints enable x86 to be used in places that were strictly UNIX before. In that midrange area of the server market, this DIMM can actually mean a dramatically lower overall system price. Not all vendors supply solutions for that set of users, though.
I agree – it would be helpful if Cisco were more transparent with pricing.
Disclaimer: I work for HP.
Disclaimer: I work for Cisco.
Hey Kevin – some thoughts:
1. Cisco does not support 16GB DIMMs in the B250 today. Some of your column headings above say 16GB.
2. The BL2x220 has a serious limitation in that it cannot attach to external FC storage.
3. Westmere-EP currently tops out at 6 cores unless I missed something, so that would bring the core count down for both vendors.
4. I’d love to see a dual-CPU Nehalem-EX comparison using the B230 vs the BL620c if you have time.
5. Thanks for your compliments on the B250 ASIC design. It’s quite a marvel of technology for sure.
What I take away from this is that if I want an HP chassis with BL460 G7 blades and max RAM in each blade, I can pay either $2,779,392 per rack or $6,000,000 per rack (the “not included” chassis and interconnect costs are eclipsed by these staggering numbers).
I wish we had an online configuration tool too – it would show just how much higher these prices are than what you’d pay with Cisco UCS. The decision as to when we will have one is made well above my pay grade, though…
Dan: If you needed pricing, why didn’t you come to me? If you’re interested in purchasing Cisco UCS hardware, I can connect you with a partner (provided you don’t already have a Cisco sales rep). :)
I disagree that CAD/EDA customers will want this 32GB DIMM and the 20-25% performance hit that comes with running that memory at 800MHz. Cisco can run that same memory footprint at 1333MHz at 1/5th the price (running forty-eight 8GB DIMMs in a B250). Kudos if you can get a customer to spend $96,000 for slow memory for a single server.
Today the Cisco B250 supports a maximum of 384GB of memory using forty-eight 8GB DIMMs. Stevie Chambers explains why here: http://viewyonder.com/2009/06/30/why-has-cisco-ucs-stop-at-348gb-ram/
Hi Doron,
Are you really running forty-eight 8GB DIMMs at 1333MHz? If so, I’d like to see the official specifications and the chipset you are using.
Barend van Arnhem
Barend,
The UCS B250 uses a standard Intel 5520 chipset and 5600-series CPUs. The differentiation comes from Cisco Extended Memory Technology. Have a look at this white paper for more info on Extended Memory: http://www.cisco.com/en/US/prod/collateral/ps10265/ps10280/ps10300/white_paper_c11-525300.pdf
-Jim
At least you’re acknowledging that the B200 isn’t competitive here. :)
I’ll call you out on the price claim you’ve just made. What is the Cisco list price for a rack of B250 servers in your configuration? Or any of Kevin’s configurations?
I think the key here is that Cisco has some interesting options. With the B250, the Westmere CPUs are very cost effective, and combined with extended memory and cost-effective 4GB and 8GB DIMMs, it leads to a very compelling solution for certain applications. The B230 has better server density, but may not be the best fit.
Disclaimer: I work for HP
Two questions I have regarding UCS memory extension:
1. Every DIMM in a system consumes power. While it is clever to aggregate smaller DIMMs together to build larger memory footprints, this has to have an adverse effect on power consumption (4 x 4GB DIMMs will consume roughly four times the power of 1 x 16GB DIMM). Seeing as power is the biggest factor in TCO (and memory is one of the biggest power draws in a server), that is surely a critical downside of this approach.
2. If you are using some form of chip to act as the aggregator for the DIMMs (to turn them into a “vDIMM,” for want of a better phrase), then surely that adds latency, since it has to perform some form of address translation. Latency is the biggest problem area for memory performance, so how does this affect it?
As a side point, what I am seeing out there is customers not adopting these high-memory-footprint blades (from any vendor!) because of the eggs-in-one-basket problem, and they probably won’t until the 8-VM limit on vMotion is lifted in ESX.
The Cisco ASIC (codename Catalina) does add a small amount of latency to the first word of data fetched from memory. Subsequent data words arrive at the full memory bus speed with no additional delay. For example, in a typical Nehalem configuration, the memory latency grows from ~63 nanoseconds to ~69 nanoseconds (+10%), but that is still well below the ~102 nanoseconds (+62%) required to access memory on a different socket, and orders of magnitude below the time required to fetch data from disk. – Reference taken from the Cisco book “Project California: a Data Center Virtualization Server.”
Why do people seem to have forgotten about the B230? That’s 2-socket, half-height, with 32 DIMM slots. I think that would provide a much fairer comparison to the BL460s, since they’re also half-height, 2-socket blades.
Also, FWIW, 8GB DIMMs are the standard nowadays as far as I’m concerned. They’re less than twice as expensive as the 4GB DIMMs.
In a practical environment, you would never be able to populate the entire rack due to power and cooling limitations unless you had water-cooled racks.
If the comparison is based on a standard rack power footprint, for example 6kW, the rack densities would probably be the same for both vendors. The density might be in favour of HP with a 10kW rack, but probably by only one or two servers.
It would really come down to price/performance based on power, rack space, etc., and given the current pricing structure, the 32GB modules might not make for a very attractive proposition.