Dell recently published a new whitepaper comparing the performance and power efficiency of four-blade Dell PowerEdge M710HD and M620 solutions against a four-blade Cisco UCS B250 M2 solution. Here is a summary of the key findings:
Performance / watt
The higher performance and lower power draw of the four-blade Dell solutions, compared to the UCS B250 M2 blade solution, gave the PowerEdge M710HD solution a 76% higher performance-per-watt score and the PowerEdge M620 solution a 108% higher performance-per-watt score.
Power at Idle
Even with all blades configured with the same amount of system memory, the four-blade PowerEdge M710HD solution consumed 58% as much power at idle as the four-blade UCS B250 M2 solution with its extra DIMMs and supporting circuitry. Similarly, the four-blade PowerEdge M620 blade solution drew just 55% as much power at idle as the Cisco blade solution.
Power at 100% Load
Both of the four-blade PowerEdge solutions, again with the same amount of system memory installed per blade, drew 64% to 67% as much power as the four-blade Cisco UCS B250 M2 blade solution with all blades running at 100% load.
Performance
With the same processor models and the same memory capacity installed in each blade, the four-blade solution based on PowerEdge M710HD blades provided up to 11% higher performance than the four-blade solution based on UCS B250 M2 blades, and the four-blade solution based on PowerEdge M620 blades provided up to 25% higher performance than the UCS blade solution.
Rack density
When the 10U M1000e Modular Blade Enclosure is equipped with its maximum of sixteen M710HD or M620 servers, the solution can fit 1.6 servers per rack unit of space, 2.4 times as dense as the solution with Cisco UCS B250 M2 blades.
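The density figures above can be checked with a little arithmetic. A minimal sketch, assuming a Cisco UCS 5108 chassis is 6U and holds four full-width B250 M2 blades (eight half-width slots, each B250 taking two) — those chassis figures come from public specs, not from the whitepaper itself:

```python
# Rack-density math behind the 1.6 servers/U and 2.4x claims.
dell_density = 16 / 10   # 16 M710HD/M620 blades in the 10U M1000e enclosure
cisco_density = 4 / 6    # assumed: 4 full-width B250 M2 blades in a 6U UCS 5108

print(dell_density)                             # servers per rack unit
print(round(dell_density / cisco_density, 1))   # Dell density relative to Cisco
```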
Cost
In the configuration tested, the Cisco UCS B250 M2 blade solution costs $112,591.02[1], while the similarly configured Dell PowerEdge M710HD solution costs 34% less at $73,820.00, and the PowerEdge M620 solution costs 33% less at $75,372.00.
To read the report in its entirety, please visit:
To read the writer’s blog post on this report, visit:
Kevin Houston is the founder and Editor-in-Chief of BladesMadeSimple.com. He has over 15 years of experience in the x86 server marketplace. Since 1997 Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization. Kevin works for Dell as a Server Specialist covering the Global 500 East market.
I appreciate Dell testing out our servers. Yes, I do work for Cisco.
While I appreciate the numbers that the Dell servers put up, this test is flawed in a few areas.
1. Comparing these servers to the B250 is like comparing the speed and performance of a car to that of a truck. The B250 is a unique server designed to provide more memory than the Intel design would otherwise allow. It is best suited for memory-hungry applications.
2. A better comparison using the B250 would be to equip it with 384GB of RAM and use eight Dell servers with 192GB of RAM each – for an equal memory footprint.
3. In regards to power – using 48 x 4GB DIMMs on the B250 is inefficient. A more power-efficient design for 192GB of RAM in the B250 is to use 24 x 8GB DIMMs. This not only saves a significant amount of power but cost as well.
4. A fairer test between Cisco and Dell would have been a similar configuration with the B200 M2 configured with 192GB of RAM, or the new B200 M3 using the new E5 processors. The B250 naturally consumes more power due to its size – thus it is an unfair comparison with the Dell servers.
5. Another problem with the design is that the chassis was configured with only one IOM. This would not be a recommended design and will impact I/O performance.
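The arithmetic behind points 2 and 3 above is easy to verify. A quick sketch (the blade counts and DIMM configurations are the commenter's, not the report's):

```python
# Point 2: equal total memory footprint across the two proposed setups
b250_total = 4 * 384   # four B250 blades at 384GB each
dell_total = 8 * 192   # eight Dell blades at 192GB each
print(b250_total, dell_total)   # both sides land on the same total GB

# Point 3: the same 192GB per blade with half as many DIMMs
config_4gb = 48 * 4    # 48 x 4GB DIMMs (as tested)
config_8gb = 24 * 8    # 24 x 8GB DIMMs (the suggested alternative)
print(config_4gb, config_8gb)   # identical capacity, 24 fewer DIMMs to power
```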
I did not see any notation of where and how the pricing for the UCS solution was calculated. It would be fair to include that, as the pricing was not accurate.
It was good to see the BIOS settings noted for all the servers; while not optimal for performance, they do fairly document how the servers were configured.
For actual power consumption of UCS configurations, one can access the power calculator and links to public server performance comparisons; these links are also available on the Cisco.com website.
I don’t think either the Dell or the Cisco servers offer 1333MHz at 3 DPC, which both IBM and HP are able to do (as they offer HyperCloud in addition to LRDIMMs).
HyperCloud underpins the IP that is in LRDIMMs and the expected DDR4, and it delivers superior performance compared to LRDIMMs, which cannot deliver 1333MHz at 3 DPC.
I think Cisco has dropped its ASIC-on-motherboard approach (the “Catalina” ASIC?) and has adopted an LRDIMM-type approach – which means Cisco will be offering the LRDIMM/HyperCloud products.
But I don’t think Dell or Cisco can currently offer 1333MHz at 3 DPC.
Comparing power efficiency across servers is problematic if on one server you cannot install memory to full capacity (3 DIMMs per channel, or 3 DPC) while on the other you can.
For many virtualization tasks (especially with processors getting faster) you can cram a lot of VMs onto a 2-socket server – and total memory capacity is the bottleneck.
So if you compare a 2-socket server offering 384GB (24 x 16GB LRDIMMs/HyperCloud) or 768GB (24 x 32GB LRDIMMs/HyperCloud, expected soon), you may want to compare according to the virtualization task – for many tasks the fully memory-loaded server will outclass the less-loaded server:
– lower power consumption per VM
– smaller footprint in the data center
– smaller UPS/generator power footprint (if you need fewer servers, as doubling memory does the job)
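The per-VM argument above can be sketched numerically. The VM size and per-server wattage here are hypothetical illustrations, not figures from the report or the comment:

```python
vm_memory_gb = 8         # assumed memory per VM, for illustration only
watts_per_server = 400   # assumed per-server draw, for illustration only

# VMs one 2-socket server can host at each capacity, when memory
# (not CPU) is the bottleneck:
vms_per_server = {cap: cap // vm_memory_gb for cap in (192, 384, 768)}
for cap, vms in vms_per_server.items():
    # doubling memory doubles the VM count and halves the watts per VM
    print(f"{cap}GB -> {vms} VMs, {watts_per_server / vms:.1f} W/VM")
```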
I have posted the memory choices available for IBM and HP (check out the other posts on the blog for thoughts on “memory loading”, load reduction, and other issues with LRDIMM/HyperCloud memory and how they relate to the move to DDR4 in 2013):
http://ddr3memory.wordpress.com/2012/05/24/installing-memory-on-2-socket-servers-memory-mathematics/
May 24, 2012
Installing memory on 2-socket servers – memory mathematics
For HP:
http://ddr3memory.wordpress.com/2012/05/24/memory-options-for-the-hp-dl360p-and-dl380p-servers-16gb-memory-modules/
May 24, 2012
Memory options for the HP DL360p and DL380p servers – 16GB memory modules
http://ddr3memory.wordpress.com/2012/05/24/memory-options-for-the-hp-dl360p-and-dl380p-servers-32gb-memory-modules/
May 24, 2012
Memory options for the HP DL360p and DL380p servers – 32GB memory modules
For IBM:
http://ddr3memory.wordpress.com/2012/05/25/memory-options-for-the-ibm-system-x3630-m4-server-16gb-memory-modules-2/
May 25, 2012
Memory options for the IBM System x3630 M4 server – 16GB memory modules
http://ddr3memory.wordpress.com/2012/05/25/memory-options-for-the-ibm-system-x3630-m4-server-32gb-memory-modules/
May 25, 2012
Memory options for the IBM System x3630 M4 server – 32GB memory modules