A recent article from AsiaOne.com reported that modern data centers are having problems handling the dense server environment that blade servers create.  The article mentions that traditional data centers built less than five years ago were designed for a uniform power distribution of around 2kW to 4kW (kilowatts) per server rack.  With blade server growth at its highest since their inception eight years ago, today’s data centers are packed with dense blade servers that are now pushing the envelope beyond 12kW, putting a huge strain on data center design.  In fact, according to Rakesh Kumar, a Gartner research vice-president, “A rack that is 60 per cent filled could have a power draw as high as 12kW.”  The article goes on to mention that current data centers may need to be re-designed to handle the future power requirements of blades.
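The article’s figures can be sanity-checked with some back-of-the-envelope arithmetic. Here is a minimal sketch, assuming (purely for illustration — neither figure comes from the article) four 16-blade chassis per 42U rack and roughly 310W per blade:

```python
# Back-of-the-envelope rack power estimate. The slot count and per-blade
# wattage below are illustrative assumptions, not vendor specifications.

def rack_power_kw(slots_per_rack, fill_fraction, watts_per_blade):
    """Estimated rack power draw in kW for a given fill level."""
    return slots_per_rack * fill_fraction * watts_per_blade / 1000.0

# Assumed: 4 chassis x 16 blades = 64 slots per rack, ~310W per blade.
print(rack_power_kw(64, 0.6, 310))  # ~11.9kW at 60% fill
print(rack_power_kw(64, 1.0, 310))  # ~19.8kW fully loaded
```

Under those assumptions, a rack only 60% filled already lands right around the 12kW Gartner cites, and a full rack blows well past any 2kW–4kW design point.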

(Full article can be read here: http://business.asiaone.com/Business/Tech%2BSense/Highlights/Story/A1Story20110311-267475.html)

I personally remember when a key message of blade servers was that they would consume less power and energy than rack servers, but this information seems to contradict that.  I’m a bit baffled as to what has changed over the past 4 years.  The chassis holds the same quantity of blade servers.  The only things that have changed are the I/O modules and the server architecture (more RAM, more powerful CPUs).

Why do you think power in blade servers has become such an issue?  Is it the server hardware from Intel and AMD driving high wattage requirements?  Is it simply, as the article alludes to, the fact that there are more blade servers in the typical rack than in the past?  I’d love to hear your thoughts.

  • Marc Farley

    Hi, I work for HP in storage, not servers, so I’m not an expert in blade server technology, but I know that we are working on reducing power and cooling requirements for our blade servers. Here is a link to a page on our web site that discusses this technology: http://h18000.www1.hp.com/products/blades/thermal-logic/

  • http://twitter.com/bladeguy Ken Henault

Some countries in Asia, like Japan, only support 110V power infrastructures. As such, they are not able to get the power densities we have in the US. Using 3-phase 220V power, data centers in the US can support a 15kW rack. Special cooling solutions are required to handle this power density.

Blades are not a viable option in situations where power is limited to 110V, regardless of vendor. The exception would be departmental solutions like the HP BladeSystem c3000 or IBM BladeCenter S. But remember, these are departmental solutions, and they typically don’t scale well in large enterprise deployments.

    Disclaimer: I work for HP.

  • http://twitter.com/TonyKnowsPower Tony Harvey

This has nothing to do with blades or power efficiency; it’s all about power density, which has been going up since the first 1U and 2U servers came out.

Anybody in 2005/2006 who was designing around server racks at 2kW – 4kW was living in a dream world. It was easy to build a 1U rack server from any vendor that would pull 300W – 500W (it still is), so a fully loaded rack would draw anywhere from 12kW to 24kW. Do something exotic with HPC gear and GPUs and you could easily build a 30kW – 40kW rack (I’ve actually been part of a team that built such a monster).

Blade servers just tend to be more power dense than 1U, or especially 2U, servers. You can fit more of them in any given space, so surprise, surprise, your power per rack goes up.

Disclosure: I work for Cisco.

  • Pingback: Kevin Houston

  • Pingback: unix player

  • http://twitter.com/joechin13 joechin13

I am not sure which a*s AsiaOne.com pulled the 2-4KW/rack figure from. It seems really dated, more like a flashback from a 1998 data centre. All data centres I’ve built in the last decade have been predicated on at least 8KW/rack, and should be 16-24KW these days. The as-built data centres in Asia (Taiwan, Philippines, etc.) I have worked with in the last few years have been built to support the same 8-24KW/rack standard. Blade servers are more power efficient than ever, but we are packing more blades/CPUs/cores into each rack.

    Disclosure: I work in the best interests of my clients :)

  • http://twitter.com/jscaramella Jed Scaramella

Blades have been a bit of a double-edged sword for power and cooling. Generally the overall power and cooling requirements for the datacenter are lowered, but the power now needs to be delivered to a more confined/denser space. Datacenters that use fully loaded chassis often do have to use targeted cooling vs. room-level cooling.
    Many datacenters keep chassis only 50-60% filled, which keeps the energy challenges in check and still delivers most of the benefits realized from blades.

  • http://pulse.yahoo.com/_6IJQOMKNPYVQ7C5PDZNWOP5X2I ihavnoclew

I have worked on the design side of multiple data centers that got pinched when they jumped on the high-kW-per-rack bandwagon. Disclosure: I agree with the logic behind it and know that we are all headed in that direction. But proceed with caution; look at the equipment that you currently own, understand what new equipment you will be buying and when, and make a reasonable projection of your virtualization assumptions. And most of all, don’t back yourself into a corner assuming that you can hit 30kW per rack tomorrow. I have, unfortunately, witnessed the termination of multiple IT executives who jumped on the bandwagon too soon. The best way to hedge is to work with a modular design – don’t build too much, too soon. Unfortunately, many large organizations will only approve a large CapEx once, and will not allow it to be phased in over time.

    If anyone is interested, I can give you the names of a handful of major US organizations that would love to have a few more square feet in their designed 15kW per rack center that is only utilizing 8kW per rack.
