Blade Servers Causing Data Centers To Be Re-designed?

A recent article from AsiaOne.com reported that modern data centers are struggling to handle the dense server environment that blade servers create. The article notes that traditional data centers built less than five years ago were designed for a uniform power distribution of around 2kW to 4kW (kilowatts) per server rack. With blade server adoption growing faster than at any point since their introduction eight years ago, today's data centers are packed with dense blade servers that push well beyond 12kW per rack, putting a huge strain on data center design. In fact, according to Rakesh Kumar, a Gartner research vice-president, "A rack that is 60 per cent filled could have a power draw as high as 12kW." The article goes on to say that current data centers may need to be re-designed to handle the future power requirements of blades.
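To put those figures in perspective, here's a rough back-of-the-envelope sketch. The blade counts and per-blade wattages below are my own illustrative assumptions, not numbers from the article, but they show how quickly a partially filled rack of blades can blow past a 2kW to 4kW design point:

```python
# Back-of-the-envelope rack power estimate.
# All figures below are illustrative assumptions, not numbers
# taken from the Gartner/AsiaOne article.

BLADES_PER_CHASSIS = 16   # e.g., a typical half-height blade chassis
CHASSIS_PER_RACK = 4      # roughly four 10U chassis in a 42U rack
WATTS_PER_BLADE = 350     # assumed average draw per blade under load
FILL_RATIO = 0.60         # "a rack that is 60 per cent filled"

blades = BLADES_PER_CHASSIS * CHASSIS_PER_RACK * FILL_RATIO
rack_kw = blades * WATTS_PER_BLADE / 1000.0

print(f"Blades in rack: {blades:.0f}")
print(f"Estimated rack draw: {rack_kw:.1f} kW")
# ~38 blades at 350 W each works out to roughly 13.4 kW,
# well past a 2-4 kW per-rack design point.
```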

(Full article can be read here: http://business.asiaone.com/Business/Tech%2BSense/Highlights/Story/A1Story20110311-267475.html)

I personally remember when a key selling point of blade servers was that they would consume less power and energy than rack servers, but this information seems to contradict that.  I'm a bit baffled as to what has changed over the past four years.  The chassis holds the same number of blade servers.  The only things that have changed are the I/O modules and the server architecture (more RAM, more powerful CPUs).
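As a rough illustration of how those architecture changes alone could move the needle, here's a quick sketch comparing a hypothetical older blade configuration with a current one in the same chassis. The CPU and memory wattages are made-up assumptions for the sake of argument, not measured figures:

```python
# Hypothetical per-blade power comparison, same 16-blade chassis.
# CPU TDP and DIMM wattages are illustrative assumptions only.

BLADES_PER_CHASSIS = 16

def blade_watts(cpu_tdp_w, cpus, dimm_w, dimms, base_w=60):
    """Crude per-blade estimate: CPUs + memory + a flat base for everything else."""
    return cpu_tdp_w * cpus + dimm_w * dimms + base_w

old_blade = blade_watts(cpu_tdp_w=80,  cpus=2, dimm_w=5, dimms=4)   # older-generation guess
new_blade = blade_watts(cpu_tdp_w=130, cpus=2, dimm_w=5, dimms=12)  # current-generation guess

print(f"Old chassis: {old_blade * BLADES_PER_CHASSIS / 1000:.1f} kW")
print(f"New chassis: {new_blade * BLADES_PER_CHASSIS / 1000:.1f} kW")
# Same blade count, but hotter CPUs and triple the DIMMs push the chassis
# from roughly 3.8 kW to about 6.1 kW in this made-up example.
```

Even with the blade count held constant, a modest bump in per-blade draw multiplies across the whole chassis and rack.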

Why do you think power in blade servers has become such an issue?  Is it the server hardware from Intel and AMD driving higher wattage requirements?  Is it simply, as the article suggests, that there are more blade servers in the typical rack than in the past?  I'd love to hear your thoughts.