Contrary to popular belief and growing market share, blade servers are NOT for everyone. You may be surprised to hear that from a site that focuses only on blade servers, but the reality is, there are a few situations that don’t warrant blade servers. Here are the top 5 reasons you may not want blade servers.
Reason #1 – Space is Not a Concern
If you have a data center full of empty racks and space is not a concern, then blade servers may not be for you. At first, this may seem logical, but then I ask you: are you virtualizing at all? If so, why? You have plenty of space, so why not run each workload on its own individual server? You probably virtualize for several reasons, among them ease of management and consolidation. The same reasoning applies to blade servers. You’ll have an easier way to manage your physical servers while consolidating network and storage ports, which in turn lowers your operational expenses.
Reason #2 – Power & Cooling is Not a Concern
I love talking to I.T. people who think that power doesn’t matter. It probably doesn’t matter to them, but believe me, someone in the organization cares about power. In fact, even the power companies pay attention to how much power data centers draw. Even if you never see the power bill, consider comparing the power requirements of blade servers with those of your existing rack servers. When it comes to power and cooling, many people don’t believe that blade servers can actually require less than rack servers. The truth is they can, given the right economies of scale. A recent paper by Principled Technologies compared 2U servers with blade servers: a rack of 20 x 2U servers drew 8kW of power, while 32 blade servers required only 6.66kW, a savings of 1.34kW. That translates to roughly $1k per year in power savings. Imagine telling your CFO that you have a way to save a few $k per year…
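The math above is easy to sanity-check yourself. Here is a minimal sketch of the comparison; the wattage figures come from the Principled Technologies example cited above, while the ~$0.10/kWh utility rate is my own assumption (substitute your local rate):

```python
# Back-of-the-envelope check of the power comparison above.
# Wattages are from the cited example; the per-kWh rate is assumed.

RACK_SERVERS = 20        # 2U rack servers in a full rack
RACK_POWER_KW = 8.0      # total draw for the rack of 2U servers
BLADE_SERVERS = 32       # blade servers in the comparison
BLADE_POWER_KW = 6.66    # total draw for the blades

per_rack_server_w = RACK_POWER_KW / RACK_SERVERS * 1000   # watts per 2U server
per_blade_w = BLADE_POWER_KW / BLADE_SERVERS * 1000       # watts per blade

savings_kw = RACK_POWER_KW - BLADE_POWER_KW               # ~1.34 kW
HOURS_PER_YEAR = 24 * 365
RATE_PER_KWH = 0.10                                        # assumed utility rate, $/kWh

annual_savings = savings_kw * HOURS_PER_YEAR * RATE_PER_KWH

print(f"{per_rack_server_w:.0f} W per 2U server vs {per_blade_w:.0f} W per blade")
print(f"Saving {savings_kw:.2f} kW, about ${annual_savings:,.0f} per year")
```

At roughly 400W per 2U server versus about 208W per blade, the 1.34kW delta works out to a bit under $1,200 a year at the assumed rate, which is where the "few $k per year" figure comes from once you scale past a single rack.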
Reason #3 – Your Networking Team Won’t Allow Switching in a Blade Chassis
Unfortunately, there are organizations where the networking group dictates every protocol that touches the network. The reality is there are political battles you just can’t win, but when you look at the economics of using blade servers, I don’t understand why networking folks wouldn’t want them. Let’s assume a 48-port top-of-rack converged switch has a list price of $24,857. That equates to a price-per-port of roughly $518. Looking at the ports required at the top-of-rack switch, a rack of 20 x 2U servers would cost $10,357 in switch ports, whereas a chassis of blade servers could cost as little as $1,036. Critics would argue that you are adding a layer of complexity to the network fabric; however, there are devices like I/O Aggregators that can bridge the chassis network and the top-of-rack network to reduce that complexity, so don’t give up on blade servers just yet.
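The port-cost figures above can be reproduced with simple division. This sketch assumes one converged uplink per 2U server and two uplinks for the blade chassis (the chassis port count is my assumption; the switch list price is the one quoted above):

```python
# Rough recreation of the top-of-rack port-cost math above.
# Port counts per device are assumptions for illustration.

SWITCH_LIST_PRICE = 24_857   # 48-port converged ToR switch, list price
SWITCH_PORTS = 48

price_per_port = SWITCH_LIST_PRICE / SWITCH_PORTS        # ~$518/port

rack_server_ports = 20       # one converged port per 2U server, full rack
chassis_uplink_ports = 2     # assumed uplinks for one blade chassis

rack_cost = rack_server_ports * price_per_port           # ~$10,357
chassis_cost = chassis_uplink_ports * price_per_port     # ~$1,036

print(f"${price_per_port:,.0f}/port: rack ${rack_cost:,.0f} vs chassis ${chassis_cost:,.0f}")
```

The point is that blade chassis switching collapses 20 server-facing ports into a couple of uplinks, so the top-of-rack port spend drops by roughly an order of magnitude.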
Reason #4 – You Can’t Have a Single Point of Failure
This is a serious concern for many organizations. When discussing blade servers with customers, I sometimes get asked: isn’t the midplane of a chassis a single point of failure? The honest answer is yes. I don’t care which blade server vendor you look at, the midplane is a single point of failure in every case, simply because if it fails, you’d have to take the blade servers offline in order to service it. In reality, IF a midplane is going to fail, it will fail out of the factory. Very rarely will it fail after years in service, because the midplane is nothing but copper traces, so nothing short of a bolt of lightning should affect it. In my experience, customers who are concerned about the reliability of blade servers split workloads across multiple blade chassis, which ensures that if there is an outage, they are protected. If that idea doesn’t satisfy your concern, then stick with rack servers. (Like I said in the beginning, blade servers aren’t for everyone.)
Reason #5 – Your I/O Requirements Exceed the Limits of Blade Servers
This is another valid concern that I see often. If your environment requires 8 x Fibre Channel ports and 10 x 10GbE ports, then blade servers may not be a fit. But then again, do you really need that much I/O, or are the requirements set by a software vendor who tells everyone they need that much I/O? Also, consider combining the Fibre Channel and Ethernet workloads into a single, converged fabric. It’s a popular new trend and everyone is doing it…
In summary, there are real reasons that may necessitate using a server infrastructure other than blades, but my plea to you is this: give blade servers a chance. Find your nearest blade server vendor or reseller and ask for a blade server evaluation. Blade servers will save you money and time, and at the end of the day you might find that you actually like them.
————————————————————-
Kevin Houston is the founder and Editor-in-Chief of BladesMadeSimple.com. He has over 15 years of experience in the x86 server marketplace. Since 1997 Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization. Kevin works for Dell as a Server Sales Engineer covering the Global 500 market.
Disclaimer: The views presented in this blog are personal views and may or may not reflect any of the contributors’ employer’s positions. Furthermore, the content is not reviewed, approved or published by any employer.