Shared I/O – The Future of Blade Servers?

Last week, Blade.org invited me to their 3rd Annual Technology Symposium – an online event with speakers from APC, Blade Network Technologies, Emulex, IBM, NetApp, QLogic and Virtensys. Blade.org is a collaborative organization and developer community focused on accelerating the development and adoption of open blade server platforms. This year's Symposium focused on "the dynamic data center of the future". While there were many interesting topics (check out the replay here), the one that appealed to me most was "Shared I/O" by Alex Nicolson, VP and CTO of Emulex. Let me explain why.

While there are many people who would (and probably will) argue with me, blade servers are NOT for all workloads. When you take a look at the blade server ecosystem today, the biggest bottleneck you see is the limitation of onboard I/O. Without sacrificing server slots, the maximum amount of expansion you can achieve on nearly any blade server is 8 I/O ports (6 Ethernet + 2 storage). In addition, blade servers are often limited to 2 expansion cards, so if a customer requires redundant physical adapters, the room for expansion shrinks even further. Based on these observations, if you could remove the I/O from the server, these blade server limitations would be eliminated, allowing for the adoption of blade servers into more environments. This could be accomplished with shared I/O.

When you stop and think about the blade infrastructure design, no matter who the vendor is, it has been the same for the past 9 years. YES, the vendors have come out with better chassis designs that allow for "high-speed" connectivity, but the overall design is still the same: a blade server with CPUs, memory and I/O cards all on one system board. It's time for blade servers to evolve to a design where I/O is shared.

The idea behind shared I/O is simple: separate the I/O from the server. Instead of having storage adapters inside a blade server, you would have an I/O drawer outside the blade chassis containing the I/O adapters for the blade servers. No more I/O bottlenecks on your blade servers. Your I/O potential is (nearly) unlimited! The advantages to this design include:

  • More internal space for blade server design.  If the I/O – including the LAN on Motherboard – was moved off the server, there would be substantial space remaining for more CPUs, more RAM or even more disks.
  • Standardized I/O adapters no matter what blade vendor is used.  This is the thought that really excites me.  If you could remove the I/O from the blade server, you would be able to have IBM, Dell, HP and even Cisco in the same rack using the same I/O adapters.  Your investments would be limited to the blade chassis and servers.  Not only that, but as blade server architecture changes, you would be able to KEEP the investment you've made in your I/O adapters – or, on the flip side, as I/O adapter speeds increase, you could replace the adapters and keep your servers in place without having to buy a new adapter for every server.
  • Sharing of I/O adapters means FEWER adapters are needed.  In order for this design to be beneficial, the adapters would need to have the ability to be shared between servers.  This means that 1 storage HBA might provide resources for 6 servers – and as I/O adapter throughput continues to increase, sharing becomes even more attractive.  Let's face it – 10Gb is being discussed (and sold) today, but this time next year, 40Gb may be hot, and in 3 years, 100Gb may be on the market.  Technology will continue to evolve, and if the I/O adapters were separated from the servers, you would have the ability to share that technology across all of your servers.
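To make the "fewer adapters" argument concrete, here is a back-of-the-envelope sketch. This is a hypothetical toy model, not any vendor's implementation – the blade count, adapter counts and link speeds are illustrative assumptions, and it assumes shared bandwidth is evenly partitioned with no oversubscription:

```python
# Toy model: adapter count and per-server bandwidth for a hypothetical
# 14-blade chassis. All numbers are illustrative assumptions.

def traditional_design(blades, adapters_per_blade=2, gbps_per_adapter=8):
    """Each blade carries its own (redundant) storage adapters."""
    total_adapters = blades * adapters_per_blade
    per_server_gbps = adapters_per_blade * gbps_per_adapter
    return total_adapters, per_server_gbps

def shared_io_design(blades, shared_adapters=4, gbps_per_adapter=40):
    """A few fast adapters in an external I/O drawer, shared by all blades.

    Assumes the aggregate bandwidth is split evenly across the blades
    (no oversubscription, no switching overhead).
    """
    total_adapters = shared_adapters
    per_server_gbps = shared_adapters * gbps_per_adapter / blades
    return total_adapters, per_server_gbps

trad_adapters, trad_gbps = traditional_design(14)
shared_adapters, shared_gbps = shared_io_design(14)
print(trad_adapters, trad_gbps)      # 28 adapters, 16 Gb/s per server
print(shared_adapters, shared_gbps)  # 4 adapters, ~11.4 Gb/s per server
```

Even in this crude model, four shared 40Gb adapters replace twenty-eight per-blade adapters while delivering comparable per-server bandwidth – and upgrading the four drawer adapters upgrades every blade at once.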

Now, before you start commenting that this is old news and that companies like Xsigo have been offering virtual I/O products for a couple of years – hear me out.  The evolution I'm referring to is not a particular vendor providing proprietary options.  I imagine a blade ecosystem across all the vendors that allows for a standardized I/O platform, providing guidelines for any blade server to connect to a shared I/O drawer made by any vendor.  Yes, this may be an unrealistic Nirvana, but look at USB.  Every vendor provides USB ports natively out of the chassis without any modifications, so why can't we get to the same point with shared I/O connectivity?

So, what do you think?  Am I crazy, or do you think blade server technology will evolve to allow for a separation of I/O?  Share your thoughts in the comments below.
