In 1965, Gordon Moore observed that engineers were doubling the number of components on a microchip at a regular pace. Known as Moore’s law, his prediction has held up – processors continue to get faster each year while their components grow smaller and smaller. In the footprint of the original ENIAC computer, we can today fit thousands of CPUs that deliver trillions more computations per second at a fraction of the cost. This continued trend is allowing server manufacturers to shrink the footprint of the typical x86 blade server, making room for more I/O expansion, more CPUs, and more memory. Will this trend allow blade servers to gain market share, or could it possibly be the end of rack servers? My vision of the next-generation data center could answer that question.
With all the attention paid to the competition between blade server manufacturers and to the growth of the blade server market in general, is there room for another type of condensed computing in the data center? Have we been going about architecture design all wrong?
Nutanix thinks so.
Nutanix is a start-up geared towards delivering a simplified virtualization infrastructure, with a strong focus on eliminating the need for a SAN. Their clustered solution brings storage and compute together, which in theory reduces expense, reduces complexity, and improves performance. On its own that doesn’t seem especially innovative; the secret sauce is how they make the cluster scale and tier/span data across all nodes without sacrificing performance. Each node has the usual compute resources plus a mix of local SSD and SATA hard disks. Four nodes fit in a 2U enclosure called a “block”; add more blocks and you have a Nutanix cluster. The software stack scales and balances everything across the nodes and blocks. The technology originated from the architecture that companies like Google and Facebook employ in their data centers. Assuming that can be taken at face value, the scalability potential is phenomenal.
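To make the scale-out model above concrete, here is a minimal sketch of how cluster capacity grows linearly as blocks are added. The four-nodes-per-2U-block figure comes from the description above; the per-node resource numbers are purely hypothetical examples, not Nutanix specifications.

```python
NODES_PER_BLOCK = 4  # each 2U Nutanix "block" holds four nodes (per the post)

def cluster_totals(blocks: int, per_node: dict) -> dict:
    """Aggregate resources across a cluster of identical nodes.

    Scale-out means capacity is simply per-node resources
    multiplied by the total node count.
    """
    nodes = blocks * NODES_PER_BLOCK
    return {resource: nodes * qty for resource, qty in per_node.items()}

# Hypothetical per-node resources (illustrative only):
per_node = {"cores": 8, "ram_gb": 48, "ssd_gb": 320, "sata_tb": 5}
print(cluster_totals(3, per_node))
# → {'cores': 96, 'ram_gb': 576, 'ssd_gb': 3840, 'sata_tb': 60}
```

The point of the sketch is that there is no central SAN to size up front: adding a block adds compute and storage in lockstep.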
So what’s the big deal?
Well, my thinking is that if you can eliminate the need for a SAN (for virtualization), then you can certainly eliminate the need for an enclosure of blade servers. No interconnects. No enclosure. Simplified network architecture. No SAN. What’s not to love?
Many people are confused as to why so many I/O modules are needed within a given blade chassis. The basic concept is simple (in most cases): for each port you need on a given blade server, you need a corresponding I/O module. For example, if you need 4 NICs, you’re going to need 4 Ethernet modules (in most cases). In today’s post, I thought I would keep it simple and publish the I/O diagrams of the Cisco, Dell, HP and IBM chassis. Of course, I am human and “have been known to make mistakes – from time to time,” so please feel free to correct me on any errors you see. Enjoy.
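The port-to-module rule above can be sketched in a few lines. This is a simplification of the "one module per blade port" case described in the post, ignoring vendor-specific exceptions; the fabric names and port counts are illustrative examples, not actual chassis configurations.

```python
def modules_required(blades: list[dict]) -> dict:
    """Estimate chassis I/O modules under the simple rule that each
    port on a blade needs a matching module in the chassis.

    Modules are shared across blades, so the chassis must cover the
    blade with the highest port count on each fabric.
    """
    needed: dict = {}
    for ports in blades:
        for fabric, count in ports.items():
            needed[fabric] = max(needed.get(fabric, 0), count)
    return needed

# Illustrative example: one blade with 4 NICs, one with 2 NICs + 2 FC HBAs
blades = [{"ethernet": 4}, {"ethernet": 2, "fibre_channel": 2}]
print(modules_required(blades))
# → {'ethernet': 4, 'fibre_channel': 2}
```

So a blade needing 4 NICs drives the chassis to 4 Ethernet modules, exactly as in the example above, even if other blades use fewer ports.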
(Updated 8/3/2011 – fixed Dell M1000e Full Height I/O Diagram)
What if you could run a graphics-intensive application like CAD hosted in Chicago while you were sitting in Atlanta? What if you could work on a multi-million-dollar animated feature film from the comfort of your home? These scenarios and more could be possible with the HP WS460c G6 Workstation Blade.