Monthly Archives: October 2011

Why Blade Servers Will be the Core of Future Data Centers

In 1965, Gordon Moore predicted that engineers would be able to double the number of components on a microchip every two years. Known as Moore’s law, his prediction has come true – processors continue to get faster each year while their components become smaller and smaller. In the footprint of the original ENIAC computer, we can today fit thousands of CPUs that offer a trillion more computations per second at a fraction of the cost. This continued trend is allowing server manufacturers to shrink the footprint of the typical x86 blade server, making room for more I/O expansion, more CPUs, and more memory. Will this trend allow blade servers to gain market share, or could it even spell the end of rack servers? My vision of the next-generation data center could answer that question.
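
For a rough sense of what a two-year doubling period implies, here is a minimal back-of-the-envelope sketch in Python; the doubling period and year range are the only inputs, and the numbers are purely illustrative rather than actual transistor counts:

```python
# Illustrative only: rough Moore's-law growth multiple between two years,
# assuming a clean doubling of on-chip component counts every two years.
def moores_law_factor(start_year: int, end_year: int, doubling_period: float = 2.0) -> float:
    """Return the predicted growth multiple in on-chip component count."""
    return 2 ** ((end_year - start_year) / doubling_period)

# From the 1965 prediction to this post (2011) is 23 doublings,
# i.e. a growth factor in the millions.
print(f"1965 -> 2011 growth factor: {moores_law_factor(1965, 2011):,.0f}x")
```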

Continue reading

Nutanix Cluster: Disruptive to Blade Server Market?


With all the attention paid to the competition between blade server manufacturers and the growth of the blade server market in general, is there room for another type of condensed computing in the data center? Have we been going about architecture design all wrong?

Nutanix thinks so.

Nutanix is a start-up company geared towards delivering a simplified virtualization infrastructure, with a strong focus on eliminating the need for a SAN. Their clustered solution brings storage and compute together, which theoretically reduces expense, reduces complexity, and improves performance. On its own that doesn’t seem especially innovative, but the secret sauce is how they make the cluster scale and tier/span data across all nodes without sacrificing performance. Each node has the usual compute resources plus a mix of local SSD and SATA hard disks. Four nodes fit in each 2U enclosure, called a “block”; add more blocks and you have a Nutanix cluster. The software stack scales and balances everything across the nodes and blocks. The technology originated from the architecture that companies like Google and Facebook employ in their data centers. Assuming that can be taken at face value, the scalability potential is phenomenal.
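
To make that scale-out model concrete, here is a rough conceptual sketch in Python of the building blocks described above. This is not Nutanix code; the per-node core, memory, and disk figures are placeholder assumptions rather than published specifications, and the sketch only models how capacity grows linearly as blocks are added:

```python
# Conceptual sketch only -- not Nutanix software. A "block" is a 2U enclosure
# holding four nodes, each with compute plus local SSD and SATA disks; a
# cluster is simply a list of blocks, so capacity scales out block by block.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Node:
    cpu_cores: int = 12      # illustrative per-node compute, not real specs
    ram_gb: int = 96
    ssd_gb: int = 320        # fast tier (local SSD)
    sata_gb: int = 5000      # capacity tier (local SATA)


@dataclass
class Block:
    """One 2U enclosure containing four nodes."""
    nodes: List[Node] = field(default_factory=lambda: [Node() for _ in range(4)])


@dataclass
class Cluster:
    blocks: List[Block] = field(default_factory=list)

    def add_block(self) -> None:
        # Scale out: add another 2U block of four nodes.
        self.blocks.append(Block())

    def totals(self) -> Dict[str, float]:
        nodes = [n for b in self.blocks for n in b.nodes]
        return {
            "nodes": len(nodes),
            "cores": sum(n.cpu_cores for n in nodes),
            "ssd_tb": sum(n.ssd_gb for n in nodes) / 1000,
            "sata_tb": sum(n.sata_gb for n in nodes) / 1000,
        }


cluster = Cluster()
for _ in range(3):           # three blocks = 6U of rack space, 12 nodes
    cluster.add_block()
print(cluster.totals())
```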

So what’s the big deal?

Well, my thinking is that if you can eliminate the need for a SAN (for virtualization), then you can definitely eliminate the need for an enclosure of blade servers. No interconnects. No enclosure. Simplified network architecture. No SAN. What’s not to love?

Continue reading

Dell Network Daughter Card (NDC) and Network Partitioning (NPAR) Explained

If you are a reader of BladesMadeSimple, you are no stranger to Dell’s Network Daughter Card (NDC), but if it is a new term for you, let me give you the basics. Up until now, blade servers came with network interface cards (NICs) pre-installed as part of the motherboard. Most servers came standard with dual-port 1Gb Ethernet NICs on the motherboard, so if you invested in 10Gb Ethernet (10GbE) or other converged technologies, the onboard NICs were stuck at 1Gb Ethernet. As technology advanced and 10Gb Ethernet became more prevalent in the data center, blade servers entered the market with 10GbE standard on the motherboard. If, however, you weren’t implementing 10GbE, then you found yourself paying for technology you couldn’t use. Basically, whatever came standard on the motherboard is what you were stuck with – until now.
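
To illustrate the NPAR idea named in the title, here is a rough Python sketch of carving a single 10GbE port into several partitions with minimum-bandwidth weights. This is not a Dell tool or API; the partition names, the weights, and the rule that weights on a port total 100% are shown as illustrative assumptions:

```python
# Illustrative sketch of the NPAR concept only -- not a Dell configuration tool.
# The idea: one physical 10GbE port is presented to the OS as several
# partitions, each with a minimum-bandwidth weight and an optional cap.
from dataclasses import dataclass
from typing import List

PORT_SPEED_GBPS = 10


@dataclass
class Partition:
    name: str
    min_weight_pct: int   # guaranteed share of the port under contention
    max_pct: int = 100    # optional ceiling


def validate_port(partitions: List[Partition]) -> None:
    # Assumed rule for this sketch: minimum weights on one port total 100%.
    total = sum(p.min_weight_pct for p in partitions)
    if total != 100:
        raise ValueError(f"minimum weights must total 100%, got {total}%")


def guaranteed_gbps(p: Partition) -> float:
    return PORT_SPEED_GBPS * p.min_weight_pct / 100


# Hypothetical split of one 10GbE port: LAN, iSCSI, vMotion, management.
port1 = [
    Partition("LAN traffic", 40),
    Partition("iSCSI storage", 40),
    Partition("vMotion", 15),
    Partition("Management", 5),
]
validate_port(port1)
for p in port1:
    print(f"{p.name:15s} guaranteed {guaranteed_gbps(p):.1f} Gbps, capped at {p.max_pct}%")
```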

Continue reading