Tag Archives: cna

Why Blade Servers Will be the Core of Future Data Centers

In 1965, Gordon Moore predicted that engineers would be able to double the number of components on a microchip every two years.  Known as Moore’s law, his prediction has come true – processors continue to get faster each year while their components become smaller and smaller.  In the footprint of the original ENIAC computer, we can today fit thousands of CPUs that offer a trillion more computations per second at a fraction of the cost.  This continued trend is allowing server manufacturers to shrink the footprint of the typical x86 blade server, making room for more I/O expansion, more CPUs and more memory.  Will this continued trend allow blade servers to gain market share, or could it possibly be the end of rack servers?  My vision of the next generation data center could answer that question.
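To put some rough numbers behind that doubling (purely illustrative, using the often-quoted ~2,300-transistor Intel 4004 from 1971 as a starting point), here is a quick Python sketch:

# Illustrative only: project component counts under Moore's law,
# doubling roughly every two years. The 1971 starting point
# (Intel 4004, ~2,300 transistors) is a commonly cited reference.

def moores_law_projection(start_year, start_count, end_year, doubling_period=2):
    """Return the projected component count after repeated doublings."""
    doublings = (end_year - start_year) / doubling_period
    return start_count * (2 ** doublings)

projected = moores_law_projection(1971, 2300, 2011)
print(f"Projected transistors per chip in 2011: {projected:,.0f}")
# ~2,300 x 2^20 is roughly 2.4 billion, the same order of magnitude as
# the multi-billion-transistor server CPUs shipping around that time.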

Continue reading


Dell Network Daughter Card (NDC) and Network Partitioning (NPAR) Explained

If you are a reader of BladesMadeSimple, you are no stranger to Dell’s Network Daughter Card (NDC), but if it is a new term for you, let me give you the basics. Up until now, blade servers came with network interface cards (NICs) pre-installed as part of the motherboard.  Most servers came standard with dual-port 1Gb Ethernet NICs on the motherboard, so if you invested in 10Gb Ethernet (10GbE) or other converged technologies, the onboard NICs were stuck at 1Gb Ethernet.  As technology advanced and 10Gb Ethernet became more prevalent in the data center, blade servers entered the market with 10GbE standard on the motherboard.  If, however, you weren’t implementing 10GbE, then you found yourself paying for technology that you couldn’t use.  Basically, whatever came standard on the motherboard is what you were stuck with – until now.
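To make partitioning a little more concrete: NPAR presents slices of a single 10GbE port to the operating system as separate NICs, each with its own share of the port’s bandwidth. Here is a minimal Python sketch of that idea; the partition names, the four-partition count and the percentage weights are my own illustrative assumptions, not Dell’s tooling or defaults:

# Minimal sketch (not Dell's management tooling): model one 10GbE port
# partitioned into several logical NICs, each with a relative bandwidth
# weight. Partition names and weights are illustrative values only.

from dataclasses import dataclass

@dataclass
class Partition:
    name: str
    weight_pct: int    # relative (minimum) share of the 10GbE port
    max_pct: int = 100 # optional cap on the partition's bandwidth

PORT_SPEED_GBPS = 10

def show_allocation(partitions):
    """Check the weights cover the whole port, then print each share."""
    total = sum(p.weight_pct for p in partitions)
    if total != 100:
        raise ValueError(f"weights must total 100%, got {total}%")
    for p in partitions:
        guaranteed = PORT_SPEED_GBPS * p.weight_pct / 100
        cap = PORT_SPEED_GBPS * p.max_pct / 100
        print(f"{p.name}: guaranteed {guaranteed:.1f} Gb, capped at {cap:.1f} Gb")

show_allocation([
    Partition("management", 10),
    Partition("live migration", 20),
    Partition("iSCSI", 30),
    Partition("VM traffic", 40),
])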

Continue reading


Why Are Dell’s Blade Servers “Different”?

I’ve learned over the years that it is very easy to focus on the feeds and speeds of a server while overlooking the features that truly differentiate it.  When you take a look under the covers, a server’s CPU and memory are going to be equal to the competition’s, so the innovation that goes into the server is where the focus should be.  On Dell’s community blog, Rob Bradfield, a Senior Blade Server Product Line Consultant in Dell’s Enterprise Product Group, discusses some of the innovation and reliability that goes into Dell blade servers.  I encourage you to take a look at Rob’s blog post at http://dell.to/mXE7iJ. Continue reading


Dell Announces Blade Refresh and NIC Partitioning (NPAR)

Dell announced today a refresh of the PowerEdge M910 blade server based on the Intel Xeon E7 processor.  The M910 is a full-height blade that can hold 512GB of RAM across 32 DIMMs.   The refreshed M910 blade server will also feature Dell’s FlexMem bridge that enables users to use all 32 DIMM slots with only 2 CPUs.  You can read more about the M910 blade server in an earlier blog post of mine here.
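A quick back-of-the-envelope check on those memory numbers; the slots-per-socket figure below is my own assumption for illustration:

# Back-of-the-envelope math for the M910 memory claim. The
# slots-per-socket value is assumed for illustration only.

TOTAL_RAM_GB = 512
DIMM_SLOTS = 32
SLOTS_PER_CPU = 8  # assumed: 32 slots spread across 4 sockets

dimm_size_gb = TOTAL_RAM_GB / DIMM_SLOTS
print(f"DIMM size needed to reach 512GB: {dimm_size_gb:.0f} GB")  # 16 GB

# Without a FlexMem-style bridge, two populated CPUs would only reach
# the slots wired to their own sockets:
reachable_without_bridge = 2 * SLOTS_PER_CPU * dimm_size_gb
print(f"RAM reachable with 2 CPUs and no bridge: {reachable_without_bridge:.0f} GB")  # 256 GB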

According to the Dell press release issued today Continue reading


Dell Announces Converged 10GbE Switch for M1000e

Updated 1/27/2011
Dell quietly announced the addition of a 10 Gigabit Ethernet (10GbE) switch module, known as the M8428-k.  This blade module advertises low-latency (600 ns), wire-speed 10GbE performance, Fibre Channel over Ethernet (FCoE) switching, and low-latency 8 Gb Fibre Channel (FC) switching and connectivity. Continue reading


More HP and IBM Blade Rumours

I wanted to post a few more rumours before I head out to HP in Houston for “HP Blades and Infrastructure Software Tech Day 2010” so it doesn’t appear that I got the info from HP. NOTE: this is purely speculation; I have no definitive information from HP, so this may be false info.

First off – the HP Rumour:
I’ve caught wind of a secret that may be true, may be fiction, but I hope to find out for sure from the HP blade team in Houston.  The rumour is that HP’s development team currently has a Cisco Nexus Blade Switch Module for the HP BladeSystem in their lab and is testing it out.

Now, this seems far-fetched, especially with the news of Cisco severing partner ties with HP. However, it seems that news tidbit was only about products sold with the HP label but made by Cisco (OEM).  HP will continue to sell Cisco Catalyst switches for the HP BladeSystem and even Cisco-branded Nexus switches with HP part numbers (see this HP site for details).  I have some doubt about this rumour of a Cisco Nexus switch that would go inside the HP BladeSystem, simply because I am 99% sure that HP is announcing a Flex10-type BladeSystem switch that will allow converged traffic to be split out, with the Ethernet traffic going to the Ethernet fabric and the Fibre traffic going to the Fibre fabric (check out this rumour blog I posted a few days ago for details).  I guess only time will tell.
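For readers wondering how a converged switch could “split out” traffic at all: in FCoE, Fibre Channel frames ride inside Ethernet frames with their own EtherType, so a switch can steer them by inspecting that field. Here is a tiny, purely illustrative Python sketch of the idea; it is not HP’s (or anyone’s) actual switch logic:

# Purely illustrative: classify converged frames by EtherType and send
# FCoE traffic toward the Fibre Channel fabric, everything else toward
# the Ethernet fabric. Not any vendor's actual switch implementation.

ETHERTYPE_FCOE = 0x8906  # FCoE data frames
ETHERTYPE_FIP  = 0x8914  # FCoE Initialization Protocol

def route_frame(ethertype: int) -> str:
    """Return the fabric a frame with this EtherType should be sent to."""
    if ethertype in (ETHERTYPE_FCOE, ETHERTYPE_FIP):
        return "fibre-channel-fabric"
    return "ethernet-fabric"

for et in (0x0800, ETHERTYPE_FCOE, ETHERTYPE_FIP):  # 0x0800 = IPv4
    print(f"EtherType 0x{et:04X} -> {route_frame(et)}")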

The IBM Rumour:
A few days ago I posted a rumour blog discussing the rumour that HP’s next generation of blades will add Converged Network Adapters (CNAs) to the motherboard (in lieu of the 1Gb or Flex10 NICs).  Well, now I’ve uncovered a rumour that IBM is planning to follow later this year with blades that will also have CNAs on the motherboard.  This is huge!  Let me explain why.

In the design of IBM’s BladeCenter E and BladeCenter H, the onboard 1Gb NICs on each blade server are hard-wired to I/O Bays 1 and 2 – meaning only Ethernet modules can be used in these bays (see the image to the left for details).  However, I/O Bays 1 and 2 are for “standard form factor I/O modules” while I/O Bays 7 – 10 are for “high speed form factor I/O modules”.  This means that I/O Bays 1 and 2 cannot handle “high speed” traffic, i.e. converged traffic.

 This means that IF IBM comes out with a blade server that has a CNA on the motherboard, either:

 a) the blade’s CNA will have to route to I/O Bays 7-10
OR
b) IBM’s going to have to come out with a new BladeCenter chassis that allows the high speed converged traffic from the CNAs to connect to a high speed switch module in Bays 1 and 2.
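To make that constraint concrete, here is a small Python sketch of the routing rules described above. The bay numbers come from the chassis design as described in this post; the data structure itself is just my illustration:

# Illustrative only: onboard 1Gb NICs are hard-wired to the standard
# form factor bays (1-2), while high speed (converged) modules live in
# bays 7-10, so an onboard CNA has nowhere to go but the high speed bays.

STANDARD_BAYS = {1, 2}           # standard form factor, Ethernet only
HIGH_SPEED_BAYS = {7, 8, 9, 10}  # high speed form factor modules

def bays_for_onboard_adapter(adapter: str) -> set:
    """Return which I/O bays can terminate a given onboard adapter."""
    if adapter == "1Gb NIC":
        return STANDARD_BAYS
    if adapter == "CNA":
        # Option a) above: converged traffic must route to bays 7-10
        # unless a new chassis accepts high speed modules in bays 1-2.
        return HIGH_SPEED_BAYS
    raise ValueError(f"unknown adapter type: {adapter}")

print("1Gb NIC ->", sorted(bays_for_onboard_adapter("1Gb NIC")))
print("CNA     ->", sorted(bays_for_onboard_adapter("CNA")))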

So let’s think about this.  If IBM (and HP for that matter) does put CNAs on the motherboard, is there still a need for additional mezzanine/daughter cards?  If not, the blade servers could have more real estate for memory or more processors.  And if there are no extra daughter cards, then there’s no need for additional I/O module bays.  This means the blade chassis could be smaller and use less power – something every customer would like to have.

I can really see the blade market moving toward this type of design (not surprisingly, very similar to Cisco’s UCS design) – one where only a pair of redundant “modules” is needed to split converged traffic out to the respective fabrics.  Maybe it’s all a pipe dream, but when it comes true in 18 months, you can say you heard it here first.

Thanks for reading.  Let me know your thoughts – leave your comments below.

