Tag Archives: converged network adapter

Dell Announces Blade Refresh and NIC Partitioning (NPAR)

Dell today announced a refresh of the PowerEdge M910 blade server based on the Intel Xeon E7 processor. The M910 is a full-height blade that can hold 512GB of RAM across 32 DIMMs. The refreshed M910 blade server will also feature Dell’s FlexMem Bridge, which lets users populate all 32 DIMM slots with only 2 CPUs installed. You can read more about the M910 blade server in an earlier blog post of mine here.

According to the Dell press release issued today… Continue reading
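
NPAR itself is worth a quick illustration. It lets each 10GbE port on the adapter be carved into up to four logical NICs, each with a minimum-bandwidth weight, and each able to burst to line rate when the other partitions are idle. Here is a minimal sketch of that allocation model; the partition names and weights are hypothetical, and the real configuration lives in the adapter’s firmware setup, not in code:

```python
# Minimal sketch of the NPAR allocation model. Partition names and weights
# are hypothetical; real NPAR configuration is done in the adapter's
# firmware/BIOS setup, not in Python.

PORT_SPEED_GBPS = 10
MAX_PARTITIONS = 4

def check_npar_config(partitions):
    """Validate a list of (name, weight) pairs: at most four partitions,
    with minimum-bandwidth weights (in percent) summing to 100."""
    if len(partitions) > MAX_PARTITIONS:
        raise ValueError(f"NPAR allows at most {MAX_PARTITIONS} partitions per port")
    total = sum(weight for _, weight in partitions)
    if total != 100:
        raise ValueError(f"weights must total 100, got {total}")
    for name, weight in partitions:
        guaranteed = PORT_SPEED_GBPS * weight / 100
        print(f"{name:8s} guaranteed {guaranteed:.1f} Gbps, "
              f"burstable to {PORT_SPEED_GBPS} Gbps when the port is idle")

# Example: one 10GbE port shared by LAN, vMotion, iSCSI, and management.
check_npar_config([("LAN", 40), ("vMotion", 30), ("iSCSI", 20), ("mgmt", 10)])
```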

Dell Announces Converged 10GbE Switch for M1000e

Updated 1/27/2011
Dell quietly announced the addition of a 10 Gigabit Ethernet (10GbE) switch module known as the M8428-k. This blade module advertises wire-speed 10GbE performance with 600 ns latency, Fibre Channel over Ethernet (FCoE) switching, and low-latency 8Gb Fibre Channel (FC) switching and connectivity. Continue reading

HP's Well Hidden Secret Blade Server

BL2x220c G5 (2 server "nodes" shown)

HP’s BladeSystem server offering is quite extensive – everything from a 4-CPU Intel blade to an Itanium CPU blade – however their most well-hidden, secret blade is the BL2x220c blade server. Starting at $6,129, this blade server is an impressive feat of design because it is not just 1 server, it is 2 servers in 1 blade case, in a clamshell design (see below). This means that in an HP C7000 BladeSystem chassis you could have 32 servers! That’s 64 CPUs, 256 cores, and 1TB of RAM, all in a 10U rack space (the quick math after the spec list below bears this out). That’s pretty impressive. Let me break it down for you. Each “node” on a single 2-node BL2x220c G5 server contains:

  • Up to two Quad-Core Intel® Xeon® 5400 series processors
  • Up to 32 GB (4 x 8 GB) of memory, supported by (4) slots of PC2-5300 Registered DIMMs, 667 MHz
  • 1 non-hot plug small form factor SATA or Solid State hard drive
  • Embedded Dual-port NC326i Gigabit Server Adapter
  • One (1) I/O expansion slot via mezzanine card
  • One (1) internal USB 2.0 connector for security key devices and USB drive keys
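
To back up the density claim above, here is the quick math, using only the per-node numbers from the spec list (a back-of-the-envelope sketch, not HP’s official figures):

```python
# Back-of-the-envelope density math for a C7000 full of BL2x220c G5 blades,
# using the per-node specs listed above (numbers from the post, not HP docs).

HALF_HEIGHT_BAYS = 16          # C7000 half-height blade bays
NODES_PER_BLADE = 2            # two server nodes per BL2x220c enclosure
CPUS_PER_NODE = 2              # up to two Xeon 5400-series sockets
CORES_PER_CPU = 4              # quad-core
MAX_RAM_PER_NODE_GB = 32       # 4 x 8 GB DIMMs

nodes = HALF_HEIGHT_BAYS * NODES_PER_BLADE
cpus = nodes * CPUS_PER_NODE
cores = cpus * CORES_PER_CPU
ram_tb = nodes * MAX_RAM_PER_NODE_GB / 1024

print(f"{nodes} servers, {cpus} CPUs, {cores} cores, {ram_tb:.0f} TB RAM in 10U")
# -> 32 servers, 64 CPUs, 256 cores, 1 TB RAM in 10U
```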

You may have noticed that this server is a “G5” version and currently has the older Intel 5400-series processors.  Based on HP’s current blade offering, expect to see HP refresh this server to a “G6” model containing the Intel® Xeon® 5500-series processors.  Once that happens, I expect more memory slots to come with it, since the Intel® Xeon® 5500-series processors have 3 memory channels.  I’m guessing 12 memory slots “per node,” or 24 memory slots per BL2x220c G6.  Purely speculation on my part, but it would make sense.
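
A quick sanity check on that guess: three memory channels per Xeon 5500 socket, times two DIMMs per channel (my assumption), times two sockets per node. Everything here except the channel count is speculation:

```python
# Sanity check on the G6 memory-slot guess above: three channels per
# Xeon 5500 socket, times a (guessed) two DIMMs per channel, times two
# sockets per node. All numbers except the channel count are speculation.

channels_per_socket = 3
dimms_per_channel = 2      # my guess; some boards do more or fewer
sockets_per_node = 2
nodes_per_blade = 2

slots_per_node = channels_per_socket * dimms_per_channel * sockets_per_node
print(slots_per_node, slots_per_node * nodes_per_blade)  # 12 per node, 24 per blade
```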

Why do I consider this server to be one of HP’s best-hidden secrets?  Simply because, with that amount of server density, processing power, and memory, the BL2x220c could become a perfect virtualization server.   Now if they’d only make a converged network adapter (CNA) for it…

How IBM's BladeCenter works with Cisco Nexus 5000

With Cisco’s recent announcement of the Nexus 4000 switch for blade chassis environments, I thought it would be good to discuss how IBM is able to connect blade servers via 10Gb Data Center Ethernet (or Converged Enhanced Ethernet) to a Cisco Nexus 5000.

Other than Cisco’s UCS offering, IBM is currently the only blade vendor that offers a Converged Network Adapter (CNA) for its blade servers.  The 2-port CNA sits in a PCI Express slot on the server and is mapped to the high-speed bays, with CNA port #1 going to high-speed bay #7 and CNA port #2 going to high-speed bay #9.  Here’s an overview of the IBM BladeCenter H I/O architecture (click to open the larger image):

IBM BladeCenter H I/O Architecture
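
To make the wiring concrete, here is the port-to-bay mapping described above as a tiny sketch; the mapping itself comes from the BladeCenter H architecture, while the code around it is just illustration:

```python
# The fixed wiring described above, as data: each blade's CNA ports are
# hard-wired to high-speed bays 7 and 9, so only those bays ever carry
# the converged traffic.

CNA_PORT_TO_BAY = {1: 7, 2: 9}  # CNA port number -> BladeCenter H high-speed bay

for port, bay in CNA_PORT_TO_BAY.items():
    print(f"CNA port #{port} -> high-speed bay #{bay}")
```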

Since the CNAs are wired only to I/O bays 7 and 9, those are the only bays that require a “switch” for the converged traffic to leave the chassis.  At this time, the only option for getting the converged traffic out of the IBM BladeCenter H is a 10Gb “pass-thru” module.  A pass-thru module is not a switch – it simply passes the signal through to the next layer, in this case the Cisco Nexus 5000.

10 Gb Ethernet Pass-thru Module for IBM BladeCenter

The pass-thru module is relatively inexpensive, but it requires a connection to the Nexus 5000 for every server that has a CNA installed.  As a reminder, the IBM BladeCenter H can hold up to 14 servers with CNAs installed, which would consume 14 of the 20 ports on a Nexus 5010.  That is a small price to pay, however, for the 80% efficiency that 10Gb Data Center Ethernet (or Converged Enhanced Ethernet) offers.  The overall architecture for the IBM blade server with CNA + IBM BladeCenter H + Cisco Nexus 5000 would look like this (click to open the larger image):
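
The port math is worth spelling out. With a pass-thru module doing no aggregation, every CNA-equipped blade burns one Nexus 5010 port; a simple sketch of the numbers above:

```python
# Port budget on the Nexus 5010 when a pass-thru module (no aggregation)
# connects the chassis: one upstream port per CNA-equipped blade.

NEXUS_5010_PORTS = 20
MAX_BLADES_WITH_CNA = 14   # BladeCenter H capacity

ports_used = MAX_BLADES_WITH_CNA
ports_left = NEXUS_5010_PORTS - ports_used

print(f"{ports_used} of {NEXUS_5010_PORTS} Nexus 5010 ports used; "
      f"{ports_left} left for uplinks and storage connectivity")
```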

BladeCenter H Diagram: 6 x 10Gb Uplinks

Hopefully, when IBM announces its Cisco Nexus 4000 switch for the IBM BladeCenter H later this month, it will provide connectivity to the CNAs on the IBM blade servers and help consolidate the number of connections required to the Cisco Nexus 5000 from 14 down to perhaps 6 ;)
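
For what it’s worth, here is the rough oversubscription math behind that guess; the 6-uplink figure is my speculation, not an announced spec:

```python
# Rough oversubscription math if an in-chassis switch (e.g. the expected
# Nexus 4000 module) aggregated 14 blade-facing 10Gb ports into 6 x 10Gb
# uplinks to the Nexus 5000. The 6-uplink figure is a guess, not a spec.

blades, uplinks, link_gbps = 14, 6, 10

downstream_gbps = blades * link_gbps   # 140 Gb of server-facing bandwidth
upstream_gbps = uplinks * link_gbps    # 60 Gb toward the Nexus 5000

print(f"oversubscription: {downstream_gbps / upstream_gbps:.2f}:1")  # ~2.33:1
```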