Dell announced today the addition of a full-height, 4-socket PowerEdge M915 blade server based on the AMD Opteron 6100 series CPU family.  Best known by the code name “Magny-Cours”, this CPU family boasts up to 12 CPU cores with 512KB of L2 cache per core and 12MB of shared L3 cache.  The AMD Opteron 6100 family also includes AMD CoolCore™ technology, AMD PowerNow!™ technology, Enhanced C1 state, and AMD CoolSpeed technology.

Dell PowerEdge M915 Specs

  • 4 x AMD Opteron 6100 CPUs
  • 32 DIMM slots (up to 512GB using 16GB DIMMs)
  • 2 x hot-swap drive bays supporting 2.5” SATA SSDs or SAS drives (15K, 10K)
  • RAID options include the PERC H200 (6Gb/s) or the PERC H700 (6Gb/s) with 512MB or 1GB non-volatile battery-backed cache
  • Integrated Matrox® G200eW with 8MB memory
  • iDRAC6 Enterprise (standard)
  • Optional Dual-Media Redundant Hypervisor
  • 4 x Mezzanine Card Slots for I/O Expansion
  • Flexible Network Daughter Card offering the choice of GbE network interface cards (NICs) or 10Gb converged network adapters (CNAs)

Why Another AMD Blade?

When the Dell Product Marketing team told me about this new offering, my first question was why they were making another AMD blade when IBM appears to be dropping its AMD portfolio and Cisco only offers Intel blades.  HP is the only other Tier 1 manufacturer continuing to make AMD-based blades.  The reason is two-fold:

1) Dell wants to provide an upgrade option to their existing AMD customers. When virtualization became “hot” a few years ago, AMD’s architecture was a better choice.  Now those AMD systems are coming to the end of their lease and need to be refreshed.  Since most AMD customers prefer to stay with AMD, it only made sense for Dell to create an offering.

2) Dell wants to stay competitive with HP.
Realistically, if Dell didn’t make a 4 socket AMD Opteron 6100 series blade, HP would corner the market.

 

Why Now?

My next question to the Dell Product Marketing team was: why now?  HP announced its 4-socket AMD Opteron 6100 series blade, the ProLiant BL685c G7, at last year’s HP Tech Forum in Vegas – so why is Dell so late to the game?  Again, there were a couple of answers to this question:

1) Dell wanted to wait until the timing was right.
The industry is in a gap right now between the announcement of the Intel E7 CPU last quarter and the expected product announcement of the Intel Sandy Bridge at the end of the year.  Since there’s not a lot being announced, this is a great time to bring a new product to market.

2) Dell wanted to wait until they had all of their competitive features “fully baked.”
No explanation needed there – let’s jump into what makes the M915 competitive.

 

Why the Dell PowerEdge M915 over the HP BL685c G7?

When it comes down to the AMD architecture, any given blade server is practically the same as the next.  So I asked the Dell Product Marketing team: what makes the Dell PowerEdge M915 different from the competition?

  • Flexibility of the Network Daughter Card
    The “Network Daughter Card,” or NDC, is the blade server’s LAN on Motherboard (LOM) moved onto a removable daughter card, very similar to the mezzanine cards.  This is really cool stuff because the design allows a user to change their blade server’s on-board I/O as their network grows.  For example, many IT environments today are standardized on 1Gb networks for server connectivity; however, 10Gb connectivity is becoming more and more prevalent.  When users move from 1Gb to 10Gb in their blade environments, the NDC design gives them the ability to upgrade the onboard network controller from 1Gb to 10Gb, protecting their investment.
  • With up to 12 x 10Gb Ports, the M915 Offers More I/O
    With 4 x 10Gb ports on the Network Daughter Card and 4 mezzanine card slots on the Dell PowerEdge M915, it’s possible to get 12 x 10Gb ports (4 ports on the NDC + 4 mezzanine cards with 2 ports each).
  • Network Partitioning (NPAR)
    The “network partitioning,” or NPAR, scheme makes it possible to split the 10GbE pipe with granularity, free of any fabric vendor lock-in.  NPAR enables optimal use of physical network links, allowing each 10GbE port to be carved up into multiple physical NICs without the use of software and without any CPU overhead.  For example, each 10GbE port can be divided into up to four physical NICs totaling 10Gb, offering more flexibility.  The NPAR scheme is handled by the Unified Server Configurator, enabled by the Lifecycle Controller embedded on the server.
  • Optional Dell PowerEdge Failsafe Hypervisor
    Dell provides an option to use the dual SD slots on the Dell PowerEdge M915 blade server to create a redundant hypervisor.  Essentially, if the primary SD slot fails, the secondary slot takes over.  This is a nice feature that should be considered if you are running VMware ESXi or any other hypervisor from an SD card.  You can see what this feature looks like in my review of the Dell PowerEdge M710HD.
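To make the NPAR and I/O-count points above concrete, here is a small sketch of the arithmetic involved: each 10GbE port can hold up to four partitions whose bandwidths together cannot exceed the port’s 10Gb, and the M915’s maximum port count comes from the NDC plus the mezzanine slots.  The function and constant names below are invented for illustration – this is not a Dell or Broadcom API.

```python
# Illustrative sketch of NPAR-style constraints (hypothetical names,
# not a real vendor API).
MAX_PARTITIONS_PER_PORT = 4   # NPAR allows up to four partitions per port
PORT_BANDWIDTH_GB = 10        # each physical port is 10GbE

def validate_port_partitions(partitions_gb):
    """Check a proposed split of one 10GbE port into partitions."""
    if len(partitions_gb) > MAX_PARTITIONS_PER_PORT:
        raise ValueError("at most 4 partitions per 10GbE port")
    if sum(partitions_gb) > PORT_BANDWIDTH_GB:
        raise ValueError("partition bandwidths exceed the port's 10Gb")
    return partitions_gb

# Example: one port carved into iSCSI, vMotion, and two VM-traffic NICs,
# totaling exactly 10Gb.
layout = validate_port_partitions([4, 2, 2, 2])

# The M915's maximum I/O: 4 NDC ports + 4 mezzanine cards x 2 ports each.
total_ports = 4 + 4 * 2   # 12 x 10Gb ports
```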

 

Dell is taking orders now for the PowerEdge M915, with shipping to begin in the 3rd week of May.

  • Anonymous

    Good post, but not so important to end users – most of the market goes with Intel for blades.  Any IDC report on this?  Maybe you could post sometime on NPAR and why it’s vendor agnostic.

    Regards

  • http://BladesMadeSimple.com/ Kevin Houston

    Thanks for your comments. I’m not sure of the #Intel vs #AMD market share on blade servers, but I’ve reached out to #IDC to see if this is information they can provide. I’ll post additional information if they provide it.

    Regarding NPAR – my understanding is the “vendor agnostic” comment is based on the technology residing on the network adapters, not on the I/O modules as with other blade vendors. This allows the technology to work with a variety of vendors. Perhaps a blog post needs to be written on it in the near future – thanks for the suggestion. I’ll see what I can do.

    Thanks for your support!

  • http://twitter.com/ersontech Andreas Erson

    I’m assuming that there are two NDCs with two 10Gb ports per NDC since it’s a full height blade and not a single NDC with four 10Gb ports?

  • Anonymous

    In the benchmarks (SPECint), TCO and throughput studies I have been seeing, this 4-socket AMD system appears to beat the current iteration of Intel processors on specific workloads.  But as with screwdrivers: most of the market has flat-head screwdrivers, and you can use one to turn a Phillips screw or hammer a nail, but it’s easier to just get the correct tool for the job.  I will give Dell props for being the ONLY OEM in the industry shipping a SINGLE IMAGE OS – what I mean is that the same exact image used on their Intel systems works on their AMD systems.

  • Anonymous

    Yes, that is correct, Andreas.
