Dell announced today two new additions to its blade server family: the PowerEdge 11G M710HD and the M610x.  The two new servers are just a part of Dell’s “Blade 3.0 Launch,” a campaign highlighting Dell’s ongoing effort to become the leader in blade server technology.  Over the next several months, Dell will be updating its chassis infrastructure, introducing more efficient power supplies and fans that draw up to 10% less power than the existing designs.  Don’t worry, though: there will not be a new chassis.  Dell will simply upgrade the fans and power supplies that ship standard, at no charge to the customer.

Dell also announced a significant upgrade to its Chassis Management Controller (CMC) software.  This is great news, as Dell’s chassis management interface had not had an update since the early part of the decade.  The CMC 3.0 release offers a better user interface and improved ease of use.  One of the key features of CMC 3.0 is the ability to upgrade the iDRAC, BIOS, RAID, NIC, and diagnostic firmware on all the blades at one time, a huge time saver.  Expect the CMC 3.0 software to be available in early July 2010.  For demos of the new interface, jump over to Dell TechCenter.

PowerEdge 11G M710HD
Ideal for virtualization or applications requiring large amounts of memory, the M710HD is a half-height blade server that offers:

* Up to 2 Intel Xeon 5500 or 5600 series processors
* 18 memory DIMMs
* 2 hot-swap drives (SAS or solid-state drive options)
* 2 mezzanine card slots
* Dual SD slots for a redundant hypervisor
* 2 or 4 x 1Gb NICs

On paper the Dell M710HD looks like a direct competitor to the HP ProLiant BL490 G6, and it is. However, Dell has added something that could change the blade server market: a flexible embedded network controller.  The “Network Daughter Card,” or NDC, is the blade server’s LAN on Motherboard (LOM) moved onto a removable daughter card, very similar to the mezzanine cards.  This is really cool stuff, because the design allows a user to change the blade server’s on-board I/O as the network grows.  For example, many IT environments today are standardized on 1Gb networks for server connectivity, but 10Gb connectivity is becoming more and more prevalent.  With the NDC design, when users move from 1Gb to 10Gb in their blade environments, they will be able to upgrade the onboard network controller from 1Gb to 10Gb, protecting their investment.  Any time a manufacturer offers investment protection, I get excited.  An important note: the M710HD’s NDC will provide up to 4 x 1Gb NICs when the Dell PowerConnect M6348 Ethernet switch is used.

PowerEdge 11G M610x
As the industry continues to hype GPGPU (general-purpose computing on graphics processing units), it’s no surprise to see that Dell has announced the availability of a blade server with dedicated PCIe x16 Gen2 slots.  Here are some quick details about this blade server:

* Full-height blade server
* Up to 2 Intel Xeon 5500 or 5600 series processors
* 12 memory DIMMs
* 2 hot-swap drives
* 2 mezzanine card slots
* 2 x PCIe x16 (Gen2) slots

I know the skeptical reader will think, “So what? HP and IBM have PCIe expansion blades,” which is true. However, the M610x differentiates itself by offering 2 x PCIe x16 Gen2 slots that can hold up to 250W cards, allowing this blade server to handle many of the graphics cards designed for GPGPU, or even the latest I/O adapters from Fusion-io.  Although this blade server can handle these niche PCIe cards, don’t overlook the opportunity to use the PCIe slots for needs like fax modems, dedicated SCSI controllers, or even dedicated USB requirements.

I’m curious to know what your thoughts are about these new servers.  Leave me a comment and let me know.

For your viewing pleasure, here are some more views of the M610x.

  • elidezman80

Good to see Dell making progress in the blade market. Can it have the same effect as when HP introduced the c7000?

  • http://BladesMadeSimple.com/ Kevin Houston

    I'm not sure if #Dell's continued blade server innovations will propel them to the top of the blade server market anytime soon, but I'm glad to see some creativity by Dell. The Network Daughter Card feature is a really exciting design. I'll be interested to see whether other manufacturers copy it. Thanks for your comment and your continued support!

  • Barend

    The Network Daughter Card is nice, but can I have virtual NICs now in the 10GbE Dell blades? HP and IBM do!

  • http://BladesMadeSimple.com/ Kevin Houston

    I'm not sure if #Dell will be implementing virtual NICs on the motherboard, but with the Network Daughter Card, you would have the option to replace the 1Gb with that future technology. Think about if/when 40Gb Ethernet comes out – you'll have investment protection to upgrade that 10Gb to 40Gb simply by replacing the daughter card on the server. That's why I think this announcement is exciting! I appreciate the comment and thanks for reading!

  • mike roberts

    Barend,
    you can have virtual NICs with any standard 10GbE NIC; all hypervisors do that for you. One of the greatest misconceptions out there is that you need to use proprietary partitioning schemes to “carve up” a 10Gb pipe in order to make it useful. In reality, all the capabilities to do that are native in every hypervisor and provide segregation of traffic, isolation, and even bandwidth reservation per port channel in the case of vSphere 4 if you need it (which in most cases you don't). I guarantee you that if you partition the NIC in hardware and slice bandwidth, the decisions you make about bandwidth allocation between partitions are going to be WRONG.
    Check out this paper we wrote with Intel. Using the native capabilities of the hypervisor, versus buying expensive, proprietary, and complex partitioning solutions, is clearly the best path for customers. Flex-10 and IBM's solutions (at least for virtualized servers) are nothing more than a marketing story to sell customers unreasonably expensive networking gear.

    http://www.dell.com/downloads/global/products/p

    Also check out the InfoWorld blade shootout that Kevin posted before. Dell was the ONLY vendor to demonstrate FULL LAN/SAN convergence in that test, running VM traffic, vMotion, service console, AND iSCSI traffic over a standard 10GbE pipe. HP's and IBM's “virtual NIC” solutions didn't do that; in fact, the tester commented that the engineers HP and IBM sent out didn't even fully understand how to use their own “virtual NIC” solutions.
    I'll give you that in a non-virtualized OS there could be a use for NIC partitioning, but GENERALLY those applications don't need the amount of connectivity that virtualized hosts need. To address those customers we do have quad-port 1Gb solutions available.
    mike
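    The sharing argument above can be illustrated with a toy model (the traffic classes and Gb/s numbers are made up for illustration, not vendor figures): with a fixed hardware partition, a bursting traffic class is capped at its slice even while the rest of the 10Gb pipe sits idle, whereas hypervisor-style shared scheduling lets it use whatever bandwidth is free.

    ```python
    # Toy comparison of fixed hardware NIC partitioning vs. dynamic sharing
    # of one 10Gb pipe. All demand/slice numbers are illustrative only.

    PIPE_GBPS = 10.0

    def fixed_partition(demands_gbps, slices_gbps):
        """Each traffic class is capped at its hardware slice,
        regardless of how idle the rest of the pipe is."""
        return [min(d, s) for d, s in zip(demands_gbps, slices_gbps)]

    def shared_pipe(demands_gbps, pipe_gbps=PIPE_GBPS):
        """Simple max-min-style sharing: satisfy small demands first,
        then redistribute leftover bandwidth among the rest."""
        alloc = [0.0] * len(demands_gbps)
        unmet = list(range(len(demands_gbps)))
        remaining = pipe_gbps
        while unmet and remaining > 1e-9:
            share = remaining / len(unmet)
            progressed = False
            for i in list(unmet):
                need = demands_gbps[i] - alloc[i]
                if need <= share:          # demand fits in a fair share
                    alloc[i] = demands_gbps[i]
                    remaining -= need
                    unmet.remove(i)
                    progressed = True
            if not progressed:             # pipe saturated: split evenly
                for i in unmet:
                    alloc[i] += share
                remaining = 0.0
        return alloc

    # Hypothetical mix: a bursting VM class plus three lighter classes.
    demands = [6.0, 1.0, 0.5, 0.5]
    print(fixed_partition(demands, [2.5, 2.5, 2.5, 2.5]))  # burst capped at 2.5
    print(shared_pipe(demands))                            # burst fully served
    ```

    In this toy run, the fixed slices throttle the bursting class to 2.5Gb while 5.5Gb of the pipe sits idle; the shared model serves every class fully, which is the commenter's point about letting the hypervisor schedule the pipe.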

  • http://OCEinc.com Mark S A Smith

    Although HP is deeply entrenched in the blade market, I believe IBM is vulnerable. Blades need high-touch sales tactics to be successful. Technically, this product introduction changes the game. I think Dell could take second place if they figure out how to recruit resellers to bring it to market.

  • mike roberts

    Kevin,
    the upgrade/investment-protection scenario is interesting and could have some value to some customers, but the main reason we did this is to provide flexibility and choice, given the wide range of options today. Remember that in a blade chassis, if you want to go from 1Gb to 10Gb or 40Gb, you'd also need a new set of switches, so it's not as simple as just swapping an adapter. The switch investment is much larger.

  • Brian N Jean

    Can you confirm that the 2 full-height slots can each hold an NVIDIA 225W adapter concurrently? Each adapter is two PCIe slots wide, including the fan sink. Thanks.

  • mike roberts

    brian,
    the M610x can hold ONE dual-wide NVIDIA card; we expect to have single-wide NVIDIA cards available in the near future

  • http://twitter.com/ersontech Andreas Erson

    Will both Broadcom- and Intel-based NDCs be available at the launch? The ability to have only one brand of network adapters would be beneficial.

  • mike roberts

    andreas,
    we'll start with a Broadcom 4 x 1Gb option (with TOE and iSCSI offload); additional options will be available in the second half of this year, but we can't disclose those yet.
    mike

  • dmitry

    I’m not that familiar with the various NIC flavors for blade servers, but I believe they do come in a mezzanine form factor. So is the NDC essentially a mezzanine card with switch silicon on it, freeing the main board from being tied to a fixed network rate? And if my assumption is correct, does that extra switch circuitry, which aims to offer investment protection along with forward compatibility, drive the cost up accordingly? Should I expect NDCs to cost more than a standard NIC, and to have to swap in a new, higher-cost NDC (with new controller and switch silicon) when seeking higher speeds?

  • Twadeus

    What is the speed of the mezzanine card slots? PCIe x8 or x16?
