A Review of the Dell PowerEdge M710 HD Blade Server

Dell’s Product Marketing team recently provided me with a pair of Dell PowerEdge M710HD blade servers, so I decided to give you a review. Today, though, I’m taking a different approach and providing the review via video. Since this blog is YOUR blog, please let me know if you like this format.


The Dell PowerEdge M710HD is a half-height blade server that holds up to two (2) Intel Xeon 5500 or 5600 series processors.  The server has two (2) hot-swappable drive bays and 18 memory slots capable of holding up to 192GB of RAM.  Here are the high-level quick specs:

  • Half-height form factor
  • Up to 2 x Intel Xeon 5500 or 5600 CPUs (as of this writing, the top CPU offered is the Intel Xeon X5690 at 3.46GHz with 12MB cache)
  • 4 x Broadcom 5709S 1Gb NICs on the motherboard via the Network Daughter Card (NDC)
  • 18 DIMM slots (up to 192GB RAM at 1333MHz)
  • 2 x Hot-Swap Drive Bays supporting 2.5” SSD, SAS, nearline SAS and SATA drives
  • RAID 0, 1 with option for onboard battery-backed cache
  • 3 x USB ports (2 on front, 1 internal)
  • 2 x SD card slots (for embedded hypervisor use)
  • Integrated Dell Remote Access Controller 6 (or iDRAC6)

Disclaimer: while Dell provided the Dell M710HD blade server for me to review, the thoughts, observations and opinions about the Dell M710HD are solely my own.

An External View of the Dell PowerEdge M710HD


I did not show the video graphics adapter since it is integrated on the blade server motherboard, but for those of you wondering – it is a Matrox G200 w/ 8MB memory.


An Internal View of the Dell PowerEdge M710HD


On paper, the Dell M710HD looks like a direct competitor to the HP ProLiant BL490c G6, and it is. However, as I showed you in the video, Dell has added something that I really believe could change the blade server market: a flexible embedded network controller.  The “Network Daughter Card,” or NDC, is the blade server’s LAN on Motherboard (LOM), but on a removable daughter card, very similar to the mezzanine cards.  This is really cool stuff because this design allows a user to change their blade server’s onboard I/O as their network grows.  For example, today many IT environments are standardized on 1Gb networks for server connectivity; however, 10Gb connectivity is becoming more and more prevalent.  When users move from 1Gb to 10Gb in their blade environments, the NDC design gives them the ability to upgrade the onboard network controller from 1Gb to 10Gb, thereby protecting their investment.  Any time a manufacturer offers investment protection, I get excited.  An important note: the M710HD comes with an NDC that provides up to 4 x 1Gb NICs when the Dell PowerConnect M6348 Ethernet Switch is used.  Dell is continuing development of the NDC, with last month’s announcement of a Converged Network Adapter (CNA) network daughter card option for the M710HD.

In case you are wondering what I/O expansion cards the M710HD supports, here’s a list from Dell’s website:

1Gb & 10Gb Ethernet:

  • Dual-Port Broadcom® Gb Ethernet w/ TOE (BCM-5709S)
  • Quad-Port Intel® Gb Ethernet
  • Quad-Port Broadcom® Gb Ethernet (BCM-5709S)
  • Dual-Port Intel® 10Gb Ethernet
  • Dual-Port Broadcom® 10Gb Ethernet (BCM-57711)

10Gb Enhanced Ethernet & Converged Network Adapters (CEE/DCB/FCoE):

  • Dual-Port Intel® 10Gb Enhanced Ethernet (FCoE Ready for Future Enablement)
  • Dual-Port Emulex® Converged Network Adapter (OCM10102-F-M) – Supports CEE/DCB 10GbE + FCoE
  • Dual-Port QLogic® Converged Network Adapter (QME8142) – Supports CEE/DCB 10GbE + FCoE
  • Brocade® BR1741M-k Dual-Port Mezzanine CNA

Fibre Channel:

  • Dual-Port QLogic® FC8 Fibre Channel Host Bus Adapter (HBA) (QME2572)
  • Dual-Port Emulex® FC8 Fibre Channel Host Bus Adapter (HBA) (LPe1205-M)
  • Emulex® 8 or 4 Gb/s Fibre Channel Pass-Through Module


InfiniBand:

  • Dual-Port Mellanox® ConnectX-2™ Dual Data Rate (DDR) and Quad Data Rate (QDR) InfiniBand

For more information about the Dell PowerEdge M710HD, please visit Dell’s website at http://www.dell.com/us/en/enterprise/servers/poweredge-m710hd/pd.aspx?refid=poweredge-m710hd&cs=555&s=biz.

17 thoughts on “A Review of the Dell PowerEdge M710 HD Blade Server”

  1. Pingback: Kevin Houston

  2. Pingback: Marc Schreiber

  3. Pingback: Jeff Sullivan

  4. Pingback: Peter Tsai

  5. Pingback: Christopher Collins

  6. Pingback: MSCloudJago

  7. Pingback: Linda Lisle

  8. Pingback: Kong Yang

  9. Pingback: Kong Yang

  10. Andreas Erson

    Concise and to the point information as usual. Great job Kevin!

    In the first video you mention that redundant SD slots for hypervisors are a unique feature of the M710HD. That feature also exists on the four-socket, full-height Dell PE M910 blade.

    The SD slot with a wrench is a unique Dell feature called vFlash, which allows you to have an SD card that is tightly coupled with the Dell Lifecycle Controller and the iDRAC6. This card is, for example, needed to use the Part Replacement feature of the USC/LC (auto-adjust firmware and/or config when you replace a part). You can also partition this SD card into up to 16 partitions for storage (backup) purposes, deployment solutions, bootable diagnostics and so forth.

    More information about vFlash:

    In the Fibre Channel section of the mezzanine list, the FC pass-through module seems to have snuck in. The two available FC4 mezzanines are also not listed (maybe considered deprecated?).

    I also recently found out that the InfiniBand mezzanine also supports 10Gb Ethernet besides SDR/DDR/QDR InfiniBand.

    Last month’s announcement of the two-port CNA NDC is surely to be followed by mezzanine implementations of that Broadcom 57712 controller.

  11. Joe Lemaire

    Great review. I love the video. As an engineer at a company about to (finally) enter the blade market, the video makes it feel more like a hands-on, which is always great when looking for information and opinion.

    Currently, we’re looking at a Dell solution (M710HD, Cisco 5020s, and an EqualLogic SAN), Cisco’s UCS (B200 M2, Cisco 6120, and a NetApp FAS2040) and an IBM solution (to be presented next week). At first I didn’t like the Dell solution as it was presented using 2 additional 10GbE mezzanine cards and pass-thrus, but now with the NDC, I’m thinking that what was going to be a cable nightmare might not be so bad.

    One of the (if not THE) best blogs on blade technology out there – keep up the great work!

  12. Pingback: cmiller237

  13. Pingback: controzo linea_umts

  14. Pingback: Matt McGinnis

  15. Pingback: Linda Lisle

  16. Pingback: Laura Goebel

  17. Pingback: Why Are Dell’s Blade Servers “Different”? – Making blade servers simple

Comments are closed.