Why Are Dell’s Blade Servers “Different”?

I’ve learned over the years that it is very easy to focus on the speeds and feeds of a server while overlooking the features that truly differentiate it. When you take a look under the covers, a server’s CPU and memory are going to be roughly equal to the competition’s, so the innovation that goes into the rest of the server is where the focus should be. On Dell’s community blog, Rob Bradfield, a Senior Blade Server Product Line Consultant in Dell’s Enterprise Product Group, discusses some of the innovation and reliability that goes into Dell blade servers. I encourage you to take a look at Rob’s blog post at http://dell.to/mXE7iJ.

I also want to highlight some other innovations that Dell is offering on certain blade servers:

Network Daughter Card (NDC) – unlike network interface cards built onto the blade server motherboard, the NDC is a removable daughter card that offers a choice of 4 x 1Gb NICs, 10Gb NICs, or CNAs. The NDC is a new feature and is not offered on every blade server; for more info, check out this earlier blog post I wrote: https://bladesmadesimple.com/2011/05/a-review-of-the-dell-poweredge-m710-hd-blade-server/

Network Interface Card Partitioning (NPAR) – this is a feature found on certain blade server models that lets you divide the onboard 10Gb NICs into “virtual NICs”. The cool thing is that the partitioning happens on the adapter itself, so it doesn’t require a specific network I/O module (no I/O module lock-in), adds no CPU overhead, and needs no specialized software. Read more about this at https://bladesmadesimple.com/2011/04/dellapril5announcements/.
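To make the idea a little more concrete, here is a rough sketch in Python of how an NPAR-style split of a single 10Gb port could be modeled. This is purely illustrative – the four-partition limit, the weight-based minimum bandwidth, the partition names, and the validate_partitions helper are my own assumptions for the example, not an actual Dell or adapter-vendor configuration interface.

    # Illustrative model only: one 10GbE port carved into NPAR-style partitions,
    # each with a relative minimum-bandwidth weight and a maximum cap.
    PORT_SPEED_GBPS = 10
    MAX_PARTITIONS = 4  # assumption for this example: up to 4 partitions per port

    def validate_partitions(partitions):
        """partitions: list of (name, min_weight_pct, max_gbps) tuples."""
        if len(partitions) > MAX_PARTITIONS:
            raise ValueError("too many partitions for one port")
        if sum(weight for _, weight, _ in partitions) > 100:
            raise ValueError("minimum-bandwidth weights exceed 100%")
        for name, _, max_gbps in partitions:
            if max_gbps > PORT_SPEED_GBPS:
                raise ValueError(name + ": cap exceeds the physical 10Gb port")

    # Example: carve one 10Gb port into management, vMotion, and VM traffic.
    example = [("mgmt", 10, 1), ("vmotion", 30, 4), ("vm-traffic", 60, 10)]
    validate_partitions(example)
    for name, weight, cap in example:
        guaranteed = PORT_SPEED_GBPS * weight / 100
        print(f"{name}: ~{guaranteed:.0f} Gb guaranteed under contention, {cap} Gb cap")

The point is simply that one physical 10Gb port can present itself to the operating system as several smaller NICs, with the bandwidth split handled by the adapter rather than by software in the hypervisor.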

Let me put these innovative features into a real-life scenario (note – this is simply an example and doesn’t confirm or deny any future product release from Dell). Imagine that today you invest in a blade server with a 10Gb NDC, using NPAR to split the 10Gb pipe into smaller 1Gb virtual NICs, along with a 10Gb-capable Ethernet I/O module, and in 18 months a 40Gb Ethernet switch module comes out. You could theoretically replace just the NDC and the Ethernet module with the 40Gb flavor (if and when one ever becomes available) instead of replacing the whole blade server.

The next time you choose a server, look beyond the speeds and feeds and consider the innovation, the reliability, and the value the server can offer.

 

Kevin Houston is the founder of BladesMadeSimple.com. He has more than 14 years of experience in the x86 server marketplace. Since 1997 Kevin has worked at several resellers in the Atlanta area and has a vast array of competitive x86 server knowledge and certifications, as well as an in-depth understanding of VMware and Citrix virtualization. Kevin works for Dell as a Server Sales Engineer covering the Global 500 market.

Comments on “Why Are Dell’s Blade Servers ‘Different’?”


Andreas Erson:

    Actually, it wouldn’t be possible to have an NDC with 40GbE ports since the LOM is “only” connected by 1-2 lanes to each of the two fabric A I/O-modules. The mezzanines are connected by 1-4 lanes to each of the two fabric B (or C) I/O-modules.

    Utilizing 10GBASE-KR, the M1000e is designed to handle the following max configuration for a half-height, single-width blade:
    LOM/NDC = 4 port 10GbE (using two 10GBASE-KR lanes to each fabric A I/O-module)
    Mezzanine, fabric B = 2 port 40GbE (using four 10GBASE-KR lanes to each fabric B I/O-module)
    Mezzanine, fabric C = 2 port 40GbE (using four 10GBASE-KR lanes to each fabric C I/O-module)

    My guesstimate is that the next step in the M1000e will be 4-port 10GbE mezzanines and a corresponding new I/O-module, perhaps called M8048-k (32 internal ports, 16 external ports). If you take a current 4-port 1GbE Intel mezzanine, the two dual-port Gigabit Ethernet controllers are 25x25mm in area, and Intel also produces dual-port 10GbE Ethernet controllers in the same 25x25mm size. Each mezzanine is connected to the motherboard with PCIe x8 2.0/2.1, giving it 32Gbps of total bandwidth (a 4-port 10GbE mezzanine would oversubscribe that connection). With the upcoming 12G servers using PCIe x8 3.0 this will increase to 64Gbps, which would avoid any “problem” with oversubscription (let me see the server that actually uses this amount of bandwidth). So in essence, the only real issue from my layman’s point of view would be thermals.

    To speculate further, 2-port 40GbE without oversubscribing the connection to the motherboard would need PCIe x8 4.0, which will probably provide 128Gbps. But now we are talking a couple of years from now.

    Anyhow, give me a 4-port 10GbE mezzanine with NPAR and a corresponding I/O-module and I would be very happy. :)
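
To make the PCIe bandwidth arithmetic in the comment above a little more concrete, here is a quick back-of-the-envelope sketch in Python. The per-lane rates are the standard PCIe figures after line encoding; the variable names and the two mezzanine configurations are just for illustration.

    # Back-of-the-envelope check of the PCIe bandwidth figures discussed above.
    # Per-lane effective throughput after line encoding (standard PCIe numbers).
    LANE_GBPS = {
        "PCIe 2.0": 5.0 * 8 / 10,      # 5 GT/s, 8b/10b encoding    -> 4.0 Gbps per lane
        "PCIe 3.0": 8.0 * 128 / 130,   # 8 GT/s, 128b/130b encoding -> ~7.9 Gbps per lane
        "PCIe 4.0": 16.0 * 128 / 130,  # 16 GT/s, 128b/130b encoding -> ~15.8 Gbps per lane
    }

    MEZZANINE_NEEDS_GBPS = {
        "4 x 10GbE": 4 * 10,  # 40 Gbps of Ethernet ports
        "2 x 40GbE": 2 * 40,  # 80 Gbps of Ethernet ports
    }

    for gen, per_lane in LANE_GBPS.items():
        x8_total = per_lane * 8
        for mezz, need in MEZZANINE_NEEDS_GBPS.items():
            verdict = "fits" if x8_total >= need else "oversubscribed"
            print(f"{gen} x8 (~{x8_total:.0f} Gbps) vs {mezz} ({need} Gbps): {verdict}")

This lines up with the comment: a 4-port 10GbE mezzanine oversubscribes a PCIe 2.0 x8 link but fits comfortably on PCIe 3.0, while 2-port 40GbE has to wait for PCIe 4.0.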


Comments are closed.