Dell Network Daughter Card (NDC) and Network Partitioning (NPAR) Explained

If you are a reader of BladesMadeSimple, you are no stranger to Dell’s Network Daughter Card (NDC), but if it is a new term for you, let me give you the basics.  Until now, blade servers came with network interface cards (NICs) pre-installed as part of the motherboard.  Most servers came standard with dual-port 1Gb Ethernet NICs on the motherboard, so if you invested in 10Gb Ethernet (10GbE) or other converged technologies, the onboard NICs were stuck at 1Gb Ethernet.  As technology advanced and 10Gb Ethernet became more prevalent in the data center, blade servers entered the market with 10GbE standard on the motherboard.  If, however, you weren’t implementing 10GbE, you found yourself paying for technology that you couldn’t use.  Basically, whatever came standard on the motherboard is what you were stuck with – until now.

Dell Network Daughter Card (NDC)

Dell has broken the long-standing design concept of embedding the LAN onto the motherboard (aka LOM) and replaced it with a small, removable mezzanine card called the Network Daughter Card, or NDC.  The NDC gives the buyer the flexibility to choose what they want (4 x 1GbE, 2 x 10GbE, or 2 x Converged Network Adapter).  This innovation is exciting to me, as it not only provides a possible upgrade path to future technologies, but it also changes the way we look at server technology.  No longer does the onboard NIC have to be integrated onto the motherboard; it can be a removable card that can be easily replaced or upgraded.  In a few years, when this is standard architecture on every x86 server, remember where you saw it first.

But wait – there’s more.  The NDC is also the first adapter to offer network partitioning, or “NPAR,” an industry-first scheme that makes it possible to split the 10GbE pipe while working with any of the Dell PowerEdge M1000e 10GbE Ethernet Switch Modules.  So, what’s the big deal about NPAR?  Let me explain.

Dell Network Partitioning (NPAR) Example

With the increased amount of virtualization in the data center, combined with the growth of data and cloud computing, network efficiency is becoming compromised, driving many organizations to embrace a 10GbE network.  While moving to a more robust 10GbE environment may be ideal for an organization, it also brings challenges, such as ensuring that the appropriate bandwidth is available for all resources in both the physical and virtual environments.  This is where NPAR comes in.  Network Partitioning allows administrators to split each 10GbE pipe on the NDC into four separate partitions, or physical functions, and allocate bandwidth and resources as needed.  Each of the four partitions is an actual PCI Express function that appears in the blade server’s system ROM, OS, or virtual OS as a separate physical NIC.

Each partition can support networking features such as:

  • TCP checksum offload
  • Large send offload
  • Transparent Packet Aggregation (TPA)
  • Multiqueue receive-side scaling
  • VM queue (VMQ) feature of the Microsoft® Hyper-V™ hypervisor
  • Internet SCSI (iSCSI) HBA
  • Fibre Channel over Ethernet (FCoE) HBA

Administrators can enable or disable any of these features per partition, and they can configure a partition to run iSCSI, FCoE, and TCP/IP Offload Engine (TOE) simultaneously.
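
To make that concrete, here is a minimal Python sketch of how one 10GbE port carved into four partitions might be represented – the class and field names are my own illustration, not Dell’s tooling:

```python
from dataclasses import dataclass, field

@dataclass
class Partition:
    """One NPAR partition: a separate PCI Express function that the OS sees as its own NIC."""
    pci_function: int                       # position of the partition on the port
    max_bandwidth_gbps: float = 10.0        # transmit cap; receive is always 10 Gbps
    relative_weight_pct: int = 25           # share of the pipe under contention
    features: set = field(default_factory=set)  # e.g. {"iSCSI"}, {"FCoE"}, {"TOE"}

# One 10GbE port on the NDC split into four partitions with different roles
port = [
    Partition(pci_function=0, features={"iSCSI"}),
    Partition(pci_function=1, features={"FCoE"}),
    Partition(pci_function=2, features={"TOE"}),
    Partition(pci_function=3, features={"Ethernet"}),
]
```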

Each of the four partitions per port (eight per NDC) can be set up with a specific size and a specific weight.  In the example shown above, Physical Port 1 has four partitions (see the sketch after this list):

  • Partition 1 (red) = 2Gbps, running as an iSCSI HBA on Microsoft Windows Server 2008 R2
  • Partition 2 (orange) = 2Gbps, running as an FCoE HBA on Microsoft Windows Server 2008 R2
  • Partition 3 (green) = 1Gbps, running TOE on Microsoft Windows Server 2008 R2
  • Partition 4 (blue) = 5Gbps, running as a Layer 2 NIC on Microsoft Windows Server 2008 R2
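
As a quick sanity check on those numbers, a short Python sketch (my own illustration, not a Dell utility) totals the Maximum Bandwidth settings for Physical Port 1 and confirms they fit within the 10GbE link:

```python
# Physical Port 1 from the example: (partition, role, Maximum Bandwidth in Gbps)
port1 = [
    ("Partition 1", "iSCSI HBA",   2.0),
    ("Partition 2", "FCoE HBA",    2.0),
    ("Partition 3", "TOE",         1.0),
    ("Partition 4", "Layer 2 NIC", 5.0),
]

total = sum(gbps for _, _, gbps in port1)
print(f"Total Maximum Bandwidth: {total} Gbps of a 10 Gbps port")
# -> Total Maximum Bandwidth: 10.0 Gbps of a 10 Gbps port (no oversubscription)
```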

Each partition’s “Maximum Bandwidth” can be set in any increment of 100 Mbps (0.1 Gbps), up to 10,000 Mbps (10 Gbps).  Also note that this applies to the send/transmit direction only; receive bandwidth is always 10 Gbps.
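
Put another way, a valid Maximum Bandwidth value is simply a multiple of 100 Mbps no greater than 10,000 Mbps.  Here is a one-line check in Python – just an illustration of the rule as stated, assuming a minimum of one 100 Mbps increment:

```python
def is_valid_max_bandwidth(mbps: int) -> bool:
    """Maximum Bandwidth must be a 100 Mbps increment, no greater than 10,000 Mbps."""
    return 100 <= mbps <= 10_000 and mbps % 100 == 0

print(is_valid_max_bandwidth(2_000))  # True  (2 Gbps, like Partition 1 above)
print(is_valid_max_bandwidth(2_050))  # False (not a 100 Mbps increment)
```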

Furthermore, admins can configure the weighting of each partition to provide increased bandwidth presence when an application requires it.  In the example above, Physical Port 2 has the “Relative Bandwidth Weight” on all four partitions set to 25%, giving each partition equal weight.  If, however, VMkernel NIC 1 (red) needed more weight, or priority, than the other NICs, we could set its weight to 100%, giving that partition top priority.
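
To see how the Relative Bandwidth Weight values could translate into shares of the 10 Gbps transmit pipe when every partition is busy, here is a rough Python sketch – a simplified proportional-sharing model of my own, not Dell’s actual scheduler:

```python
def weighted_shares(weights_pct, link_gbps=10.0):
    """Split the link in proportion to each partition's Relative Bandwidth Weight."""
    total = sum(weights_pct)
    return [round(link_gbps * w / total, 2) for w in weights_pct]

# Physical Port 2 in the example: all four partitions weighted equally at 25%
print(weighted_shares([25, 25, 25, 25]))   # [2.5, 2.5, 2.5, 2.5]

# Raising VMkernel NIC 1 to 100% while the others stay at 25%
print(weighted_shares([100, 25, 25, 25]))  # [5.71, 1.43, 1.43, 1.43]
```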

If you are feeling really adventurous, you can oversubscribe a port.  This is done by setting the Maximum Bandwidth of the four partitions on a single port to total more than 100%.  Each partition can then take as much bandwidth as it is allowed as its individual traffic flow needs change, based on the Relative Bandwidth Weight assigned.  Take a look at the following:

NPAR Example

The example above shows each of the four partitions’ Maximum Bandwidth (entered in 0.1 Gbps increments, so a setting of 10 = 1 Gbps):

  • Partition 1 = 1 Gbps
  • Partition 2 = 1 Gbps
  • Partition 3 = 8 Gbps
  • Partition 4 = 8 Gbps

Total for all 4 partitions = 18 Gbps, which means the port is 80% (8 Gbps) oversubscribed.
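
That figure falls straight out of the arithmetic; a small Python sketch (illustrative only) reproduces it:

```python
link_gbps = 10
max_bandwidth_gbps = [1, 1, 8, 8]   # the four Maximum Bandwidth settings above

total = sum(max_bandwidth_gbps)                  # 18 Gbps
oversub_gbps = total - link_gbps                 # 8 Gbps
oversub_pct = 100 * oversub_gbps / link_gbps     # 80%
print(f"{total} Gbps requested on a {link_gbps} Gbps port = "
      f"{oversub_gbps} Gbps ({oversub_pct:.0f}%) oversubscribed")
```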

Some additional rules to note from the NPAR User’s Manual:

  • For Microsoft Windows Server, you can have the Ethernet Protocol enabled on all, some, or none of the four partitions on an individual port.
  • For Linux OSs, the Ethernet protocol is always enabled (even if disabled in the Dell Unified Server Configuration screen).
  • A maximum of two iSCSI Offload Protocols (HBAs) can be enabled over any of the four available partitions of a single port. For simplicity, it is recommended to always use the first two partitions of a port for any offload protocols.
  • For Microsoft Windows Server, the Ethernet protocol does not have to be enabled for the iSCSI offload protocol to be enabled and used on a specific partition.

For more information on the Network Partitioning capabilities of the Dell Network Daughter Card, check out the white paper at: Dell Broadcom NPAR White Paper

Kevin Houston is the founder of BladesMadeSimple.com.  He has more than 14 years of experience in the x86 server marketplace.  Since 1997, Kevin has worked at several resellers in the Atlanta area and has a vast array of competitive x86 server knowledge and certifications, as well as an in-depth understanding of VMware and Citrix virtualization.  Kevin works for Dell as a Server Sales Engineer covering the Global 500 market.

32 thoughts on “Dell Network Daughter Card (NDC) and Network Partitioning (NPAR) Explained”

  13. Kevin Houston

    While carving up network ports into virtual NICs has been around for a while, the #Dell NPAR capability is on the NIC chip and does not rely on any specific switch module.  The ease of use, and the ability to use it with any of the 10Gb Ethernet modules offered by Dell, makes it more appealing (to me at least) than other competitive offerings.  Thanks for reading, and thanks for your comments!

  20. tonybourke

    Great write-up and great for getting familiar with Dell’s offering. However, like HP, Cisco has also been able to carve up NICs for a few years with UCS. The M81KR and VIC1280 cards can carve out about 50 (M81KR) or 100 (VIC1280) abstracted NICs/FC HBAs and assign bandwidth/QoS to them.

    Here’s a great writeup by Brad Hedlund (who’s just joined Dell Force10).

    http://bradhedlund.com/2010/09/15/vmware-10ge-qos-designs-cisco-ucs-nexus/

    Still, cool stuff all around. 

  21. Brad Hedlund

    The “big deal” with the Dell approach is that NIC partitioning is *independent* of the upstream switch.  Both the HP and Cisco approach lock you in to their switches (Flex-10, or Nexus).

    Cheers,
    Brad 
