If you are a reader of BladesMadeSimple, you are no stranger to Dell’s Network Daughter Card (NDC), but if it is a new term for you, let me give you the basics. Up until now, blade servers came with network interface cards (NICs) pre-installed as part of the motherboard. Most servers came standard with dual-port 1Gb Ethernet NICs on the motherboard, so if you invested in 10Gb Ethernet (10GbE) or other converged technologies, the onboard NICs were stuck at 1Gb Ethernet. As technology advanced and 10Gb Ethernet became more prevalent in the data center, blade servers entered the market with 10GbE standard on the motherboard. If, however, you weren’t implementing 10GbE, then you found yourself paying for technology that you couldn’t use. Basically, whatever came standard on the motherboard is what you were stuck with – until now.
Dell has broken the long-standing design concept of embedding the LAN onto the motherboard (aka LOM) and replaced it with a small, removable mezzanine card called a Network Daughter Card, or NDC. The NDC provides the buyer with the flexibility of choosing what they want (4 x 1GbE, 2 x 10GbE, or 2 x Converged Network Adapter). This innovation is exciting to me, as it not only provides a possible upgrade path to future technologies, but it also changes the way we look at server technology. No longer does the on-board NIC have to be integrated onto the motherboard; it can be a removable card that can be easily replaced or upgraded. In a few years when this is standard architecture on every x86 server, remember where you saw it first.
But wait – there’s more. The NDC is also the first adapter to offer network partitioning, or “NPAR” – a scheme that makes it possible to split the 10GbE pipe while working with any of the Dell PowerEdge M1000e 10GbE Ethernet Switch Modules. So, what’s the big deal about NPAR? Let me explain.
With the increased amount of virtualization in the data center, combined with the growth of data and cloud computing, network efficiency is becoming compromised, driving many organizations to embrace a 10GbE network. While moving to a more robust 10GbE environment may be ideal for an organization, it also brings challenges, like ensuring that the appropriate bandwidth for all resources is available in both the physical and virtual environments. This is where NPAR comes in. Network Partitioning allows administrators to split each 10GbE pipe on the NDC into 4 separate partitions, or physical functions, and allocate bandwidth and resources as needed. Each of the four partitions is an actual PCI Express function that appears in the blade server’s system ROM, O/S, or virtual O/S as a separate physical NIC.
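To make the idea concrete, here is a rough sketch (my own model, not a Dell tool – the class and function names are hypothetical) of one 10GbE port being carved into four partitions, each behaving like a separate PCI function with its own personality and transmit cap:

```python
# Hypothetical model of NPAR: one 10GbE port exposed as four partitions,
# each a distinct PCI Express function with its own personality.
from dataclasses import dataclass

@dataclass
class Partition:
    pci_function: int          # each partition is a separate PCI Express function
    max_bandwidth_gbps: float  # administrator-assigned transmit cap
    personality: str           # "NIC", "iSCSI", "FCoE", or "TOE"

def split_port(allocations):
    """Carve a single 10GbE port into up to four partitions."""
    assert len(allocations) <= 4, "NPAR supports four partitions per port"
    return [Partition(fn, gbps, kind)
            for fn, (gbps, kind) in enumerate(allocations)]

# The iSCSI/FCoE/TOE/NIC split used later in this article:
port1 = split_port([(2, "iSCSI"), (2, "FCoE"), (1, "TOE"), (5, "NIC")])
# The OS would enumerate four separate "NICs" -- modeled here as four objects.
```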
Each partition can support networking features such as:
- TCP checksum offload
- Large send offload
- Transparent Packet Aggregation (TPA)
- Multiqueue receive-side scaling
- VM queue (VMQ) feature of the Microsoft® Hyper-V™ hypervisor
- Internet SCSI (iSCSI) HBA
- Fibre Channel over Ethernet (FCoE) HBA.
Administrators can enable or disable any of these features per partition, and they can configure a partition to run iSCSI, FCoE, and TCP/IP Offload Engine (TOE) simultaneously.
Each of the four partitions per port (8 per NDC) can be set up with a specific size and a specific weight. In the example shown above, you see that Physical Port 1 has 4 partitions:
- Partition 1 (red) = 2Gbps, running as an iSCSI HBA on Microsoft Windows Server 2008 R2
- Partition 2 (orange) = 2Gbps, running as an FCoE HBA on Microsoft Windows Server 2008 R2
- Partition 3 (green) = 1Gbps, running TOE on Microsoft Windows Server 2008 R2
- Partition 4 (blue) = 5Gbps, running as a Layer 2 NIC on Microsoft Windows Server 2008 R2
Each partition’s “Maximum Bandwidth” can be set to any increment of 100 Mbps (or 0.1 Gbps), up to 10,000 Mbps (10 Gbps). Also note that this applies to send/transmit only; receive-direction bandwidth is always 10 Gbps.
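The 100 Mbps granularity is easy to sanity-check in a short sketch (the function name is my own, not part of any Dell utility):

```python
def valid_max_bandwidth(mbps):
    """A Maximum Bandwidth value must be a positive multiple of 100 Mbps,
    up to 10,000 Mbps (10 Gbps)."""
    return 0 < mbps <= 10000 and mbps % 100 == 0

# 2 Gbps (2000 Mbps) and 10 Gbps sit on the 100 Mbps grid; 250 Mbps does not.
```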
Furthermore, admins can configure the weighting of each partition to provide increased bandwidth presence when an application requires it. In the example above, Physical Port 2 has the “Relative Bandwidth Weight” on all 4 partitions set to an equal 25%, giving each partition equal weight. If, however, VMkernel NIC 1 (red) needed to have more weight, or priority, over the other NICs, we could set its weight to 100%, giving that partition top priority.
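The proportional arithmetic behind relative weighting can be sketched like this (my own helper, under the simplifying assumption that weights divide the pipe pro rata among contending partitions):

```python
def weighted_shares(weights, pipe_gbps=10.0):
    """Split the pipe in proportion to each partition's relative weight.
    Simplified model: all partitions are assumed to be contending at once."""
    total = sum(weights)
    return [pipe_gbps * w / total for w in weights]

equal = weighted_shares([25, 25, 25, 25])    # four equal weights -> 2.5 Gbps each
skewed = weighted_shares([100, 25, 25, 25])  # partition 1 takes the largest share
```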
If you are feeling really adventurous, you can oversubscribe a port. This is accomplished by giving the 4 partitions of a single port Maximum Bandwidth settings that total more than 100%. This allows each of the partitions to take as much bandwidth as allowed as their individual traffic flow needs change – based on the Relative Bandwidth Weight assigned. Take a look at the following:
The example above shows each of the four partitions’ Maximum Bandwidth (shown in 0.1 Gbps increments, so 10 = 1 Gbps):
- Partition 1 = 1 Gbps
- Partition 2 = 1 Gbps
- Partition 3 = 8 Gbps
- Partition 4 = 8 Gbps
Total for all 4 partitions = 18 Gbps, which means the port is 80% (8 Gbps) oversubscribed.
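The oversubscription arithmetic above can be checked in a couple of lines:

```python
# Maximum Bandwidth per partition, from the example above (in Gbps)
partitions_gbps = [1, 1, 8, 8]

total = sum(partitions_gbps)               # 18 Gbps requested on a 10 Gbps port
oversub_pct = (total - 10) / 10 * 100      # 8 Gbps over the physical 10 Gbps pipe
# 18 Gbps against a 10 Gbps port -> 8 Gbps, i.e. 80% oversubscribed
```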
Some additional rules to note from the NPAR User’s Manual:
- For Microsoft Windows Server, you can have the Ethernet Protocol enabled on all, some, or none of the four partitions on an individual port.
- For Linux OSs, the Ethernet protocol will always be enabled (even if disabled in Dell Unified Server Configuration screen).
- A maximum of two iSCSI Offload Protocols (HBA) can be enabled over any of the four available partitions of a single port. For simplicity, it is recommended to always use the first two partitions of a port for any offload protocols.
- For Microsoft Windows Server, the Ethernet protocol does not have to be enabled for the iSCSI offload protocol to be enabled and used on a specific partition.
For more information on the Network Partitioning capabilities of the Dell Network Daughter Card, check out the white paper at: Dell Broadcom NPAR White Paper
Kevin Houston is the founder of BladesMadeSimple.com. He has over 14 years of experience in the x86 server marketplace. Since 1997, Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization. Kevin works for Dell as a Server Sales Engineer covering the Global 500 market.