Dell Announces Blade Refresh and NIC Partitioning (NPAR)

Dell announced today a refresh of the PowerEdge M910 blade server based on the Intel Xeon E7 processor. The M910 is a full-height blade that can hold 512GB of RAM across 32 DIMMs. The refreshed M910 will also feature Dell’s FlexMem Bridge, which lets users populate all 32 DIMM slots with only two CPUs. You can read more about the M910 blade server in an earlier blog post of mine here.

According to the Dell press release issued today, the systems being refreshed with the Intel Xeon E7 processor (the PowerEdge M910 blade server as well as the PowerEdge R910 and R810 rack servers) offer customers substantial performance gains, including:

• Up to 38 percent improvement in Oracle application server and database performance over previous-generation Xeon 7500 “Nehalem-EX” based servers.*
• Up to an 18:1 server consolidation ratio over four-socket, dual-core-processor-based systems, offering up to 93 percent lower operating costs and resulting in a one-year return on investment.**
• Up to 34 percent improvement in SQL database virtualization performance and 49 percent higher performance per watt with the combination of Xeon E7 processors and new low-voltage memory (LV RDIMM) offerings.***

According to Dell, the new Intel Xeon E7 CPUs will be shipping this week.

NIC Partitioning (NPAR)

Dell is also announcing the addition of a Converged Network Adapter (CNA) option for the M710HD’s network daughter card, or NDC. I wrote about the NDC in this post, but as a recap, it’s a removable card that provides the blade server’s LOM (LAN on Motherboard) network adapter. Previously Dell offered a 4-port Gigabit Ethernet card, but this second offering gives M710HD users the ability to upgrade to a fully converged 10Gb infrastructure.

This card is also the first adapter to offer “network partitioning,” or NPAR, an industry-first scheme that makes it possible to split the 10GbE pipe with fine granularity and without any fabric vendor lock-in. NPAR enables optimal use of the physical network links by allowing each 10GbE port to be carved into as many as four partitions, each presented to the operating system as a physical NIC, with bandwidth shares that together total the full 10Gb (see the sketch below). This happens without additional software and without any CPU overhead. The NPAR scheme is handled by the Unified Server Configurator, enabled by the Lifecycle Controller that is embedded on the server. For more information on the Unified Server Configurator, check out Dell’s website:
http://content.dell.com/us/en/enterprise/d/solutions/unified-server-configurator.aspx
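
To make that carving concrete, here is a minimal, hypothetical sketch in Python. It only illustrates the rules described above (up to four partitions per 10GbE port, with shares totaling at most the full pipe); the real configuration happens in firmware through the Unified Server Configurator, and every name below is my own invention, not a Dell API.

    # Hypothetical sketch of NPAR's carving rules -- illustrative only,
    # not a Dell API. Real configuration happens in firmware via the
    # Unified Server Configurator.

    LINK_GBPS = 10.0
    MAX_PARTITIONS = 4

    def carve_port(shares_gbps):
        """Validate a partition layout for one 10GbE port and describe
        the NICs the operating system would see."""
        if not 1 <= len(shares_gbps) <= MAX_PARTITIONS:
            raise ValueError(f"a port supports 1-{MAX_PARTITIONS} partitions")
        if sum(shares_gbps) > LINK_GBPS:
            raise ValueError("partition shares cannot exceed the 10Gb link")
        # Each partition shows up to the OS as its own NIC (PCI function).
        return [{"partition": i, "gbps": s} for i, s in enumerate(shares_gbps)]

    # Example: one port carved into a 4Gb partition (say, for iSCSI)
    # plus three 2Gb partitions for regular LAN traffic.
    for nic in carve_port([4.0, 2.0, 2.0, 2.0]):
        print(nic)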

*Oracle: Based on Dell and Oracle testing performed in March 2011 running an industry-standard SPEC Java Enterprise benchmark. SPEC® is a registered trademark of the Standard Performance Evaluation Corporation. Actual performance will vary based on configuration, usage and manufacturing variability.
**Consolidation: The “up to 18:1 server consolidation performance with return on investment in about one year” claim is estimated based on a comparison between 4S MP Intel® Xeon® processor 7041 (dual-core with Intel® Hyper-Threading Technology, 4M cache, 3.00GHz, 800MHz FSB, formerly code named Paxville) and 4S Intel® Xeon® processor E7-4870 (30M cache, 2.40GHz, 6.4GT/s Intel® QPI, code named Westmere-EX) based servers. The calculation includes analysis based on performance, power, cooling, electricity rates, operating system annual license costs and estimated server costs. It assumes 42U racks, $0.10 per kWh, cooling costs of 2x the server power consumption costs, an operating system license cost of $900/year per server, a per-server cost of $36,000 based on estimated list prices, and estimated server utilization rates. All dollar figures are approximate. Estimated SPECint*_rate_base2006 performance and power results are measured for Intel® Xeon® processor E7-4870 based servers and estimated for Intel Xeon processor 7041 based servers. Platform power was measured during the steady state window of the benchmark run and at idle. Performance gain compared to the baseline was 18x (truncated).
***Based on DVD Store 2 benchmark testing performed by Dell Labs in March 2011. Actual performance and power draw will vary based on configuration, usage and manufacturing variability.
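
As a footnote to the footnotes, the consolidation claim above is just arithmetic once the inputs are fixed, and a back-of-envelope sketch shows how such a one-year ROI estimate can fall out. Only the rates ($0.10 per kWh, cooling at 2x server power, $900/year OS license, $36,000 per server) and the 18:1 ratio come from the fine print; the per-server power draws below are placeholder values I have assumed, so treat the output as illustrative rather than Dell’s actual model.

    # Back-of-envelope consolidation ROI sketch. Rates come from the
    # footnote above; the wattage figures are assumed placeholders.

    OLD_SERVERS, NEW_SERVERS = 18, 1      # 18:1 consolidation ratio
    KWH_RATE = 0.10                       # $ per kWh
    COOLING_MULT = 2.0                    # cooling costs 2x server power
    OS_LICENSE = 900                      # $ per server per year
    NEW_SERVER_COST = 36_000              # $ estimated list price
    OLD_WATTS, NEW_WATTS = 700, 1_000     # assumed average draw per server

    def annual_opex(n_servers, watts):
        power_cost = n_servers * watts / 1000 * 24 * 365 * KWH_RATE
        return power_cost * (1 + COOLING_MULT) + n_servers * OS_LICENSE

    savings = annual_opex(OLD_SERVERS, OLD_WATTS) - annual_opex(NEW_SERVERS, NEW_WATTS)
    print(f"annual opex savings: ${savings:,.0f}")
    print(f"payback on one ${NEW_SERVER_COST:,} server: {NEW_SERVER_COST / savings:.1f} years")

With these placeholder figures the payback works out to well under a year, which is at least consistent with the “about one year” claim.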

4 thoughts on “Dell Announces Blade Refresh and NIC Partitioning (NPAR)”

  1. SomeOne

    NIC partitioning sounds a lot like HP Virtual Connect with their FlexNICs, which was released, what, two years ago? Their recent LOMs are also CNAs and have similar partitioning features.

  2. Srinivas Thodati

    The major differences are: First, it is switch agnostic, a first in the industry. You can use any standard switch and are not restricted to one, as HP does with Virtual Connect Flex-10.

    Second major difference: “dynamic bandwidth allocation/utilization,” meaning even if a partitioned NIC is carved out with a specific bandwidth, say 3Gig, and that NIC needs more to handle burst traffic exceeding 3Gig at a time, the system is smart enough to allocate it as long as the other partitioned NICs are not using it. In other words, bandwidth is efficiently and dynamically allocated. That is also a first in the industry.

    It is configurable at the time of POST.

    It supports jumbo frame configuration on a per-partition basis, so you could assign one partition as an iSCSI HBA supporting jumbos. That's a great convenience if a customer decides to enable one partition as an iSCSI HBA and the rest as normal NICs.
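
    To illustrate what that dynamic allocation might look like, here is a rough Python sketch. The logic and names are my own guess at the behavior being described, not Dell’s or Broadcom’s actual scheduler: each partition keeps its configured guarantee, and unused link capacity is lent out to partitions that need to burst.

        # Assumed illustration of dynamic bandwidth allocation across
        # NPAR partitions -- not the real firmware scheduler.

        def effective_bandwidth(demand_gbps, guarantee_gbps, link_gbps=10.0):
            """Each partition first gets min(demand, guarantee); leftover
            link capacity is then shared among partitions wanting more."""
            granted = [min(d, g) for d, g in zip(demand_gbps, guarantee_gbps)]
            leftover = link_gbps - sum(granted)
            wanting = [max(d - g, 0.0) for d, g in zip(demand_gbps, guarantee_gbps)]
            total_want = sum(wanting)
            if total_want > 0:
                granted = [g + min(w, leftover * w / total_want)
                           for g, w in zip(granted, wanting)]
            return granted

        # A 3Gig partition bursting to 6Gig while the others sit idle:
        print(effective_bandwidth([6.0, 0.0, 0.0, 0.0], [3.0, 3.0, 2.0, 2.0]))
        # -> [6.0, 0.0, 0.0, 0.0]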

  3. Joe Lemaire

    I thought Cisco had the same technology with their UCS M81KR (Palo) VIC? It was my understanding that the advantage of that card was to be able to carve it up into different interfaces, all running over a single connection.

  4. sagarzx

    Hi all,
    Can you tell me the difference between the PowerEdge M910 blade server (Intel Xeon E7 support, TPM) and the PowerEdge M710HD blade server?

    It is urgent, so a quick reply would be highly appreciated.

    Thank you

Comments are closed.