Dell today announced a refresh of the PowerEdge M910 blade server based on the Intel Xeon E7 processor. The M910 is a full-height blade that can hold 512GB of RAM across 32 DIMM slots. The refreshed M910 blade server will also feature Dell’s FlexMem Bridge, which enables users to populate all 32 DIMM slots with only two CPUs. You can read more about the M910 blade server in an earlier blog post of mine here.

According to the Dell press release issued today, the systems being refreshed with the Intel Xeon E7 processor (the PowerEdge M910 blade server as well as the PowerEdge R910 and R810 rack servers) offer customers substantial performance gains, including:

• Up to 38 percent improvement in Oracle application server and database performance over previous-generation Xeon 7500 “Nehalem-EX” based servers*.
• Up to an 18:1 server consolidation ratio over four-socket, dual-core processor-based systems, offering up to 93 percent lower operating costs and a return on investment in about one year**.
• Up to 34 percent improvement in SQL database virtualization performance, and 49 percent higher performance per watt with the combination of Xeon E7 processors and new low-voltage memory (LV RDIMM) offerings***.

According to Dell, the new Intel Xeon E7 CPUs will be shipping this week.

NIC Partitioning (NPAR)

Dell is also announcing the addition of a Converged Network Adapter (CNA) option for the M710HD’s network daughter card, or NDC.  I wrote about the NDC in this post, but as a recap, it’s a removable card that provides the blade server’s LOM (LAN on Motherboard) network adapters.  Previously Dell offered only a 4-port Gigabit Ethernet card; this second offering lets users of the M710HD upgrade to a fully converged 10Gb infrastructure.  This card is also the first adapter to offer a “network partitioning” (NPAR) scheme, an industry first that makes it possible to split the 10GbE pipe with granularity, free of any fabric vendor lock-in.  NPAR enables optimal use of physical network links by allowing each 10GbE port to be carved up into as many as four partitions that the operating system sees as physical NICs, together totaling 10Gb, without additional software and without any CPU overhead.  The NPAR scheme is configured through the Unified Server Configurator, enabled by the Lifecycle Controller that is embedded on the server.  For more information on the Unified Server Configurator, check out Dell’s website:
http://content.dell.com/us/en/enterprise/d/solutions/unified-server-configurator.aspx
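To make the partitioning idea concrete, here is a minimal sketch of what NPAR does from the operating system’s point of view: one 10GbE port is presented as up to four partitions, each with a configured maximum bandwidth, and the configured limits cannot exceed the 10Gb pipe. The class and method names here are purely illustrative, not any real Dell or Broadcom interface.

```python
# Minimal sketch of the NPAR concept: one 10GbE port split into up to four
# partitions, each appearing to the OS as its own NIC.  Illustrative only.

PORT_CAPACITY_GBPS = 10
MAX_PARTITIONS = 4

class TenGbePort:
    def __init__(self):
        self.partitions = {}  # partition name -> max bandwidth in Gbps

    def add_partition(self, name, max_gbps):
        if len(self.partitions) >= MAX_PARTITIONS:
            raise ValueError("a port supports at most four partitions")
        if sum(self.partitions.values()) + max_gbps > PORT_CAPACITY_GBPS:
            raise ValueError("partition limits cannot exceed the 10Gb pipe")
        self.partitions[name] = max_gbps

port = TenGbePort()
port.add_partition("lan", 4)      # general LAN traffic
port.add_partition("iscsi", 4)    # converged storage traffic
port.add_partition("vmotion", 2)  # e.g. live-migration traffic
print(port.partitions)            # three partitions totaling the full 10Gb
```

The useful property is that the split is enforced at the adapter, so no hypervisor software or CPU cycles are spent policing the bandwidth.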

*Oracle: Based on Dell and Oracle testing performed in March 2011 running an industry-standard SPEC Java Enterprise benchmark. SPEC® is a registered trademark of the Standard Performance Evaluation Corporation. Actual performance will vary based on configuration, usage and manufacturing variability.
**Consolidation: The “up to 18:1 server consolidation performance with return on investment in about one year” claim is estimated based on a comparison between 4S MP Intel® Xeon® processor 7041 (dual-core with Intel® Hyper-Threading Technology, 4M cache, 3.00GHz, 800MHz FSB, formerly code-named Paxville) and 4S Intel® Xeon® processor E7-4870 (30M cache, 2.40GHz, 6.4GT/s Intel® QPI, code-named Westmere-EX) based servers. The calculation includes analysis based on performance, power, cooling, electricity rates, operating system annual license costs and estimated server costs. It assumes 42U racks, $0.10 per kWh, cooling costs of 2x the server power consumption costs, an operating system license cost of $900/year per server, a per-server cost of $36,000 based on estimated list prices, and estimated server utilization rates. All dollar figures are approximate. Estimated SPECint*_rate_base2006 performance and power results are measured for Intel® Xeon® processor E7-4870 and estimated for Intel Xeon processor 7041 based servers. Platform power was measured during the steady-state window of the benchmark run and at idle. Performance gain compared to baseline was 18x (truncated).
***Based on DVD Store 2 benchmark testing performed by Dell Labs in March 2011. Actual performance and power draw will vary based on configuration, usage and manufacturing variability.
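The consolidation footnote describes a simple cost model: electricity at $0.10/kWh, cooling costed at 2x the server power cost, a $900/year OS license per server, and a $36,000 new server price, with 18 old servers replaced by one new one. The sketch below works through that arithmetic; the per-server power draws are placeholder assumptions, not measured figures from the benchmark.

```python
# Rough sketch of the cost model in the consolidation footnote.
# Wattages are hypothetical placeholders, not Dell's measured values.

KWH_RATE = 0.10          # dollars per kWh (from the footnote)
COOLING_FACTOR = 2.0     # cooling costed at 2x the server power cost
OS_LICENSE = 900         # dollars per server per year
NEW_SERVER_COST = 36000  # estimated list price per new server
HOURS_PER_YEAR = 24 * 365

def yearly_opex(num_servers, watts_per_server):
    power_cost = num_servers * watts_per_server / 1000 * HOURS_PER_YEAR * KWH_RATE
    return power_cost * (1 + COOLING_FACTOR) + num_servers * OS_LICENSE

old = yearly_opex(num_servers=18, watts_per_server=700)  # hypothetical draw
new = yearly_opex(num_servers=1, watts_per_server=600)   # hypothetical draw
savings = old - new
payback_years = NEW_SERVER_COST / savings
print(f"yearly savings ${savings:,.0f}, payback {payback_years:.1f} years")
```

With these placeholder wattages the payback comes out under a year, which is consistent in spirit with the “return on investment in about one year” claim; the real figure depends on the actual measured power and utilization numbers.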

  • http://twitter.com/supertsai Peter Tsai

    For more information about NIC Partitioning (NPAR) on Dell Blade Servers, I just posted a whitepaper authored by Broadcom on Dell TechCenter – “Enhancing Scalability Through Network Interface Card Partitioning” @ http://www.delltechcenter.com/page/Blades+White+Papers+and+Third-Party+Studies

  • http://pulse.yahoo.com/_65PNS47CC5PBYTBTBEK4OT2DRY SomeOne

    NIC partitioning sounds a lot like HP Virtual Connect with its FlexNICs, which was released, what, two years ago? HP’s recent LOMs are also CNAs and have similar partitioning features.

  • http://twitter.com/sNivasT Srinivas Thodati

    The major differences are: first, it is switch agnostic, which is an industry first. You can use any standard switch; you are not restricted to one, as HP is with Virtual Connect Flex-10.

    Second, it offers dynamic bandwidth allocation: even if a partitioned NIC is carved out with a specific bandwidth, say 3Gb, and that NIC needs more to handle a traffic burst exceeding 3Gb, the system is smart enough to allocate the extra bandwidth if the other partitioned NICs are not using it. Bandwidth is allocated efficiently and dynamically, which is also an industry first.

    It is configurable at POST time.

    It supports jumbo frame configuration on a per-partition basis, so you could assign one partition as an iSCSI HBA with jumbo frames enabled; that is a great convenience if a customer decides to run one partition as an iSCSI HBA and the rest as normal NICs.
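    The dynamic allocation behavior described above can be sketched as follows: each partition first receives what it demands up to its configured share, and a bursting partition may then borrow whatever capacity the others leave idle. This is purely illustrative; the real arbitration happens in the adapter hardware.

```python
# Sketch of dynamic bandwidth allocation across NPAR partitions.
# Illustrative only; not the adapter's actual arbitration algorithm.

PORT_CAPACITY_GBPS = 10

def effective_bandwidth(configured, demand, name):
    """configured/demand: dicts of partition name -> Gbps; returns the
    bandwidth the named partition actually gets."""
    # Each partition first gets its demand, capped at its configured share.
    granted = {p: min(demand[p], configured[p]) for p in configured}
    spare = PORT_CAPACITY_GBPS - sum(granted.values())
    # A bursting partition may then borrow the idle capacity.
    extra = demand[name] - granted[name]
    return granted[name] + min(max(extra, 0), spare)

configured = {"lan": 3, "iscsi": 4, "backup": 3}
demand = {"lan": 5, "iscsi": 1, "backup": 0}
print(effective_bandwidth(configured, demand, "lan"))  # prints 5: lan bursts past its 3Gb share
```

    When every partition is busy there is no spare to borrow, and each partition falls back to its configured share.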

  • http://profiles.google.com/sendemail2joe Joe Lemaire

    I thought Cisco had the same technology with its UCS M81KR (Palo) VIC? It was my understanding that the advantage of that card was being able to carve it up into different interfaces, all running over a single connection.


  • sagarzx

    Hi all,
    can you tell me the difference between the PowerEdge M910 blade server (Intel Xeon E7 support, TPM) and the PowerEdge M710HD blade server?

    It is urgent, so a quick reply will be highly appreciated.

    Thank you
