Dell today announced a refresh of the PowerEdge M910 blade server based on the Intel Xeon E7 processor. The M910 is a full-height blade that can hold 512GB of RAM across 32 DIMMs. The refreshed M910 will also feature Dell’s FlexMem Bridge, which enables users to access all 32 DIMM slots with only two CPUs installed. You can read more about the M910 blade server in an earlier blog post of mine here.
According to the Dell press release issued today, the systems being refreshed with the Intel Xeon E7 processor, which include the PowerEdge M910 blade server as well as the PowerEdge R910 and R810, offer customers substantial performance gains, including:
• Up to 38 percent improvement in Oracle application server and database performance over previous-generation Xeon 7500 “Nehalem-EX” based servers*.
• Up to an 18:1 server consolidation ratio over four-socket, dual-core-processor-based systems, offering up to 93 percent lower operating costs and a one-year return on investment**.
• Up to 34 percent improvement in SQL database virtualization performance and 49 percent higher performance per watt with the combination of Xeon E7 processors and new Low Voltage memory (LV RDIMM) offerings***.
According to Dell, the new Intel Xeon E7 CPUs will be shipping this week.
NIC Partitioning (NPAR)
Dell is also announcing the addition of a Converged Network Adapter (CNA) for the M710HD’s network daughter card, or NDC. I wrote about the NDC in this post, but as a recap, it’s a removable card that provides the blade server’s LOM (LAN on Motherboard) network adapters. Previously Dell offered a 4-port Gigabit Ethernet card, but this second offering gives M710HD users the ability to upgrade to a fully converged 10Gb infrastructure. This card is also the industry’s first adapter to offer a “network partitioning,” or NPAR, scheme that makes it possible to split the 10GbE pipe with granularity, free of any fabric vendor lock-in. NPAR enables optimal use of physical network links, allowing each 10GbE port to be carved up into multiple physical NICs without the use of software and without any CPU overhead. For example, each 10GbE port can be divided into up to four physical NICs whose combined bandwidth totals 10Gb, offering more flexibility. The NPAR scheme is configured through the Unified Server Configurator, enabled by the Lifecycle Controller that is embedded on the server. For more information on the Unified Server Configurator, check out Dell’s website:
http://content.dell.com/us/en/enterprise/d/solutions/unified-server-configurator.aspx
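To make the partitioning idea a bit more concrete, here is a minimal, purely illustrative Python sketch of carving a single 10GbE port into up to four partitions. The names (NicPartition, carve_port, the bandwidth weights) are my own placeholders, not a Dell or Broadcom API; the real configuration is done in the Unified Server Configurator at boot, not in host software.

```python
# Illustrative model of NIC partitioning (NPAR) on one 10GbE port.
# Hypothetical names only; actual setup happens in the Unified Server
# Configurator / Lifecycle Controller, not in code like this.

MAX_PARTITIONS = 4          # each 10GbE port can be split into up to four partitions
PORT_BANDWIDTH_GB = 10.0    # total bandwidth of the physical port

class NicPartition:
    def __init__(self, name, weight_gb, personality="NIC"):
        self.name = name                  # e.g. "Port 1, Partition 1"
        self.weight_gb = weight_gb        # share of the 10Gb pipe assigned to this partition
        self.personality = personality    # "NIC" or "iSCSI"

def carve_port(partitions):
    """Check that the requested partitions fit on a single 10GbE port."""
    if len(partitions) > MAX_PARTITIONS:
        raise ValueError("a port supports at most four partitions")
    total = sum(p.weight_gb for p in partitions)
    if total > PORT_BANDWIDTH_GB:
        raise ValueError(f"requested {total}Gb exceeds the {PORT_BANDWIDTH_GB}Gb port")
    return partitions

# Example: three general-purpose NICs plus one iSCSI partition.
port = carve_port([
    NicPartition("Partition 1", 3.0),
    NicPartition("Partition 2", 2.0),
    NicPartition("Partition 3", 2.0),
    NicPartition("Partition 4", 3.0, personality="iSCSI"),
])
for p in port:
    print(f"{p.name}: {p.weight_gb}Gb assigned, personality={p.personality}")
```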
For more information about NIC Partitioning (NPAR) on Dell Blade Servers, I just posted a whitepaper authored by Broadcom on Dell TechCenter – “Enhancing Scalability Through Network Interface Card Partitioning” @ http://www.delltechcenter.com/page/Blades+White+Papers+and+Third-Party+Studies
NIC partitioning sounds a lot like HP Virtual Connect with their Flex NICs, which was released, what, two years ago? Their recent LOMs are also CNAs and have similar partitioning features.
The major differences are: First, it is switch agnostic, which is a first in the industry. You can use any standard switch and are not restricted to one, as HP does with Virtual Connect Flex-10.
Second, dynamic bandwidth allocation/utilization: even if a partitioned NIC is carved out with a specific bandwidth, say 3Gb, and that NIC needs more to handle burst traffic exceeding 3Gb at a time, the system is smart enough to allocate the extra bandwidth if the other partitioned NICs are not using it. In other words, bandwidth is efficiently and dynamically allocated, which is also an industry first (see the sketch after this list).
Third, it is configurable at the time of POST.
Fourth, it supports jumbo frame configuration on a per-partition basis, so you could assign one partition as an iSCSI HBA with jumbo frames enabled. That is a great convenience if a customer decides to enable one partition as an iSCSI HBA and the rest as normal NICs.
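To illustrate the dynamic bandwidth point above, here is a rough Python sketch, my own simplified model rather than the adapter’s actual arbitration logic, of how a partition carved to 3Gb could temporarily borrow idle bandwidth from its sibling partitions during a burst.

```python
# Simplified model of dynamic bandwidth allocation across NPAR partitions.
# Conceptual sketch only; the real arbitration runs in the adapter
# hardware/firmware, not in host software.

PORT_BANDWIDTH_GB = 10.0

def effective_bandwidth(demands, guarantees):
    """Grant each partition up to its guaranteed share, then lend any idle
    capacity to partitions whose demand exceeds their guarantee."""
    granted = {name: min(demands[name], guarantees[name]) for name in guarantees}
    spare = PORT_BANDWIDTH_GB - sum(granted.values())
    for name in guarantees:
        shortfall = demands[name] - granted[name]
        if shortfall > 0 and spare > 0:
            borrowed = min(shortfall, spare)
            granted[name] += borrowed
            spare -= borrowed
    return granted

# A partition guaranteed 3Gb bursts to 6Gb while its neighbours are quiet.
guarantees = {"p1": 3.0, "p2": 3.0, "p3": 2.0, "p4": 2.0}
demands    = {"p1": 6.0, "p2": 0.5, "p3": 1.0, "p4": 0.0}
print(effective_bandwidth(demands, guarantees))
# p1 gets its full 6Gb because p2-p4 are not using their shares.
```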
What’s different between Dell’s NPAR and Cisco’s UCS M81KR (Palo) VIC? I thought the advantage of the Palo card was being able to carve it up into different interfaces, all running over the same single connection.
Hi all,
Can you tell me the difference between the PowerEdge M910 blade server (Intel Xeon E7 support, TPM) and the PowerEdge M710HD blade server?
It is urgent, so a quick reply would be highly appreciated.
Thank you