Network interface card partitioning (NPAR) allows users to reduce the number of physical network interface cards (NICs) they deploy and to separate Local Area Network (LAN) and Storage Area Network (SAN) connections. NPAR improves bandwidth allocation, network traffic management, and utilization in both virtualized and non-virtualized network environments. Even as the number of physical servers shrinks, the demand for NIC ports keeps growing. This blog describes how to validate, enable, and configure NPAR on the Dell PowerEdge MX platform through the server System Setup and the MX compute sled Server Templates within Dell OpenManage Enterprise – Modular (OME-M). Read the full blog here.
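The enable-and-configure workflow above lives in System Setup and OME-M templates, but the bandwidth allocation behind NPAR is simple to sketch. The following is a conceptual model only, not a Dell or Broadcom API: the function and field names are hypothetical. It captures the common NPAR rules that a physical port splits into up to four partitions and that the minimum-bandwidth weights across those partitions must total 100%.

```python
# Conceptual illustration of NPAR bandwidth weighting (hypothetical model,
# not a real Dell/Broadcom interface). NPAR presents one physical 10GbE
# port to the OS as up to four independent NICs ("partitions"), each with
# a guaranteed minimum share of the port's bandwidth.

def validate_npar(partitions, port_speed_gbps=10):
    """Check that a partition layout is valid and return each partition's
    guaranteed bandwidth in Gbps."""
    if not 1 <= len(partitions) <= 4:
        raise ValueError("a physical port supports 1-4 partitions")
    if sum(p["min_pct"] for p in partitions) != 100:
        raise ValueError("minimum-bandwidth weights must total 100%")
    return {p["name"]: port_speed_gbps * p["min_pct"] / 100
            for p in partitions}

# Example layout: two LAN partitions plus one SAN partition on one port.
layout = [
    {"name": "LAN-A", "min_pct": 50},  # general LAN traffic
    {"name": "LAN-B", "min_pct": 20},  # e.g. migration/management traffic
    {"name": "SAN",   "min_pct": 30},  # iSCSI or FCoE storage traffic
]
print(validate_npar(layout))  # {'LAN-A': 5.0, 'LAN-B': 2.0, 'SAN': 3.0}
```

In the real feature these weights are floors, not caps: an idle partition's bandwidth can be borrowed by busy ones, which is where the utilization benefit comes from.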
As my final blog post of the year, I always like taking a look into my 2022 metrics. It’s fun to see what people search for, and where they come from, so here’s what I found. Before I begin, let me say thank you! I appreciate each and every one of you who take the time to read this simple little blog on blade servers. While I’d love to be the next ServeTheHome, I’m happy with continuing to provide blade server news as my little hobby. Continue reading
Last week I mentioned NPAR as a feature found in most Broadcom adapters, like the ones in the PowerEdge MX740c blade server. I realized that although NPAR is nearly a decade old, it may not be well-known by readers, so I thought I’d take a few minutes to break it down for you. Continue reading
If you are a reader of BladesMadeSimple, you are no stranger to Dell’s Network Daughter Card (NDC), but if it is a new term for you, let me give you the basics. Up until now, blade servers came with network interface cards (NICs) pre-installed as part of the motherboard. Most servers came standard with dual-port 1Gb Ethernet NICs on the motherboard, so if you invested in 10Gb Ethernet (10GbE) or other converged technologies, the onboard NICs were stuck at 1Gb Ethernet. As technology advanced and 10Gb Ethernet became more prevalent in the data center, blade servers entered the market with 10GbE standard on the motherboard. If, however, you weren’t implementing 10GbE, then you found yourself paying for technology that you couldn’t use. Basically, whatever came standard on the motherboard is what you were stuck with – until now.
I’ve learned over the years that it is very easy to focus on the feeds and speeds of a server while overlooking the features that truly differentiate it. When you look under the covers, a server’s CPU and memory are going to be roughly equal to the competition’s, so the innovation that goes into the rest of the server is where the focus should be. On Dell’s community blog, Rob Bradfield, a Senior Blade Server Product Line Consultant in Dell’s Enterprise Product Group, discusses some of the innovation and reliability that goes into Dell blade servers. I encourage you to take a look at Rob’s blog post at http://dell.to/mXE7iJ. Continue reading
Dell announced today the addition of a full-height, four-socket PowerEdge M915 blade server based on the AMD Opteron 6100 series CPU family. Best known by the code name “Magny-Cours”, this CPU family boasts up to 12 CPU cores with 512KB of L2 cache per core and 12MB of shared L3 cache. The AMD Opteron 6100 family also features AMD CoolCore™ technology, AMD PowerNow!™ technology, Enhanced C1 state, and AMD CoolSpeed technology.
Dell announced today a refresh of the PowerEdge M910 blade server based on the Intel Xeon E7 processor. The M910 is a full-height blade that can hold 512GB of RAM across 32 DIMMs. The refreshed M910 blade server will also feature Dell’s FlexMem bridge that enables users to use all 32 DIMM slots with only 2 CPUs. You can read more about the M910 blade server in an earlier blog post of mine here.
According to the Dell press release issued today Continue reading