Tag Archives: Nehalem EX

Cisco Announces 32 DIMM, 2 Socket Nehalem EX UCS B230-M1 Blade Server

Thanks to fellow blogger M. Sean McGee (http://www.mseanmcgee.com/), I was alerted to the fact that Cisco announced today, Sept. 14, the 13th blade server in the UCS family – the Cisco UCS B230 M1.

This newest addition performs a few tricks that no other vendor has pulled off. Continue reading

Dell M910 Blade Server – Based on the Nehalem EX

Dell appears to be first to market today with complete details on their Nehalem EX blade server, the PowerEdge M910. Based on the Nehalem EX technology (aka the Intel Xeon 7500 series), the server offers quite a lot of horsepower in a small, full-height blade server footprint.

Some details about the server:

  • uses Intel Xeon 7500 or 6500 series CPUs
  • has support for up to 512GB using 32 x 16GB DIMMs
  • comes standard with two embedded Broadcom NetXtreme II dual-port 5709S Gigabit Ethernet NICs with failover and load balancing
  • has two 2.5″ hot-swappable SAS/solid state drives
  • has 4 available I/O mezzanine card slots
  • comes with a Matrox G200eW w/ 8MB memory standard
  • can function on 2 CPUs with access to all 32 DIMM slots

Dell (finally) Offers Some Innovation
I commented a few weeks ago that Dell and innovation were rarely used in the same sentence; however, with today’s announcement, I’ll have to retract that statement. Before I elaborate on what I’m referring to, let me do some quick education. The design of the Nehalem architecture allows each processor (CPU) to have access to a dedicated bank of memory along with its own memory controller. The only downside is that if a CPU is not installed, the attached memory banks are not useable. THIS is where Dell is offering some innovation.

Today Dell announced its "FlexMem Bridge" technology. The concept is simple: it allows the memory attached to an unpopulated CPU socket to still be used. In essence, Dell is using a bridge module that connects the memory banks of unpopulated CPU sockets to the rest of the server’s populated CPUs. With this technology, a user could start off with only 2 CPUs and still have access to all 32 memory DIMM slots. Then, over time, if more CPUs are needed, they simply remove the FlexMem Bridge adapters from the CPU sockets and replace them with CPUs – now they have a 4 CPU x 32 DIMM blade server.
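To put rough numbers on what FlexMem Bridge buys you, here’s a quick sketch using the 16GB DIMM maximum quoted above (the little function and figures are my own illustration, not anything from Dell):

    # Rough illustration of FlexMem Bridge on an M910 (my own sketch, not a Dell tool).
    DIMM_SLOTS_PER_SOCKET = 8   # 32 slots spread across 4 sockets
    DIMM_SIZE_GB = 16           # largest DIMM size quoted above

    def usable_memory_gb(populated_cpus, flexmem_bridges):
        """Memory reachable when each bridged (empty) socket lends its DIMMs to a real CPU."""
        reachable_sockets = populated_cpus + flexmem_bridges
        return reachable_sockets * DIMM_SLOTS_PER_SOCKET * DIMM_SIZE_GB

    # Traditional 2-CPU configuration: the empty sockets' memory banks are stranded.
    print(usable_memory_gb(populated_cpus=2, flexmem_bridges=0))   # 256
    # 2 CPUs plus 2 FlexMem Bridges: all 32 slots, the full 512GB, are reachable.
    print(usable_memory_gb(populated_cpus=2, flexmem_bridges=2))   # 512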

Congrats to Dell. Very cool idea. The Dell PowerEdge M910 is available to order today from the Dell.com website.

 Let me know what you guys think.

Details on Intel’s Nehalem EX (Xeon 7500 and Xeon 6500)

Intel is scheduled to “officially” announce the details of its Nehalem EX CPU platform today. The details have been out for quite a while, but I wanted to highlight some key points.

Intel Xeon 7500 Series
This series will be the flagship replacement for the existing Xeon 7400 architecture. Enhancements include:
• Nehalem microarchitecture
• 8 cores per CPU
• 24MB shared L3 cache
• 4 memory buffers per CPU
• 16 DIMM slots per CPU, for a total of 64 DIMM slots supporting up to 1 terabyte of memory (across 4 CPUs)
• 72 PCIe Gen2 lanes
• Scaling from 2 to 256 sockets
• Intel Virtualization Technologies

Intel Xeon 6500 Series
Perhaps the coolest addition to the Nehalem EX announcement is the ability for certain vendors to cut the architecture in half and use the same horsepower across 2 CPUs. The Xeon 6500 series will offer 2-socket configurations, with each CPU having the same qualities as its bigger brother, the Xeon 7500 series. See below for details on both offerings.

Additional Features
Since the Xeon 6500/7500 series are modeled on the familiar Nehalem microarchitecture, certain well-known features carry over. Both Turbo Boost and Hyper-Threading are included, giving users better performance in their high-end servers.


Memory
Probably the biggest win among the features Intel is bringing with the Nehalem EX is the ability to have more memory and bigger memory pipes. Each CPU will have 4 high-speed “Scalable Memory Interconnects” (SMIs) that serve as the highways for memory traffic to and from the CPU. As with the existing Nehalem architecture, each CPU has a dedicated memory controller that provides access to the memory. In the Nehalem EX design, each CPU has 4 pathways, each with a Scalable Memory Buffer (SMB) that provides access to 4 memory DIMMs. So, in total, each CPU has access to 16 DIMMs across 4 pathways. Based on the simple math, a server with 4 CPUs will be able to have up to 64 memory DIMMs. Some other key facts:
• it will support up to 16GB DDR3 DIMMs
• it will support up to 1TB of memory with 16GB DIMMs
• it will support DDR3 DIMMs up to 1066MHz, in registered, single-rank, dual-rank and quad-rank flavors

Another important note is that the actual system memory speed will depend on the specific processor’s capabilities (see the reference table below for max SMI link speeds per CPU):
• 6.4GT/s SMI link speed – capable of running memory speeds up to 1066MHz
• 5.86GT/s SMI link speed – capable of running memory speeds up to 978MHz
• 4.8GT/s SMI link speed – capable of running memory speeds up to 800MHz
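Putting the capacity and speed figures above together, here’s a quick sketch of the math (my own illustration, not Intel reference code):

    # Nehalem EX memory math, per the figures above (my own sketch).
    SMI_LINKS_PER_CPU = 4     # Scalable Memory Interconnects per socket
    DIMMS_PER_SMB = 4         # each Scalable Memory Buffer fans out to 4 DIMMs
    MAX_DIMM_GB = 16          # largest supported DDR3 DIMM

    def max_memory_gb(sockets):
        dimm_slots = sockets * SMI_LINKS_PER_CPU * DIMMS_PER_SMB
        return dimm_slots * MAX_DIMM_GB

    print(max_memory_gb(4))   # 1024 GB -> the 64-DIMM / 1TB figure quoted above

    # Max memory speed is gated by the CPU's SMI link speed (per the list above).
    MAX_MEMORY_SPEED_MHZ = {"6.4GT/s": 1066, "5.86GT/s": 978, "4.8GT/s": 800}
    print(MAX_MEMORY_SPEED_MHZ["5.86GT/s"])   # 978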

Here’s a great chart to reference on the features across the individual CPU offerings, from Intel:

Finally, take a look at some comparisons between the Nehalem EX (Xeon 7500) and the previous generation, Xeon 7400:

That’s it for now.  Check back later for more specific details on Dell, HP, IBM and Cisco’s new Nehalem EX blade servers.

4 Socket Blade Server Density: Vendor Comparison

IMPORTANT NOTE – I updated this blog post on Feb. 28, 2011 with better details.  To view the updated blog post, please go to:

https://bladesmadesimple.com/2011/02/4-socket-blade-servers-density-vendor-comparison-2011/

Original Post (March 10, 2010):

With the Intel Nehalem EX processor a couple of weeks away, I wonder what impact it will have on the blade server market. I’ve been talking about IBM’s HX5 blade server for several months now, so it is very clear that the blade server vendors will be developing blades around some iteration of the Xeon 7500 processor. In fact, I’ve had several people confirm on Twitter that HP, Dell and even Cisco will be offering a 4 socket blade after Intel officially announces the processor on March 30. For today’s post, I wanted to take a look at how the 4 socket blade space will impact the overall capacity of a blade server environment. NOTE: this is purely speculation; I have no definitive information from any of these vendors that is not already public.

Cisco
The Cisco UCS 5108 chassis holds 8 “half-width” B200 blade servers or 4 “full-width” B250 blade servers, so when guessing at what design Cisco will use for a 4 socket Intel Xeon 7500 (Nehalem EX) architecture, I have to place my bet on the full-width form factor. Why? Simply because there is more real estate. The Cisco B250 M1 blade server is known for its large memory capacity, but Cisco could sacrifice some of that extra memory space for a 4 socket “Cisco B350” blade. This would pose a bit of an issue for customers wanting to implement a complete rack full of these servers, as it would only allow for a total of 28 servers in a 42U rack (7 chassis x 4 servers per chassis.)

Estimated Cisco B300 with 4 CPUs

On the other hand, Cisco is in a unique position in that their half-width form factor also has extra real estate because they don’t have 2 daughter card slots like their competitors.  Perhaps Cisco would create a half-width blade with 4 CPUs (a B300?)  With a 42U rack, and using a half-width design, you would be able to get a maximum of 56 blade servers (7 chassis x 8 servers per chassis.)

Dell
The 10U M1000e chassis from Dell can currently handle 16 “half-height” blade servers or 8 “full-height” blade servers. I don’t foresee any way that Dell would be able to put 4 CPUs into a half-height blade. There just isn’t enough room. To do this, they would have to sacrifice something, like memory slots or a daughter card expansion slot, which just doesn’t seem worth it. Therefore, I predict that Dell’s 4 socket blade will be a full-height blade server, probably named the PowerEdge M910. With this assumption, you would be able to get 32 blade servers in a 42U rack (4 chassis x 8 blades.)

HP
Similar to Dell, HP’s 10U BladeSystem c7000 chassis can currently handle 16 “half-height” blade servers or 8 “full-height” blade servers. I don’t foresee any way that HP would be able to put 4 CPUs into a half-height blade. There just isn’t enough room. To do this, they would have to sacrifice something, like memory slots or a daughter card expansion slot, which just doesn’t seem worth it. Therefore, I predict that HP’s 4 socket blade will be a full-height blade server, probably named the ProLiant BL680 G7 (yes, they’ll skip G6.) With this assumption, you would be able to get 32 blade servers in a 42U rack (4 chassis x 8 blades.)

IBM
Finally, IBM’s 9U BladeCenter H chassis offers up 14 servers. IBM has one server size, called “single-wide.” IBM will also have the ability to combine servers together to form a “double-wide,” which is what is needed for the newly announced IBM BladeCenter HX5. A double-wide blade server reduces the IBM BladeCenter’s capacity to 7 servers per chassis. This means you would be able to put 28 x 4 socket IBM HX5 blade servers into a 42U rack (4 chassis x 7 servers each.)
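Pulling those estimates together, the rack math works out like this (chassis heights and blades-per-chassis are as cited above; the 4 socket form factors are still my guesses):

    # 42U rack density for a hypothetical 4-socket blade from each vendor (speculative).
    RACK_U = 42
    vendors = {
        # vendor: (chassis height in U, 4-socket blades per chassis)
        "Cisco, full-width (B350?)":   (6, 4),
        "Cisco, half-width (B300?)":   (6, 8),
        "Dell, full-height (M910)":    (10, 8),
        "HP, full-height (BL680 G7?)": (10, 8),
        "IBM, double-wide (HX5)":      (9, 7),
    }
    for vendor, (chassis_u, blades_per_chassis) in vendors.items():
        chassis_per_rack = RACK_U // chassis_u
        print(vendor, "->", chassis_per_rack * blades_per_chassis, "blades per rack")
    # Cisco full-width: 28, Cisco half-width: 56, Dell: 32, HP: 32, IBM: 28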

Summary
In a tie for 1st place, at 32 blade servers in a 42U rack, Dell and HP would have the most blade server density based on their existing full-height blade server design. IBM and Cisco would come in at 3rd place with 28 blade servers in a 42U rack. However, if Cisco (or HP and Dell for that matter) were able to magically re-design their half-width/half-height servers to hold 4 CPUs, the density would jump – to 56 servers per rack for Cisco, or 64 for HP and Dell.

Yes, I know there is a slim chance that anyone would fill up a rack with 4 socket servers, but I thought this would be a good comparison to make. What are your thoughts? Let me know in the comments below.

Mark Your Calendar – Upcoming Announcements

As I mentioned previously, the next few weeks are going to be filled with new product / technology announcements. Here’s a list of some dates that you may want to mark on your calendar (and make sure to come back here for details):

Feb 9 – Big Blue new product announcement (hint: in the BladeCenter family)

Mar 2 – Big Blue non-product announcement (hint: it’s not the eX4 family)

Mar 16  – Intel Westmere (Intel Xeon 5600) Processor Announcement (expect HP and IBM to announce their Xeon 5600 offerings)

Mar 30 – Intel Nehalem EX (Xeon 7500) Processor Announcement (expect HP and IBM to announce their Intel Xeon 7500 offerings)

As always, you can expect for me to give you coverage on the new blade server technology as it gets announced!

More IBM BladeCenter Rumours…

Okay, I can’t hold back any longer – I have more rumours. The next 45 days are going to be EXTREMELY busy, with Intel announcing the Westmere EP processor, the successor to the Nehalem EP CPU, as well as the Nehalem EX CPU, the successor to the Xeon 7400 CPU. I’ll post more details on these processors as information becomes available, but for now, I want to talk about some additional rumours I’m hearing from IBM. As I’ve mentioned in my previous rumour post: this is purely speculation, I have no definitive information from IBM, so this may be false info. That being said, here we go:

Rumour #1:  As I previously posted, IBM has announced they will have a blade server based on their eX5 architecture – the next generation of the eX4 architecture found in the IBM System x3850 M2 and x3950 M2. I’ve posted what I think this new blade server will look like (you can see it here), and I had previously speculated that the server would be called the HS43 – however, it appears that IBM may be changing their nomenclature for this class of blade to “HX5”. I can see this happening – it’s a blend of “HS” and “eX5”. It is a new class of blade server, so it makes sense. I like the HX5 blade server name, although if you Google HX5 right now, you’ll get a lot of details about the Sony CyberShot DSC-HX5 digital camera. (Maybe IBM should reconsider using HS43 instead of HX5 to avoid any lawsuits.) It also makes it very clear that it is part of their eX5 architecture, so we’ll see if it gets announced that way.

Speaking of announcements…

Rumour #2:  While it is clear that Intel is waiting until March (31, I think) to announce the Nehalem EX and Westmere EP processors, I’m hearing rumours that IBM will be announcing their product offerings around the new Intel processors on March 2, 2010 in Toronto.  It will be interesting to see if this happens so soon (4 weeks away) but when it does, I’ll be sure to give you all the details!

That’s all I can talk about for now as “rumours”. I have more information on another IBM announcement that I cannot talk about, but come back to my site on Feb. 9 and you’ll find out what that new announcement is.

UNVEILED: First Blade Server Based on Intel Nehalem EX

The first blade server with the upcoming Intel Nehalem EX processor has finally been unveiled. While it is known that IBM will be releasing a 2 or 4 socket blade server with the Nehalem EX, no other vendor had revealed plans until now. SGI recently announced they will be offering the Nehalem EX on their Altix® UV platform.

Touted as “The World’s Fastest Supercomputer,” the UV line features the fifth generation of the SGI NUMAlink interconnect, which offers a whopping 15 GB/sec transfer rate, as well as direct access to up to 16TB of shared memory. The system can be configured with up to 2,048 Nehalem EX cores (via 256 processors, or 128 blades) in a single federation with a single global address space.

According to the SGI website, the UV will come in two flavors:

SGI Altix UV 1000

Altix UV 1000  – designed for maximum scalability, this system ships as a fully integrated cabinet-level solution with up to 256 sockets (2,048 cores) and 16TB of shared memory in four racks.

Altix UV 100 (not pictured) – same design as the UV 1000, but designed for the mid-range market;  based on an industry-standard 19″ rackmount 3U form factor. Altix UV 100 scales to 96 sockets (768 cores) and 6TB of shared memory in two racks.
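Those figures line up with 8-core Nehalem EX parts; here’s a quick sanity check of the arithmetic (mine, not SGI’s):

    # Altix UV scaling figures from above, cross-checked against 8-core Nehalem EX CPUs.
    CORES_PER_SOCKET = 8
    for model, sockets, memory_tb in [("Altix UV 1000", 256, 16), ("Altix UV 100", 96, 6)]:
        print(model, "->", sockets * CORES_PER_SOCKET, "cores,", memory_tb, "TB shared memory")
    # Altix UV 1000 -> 2048 cores, 16 TB; Altix UV 100 -> 768 cores, 6 TB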

SGI has given quite a bit of technical information about these servers in this whitepaper, including details about the Nehalem EX architecture that I haven’t even seen from Intel. SGI has also published several customer testimonials, including one from the University of Tennessee – so check it out here.

Hopefully, this is just the first of many announcements to come around the Intel Nehalem EX processor.

Cisco, EMC and VMware Announcement – My Thoughts


By now I’m sure you’ve read, heard or seen tweeted the announcement that Cisco, EMC and VMware have come together and created the Virtual Computing Environment coalition. So what does this announcement really mean? Here are my thoughts:

Greater Cooperation and Compatibility
Since these 3 top IT giants are working together, I expect to see greater cooperation between all three vendors, which will lead to a better understanding of what each vendor is offering. More important, though, is that we’ll have a reference architecture that can be a starting point for designing a robust datacenter. This will help validate that an “optimized datacenter” is a solution every customer should consider.

Technology Validation
With the introduction of the Xeon 5500 processor from Intel earlier this year and the announcement of the Nehalem EX coming early in Q1 2010, the ability to add more and more virtual machines onto a single host server is becoming more prevalent.  No longer is the processor or memory the bottleneck – now it’s the I/O.  With the introduction of Converged Network Adapters (CNAs), servers now have access to  Converged Enhanced Ethernet (CEE) or DataCenter Ethernet (DCE) providing up to 10Gb of bandwidth running at 80% efficiency with lossless packets.  With this lossless ethernet, I/O is no longer the bottleneck.

VMware offers the top selling virtualization software, so it makes sense they would be a good fit for this solution.

Cisco has a Unified Computing System that offers the ability to connect a server running a CNA to an interconnect switch that splits the data out into Ethernet and storage traffic. It also has a building-block design that makes it easy to add new servers – a key message in the coalition announcement.

EMC offers a storage platform that will handle the storage traffic from the Cisco UCS 6120XP interconnect switch, and they have a vested interest in both VMware and Cisco, so this marriage of the 3 top IT vendors is a great fit.

Announcement of Vblock™ Infrastructure Packages
According to the announcement, the Vblock Infrastructure Packages “will provide customers with a fundamentally better approach to streamlining and optimizing IT strategies around private clouds.” The packages will be fully integrated, tested and validated offerings that combine best-in-class virtualization, networking, computing, storage, security, and management technologies from Cisco, EMC and VMware with end-to-end vendor accountability. My thought on these packages is that they are really nothing new. Cisco’s UCS has been around, VMware vSphere has been around and EMC’s storage has been around. The biggest message from this announcement is that there will soon be “bundles” that simplify customers’ solutions. Will that take away from solution providers’ ability to implement unique solutions? I don’t think so. Although this announcement does not introduce any new product, it does mark the beginning of an interesting relationship between 3 top IT giants, and I think it will definitely change the industry – it will be interesting to see what follows.

UPDATE – click here to check out a 3D model of the Vblock architecture.

IBM Announces 4 Socket Intel Blade Server – UPDATED

IBM announced last week that they will be launching a new blade server based on the upcoming 4 socket Intel Nehalem EX. While details have not yet been provided on this new server, I wanted to offer an estimate of what it could look like, based on previous IBM models. I’ve drawn up what I think it will look like below, but first let me describe it.

“New Server Name”
IBM’s naming schema is pretty straightforward: Intel blades are “HS”, AMD blades are “LS”, Power blades are “JS”. Knowing this, I believe the new server will most likely be called the “HS42”. IBM previously had an HS40 and HS41, so calling it an HS42 would make the most sense.

“Size”
With the amount of memory that each CPU will have access to, I don’t see any way for IBM to create a 4 socket blade that isn’t a “double-wide” form factor. A “double-wide” design means the server is 2 server slots wide, so in a single IBM BladeCenter H chassis, customers would be limited to 7 x HS42s per chassis.

“Memory”
The Intel Nehalem EX will tentatively support 16 memory slots PER CPU, across 4 memory channels, so a 4 socket server could have 64 memory slots. Each memory channel can hold up to 4 DIMMs. This is great, but it is the MAX for an upcoming Intel Nehalem EX server. I do not expect any blade server vendor to achieve 64 memory slots with 4 CPUs. Since this is the maximum, it makes sense that vendors like IBM will use fewer slots. I expect these new servers to have 12 memory slots per CPU (or 3 DIMMs per memory channel). This would still provide 48 memory DIMMs per “HS42” blade server, and with 16GB DIMMs, that would equal 768GB per blade server.
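Here’s that guess worked out (the 3-DIMMs-per-channel figure is my speculation, as noted above):

    # Back-of-the-envelope memory math for the rumored 4-socket IBM blade (my estimate).
    SOCKETS = 4
    CHANNELS_PER_CPU = 4
    MAX_DIMMS_PER_CHANNEL = 4      # what the Nehalem EX platform allows
    GUESSED_DIMMS_PER_CHANNEL = 3  # my guess at what fits in a double-wide blade
    DIMM_SIZE_GB = 16

    platform_max_slots = SOCKETS * CHANNELS_PER_CPU * MAX_DIMMS_PER_CHANNEL   # 64
    guessed_slots = SOCKETS * CHANNELS_PER_CPU * GUESSED_DIMMS_PER_CHANNEL    # 48
    print(platform_max_slots, guessed_slots, guessed_slots * DIMM_SIZE_GB)    # 64 48 768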

“CPU”
The “HS42” would have up to 4 Intel Nehalem EX CPUs, each with 8 cores, for a total of 32 CPU cores per “HS42” server. HOWEVER, Intel is offering Hyper-Threading with this CPU, so an 8-core CPU looks like 16 logical CPUs to the operating system.

“Internal Drive Capacity”
I don’t see any way for IBM to have hot-swap drives in this server. There is just not enough real estate. So, I believe they would consider putting Solid State Drives (SSDs) toward the front of the server. Will they put them on both sides of the server? Probably not. The role of these drives would be just to provide space for your boot O/S. The data will sit on a storage area network.

“I/O Expansion”
I don’t think IBM will re-design their existing I/O architecture for the blade servers. Therefore, I expect each side of the double-wide “HS42” to have a single CIOv and a CFF-h daughter card expansion slot, so a single HS42 would have 4 expansion slots. This assumes that IBM designs the connector pins that join the two halves of the server so they don’t interfere with the card slots (presumably at the upper half of the connections.)

HS42 Estimation

As we come closer to the release date of the Intel Nehalem EX processor later in Q4 of 2009, I expect to hear more definitive details on the announced 4 socket IBM Blade server, so make sure to check back here later this year.

UPDATE (10/6/09):  I’m hearing rumors that IBM’s Nehalem EX processor offerings (aka “eX5” offerings) will be shipping in Q2 of 2010. Once that is confirmed by IBM, I’ll post an update.