Tag Archives: UCS

Blade Server Q&A

I’ve been getting some questions via email and I’ve seen some questions being asked in my LinkedIn group, “Blade Server Technologies” so I thought I’d take a few minutes in today’s post to answer these questions, as well as get your feedback.  Feel free to post your thoughts on these questions in the comments below. Continue reading


Cisco Announces 32 DIMM, 2 Socket Nehalem EX UCS B230-M1 Blade Server

Thanks to fellow blogger M. Sean McGee (http://www.mseanmcgee.com/), I was alerted to the fact that Cisco announced today, Sept. 14, the 13th blade server in the UCS family – the Cisco UCS B230 M1.

This newest addition performs a few tricks that no other vendor has been able to perform. Continue reading


384GB RAM in a Single Blade Server? How Cisco Is Making it Happen (UPDATED 1-22-10)

UPDATED 1/22/2010 with new pictures 
Cisco UCS B250 M1 Extended Memory Blade Server

Cisco’s UCS server line is already getting lots of press, but one of the biggest draws is their upcoming Cisco UCS B250 M1 Blade Server.  This full-width server occupies two of the 8 server slots available in a single Cisco UCS 5108 blade chassis.  The server can hold up to 2 x Intel Xeon 5500 Series processors and 2 x dual-port mezzanine cards, but the magic is in the memory – it has 48 memory slots.

This means it can hold 384GB of RAM using 8GB DIMMs.  This is huge for the virtualization marketplace, as everyone knows that virtual machines LOVE memory.  No other vendor in the marketplace is able to provide a blade server (or any 2-socket Intel Xeon 5500 server, for that matter) that can achieve 384GB of RAM.

 

So what’s Cisco’s secret?  First, let’s look at what Intel’s Xeon 5500 architecture looks like.

 
 

Intel Xeon 5500 memory architecture diagram

 

As you can see above, each Intel Xeon 5500 CPU has its own memory controller, which in turn has 3 memory channels.  Intel’s design limitation is 3 memory DIMMs (DDR3 RDIMMs) per channel, so the most a traditional two-socket server can have is 18 memory slots, or 144GB of RAM with 8GB DDR3 RDIMMs.
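
If you want to double-check that arithmetic, here is a quick back-of-the-envelope sketch (my own illustration of the numbers above, nothing from Intel or Cisco):

```python
# Max memory of a standard two-socket Xeon 5500 server, per the limits described above.
CPUS = 2                 # two sockets
CHANNELS_PER_CPU = 3     # one memory controller per CPU, three DDR3 channels each
DIMMS_PER_CHANNEL = 3    # Intel's design limit per channel
DIMM_SIZE_GB = 8         # 8GB DDR3 RDIMMs

slots = CPUS * CHANNELS_PER_CPU * DIMMS_PER_CHANNEL
print(slots, "slots,", slots * DIMM_SIZE_GB, "GB max")   # -> 18 slots, 144 GB max
```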

With the UCS B250 M1 blade server, Cisco adds an additional 15 memory slots per CPU, or 30 slots per server, for a total of 48 memory slots – which works out to 384GB of RAM with 8GB DDR3 RDIMMs.

 

Cisco UCS B250 M1 memory layout diagram

How do they do it?  Simple – on each memory channel they add 5 more DIMM slots (8 per channel instead of Intel’s 3) and place an ASIC between the memory controller and the DIMMs on that channel.  Each ASIC fronts 8 physical DIMMs and presents them to the memory controller as a smaller number of large logical DIMMs – four 8GB DIMMs appear as a single 32GB DIMM.  That works out to 3 ASICs and 24 DIMMs per CPU, which represents 192GB of RAM (or 384GB in a dual-CPU config).
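
Running the same back-of-the-envelope sketch with Cisco’s extra slots (again, just my own illustration of the figures above):

```python
# Max memory of the B250 M1 with the ASIC-fronted channels described above.
CPUS = 2
CHANNELS_PER_CPU = 3
DIMMS_PER_CHANNEL = 8    # 3 standard slots + 5 added by Cisco, sitting behind an ASIC
DIMM_SIZE_GB = 8

slots = CPUS * CHANNELS_PER_CPU * DIMMS_PER_CHANNEL
print(slots, "slots,", slots * DIMM_SIZE_GB, "GB max")   # -> 48 slots, 384 GB max
```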

It’s quite an ingenious approach, but don’t get caught up in thinking about 384GB of RAM – think about 48 memory slots.  In the picture below I’ve grouped the 8 DIMMs served by each ASIC in a green square (click to enlarge).

Cisco UCS B250 ASICs Grouped with 8 Memory DIMMs

With that many slots, you can get to 192GB of RAM using 4GB DDR3 RDIMMs – which currently cost about 1/5th of what the 8GB DIMMs cost.  That’s the real value in this server.
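
To show why that matters, here is a rough comparison using only the ~1/5 price ratio mentioned above (the price units are made up; only the ratio comes from this post):

```python
# Relative prices: only the ~5x ratio between 8GB and 4GB RDIMMs comes from the post.
relative_price = {"4GB": 1.0, "8GB": 5.0}

cost_192gb_with_48x4gb = 48 * relative_price["4GB"]   # 48 slots x 4GB = 192GB
cost_192gb_with_24x8gb = 24 * relative_price["8GB"]   # 24 slots x 8GB = 192GB
print(cost_192gb_with_48x4gb, "vs", cost_192gb_with_24x8gb)   # -> 48.0 vs 120.0 price units
```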

Cisco has published a white paper on this patented technology at http://www.cisco.com/en/US/prod/collateral/ps10265/ps10280/ps10300/white_paper_c11-525300.html so if you want to get more details, I encourage you to check it out.


Cisco’s Unified Computing System Management Software

Cisco’s own Omar Sultan and Brian Schwarz recently blogged about Cisco’s Unified Computing System (UCS) Manager software and offered up a pair of videos demonstrating its capabilities.  In my opinion, the management software is the magic that is going to push Cisco from the Visionaries quadrant of the Gartner Magic Quadrant for Blade Servers into the Leaders quadrant.

The Cisco UCS Manager is the centralized management interface that integrates the entire set of Cisco Unified Computing System components.  The management software not only handles UCS blade server provisioning, but also device discovery, inventory, configuration, diagnostics, monitoring, fault detection, auditing, and statistics collection.

On Omar’s Cisco blog, located at http://blogs.cisco.com/datacenter, Omar and Brian posted two videos.  Part 1 offers a general overview of the management software, whereas Part 2 highlights the capabilities of profiles.

I encourage you to check out the videos – they did a great job with them.


Cisco's New Virtualized Adapter (aka "Palo")

Previously known as “Palo”, Cisco’s virtualized adapter allows a server to split its 10Gb pipes into numerous virtual pipes – multiple NICs or multiple Fibre Channel HBAs.  Although the Palo adapter is a normal PCIe card, the initial launch of the card will be in the Cisco UCS blade server.

So, What’s the Big Deal?

When you look at server workloads, their needs vary – web servers need a pair of NICs, whereas database servers may need 4+ NICs and 2+ HBAs.  By having the ability to split the 10Gb pipe into virtual devices, you can set up profiles inside of Cisco’s UCS Manager that match a specific server’s needs.  An example of this would be a server being used for VMware VDI (6 NICs and 2 HBAs) during the day that is repurposed at night as a computational server needing only 4 NICs – see the sketch below.
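
To make that idea concrete, here is a toy sketch of the concept – this is not the actual UCS Manager API; the profile names, fields, and apply_profile() helper are all made up for illustration:

```python
# Hypothetical illustration of swapping virtual-device profiles on one physical blade.
profiles = {
    "vdi_day":     {"vnics": 6, "vhbas": 2},   # VMware VDI workload during the day
    "batch_night": {"vnics": 4, "vhbas": 0},   # computational workload at night
}

def apply_profile(server: str, profile_name: str) -> None:
    """Pretend to reprogram the adapter so it presents a new set of virtual devices."""
    p = profiles[profile_name]
    print(f"{server}: presenting {p['vnics']} vNICs and {p['vhbas']} vHBAs")

apply_profile("blade-1", "vdi_day")      # daytime personality
apply_profile("blade-1", "batch_night")  # nighttime personality
```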

Another thing to note is that although the adapter diagram shows 128 virtual devices, that is only the theoretical limitation.  The reality is that the number of virtual devices depends on the number of connections to the Fabric Interconnects.  As I previously posted, the server chassis has a pair of 4-port Fabric Extenders (aka FEX) that uplink to the UCS 6100 Fabric Interconnect.  If only 1 of the 4 ports is uplinked to the UCS 6100, then only 13 virtual devices will be available.  If 2 FEX ports are uplinked, then 28 virtual devices will be available.  If 4 FEX uplink ports are used, then 58 virtual devices will be available.
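
The counts above (13, 28, 58) fit a simple pattern of roughly 15 virtual devices per FEX uplink, minus 2 – my own observation from the quoted numbers, not an official Cisco formula:

```python
# Pattern inferred from the counts quoted above; treat it as an observation, not a spec.
for uplinks in (1, 2, 4):
    print(uplinks, "uplink(s) ->", 15 * uplinks - 2, "virtual devices")
# -> 1 uplink(s) -> 13, 2 uplink(s) -> 28, 4 uplink(s) -> 58
```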

Will the ability to carve up your 10Gb pipes into smaller ones make a difference?  It’s hard to tell.  I guess we’ll see when this card starts to ship in December of 2009.


Cisco's UCS Software

eWeek recently posted snapshots of Cisco’s Unified Computing System (UCS) Software on their site: http://www.eweek.com/c/a/IT-Infrastructure/LABS-GALLERY-Cisco-UCS-Unified-Computing-System-Software-199462/?kc=rss

Take a good look at the software, because it is the reason this blade system will be successful: Cisco treats the physical blades as a resource – just CPUs, memory and I/O.  “What” the server should be and “how” the server should act is a function of the UCS Manager software.  It will show you the physical layout of the blades relative to the UCS 6100 Interconnect, it can show you the configurations of the blades in the attached UCS 5108 chassis, it can set the boot order of the blades, and more.  Quite frankly, there are too many features to mention and I don’t want to steal their thunder, so take a few minutes to browse the gallery at the link above.
