Category Archives: Cisco

Cisco, IBM and HP Update Blade Portfolio with Westmere Processor

Intel officially announced the Xeon 5600 processor, code-named “Westmere,” today. Cisco, HP and IBM also announced blade servers featuring the new processor. The Intel Xeon 5600 offers:

  • 32nm process technology with 50% more threads and cache
  • Improved energy efficiency with support for 1.35V low power memory

There will be 4-core and 6-core offerings. The processor also provides the option of Hyper-Threading, so you could have up to 8 or 12 threads per processor, or 16 and 24 threads in a dual-CPU system. This will be a huge advantage for applications that thrive on multiple threads, like virtualization.
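To put that core/thread math in concrete terms, here is a minimal sketch (my own illustration, not anything from Intel or the vendors) of how the counts above work out:

```python
# Hypothetical illustration of the Xeon 5600 core/thread math described above.
def hardware_threads(cores_per_cpu, cpus=1, hyperthreading=True):
    """Return the total hardware thread count for a system."""
    threads_per_core = 2 if hyperthreading else 1
    return cores_per_cpu * threads_per_core * cpus

for cores in (4, 6):
    print(f"{cores}-core CPU: {hardware_threads(cores)} threads per processor, "
          f"{hardware_threads(cores, cpus=2)} threads in a dual-CPU blade")
# 4-core CPU: 8 threads per processor, 16 threads in a dual-CPU blade
# 6-core CPU: 12 threads per processor, 24 threads in a dual-CPU blade
```

Here’s a look at what each vendor has come out with: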

Cisco
The B200 M2 provides Cisco users with the new Xeon 5600 processors. It looks like Cisco will be offering a choice of the following Xeon 5600 processors: Intel Xeon X5670, X5650, E5640, E5620, L5640, or E5506. Because Cisco’s model is a “built-to-order” design, I can’t really provide any part numbers, but knowing which speeds they offer should help.

HP
HP is starting off by bumping their existing G6 models to include the Xeon 5600 processor. The look, feel, and options of the blade servers will remain the same – the only difference will be the new processor. According to HP, “the HP ProLiant G6 platform, based on Intel Xeon 5600 processors, includes the HP ProLiant BL280c, BL2x220c, BL460c and BL490c server blades and HP ProLiant WS460c G6 workstation blade for organizations requiring high density and performance in a compact form factor. The latest HP ProLiant G6 platforms will be available worldwide on March 29.” It appears that HP is waiting until March 29 to provide details on their Westmere blade offerings, so don’t go looking for part numbers or pricing on their website.

IBM
IBM is continuing to stay ahead of the game with details about their product offerings. They’ve refreshed their HS22 and HS22V blade servers:

HS22
7870ECU – Express HS22, 2x Xeon 4C X5560 95W 2.80GHz/1333MHz/8MB L2, 4x2GB, O/Bay 2.5in SAS, SR MR10ie

7870G4U – HS22, Xeon 4C E5640 80W 2.66GHz/1066MHz/12MB, 3x2GB, O/Bay 2.5in SAS

7870GCU – HS22, Xeon 4C E5640 80W 2.66GHz/1066MHz/12MB, 3x2GB, O/Bay 2.5in SAS, Broadcom 10Gb Gen2 2-port

7870H2U – HS22, Xeon 6C X5650 95W 2.66GHz/1333MHz/12MB, 3x2GB, O/Bay 2.5in SAS

7870H4U – HS22, Xeon 6C X5670 95W 2.93GHz/1333MHz/12MB, 3x2GB, O/Bay 2.5in SAS

7870H5U – HS22, Xeon 4C X5667 95W 3.06GHz/1333MHz/12MB, 3x2GB, O/Bay 2.5in SAS

7870HAU – HS22, Xeon 6C X5650 95W 2.66GHz/1333MHz/12MB, 3x2GB, O/Bay 2.5in SAS, Emulex Virtual Fabric Adapter

7870N2U – HS22, Xeon 6C L5640 60W 2.26GHz/1333MHz/12MB, 3x2GB, O/Bay 2.5in SAS

7870EGU – Express HS22, 2x Xeon 4C E5630 80W 2.53GHz/1066MHz/12MB, 6x2GB, O/Bay 2.5in SAS

HS22V
7871G2U HS22V, Xeon 4C E5620 80W 2.40GHz/1066MHz/12MB, 3x2GB, O/Bay 1.8in SAS

7871G4U HS22V, Xeon 4C E5640 80W 2.66GHz/1066MHz/12MB, 3x2GB, O/Bay 1.8in SAS

7871GDU HS22V, Xeon 4C E5640 80W 2.66GHz/1066MHz/12MB, 3x2GB, O/Bay 1.8in SAS

7871H4U HS22V, Xeon 6C X5670 95W 2.93GHz/1333MHz/12MB, 3x2GB, O/Bay 1.8in SAS

7871H5U HS22V, Xeon 4C X5667 95W 3.06GHz/1333MHz/12MB, 3x2GB, O/Bay 1.8in SAS

7871HAU HS22V, Xeon 6C X5650 95W 2.66GHz/1333MHz/12MB, 3x2GB, O/Bay 1.8in SAS

7871N2U HS22V, Xeon 6C L5640 60W 2.26GHz/1333MHz/12MB, 3x2GB, O/Bay 1.8in SAS

7871EGU Express HS22V, 2x Xeon 4C E5640 80W 2.66GHz/1066MHz/12MB, 6x2GB, O/Bay 1.8in SAS

7871EHU Express HS22V, 2x Xeon 6C X5660 95W 2.80GHz/1333MHz/12MB, 6x4GB, O/Bay 1.8in SAS

I could not find any information on what Dell will be offering from a blade server perspective, so if you have information (that is not confidential), feel free to send it my way.

4 Socket Blade Servers Density: Vendor Comparison

IMPORTANT NOTE – I updated this blog post on Feb. 28, 2011 with better details.  To view the updated blog post, please go to:

https://bladesmadesimple.com/2011/02/4-socket-blade-servers-density-vendor-comparison-2011/

Original Post (March 10, 2010):

With the Intel Nehalem EX processor just a couple of weeks away, I wonder what impact it will have on the blade server market.  I’ve been talking about IBM’s HX5 blade server for several months now, so it is very clear that the blade server vendors will be developing blades that will have some iteration of the Xeon 7500 processor.  In fact, I’ve had several people confirm on Twitter that HP, Dell and even Cisco will be offering a 4 socket blade after Intel officially announces it on March 30.  For today’s post, I wanted to take a look at how the 4 socket blade space will impact the overall capacity of a blade server environment.  NOTE: this is purely speculation; I have no definitive information from any of these vendors that is not already public.

Cisco
The Cisco UCS 5108 chassis holds 8 “half-width” B200 blade servers or 4 “full-width” B250 blade servers, so when guessing at what design Cisco will use for a 4 socket Intel Xeon 7500 (Nehalem EX) architecture, I have to place my bet on the full-width form factor.  Why?  Simply because there is more real estate.  The Cisco B250 M1 blade server is known for its large memory capacity; however, Cisco could sacrifice some of that extra memory space for a 4 socket “Cisco B350” blade.  This would pose a bit of an issue for customers wanting to implement a complete rack full of these servers, as it would only allow for a total of 28 servers in a 42U rack (7 chassis x 4 servers per chassis).

Estimated Cisco B300 with 4 CPUs

On the other hand, Cisco is in a unique position in that their half-width form factor also has extra real estate because they don’t have 2 daughter card slots like their competitors.  Perhaps Cisco would create a half-width blade with 4 CPUs (a B300?)  With a 42U rack, and using a half-width design, you would be able to get a maximum of 56 blade servers (7 chassis x 8 servers per chassis.)

Dell
The 10U M1000e chassis from Dell can currently handle 16 “half-height” blade servers or 8 “full height” blade servers.  I don’t foresee any way that Dell would be able to put 4 CPUs into a half-height blade.  There just isn’t enough room.  To do this, they would have to sacrifice something, like memory slots or a daughter card expansion slot, which just doesn’t seem worth it.  Therefore, I predict that Dell’s 4 socket blade will be a full-height blade server, probably named a PowerEdge M910.  With this assumption, you would be able to get 32 blade servers in a 42U rack (4 chassis x 8 blades).

HP
Similar to Dell, HP’s 10U BladeSystem c7000 chassis can currently handle 16 “half-height” blade servers or 8 “full height” blade servers.  I don’t foresee any way that HP would be able to put 4 CPUs into a half-height blade either.  There just isn’t enough room.  They would have to sacrifice something, like memory slots or a daughter card expansion slot, which just doesn’t seem worth it.  Therefore, I predict that HP’s 4 socket blade will be a full-height blade server, probably named a ProLiant BL680 G7 (yes, they’ll skip G6).  With this assumption, you would be able to get 32 blade servers in a 42U rack (4 chassis x 8 blades).

IBM
Finally, IBM’s 9U BladeCenter H chassis offers up 14 servers.  IBM has one size server, called a “single-wide.”  IBM will also have the ability to combine servers together to form a “double-wide”, which is what is needed for the newly announced IBM BladeCenter HX5.  A double-wide blade server reduces the IBM BladeCenter’s capacity to 7 servers per chassis.  This means that you would be able to put 28 x 4 socket IBM HX5 blade servers into a 42U rack (4 chassis x 7 servers each).

Summary
In a tie for 1st place, at 32 blade servers in a 42U rack, Dell and HP would have the most blade server density based on their existing full-height blade server designs.  IBM and Cisco would come in at 3rd place with 28 blade servers in a 42U rack.  However, if Cisco (or HP and Dell for that matter) were able to magically re-design their half-height servers to hold 4 CPUs, then they would take 1st place for blade density with 56 servers.
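If you want to check the rack math yourself, here’s a quick back-of-the-envelope sketch (my own, using only the chassis heights and per-chassis blade counts assumed in this post):

```python
# Back-of-the-envelope 4-socket blade density per 42U rack,
# using the chassis assumptions from this post.
RACK_U = 42

# (design, chassis height in U, 4-socket blades per chassis)
designs = [
    ("Cisco UCS 5108, full-width blade", 6, 4),
    ("Cisco UCS 5108, half-width blade (hypothetical 'B300')", 6, 8),
    ("Dell M1000e, full-height blade", 10, 8),
    ("HP BladeSystem c7000, full-height blade", 10, 8),
    ("IBM BladeCenter H, double-wide blade", 9, 7),
]

for name, chassis_u, blades in designs:
    chassis_per_rack = RACK_U // chassis_u
    print(f"{name}: {chassis_per_rack} chassis x {blades} blades "
          f"= {chassis_per_rack * blades} blades per rack")
```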

Yes, I know that there are slim chances that anyone would fill up a rack with 4 socket servers, however I thought this would be a good comparison to make.  What are your thoughts?  Let me know in the comments below.

Tolly Report: HP Flex-10 vs Cisco UCS (Network Bandwidth Scalability Comparison)

Tolly.com announced on 2/25/2010 a new Test Report that compares the network bandwidth scalability between HP BladeSystem c7000 with BL460 G6 Servers and Cisco UCS 5100 with B200 Servers, and the results were interesting.  The report tested 6 HP blades with a single Flex-10 Module vs 6 Cisco blades using their Fabric Extender plus a single Fabric Interconnect.  I’m not going to try and re-state what the report says (for that, you can download it directly); instead, I’m going to highlight the results.  It is important to note that the report was “commissioned by Hewlett-Packard Dev. Co, L.P.”

Result #1:  HP BladeSystem C7000 with a Flex-10 Module Tested to have More Aggregate Server Throughput (Gbps) over the Cisco UCS with a Fabric Extender connected to a Fabric Interconnect in a Physical-to-Physical Comparison
  • The test shows that when 4 physical servers were tested, Cisco achieved an aggregate throughput of 36.59 Gbps vs HP achieving 35.83 Gbps (WINNER: Cisco)
  • When 6 physical servers were tested, Cisco achieved an aggregate throughput of 27.37 Gbps vs HP achieving 53.65 Gbps – a difference of 26.28 Gbps (WINNER: HP)

Result #2:  HP BladeSystem C7000 with a Flex-10 Module Tested to have More Aggregate Server Throughput (Gbps) over the Cisco UCS with a Fabric Extender connected to a Fabric Interconnect in a Virtual-to-Virtual Comparison
  • Testing 2 servers, each running 8 Red Hat Linux virtual machines on VMware, showed that HP achieved an aggregate throughput of 16.42 Gbps vs Cisco UCS achieving 16.70 Gbps (WINNER: Cisco)

The test above was performed with the 2 x Cisco B200 blade servers each mapped to a dedicated 10Gb uplink port on the Fabric Extender (FEX).  When the 2 x Cisco B200 blade servers were configured to share the same 10Gb uplink port on the FEX, the aggregate throughput achieved on the Cisco UCS decreased to 9.10 Gbps.

A few points to note about these findings:
a) the HP Flex-10 Module has 8 x 10Gb uplinks whereas the Cisco Fabric Extender (FEX) has 4 x 10Gb uplinks

b) Cisco’s FEX design allows the 8 blade servers to extend out the 4 external ports on the FEX at a 2:1 ratio (2 blades per external FEX port). The current Cisco UCS design requires the servers to be “pinned”, or permanently assigned, to their respective FEX uplinks. This works well with 4 or fewer blade servers, but once you go beyond 4 blade servers, each uplink is shared by two servers, which could cause bandwidth contention.

Furthermore, it’s important to understand that the design of the UCS blade infrastructure does not allow communication to go from Server 1 to Server 2 without leaving the FEX, connecting to the Fabric Interconnect, then returning to the FEX and connecting to the other server.  This design is the potential cause of the decrease in aggregate throughput from 16.70 Gbps to 9.10 Gbps shown above.
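To illustrate why the pinning matters, here is a rough model I put together (my own illustration, not taken from the Tolly report) of the best-case bandwidth each blade can see as more blades become active behind a single 4-uplink FEX:

```python
# Rough model of FEX uplink pinning - my own illustration, not from the Tolly report.
FEX_UPLINKS = 4      # 10Gb uplinks on the Cisco 2104 FEX
UPLINK_GBPS = 10.0

def best_case_per_blade_gbps(active_blades):
    """Best-case bandwidth per blade when blades are statically pinned to uplinks."""
    # With static pinning, once active blades outnumber uplinks,
    # at least two blades share one 10Gb uplink.
    blades_per_uplink = -(-active_blades // FEX_UPLINKS)  # ceiling division
    return UPLINK_GBPS / blades_per_uplink

for n in (2, 4, 6, 8):
    print(f"{n} active blades -> up to {best_case_per_blade_gbps(n):.1f} Gbps each")
# 2 or 4 active blades -> up to 10.0 Gbps each
# 6 or 8 active blades -> up to 5.0 Gbps each
```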


One of the “Bottom Line” conclusions from this report states, “throughput degradation on the Cisco UCS caused by bandwidth contention is a cause of concern for customers considering the use of UCS in a virtual server environment”; however, I encourage you to take a few minutes, download the full report from the Tolly.com website and draw your own conclusions.

Let me know your thoughts about this report – leave a comment below.

Disclaimer: This report was brought to my attention while attending the HP Tech Day event, where airfare, accommodations and meals are being provided by HP; however, the content being blogged is solely my opinion and does not in any way express the opinions of HP.

More HP and IBM Blade Rumours

I wanted to post a few more rumours before I head out to HP in Houston for “HP Blades and Infrastructure Software Tech Day 2010” so it doesn’t appear that I got the info from HP. NOTE: this is purely speculation; I have no definitive information from HP, so this may be false info.

First off – the HP Rumour:
I’ve caught wind of a secret that may be truth, may be fiction, but I hope to find out for sure from the HP blade team in Houston.  The rumour is that HP’s development team currently has a Cisco Nexus Blade Switch Module for the HP BladeSystem in their lab, and they are currently testing it out.

Now, this seems far-fetched, especially with the news of Cisco severing partner ties with HP; however, it seems that news tidbit was talking only about products sold with the HP label but made by Cisco (OEM).  HP will continue to sell Cisco Catalyst switches for the HP BladeSystem and even Cisco-branded Nexus switches with HP part numbers (see this HP site for details).  I have some doubt about this rumour of a Cisco Nexus switch that would go inside the HP BladeSystem, simply because I am 99% sure that HP is announcing a Flex-10 type of BladeSystem switch that will allow converged traffic to be split out, with the Ethernet traffic going to the Ethernet fabric and the Fibre traffic going to the Fibre fabric (check out this rumour blog I posted a few days ago for details).  I guess only time will tell.

The IBM Rumour:
A few days ago I posted a rumour blog discussing the rumour that HP’s next generation of blades will add Converged Network Adapters (CNAs) to the motherboard (in lieu of the 1Gb or Flex-10 NICs).  Well, now I’ve uncovered a rumour that IBM is planning to follow later this year with blades that will also have CNAs on the motherboard.  This is huge!  Let me explain why.

The design of IBM’s BladeCenter E and BladeCenter H has the 1Gb NICs onboard each blade server hard-wired to I/O Bays 1 and 2 – meaning only Ethernet modules can be used in those bays.  However, I/O Bays 1 and 2 are for “standard form factor I/O modules” while I/O Bays 7-10 are for “high speed form factor I/O modules”.  This means that I/O Bays 1 and 2 can not handle “high speed” traffic, i.e. converged traffic.

This means that IF IBM comes out with a blade server that has a CNA on the motherboard, either:

a) the blade’s CNA will have to route to I/O Bays 7-10
OR
b) IBM’s going to have to come out with a new BladeCenter chassis that allows the high speed converged traffic from the CNAs to connect to a high speed switch module in Bays 1 and 2.

So let’s think about this.  If IBM (and HP for that matter) does put CNAs on the motherboard, is there a need for additional mezzanine/daughter cards?  If not, the blade servers could have more real estate for memory, or more processors.  And if there are no extra daughter cards, then there’s no need for additional I/O module bays.  This means the blade chassis could be smaller and use less power – something every customer would like to have.

I can really see the blade market moving toward this type of design (not surprisingly, very similar to Cisco’s UCS design) – one where only a pair of redundant “modules” is needed to split converged traffic out to the respective fabrics.  Maybe it’s all a pipe dream, but when it comes true in 18 months, you can say you heard it here first.

Thanks for reading.  Let me know your thoughts – leave your comments below.

10 Things That Cisco UCS Policies Can Do (That IBM, Dell or HP Can’t)

ViewYonder.com recently posted a great write-up on some things that Cisco’s UCS can do that IBM, Dell or HP really can’t. You can go to ViewYonder.com to read the full article, but here are 10 things that Cisco’s UCS policies can do:

  • Chassis Discovery – lets you decide how many links to use from the FEX (2104) to the FI (6100).  This affects the path from blades to FI and the oversubscription rate.  If you’ve cabled 4, you can choose to use just 2, or even 1.
  • MAC Aging – helps you manage your MAC table.  This affects the ability to scale, as bigger MAC tables need more management.
  • Autoconfig – when you insert a blade, UCS can automatically apply a specific template based on the blade’s hardware config and place it in an organization.
  • Inheritance – when you insert a blade, allows you to automatically create a logical version (Service Profile) by copying the UUID, MAC, WWNs, etc.
  • vHBA Templates – help you determine how you want _every_ vmhba2 to look (i.e. fabric, VSAN, QoS, pinning to a border port).
  • Dynamic vNICs – help you determine how to distribute the VIFs on a VIC.
  • Host Firmware – enables you to determine what firmware to apply to the CNA, the HBA, HBA ROM, BIOS, and LSI.
  • Scrub – provides you with the ability to wipe the local disks on association.
  • Server Pool Qualification – enables you to determine which hardware configurations live in which pool (see the conceptual sketch after this list).
  • vNIC/vHBA Placement – helps you determine how to distribute VIFs over one or two CNAs.
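To make the last couple of policies a bit more concrete, here is a purely hypothetical sketch of what server pool qualification logic conceptually does. This is my own illustration – it is not the UCS Manager XML API or any real Cisco code, and all names in it are made up:

```python
# Hypothetical illustration of server pool qualification - NOT the UCS Manager API.
from dataclasses import dataclass

@dataclass
class Blade:
    slot: int
    cpus: int
    memory_gb: int

@dataclass
class PoolQualification:
    pool_name: str
    min_cpus: int
    min_memory_gb: int

    def matches(self, blade: Blade) -> bool:
        # A blade qualifies if it meets the minimum hardware thresholds.
        return blade.cpus >= self.min_cpus and blade.memory_gb >= self.min_memory_gb

# On discovery, each blade lands in the first pool whose qualification it meets.
pools = [
    PoolQualification("virtualization-pool", min_cpus=2, min_memory_gb=96),
    PoolQualification("general-pool", min_cpus=1, min_memory_gb=24),
]

for blade in [Blade(slot=1, cpus=2, memory_gb=96), Blade(slot=2, cpus=2, memory_gb=48)]:
    pool = next((p.pool_name for p in pools if p.matches(blade)), "unassigned")
    print(f"Blade in slot {blade.slot} -> {pool}")
```

The appeal of the policy approach is that this kind of logic is expressed once and applied automatically on discovery, rather than being handled manually per server.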

For more on this topic, visit Steve’s blog at ViewYonder.com.  Nice job, Steve!

Cisco Takes Top 8 Core VMware VMmark Server Position

Cisco is getting some (more) recognition with their UCS blade server product, as they recently achieved the top position for “8 Core Server” on VMware’s VMmark benchmark tool.  VMmark is the industry’s first (and only credible) virtualization benchmark for x86-based computers.  According to the VMmark website, the Cisco UCS B200 blade server reached a score of 25.06 @ 17 tiles.  A “tile” is simply a collection of virtual machines (VMs) executing a set of diverse workloads designed to represent a natural work environment.  The total number of tiles that a server can handle provides a detailed measurement of that server’s consolidation capacity.

Cisco’s Winning Configuration
So – how did Cisco reach the top server spot?  Here’s the configuration:

server config:

  • 2 x Intel Xeon X5570 Processors
  • 96GB of RAM (16 x 8GB)
  • 1 x Converged Network Adapter (Cisco UCS M71KR-Q)

storage config:

  • EMC CX4-240
  • Cisco MDS 9130
  • 1154.27GB Used Disk Space
  • 1024MB Array Cache
  • 41 disks used on 4 enclosures/shelves (1 with 14 disks, 3 with 9 disks)
  • 37 LUNs used
    * 17 at 38GB (file server + mail server) over 20 x 73GB SSDs
    * 17 at 15GB (database) + 2 LUNs at 400GB (Misc) over 16 x 450GB 15k disks
    * 1 LUN at 20GB (boot) over 5 x 300GB 15k disks
  • RAID 0 for VMs, RAID 5 for VMware ESX 4.0 O/S

While first place on the VMmark page (8 cores) shows Fujitsu’s RX300, it’s important to note that its score was reached using Intel’s W5590 processor – a processor designed for “workstations,” not servers.  Second place among server processors currently shows HP’s BL490 with 24.54 (@ 17 tiles).

Thanks to Omar Sultan (@omarsultan) for Tweeting about this and to Harris Sussman for blogging about it.

384GB RAM in a Single Blade Server? How Cisco Is Making it Happen (UPDATED 1-22-10)

UPDATED 1/22/2010 with new pictures 
Cisco UCS B250 M1 Extended Memory Blade Server

Cisco’s UCS server line is already getting lots of press, but one of the biggest points of interest is their upcoming Cisco UCS B250 M1 Blade Server.  This server is a full-width server occupying two of the 8 server slots available in a single Cisco UCS 5108 blade chassis.  The server can hold up to 2 x Intel Xeon 5500 Series processors and 2 x dual-port mezzanine cards, but the magic is in the memory – it has 48 memory slots.

This means it can hold 384GB of RAM using 8GB DIMMs.  This is huge for the virtualization marketplace, as everyone knows that virtual machines LOVE memory.  No other vendor in the marketplace is able to provide a blade server (or any 2 socket Intel Xeon 5500 server for that matter) that can achieve 384GB of RAM.

 

So what’s Cisco’s secret?  First, let’s look at what Intel’s Xeon 5500 architecture looks like.

 
 

Intel Xeon 5500 memory architecture

 

As you can see above, each Intel Xeon 5500 CPU has its own memory controller, which in turn has 3 memory channels.  Intel’s design limitation is 3 memory DIMMs (DDR3 RDIMMs) per channel, so the most a traditional two-socket server can have is 18 memory slots, or 144GB of RAM with 8GB DDR3 RDIMMs.

With the UCS B250 M1 blade server, Cisco adds an additional 15 memory slots per CPU, or 30 slots per server, for a total of 48 memory slots, which leads to 384GB of RAM with 8GB DDR3 RDIMMs.

 

Cisco UCS B250 memory architecture

How do they do it?  Simple – they put in 5 more memory DIMM slots per channel, then present all 24 memory DIMMs across the 3 channels to ASICs that sit between the memory controller and the memory channels.  Each ASIC presents its group of physical DIMMs to the memory controller as a single, larger logical DIMM (with 8GB DIMMs, for example, appearing as a 32GB DIMM).  For every 8 memory DIMMs, there is an ASIC – 3 ASICs per CPU, representing 192GB of RAM (or 384GB in a dual-CPU config).

It’s quite an ingenious approach, but don’t get caught up in thinking about 384GB of RAM – think about 48 memory slots.  In the picture below I’ve grouped the 8 DIMMs behind each ASIC in a green square.

Cisco UCS B250 ASICs Grouped with 8 Memory DIMMs

With that many slots, you can get to 192GB of RAM using 4GB DDR3 RDIMMs – which currently cost about 1/5th of the 8GB DIMMs.  That’s the real value in this server.
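For anyone who wants to sanity-check the slot and capacity arithmetic above, here is a small sketch of the math (my own, based only on the channel and DIMM counts described in this post):

```python
# Memory-slot and capacity math from the channel/DIMM counts described in this post.
def memory_config(cpus, channels_per_cpu, dimms_per_channel, dimm_gb):
    slots = cpus * channels_per_cpu * dimms_per_channel
    return slots, slots * dimm_gb

# Standard Intel Xeon 5500 design: 3 channels per CPU, 3 DIMMs per channel
slots, gb = memory_config(cpus=2, channels_per_cpu=3, dimms_per_channel=3, dimm_gb=8)
print(f"Standard 2-socket server: {slots} slots, {gb}GB with 8GB RDIMMs")   # 18 slots, 144GB

# Cisco B250 extended memory: 8 DIMMs per channel sitting behind the ASICs
slots, gb = memory_config(cpus=2, channels_per_cpu=3, dimms_per_channel=8, dimm_gb=8)
print(f"Cisco B250 with 8GB RDIMMs: {slots} slots, {gb}GB")                 # 48 slots, 384GB

slots, gb = memory_config(cpus=2, channels_per_cpu=3, dimms_per_channel=8, dimm_gb=4)
print(f"Cisco B250 with cheaper 4GB RDIMMs: {slots} slots, {gb}GB")         # 48 slots, 192GB
```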

Cisco has published a white paper on this patented technology at http://www.cisco.com/en/US/prod/collateral/ps10265/ps10280/ps10300/white_paper_c11-525300.html so if you want to get more details, I encourage you to check it out.

Happy New Year!

Happy New Year to all of my readers.  As we enter a new decade, I wanted to give everyone who takes the time to read this blog a few stats on how I’ve done since my inaugural posting on September 23, 2009.  First, a bit of background.  My main website is now located at BladesMadeSimple.com; however, a few months prior to that I had a blog on WordPress.com at http://kevinbladeguy.wordpress.com/.  Even though I have my own site, I have kept the WordPress.com site up as a mirror site, primarily since Google has the site indexed and I get a lot of traffic from Google.  SO – how’d I do?  Well, here’s the breakdown:

On http://kevinbladeguy.wordpress.com, I received 4,588 page views since Sept 23, 2009 with my article on “Cisco UCS vs IBM BladeCenter H” receiving 399 page views.

On http://BladesMadeSimple.com, which started up on November 1, 2009, I received 2,041 page views, with my article on “Cisco UCS vs IBM BladeCenter H” receiving 238 page views.

Combined, that is 6,629 page views since September 23, 2009!  As I’m still a virgin blogger, I’m not sure if that’s a good stat for a website devoted to talking about blade servers, but I’m happy with it.  I hope that you will stay with me as I continue my voyage of keeping you informed on blade servers.

Happy New Year!!

Cisco Wants IBM’s Blade Servers??

In an unusual move Tuesday, Cisco CEO John Chambers commented that Cisco is still open to a blade server “partnership” with IBM.  “I still firmly believe that it’s in IBM’s best interests to work with us. That door will always be open,” Chambers told the audience yesterday at Cisco’s financial analyst conference at the company’s HQ in San Jose.

John Chambers and other executives spent much of the day talking with financial analysts about Cisco’s goal to become the preeminent IT and communications vendor because of the growing importance of virtualization, collaboration and video, a move demonstrated by their recent partnership announcement with EMC and VMware.  According to reports, analysts at the event said they think Chambers is sincere about his willingness to work with IBM. The two companies have much in common, such as their enterprise customer base, and Cisco’s products could fit into IBM’s offerings, said Mark Sue of RBC Capital Markets.

So – is this just a move for Cisco to tighten their relationship with IBM in the hopes of growing to an entity that can defeat HP and their BladeSystem sales, or has Cisco decided that the server market is best left to manufacturers who have been selling servers for 20+ years?  What are your thoughts?  Please feel free to leave some comments and let me know.

IDC Q3 2009 Report: Blade Servers are Growing, HP Leads in Shares

IDC reported on Wednesday that blade server sales for Q3 2009 returned to quarterly revenue growth, with factory revenues increasing 1.2% year over year; however, there was a 14.0% year-over-year shipment decline.  Overall, blade servers accounted for $1.4 billion in Q3 2009, which represented 13.6% of overall server revenue.  Of the top 5 OEM blade manufacturers, IBM experienced the strongest blade growth, gaining 6.0 points of market share.  However, overall market share for Q3 2009 still belongs to HP with 50.7%, with IBM following at 29.4% and Dell in 3rd place with a lowly 8.9% revenue share.

According to Jed Scaramella, senior research analyst in IDC’s Datacenter and Enterprise Server group, “Customers are leveraging blade technologies to optimize their environments in response to the pressure of the economic downturn and tighter budgets. Blade technologies provide IT organizations the capability to simplify their IT while improving asset utilization, IT flexibility, and energy efficiency.  For the second consecutive quarter, the blade segment increased in revenue on a quarter-to-quarter basis, while simultaneously increasing their average sales value (ASV). This was driven by next generation processors (Intel Nehalem) and a greater amount of memory, which customers are utilizing for more virtualization deployments. IDC sees virtualization and blades as closely associated technologies that drive dynamic IT for the future datacenter.”