Yearly Archives: 2010

New IBM Blade Chassis? New Liquid Cooled Blade?

Okay, I’ll be the first to admit: I’m a geek. I’m not an uber-geek, but I’m a geek. When things get slow, I like digging around in the U.S. Patent Archives for hints as to what might be coming next in the blade server marketplace. My latest find uncovered a couple of “interesting” patents published by the International Business Machines Corporation, also known as IBM.

Liquid Cooled Blades?

United States Patent #7552758, titled “Method for high-density packaging and cooling of high-powered compute and storage server blades” (published 6/29/2009), may be IBM’s clever way of disguising a method to liquid cool blade servers. According to the patent, the invention is “A system for removing heat from server blades, comprising: a server rack enclosure, the server rack enclosure enclosing: a liquid distribution manifold; a plurality of cold blades attached to the liquid distribution manifold, wherein liquid is circulated through the liquid distribution manifold and the cold blades; and at least one server blade attached to each of the cold blades, wherein the server blade includes a base portion, the base portion is a heat-conducting aluminum plate, the base portion is positioned directly onto the cold blade, and contact blocks penetrate the aluminum plate and make contact with corresponding contact points of the cold blades.”

You can read more about this patent in detail at http://www.freepatentsonline.com/7552758.html

New Storage Blade?

Another search revealed a patent for a “hard disk enclosure blade” (patent #7499271, published 3/3/2009), a design that IBM seems to have been working on for a few years, as it stems back to 2006. It appears to be a “double-wide” enclosure that will allow for 8 disk drives to be inserted.


This is an interesting idea if the goal is to use it inside a normal BladeCenter chassis. It would be like having the local storage of an IBM BladeCenter S, but in the IBM BladeCenter E or IBM BladeCenter H. On the other hand, it could have been the invention used for the storage modules of the IBM BladeCenter S. You can read more about this invention at http://www.freepatentsonline.com/7499271.html.

New IBM BladeCenter Chassis?
The final invention that I uncovered is very mysterious to me. Titled “Securing Blade Servers in a Data Center,” patent application #20100024001 shows a new concept from IBM encompassing a blade server chassis, a router, a patch panel, a RAID array, a power strip and blade servers all inside of a single enclosure, or “Data Center.” An important note: this device is not yet approved as a patent – it’s still a patent application. Filed on 7/25/2008 and published as a patent application on 1/28/2010, it carries an abstract of, “Securing blade servers in a data center, the data center including a plurality of blade servers installed in a plurality of blade server chassis, the blade servers and chassis connected for data communications to a management module, each blade server chassis including a chassis key, where securing blade servers includes: prior to enabling user-level operation of the blade server, receiving, by a security module, from the management module, a chassis key for the blade server chassis in which the blade server is installed; determining, by the security module, whether the chassis key matches a security key stored on the blade server; if the chassis key matches the security key, enabling, by the security module, user-level operation of the blade server; and if the chassis key does not match the security key, disabling, by the security module, operation of the blade server.” I’ve tried a few times to decipher what this patent is really for, but I’ve not had any luck. I encourage you to head over to http://www.freepatentsonline.com/y2010/0024001.html and take a look. If it makes sense to you, leave me a comment.
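For all the mystery, the mechanism the abstract claims is straightforward: compare a chassis-supplied key against a key stored on the blade, and gate user-level operation on a match. Here is a minimal Python sketch of that flow as I read the abstract – every name in it is my own illustration, not anything from IBM:

```python
# Minimal sketch of the security flow described in patent application
# 20100024001. All function names are my own, not IBM's code.

def enable_user_level_operation() -> None:
    print("Security module: keys match - blade enabled for user-level operation")

def disable_operation() -> None:
    print("Security module: key mismatch - blade operation disabled")

def secure_blade(chassis_key: str, blade_security_key: str) -> bool:
    """Enable user-level operation only if the chassis key received from
    the management module matches the security key stored on the blade."""
    if chassis_key == blade_security_key:
        enable_user_level_operation()
        return True
    disable_operation()
    return False

# Example: a blade moved into a foreign chassis fails the check.
secure_blade(chassis_key="chassis-A", blade_security_key="chassis-B")
```

If that reading is right, the point would seem to be anti-theft or anti-tampering: a blade pulled from its home chassis refuses to boot into user operation elsewhere. But that is my interpretation, not IBM’s stated intent.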

While this was nothing but a trivial attempt at finding the next big thing before it’s announced, I walk away from this amazed at the number of patents that IBM holds just for blade servers. I hope to do a similar exercise for HP, Dell and Cisco in the near future, after tomorrow’s Westmere announcements.

4 Socket Blade Server Density: Vendor Comparison

IMPORTANT NOTE – I updated this blog post on Feb. 28, 2011 with better details.  To view the updated blog post, please go to:

https://bladesmadesimple.com/2011/02/4-socket-blade-servers-density-vendor-comparison-2011/

Original Post (March 10, 2010):

With the Intel Nehalem EX processor launch a couple of weeks away, I wonder what impact it will have on the blade server market.  I’ve been talking about IBM’s HX5 blade server for several months now, so it is very clear that the blade server vendors will be developing blades with some iteration of the Xeon 7500 processor.  In fact, I’ve had several people confirm on Twitter that HP, Dell and even Cisco will be offering a 4 socket blade after Intel officially announces the processor on March 30.  For today’s post, I wanted to take a look at how the 4 socket blade space will impact the overall capacity of a blade server environment.  NOTE: this is purely speculation; I have no definitive information from any of these vendors that is not already public.

Cisco
The Cisco UCS 5108 chassis holds 8 “half-width” B-200 blade servers or 4 “full-width” B-250 blade servers, so when guessing at what design Cisco will use for a 4 socket Intel Xeon 7500 (Nehalem EX) architecture, I have to place my bet on the full-width form factor.  Why?  Simply because there is more real estate.  The Cisco B250 M1 blade server is known for its large memory capacity; however, Cisco could sacrifice some of that extra memory space for a 4 socket “Cisco B350” blade.  This would pose a bit of an issue for customers wanting to implement a complete rack full of these servers, as it would only allow for a total of 28 servers in a 42U rack (7 chassis x 4 servers per chassis.)

Estimated Cisco B300 with 4 CPUs

On the other hand, Cisco is in a unique position in that their half-width form factor also has extra real estate, because they don’t have 2 daughter card slots like their competitors.  Perhaps Cisco could create a half-width blade with 4 CPUs (a B300?).  With a 42U rack and a half-width design, you would be able to get a maximum of 56 blade servers (7 chassis x 8 servers per chassis.)

Dell
The 10U M1000e chassis from Dell can currently handle 16 “half-height” blade servers or 8 “full-height” blade servers.  I don’t foresee any way that Dell would be able to put 4 CPUs into a half-height blade.  There just isn’t enough room.  To do this, they would have to sacrifice something, like memory slots or a daughter card expansion slot, which just doesn’t seem worth it.  Therefore, I predict that Dell’s 4 socket blade will be a full-height blade server, probably named the PowerEdge M910.  With this assumption, you would be able to get 32 blade servers in a 42U rack (4 chassis x 8 blades.)

HP
Similar to Dell, HP’s 10U BladeSystem c7000 chassis can currently handle 16 “half-height” blade servers or 8 “full-height” blade servers.  I don’t foresee any way that HP would be able to put 4 CPUs into a half-height blade either; there just isn’t enough room without sacrificing memory slots or a daughter card expansion slot.  Therefore, I predict that HP’s 4 socket blade will be a full-height blade server, probably named the ProLiant BL680 G7 (yes, they’ll skip G6.)  With this assumption, you would be able to get 32 blade servers in a 42U rack (4 chassis x 8 blades.)

IBM
Finally, IBM’s 9U BladeCenter H chassis holds 14 servers.  IBM has one server size, called “single-wide,” but servers can be combined to form a “double-wide,” which is what is needed for the newly announced IBM BladeCenter HX5.  A double-wide blade server reduces the BladeCenter H’s capacity to 7 servers per chassis.  This means you would be able to put 28 x 4 socket IBM HX5 blade servers into a 42U rack (4 chassis x 7 servers each.)

Summary
In a tie for 1st place at 32 blade servers in a 42U rack, Dell and HP would have the most blade server density based on their existing full-height blade server designs.  IBM and Cisco would come in at 3rd place with 28 blade servers in a 42U rack.  However, IF Cisco (or HP and Dell, for that matter) were able to magically re-design their half-height servers to hold 4 CPUs, they would take 1st place for blade density with 56 servers; the quick sketch below shows the math.
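Every figure above is just (chassis per 42U rack) x (blades per chassis), so here’s a small Python sketch of the math, using the chassis heights given in this post – the B300/B350 names are, as noted, pure speculation on my part, not Cisco part numbers:

```python
# Blade density per 42U rack = (42 // chassis height in U) * blades per chassis.
# Chassis heights and blade counts are the figures from this post.

RACK_U = 42

def blades_per_rack(chassis_u: int, blades_per_chassis: int) -> int:
    chassis_per_rack = RACK_U // chassis_u          # whole chassis only
    return chassis_per_rack * blades_per_chassis

scenarios = {
    "Cisco UCS 5108, full-width (B350?)": (6, 4),   # 7 chassis x 4 = 28
    "Cisco UCS 5108, half-width (B300?)": (6, 8),   # 7 chassis x 8 = 56
    "Dell M1000e, full-height":           (10, 8),  # 4 chassis x 8 = 32
    "HP c7000, full-height":              (10, 8),  # 4 chassis x 8 = 32
    "IBM BladeCenter H, double-wide":     (9, 7),   # 4 chassis x 7 = 28
}

for name, (chassis_u, blades) in scenarios.items():
    print(f"{name}: {blades_per_rack(chassis_u, blades)} blades per rack")
```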

Yes, I know there is a slim chance that anyone would fill up a rack with 4 socket servers, but I thought this would be a good comparison to make.  What are your thoughts?  Let me know in the comments below.

IDC Q4 2009 Report: Blade Servers STILL Growing, HP STILL Leading in Share

IDC reported on February 24, 2010 that blade server sales returned to quarterly revenue growth in Q4 2009, with factory revenues increasing 30.9% year over year (vs 1.2% in Q3.)  Q4 also saw an 8.3% year-over-year increase in shipments – the first such increase in 2009.  Overall, blade servers accounted for $1.8 billion in Q4 2009 (up from $1.3 billion in Q3), which represented 13.9% of overall server revenue.  It was also reported that more than 87% of all blade revenue in Q4 2009 was driven by x86 systems, where blades now represent 21.4% of all x86 server revenue.

While the press release did not provide details of the market share for all of the top 5 blade vendors, they did provide data for the following: 

#1 market share: HP with 52.4%

#2 market share: IBM with 35.1%, up 5.7% over Q3


Notably, according to IDC, IBM significantly outperformed the market with year-over-year revenue growth of 64.1%.

According to Jed Scaramella, senior research analyst in IDC's Datacenter and Enterprise Server group,  "Blades remained a bright spot in the server vendors’ portfolios.  They were able to grow blade revenue throughout the year while maintaining their average selling prices. Customers recognize the benefits extend beyond consolidation and density, and are leveraging the platform to deliver a dynamic IT environment. Vendors consider blades strategic to their business due to the strong loyalty customers develop for their blade vendor as well as the higher level of pull-through revenue associated with blades."

Virtual I/O on IBM BladeCenter (IBM Virtual Fabric Adapter by Emulex)

A few weeks ago, IBM and Emulex announced a new blade server adapter for the IBM BladeCenter and IBM System x lines, called the “Emulex Virtual Fabric Adapter for IBM BladeCenter” (IBM part #49Y4235). Frequent readers may recall that I had a “so what” attitude when I blogged about it in October, and that was because I didn’t get it. I didn’t get what the big deal was with being able to take a 10Gb pipe and carve it up into 4 “virtual NICs.” HP’s been doing this for a long time with their FlexNICs (check out VirtualKenneth’s blog for great detail on this technology), so I didn’t see the value in what IBM and Emulex were trying to do. But now I understand. Before I get into this, let me remind you of what this adapter is. The Emulex Virtual Fabric Adapter (CFFh) for IBM BladeCenter is a dual-port 10 Gb Ethernet card that supports 1 Gbps or 10 Gbps traffic, or up to eight virtual NIC devices.

This adapter hopes to address three key I/O issues:

1. Need for more than two ports per server, with 6-8 recommended for virtualization
2. Need for more than 1Gb bandwidth, but can't support full 10Gb today
3. Need to prepare for network convergence in the future

"1, 2, 3, 4"
I recently attended an IBM/Emulex partner event where Emulex presented a unique way to understand the value of the Emulex Virtual Fabric Adapter via the term “1, 2, 3, 4.” Let me explain:

"1" – Emulex uses a single chip architecture for these adapters. (As a non-I/O guy, I'm not sure of why this matters – I welcome your comments.)


"2" – Supports two platforms: rack and blade
(Easy enough to understand, but this also emphasizes that a majority of the new IBM System x servers announced this week will have the Virtual Fabric Adapter "standard")

"3" – Emulex will have three product models for IBM (one for blade servers, one for the rack servers and one intergrated into the new eX5 servers)

"4" – There are four modes of operation:

  • Legacy 1Gb Ethernet
  • 10Gb Ethernet
  • Fibre Channel over Ethernet (FCoE)…via software entitlement ($$)
  • iSCSI Hardware Acceleration…via software entitlement ($$)

This last part is the key reason I think this product could be of substantial value. The adapter enables a user to begin with traditional Ethernet, then grow into 10Gb, FCoE or iSCSI without any physical change – all they need to do is buy a license (for the FCoE or iSCSI).

Modes of operation

The expansion card has two modes of operation: standard physical port mode (pNIC) and virtual NIC (vNIC) mode.

In vNIC mode, each physical port appears to the blade server as four virtual NICs with a default bandwidth of 2.5 Gbps per vNIC. Bandwidth for each vNIC can be configured from 100 Mbps to 10 Gbps, up to a maximum of 10 Gb per virtual port.

In pNIC mode, the expansion card can operate as a standard 10 Gbps or 1 Gbps 2-port Ethernet expansion card.

As previously mentioned, a future entitlement purchase will allow for up to two FCoE ports or two iSCSI ports. The FCoE and iSCSI ports can be used in combination with up to six Ethernet ports in vNIC mode, up to a maximum of eight total virtual ports.
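To make the vNIC carve-up concrete, here’s a small Python sketch that checks a bandwidth split against the constraints described above. The rule that the four vNICs can’t collectively oversubscribe the 10Gb physical port is my assumption about how the pipe gets divided – check the IBM Redbook for the adapter’s actual rules:

```python
# Sketch of a vNIC bandwidth check per the constraints described above:
# each physical 10Gb port presents 4 vNICs, each configurable from
# 100 Mbps to 10 Gbps. The "sum must fit in the 10Gb port" rule below
# is my assumption, not a documented Emulex behavior.

PORT_CAPACITY_MBPS = 10_000
VNICS_PER_PORT = 4
MIN_VNIC_MBPS, MAX_VNIC_MBPS = 100, 10_000

def validate_port(vnic_mbps: list[int]) -> None:
    if len(vnic_mbps) != VNICS_PER_PORT:
        raise ValueError(f"expected {VNICS_PER_PORT} vNICs per port")
    for bw in vnic_mbps:
        if not MIN_VNIC_MBPS <= bw <= MAX_VNIC_MBPS:
            raise ValueError(f"vNIC bandwidth {bw} Mbps out of range")
    if sum(vnic_mbps) > PORT_CAPACITY_MBPS:
        raise ValueError("vNICs oversubscribe the 10Gb physical port")

validate_port([2_500, 2_500, 2_500, 2_500])   # default: 4 x 2.5 Gbps
validate_port([5_000, 3_000, 1_000, 1_000])   # custom split, still 10 Gbps
print("both configurations are valid")
```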

IBM switch compatibility by mode:

  • vNIC – works with the BNT Virtual Fabric Switch
  • pNIC – works with BNT, IBM Pass-Thru, Cisco Nexus
  • FCoE – BNT or Cisco Nexus
  • iSCSI Acceleration – all IBM 10GbE switches

I really think the "one card can do it all" concept works well for the IBM BladeCenter design, and I think we'll start seeing more and more customers move toward this single-card concept.

Comparison to HP Flex-10
I'll be the first to admit, I'm not a network or storage guy, so I'm not really qualified to compare this offering to HP's Flex-10, however IBM has created a very clever video that does some comparisons. Take a few minutes to watch and let me know your thoughts.


Announcing the IBM BladeCenter HX5 Blade Server (with detailed pics)

(UPDATED 11:29 AM EST 3/2/2010)
IBM announced today the BladeCenter® HX5 – their first 4 socket blade since the HS41 blade server. IBM calls the HX5 “a scalable, high-performance blade server with unprecedented compute and memory performance, and flexibility ideal for compute and memory-intensive enterprise workloads.”

The HX5 will have the ability to be coupled with a 2nd HX5 to scale to 4 CPU sockets, to grow beyond the base memory with the MAX5 memory expansion, and to offer hardware partitioning to split a dual-node server into 2 x single-node servers and back again. I’ll review each of these features in more detail below, but first, let’s look at the basics of the HX5 blade server.

HX5 features:

  • Up to 2 x Intel Xeon 7500 CPUs per node
  • 16 DIMMs per node
  • 2 x Solid State Disk (SSD) slots per node
  • 1 x CIOv and 1 x CFFh daughter card expansion slot per node, providing up to 8 I/O ports per node
  • 1 x scale connector per node

CPU Scalability
In the fashion of the eX5 architecture, IBM is enabling the HX5 blade server to grow from 2 CPUs to 4 CPUs (and theoretically more) by connecting servers through a “scale connector.” This connector physically connects 2 HX5 servers across their tops, allowing the internal communications of each node to extend to the other. The easiest way to think of this is like Lego bricks: it allows HX5s or a MAX5 to be snapped together. There will be a 2-connector, a 3-connector and a 4-connector offering, which means you could have any number of combinations, from 2 x HX5 blade servers to 2 x HX5 blade servers + a MAX5 memory blade.

Memory Scalability
With the addition of a new 24 DIMM memory blade, called the MAX5, IBM is enabling users to grow the base memory from 16 memory DIMMs to 40 (16 + 24) memory DIMMs. The MAX5 will be connected via the scale connector mentioned above, and in fact, when coupled with a 2-node, 4-socket system, it could enable the entire system to have 80 DIMMs (16 DIMMs per HX5 plus 24 DIMMs per MAX5). Granted, this will be a 4-blade-wide offering, but it will be a powerful one for database servers, or even virtualization.
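Those DIMM counts are just multiples of 16 and 24, so here’s a trivial Python sketch of the combinations – a sanity check on the math above, not anything from IBM’s configurator:

```python
# eX5 memory math: 16 DIMMs per HX5 node plus 24 DIMMs per MAX5 memory
# blade attached via the scale connector. Figures per this post.

DIMMS_PER_HX5, DIMMS_PER_MAX5 = 16, 24

def total_dimms(hx5_nodes: int, max5_blades: int) -> int:
    return hx5_nodes * DIMMS_PER_HX5 + max5_blades * DIMMS_PER_MAX5

print(total_dimms(1, 0))  # single HX5: 16 DIMMs
print(total_dimms(1, 1))  # HX5 + MAX5: 40 DIMMs
print(total_dimms(2, 2))  # 2-node, 4-socket + 2 x MAX5: 80 DIMMs
```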

Hardware Partitioning
The final feature, known as FlexNode partitioning, is the ability to split a combined server node into individual server nodes and back again as needed. Performed using IBM software, this feature will enable a user to automatically take a 2-node HX5 system acting as a single 4 socket system, split it into 2 x 2 socket systems, then revert back to a single 4 socket system once the workload is completed.

For example, during the day the 4 socket HX5 server is used as a database server, but at night, when the database is idle, the system is partitioned off into 2 x 2 socket physical servers that can each run their own applications.
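As a toy illustration of that day/night workflow – the real partitioning is driven through IBM’s systems-management software, not code like this:

```python
# Toy model of the FlexNode day/night cycle described above. The two
# states and the schedule are illustrative only.

class HX5Pair:
    """A 2-node HX5 that can act as one 4-socket system or two 2-socket systems."""

    def __init__(self) -> None:
        self.mode = "single 4-socket system"

    def partition(self) -> None:
        self.mode = "two independent 2-socket systems"

    def unify(self) -> None:
        self.mode = "single 4-socket system"

pair = HX5Pair()
print("Day:  ", pair.mode)   # 4-socket database server
pair.partition()
print("Night:", pair.mode)   # two 2-socket application servers
pair.unify()
print("Day:  ", pair.mode)   # back to the 4-socket database server
```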

As I’ve mentioned previously, the pricing and part number info for the IBM BladeCenter HX5 blade server is not expected to show up until the Intel Xeon 7500 processor announcement on March 30, so when that info is released, you can find it here.

For more details, head over to IBM’s RedBook site.

Let me know your thoughts – leave your comments below.

Announcing IBM eX5 Portfolio and the HX5 Blade Server

UPDATED: 3/2/2010 at 12:58 PM EST
Author’s Note: I’m stretching outside of my “blades” theme today so I can capture the entire eX5 messaging.
 
Finally, all the hype is over.  IBM announced today the next evolution of their “Enterprise x-Architecture”, also known as eX5.  
Why eX5?  Simple:  e = Enterprise, X = x-Architecture, 5 = fifth generation. 

IBM’s Enterprise x-Architecture has been around for quite a while providing unique Scalability, Reliability and Flexibility in the x86 4-socket platforms.  You can check out the details of the eX4 technology here. 

Today’s announcement offered up a few facts:   

a) the existing x3850 M2 and x3950 M2 will become the x3850 X5 and x3950 X5, signifying a trend for IBM to move toward product naming designations that reflect the purpose of the server. 

b) the x3850 and x3950 X5s will use the Intel Nehalem EX – to be officially announced/released on March 30, at which time we can expect full details including part numbers, pricing and technical specifications. 

c) a new 2U-high, 2 socket server, the x3690 X5, was also announced.  This is probably the most exciting of the product announcements, as it is based on the Intel Nehalem EX processor, but IBM’s innovation is going to enable the x3690 X5 to scale from 2 sockets to 4 sockets – but wait, there’s more.  There will be the ability, called MAX5, to add a memory expansion unit to the x3690 X5 systems, enabling their system memory to be DOUBLED.

d) in addition to the memory drawer, IBM will be shipping packs of solid state disks, called eXFlash, that will deliver high performance to replace the limited IOPS of traditional spinning disks.  IBM is touting “significant” increases in performance for local databases with this new bundle of solid state disks.  In fact, according to IBM’s press release, eXFlash technology would eliminate the need for a client to purchase two entry-level servers and 80 JBODs to support a 240,000 IOPS database environment, saving $670,000 in server and storage acquisition costs.  The cool part is, these packs of disks will pop into the hot-swap drive bays of the x3690, x3850 and x3950 X5 servers.

e) IBM also announced a new technology, known as “FlexNode,” that offers physical partitioning capability for servers to move from being a single system to 2 unique systems and back again. 

 
Blade Specific News
1) IBM will be releasing a new blade server, the BladeCenter HX5 next quarter that will also use the Intel Xeon 7500.  This blade server will scale, like all of the eX5 products, from 2 processors to 4 processors (and theoretically more) and will be ideal for database workloads.  Again, pricing and specs for this product will be released on the official Intel Nehalem EX launch date.  
 

 

  

IBM BladeCenter HX5 Blade Server

 

An observation from the pictures of the HX5 is that it will not have hot-swap drives like the HS22 does.  This means there will be internal drives – most likely solid state drives (SSDs).  You may recall from my previous rumour post that the lack of hot-swap drives was pretty evident – IBM needed the real estate for the memory.  Unfortunately, until memristors become available, blade vendors will need to sacrifice drive real estate for memory. 

2) As part of the MAX5 technology, IBM will also be launching a memory blade to increase the overall memory on the HX5 blade server.  Expect more details on this in the near future. 

Visit IBM’s website for their Live eX5 Event at 2 p.m. Eastern time at this site: 

http://www-03.ibm.com/systems/info/x86servers/ex5/events/index.html?CA=ex5launchteaser&ME=m&MET=exli&RE=ezvrm&Tactic=us0ab06w&cm_mmc=us0ab06w-_-m-_-ezvrm-_-ex5launchteaser-20100203 

As more information comes out on the new IBM eX5 portfolio, check back here and I’ll keep you posted.  I’d love to hear your thoughts in the comments below. 

MAX5 Memory Drawer (1U)

 

I find the x3690 X5 so interesting and exciting because it could quickly take over the server space currently occupied by the HP DL380 and the IBM x3650 when it comes to virtualization.  We all know that VMware and other hypervisors thrive on memory, yet the current 2 socket server design is limited to 12 – 16 memory sockets.  With the IBM System x3690 X5, this limitation can be overcome, as you can simply add on a memory drawer to achieve more memory capacity. 
Industry Opinions
Check out this analyst’s view of the IBM eX5 announcement here (pdf).
Here’s what VMware’s CTO, Stephen Herrod, has to say about IBM eX5:

  


IBM System x March 2 Event (What DOES 5 Mean?)

Tomorrow, March 2nd, IBM kicks off a new portfolio of products in their System x line.  One of the products will be a refresh and two will be new.  However, don’t get your hopes up on seeing details of these new offerings, because tomorrow’s live event at 2 p.m. Eastern will be focused on the portfolio and the technology behind it.  IBM will not be disclosing any pricing, performance, model or Intel specifics until Intel’s launch dates of March 16 and March 30. 

“What Does 5 Mean to You” Campaign
Five business days ago, IBM kicked off a video campaign, “What Does 5 Mean to You.”  While a clever idea, I thought it missed on the messaging – it played too much on “5,” which will become clearer tomorrow when the announcement is made.  Here’s a look at all the videos:


What Are Your Top 5 IT Challenges (from the “What is 5” videos)?
The key point of these videos was not to tease us, but to highlight the top 5 IT challenges that the new IBM portfolio will help solve.  Take a look at the top 5 IT challenges:

Challenge #5:  “My servers need Fibre Channel, Ethernet and iSCSI all operating at different speeds.  How do I simplify my networks right now?” 
Message:  Converged Infrastructure


Challenge #4:   “Why do I have to buy different types of servers whenever my needs change?  Can’t technology adapt to me?”
Message: Flexible Infrastructure

 

Challenge #3:  “My data costs keep growing.  How can I control the sprawl of my storage?”
Message: ?? Not Sure

Challenge #2:  “I don’t need a lot of complicated choices.  Why can’t I get a system that is set up for my workloads…right out of the box?”
Message: Just as the IBM HS22V is “designed” for virtualization, we can expect this trend to continue with future IBM product releases…

Challenge #1:  “Technology competitors can all seem the same.  Doesn’t anyone have a game changing technology that will blow me away?”
Message: IBM expects the March 2nd announcement to be a game changer – and so do I. 

Check back with me tomorrow when IBM unveils What 5 Really Means!

HP Tech Day (#hpbladesday) – Final Thoughts (REVISED)

(revised 5/4/2010)

First, I’d like to thank HP for inviting me to HP Tech Day in Houston. I’m honored that I was chosen and hope that I’m invited back – even after my challenging questions about the Tolly Report. It was a fun-packed day and a half, and while it was a great event, I won’t miss having to hashtag (#hpbladesday) all my tweets. I figured I’d use this last day to offer up my final thoughts – for what they are worth.

Blogger Attendees
As some of you may know, I’m still the rookie of this blogging community – especially in this group of invitees – so I didn’t have a history with anyone in the group except Rich Brambley of http://vmetc.com.  However, this did not matter, as they all welcomed me as if I were one of their own.  In fact, they even treated me to a practical joke, letting me walk around HP’s Factory Express tour for half an hour with a Proliant DL180 G6 sticker on my back (thanks to Stephen and Greg for that one.) Yes, that’s me in the picture.

All jokes aside, these bloggers were top class, and they offer up some great blogs, so if you don’t check them out daily, please make sure to visit them.  Here’s the list of attendees and their sites:

Rich Brambley: http://vmetc.com

Greg Knieriemen: http://www.storagemonkeys.com/  and http://iKnerd.com
Also check out Greg’s notorious podcast, “Infosmack” (if you like it, make sure to subscribe via iTunes)

Chris Evans: http://thestoragearchitect.com

Simon Seagrave: http://techhead.co.uk

John Obeto: http://absolutelywindows.com 
(don’t mention VMware or Linux to him, he’s all Microsoft)

Frank Owen: http://techvirtuoso.com

Martin Macleod: http://www.bladewatch.com/

Stephen Foskett: http://gestaltit.com/ and http://blog.fosketts.net/

Devang Panchigar: http://www.storagenerve.com

A special thanks to the extensive HP team who participated in the blogging efforts as well. 

HP Demos and Factory Express Tour
I think I got the most out of this event from the live demos and the Factory Express tour.  These are things that you can read about, but until you see them in person, you can’t appreciate the value that HP brings to the table, through their product design and through their services.

The image on the left shows the MDS600 storage shelf – something that I’ve read about many times, but until I saw it, I didn’t realize how cool, and useful, it is.  70 drives in a 5U space.  That’s huge.  Seeing things like this, live and in person, is what these HP Tech Days need to be about: hands-on, live demos and tours of what makes HP tick.

The Factory Express tour was really cool.  I think we should have been allowed to work the line for an hour alongside the HP employees.  On this tour we saw how customized HP server builds go from being an order to being a solution.  Workers like the one in the picture on the right typically do 30 servers a day, depending on the type of server.  The entire process involves testing and 100% audits to ensure accuracy.

My words won’t do HP Factory Express justice, so check out this video from YouTube:

For a full list of my pictures taken during this event, please check out:
http://tweetphoto.com/user/kevin_houston

http://picasaweb.google.com/101667790492270812102/HPTechDay2010#

Feedback to the HP team for future events:
1) Keep the blogger group small
2) Keep it to HP demos and presentations (no partners, please)
3) More time on hands-on, live demos and tours.  This is where the magic is.
4) Try and do this at least once a quarter.  HP’s doing a great job building their social media teams, and this event goes a long way in creating that buzz.

Thanks again, HP, and to Ivy Worldwide (http://www.ivyworldwide.com) for doing a great job.  I hope to attend again!

Disclaimer: airfare, accommodations and meals were provided by HP; however, the content being blogged is solely my opinion and does not in any way express the opinions of HP.

Tolly Report: HP Flex-10 vs Cisco UCS (Network Bandwidth Scalability Comparison)

Tolly.com announced on 2/25/2010 a new test report comparing network bandwidth scalability between the HP BladeSystem c7000 with BL460c G6 servers and the Cisco UCS 5100 with B200 servers, and the results were interesting.  The report tested 6 HP blades with a single Flex-10 module vs 6 Cisco blades using a Fabric Extender + a single Fabric Interconnect.  I’m not going to try to re-state what the report says (for that you can download it directly); instead, I’m going to highlight the results.  It is important to note that the report was “commissioned by Hewlett-Packard Dev. Co, L.P.”

Result #1:  HP BladeSystem C7000 with a Flex-10 Module Tested to have More Aggregate Server Throughput (Gbps) over the Cisco UCS with a Fabric Extender connected to a Fabric Interconnect in a Physical-to-Physical Comparison

  • When 4 physical servers were tested, Cisco achieved an aggregate throughput of 36.59 Gbps vs HP’s 35.83 Gbps (WINNER: Cisco)

  • When 6 physical servers were tested, Cisco achieved an aggregate throughput of 27.37 Gbps vs HP’s 53.65 Gbps – a difference of 26.28 Gbps (WINNER: HP)

Result #2:  HP BladeSystem C7000 with a Flex-10 Module Tested to have More Aggregate Server Throughput (Gbps) over the Cisco UCS with a Fabric Extender connected to a Fabric Interconnect in a Virtual-to-Virtual Comparison

  • Testing 2 servers, each running 8 Red Hat Linux virtual machines under VMware, showed that HP achieved an aggregate throughput of 16.42 Gbps vs Cisco UCS achieving 16.70 Gbps (WINNER: Cisco)

The above results were achieved with the 2 x Cisco B200 blade servers each mapped to a dedicated 10Gb uplink port on the Fabric Extender (FEX).  When the 2 x Cisco B200 blade servers were configured to share the same 10Gb uplink port on the FEX, the achieved aggregate throughput on the Cisco UCS decreased to 9.10 Gbps.

A few points to note about these findings:
a) the HP Flex-10 Module has 8 x 10Gb uplinks whereas the Cisco Fabric Extender (FEX) has 4 x 10Gb uplinks

b) Cisco’s FEX design allows the 8 blade servers to extend out the 4 external ports on the FEX at a 2:1 ratio (2 blades per external FEX port.) The current Cisco UCS design requires the servers to be “pinned,” or permanently assigned, to their respective FEX uplink. This works well when there are up to 4 blade servers, but beyond 4 blade servers an uplink’s traffic is shared between two servers, which could cause bandwidth contention. 

Furthermore, it’s important to understand that the design of the UCS blade infrastructure does not allow communication to go from Server 1 to Server 2 without leaving the FEX, connecting to the Fabric Interconnect (top of the picture), then returning through the FEX to the server.  This design is the potential cause of the decrease in aggregate throughput from 16.70 Gbps to 9.10 Gbps shown above.
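To put rough numbers on that pinning behavior, here’s a back-of-the-envelope Python sketch of the worst-case uplink share per blade. The static pinning to 4 uplinks is as described above, but the figures are illustrative and not taken from the Tolly methodology:

```python
# Back-of-the-envelope model of FEX pinning: each blade is statically
# pinned to one of the 4 external 10Gb FEX uplinks, so with more than
# 4 active blades, two servers end up sharing a single 10Gb uplink.

FEX_UPLINKS = 4
UPLINK_GBPS = 10.0

def per_blade_uplink_gbps(active_blades: int) -> float:
    """Worst-case uplink share per blade under static pinning."""
    blades_per_uplink = -(-active_blades // FEX_UPLINKS)  # ceiling division
    return UPLINK_GBPS / blades_per_uplink

for n in (2, 4, 6, 8):
    print(f"{n} active blades -> up to {per_blade_uplink_gbps(n):.1f} Gbps "
          f"of uplink per blade")
```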


One of the “Bottom Line” conclusions from this report states, “throughput degradation on the Cisco UCS caused by bandwidth contention is a cause of concern for customers considering the use of UCS in a virtual server environment.”  However, I encourage you to take a few minutes, download the full report from the Tolly.com website and draw your own conclusions. 

Let me know your thoughts about this report – leave a comment below.

Disclaimer: this report was brought to my attention while attending the HP Tech Day event, where airfare, accommodations and meals were provided by HP; however, the content being blogged is solely my opinion and does not in any way express the opinions of HP.

HP Tech Day – Day 1 Recap

Wow – the first day of HP Tech Day 2010 was jam-packed with meetings, presentations and good information.  Unfortunately, it appears there won’t be any confidential, earth-shattering news to report, but it has still been a great event to attend.

My favorite part of the day was the HP BladeSystem demo, where we not only got our hands on the blade servers, but also got to see what the midplane and power bus look like outside the chassis. 


Kudos to James Singer, HP blade engineer, who did a great job talking about the HP BladeSystem and all it offers.  My only advice to the HP events team is to double the time we get with the blades next time.  (Isn’t that why we were here?)

Since I spent most of the day Tweeting what was going on, I figured it would be easiest to just list my tweets throughout the day.  If you have any questions about any of this, let me know.

My tweets from 2/25/2010 (latest to earliest):

Q&A from HP StorageWorks CTO, Paul Perez

  • “the era of spindles for IOPS will be over soon.” Paul Perez, CTO HP StorageWorks
  • CTO Perez said Memristors (http://tinyurl.com/39f6br) are the next major evolution in storage – in next 2 or 3 years
  • CTO Perez views Solid State (Drives) as an extension of main memory.
  • HP StorageWorks CTO, Paul Perez, now discussing HP StorageWorks X9000 Network Storage System (formerly known as IBRIX)
  • @SFoskett is grilling the CTO of HP StorageWorks
  • Paul Perez – CTO of StorageWorks is now in the room

Competitive Discussion

  • Kudos to Gary Thome , Chief Architect at HP, for not wanting to bash any vendor during the competitive blade session
  • Cool – we have a first look at a Tolly report comparing HP BladeSystem Flex-10 vs Cisco UCS…
  • @fowen Yes – a 10Gb, a CNA and a virtual adapter. Cisco doesn’t have anything “on the motherboard” though.
  • RT @fowen: HP is the only vendor (currently) who can embed 10GB nics in Blades @hpbladeday AND Cisco…
  • Wish HP allowed more time for deep dive into their blades at #hpbladesday. We’re rushing through in 20 min content that needs an hour.
  • Dell’s M1000 blade chassis has the blade connector pins on the server side. This causes a lot of issues as pins bend
  • I’m going to have to bite my tongue on this competitive discussion between blade vendors…
  • Mentioning HP’s presence in Gartner’s Magic Quadrant (see my previous post on this here) –> http://tinyurl.com/ydbsnan
  • Fun – now we get to hear how HP blades are better than IBM, Cisco and Dell

HP BladeSystem Matrix Demo

Insight Software Demo

  • Whoops – previous picture was “Tom Turicchi” not John Schmitz
  • John Schmitz, HP, demonstrates HP Insight Software http://tinyurl.com/yjnu3o9
  • HP Insight Control comes with “Data Center Power Control” which allows you to define rules for power control inside your DC
  • HP Insight Control = “Essential Management”; HP Insight Dynamics = “Advanced Management”
  • Live #hpBladesday Tweet Feed can be seen at http://tinyurl.com/ygcaq2a

BladeSystem in the Lab

  • c7000 Power Bus (rear) http://tinyurl.com/yjy3kwy #hpbladesday complete list of pics can be found @ http://tinyurl.com/yl465v9
  • HP c7000 Power Bus (front) http://tinyurl.com/yfwg88t #hpbladesday (one more pic coming…)
  • HP c7000 Midplane (rear) http://tinyurl.com/yhozte6
  • HP BladeSystem C7000 Midplane (front) http://tinyurl.com/ylbr9rd
  • BladeSystem lab was friggin awesome. Pics to follow
  • 23 power “steppings” on each BladeSystem fan
  • 4 fan zones in a HP BladeSystem allows for fans to spin at different rates. – controlled by the Onboard Administrator
  • The designs of the HP BladeSystem cooling fans came from ducted electric jet fans (from hobby planes) http://tinyurl.com/yhug94w
  • Check out the HP SB40c Storage Blade with the cover off : http://tinyurl.com/yj6xode
  • James Singer – talking about HP BladeSystem power (http://tinyurl.com/ykfhbb2)
  • DPS takes total loads and pushes on fewer supplies which maximizes the power efficiency
  • DPS – Dynamic Power Saver dynamically turns power supplies off based on the server loads (HP exclusive technology)
  • HP BladeSystem power supplies are 94% efficient
  • HP’s hot-pluggable equipment is not purple, it’s “port wine”
  • Here’s the HP BladeSystem C3500 (1/2 of a C7000) http://tinyurl.com/yhbpddt
  • In BladeSystem demo with James Singer (HP). Very cool. They’ve got a C3500 (C7000 cut in half.) Picture will come later.

 Lunch

  • Having lunch with Dan Bowers (HP marketing) and Gary Thome – talking about enhancements needed for Proliant support materials

 Virtual Connect

ISB Overview and Data Center Trends 2010

  • check out all my previous HP posts at http://tinyurl.com/yzx3hx6
  • BladeSystem midplane doesn’t require transceivers, so it’s easy to run 10Gb at the same cost as 1Gb
  • BladeSystem was designed for 10Gb (with even higher in mind.)
  • RT @SFoskett: Spot the secret “G” (for @GestaltIT?) in this #HPBladesDay Nth Generation slide! http://twitpic.com/159q23 
  • If Cisco wants to be like HP, they’d have to buy Lenovo, Canon and Dunder Mifflin
  • discussed how HP blades were used in Avatar (see my post on this here )–> http://tinyurl.com/yl32xud
  • HP’s Virtual Client Infra. Solutions design allows you to build “bricks” of servers and storage to serve 1000’s of virtual PCs
  • Power capping is built into HP hardware (it’s not in the software.)
  • Power Capping is a key technology in the HP Thermal Logic design.
  • HP’s Thermal Logic technology allows you to actively manage power over time.