Category Archives: HP-HPE

HP Tech Day – Day 1 Agenda and Attendees

Today kicks off the HP Blades and Infrastructure Software Tech Day 2010 (aka HP Blades Day). I’ll be updating this site frequently throughout the day, so be sure to check back. You can quickly view all of the HP Tech Day info by clicking on the “Category” tab on the left and choosing “HPTechDay2010.” For live updates, follow me on Twitter @Kevin_Houston.

Here’s our agenda for today (Day 1):

9:10 – 10:00 ISB Overview and Key Data Center Trends 2010
10:00 – 10:30 Nth Generation Computing Presentation
10:45 – 11:45 Virtual Connect
1:00 – 3:00 BladeSystem in the Lab (Overview and Demo) and Insight Software (Overview and Demo)
3:15 – 4:15 Matrix
4:15 – 4:45 Competitive Discussion
5:00 – 5:45 Podcast roundtable with Storage Monkeys

Note: gaps in the times above indicate a break or lunch.

For extensive coverage, make sure you check in on the rest of the attendees’ blogs:

Rich Brambley: http://vmetc.com
Greg Knieremen: http://www.storagemonkeys.com/
Chris Evans: http://thestoragearchitect.com
Simon Seagrave: http://techhead.co.uk
John Obeto: http://absolutelywindows.com
Frank Owen: http://techvirtuoso.com
Martin Macleod: http://www.bladewatch.com/
Steven Foskett: http://blog.fosketts.net/
Devang Panchigar: http://www.storagenerve.com

Disclaimer: airfare, accommodations and meals are being provided by HP; however, the content being blogged is solely my opinion and does not in any way express the opinions of HP.

More HP and IBM Blade Rumours

I wanted to post a few more rumours before I head out to HP in Houston for “HP Blades and Infrastructure Software Tech Day 2010” so it doesn’t appear that I got the info from HP. NOTE: this is purely speculation; I have no definitive information from HP, so this may be false info.

First off – the HP Rumour:
I’ve caught wind of a secret that may be truth, may be fiction, but I hope to find out for sure from the HP blade team in Houston.  The rumour is that HP’s development team currently has a Cisco Nexus blade switch module for the HP BladeSystem in its lab and is testing it out.

Now, this seems far-fetched, especially with the news of Cisco severing partner ties with HP. However, that news only covered products sold under the HP label but made by Cisco (OEM); HP will continue to sell Cisco Catalyst switches for the HP BladeSystem and even Cisco-branded Nexus switches with HP part numbers (see this HP site for details). I have some doubt about this rumour of a Cisco Nexus switch going inside the HP BladeSystem, simply because I am 99% sure that HP is announcing a Flex-10 type of BladeSystem switch that will allow converged traffic to be split out, with the Ethernet traffic going to the Ethernet fabric and the Fibre traffic going to the Fibre fabric (check out the rumour blog I posted a few days ago for details). Guess only time will tell.

The IBM Rumour:
A few days ago I posted a rumour blog discussing HP’s next generation of blades adding Converged Network Adapters (CNAs) to the motherboard (in lieu of the 1Gb or Flex-10 NICs). Well, now I’ve uncovered a rumour that IBM is planning to follow later this year with blades that will also have CNAs on the motherboard.  This is huge!  Let me explain why.

The design of IBM’s BladeCenter E and BladeCenter H has the 1Gb NICs onboard each blade server hard-wired to I/O Bays 1 and 2 – meaning only Ethernet modules can be used in these bays (see the image to the left for details.)  However, I/O Bays 1 and 2 are for “standard form factor I/O modules” while I/O Bays 7-10 are for “high speed form factor I/O modules”.  This means that I/O Bays 1 and 2 cannot handle “high speed” traffic, i.e. converged traffic.

This means that IF IBM comes out with a blade server that has a CNA on the motherboard, then either (see the sketch after this list):

a) the blade’s CNA will have to route to I/O Bays 7-10
OR
b) IBM’s going to have to come out with a new BladeCenter chassis that allows the high speed converged traffic from the CNAs to connect to a high speed switch module in Bays 1 and 2.
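To make that constraint concrete, here’s a small, purely illustrative sketch in Python – the bay assignments are my own simplified reading of the BladeCenter E/H layout described above, not IBM documentation:

```python
# Toy model of the BladeCenter I/O bay constraint described above.
# Bay assignments are a simplification for illustration, not IBM specs.
STANDARD_BAYS = {1, 2}            # hard-wired to the onboard 1Gb NICs (Ethernet modules only)
HIGH_SPEED_BAYS = {7, 8, 9, 10}   # high-speed form factor I/O modules

def reachable_bays(onboard_adapter, new_chassis=False):
    """Return the I/O bays an onboard adapter's traffic could reach."""
    if onboard_adapter == "1Gb NIC":
        return STANDARD_BAYS
    if onboard_adapter == "CNA":
        if new_chassis:
            # Option (b): a future chassis wires Bays 1 and 2 for high-speed modules
            return STANDARD_BAYS | HIGH_SPEED_BAYS
        # Option (a): converged traffic has to route to the high-speed bays
        return HIGH_SPEED_BAYS
    raise ValueError(f"unknown adapter: {onboard_adapter}")

print(sorted(reachable_bays("1Gb NIC")))                 # [1, 2]
print(sorted(reachable_bays("CNA")))                     # [7, 8, 9, 10]
print(sorted(reachable_bays("CNA", new_chassis=True)))   # [1, 2, 7, 8, 9, 10]
```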

So let’s think about this.  If IBM (and HP, for that matter) does put CNAs on the motherboard, is there still a need for additional mezzanine/daughter cards?  If not, the blade servers could have more real estate for memory or more processors.  And if there are no extra daughter cards, then there’s no need for additional I/O module bays, which means the blade chassis could be smaller and use less power – something every customer would like to have.

I can really see the blade market moving toward this type of design (not surprisingly, very similar to Cisco’s UCS design) – one where only a pair of redundant “modules” is needed to split converged traffic out to its respective fabrics.  Maybe it’s all a pipe dream, but when it comes true in 18 months, you can say you heard it here first.

Thanks for reading.  Let me know your thoughts – leave your comments below.

HP Blades and Infrastructure Software Tech Day 2010 (UPDATED)

On Wednesday I will be headed to the 2010 HP Infrastructure Software & Blades Tech Day, an invitation-only blogger event at the HP campus in Houston, TX.  The event is a day-and-a-half deep dive into the blade server market, key data center trends and client virtualization.  We will be with HP technology leaders and business executives who will discuss the company’s business advantages and technical advances.  The event will also include customers sharing their key insights and experiences, product demos and an insider’s tour of HP’s lab facilities.

I’m extremely excited to attend this event and can’t wait to blog about it.  (Hopefully HP will not NDA the entire event.)  I’m also excited to meet some of the world’s top bloggers.  Check out this list of attendees:

Rich Brambley: http://vmetc.com

Greg Knieremen: http://www.storagemonkeys.com/

Chris Evans: http://thestoragearchitect.com

Simon Seagrave: http://techhead.co.uk

John Obeto: http://absolutelywindows.com

Frank Owen: http://techvirtuoso.com

Martin Macleod: http://www.bladewatch.com/

Plus a couple that I left off originally (sorry guys):

Steven Foskett: http://blog.fosketts.net/

Devang Panchigar: http://www.storagenerve.com

Be sure to check back with me on Thursday and Friday for updates from the event, and also follow me on Twitter @kevin_houston (the Twitter hashtag for this event is #hpbladesday).

Disclaimer: airfare, accommodations and some meals are being provided by HP; however, the content being blogged is solely my opinion and does not in any way express the opinions of HP.

HP BladeSystem Rumours

I’ve recently posted some rumours about IBM’s upcoming announcements in their blade server line; now it is time to let you know some rumours I’m hearing about HP.  NOTE: this is purely speculation; I have no definitive information from HP, so this may be false info.  That being said – here we go:

Rumour #1:  Integration of “CNA”-like devices on the motherboard.
As you may be aware, with the introduction of the “G6”, or Generation 6, of HP’s blade servers, HP added “FlexNICs” onto the servers’ motherboards instead of the 2 x 1Gb NICs that are standard on most of the competition’s blades.  FlexNICs allow the user to carve up a 10Gb NIC into 4 virtual NICs when using the Flex-10 modules inside the chassis.  (For a detailed description of Flex-10 technology, check out this HP video.)  The idea behind Flex-10 is that you have 10Gb connectivity that allows you to do more with fewer NICs.
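For readers who haven’t seen Flex-10 in action, here’s a minimal sketch of the carving idea in Python.  It’s purely illustrative – the class and method names are my own, not HP’s Virtual Connect interface – but it captures the rule: up to 4 FlexNICs per 10Gb port, with the allocations never exceeding the physical 10Gb.

```python
# Illustrative model of Flex-10 "carving": one 10Gb physical port split into
# up to four FlexNICs.  Names are hypothetical, not HP's actual API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FlexNIC:
    name: str
    bandwidth_gb: float        # share of the 10Gb physical port

@dataclass
class Flex10Port:
    total_gb: float = 10.0
    flexnics: List[FlexNIC] = field(default_factory=list)

    def carve(self, name, bandwidth_gb):
        if len(self.flexnics) >= 4:
            raise ValueError("a Flex-10 port exposes at most 4 FlexNICs")
        allocated = sum(nic.bandwidth_gb for nic in self.flexnics)
        if allocated + bandwidth_gb > self.total_gb:
            raise ValueError("allocations cannot exceed the 10Gb physical port")
        self.flexnics.append(FlexNIC(name, bandwidth_gb))

# Example: carve one port for management, vMotion and VM traffic.
port = Flex10Port()
port.carve("management", 0.5)
port.carve("vmotion", 2.0)
port.carve("vm-traffic", 7.5)
print([(nic.name, nic.bandwidth_gb) for nic in port.flexnics])
```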

SO – what’s next?  Rumour has it that the “G7” servers, expected to be announced on March 16, will have an integrated CNA, or Converged Network Adapter.  With a CNA on the motherboard, both the Ethernet and the Fibre traffic will have a single integrated device to travel over.  This is a VERY cool idea because it could lead to a blade server that eliminates the additional daughter card or mezzanine expansion slots, therefore freeing up valuable real estate for newer Intel CPU architectures.

Rumour #2: Next generation Flex-10 Modules will separate Fibre and Network traffic.

Today, HP’s Flex-10 ONLY handles Ethernet traffic.  There is no support for FCoE (Fibre Channel over Ethernet), so if you have a Fibre network, then you’ll also have to add a Fibre switch into your BladeSystem chassis design. If HP does put a CNA onto their next-generation blade servers to carry Fibre and Ethernet traffic, wouldn’t it make sense that there would need to be a module in the BladeSystem chassis that allows the storage and Ethernet traffic to exit?

I’m hearing that a new version of the Flex-10 module is coming, very soon, that will allow both the Ethernet AND the Fibre traffic to exit out of the switch. (The image to the right shows what it could look like.)  The switch would allow 4 of the 8 uplink ports on this next-generation Flex-10 switch to go to the Ethernet fabric, with the other 4 ports either dedicated to a Fibre fabric OR used as 4 additional Ethernet ports.

If this rumour is accurate, it could shake things up in the blade server world.  Cisco UCS uses 10Gb Data Center Ethernet (Ethernet plus FCoE); IBM BladeCenter can do a 10Gb plus Fibre switch fabric (like HP) or a 10Gb Enhanced Ethernet plus FCoE fabric (like Cisco); however, no one currently has a device to split the Ethernet and Fibre traffic at the blade chassis.  If this rumour is true, then we should see it announced around the same time as the G7 blade servers (March 16).

That’s all for now.  As I come across more rumours, or information about new announcements, I’ll let you know.

Cisco Takes Top 8 Core VMware VMmark Server Position

Cisco is getting some (more) recognition with their UCS blade server product, as they recently achieved the top position for “8 Core Server” on VMware’s VMmark benchmark tool.  VMmark is the industry’s first (and only credible) virtualization benchmark for x86-based computers.  According to the VMmark website, the Cisco UCS B200 blade server reached a score of 25.06 @ 17 tiles.  A “tile” is simply a collection of virtual machines (VMs) executing a set of diverse workloads designed to represent a typical work environment.  The total number of tiles that a server can handle provides a detailed measurement of that server’s consolidation capacity.
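To make the “score @ tiles” notation concrete, here’s a tiny illustrative snippet in Python that simply records the results as they are quoted in this post and ranks them by score (the real benchmark’s workload mix and scoring rules are defined by VMware, not reproduced here):

```python
# Illustrative only: VMmark results as quoted in this post, ranked by score.
from dataclasses import dataclass

@dataclass
class VMmarkResult:
    server: str
    score: float
    tiles: int      # each tile = a fixed collection of VMs running mixed workloads

results = [
    VMmarkResult("Cisco UCS B200 (2 x Xeon X5570)", 25.06, 17),
    VMmarkResult("HP BL490 (runner-up among server CPUs)", 24.54, 17),
]

# Higher score wins; more tiles means more VMs consolidated on one host.
for r in sorted(results, key=lambda r: r.score, reverse=True):
    print(f"{r.server}: {r.score} @ {r.tiles} tiles")
```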

Cisco’s Winning Configuration
So – how did Cisco reach the top server spot?  Here’s the configuration:

server config:

  • 2 x Intel Xeon X5570 Processors
  • 96GB of RAM (16 x 8GB)
  • 1 x Converged Network Adapter (Cisco UCS M71KR-Q)

storage config:

  • EMC CX4-240
  • Cisco MDS 9130
  • 1154.27GB Used Disk Space
  • 1024MB Array Cache
  • 41 disks used on 4 enclosures/shelves (1 with 14 disks, 3 with 9 disks)
  • 37 LUNs used
    * 17 at 38GB (file server + mail server) over 20 x 73GB SSDs
    * 17 at 15GB (database) + 2 LUNs at 400GB (misc) over 16 x 450GB 15k disks
    * 1 LUN at 20GB (boot) over 5 x 300GB 15k disks
  • RAID 0 for VMs, RAID 5 for VMware ESX 4.0 O/S

While first place on the VMmark page (8 cores) shows Fujitsu’s RX300, it’s important to note that its result was reached using Intel’s W5590 processor – a processor designed for “workstations,” not servers.  Second place among server processors currently shows HP’s BL490 with 24.54 (@ 17 tiles).

Thanks to Omar Sultan (@omarsultan) for Tweeting about this and to Harris Sussman for blogging about it.

Weta Digital, Production House for AVATAR, Donates IBM Blade Servers to Schools

Weta Digital, the visual effects production house behind the hit movie AVATAR, recently donated about 300 IBM HS20 blade servers to Whitireia Community Polytechnic in Porirua, which will use them to help teach students how to create 3-D animations. The IBM HS20 blade servers were originally bought to produce special effects for The Lord of the Rings at a cost of more than $1 million (for more details on this, check out this November 2004 article from DigitalArtsOnline.co.uk.) Weta Digital has since replaced them with more powerful HP BL2x220c G5 servers supplied by Hewlett-Packard, which were used for AVATAR.

According to the school, these older IBM blade servers will help the school expand its graphics and information technology courses and turn out students with more experience in 3-D rendering.

Thanks to Stuff.co.nz for the information mentioned above.

(UPDATED) Blade Servers with SD Slots for Virtualization

(updated 1/13/2010 – see bottom of blog for updates)

Eric Gray at www.vcritical.com blogged today about the benefits of using a flash-based device, like an SD card, for loading VMware ESXi, so I thought I would take a few minutes to touch on the topic.

As Eric mentions, probably the biggest benefit of running VMware ESXi on an embedded device is that you don’t need local drives, which lowers the power and cooling requirements of your blade server.  While he mentions HP in his blog, both HP and Dell offer SD slots in their blade servers – so let’s take a look:

HP
HP currently offers these SD slots in their BL460 G6 and BL490 G6 blade servers.  As you can see from the picture on the left (thanks again to Eric at vCritical.com), HP allows you to access the SD slot from the top of the blade server.  This makes it fairly convenient to access, although once the image is installed on the SD card, it’s probably not ever coming out.  HP’s QuickSpecs for the BL460 G6 offer up an “HP 4GB SD Flash Media” that has a current list price of $70; however, I have been unable to find any documentation that says you MUST use this SD card, so if you want to try it with your own personal SD card first, good luck.  It is important to note that, unlike Dell, HP does not currently offer VMware ESXi, or any other virtualization vendor’s software, pre-installed on an SD card.

Dell
Dell has been offering SD slots on select servers for quite a while.  In fact, I can remember seeing it at VMworld 2008.  Everyone else was showing “embedded hypervisors” on USB keys while Dell was using an SD card.  I don’t know that I have a personal preference between USB and SD, but the point is that Dell was ahead of the game on this one.

Dell currently offers their SD slot only on their M805 and M905 blade servers.  These are full-height servers, which could be considered good candidates for virtualization due to their redundant connectivity, high memory capacity and high I/O (but that’s for another blog post.)

Dell chose to place the SD slots on the bottom rear of their blade servers.  I’m not sure I agree with the placement, because if you need to access the card, for whatever reason, you have to pull the server completely out of the chassis to service it.  It’s a small thing, but it adds time and complexity to the serviceability of the server.

An advantage that Dell has over HP is that they offer VMware ESXi 4 PRE-LOADED on the SD card upon delivery.  Per the Dell website, an SD card with ESXi 4 (basic, not Standard or Enterprise) is available for $99.  It’s listed as “VMware ESXi v4.0 with VI4, 4CPU, Embedded, Trial, No Subsc, SD,NoMedia“.  Yes, it’s considered a “trial” and it’s the basic version with no bells or whistles; however, it is pre-loaded, which equals time savings.  There are options to upgrade ESXi to either Standard or Enterprise as well (for additional cost, of course.)

It is important to note that this discussion was only about SD slots.  All of the blade server vendors, including IBM, have incorporated internal USB slots into their blade servers, so even if a specific server doesn’t have an SD slot, there is still the ability to load the hypervisor onto a USB key (where supported.)

1/13/2010 UPDATE – SD slots are also available on the BL280 G6 and BL685 G6.

There is also an HP Advisory discouraging use of an internal USB key for embedded virtualization.  Check it out at:

http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c01957637&lang=en&cc=us&taskId=101&prodSeriesId=3948609&prodTypeId=3709945

Interesting HP Server Facts (from IDC)

As you can see from my blog title, I try to focus on “all things blade servers”; however, I came across this bit of information that I thought would be fun to blog about.  An upfront warning – this is an HP-biased blog post, so sorry to those of you who are Cisco, Dell or IBM fans.

Market research firm IDC released a quarterly update to its Worldwide Quarterly Server Tracker, citing market share figures for the 3rd calendar quarter of 2009 (3Q09).  From this report, here are a few fun HP server facts (thanks to HP for passing these along to me):

HP is the #1 vendor in worldwide server shipments for the 30th consecutive quarter (7.5 years). HP shipped more than 1 out of every 3 servers worldwide and captured 36.5 percent total unit shipment share.

According to IDC:

  • HP shipped over 161,000 more servers than #2 Dell.
  • HP shipped 2.6 times as many servers as #3 IBM.
  • HP shipped 9.0 times as many servers as #4 Fujitsu.
  • HP shipped 12.9 times as many servers as #5 Sun.
  • HP ended up in a statistical tie with IBM for #1 in total server revenue market share with 30.9 percent.  This includes all server revenue (UNIX and x86).

HP leads the blade server market with a 50.7 percent revenue share and a 47.7 percent unit share.

I blogged about this in early December (see this link for details), but it’s no surprise that HP is leading the pack in blade sales.  Their field sales team is actively promoting blades for nearly every server opportunity, and they continue to make innovative additions to their blades (like 10Gb NICs standard on G6 blades).  HP Integrity blades claimed the #1 position in revenue share for the RISC+EPIC blade segment with a 53.2 percent share, gaining 1.8 points year over year.

For the 53rd consecutive quarter – more than 13 years – HP ProLiant has been the x86 server market share leader in both factory revenue and units, shipping more than 1 out of every 3 servers in this market with a 36.9 percent unit share.

HP’s x86 revenue share was 14.6 points higher than that of its nearest competitor, Dell, and 19.2 percentage points higher than IBM’s.

For the 3 major operating environments – UNIX®, Windows and Linux combined (representing 99.3 percent of all servers shipped worldwide) – HP is number 1 worldwide in server unit shipment and revenue market share.

HP holds a 36.5 percent unit market share worldwide, which is 2.6 times IBM’s unit market share and 12.9 times the unit share of Sun.

HP holds a 35.4 percent revenue market share worldwide, which is 2.2 times the revenue share of Dell and 4.0 times the revenue share of Sun.
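As a quick back-of-the-envelope check – my own arithmetic, not figures published by IDC – the ratios above imply roughly the following competitor shares:

```python
# Implied competitor shares derived from the ratios quoted above.
# Rough back-of-the-envelope numbers, not IDC-reported figures.
hp_unit_share = 36.5      # percent of worldwide unit shipments, Q3 2009
hp_revenue_share = 35.4   # percent of worldwide server revenue, Q3 2009

unit_ratios = {"IBM": 2.6, "Sun": 12.9}
revenue_ratios = {"Dell": 2.2, "Sun": 4.0}

for vendor, ratio in unit_ratios.items():
    print(f"{vendor}: ~{hp_unit_share / ratio:.1f}% implied unit share")
for vendor, ratio in revenue_ratios.items():
    print(f"{vendor}: ~{hp_revenue_share / ratio:.1f}% implied revenue share")
# IBM ~14.0% and Sun ~2.8% of units; Dell ~16.1% and Sun ~8.9% of revenue
```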

FINAL NOTE:  All of the market share figures above are for the 3rd quarter of 2009 (unless otherwise noted) and represent worldwide results as reported by the IDC Worldwide Quarterly Server Tracker for Q309, December 2009.

The Hit Movie AVATAR, Processed on HP Blade Servers

Since the hit movie AVATAR surpassed the $1 billion revenue mark this weekend, I thought it would be interesting to post some information about how the movie was put together – especially since the hardware behind the magic was the HP BL2x220c.

According to an article from information-management.com, AVATAR was put together at a visual effects production house called Weta Digital, located in Miramar, New Zealand.  Weta’s datacenter sits in a 10,000 square foot facility; however, the film’s computing core ran on 2,176 HP BL2x220c blade servers.  This added up to over 40,000 processors and 104 terabytes of RAM. (Check out my post on the HP BL2x220c blade server for details on this 2-in-1 server design by HP.)

The HP blades read and wrote data against 3 petabytes of fast Fibre Channel disk network-attached storage from BlueArc and NetApp.  According to the article, all of the gear was connected by multiple 10-gigabit network links. “We need to stack the gear closely to get the bandwidth we need for our visual effects, and, because the data flows are so great, the storage has to be local,” says Paul Gunn, Weta’s data center systems administrator.

The article also highlights the fact that the datacenter uses water-cooled racks to keep the servers and storage cool.  Surprisingly, the water-cooled design, along with a cool local climate, allows Weta to run its datacenter for less than the cost of running air conditioning (all they pay for is the cost of running water).  In fact, they recently won an energy excellence award for building a smaller footprint that came with 40% lower cooling costs.

Summary of Hardware Used for AVATAR (a quick sanity check of these figures follows the list):

  • 34 racks – each with 4 HP BladeSystem chassis, each chassis holding 16 BL2x220c blades (32 servers)
  • over 40,000 processors
  • 104 TB RAM
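Here’s that quick sanity check – my own arithmetic; the two-servers-per-blade count comes from the BL2x220c’s 2-in-1 design mentioned earlier, and the per-node RAM figure is simply implied by the totals, not something Weta reported:

```python
# Back-of-the-envelope check of the AVATAR render farm figures quoted above.
racks = 34
chassis_per_rack = 4
blades_per_chassis = 16    # BL2x220c blades per BladeSystem chassis
nodes_per_blade = 2        # each BL2x220c packs two independent servers

blades = racks * chassis_per_rack * blades_per_chassis
nodes = blades * nodes_per_blade
ram_tb = 104

print(blades)                         # 2,176 blades, matching the article
print(nodes)                          # 4,352 server nodes
print(round(ram_tb * 1024 / nodes))   # ~24 GB of RAM per node (implied)
```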

Since I don’t want to re-write the excellent article from information-management.com, I encourage you to click here to read the full article.

IDC Q3 2009 Report: Blade Servers are Growing, HP Leads in Shares

IDC reported on Wednesday that blade server sales for Q3 2009 returned to quarterly revenue growth, with factory revenues increasing 1.2% year over year.  However, there was a 14.0% year-over-year shipment decline.  Overall, blade servers accounted for $1.4 billion in Q3 2009, which represented 13.6% of overall server revenue.  Of the top 5 OEM blade manufacturers, IBM experienced the strongest blade growth, gaining 6.0 points of market share.  However, overall market share for Q3 2009 still belongs to HP with 50.7%, with IBM following at 29.4% and Dell in 3rd place with a lowly 8.9% revenue share.

According to Jed Scaramella, senior research analyst in IDC's Datacenter and Enterprise Server group, "Customers are leveraging blade technologies to optimize their environments in response to the pressure of the economic downturn and tighter budgets. Blade technologies provide IT organizations the capability to simplify their IT while improving asset utilization, IT flexibility, and energy efficiency.  For the second consecutive quarter, the blade segment increased in revenue on a quarter-to-quarter basis, while simultaneously increasing their average sales value (ASV). This was driven by next generation processors (Intel Nehalem) and a greater amount of memory, which customers are utilizing for more virtualization deployments. IDC sees virtualization and blades as closely associated technologies that drive dynamic IT for the future datacenter."