Monthly Archives: February 2010

HP Tech Day (#hpbladesday) – Final Thoughts (REVISED)

(revised 5/4/2010)

First, I’d like to thank HP for inviting me to HP Tech Day in Houston. I’m honored that I was chosen and hope that I’m invited back – even after my challenging questions about the Tolly Report. It was a fun-packed day and a half, and while it was a great event, I won’t miss having to hashtag (#hpbladesday) all my tweets. I figured I’d use this last day to offer up my final thoughts – for what they’re worth.

Blogger Attendees
As some of you may know, I’m still the rookie of this blogging community – especially in this group of invitees, so I didn’t have a history with anyone in the group except Rich Brambley of http://vmetc.com. However, this did not matter, as they all welcomed me as if I were one of their own. In fact, they even treated me to a practical joke, letting me walk around HP’s Factory Express tour for half an hour with a ProLiant DL180 G6 sticker on my back (thanks to Stephen and Greg for that one). Yes, that’s me in the picture.

All jokes aside, these bloggers were top class, and they offer up some great blogs, so if you don’t check them out daily, please make sure to visit them.  Here’s the list of attendees and their sites:

Rich Brambley: http://vmetc.com

Greg Knieriemen: http://www.storagemonkeys.com/  and http://iKnerd.com
Also check out Greg’s notorious podcast, “Infosmack” (if you like it, make sure to subscribe via iTunes)

Chris Evans: http://thestoragearchitect.com

Simon Seagrave: http://techhead.co.uk

John Obeto: http://absolutelywindows.com 
(don’t mention VMware or Linux to him, he’s all Microsoft)

Frank Owen: http://techvirtuoso.com

Martin Macleod: http://www.bladewatch.com/

Stephen Foskett: http://gestaltit.com/ and http://blog.fosketts.net/

Devang Panchigar: http://www.storagenerve.com

A special thanks to the extensive HP team who participated in the blogging efforts as well. 

HP Demos and Factory Express Tour
I think I got the most out of this event from the live demos and the Factory Express tour.  These are things that you can read about, but until you see them in person, you can’t appreciate the value that HP brings to the table, through their product design and through their services.

The image on the left shows the MDS600 storage shelf – something that I’ve read about many times, but until I saw it, I didn’t realize how cool, and useful, it was.  70 drives in a 5U space.  That’s huge.  Seeing things like this, live and in person, is what these HP Tech Days need to be about: hands-on, live demos and tours of what makes HP tick.

The Factory Express Tour was really cool.  I think we should have been allowed to work the line for an hour along with the HP employees.  On this tour we saw how customized HP server builds go from being an order to being a solution.  Workers like the one in the picture on the right typically do 30 servers a day, depending on the type of server.  The entire process involves testing and 100% audits to ensure accuracy.

My words won’t do HP Factory Express justice, so check out this video from YouTube:

For a full list of my pictures taken during this event, please check out:
http://tweetphoto.com/user/kevin_houston

http://picasaweb.google.com/101667790492270812102/HPTechDay2010#

Feedback to the HP team for future events:
1) Keep the blogger group small
2) Keep it to HP demos and presentations (no partners, please)
3) More time on hands-on, live demos and tours.  This is where the magic is.
4) Try and do this at least once a quarter.  HP’s doing a great job building their social media teams, and this event goes a long way in creating that buzz.

Thanks again, HP, and to Ivy Worldwide (http://www.ivyworldwide.com) for doing a great job.  I hope to attend again!

Disclaimer: airfare, accommodations and meals are being provided by HP; however, the content being blogged is solely my opinion and does not in any way express the opinions of HP.

Tolly Report: HP Flex-10 vs Cisco UCS (Network Bandwidth Scalability Comparison)

Tolly.com announced on 2/25/2010 a new test report that compares the network bandwidth scalability between the HP BladeSystem c7000 with BL460 G6 servers and the Cisco UCS 5100 with B200 servers, and the results were interesting.   The report tested 6 HP blades with a single Flex-10 module against 6 Cisco blades using a Fabric Extender plus a single Fabric Interconnect.  I’m not going to try and re-state what the report says (for that you can download it directly); instead, I’m going to highlight the results.  It is important to note that the report was “commissioned by Hewlett-Packard Dev. Co, L.P.”

Result #1:  HP BladeSystem C7000 with a Flex-10 Module Tested to have More Aggregate Server Throughput (Gbps) over the Cisco UCS with a Fabric Extender connected to a Fabric Interconnect in a Physical-to-Physical Comparison
>The test shows that with 4 physical servers, Cisco achieved an aggregate throughput of 36.59 Gbps vs HP’s 35.83 Gbps (WINNER: Cisco)

>When 6 physical servers were tested, Cisco achieved an aggregate throughput of 27.37 Gbps vs HP’s 53.65 Gbps – a difference of 26.28 Gbps (WINNER: HP)

Result #2: HP BladeSystem C7000 with a Flex-10 Module Tested to have More Aggregate Server Throughput (Gbps) over the Cisco UCS with a Fabric Extender connected to a Fabric Interconnect in a Virtual-to-Virtual Comparison
>Testing 2 servers, each running 8 Red Hat Linux virtual machines under VMware, showed that HP achieved an aggregate throughput of 16.42 Gbps vs Cisco UCS achieving 16.70 Gbps (WINNER: Cisco).

These results were achieved with the 2 Cisco B200 blade servers each mapped to a dedicated 10Gb uplink port on the Fabric Extender (FEX).  When the 2 Cisco B200 blade servers were configured to share the same 10Gb uplink port on the FEX, the aggregate throughput achieved on the Cisco UCS dropped to 9.10 Gbps.

A few points to note about these findings:
a) the HP Flex-10 Module has 8 x 10Gb uplinks whereas the Cisco Fabric Extender (FEX) has 4 x 10Gb uplinks

b) Cisco’s FEX design lets the 8 blade servers share the 4 external FEX ports at a 2:1 ratio (2 blades per external FEX port). The current Cisco UCS design requires the servers to be “pinned”, or permanently assigned, to their respective FEX uplink. This works well with up to 4 blade servers, but beyond 4, each uplink is shared by two servers, which can cause bandwidth contention.

Furthermore, it’s important to understand that the design of the UCS blade infrastructure does not allow traffic to go from Server 1 to Server 2 without leaving the FEX, travelling up to the Fabric Interconnect (top of the picture) and then returning through the FEX to the other server.  This design is the potential cause of the decrease in aggregate throughput from 16.70 Gbps to 9.10 Gbps noted above.
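To make the pinning math concrete, here’s a small back-of-the-envelope sketch. It’s my own illustration, not something from the Tolly report, and it assumes each 10Gb uplink is split evenly between the blades pinned to it:

```python
# Rough, illustrative model of static "pinning": each blade is permanently
# assigned to one 10Gb FEX uplink, and blades pinned to the same uplink
# split its bandwidth under load. Not from the Tolly report.

UPLINK_GBPS = 10.0
NUM_UPLINKS = 4          # the FEX in the report has 4 external 10Gb uplinks

def worst_case_share(num_blades: int) -> float:
    """Bandwidth available to the most-contended blade, in Gbps."""
    blades_on_uplink = [0] * NUM_UPLINKS
    for b in range(num_blades):                # naive round-robin pinning
        blades_on_uplink[b % NUM_UPLINKS] += 1
    return UPLINK_GBPS / max(blades_on_uplink)

for n in (4, 6, 8):
    print(f"{n} blades: worst-case per-blade share = {worst_case_share(n):.1f} Gbps")
# 4 blades -> 10.0 Gbps each; 6 or 8 blades -> shared uplinks drop to 5.0 Gbps
```

Note that this naive division only shows the per-blade share shrinking once uplinks are shared; the measured drop in aggregate throughput in the report reflects real contention behaviour that simple arithmetic doesn’t capture.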


One of the “Bottom Line” conclusions from the report states that “throughput degradation on the Cisco UCS caused by bandwidth contention is a cause of concern for customers considering the use of UCS in a virtual server environment.”  However, I encourage you to take a few minutes, download the full report from the Tolly.com website, and draw your own conclusions.

Let me know your thoughts about this report – leave a comment below.

Disclaimer: This report was brought to my attention while attending the HP Tech Day event, where airfare, accommodations and meals are being provided by HP; however, the content being blogged is solely my opinion and does not in any way express the opinions of HP.

HP Tech Day – Day 1 Recap

Wow – the first day of HP Tech Day 2010 was jam-packed with meetings, presentations and good information.  Unfortunately, it appears there won’t be any confidential, earth-shattering news to report, but it has still been a great event to attend.

My favorite part of the day was the HP BladeSystem demo, where we not only got our hands on the blade servers, but also got to see what the midplane and power bus look like outside the chassis.


Kudos to James Singer, HP Blade engineer, who did a great job talking about the HP BladeSystem and all it offers.  My only advice to the HP events team is to double the time we get with the blades next time.  (Isn’t that why we were here?)

Since I spent most of the day Tweeting what was going on, I figured it would be easiest to just list my tweets throughout the day.  If you have any questions about any of this, let me know.

My tweets from 2/25/2010 (latest to earliest):

Q&A from HP StorageWorks CTO, Paul Perez

  • “the era of spindles for IOPS will be over soon.” Paul Perez, CTO HP StorageWorks
  • CTO Perez said Memristors (http://tinyurl.com/39f6br) are the next major evolution in storage – in next 2 or 3 years
  • CTO Perez views Solid State (Drives) as an extension of main memory.
  • HP StorageWorks CTO, Paul Perez, now discussing HP StorageWorks X9000 Network Storage System (formerly known as IBRIX)
  • @SFoskett is grilling the CTO of HP StorageWorks
  • Paul Perez – CTO of StorageWorks is now in the room

Competitive Discussion

  • Kudos to Gary Thome, Chief Architect at HP, for not wanting to bash any vendor during the competitive blade session
  • Cool – we have a first look at a Tolly report comparing HP BladeSystem Flex-10 vs Cisco UCS…
  • @fowen Yes – a 10Gb, a CNA and a virtual adapter. Cisco doesn’t have anything “on the motherboard” though.
  • RT @fowen: HP is the only vendor (currently) who can embed 10GB nics in Blades @hpbladeday AND Cisco…
  • Wish HP allowed more time for deep dive into their blades at #hpbladesday. We’re rushing through in 20 min content that needs an hour.
  • Dell’s M1000 blade chassis has the blade connector pins on the server side. This causes a lot of issues as pins bend
  • I’m going to have to bite my tongue on this competitive discussion between blade vendors…
  • Mentioning HP’s presence in Gartner’s Magic Quadrant (see my previous post on this here) –> http://tinyurl.com/ydbsnan
  • Fun – now we get to hear how HP blades are better than IBM, Cisco and Dell

HP BladeSystem Matrix Demo

Insight Software Demo

  • Whoops – previous picture was “Tom Turicchi” not John Schmitz
  • John Schmitz, HP, demonstrates HP Insight Software http://tinyurl.com/yjnu3o9
  • HP Insight Control comes with “Data Center Power Control” which allows you to define rules for power control inside your DC
  • HP Insight Control = “Essential Management”; HP Insight Dynamics = “Advanced Management”
  • Live #hpBladesday Tweet Feed can be seen at http://tinyurl.com/ygcaq2a

BladeSystem in the Lab

  • c7000 Power Bus (rear) http://tinyurl.com/yjy3kwy #hpbladesday complete list of pics can be found @ http://tinyurl.com/yl465v9
  • HP c7000 Power Bus (front) http://tinyurl.com/yfwg88t #hpbladesday (one more pic coming…)
  • HP c7000 Midplane (rear) http://tinyurl.com/yhozte6
  • HP BladeSystem C7000 Midplane (front) http://tinyurl.com/ylbr9rd
  • BladeSystem lab was friggin awesome. Pics to follow
  • 23 power “steppings” on each BladeSystem fan
  • 4 fan zones in a HP BladeSystem allows for fans to spin at different rates. – controlled by the Onboard Administrator
  • The design of the HP BladeSystem cooling fans came from ducted electric jet fans used in hobby planes http://tinyurl.com/yhug94w
  • Check out the HP SB40c Storage Blade with the cover off : http://tinyurl.com/yj6xode
  • James Singer – talking about HP BladeSystem power (http://tinyurl.com/ykfhbb2)
  • DPS takes total loads and pushes on fewer supplies which maximizes the power efficiency
  • DPS – Dynamic Power Saver dynamically turns power supplies off based on the server loads (HP exclusive technology)
  • HP BladeSystem power supplies are 94% efficient
  • HP’s hot-pluggable equipment is not purple, it’s “port wine”
  • Here’s the HP BladeSystem c3000 (1/2 of a C7000) http://tinyurl.com/yhbpddt
  • In BladeSystem demo with James Singer (HP). Very cool. They’ve got a c3000 (a C7000 cut in half.) Picture will come later.

 Lunch

  • Having lunch with Dan Bowers (HP marketing) and Gary Thome – talking about enhancements needed for ProLiant support materials

 Virtual Connect

ISB Overview and Data Center Trends 2010

  • check out all my previous HP posts at http://tinyurl.com/yzx3hx6
  • BladeSystem midplane doesn’t require transceivers, so it’s easy to run 10Gb at same cost as 1Gb
  • BladeSystem was designed for 10Gb (with even higher in mind.)
  • RT @SFoskett: Spot the secret “G” (for @GestaltIT?) in this #HPBladesDay Nth Generation slide! http://twitpic.com/159q23 
  • If Cisco wants to be like HP, they’d have to buy Lenovo, Canon and Dunder Mifflin
  • discussed how HP blades were used in Avatar (see my post on this here )–> http://tinyurl.com/yl32xud
  • HP’s Virtual Client Infra. Solutions design allows you to build “bricks” of servers and storage to serve 1000’s of virtual PCs
  • Power capping is built into HP hardware (it’s not in the software.)
  • Power Capping is a key technology in the HP Thermal Logic design.
  • HP’s Thermal Logic technology allows you to actively manage power over time.

HP Tech Day – Day 1 Agenda and Attendees

Today kicks off the HP Blades and Infrastructure Software Tech Day 2010 (aka HP Blades Day). I’ll be updating this site frequently throughout the day, so be sure to check back. You can quickly view all of the HP Tech Day info by clicking on the “Category” tab on the left and choose “HPTechDay2010.” For live updates, follow me on Twitter @Kevin_Houston.

Here’s our agenda for today (Day 1):

9:10 – 10:00 ISB Overview and Key Data Center Trends 2010
10:00 – 10:30 Nth Generation Computing Presentation
10:45 – 11:45 Virtual Connect
1:00 – 3:00 BladeSystem in the Lab (Overview and Demo) and Insight Software (Overview and Demo)
3:15 – 4:15 Matrix
4:15 – 4:45 Competitive Discussion
5:00 – 5:45 Podcast roundtable with Storage Monkeys

Note: gaps in the times above indicate a break or lunch.

For extensive coverage, make sure you check in on the rest of the attendees’ blogs:

Rich Brambley: http://vmetc.com
Greg Knieriemen: http://www.storagemonkeys.com/
Chris Evans: http://thestoragearchitect.com
Simon Seagrave: http://techhead.co.uk
John Obeto: http://absolutelywindows.com
Frank Owen: http://techvirtuoso.com
Martin Macleod: http://www.bladewatch.com/
Stephen Foskett: http://blog.fosketts.net/
Devang Panchigar: http://www.storagenerve.com

Disclaimer: airfare, accommodations and meals are being provided by HP; however, the content being blogged is solely my opinion and does not in any way express the opinions of HP.

More HP and IBM Blade Rumours

I wanted to post a few more rumours before I head out to HP in Houston for “HP Blades and Infrastructure Software Tech Day 2010” so it doesn’t appear that I got the info from HP. NOTE: this is purely speculation; I have no definitive information from HP, so this may be false info.

First off – the HP Rumour:
I’ve caught wind of a secret that may be truth, may be fiction, but I hope to find out for sure from the HP blade team in Houston.  The rumour is that HP’s development team currently has a Cisco Nexus Blade Switch Module for the HP BladeSystem in their lab, and they are currently testing it out.

Now, this seems far-fetched, especially with the news of Cisco severing partner ties with HP; however, it seems that news tidbit was only about products sold with the HP label but made by Cisco (OEM).   HP will continue to sell Cisco Catalyst switches for the HP BladeSystem and even Cisco-branded Nexus switches with HP part numbers (see this HP site for details).  I have some doubt about this rumour of a Cisco Nexus switch that would go inside the HP BladeSystem, simply because I am 99% sure that HP is announcing a Flex-10 type of BladeSystem switch that will allow converged traffic to be split out, with the Ethernet traffic going to the Ethernet fabric and the Fibre traffic going to the Fibre fabric (check out the rumour blog I posted a few days ago for details).  Guess only time will tell.

The IBM Rumour:
A few days ago I posted a rumour blog about HP’s next generation of blades adding Converged Network Adapters (CNAs) to the motherboard (in lieu of the 1Gb or Flex-10 NICs).  Now I’ve uncovered a rumour that IBM is planning to follow later this year with blades that will also have CNAs on the motherboard.  This is huge!  Let me explain why.

The design of IBM’s BladeCenter E and BladeCenter H has the 1Gb NICs on each blade server hard-wired to I/O Bays 1 and 2 – meaning only Ethernet modules can be used in those bays (see the image to the left for details).  However, I/O Bays 1 and 2 are for “standard form factor I/O modules” while I/O Bays 7-10 are for “high speed form factor I/O modules”.  This means that I/O Bays 1 and 2 cannot handle “high speed” traffic, i.e. converged traffic.

This means that IF IBM comes out with a blade server that has a CNA on the motherboard, either:

a) the blade’s CNA will have to route to I/O Bays 7-10
OR
b) IBM’s going to have to come out with a new BladeCenter chassis that allows the high speed converged traffic from the CNAs to connect to a high speed switch module in Bays 1 and 2.
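To illustrate why the bay wiring forces that choice, here is a tiny, hypothetical model of the BladeCenter H bays as described above. It is my own simplification, not an IBM specification:

```python
# Hypothetical simplification of the BladeCenter H bay wiring described above
# (not an IBM spec): onboard NICs are hard-wired to the standard bays 1-2,
# while converged (high-speed) modules need the high-speed bays 7-10.

STANDARD_BAYS = {1, 2}          # hard-wired to the onboard NICs, Ethernet only
HIGH_SPEED_BAYS = {7, 8, 9, 10}

def converged_capable(bay: int) -> bool:
    """Can a module in this bay carry converged (high-speed) traffic?"""
    return bay in HIGH_SPEED_BAYS

# An onboard CNA wired the way today's onboard NICs are would land on bays 1-2:
print(any(converged_capable(b) for b in STANDARD_BAYS))     # False -> dead end
# ...hence option (a): route the onboard CNA to bays 7-10 instead:
print(all(converged_capable(b) for b in HIGH_SPEED_BAYS))   # True
# ...or option (b): a new chassis that makes bays 1-2 high-speed capable.
```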

So let’s think about this.  If IBM (and HP, for that matter) puts CNAs on the motherboard, is there a need for additional mezzanine/daughter cards?  If not, the blade servers could have more real estate for memory, or more processors.   And with no extra daughter cards, there’s no need for additional I/O module bays, which means the blade chassis could be smaller and use less power – something every customer would like to have.

I can really see the blade market moving toward this type of design (not surprisingly, very similar to Cisco’s UCS design) – one where only a pair of redundant “modules” is needed to split converged traffic out to the respective fabrics.  Maybe it’s all a pipe dream, but when it comes true in 18 months, you can say you heard it here first.

Thanks for reading.  Let me know your thoughts – leave your comments below.

Blade Networks Announces Industry’s First and Only Fully Integrated FCoE Solution Inside Blade Chassis

BLADE Network Technologies, Inc. (BLADE), “officially” announces today the delivery of the industry’s first and only fully integrated Fibre Channel over Ethernet (FCoE) solution inside a blade chassis.   This integration significantly reduces power, cost, space and complexity over external FCoE implementations.

You may recall that I blogged about this the other day (click here to read); however, I left off one bit of information.  The (BLADE Networks) BNT Virtual Fabric 10Gb Switch Module does not require the QLogic Virtual Fabric Extension Module to function.  It will work with an existing Top-of-Rack (TOR) convergence switch from Brocade or Cisco, acting as a 10Gb switch module and feeding the converged 10Gb link up to the TOR switch.  Since it is a switch module, you can connect as few as 1 uplink to your TOR switch, saving connectivity costs compared to a pass-thru option (click here for details on the pass-thru option).

Yes – this is the same architectural design that the Cisco Nexus 4001i provides; however, there are a couple of differences:

BNT Virtual Fabric Switch Module (IBM part #46C7191) – 10 x 10Gb Uplinks, $11,199 list (U.S.)
Cisco Nexus 4001i Switch (IBM part #46M6071) – 6 x 10Gb Uplinks, $12,999 list (U.S.)

While BNT provides 4 extra 10Gb uplinks, I can’t really picture anyone using all 10 ports.  However, it does have a lower list price – though I encourage you to check your actual price with your IBM partner, as actual pricing may differ.  Regardless of whether you choose BNT or Cisco to connect into your TOR switch, don’t forget the transceivers!  They add much more $$ to the overall cost, and without them you are hosed.
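For a quick sanity check on the value comparison, here is the simple list-price-per-uplink arithmetic using the list prices quoted above (uplink ports only; transceivers and your actual reseller pricing are not included):

```python
# Simple arithmetic on the U.S. list prices quoted above. Uplink ports only;
# transceivers and actual reseller pricing are not included.

switches = {
    "BNT Virtual Fabric Switch Module (46C7191)": (10, 11199),
    "Cisco Nexus 4001i Switch (46M6071)": (6, 12999),
}

for name, (uplinks, list_price) in switches.items():
    print(f"{name}: ${list_price:,} list / {uplinks} uplinks "
          f"= ${list_price / uplinks:,.0f} per 10Gb uplink")
```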

About the BNT Virtual Fabric 10Gb Switch Module
The BNT Virtual Fabric 10Gb Switch Module includes the following features and functions:

  • Form-factor
    • Single-wide high-speed switch module (fits in IBM BladeCenter H bays #7 and 9.) 
  • Internal ports
    • 14 internal auto-negotiating ports: 1 Gb or 10 Gb to the server blades
    • Two internal full-duplex 100 Mbps ports connected to the management module
  • External ports
    • Up to ten 10 Gb SFP+ ports (also designed to support 1 Gb SFP if required, flexibility of mixing 1 Gb/10 Gb)
    • One 10/100/1000 Mb copper RJ-45 used for management or data
    • An RS-232 mini-USB connector for serial port that provides an additional means to install software and configure the switch module
  • Scalability and performance
    • Autosensing 1 Gb/10 Gb internal and external Ethernet ports for bandwidth optimization

To read the extensive list of details about this switch, please visit the IBM Redbook located here.

HP Blades and Infrastructure Software Tech Day 2010 (UPDATED)

On Wednesday I will be headed to the 2010 HP Infrastructure Software & Blades Tech Day, an invitation-only blogger event at the HP campus in Houston, TX.  This event is a day-and-a-half deep dive into the blade server market, key data center trends and client virtualization.  We will be with HP technology leaders and business executives who will discuss the company’s business advantages and technical advances.  The event will also include key insights and experiences from customers and from HP, along with product demos and an insider’s tour of HP’s lab facilities.

I’m extremely excited to attend this event and can’t wait to blog about it.  (Hopefully HP will not NDA the entire event.)  I’m also excited to meet some of the world’s top bloggers.  Check out this list of attendees:

Rich Brambley: http://vmetc.com

Greg Knieriemen: http://www.storagemonkeys.com/

Chris Evans: http://thestoragearchitect.com

Simon Seagrave: http://techhead.co.uk

John Obeto: http://absolutelywindows.com

Frank Owen: http://techvirtuoso.com

Martin Macleod: http://www.bladewatch.com/

Plus a couple that I left off originally (sorry guys):

Stephen Foskett: http://blog.fosketts.net/

Devang Panchigar: http://www.storagenerve.com

Be sure to check back with me on Thursday and Friday for updates to the event, and also follow me on Twitter @kevin_houston (twitter hashcode for this event is #hpbladesday.)

Disclaimer: airfare, accommodations and some meals are being provided by HP; however, the content being blogged is solely my opinion and does not in any way express the opinions of HP.

IBM’s New Approach to Ethernet/Fibre Traffic

Okay, I’ll be the first to admit when I’m wrong – or when I provide wrong information.

A few days ago, I commented that no one has yet offered the ability to split out Ethernet and Fibre traffic at the chassis level (as opposed to using a top-of-rack switch).  I quickly found out that I was wrong – IBM now has the ability to separate the Ethernet fabric and the Fibre fabric at the BladeCenter H, so if you are interested, grab a cup of coffee and enjoy this read.

First a bit of background.  The traditional method of providing Ethernet and Fibre I/O in a blade infrastructure was to integrate 6 Ethernet switches and 2 Fibre switches into the blade chassis, which provides 6 NICs and 2 Fibre HBAs per blade server.  This is a costly method and it limits the scalability of a blade server.

A newer method that is becoming more popular is to converge the I/O traffic using a single converged network adapter (CNA) to carry the Ethernet and the Fibre traffic over a single 10Gb connection to a top-of-rack (TOR) switch, which then sends the Ethernet traffic to the Ethernet fabric and the Fibre traffic to the Fibre fabric.  This reduces the number of physical cables coming out of the blade chassis, offers higher bandwidth and reduces the overall switching costs.  Up to now, IBM has offered two different methods to enable converged traffic:

Method 1: connect a pair of 10Gb Ethernet Pass-Thru Modules into the blade chassis, add a CNA to each blade server, then connect the pass-thru modules to a top-of-rack convergence switch from Brocade or Cisco.  This is the least expensive method; however, since pass-thru modules are being used, a connection is required on the TOR convergence switch for every blade server being connected.  That means a 14-blade infrastructure would eat up 14 ports on the convergence switch, potentially leaving the switch with very few available ports.

Method 2: connect a pair of IBM Cisco Nexus 4001i switches, add a CNA to each server, then connect the Nexus 4001i to a Cisco Nexus 5000 top-of-rack switch.  This method enables you to use as few as 1 uplink connection from the blade chassis to the Nexus 5000; however, it is more costly and you have to invest in another Cisco switch.
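A quick way to see the trade-off between the two methods is to count the TOR convergence-switch ports each one consumes. Here is a small illustration of my own, assuming a fully populated 14-blade chassis and a single fabric (double everything for a redundant pair of modules):

```python
# Illustrative comparison of how many convergence-switch (TOR) ports each
# method consumes, assuming a fully populated 14-blade BladeCenter H and a
# single fabric.

BLADES_PER_CHASSIS = 14

def tor_ports_method1_passthru(chassis: int) -> int:
    # Pass-thru: one TOR port per blade server.
    return chassis * BLADES_PER_CHASSIS

def tor_ports_method2_switch(chassis: int, uplinks: int = 1) -> int:
    # In-chassis switch (e.g. Nexus 4001i): as few as 1 uplink per chassis.
    return chassis * uplinks

for n in (1, 2, 4):
    print(f"{n} chassis: pass-thru = {tor_ports_method1_passthru(n)} TOR ports, "
          f"in-chassis switch = {tor_ports_method2_switch(n)} TOR port(s)")
```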

The New Approach
A few weeks ago, IBM announced the “QLogic Virtual Fabric Extension Module” – a device that fits into the IBM BladeCenter H and takes the Fibre traffic from the CNA on a blade server and sends it to the Fibre fabric.  This is HUGE!  While having a top-of-rack convergence switch is helpful, you can now remove the need for one because the I/O traffic is split into its respective fabrics at the BladeCenter H.

What’s Needed
I’ll make it simple – here’s a list of components that are needed to make this method work:

  • 2 x BNT Virtual Fabric 10 Gb Switch Module – part # 46C7191
  • 2 x QLogic Virtual Fabric Extension Module – part # 46M6172
  • a QLogic 2-port 10Gb Converged Network Adapter per blade server – part # 42C1830
  • an IBM 8Gb SFP+ SW Optical Transceiver for each uplink needed to your Fibre fabric – part # 44X1964 (note: the QLogic Virtual Fabric Extension Module doesn’t come with any, so you’ll need the same quantity for each module)

The CNA cards connect to the BNT Virtual Fabric 10Gb Switch Modules in Bays 7 and 9.  These switch modules have an internal connection to the QLogic Virtual Fabric Extension Modules, located in Bays 3 and 5.  The I/O traffic moves from the CNA cards to the BNT switch, which separates the Ethernet traffic and sends it out to the Ethernet fabric, while the Fibre traffic routes internally to the QLogic Virtual Fabric Extension Modules.  From the Extension Modules, the traffic flows into the Fibre fabric.
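To keep the shopping list straight, here is a small, illustrative parts calculator for the redundant design described above. The quantities are my reading of the configuration (two of each module, one CNA per blade, transceivers per Fibre uplink on each module), so verify against the IBM Redbook before ordering:

```python
# Illustrative parts count for the redundant split-fabric design described
# above; quantities are my reading of the configuration, so verify against
# the IBM Redbook before ordering anything.

def parts_list(blade_count: int, fibre_uplinks_per_extension_module: int) -> dict:
    return {
        "BNT Virtual Fabric 10Gb Switch Module (46C7191)": 2,   # I/O bays 7 and 9
        "QLogic Virtual Fabric Extension Module (46M6172)": 2,  # I/O bays 3 and 5
        "QLogic 2-port 10Gb CNA (42C1830)": blade_count,        # one per blade
        # The extension modules ship without transceivers, so buy one per
        # Fibre uplink on each of the two modules:
        "IBM 8Gb SFP+ SW Optical Transceiver (44X1964)":
            2 * fibre_uplinks_per_extension_module,
    }

for part, qty in parts_list(blade_count=14, fibre_uplinks_per_extension_module=2).items():
    print(f"{qty:>2} x {part}")
```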

It’s important to understand the switches, and how they are connected, too, as this is a new approach for IBM.  Previously the Bridge Bays (I/O Bays 5 and 6) haven’t really been used, and IBM has never before allowed a card in the CFF-h slot to connect to the switch bay in I/O Bay 3.

 

There are a few other acceptable designs that will still give you the split fabric out of the chassis; however, they are not fully redundant, so I did not think they were relevant.  If you want to read the full IBM Redbook on this offering, head over to IBM’s site.

A few things to note with the maximum redundancy design I mentioned above:

1) The CIOv slots on the HS22 and HS22v cannot be used.  This is because I/O Bay 3 is being used for the Extension Module, and since the CIOv slot is hard-wired to I/O Bays 3 and 4, that will just cause problems – so don’t do it.

2) The BladeCenter E chassis is not supported for this configuration.  It doesn’t have any “high speed bays” and quite frankly wasn’t designed to handle high I/O throughput like the BladeCenter H.

3) Only the parts listed above are supported.  Don’t try to slip in a Cisco Fibre switch module or use the Emulex Virtual Fabric Adapter on the blade server – it won’t work.  This is a QLogic design, and they don’t want anyone else’s toys in their backyard.

That’s it.  Let me know what you think by leaving a comment below.  Thanks for stopping by!

10 Things That Cisco UCS Policies Can Do (That IBM, Dell or HP Can’t)

ViewYonder.com recently posted a great write-up on some things that Cisco’s UCS can do that IBM, Dell or HP really can’t. You can go to ViewYonder.com to read the full article, but here are 10 things that Cisco’s UCS policies do (see the sketch after the list for the general idea):

  • Chassis Discovery – lets you decide how many links to use from the FEX (2104) to the FI (6100).  This affects the path from blades to FI and the oversubscription rate.  If you’ve cabled 4, you can use just 2, or even 1.
  • MAC Aging – helps you manage your MAC table.  This affects the ability to scale, as bigger MAC tables need more management.
  • Autoconfig – when you insert a blade, depending on its hardware config, a specific template can be applied and the blade placed in an organization automatically.
  • Inheritance – when you insert a blade, allows you to automatically create a logical version (Service Profile) by copying the UUID, MAC, WWNs, etc.
  • vHBA Templates – help you determine how you want _every_ vmhba2 to look (i.e. fabric, VSAN, QoS, pinning to a border port)
  • Dynamic vNICs – help you determine how to distribute the VIFs on a VIC
  • Host Firmware – enables you to determine what firmware to apply to the CNA, the HBA, HBA ROM, BIOS, LSI
  • Scrub – provides the ability to wipe the local disks on association
  • Server Pool Qualification – enables you to determine which hardware configurations live in which pool
  • vNIC/vHBA Placement – helps you determine how to distribute VIFs over one or two CNAs
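To give a feel for the common thread in that list, here is a purely hypothetical sketch. It is not UCS Manager syntax or its API – just an illustration of the idea that configuration lives in reusable, named policy objects that get stamped onto blades automatically:

```python
# Purely hypothetical sketch (NOT UCS Manager syntax or its API) illustrating
# the idea behind the list above: configuration is expressed as reusable,
# named policy objects rather than per-server settings.

from dataclasses import dataclass, field

@dataclass
class ChassisDiscoveryPolicy:
    fex_links_to_use: int = 2            # how many cabled FEX-to-FI links to activate

@dataclass
class VhbaTemplate:
    fabric: str = "A"                    # fabric, VSAN and QoS for every vHBA built from it
    vsan: int = 100
    qos_policy: str = "fc-default"

@dataclass
class HostFirmwarePackage:
    cna_firmware: str = "example-1.2"    # made-up version strings
    bios: str = "example-bios-1.0"

@dataclass
class ServerPoolQualification:
    min_memory_gb: int = 48              # which hardware configs land in which pool
    cpu_model_contains: str = "5500"

@dataclass
class ServiceProfileTemplate:
    vhba: VhbaTemplate = field(default_factory=VhbaTemplate)
    firmware: HostFirmwarePackage = field(default_factory=HostFirmwarePackage)
    scrub_local_disks: bool = True       # wipe local disks on association

# "Autoconfig"/"inheritance": a new blade that meets the qualification gets a
# profile created from the template, inheriting identity (UUID/MAC/WWN) pools.
print(ServiceProfileTemplate())
```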

For more on this topic, visit Steve’s blog at ViewYonder.com.  Nice job, Steve!