IBM’s Enterprise X-Architecture has been around for quite a while, providing unique scalability, reliability and flexibility in x86 4-socket platforms. You can check out the details of the eX4 technology here.
Today’s announcement offered up a few facts:
a) the existing x3850 and x3950 M2 will be renamed the x3850 and x3950 X5, signifying a trend for IBM to move toward product naming designations that reflect the purpose of the server.
b) the x3850 and x3950 X5s will use the Intel Nehalem EX processor, to be officially announced/released on March 30. At that time we can expect full details, including part numbers, pricing and technical specifications.
c) a new 2U high, 2-socket server, the x3690 X5, was also announced. This is probably the most exciting of the product announcements: it is based on the Intel Nehalem EX processor, but IBM’s innovation is going to enable the x3690 X5 to scale from 2 sockets to 4 sockets – but wait, there’s more. There will also be the ability, called MAX5, to add a memory expansion unit to the x3690 X5 systems, enabling their system memory to be DOUBLED.
d) in addition to the memory drawer, IBM will be shipping packs of solid state disks, called eXFlash, that will deliver high performance to replace the limited IOPS of traditional spinning disks. IBM is touting “significant” increases in performance for local databases with this new bundle of solid state disks. In fact, according to IBM’s press release, eXFlash technology would eliminate the need for a client to purchase two entry-level servers and 80 JBODs to support a 240,000 IOPS database environment, saving $670,000 in server and storage acquisition costs (a rough sketch of that math follows this list). The cool part is, these packs of disks will pop into the hot-swap drive bays of the x3690, x3850 and x3950 X5 servers.
e) IBM also announced a new technology, known as “FlexNode,” that offers physical partitioning, allowing a server to move from being a single system to two independent systems and back again.
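To put that 240,000 IOPS claim in perspective, here’s a rough back-of-the-envelope sketch in Python. The per-drive IOPS, SSD performance and drives-per-JBOD figures are my own ballpark assumptions, not IBM’s numbers, so treat the output as illustrative only:

```python
# Rough back-of-the-envelope sketch (my own illustrative numbers, not IBM's):
# how many spinning disks vs. SSDs it might take to reach a 240,000 IOPS target.

TARGET_IOPS = 240_000

# Assumed figures -- typical ballpark values, not from the press release.
IOPS_PER_15K_DISK = 180        # a 15K RPM SAS drive does roughly 150-200 random IOPS
IOPS_PER_SSD = 20_000          # a conservative estimate for an enterprise SSD of that era
DRIVES_PER_JBOD = 16           # drives per external JBOD enclosure (assumption)

disks_needed = -(-TARGET_IOPS // IOPS_PER_15K_DISK)   # ceiling division
jbods_needed = -(-disks_needed // DRIVES_PER_JBOD)
ssds_needed = -(-TARGET_IOPS // IOPS_PER_SSD)

print(f"Spinning disks needed: {disks_needed} (~{jbods_needed} JBOD enclosures)")
print(f"SSDs needed:           {ssds_needed} (a handful of hot-swap bays)")
```

With those assumptions you land in the same ballpark as IBM’s “80 JBODs” figure, which is why swapping spindles for a few packs of SSDs in the server’s own drive bays is such a big deal.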
Blade Specific News
1) IBM will be releasing a new blade server, the BladeCenter HX5, next quarter that will also use the Intel Xeon 7500. This blade server will scale, like all of the eX5 products, from 2 processors to 4 processors (and theoretically more) and will be ideal for database workloads. Again, pricing and specs for this product will be released on the official Intel Nehalem EX launch date.
IBM BladeCenter HX5 Blade Server
An observation from the pictures of the HX5 is that it will not have hot-swap drives like the HS22 does. This means there will be internal drives – most likely solid state drives (SSDs). You may recall from my previous rumour post that the lack of hot-swap drives is pretty evident – IBM needed the real estate for the memory. Unfortunately, until memristors become available, blade vendors will need to sacrifice drive real estate for memory.
2) As part of the MAX5 technology, IBM will also be launching a memory blade to increase the overall memory on the HX5 blade server. Expect more details on this in the near future.
Visit IBM’s website for their Live eX5 Event at 2 p.m. Eastern time at this site:
As more information comes out on the new IBM eX5 portfolio, check back here and I’ll keep you posted. I’d love to hear your thoughts in the comments below.
MAX5 Memory Drawer (1U)
I find the x3690 X5 to be so interesting and exciting because it could quickly take over the server space currently occupied by the HP DL380 and the IBM x3650 when it comes to virtualization. We all know that VMware and other hypervisors thrive on memory; however, the typical 2-socket server design is limited to 12 to 16 memory sockets. With the IBM System x3690 X5, this limitation can be overcome: you can simply add on a memory drawer to gain more memory capacity (a quick sketch of what that means for VM density follows below).
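To illustrate why the extra memory sockets matter so much for virtualization, here’s a quick sketch. The DIMM size, slot counts and per-VM memory are my own assumptions for illustration, not official IBM specs:

```python
# A quick sketch of why DIMM slot count drives virtualization density.
# DIMM sizes, slot counts and per-VM memory are assumptions for illustration only.

DIMM_SIZE_GB = 8           # common DIMM size at the time (assumption)
VM_MEMORY_GB = 4           # average memory per VM (assumption)

def vm_capacity(dimm_slots, reserve_gb=8):
    """Return total memory and the number of VMs that fit,
    leaving reserve_gb for the hypervisor itself."""
    total_gb = dimm_slots * DIMM_SIZE_GB
    return total_gb, (total_gb - reserve_gb) // VM_MEMORY_GB

for label, slots in [("Typical 2-socket server (16 slots)", 16),
                     ("x3690 X5 (32 slots, assumed)", 32),
                     ("x3690 X5 + MAX5 drawer (64 slots, assumed)", 64)]:
    total, vms = vm_capacity(slots)
    print(f"{label}: {total} GB -> ~{vms} VMs")
```

Even with these rough numbers, doubling the memory sockets roughly doubles the VMs you can pack onto a 2-socket box before you ever touch a bigger server.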
Industry Opinions
Check out this analyst’s view of the IBM eX5 announcement here (pdf).
Here’s what VMware’s CTO, Stephen Herrod, has to say about IBM eX5:
Tomorrow, March 2nd, IBM kicks off a new portfolio of products in their System x line of product offerings. One of the products will be a refresh and two of the products will be new. However – don’t get your hopes up on seeing details on these new offerings, because tomorrow’s live event at 2 p.m. Eastern will be focused on the portfolio and the technology behind the portfolio. IBM will not be disclosing any pricing, performance, model or Intel specifics until Intel’s launch dates on March 16 and March 30.
“What Does 5 Mean to You” Campaign
5 business days ago, IBM kicked off a video campaign, “What Does 5 Mean to You.” While a clever idea, I thought it missed on the messaging. It played too much on “5” – which will become clearer tomorrow when the announcement is made. Here’s a look at all the videos:
What Are Your Top 5 IT Challenges (from the “What is 5” videos)?
The key point of these videos was not to tease us, but to highlight the top 5 IT challenges that the new IBM portfolio will help solve. Take a look at the top 5 IT challenges:
Challenge #5: “My servers need Fibre Channel, Ethernet and iSCSI all operating at different speeds. How do I simplify my networks right now?” Message: Converged Infrastructure
Challenge #4: “Why do I have to buy different types of servers whenever my needs change? Can’t technology adapt to me?” Message: Flexible Infrastructure
Challenge #3: “My data costs keep growing. How can I control the sprawl of my storage?” Message: ?? Not sure
Challenge #2: “I don’t need a lot of complicated choices. Why can’t I get a system that is set up for my workloads…right out of the box?” Message: Like the IBM HS22v is “designed” for virtualization, we can expect this trend to continue with future IBM product releases…
Challenge #1: “Technology competitors can all seem the same. Doesn’t anyone have a game changing technology that will blow me away?” Message: IBM expects the March 2nd announcement to be a game changer – and so do I.
Check back with me tomorrow when IBM unveils What 5 Really Means!
First, I’d like to thank HP for inviting me to HP Tech Day in Houston. I’m honored that I was chosen and hope that I’m invited back – even after my challenging questions about the Tolly Report. It was a fun-packed day and a half, and while it was a great event, I won’t miss having to hashtag (#hpbladesday) all my tweets. I figured I’d use this last day to offer up my final thoughts – for what they are worth.
Blogger Attendees
As some of you may know, I’m still the rookie of this blogging community – especially in this group of invitees – so I didn’t have a history with anyone in the group, except Rich Brambley of http://vmetc.com. However, this did not matter, as they all welcomed me as if I were one of their own. In fact, they even treated me to a practical joke, letting me walk around HP’s Factory Express tour for half an hour with a ProLiant DL180 G6 sticker on my back (thanks to Stephen and Greg for that one). Yes, that’s me in the picture.
All jokes aside, these bloggers were top class, and they offer up some great blogs, so if you don’t check them out daily, please make sure to visit them. Here’s the list of attendees and their sites:
A special thanks to the extensive HP team who participated in the blogging efforts as well.
HP Demos and Factory Express Tour
I think I got the most out of this event from the live demos and the Factory Express tour. These are things that you can read about, but until you see them in person, you can’t appreciate the value that HP brings to the table through their product design and through their services.
The image on the left shows the MDS600 storage shelf – something that I’ve read about many times, but until I saw it, I didn’t realize how cool, and useful, it was. 70 drives in a 5U space. That’s huge. Seeing things like this, live and in person, is what these HP Tech Days need to be about: hands-on, live demos and tours of what makes HP tick.
The Factory Express tour was really cool. I think we should have been allowed to work the line for an hour alongside the HP employees. On this tour we saw how customized HP server builds go from being an order to being a solution. Workers like the one in the picture on the right typically do 30 servers a day, depending on the type of server. The entire process involves testing and 100% audits to ensure accuracy.
My words won’t do HP Factory Express justice, so check out this video from YouTube:
Feedback to the HP team for future events:
1) Keep the blogger group small
2) Keep it to HP demos and presentations (no partners, please)
3) More time on hands-on, live demos and tours. This is where the magic is.
4) Try and do this at least once a quarter. HP’s doing a great job building their social media teams, and this event goes a long way in creating that buzz.
Thanks again, HP, and to Ivy Worldwide (http://www.ivyworldwide.com) for doing a great job. I hope to attend again!
Disclaimer: airfare, accommodations and meals are being provided by HP; however, the content being blogged is solely my opinion and does not in any way express the opinions of HP.
Tolly.com announced on 2/25/2010 a new test report that compares network bandwidth scalability between the HP BladeSystem c7000 with BL460 G6 servers and the Cisco UCS 5100 with B200 servers, and the results were interesting. The report simply tested 6 HP blades with a single Flex-10 module vs. 6 Cisco blades using their Fabric Extender plus a single Fabric Interconnect. I’m not going to try to restate what the report says (for that you can download it directly); instead, I’m going to highlight the results. It is important to note that the report was “commissioned by Hewlett-Packard Dev. Co, L.P.”
Result #1: HP BladeSystem C7000 with a Flex-10 Module Tested to have More Aggregate Server Throughput (Gbps) over the Cisco UCS with a Fabric Extender connected to a Fabric Interconnect in a Physical-to-Physical Comparison
>The test shows that when 4 physical servers were tested, Cisco achieved an aggregate throughput of 36.59 Gbps vs. HP achieving 35.83 Gbps (WINNER: Cisco)
>When 6 physical servers were tested, Cisco achieved an aggregate throughput of 27.37 Gbps vs. HP achieving 53.65 Gbps – a difference of 26.28 Gbps (WINNER: HP)
Result #2: HP BladeSystem C7000 with a Flex-10 Module Tested to have More Aggregate Server Throughput (Gbps) over the Cisco UCS with a Fabric Extender connected to a Fabric Interconnect in a Virtual-to-Virtual Comparison
>Testing 2 servers, each running 8 VMware Red Hat Linux hosts, showed that HP achieved an aggregate throughput of 16.42 Gbps vs. Cisco UCS achieving 16.70 Gbps (WINNER: Cisco).
The results above were achieved with the 2 Cisco B200 blade servers each mapped to a dedicated 10Gb uplink port on the Fabric Extender (FEX). When the 2 Cisco B200 blade servers were configured to share the same 10Gb uplink port on the FEX, the achieved aggregate throughput on the Cisco UCS decreased to 9.10 Gbps.
A few points to note about these findings:
a) the HP Flex-10 Module has 8 x 10Gb uplinks whereas the Cisco Fabric Extender (FEX) has 4 x 10Gb uplinks
b) Cisco’s FEX design allows the 8 blade servers to extend out of the 4 external ports on the FEX at a 2:1 ratio (2 blades per external FEX port). The current Cisco UCS design requires the servers to be “pinned,” or permanently assigned, to a respective FEX uplink. This works well when there are 4 or fewer blade servers, but once you have more than 4 blade servers, an uplink is shared between two servers, which could cause bandwidth contention.
Furthermore, it’s important to understand that the design of the UCS blade infrastructure does not allow communication to go from Server 1 to Server 2 without leaving the FEX, travelling up to the Fabric Interconnect (top of the picture), then returning to the FEX and connecting to the other server. This design is the potential cause of the decrease in aggregate throughput from 16.70 Gbps to 9.10 Gbps shown above.
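To see why the pinning matters, here’s a simple sketch of the worst-case per-blade bandwidth as the blade count grows. The 4 external uplink ports match the report; the traffic model itself is my own simplification, not Tolly’s methodology:

```python
# Simple sketch of the pinning / oversubscription math described above.
# Port counts match the article; the traffic model is a deliberate simplification.

FEX_UPLINKS = 4            # external 10Gb ports on the Cisco FEX
UPLINK_GBPS = 10.0

def per_server_bandwidth(servers):
    """Worst-case bandwidth per blade when blades are statically pinned
    to FEX uplinks (no dynamic rebalancing across uplinks)."""
    servers_per_uplink = -(-servers // FEX_UPLINKS)   # ceiling division
    return UPLINK_GBPS / servers_per_uplink

for n in (4, 6, 8):
    print(f"{n} blades: ~{per_server_bandwidth(n):.1f} Gbps per blade (pinned, worst case)")
```

With 4 blades every server gets its own 10Gb uplink; at 5 or more, at least two blades share one, and that is exactly the contention scenario the report describes.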
One of the “Bottom Line” conclusions from this report states, “throughput degradation on the Cisco UCS caused by bandwidth contention is a cause of concern for customers considering the use of UCS in a virtual server environment”; however, I encourage you to take a few minutes, download the full report from the Tolly.com website and draw your own conclusions.
Let me know your thoughts about this report – leave a comment below.
Disclaimer: This report was brought to my attention while attending the HP Tech Day event, where airfare, accommodations and meals are being provided by HP; however, the content being blogged is solely my opinion and does not in any way express the opinions of HP.
Wow – the first day of HP Tech Day 2010 was jam-packed with meetings, presentations and good information. Unfortunately, it appears there won’t be any confidential, earth-shattering news to report on, but it has still been a great event to attend.
My favorite part of the day was going to the HP BladeSystem demo, where we not only got to get our hands on the blade servers, but also got to see what the mid-plane and power bus look like outside the chassis.
Kudos to James Singer, HP blade engineer, who did a great job talking about the HP BladeSystem and all it offers. My only advice to the HP events team is to double the time we get with the blades next time. (Isn’t that why we were here?)
Since I spent most of the day Tweeting what was going on, I figured it would be easiest to just list my tweets throughout the day. If you have any questions about any of this, let me know.
My tweets from 2/25/2010 (latest to earliest):
Q&A from HP StorageWorks CTO, Paul Perez
“the era of spindles for IOPS will be over soon.” Paul Perez, CTO HP StorageWorks
CTO Perez said memristors (http://tinyurl.com/39f6br) are the next major evolution in storage – in the next 2 or 3 years
CTO Perez views Solid State (Drives) as an extension of main memory.
HP StorageWorks CTO, Paul Perez, now discussing HP StorageWorks X9000 Network Storage System (formerly known as IBRIX)
Today kicks off the HP Blades and Infrastructure Software Tech Day 2010 (aka HP Blades Day). I’ll be updating this site frequently throughout the day, so be sure to check back. You can quickly view all of the HP Tech Day info by clicking on the “Category” tab on the left and choose “HPTechDay2010.” For live updates, follow me on Twitter @Kevin_Houston.
Here’s our agenda for today (Day 1):
9:10 – 10:00 ISB Overview and Key Data Center Trends 2010
10:00 – 10:30 Nth Generation Computing Presentation
10:45 – 11:45 Virtual Connect
1:00 – 3:00 BladeSystem in the Lab (Overview and Demo) and Insight Software (Overview and Demo)
3:15 – 4:15 Matrix
4:15 – 4:45 Competitive Discussion
5:00 – 5:45 Podcast roundtable with Storage Monkeys
Note: gaps in the times above indicate a break or lunch.
For extensive coverage, make sure you check in on the rest of the attendees’ blogs:
Disclaimer: airfare, accommodations and meals are being provided by HP; however, the content being blogged is solely my opinion and does not in any way express the opinions of HP.
I wanted to post a few more rumours before I head out to HP in Houston for “HP Blades and Infrastructure Software Tech Day 2010,” so it doesn’t appear that I got the info from HP. NOTE: this is purely speculation; I have no definitive information from HP, so this may be false info.
First off – the HP Rumour:
I’ve caught wind of a secret that may be truth, may be fiction, but I hope to find out for sure from the HP blade team in Houston. The rumour is that HP’s development team currently has a Cisco Nexus Blade Switch Module for the HP BladeSystem in their lab, and they are currently testing it out.
Now, this seems far-fetched, especially with the news of Cisco severing partner ties with HP; however, it seems that news tidbit was only talking about products sold with the HP label but made by Cisco (OEM). HP will continue to sell Cisco Catalyst switches for the HP BladeSystem and even Cisco-branded Nexus switches with HP part numbers (see this HP site for details). I have some doubt about this rumour of a Cisco Nexus switch that would go inside the HP BladeSystem, simply because I am 99% sure that HP is announcing a Flex-10 type of BladeSystem switch that will allow converged traffic to be split out, with the Ethernet traffic going to the Ethernet fabric and the Fibre Channel traffic going to the Fibre Channel fabric (check out this rumour blog I posted a few days ago for details). Guess only time will tell.
The IBM Rumour:
A few days ago I posted a rumour blog discussing the rumour that HP’s next generation will add Converged Network Adapters (CNAs) to the motherboard on the blades (in lieu of the 1Gb or Flex-10 NICs). Well, now I’ve uncovered a rumour that IBM is planning to follow later this year with blades that will also have CNAs on the motherboard. This is huge! Let me explain why.
The design of IBM’s BladeCenter E and BladeCenter H has the 1Gb NICs onboard each blade server hard-wired to I/O Bays 1 and 2 – meaning only Ethernet modules can be used in these bays (see the image to the left for details). However, I/O Bays 1 and 2 are for “standard form factor I/O modules,” while I/O Bays 7-10 are for “high speed form factor I/O modules.” This means that I/O Bays 1 and 2 can not handle “high speed” traffic, i.e. converged traffic.
This means that IF IBM comes out with a blade server that has a CNA on the motherboard, either:
a) the blade’s CNA will have to route to I/O Bays 7-10
OR
b) IBM’s going to have to come out with a new BladeCenter chassis that allows the high speed converged traffic from the CNAs to connect to a high speed switch module in Bays 1 and 2.
So let’s think about this. If IBM (and HP, for that matter) does put CNAs on the motherboard, is there a need for additional mezzanine/daughter cards? Without them, the blade servers could have more real estate for memory, or more processors. And if there are no extra daughter cards, then there’s no need for additional I/O module bays. This means the blade chassis could be smaller and use less power – something every customer would like to have.
I can really see the blade market moving toward this type of design (not surprisingly, very similar to Cisco’s UCS design) – one where only a pair of redundant “modules” is needed to split converged traffic out to its respective fabrics. Maybe it’s all a pipe dream, but when it comes true in 18 months, you can say you heard it here first.
Thanks for reading. Let me know your thoughts – leave your comments below.
BLADE Network Technologies, Inc. (BLADE), “officially” announces today the delivery of the industry’s first and only fully integrated Fibre Channel over Ethernet (FCoE) solution inside a blade chassis. This integration significantly reduces power, cost, space and complexity over external FCoE implementations.
You may recall that I blogged about this the other day (click here to read); however, I left off one bit of information. The (BLADE Network Technologies) BNT Virtual Fabric 10Gb Switch Module does not require the QLogic Virtual Fabric Extension Module to function. It will work with an existing top-of-rack (TOR) convergence switch from Brocade or Cisco, acting as a 10Gb switch module and feeding the converged 10Gb link up to the TOR switch. Since it is a switch module, you can connect as few as 1 uplink to your TOR switch, therefore saving connectivity costs as opposed to a pass-thru option (click here for details on the pass-thru option).
Yes – this is the same architectural design that the Cisco Nexus 4001i provides as well; however, there are a couple of differences:
BNT Virtual Fabric Switch Module (IBM part #46C7191) – 10 x 10Gb uplinks, $11,199 list (U.S.)
Cisco Nexus 4001i Switch (IBM part #46M6071) – 6 x 10Gb uplinks, $12,999 list (U.S.)
While BNT provides 4 extra 10Gb uplinks, I can’t really picture anyone using all 10 ports. It does, however, have a lower list price, though I encourage you to check your actual price with your IBM partner, as the actual pricing may be different. Regardless of whether you choose BNT or Cisco to connect into your TOR switch, don’t forget the transceivers! They add much more $$ to the overall cost, and without them you are hosed.
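If you want to compare the two modules on a cost-per-uplink basis, here’s a quick sketch using the list prices quoted above. The transceiver price is a placeholder assumption of mine, purely for illustration – check real pricing with your IBM partner:

```python
# Rough list-price comparison per 10Gb uplink, using the list prices quoted above.
# The SFP+ transceiver price is an assumed placeholder, not a real quote.

SFP_PLUS_TRANSCEIVER_COST = 700   # assumed list price per SFP+ transceiver (illustrative)

switches = {
    "BNT Virtual Fabric (46C7191)": {"list": 11_199, "uplinks": 10},
    "Cisco Nexus 4001i (46M6071)":  {"list": 12_999, "uplinks": 6},
}

for name, s in switches.items():
    loaded = s["list"] + s["uplinks"] * SFP_PLUS_TRANSCEIVER_COST
    print(f"{name}: ${s['list']:,} list, "
          f"${s['list'] / s['uplinks']:,.0f} per uplink bare, "
          f"~${loaded:,} fully populated with transceivers (assumed pricing)")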
About the BNT Virtual Fabric 10Gb Switch Module
The BNT Virtual Fabric 10Gb Switch Module includes the following features and functions:
Form-factor
Single-wide high-speed switch module (fits in IBM BladeCenter H bays #7 and 9.)
Internal ports
14 internal auto-negotiating ports: 1 Gb or 10 Gb to the server blades
Two internal full-duplex 100 Mbps ports connected to the management module
External ports
Up to ten 10 Gb SFP+ ports (also designed to support 1 Gb SFP if required, with the flexibility of mixing 1 Gb/10 Gb)
One 10/100/1000 Mb copper RJ-45 used for management or data
An RS-232 serial port (mini-USB connector) that provides an additional means to install software and configure the switch module
Scalability and performance
Autosensing 1 Gb/10 Gb internal and external Ethernet ports for bandwidth optimization
To read the extensive list of details about this switch, please visit the IBM Redbook located here.
On Wednesday I will be headed to the 2010 HP Infrastructure Software & Blades Tech Day, an invitation-only blogger event at the HP campus in Houston, TX. This event is a day-and-a-half deep dive into the blade server market, key data center trends and client virtualization. We will be with HP technology leaders and business executives who will discuss the company’s business advantages and technical advances. The event will also include key insights and experiences from customers, and will provide product demos, including an insider’s tour of HP’s lab facilities.
I’m extremely excited to attend this event and can’t wait to blog about it. (Hopefully HP will not NDA the entire event.) I’m also excited to meet some of the world’s top bloggers. Check out this list of attendees:
Be sure to check back with me on Thursday and Friday for updates on the event, and also follow me on Twitter @kevin_houston (the Twitter hashtag for this event is #hpbladesday).
Disclaimer: airfare, accommodations and some meals are being provided by HP; however, the content being blogged is solely my opinion and does not in any way express the opinions of HP.