Monthly Archives: February 2010

HP BladeSystem Rumours

I’ve recently posted some rumours about IBM’s upcoming announcements in their blade server line; now it is time to let you know some rumours I’m hearing about HP.  NOTE: this is purely speculation; I have no definitive information from HP, so this may turn out to be false.  That being said, here we go:

Rumour #1:  Integration of “CNA” like devices on the motherboard. 
As you may be aware, with the introduction of the “G6”, or Generation 6, of HP’s blade servers, HP added “FlexNICs” onto the servers’ motherboards instead of the 2 x 1Gb NICs that are standard on most of the competition’s blades.  FlexNICs allow the user to carve up a 10Gb NIC into 4 virtual NICs when using the Flex-10 modules inside the chassis.  (For a detailed description of Flex-10 technology, check out this HP video.)  The idea behind Flex-10 is that you have 10Gb connectivity that allows you to do more with fewer NICs.
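To make the carving idea concrete, here’s a minimal Python sketch of how a 10Gb Flex-10 port divides into FlexNICs.  This is purely illustrative: the function name, the validation rules, and the way the 4-NIC limit is modelled are my assumptions based on the description above, not HP’s actual management interface.

```python
# Illustrative model of Flex-10 "carving": one 10Gb port is split into
# up to four FlexNICs whose bandwidth allocations cannot exceed 10Gb.
# Hypothetical helper -- not HP Virtual Connect's real API.

PORT_CAPACITY_GB = 10.0
MAX_FLEXNICS = 4

def carve_flexnics(allocations_gb):
    """Return a name -> bandwidth (Gb) map for the requested FlexNICs."""
    if len(allocations_gb) > MAX_FLEXNICS:
        raise ValueError("a Flex-10 port exposes at most 4 FlexNICs")
    if sum(allocations_gb) > PORT_CAPACITY_GB:
        raise ValueError("allocations exceed the 10Gb physical port")
    return {f"FlexNIC-{i + 1}": gb for i, gb in enumerate(allocations_gb)}

# Example: management, vMotion, VM traffic and iSCSI sharing one port
nics = carve_flexnics([0.5, 2.0, 4.0, 3.5])
```

The appeal falls straight out of the arithmetic: four logical NICs per port without burning four physical mezzanine slots.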

SO – what’s next?  Rumour has it that the “G7” servers, expected to be announced on March 16, will have an integrated CNA, or Converged Network Adapter.  With a CNA on the motherboard, both the Ethernet and the Fibre traffic will travel over a single integrated device.  This is a VERY cool idea because it could lead to a blade server that eliminates the additional daughter card or mezzanine expansion slots, freeing up valuable real estate for the newer Intel CPU architecture.

Rumour #2: Next generation Flex-10 Modules will separate Fibre and Network traffic.

Today, HP’s Flex-10 ONLY handles Ethernet traffic.  There is no support for FCoE (Fibre Channel over Ethernet), so if you have a Fibre network you’ll also have to add a Fibre switch to your BladeSystem chassis design.  If HP does put a CNA onto their next generation blade servers to carry both Fibre and Ethernet traffic, wouldn’t it make sense for there to be a module in the BladeSystem chassis that allows the storage and Ethernet traffic to exit?

I’m hearing that a new version of the Flex-10 module is coming, very soon, that will allow both the Ethernet AND the Fibre traffic to exit out of the switch.  (The image to the right shows what it could look like.)  The switch would allow 4 of the uplink ports to go to the Ethernet fabric, while the other 4 ports of the 8-port next generation Flex-10 switch could either be dedicated to a Fibre fabric OR used as 4 additional ports to the Ethernet fabric.

If this rumour is accurate, it could shake things up in the blade server world.  Cisco UCS uses 10Gb Data Center Ethernet (Ethernet plus FCoE); IBM BladeCenter can use a 10Gb plus Fibre switch fabric (like HP), or a 10Gb Enhanced Ethernet plus FCoE fabric (like Cisco); however, no one currently has a device that splits the Ethernet and Fibre traffic at the blade chassis.  If this rumour is true, we should see it announced around the same time as the G7 blade servers (March 16).

That’s all for now.  As I come across more rumours, or information about new announcements, I’ll let you know.

Introducing the IBM HS22v Blade Server

IBM officially announced today a new addition to their blade server line – the HS22v.  Modeled after the HS22 blade server, the HS22v is touted by IBM as a “high density, high performance blade optimized for virtualization.”  So what makes it so great for virtualization?  Let’s take a look.

Memory
One of the big differences between the HS22v and the HS22 is more memory slots.  The HS22v comes with 18 very low profile (VLP) DDR3 memory DIMM slots for a maximum of 144GB of RAM.  This is a key attribute for a virtualization host, since everyone knows that VMs love memory.  It is important to note, though, that the memory will only run at 800MHz when all 18 slots are used.  In comparison, with only 6 DIMMs installed (3 per processor) the memory runs at 1333MHz, and with 12 DIMMs installed (6 per processor) it runs at 1066MHz.  As a final note on the memory, this server will be able to use both 1.5v and 1.35v DIMMs.  The 1.35v parts are newer memory being introduced as the Intel Westmere EP processor becomes available.  The big deal here is that lower voltage memory = lower overall power requirements.
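The DIMM-count/speed trade-off above is easy to misread, so here’s a small Python sketch that encodes it.  The speed table comes straight from the figures in this post; the helper itself (its name and the 8GB default DIMM size) is hypothetical.

```python
# Memory speed vs. DIMM population on the HS22v, per the figures above:
# 3 DIMMs per CPU -> 1333MHz, 6 -> 1066MHz, 9 (all 18 slots) -> 800MHz.
SPEED_BY_DIMMS_PER_CPU = {3: 1333, 6: 1066, 9: 800}

def memory_config(total_dimms, dimm_size_gb=8, cpus=2):
    """Return capacity and effective speed for a balanced DIMM population."""
    speed = SPEED_BY_DIMMS_PER_CPU.get(total_dimms // cpus)
    if speed is None:
        raise ValueError("unsupported DIMM population")
    return {"capacity_gb": total_dimms * dimm_size_gb, "speed_mhz": speed}

# Fully populated: 18 x 8GB = 144GB, but the memory drops to 800MHz
full = memory_config(18)
```

In other words, maxing out capacity for VM density costs you memory clock speed, which is the trade-off the paragraph above describes.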

Drives
The second big difference is that the HS22v does not use hot-swap drives like the HS22 does.  Instead, it uses 2 x solid state drives (SSDs) for local storage, with hardware RAID 0/1 capability standard.  Although the picture to the right shows a 64GB SSD, my understanding is that only 50GB drives will be available when the server starts shipping on March 19, with larger sizes (64GB and 128GB) becoming readily available in the near future.  Another thing to note is that the image shows a single SSD; the 2nd drive is located directly beneath it.  As mentioned above, these drives can be set up in RAID 0 or RAID 1 as needed.

So – why did IBM go back to using internal drives?  For a few reasons:

Reason #1: in order to get the space to add the extra memory slots, a change had to be made in the design.  IBM decided that solid state drives were the best fit.

Reason #2: the SSD design allows the server to run with lower power.  It’s well known that SSD drives run at a much lower power draw than physical spinning disks, so using SSD’s will help the HS22v be a more power efficient blade server than the HS22.

Reason #3: a common trend for virtualization hosts, especially VMware ESXi, is to run on integrated USB devices.  By using an integrated USB key for your virtualization software, you can eliminate the need for spinning disks, or even SSDs, thereby reducing the overall cost of the server.

Processors
So here’s the sticky area.  IBM will be releasing the HS22v with the Intel Xeon 5500 processor first.  Later in March, as the Intel Westmere EP (Intel Xeon 5600) is announced, IBM will add models with that processor, so both Xeon 5500 and Xeon 5600 offerings will be available.  Why is this?  I think for a couple of reasons:

a) the Xeon 5500 and the Xeon 5600 use the same chipset (motherboard), so it will be easy for IBM to make one server board and plop in either the Nehalem EP or the Westmere EP

b) simple – IBM wants to get this product into the marketplace sooner rather than later.

Questions

1) Will it fit into the BladeCenter E?
YES – however there may be certain limitations, so I’d recommend you reference the IBM BladeCenter Interoperability Guide for details.

2) Is it certified to run VMware ESX 4?
YES

3) Why didn’t IBM call it HS22XM?
According to IBM, the “XM” name is feature-focused while “V” is workload-focused – a marketing strategy we’ll probably see more of from IBM in the future.

That’s it for now.  If there are any questions you have about the HS22v, let me know in the comments and I’ll try to get some answers.

For more on the IBM HS22v, check out IBM’s web site here.

Check back with me in a few weeks when I’m able to give some more info on what’s coming from IBM!

Cisco Takes Top 8 Core VMware VMmark Server Position

Cisco is getting some (more) recognition for their UCS blade server product, as they recently achieved the top “8 Core Server” position on VMware’s VMmark benchmark.  VMmark is the industry’s first (and only credible) virtualization benchmark for x86-based computers.  According to the VMmark website, the Cisco UCS B200 blade server reached a score of 25.06 @ 17 tiles.  A “tile” is simply a collection of virtual machines (VMs) executing a set of diverse workloads designed to represent a natural work environment.  The total number of tiles that a server can handle provides a detailed measurement of that server’s consolidation capacity.
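For a rough sense of what the numbers mean, you can divide a score by its tile count.  Note this per-tile figure is my own back-of-the-envelope calculation, not an official VMmark metric; the scores themselves are the ones quoted in this post.

```python
# Back-of-the-envelope comparison of the VMmark results quoted here.
# A score of "25.06 @ 17 tiles" is aggregate throughput across 17 tiles;
# dividing by the tile count gives a rough per-tile figure.
results = {
    "Cisco UCS B200": (25.06, 17),
    "HP BL490": (24.54, 17),
}

def per_tile(score, tiles):
    return round(score / tiles, 3)

cisco = per_tile(*results["Cisco UCS B200"])  # Cisco's per-tile throughput
hp = per_tile(*results["HP BL490"])           # HP's, at the same tile count
```

Since both servers sustained the same 17 tiles, the ranking comes down to which one pushed more aggregate throughput through them.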

Cisco’s Winning Configuration
So – how did Cisco reach the top server spot?  Here’s the configuration:

server config:

  • 2 x Intel Xeon X5570 Processors
  • 96GB of RAM (16 x 8GB)
  • 1 x Converged Network Adapter (Cisco UCS M71KR-Q)

storage config:

  • EMC CX4-240
  • Cisco MDS 9130
  • 1154.27GB Used Disk Space
  • 1024MB Array Cache
  • 41 disks used on 4 enclosures/shelves (1 with 14 disks, 3 with 9 disks)
  • 37 LUNs used
    * 17 at 38GB (file server + mail server) over 20 x 73GB SSDs
    * 17 at 15GB (database) + 2 LUNs at 400GB (Misc) over 16 x 450GB 15k disks
    * 1 LUN at 20GB (boot) over 5 x 300GB 15k disks
  • RAID 0 for VMs, RAID 5 for VMware ESX 4.0 O/S

While first place on the VMmark page (8 cores) shows Fujitsu’s RX300, it’s important to note that that score was reached using Intel’s W5590 processor – a processor designed for “workstations,” not servers.  Among server processors, second place currently goes to HP’s BL490 at 24.54 (@ 17 tiles).

Thanks to Omar Sultan (@omarsultan) for Tweeting about this and to Harris Sussman for blogging about it.

Mark Your Calendar – Upcoming Announcements

As I mentioned previously, the next few weeks are going to be filled with new product / technology announcements.  Here’s a list of some dates that you may want to mark on your calendar (and make sure to come back here for details):

Feb 9 – Big Blue new product announcement (hint: in the BladeCenter family)

Mar 2 – Big Blue non-product announcement (hint: it’s not the eX4 family)

Mar 16  – Intel Westmere (Intel Xeon 5600) Processor Announcement (expect HP and IBM to announce their Xeon 5600 offerings)

Mar 30 – Intel Nehalem EX (Xeon 7500) Processor Announcement (expect HP and IBM to announce their Intel Xeon 7500 offerings)

As always, you can expect for me to give you coverage on the new blade server technology as it gets announced!

IBM’s 4 Processor Intel Nehalem EX Blade Server

2-2-10 CORRECTION Made Below

Okay, I’ve seen the details on IBM’s next generation 4 processor blade server based on the Intel Nehalem EX CPU, and I can tell you that IBM’s about to change the way people look at workloads for blade servers.  Out of respect for IBM (and at the risk of getting in trouble) I’m not going to disclose any confidential details, but I can tell you a few things:

1) my previous post about what the server will look like is not far off.  In fact, it was VERY close.  However, IBM upped the ante and made a few additions I didn’t expect that will make it appealing for customers who need to run large workloads.

2) the scheduled announce date for this new 4 processor IBM blade server based on the Nehalem EX (whose name I guessed correctly) will be after March 15, 2010 but before April 1, 2010.  Ship date is currently scheduled for sometime after May but before July.

As a final teaser, there’s another IBM blade server announcement scheduled for tomorrow.  Once it’s officially announced on Feb 9th (corrected from Feb 3rd), I’ll let you know and give you some details.