Yearly Archives: 2009

IBM BladeCenter HS22 Delivers Best SPECweb2005 Score Ever Achieved by a Blade Server

According to IBM’s System x and BladeCenter x86 Server Blog, the IBM BladeCenter HS22 server has posted the best SPECweb2005 score ever achieved by a blade server.  With a SPECweb2005 supermetric score of 75,155, the HS22 has reached a mark no other blade server has hit to date.  The SPECweb2005 benchmark is designed to be a neutral, vendor-equal benchmark for evaluating the performance of web servers.  According to the IBM blog, the score is derived from three measured workloads:

  • SPECweb2005_Banking – 109,200 simultaneous sessions
  • SPECweb2005_Ecommerce – 134,472 simultaneous sessions
  • SPECweb2005_Support – 64,064 simultaneous sessions

The HS22 achieved these results using two Quad-Core Intel Xeon X5570 processors (2.93GHz, with 256KB L2 cache per core and 8MB L3 cache per processor – 2 processors/8 cores/8 threads). The HS22 was also configured with 96GB of memory, the Red Hat Enterprise Linux® 5.4 operating system, the IBM J9 Java® Virtual Machine, 64-bit Accoria Rock Web Server 1.4.9 (x86_64) HTTPS software, and the Accoria Rock JSP/Servlet Container 1.3.2 (x86_64).

It’s important to note that these results have not yet been “approved” by SPEC, the group that publishes the results, but as soon as they are, they’ll be published at http://www.spec.org/osg/web2005.

The IBM HS22 is IBM’s most popular blade server with the following specs:

  • Up to 2 Intel Xeon 5500 series processors
  • 12 memory slots, for a current maximum of 96GB of RAM
  • 2 hot swap hard drive slots capable of running RAID 1 (SAS or SATA)
  • 2 PCI Express connectors for I/O expansion cards (NICs, Fibre HBAs, 10Gb Ethernet, CNA, etc)
  • Internal USB slot for running VMware ESXi
  • Remote management
  • Redundant connectivity


(UPDATED) Officially Announced: IBM’s Nexus 4000 Switch: 4001I (PART 2)

I’ve gotten a lot of response from my first post, “REVEALED: IBM’s Nexus 4000 Switch: 4001I” and more information is coming out quickly so I decided to post a part 2. IBM officially announced the switch on October 20, 2009, so here’s some additional information:

  • The Nexus 4001I Switch for the IBM BladeCenter is part # 46M6071 and has a list price of $12,999 (U.S.) each
  • In order for the Nexus 4001I switch for the IBM BladeCenter to connect to an upstream FCoE switch, an additional software purchase is required: part # 49Y9983, “Software Upgrade License for Cisco Nexus 4001I.” This license upgrade allows the Nexus 4001I to handle FCoE traffic. It has a U.S. list price of $3,899
  • The Cisco Nexus 4001I for the IBM BladeCenter will be compatible with the following blade server expansion cards:
    • 2/4 Port Ethernet Expansion Card, part # 44W4479
    • NetXen 10Gb Ethernet Expansion Card, part # 39Y9271
    • Broadcom 2-port 10Gb Ethernet Exp. Card, part # 44W4466
    • Broadcom 4-port 10Gb Ethernet Exp. Card, part # 44W4465
    • Broadcom 10 Gb Gen 2 2-port Ethernet Exp. Card, part # 46M6168
    • Broadcom 10 Gb Gen 2 4-port Ethernet Exp. Card, part # 46M6164
    • QLogic 2-port 10Gb Converged Network Adapter, part # 42C1830
  • (UPDATED 10/22/09) The newly announced Emulex Virtual Adapter WILL NOT work with the Nexus 4001I IN VIRTUAL NIC (vNIC) mode.  It will work in pNIC mode according to IBM.

The Cisco Nexus 4001I switch for the IBM BladeCenter is a new approach to getting converged network traffic out of the chassis. As I wrote a few weeks ago in my post, “How IBM’s BladeCenter works with Cisco Nexus 5000,” before the Nexus 4001I was announced, the only way to get your blade servers to communicate with a Cisco Nexus 5000 was to use a CNA and a 10Gb Pass-Thru Module, as shown on the left. The pass-thru module used in that solution requires a direct connection from the pass-thru module to the Cisco Nexus 5000 for every blade server that needs connectivity. This means that for 14 blade servers, 14 connections to the Cisco Nexus 5000 are required. This solution definitely works – it just eats up 14 Nexus 5000 ports. At $4,999 list (U.S.), plus the cost of the GBICs, the pass-thru scenario may still be a good fit for budget-conscious environments.

In comparison, with the IBM Nexus 4001I switch, we can now have as few as 1 uplink from the Nexus 4001I to the Cisco Nexus 5000. This leaves more open ports on the Cisco Nexus 5000 for connections to other IBM BladeCenters with Nexus 4001I switches, or for connectivity from your rack-based servers with CNAs.

Bottom line: the Cisco Nexus 4001I switch will reduce your port requirements on your Cisco Nexus 5000 or Nexus 7000 switch by allowing up to 14 servers to uplink via 1 port on the Nexus 4001I.
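The consolidation math above can be sketched quickly (a rough sketch; the constants come from this post, not from a Cisco or IBM sizing guide):

```python
# Ports consumed on the upstream Nexus 5000/7000: pass-thru vs. Nexus 4001I.
# Numbers are from the blog post above, not an official sizing document.

BLADES_PER_CHASSIS = 14   # max servers in a BladeCenter H
UPLINKS_PER_4001I = 1     # as few as one uplink per Nexus 4001I

def nexus5000_ports_used(chassis_count, use_4001i):
    """Upstream Nexus ports consumed for a given number of chassis."""
    if use_4001i:
        return chassis_count * UPLINKS_PER_4001I
    # pass-thru module: one upstream port per blade with a CNA
    return chassis_count * BLADES_PER_CHASSIS

print(nexus5000_ports_used(1, use_4001i=False))  # 14 ports via pass-thru
print(nexus5000_ports_used(1, use_4001i=True))   # 1 port via the 4001I
```

In practice you would likely run more than one uplink from the 4001I for redundancy and bandwidth, but the port savings scale the same way.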

For more details on the IBM Nexus 4001I switch, I encourage you to go to the newly released IBM Redbook for the Nexus 4001I Switch.

IBM Announces Emulex Virtual Fabric Adapter for BladeCenter…So?

Emulex and IBM announced today the availability of a new Emulex expansion card for blade servers that presents up to 8 virtual NICs across its two physical ports.  The “Emulex Virtual Fabric Adapter for IBM BladeCenter” (IBM part # 49Y4235) is a CFF-h expansion card based on industry-standard PCIe architecture that can operate as a “Virtual NIC Fabric Adapter” or as a dual-port 10Gb or 1Gb Ethernet card. 

When operating in Virtual NIC (vNIC) mode, each of the 2 physical ports appears to the blade server as 4 virtual NICs, for a total of 8 virtual NICs per card.  According to IBM, the default bandwidth for each vNIC is 2.5Gbps. The cool feature of this mode is that each vNIC’s bandwidth can be configured anywhere from 100Mbps to 10Gbps.  The one catch is that this mode ONLY works with the BNT Virtual Fabric 10Gb Switch Module, which provides independent control for each vNIC.  This means no connection to Cisco Nexus…yet.  According to Emulex, firmware updates coming later (Q1 2010??) will allow this adapter to handle FCoE and iSCSI as a feature upgrade.  Not sure if that means compatibility with the Cisco Nexus 5000 or not.  We’ll have to wait and see.
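As a rough illustration of the vNIC carving described above (the function name and checks are mine, not Emulex’s – the real configuration happens in the BNT switch module, not in Python):

```python
# Sketch of the vNIC bandwidth arithmetic: each 10Gb physical port is carved
# into 4 vNICs, each configurable from 100 Mbps to 10 Gbps (per the post).

PORT_CAPACITY_MBPS = 10_000
VNICS_PER_PORT = 4
DEFAULT_VNIC_MBPS = 2_500

def validate_vnic_plan(rates_mbps):
    """Check a per-port vNIC bandwidth plan against the stated limits."""
    assert len(rates_mbps) == VNICS_PER_PORT, "4 vNICs per physical port"
    for r in rates_mbps:
        assert 100 <= r <= 10_000, "each vNIC must be 100 Mbps - 10 Gbps"
    return rates_mbps

# The default plan of 4 x 2.5 Gbps exactly fills the 10Gb port.
defaults = validate_vnic_plan([DEFAULT_VNIC_MBPS] * VNICS_PER_PORT)
print(sum(defaults))  # 10000 Mbps
```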

When used as a normal Ethernet adapter (10Gb or 1Gb), aka “pNIC mode,” the card is seen as a standard 10Gbps or 1Gbps 2-port Ethernet expansion card.  The big difference here is that it will work with any available 10Gb switch or 10Gb pass-thru module installed in I/O module bays 7 and 9.

BladeCenter H I-O

So What?
I’ve known about this adapter since VMworld, but I haven’t blogged about it because I just don’t see a lot of value.  HP has had this functionality for over a year now in its Virtual Connect Flex-10 offering, so this technology is nothing new.  Yes, it would be nice to set up a NIC in VMware ESX that only uses 200Mbps of a pipe, but what’s the difference between a virtual NIC that “thinks” it can only use 200Mbps and a big fat 10Gb pipe shared by all of your I/O traffic?  I’m just not sure, but I’m open to any comments or thoughts.


REVEALED: IBM's Nexus 4000 Switch: 4001I (Updated)

Finally – information on the soon-to-be-released Cisco Nexus 4000 switch for the IBM BladeCenter.  Apparently IBM is officially calling its version the “Cisco Nexus Switch Module 4001I for the IBM BladeCenter.”  I’m not sure if it’s “officially” announced yet, but I’ve uncovered some details.  Here is a summary of the Cisco Nexus Switch Module 4001I for the IBM BladeCenter:

  • Six external 10-Gb Ethernet ports for uplink
  • 14 internal XAUI ports for connection to the server blades in the chassis
  • One 10/100/1000BASE-T RJ-45 copper management port for out-of-band management link  (this port is available on the front panel next to the console port)
  • One external RS-232 serial console port  (this port is available on the front panel and uses an RJ-45 connector)

More tidbits of info:

  • The switch will be capable of forwarding Ethernet and FCoE packets at wire rate speed. 
  • The six external ports will be SFP+ (no surprise) and they’ll support 10GBASE-SR SFP+, 10GBASE-LR SFP+, 10GBASE-CU SFP+ and GE-SFP.
  • Internal port speeds can run at 1 Gb or 10Gb (and can be set to auto-negotiate); full duplex
  • Internal ports will be able to forward Layer-2 packets at wire rate speed.
  • The switch will work in the IBM BladeCenter “high-speed bays” (bays 7, 8, 9 and 10); however, at this time the available Converged Network Adapters (CNAs) for IBM blade servers will only work with Nexus 4001Is located in bays 7 and 9.
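Putting the port counts above together gives a quick worst-case oversubscription estimate (my arithmetic, not a Cisco figure):

```python
# Worst-case oversubscription for the Nexus 4001I, from the port counts above:
# 14 internal 10Gb-capable ports feeding 6 external 10Gb uplinks.

INTERNAL_PORTS, EXTERNAL_PORTS = 14, 6
PORT_SPEED_GBPS = 10

ingress = INTERNAL_PORTS * PORT_SPEED_GBPS   # 140 Gbps possible from blades
egress = EXTERNAL_PORTS * PORT_SPEED_GBPS    # 60 Gbps of uplink capacity
print(f"{ingress / egress:.2f}:1")           # worst-case oversubscription ratio
```

Real-world traffic rarely hits that worst case, and internal ports can also run at 1Gb, which would make the ratio far more favorable.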

There is also mention of a “Nexus 4005I” from IBM, but I can’t find anything on that.  I do not believe IBM has announced this product, so the information provided here is based on documentation from Cisco’s web site.  I expect the announcement to come in the next 2 weeks, though, with availability probably following in November – just in time for the Christmas rush!

For details on the information mentioned above, please see the Cisco document titled “Cisco Nexus 4001I and 4005I Switch Module for IBM BladeCenter Hardware Installation Guide.”

If you are interested in learning more about configuring NX-OS for the Cisco Nexus Switch Module 4001I, check out the “Cisco Nexus 4001I and 4005I Switch Module for IBM BladeCenter NX-OS Configuration Guide.”

 UPDATE (10/20/09): the IBM part # for the Cisco Nexus 4001I Switch Module will be 46M6071

 UPDATE # 2 (10/20/09, 5:37 PM EST): Found more Cisco links:
Cisco Nexus 4001I Switch Module At A Glance

Cisco Nexus 4001I Switch Module DATA SHEET

New Picture:

Nexus 4000i Photo 2

 


What Gartner Thinks of Cisco, HP, IBM and Dell (UPDATED)

(UPDATED 10/28/09 with new links to full article)

I received a Tweet from @HPITOps linking to Gartner’s first ever “Magic Quadrant” for blade servers.  The Magic Quadrant is a tool Gartner puts together to help people easily see where manufacturers rank, based on certain criteria.  As blade server adoption continues to grow, so does the demand for this kind of comparison.  You can read the complete Gartner paper at http://h20195.www2.hp.com/v2/getdocument.aspx?docname=4AA3-0100ENW.pdf, but I wanted to touch on a few highlights.

Key Points

  • Blades are less than 15% of the server marketplace today
  • HP and IBM make up 70% of the blade market share
  • HP, IBM and Dell are classified as “Leaders” in the blade marketplace, and Cisco is listed as a “Visionary” 

What Gartner Says About Cisco, Dell, HP and IBM

Cisco
Cisco announced their entry into the blade server market place in early 2009 and as of the past few weeks began shipping their first product.  Gartner’s report says, “Cisco’s Unified Computing System (UCS) is highly innovative and is particularly targeted at highly integrated and virtualized enterprise requirements.”  Gartner currently views Cisco as being in the “visionaries” quadrant.  The report comments that Cisco’s strengths are:

  • a global presence in “most data centers”
  • a differentiated blade design
  • a cross-selling opportunity across their huge install base
  • strong relationships with virtualization and integration vendors

As part of the report, Gartner also mentions some negative points (aka “Cautions”) about Cisco to consider:

  • Lack of blade server install base
  • limited blade portfolio
  • limited hardware certification by operating system and application software vendors

Obviously these Cautions are based on Cisco’s newness to the marketplace, so let’s wait 6 months and check back on what Gartner thinks.

Dell
No stranger to the blade marketplace, Dell continues to produce new servers and new designs.  While Dell has a fantastic marketing department, they still are nowhere close to the market share that IBM and HP split.  In spite of this, Gartner still classifies Dell in the “leaders” quadrant.  According to the report, “Dell offers Intel and AMD Opteron blade servers that are well-engineered, enterprise-class platforms that fit well alongside the rest of Dell’s x86 server portfolio, which has seen the company grow its market share steadily through the past 18 months.”

The report views that Dell’s strengths are:

  • a cross-selling opportunity to their existing server, desktop and notebook customers
  • aggressive pricing policies
  • a focus on innovation in areas like cooling and virtual I/O

Dell’s “cautions” are reported as:

  • a limited portfolio targeted toward enterprise needs
  • a history of “patchy commitment” to their blade platforms

It will be interesting to see where Dell takes their blade model.  It’s easy to have a low-price model on entry-level rack servers, but in a blade infrastructure – where standardization is key and integrated switches are a necessity – having the lowest price may get tough.

IBM
IBM has been in the blade server marketplace since 2002 with a wide variety of server and chassis offerings.  Gartner placed IBM in the “leaders” quadrant as well, positioning IBM higher and further to the right, signifying a “greater ability to execute” and a “more complete vision.”  While IBM once had the lead in blade server market share, they’ve since handed that over to HP.  Gartner reports, “IBM is putting new initiatives in place to regain market share, including supply chain enhancements, dedicated sales resources and new channel programs.” 

The report views that IBM’s strengths are:

  • strong global market share
  • cross selling opportunities to sell into existing IBM System x, System i, System p and System z customers
  • broad set of chassis options that address specialized needs (like DC power & NEBS compliance for Telco) as well as Departmental / Enterprise
  • blade server offerings for x86 and Power Processors
  • strong record of management tools
  • innovation around cooling and specialized workloads

Gartner only lists one “caution” for IBM and that is their loss of market share to HP since 2007.

HP
Gartner identifies HP as farthest to the right in the October 2009 Magic Quadrant, therefore I’ll classify HP as the #1 “leader.”  Gartner’s report says, “Since the 2006 introduction of its latest blade generation, HP has recaptured market leadership and now sells more blade servers than the rest of the market combined.”  Ironically, Gartner’s list of HP’s strengths is nearly identical to IBM’s:

  • global blade market leader
  • cross selling opportunities to sell into existing HP server, laptop and desktop customers
  • broad set of chassis options that address Departmental and Enterprise needs
  • blade server offerings for x86 and Itanium Processors
  • strong record of management tools
  • innovation around cooling and virtual I/O

Gartner lists only one “caution” for HP: their portfolio, as extensive as it may be, could be considered too complex, and it sits very close to HP’s alternative modular, rack-based offerings.

Gartner’s report goes on to discuss niche players like Fujitsu, NEC and Hitachi, so if you are interested in reading about them, check out the full report at http://h20195.www2.hp.com/v2/getdocument.aspx?docname=4AA3-0100ENW.pdf.  All in all, Gartner’s report reaffirms that HP, IBM and Dell are the market leaders, for now, with Cisco coming up behind them.

Feel free to comment on this post and let me know what you think.


Cisco's UCS Software

eWeek recently posted snapshots of Cisco’s Unified Computing System (UCS) Software on their site: http://www.eweek.com/c/a/IT-Infrastructure/LABS-GALLERY-Cisco-UCS-Unified-Computing-System-Software-199462/?kc=rss

Take a good look at the software, because it is the reason this blade system will be successful: Cisco treats the physical blades as a pool of resources – just CPUs, memory and I/O.  “What” the server should be and “how” the server should act is a function of the UCS Management software.  It will show you the physical layout of the blades to the UCS 6100 Interconnect, the configurations of the blades in the attached UCS 5108 chassis, the boot order of the blades, and more.  Quite frankly, there are too many features to mention and I don’t want to steal eWeek’s fire, so take a few minutes to go through the gallery linked above.

HP's Well Hidden Secret Blade Server


BL2x220c G5 (2 server "nodes" shown)

HP’s BladeSystem server offering is quite extensive – everything from a 4-CPU Intel blade to an Itanium CPU blade – however their most well-hidden, secret blade is the BL2x220c blade server.  Starting at $6,129, this blade server is an awesome feat of design because it is not just 1 server; it is 2 servers in 1 blade case, in a clamshell design (see below).  This means that in an HP C7000 BladeSystem chassis you could have 32 servers!  That’s 64 CPUs, 256 cores, and 1TB of RAM, all in a 10U rack space.  That’s pretty impressive.  Let me break it down for you.  Each “node” on a single 2-node BL2x220c G5 server contains:

  • Up to two Quad-Core Intel® Xeon® 5400 sequence processors
  • Up to 32 GB (4 x 8 GB) of memory, supported by (4) slots of PC2-5300 Registered DIMMs, 667 MHz
  • 1 non-hot plug small form factor SATA or Solid State hard drive
  • Embedded Dual-port NC326i Gigabit Server Adapter
  • One (1) I/O expansion slot via mezzanine card
  • One (1) internal USB 2.0 connector for security key devices and USB drive keys

BL2x220


You may have noticed that this server is a “G5” version and currently has the older Intel 5400 series processors.  Based on HP’s current blade offering, expect to see HP refresh this server to a “G6” model with the Intel® Xeon® 5500 series processors.  Once that happens, I expect more memory slots to come with it, since the Intel® Xeon® 5500 series processors have 3 memory channels.  I’m guessing 12 memory slots “per node,” or 24 memory slots per BL2x220c G6.  Purely speculation on my part, but it would make sense.  

Why do I consider this server to be one of HP’s best hidden secrets?  Simply because with that amount of server density, server processing power and server memory, the BL2x220c could become a perfect virtualization server.   Now if they’d only make a converged network adapter (CNA)…

How IBM's BladeCenter works with Cisco Nexus 5000

With Cisco’s announcement of the Nexus 4000 switch for blade chassis environments, I thought it would be good to discuss how IBM is able to connect blade servers via 10Gb Data Center Ethernet (or Converged Enhanced Ethernet) to a Cisco Nexus 5000.

Other than Cisco’s UCS offering, IBM is currently the only blade vendor offering a Converged Network Adapter (CNA) for its blade servers.  The 2-port CNA sits in a PCI Express slot on the server and is mapped to the high-speed bays, with CNA port #1 going to High Speed Bay #7 and CNA port #2 going to High Speed Bay #9.  Here’s an overview of the IBM BladeCenter H I/O architecture (click to open large image):

BladeCenter H I-O

Since the CNAs are only switched to I/O Bays 7 and 9, those are the only bays that require a “switch” for the converged traffic to leave the chassis.  At this time, the only option to get the converged traffic out of the IBM BladeCenter H is via a 10Gb “pass-thru” module.  A pass-thru module is not a switch – it just passes the signal through to the next layer, in this case the Cisco Nexus 5000. 

10 Gb Ethernet Pass-thru Module for IBM BladeCenter


The pass-thru module is relatively inexpensive; however, it requires a connection to the Nexus 5000 for every server with a CNA installed.  As a reminder, the IBM BladeCenter H can hold up to 14 servers with CNAs, which would consume 14 of the 20 ports on a Nexus 5010.  That’s a small cost to pay, however, to gain the 80% efficiency that 10Gb Data Center Ethernet (or Converged Enhanced Ethernet) offers.  The overall architecture for the IBM blade server with CNA + IBM BladeCenter H + Cisco Nexus 5000 would look like this (click to open larger image):

BladeCenter H Diagram 6 x 10Gb Uplinks

 

Hopefully, when IBM announces their Cisco Nexus 4000 switch for the IBM BladeCenter H later this month, it will provide connectivity to the CNAs on the IBM blade servers and will help consolidate the number of connections required to the Cisco Nexus 5000 from 14 to perhaps 6 ;) 

Cisco Announces Nexus 4000 Switch for Blade Chassis

Cisco is announcing today the release of the Nexus 4000 switch.  It will be designed to work in “other” blade vendors’ chassis, although Cisco isn’t announcing which vendors.  My gut says Dell and IBM will OEM it, but HP will stick with the ProCurve line it announced a few weeks ago.  Here’s what I know about the Nexus 4000 switch:

1) It will aggregate 1Gb links into a 10Gb uplink.  To me, this means that it will not be compatible with Converged Network Adapters (CNAs).  From this description, it seems to be just the Cisco Nexus 2000 in a blade form factor.  It’s simply a “fabric extender” allowing all of the traffic to flow into the Nexus 5000 switch.

2) It will run the Nexus O/S (NX-OS).  This is key because it allows users to have a seamless environment across their server and Nexus switch infrastructure.

3) Cisco Nexus 4000 will provide “cost effective transition from multiple 1GbE links to a lossless 10GbE for virtualized environments”  This statement confuses me.  Does it mean that the Cisco Nexus 4000 switch will be capable of working with 1Gb NICs as well as 10Gb CNAs, or is it just stating that the traditional 1Gb NICs will be able to connect into a lossless unified fabric??

Cisco is having a live broadcast at 10 am PST today, but I just reviewed the slide deck and they talk at a VERY high level on this new announcement.  I suppose maybe they are going to let each vendor (Dell and IBM) provide details once they officially announce their switches.  When they do, I’ll post details here.

IBM Announces 4 Socket Intel Blade Server-UPDATED

IBM announced last week they will be launching a new blade server modeled with the upcoming 4 socket Intel Nehalem EX.  While details have not yet been provided on this new server, I wanted to provide an estimation of what this server could look like, based on previous IBM models.  I’ve drawn up what I think it will look like below, but first let me describe it.

“New Server Name”
IBM’s naming scheme is pretty straightforward: Intel blades are “HS”, AMD blades are “LS”, Power blades are “JS”.  Knowing this, I believe the new server will most likely be called the “HS42“.  IBM previously had an HS40 and HS41, so calling it an HS42 would make the most sense. 

“Size”
With the amount of memory each CPU will have access to, I don’t see any way for IBM to create a 4-socket blade that isn’t a “double-wide” form factor.  A “double-wide” design means the server is 2 server slots wide, so in a single IBM BladeCenter H chassis, customers would be limited to 7 x HS42’s per chassis.

“Memory”
The Intel Nehalem EX will tentatively support 16 memory slots PER CPU across 4 memory channels, so a 4-socket server could have up to 64 memory slots.  Each memory channel can hold up to 4 DIMMs.  This is great, but it is the MAX for an upcoming Intel Nehalem EX server; I do not expect any blade server vendor to achieve 64 memory slots with 4 CPUs.  Since this is the maximum, it makes sense that vendors like IBM will use less.  I expect these new servers to have 12 memory slots per CPU (or 3 DIMMs per memory channel).  That still provides 48 memory DIMMs per “HS42” blade server, and with 16GB DIMMs, that would equal 768GB per blade server.
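The speculative memory arithmetic above works out like this (remember, the HS42 name and slot counts are my guesses, not IBM specs):

```python
# Speculative "HS42" memory math from the paragraph above; the 12-slots-per-CPU
# figure is the post's guess, not a published IBM specification.

CPUS = 4
SLOTS_PER_CPU = 12       # 3 DIMMs per channel x 4 memory channels
DIMM_GB = 16

slots = CPUS * SLOTS_PER_CPU
print(slots)             # 48 DIMM slots per "HS42"
print(slots * DIMM_GB)   # 768 GB max per blade with 16GB DIMMs
```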

“CPU”
The “HS42” would have up to 4 x Intel Nehalem EX CPUs, each with 8 cores, for a total of 32 CPU cores per “HS42” server.  HOWEVER, Intel is offering Hyper-Threading with this CPU, so an 8-core CPU presents itself as 16 logical CPUs.

“Internal Drive Capacity”
I don’t see any way for IBM to fit hot-swap drives in this server; there is just not enough real estate.  So I believe they would consider putting solid state drives (SSDs) toward the front of the server.  Will they put them on both sides of the server?  Probably not.  The role of these drives would be just to provide space for your boot O/S; the data will sit on a storage area network. 

“I/O Expansion”
I don’t think IBM will re-design their existing I/O architecture for the blade servers.  Therefore, I expect each side of the double-wide “HS42” to have a single CIOv and a CFF-h daughter card expansion slot, so a single HS42 would have 4 expansion slots.  This assumes IBM designs connector pins that interconnect the two halves of the server without interfering with the card slots (presumably at the upper half of the connections).

HS42 Estimation

As we come closer to the release date of the Intel Nehalem EX processor later in Q4 of 2009, I expect to hear more definitive details on the announced 4 socket IBM Blade server, so make sure to check back here later this year.

UPDATE (10/6/09):  I’m hearing rumors that IBM’s Nehalem EX offerings (aka the “X5” offerings) will be shipping in Q2 of 2010.  Once that is confirmed by IBM, I’ll post an update.