IBM Uses Blade Servers to Help Fight Fires

On Thursday, IBM plans to announce its work with university researchers to process wildfire-prediction data instantly, cutting the update cycle from every six hours to real time. This will not only help firefighters control blazes more efficiently, but also support more informed decisions on public evacuations and health warnings.

The new joint project with the University of Maryland, Baltimore County allows researchers to analyze smoke patterns during wildfires by instantly processing the massive amounts of data available from drone aircraft, high-resolution satellite imagery and air-quality sensors, then developing more effective models for smoke dissipation using a cluster of IBM BladeCenters and IBM InfoSphere Streams analytics.  Today, analysis of smoke patterns is limited to weather forecasting data, observations from front-line workers and low-resolution satellite imagery.  The new system will provide fire and public safety officials with a real-time assessment of smoke patterns during a fire, allowing them to make more informed decisions on public evacuations and health warnings.

Researchers expect to have a prototype of this new system available by next year.

HP Unveils New Updated Blade Server: BL2x220c G6

HP officially announced today an update to its BL2x220c blade server line.  Although the primary purpose of the update was to introduce the Intel Xeon 5500 Series processor to the line, there are other significant enhancements as well (shown below in bold):

  • Up to two Quad-Core Intel® Xeon® 5500 sequence processors
  • Up to 48 GB (6 x 8 GB) of memory, supported by six (6) slots of Registered DIMMs, 1066 MHz
  • 1 non-hot plug small form factor SATA or Solid State hard drive
  • Embedded Dual-port NC326i Gigabit Server Adapter
  • One (1) I/O expansion slot via mezzanine card
  • One (1) internal USB 2.0 connector for security key devices and USB drive keys
  • Supported ONLY in c7000 Chassis

For those of you not familiar with the BL2x220c blade server, I think it is one of HP’s best kept secrets.  This blade server is an impressive feat of design because it is not just 1 server, it is 2 servers in 1 blade case – in a clam shell design (see below).  This means that in an HP c7000 BladeSystem chassis you could have 32 servers!    That’s 64 CPUs, 256 cores and roughly 1.5TB of RAM, all in 10U of rack space.  That’s pretty impressive.
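The density math is easy to double-check with a quick back-of-the-envelope script, using the per-server figures from the spec list above (16 half-height bays per c7000 is the standard chassis layout):

```python
# Back-of-the-envelope density check for a c7000 full of BL2x220c G6 blades.
# Per-server figures come from the spec list above; a c7000 has 16 half-height bays.

BAYS_PER_C7000 = 16        # half-height bays in a c7000 chassis
SERVERS_PER_BLADE = 2      # clam-shell design: two servers per blade
CPUS_PER_SERVER = 2        # up to two Xeon 5500s per server
CORES_PER_CPU = 4          # quad-core
MAX_RAM_GB_PER_SERVER = 48 # 6 x 8 GB

servers = BAYS_PER_C7000 * SERVERS_PER_BLADE       # 32
cpus = servers * CPUS_PER_SERVER                   # 64
cores = cpus * CORES_PER_CPU                       # 256
ram_tb = servers * MAX_RAM_GB_PER_SERVER / 1024    # 1.5

print(servers, cpus, cores, ram_tb)
```

At 48 GB per server node, the chassis maxes out at about 1.5 TB of RAM.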

For more details on this new server, I encourage you to visit the QuickSpecs website at http://h10010.www1.hp.com/wwpc/us/en/sm/WF05a/3709945-3709945-3328410-241641-3722790-4047584.html

 

HP Converged Infrastructure

In the wake of the Cisco, EMC and VMware announcement, HP is today formally announcing the HP Converged Infrastructure.  You can read the full details of this design on HP’s website, but I wanted to try to simplify it:

The HP Converged Infrastructure is comprised of four core areas:

  • HP Infrastructure Operating Environment
  • HP FlexFabric
  • HP Virtual Resource Pools
  • HP Data Center Smart Grid

According to HP, achieving the benefits of a “converged infrastructure” requires the following core attributes:

  1. Virtualized pools of servers, storage, networking
  2. Resiliency built into the hardware, software, and operating environment
  3. Orchestration of highly automated resources to deliver applications aligned with policies
  4. Optimized to support widely changing workloads and different applications and usage models
  5. Modular components built on open standards to more easily upgrade systems and scale capacity

Let’s take a peek at each of the core areas that make up the HP Converged Infrastructure.

HP Infrastructure Operating Environment
This element of the converged infrastructure provides a shared services management engine that adapts and provisions the infrastructure.  The goal of this core area is to expedite delivery and provisioning of the datacenter’s infrastructure. 

The HP Infrastructure Operating Environment is comprised of HP Dynamics – a command center that enables you to continuously analyze and optimize your infrastructure – and HP Insight Control – HP’s existing server management software.

HP FlexFabric
HP defines this core area as a “next-generation, highly scalable data center fabric architecture and a technology layer in the HP Converged Infrastructure.”  The goal of HP FlexFabric is to create a highly scalable, flat network domain that enables administrators to easily provision networks on demand to meet virtual machines’ requirements.

HP’s FlexFabric is made up of HP’s ProCurve line and its Virtual Connect technologies.  Beyond the familiar network components, the HP ProCurve Data Center Connection Manager is also included as a fundamental component, offering automated network provisioning.

HP Virtual Resource Pools
This core area is designed to allow for a virtualized collection of storage, servers and networking that can be shared, repurposed and provisioned as needed.

Most of HP’s enterprise products fit into this core area.  The HP 9000 and HP Integrity servers use HP Global Workload Manager to provision workloads; HP ProLiant servers can use VMware’s or Microsoft’s virtualization technologies; and the HP StorageWorks SAN Virtualization Services Platform (SVSP) enables network-based (SAN) virtualization of heterogeneous disk arrays.

HP Data Center Smart Grid
The goal of this last core area of the HP Converged Infrastructure is to “create an intelligent, energy-aware environment across IT and facilities to optimize and reduce energy use, reclaiming facility capacity and reducing energy costs.”

HP approaches this core area with a few different products.  The ProLiant G6 server line offers a “sea of sensors” that helps monitor and manage power and cooling consumption.  HP also offers the Performance Optimized Datacenter (POD) – a container-based datacenter that optimizes power and cooling.  Finally, HP uses its Insight Control software to manage the HP Thermal Logic technologies and to smooth out the peaks and valleys of server power usage.

Summary
In summary, HP’s Converged Infrastructure follows suit with what many other vendors are doing – taking their existing products and technologies and re-marketing them under a more coherent, closely aligned message.  Only time will tell whether this approach will succeed in growing HP’s business.

Cisco, EMC and VMware Announcement – My Thoughts


By now I’m sure you’ve read, heard or seen tweeted the announcement that Cisco, EMC and VMware have come together and created the Virtual Computing Environment coalition.   So what does this announcement really mean?  Here are my thoughts:

Greater Cooperation and Compatibility
Since these 3 top IT giants are working together, I expect to see greater cooperation among all three vendors, which will lead to a better understanding of what each vendor is offering.  More important, though, is that we’ll have a reference architecture that can be a starting point for designing a robust datacenter.  This will help validate the idea that an “optimized datacenter” is a solution every customer should consider.

Technology Validation
With the introduction of the Xeon 5500 processor from Intel earlier this year, and with the Nehalem EX expected in early Q1 2010, packing more and more virtual machines onto a single host server is becoming commonplace.  No longer is the processor or memory the bottleneck – now it’s the I/O.  With the introduction of Converged Network Adapters (CNAs), servers now have access to Converged Enhanced Ethernet (CEE), also called Data Center Ethernet (DCE), providing up to 10Gb of bandwidth running at 80% efficiency with lossless packets.  With this lossless Ethernet, I/O is no longer the bottleneck.

VMware offers the top selling virtualization software, so it makes sense they would be a good fit for this solution.

Cisco has the Unified Computing System, which connects a server running a CNA to an interconnect switch that splits the traffic back out into Ethernet and storage traffic.  It also has a building-block design that makes it easy to add new servers – a key message in the coalition announcement.

EMC offers a storage platform that can terminate the storage traffic from the Cisco UCS 6120XP interconnect switch, and it has a vested interest in both VMware and Cisco, so this marriage of the 3 top IT vendors is a great fit.

Announcement of Vblock™ Infrastructure Packages
According to the announcement, the Vblock Infrastructure Packages “will provide customers with a fundamentally better approach to streamlining and optimizing IT strategies around private clouds.”  The packages will be fully integrated, tested and validated, combining best-in-class virtualization, networking, computing, storage, security and management technologies from Cisco, EMC and VMware with end-to-end vendor accountability.  My take on these packages is that they are really nothing new: Cisco’s UCS has been around, VMware vSphere has been around and EMC’s storage has been around.  The biggest message from this announcement is that there will soon be “bundles” that simplify customers’ solutions.  Will that take away from solution providers’ ability to implement unique solutions?  I don’t think so.  Although the announcement does not deliver any new product, it does mark the beginning of an interesting relationship between 3 top IT giants, and I think it could well prove to be an industry-changing one – it will be interesting to see what follows.

UPDATE – click here to check out a 3D model of the Vblock architecture.

Cisco’s Unified Computing System Management Software

Cisco’s own Omar Sultan and Brian Schwarz recently blogged about Cisco’s Unified Computing System (UCS) Manager software and offered up a pair of videos demonstrating its capabilities.  In my opinion, the management software is the magic that is going to push Cisco out of the “Visionaries” quadrant of the Gartner Magic Quadrant for Blade Servers and into the “Leaders” quadrant.

The Cisco UCS Manager is the centralized management interface that integrates the entire set of Cisco Unified Computing System components.  The management software participates not only in UCS blade server provisioning, but also in device discovery, inventory, configuration, diagnostics, monitoring, fault detection, auditing, and statistics collection.

On Omar’s Cisco blog, located at http://blogs.cisco.com/datacenter, Omar and Brian posted two videos.  Part 1 offers a general overview of the management software, whereas Part 2 highlights the capabilities of profiles.

I encourage you to check out the videos – they did a great job with them.

Cisco's New Virtualized Adapter (aka "Palo")

Previously known as “Palo”, Cisco’s virtualized adapter allows a server to split its 10Gb pipes into numerous virtual pipes, such as multiple NICs or multiple Fibre Channel HBAs.  Although the card shown in the image to the left is a normal PCIe card, the initial launch of the card will be in the Cisco UCS blade server.

So, What’s the Big Deal?

When you look at server workloads, their needs vary – web servers may need a pair of NICs, whereas database servers may need 4+ NICs and 2+ HBAs.  By having the ability to split the 10Gb pipe into virtual devices, you can set up profiles inside Cisco’s UCS Manager to match a specific server’s needs.  An example would be a server used for VMware VDI (6 NICs and 2 HBAs) during the day that is repurposed at night as a computational server needing only 4 NICs.
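As a rough illustration of that day/night repurposing, swapping a blade’s identity is essentially swapping a small bundle of virtual-device settings.  The field names below are hypothetical stand-ins, not Cisco’s actual UCS Manager object model:

```python
# Hypothetical sketch of repurposing a blade by swapping virtual-device
# profiles, in the spirit of Cisco UCS service profiles. The dictionary
# fields are illustrative only, not the real UCS Manager schema.

vdi_profile = {"name": "vmware-vdi", "vnics": 6, "vhbas": 2}
compute_profile = {"name": "batch-compute", "vnics": 4, "vhbas": 0}

def apply_profile(blade, profile):
    """Return the blade's new identity after a profile is applied."""
    return {**blade,
            "active_profile": profile["name"],
            "devices": profile["vnics"] + profile["vhbas"]}

blade = {"slot": 3, "active_profile": None, "devices": 0}
day = apply_profile(blade, vdi_profile)        # 8 virtual devices by day
night = apply_profile(blade, compute_profile)  # 4 virtual devices by night
print(day["devices"], night["devices"])
```

The point is that the physical adapter never changes; only the profile applied to it does.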

Another thing to note: although the image shows 128 virtual devices, that is only the theoretical limit.  In reality, the number of virtual devices depends on the number of connections to the Fabric Interconnects.  As I previously posted, the server chassis has a pair of 4-port Fabric Extenders (aka FEX) that uplink to the UCS 6100 Fabric Interconnect.  If only 1 of the 4 ports is uplinked to the UCS 6100, only 13 virtual devices will be available.  If 2 FEX ports are uplinked, 28 virtual devices will be available.  If all 4 FEX uplink ports are used, 58 virtual devices will be available.
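Interestingly, those three figures fit a simple pattern – roughly 15 virtual devices per active FEX uplink, minus a small fixed overhead.  That closed form is my own curve fit to the three published data points, not a documented Cisco formula:

```python
# Observed mapping of active FEX uplink ports to available virtual devices,
# from the figures quoted above. The closed form (15 per uplink minus 2)
# is my own fit to these data points, not a documented Cisco formula.

observed = {1: 13, 2: 28, 4: 58}

def virtual_devices(uplinks):
    return 15 * uplinks - 2  # fits all three published data points

for uplinks, devices in observed.items():
    assert virtual_devices(uplinks) == devices
print("fit holds for uplink counts:", sorted(observed))
```

Whether the same relationship holds for other uplink counts (e.g. 3) is something I haven’t seen documented.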

Will the ability to carve your 10Gb pipes into smaller ones make a difference?  It’s hard to tell.  I guess we’ll see when the card starts to ship in December 2009.

What is the HP BladeSystem Matrix?

HP announced a while ago a new product it calls the HP BladeSystem Matrix.  Okay, well, it’s not really a “product” so much as a solution.  HP calls the BladeSystem Matrix “a cloud infrastructure in a box” – which is a good way to look at it.  The infrastructure that is “the Matrix” is simply HP’s BladeSystem chassis, loaded with blade servers and attached to an HP storage SAN.  Add to the mix some automation, via templates, and you have the BladeSystem Matrix.  The secret behind this unique solution is the “Matrix Orchestration Environment”, which combines automated provisioning, capacity planning and disaster recovery with a self-service portal in one “command center.”  However, this is not a single piece of software, but a combination of HP Insight Dynamics – VSE and Insight Orchestration.

What’s In a BladeSystem Matrix?

There are two options for the HP BladeSystem Matrix bundle – a Starter Kit and an Expansion Kit.  The Starter Kit is designed to include all of the infrastructure necessary to manage up to 16 blade servers, with the option of adding a HP StorageWorks EVA4400 SAN.   The HP BladeSystem Matrix Starter Kit (hardware components) contains:

  • HP BladeSystem c7000 enclosure, single-phase, with 6 power supplies and 10 fans
  • HP BladeSystem c7000 Onboard Administrator with KVM option, redundant pair
  • HP Virtual Connect Flex-10 10Gb Ethernet modules, redundant pair
    NOTE: No transceivers/SFPs are included, so that you can choose these options – they must be added to the order.
  • HP Virtual Connect 8Gb 24-Port FC Module for BladeSystem c-Class, redundant pair
    NOTE: Two (2) Fibre Channel SFP+ transceivers are included with each module; therefore, 4 total transceivers per redundant pair.
  • BladeSystem Matrix documentation CD
  • BladeSystem Matrix label attached to 10000 series rack door handle

The part number for the HP BladeSystem Matrix Starter Kit (hardware components) is 535888-B21. It is important to note that the starter kit does not contain any blade servers or storage; those must be ordered separately.

The HP BladeSystem Matrix Starter Kit (software components) provides HP Insight software licenses for one enclosure / 16 servers, with a standard 1-year 24×7 Technical Support and Update Service unless 3-, 4- or 5-year Support Plus 24 Care Pack uplifts are purchased to extend the support and update period. These licenses include:

  • Insight Dynamics – VSE suite for ProLiant with Insight Control suite
  • Insight Orchestration software
  • Insight Recovery software
  • Virtual Connect Enterprise Manager software
  • HP Insight Remote Support Advanced (formerly Remote Support Pack)

The part number for HP BladeSystem Matrix Starter Kit (software components) is TB462A.

Once you have the hardware and software starter kits, you’ll need to purchase HP Professional Services for installation; the Central Management Server (CMS) – a BL460c with 2 CPUs and 12GB of RAM; and the additional blade servers and storage that you need.

The HP BladeSystem Matrix Expansion Kit (HP part # 507021-B21) is very similar to the Starter Kit:

  • HP BladeSystem c7000 enclosure, single-phase, with 6 power supplies and 10 fans
  • HP BladeSystem c7000 Onboard Administrator with KVM option, redundant pair
  • HP Virtual Connect Flex-10 10Gb Ethernet modules, redundant pair
    NOTE: No transceivers/SFPs are included, so that you can choose these options – they must be added to the order.
  • HP Virtual Connect 8Gb 24-Port FC Module for BladeSystem c-Class, redundant pair
    NOTE: Two (2) Fibre Channel SFP+ transceivers are included with each module; therefore, 4 total transceivers per redundant pair.
  • BladeSystem Matrix documentation CD
  • BladeSystem Matrix label attached to 10000 series rack door handle

However, it also includes software licenses for:

  • Insight Dynamics – VSE suite for ProLiant with Insight Control suite
  • Insight Orchestration software
  • Insight Recovery software
  • Virtual Connect Enterprise Manager software
  • HP Insight Remote Support Advanced (formerly Remote Support Pack)

Once again, you’ll need to purchase HP Professional Services for installation, plus the blade servers and storage that you need.

As you can see, the HP BladeSystem Matrix is not a new product – it is an easy way to order HP BladeSystem products and use HP services and software to get your server infrastructure in place quickly.  Let me know your thoughts – feel free to leave comments.  For more on the HP BladeSystem Matrix, visit HP’s website at http://h18004.www1.hp.com/products/blades/components/matrix/main.html

IBM BladeCenter HS22 Delivers Best SPECweb2005 Score Ever Achieved by a Blade Server

According to IBM’s System x and BladeCenter x86 Server Blog, the IBM BladeCenter HS22 server has posted the best SPECweb2005 score ever from a blade server.  With a SPECweb2005 supermetric score of 75,155, IBM has reached a benchmark no other blade has hit to date.  The SPECweb2005 benchmark is designed to be a neutral, equal benchmark for evaluating the performance of web servers.  According to the IBM blog, the score is derived from three different measured workloads:

  • SPECweb2005_Banking – 109,200 simultaneous sessions
  • SPECweb2005_Ecommerce – 134,472 simultaneous sessions
  • SPECweb2005_Support – 64,064 simultaneous sessions

The HS22 achieved these results using two Quad-Core Intel Xeon Processor X5570 (2.93GHz with 256KB L2 cache per core and 8MB L3 cache per processor—2 processors/8 cores/8 threads). The HS22 was also configured with 96GB of memory, the Red Hat Enterprise Linux® 5.4 operating system, IBM J9 Java® Virtual Machine, 64-bit Accoria Rock Web Server 1.4.9 (x86_64) HTTPS software, and Accoria Rock JSP/Servlet Container 1.3.2 (x86_64).

It’s important to note that these results have not yet been approved by SPEC, the group that publishes the results, but as soon as they are, they’ll be published at http://www.spec.org/osg/web2005

The IBM HS22 is IBM’s most popular blade server with the following specs:

  • Up to 2 x Intel 5500 processors
  • 12 memory slots for a current maximum of 96GB of RAM
  • 2 hot swap hard drive slots capable of running RAID 1 (SAS or SATA)
  • 2 PCI Express connectors for I/O expansion cards (NICs, Fibre HBAs, 10Gb Ethernet, CNA, etc)
  • Internal USB slot for running VMware ESXi
  • Remote management
  • Redundant connectivity

(UPDATED) Officially Announced: IBM’s Nexus 4000 Switch: 4001I (PART 2)

I’ve gotten a lot of responses to my first post, “REVEALED: IBM’s Nexus 4000 Switch: 4001I”, and more information is coming out quickly, so I decided to post a part 2. IBM officially announced the switch on October 20, 2009, so here’s some additional information:

  • The Nexus 4001I Switch for the IBM BladeCenter is part # 46M6071 and has a list price of $12,999 (U.S.) each
  • In order for the Nexus 4001I switch for the IBM BladeCenter to connect to an upstream FCoE switch, an additional software purchase is required: part # 49Y9983, “Software Upgrade License for Cisco Nexus 4001I.” This license allows the Nexus 4001I to handle FCoE traffic. It has a U.S. list price of $3,899
  • The Cisco Nexus 4001I for the IBM BladeCenter will be compatible with the following blade server expansion cards
    • 2/4 Port Ethernet Expansion Card, part # 44W4479
    • NetXen 10Gb Ethernet Expansion Card, part # 39Y9271
    • Broadcom 2-port 10Gb Ethernet Exp. Card, part # 44W4466
    • Broadcom 4-port 10Gb Ethernet Exp. Card, part # 44W4465
    • Broadcom 10 Gb Gen 2 2-port Ethernet Exp. Card, part # 46M6168
    • Broadcom 10 Gb Gen 2 4-port Ethernet Exp. Card, part # 46M6164
    • QLogic 2-port 10Gb Converged Network Adapter, part # 42C1830
  • (UPDATED 10/22/09) The newly announced Emulex Virtual Fabric Adapter WILL NOT work with the Nexus 4001I in virtual NIC (vNIC) mode.  It will work in pNIC mode, according to IBM.

The Cisco Nexus 4001I switch for the IBM BladeCenter is a new approach to converged network traffic. As I wrote a few weeks ago in my post, “How IBM’s BladeCenter works with Cisco Nexus 5000”, before the Nexus 4001I was announced, in order to get your blade servers to communicate with a Cisco Nexus 5000 you had to use a CNA and a 10Gb pass-thru module, as shown on the left. The pass-thru module in that solution requires a direct connection from the pass-thru module to the Cisco Nexus 5000 for every blade server that needs connectivity. This means that 14 blade servers require 14 connections to the Cisco Nexus 5000. This solution definitely works – it just eats up 14 Nexus 5000 ports. At $4,999 list (U.S.), plus the cost of the GBICs, the “pass-thru” scenario may still be a good fit for budget-conscious environments.

In comparison, with the IBM Nexus 4001I switch we can now have as few as 1 uplink from the Nexus 4001I to the Cisco Nexus 5000. This leaves more open ports on the Cisco Nexus 5000 for connections to other IBM BladeCenters with Nexus 4001I switches, or for connectivity from your rack-based servers with CNAs.

Bottom line: the Cisco Nexus 4001I switch will reduce the port requirements on your Cisco Nexus 5000 or Nexus 7000 switch by allowing up to 14 servers to uplink via as little as 1 port on the Nexus 4001I.
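The port savings are easy to quantify.  Using the figures above (14 blades per BladeCenter H chassis, one Nexus 5000 port per blade in the pass-thru design versus as few as one uplink per chassis with the 4001I), the upstream port consumption works out as follows:

```python
# Nexus 5000 ports consumed: pass-thru (one port per blade) versus the
# Nexus 4001I (as few as one uplink per chassis). 14 blades per
# BladeCenter H chassis, per the figures in the text above.

BLADES_PER_CHASSIS = 14

def n5k_ports_passthru(chassis):
    return chassis * BLADES_PER_CHASSIS  # one Nexus 5000 port per blade

def n5k_ports_4001i(chassis, uplinks_per_chassis=1):
    return chassis * uplinks_per_chassis  # one uplink per chassis minimum

print(n5k_ports_passthru(3), n5k_ports_4001i(3))  # 42 vs 3 for 3 chassis
```

In practice you’d likely run more than one uplink per chassis for redundancy and bandwidth, but the asymmetry stays dramatic.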

For more details on the IBM Nexus 4001I switch, I encourage you to go to the newly released IBM Redbook for the Nexus 4001I Switch.

IBM Announces Emulex Virtual Fabric Adapter for BladeCenter…So?

Emulex and IBM today announced the availability of a new Emulex expansion card for blade servers that allows up to 8 virtual NICs to be assigned per physical NIC.  The “Emulex Virtual Fabric Adapter for IBM BladeCenter” (IBM part # 49Y4235) is a CFFh expansion card based on industry-standard PCIe architecture, and it can operate as a virtual-NIC fabric adapter or as a dual-port 10Gb or 1Gb Ethernet card.

When operating in virtual NIC (vNIC) mode, each of the 2 physical ports appears to the blade server as 4 virtual NICs, for a total of 8 virtual NICs per card.  According to IBM, the default bandwidth for each vNIC is 2.5 Gbps. The cool feature of this mode is that the bandwidth of each vNIC can be configured from 100 Mbps to 10 Gbps, up to a maximum of 10 Gb per virtual port.  The one catch is that this mode ONLY works with the BNT Virtual Fabric 10Gb Switch Module, which provides independent control of each vNIC.  This means no connection to Cisco Nexus…yet.  According to Emulex, firmware updates coming later (Q1 2010?) will allow the adapter to handle FCoE and iSCSI as a feature upgrade.  I’m not sure whether that means compatibility with the Cisco Nexus 5000 or not.  We’ll have to wait and see.
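A quick sanity check of the vNIC arithmetic above: four vNICs per physical port at the 2.5 Gbps default exactly fill the 10Gb pipe, and any rebalancing has to stay within the port’s capacity.  This is just a sketch of that constraint – the assumption that allocations cannot oversubscribe the physical port is mine, not a description of the BNT switch’s actual admission logic:

```python
# Sanity-check vNIC bandwidth allocations against a 10Gb physical port,
# per the defaults described above. A sketch of the constraint only; the
# no-oversubscription rule is an assumption, not BNT's documented behavior.

PORT_GBPS = 10.0
VNICS_PER_PORT = 4
DEFAULT_VNIC_GBPS = 2.5

def allocation_valid(vnic_gbps):
    """Each vNIC must be 0.1-10 Gbps and the port total must fit in 10Gb."""
    return (all(0.1 <= g <= 10.0 for g in vnic_gbps)
            and sum(vnic_gbps) <= PORT_GBPS)

defaults = [DEFAULT_VNIC_GBPS] * VNICS_PER_PORT
print(allocation_valid(defaults))               # defaults fill the pipe exactly
print(allocation_valid([6.0, 2.0, 1.0, 1.0]))   # rebalanced, still fits
print(allocation_valid([6.0, 4.0, 2.0, 1.0]))   # 13 Gb total: oversubscribed
```

The interesting operational question is what the switch does when you try the third case; the text above only tells us each vNIC is independently controlled.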

When used as a normal Ethernet adapter (10Gb or 1Gb), aka “pNIC mode”, the card is seen as a standard 2-port 10 Gbps or 1 Gbps Ethernet expansion card.   The big difference here is that in this mode it will work with any available 10Gb switch or 10Gb pass-thru module installed in I/O module bays 7 and 9.

So What?
I’ve known about this adapter since VMworld, but I haven’t blogged about it because I just don’t see a lot of value.  HP has had this functionality in its Virtual Connect Flex-10 offering for over a year now, so this technology is nothing new.  Yes, it would be nice to set up a NIC in VMware ESX that only uses 200Mb of the pipe, but what’s the difference between a virtual NIC that “thinks” it can only use 200Mb and a big fat 10Gb pipe carrying all of your I/O traffic?  I’m just not sure, but I’m open to any comments or thoughts.
