
HP Converged Infrastructure

In the wake of the Cisco, EMC, and VMware announcement, HP today is formally announcing the HP Converged Infrastructure. You can read the full details of this design on HP's website, but I wanted to try to simplify it:

The HP Converged Infrastructure comprises four core areas:

  • HP Infrastructure Operating Environment
  • HP FlexFabric
  • HP Virtual Resource Pools
  • HP Data Center Smart Grid

According to HP, achieving the benefits of a “converged infrastructure” requires the following core attributes:

  1. Virtualized pools of servers, storage, networking
  2. Resiliency built into the hardware, software, and operating environment
  3. Orchestration through highly automated resources to deliver an application aligned according to policies
  4. Optimized to support widely changing workloads and different applications and usage models
  5. Modular components built on open standards to more easily upgrade systems and scale capacity

Let’s take a peek into each of the core areas that make up the HP Converged Infrastructure.

HP Infrastructure Operating Environment
This element of the converged infrastructure provides a shared-services management engine that adapts and provisions the infrastructure. The goal of this core area is to expedite the delivery and provisioning of the datacenter’s infrastructure.

The HP Infrastructure Operating Environment comprises HP Dynamics, a command center that lets you continuously analyze and optimize your infrastructure, and HP Insight Control, HP’s existing server management software.


HP FlexFabric
HP defines this core area as a “next-generation, highly scalable data center fabric architecture and a technology layer in the HP Converged Infrastructure.” The goal of HP FlexFabric is to create a highly scalable, flat network domain that lets administrators easily provision networks on demand to meet virtual machines’ requirements.

HP’s FlexFabric is made up of HP’s ProCurve line and their VirtualConnect technologies. Beyond the familiar network components, the HP ProCurve Data Center Connection Manager is also included as a fundamental component, offering automated network provisioning.


HP Virtual Resource Pools
This core area is designed to allow for a virtualized collection of storage, servers and networking that can be shared, repurposed and provisioned as needed.

Most of HP’s enterprise products fit into this core area. The HP 9000 and HP Integrity servers use HP Global Workload Manager to provision workloads; HP ProLiant servers can use VMware’s or Microsoft’s virtualization technologies; and the HP StorageWorks SAN Virtualization Services Platform (SVSP) enables network-based (SAN) virtualization of heterogeneous disk arrays.


HP Data Center Smart Grid
The goal of this last core area of the HP Converged Infrastructure is to “create an intelligent, energy-aware environment across IT and facilities to optimize and reduce energy use, reclaiming facility capacity and reducing energy costs.”

HP approaches this core area with a few different products. The ProLiant G6 server line offers a “sea of sensors” that helps optimize power and cooling consumption. HP also offers the Performance Optimized Datacenter (POD), a container-based datacenter that optimizes power and cooling. Finally, HP uses its Insight Control software to manage the HP Thermal Logic technologies and smooth the peaks and valleys of server power consumption.

In summary, HP’s Converged Infrastructure follows suit with what many other vendors are doing: taking their existing products and technologies and re-marketing them under a more coherent message. Only time will tell whether this approach will succeed in growing HP’s business.

IBM Announces Emulex Virtual Fabric Adapter for BladeCenter…So?

Emulex and IBM announced today the availability of a new Emulex expansion card for blade servers that allows up to 8 virtual NICs to be assigned per card. The “Emulex Virtual Fabric Adapter for IBM BladeCenter” (IBM part # 49Y4235) is a CFF-H expansion card based on industry-standard PCIe architecture that can operate as a virtual NIC fabric adapter or as a dual-port 10 Gb or 1 Gb Ethernet card.

When operating in Virtual NIC (vNIC) mode, each of the 2 physical ports appears to the blade server as 4 virtual NICs, for a total of 8 virtual NICs per card. According to IBM, the default bandwidth for each vNIC is 2.5 Gbps. The cool feature of this mode is that each vNIC’s bandwidth can be configured anywhere from 100 Mbps up to the full 10 Gbps of the physical port. The one catch is that this mode ONLY works with the BNT Virtual Fabric 10Gb Switch Module, which provides independent control for each vNIC. This means no connection to Cisco Nexus…yet. According to Emulex, firmware updates coming later (Q1 2010??) will allow this adapter to handle FCoE and iSCSI as a feature upgrade. I’m not sure whether that means compatibility with the Cisco Nexus 5000 or not. We’ll have to wait and see.
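To make the vNIC arithmetic concrete, here is a minimal sketch that checks a per-port bandwidth allocation against the figures above (2 physical 10 Gb ports, 4 vNICs each, per-vNIC range of 100 Mbps to 10 Gbps, 2.5 Gbps default). The function and constant names are purely illustrative, not any real Emulex or BNT management API.

```python
# Hypothetical validator for the adapter's vNIC mode, using the numbers
# quoted by IBM above. Names are made up for illustration only.
VNICS_PER_PORT = 4
MIN_VNIC_MBPS = 100        # 100 Mbps lower bound per vNIC
MAX_VNIC_MBPS = 10_000     # each vNIC may use up to the full 10 Gb port
DEFAULT_VNIC_MBPS = 2_500  # IBM's quoted default of 2.5 Gbps

def validate_port_allocation(vnic_mbps):
    """Check a list of per-vNIC bandwidths (in Mbps) for one physical port
    and return the total provisioned bandwidth."""
    if len(vnic_mbps) != VNICS_PER_PORT:
        raise ValueError(f"expected {VNICS_PER_PORT} vNICs per port")
    for bw in vnic_mbps:
        if not MIN_VNIC_MBPS <= bw <= MAX_VNIC_MBPS:
            raise ValueError(f"{bw} Mbps is outside the 100 Mbps-10 Gbps range")
    # Note: the sum may exceed 10,000 Mbps, since each vNIC can be sized
    # up to the full port speed (i.e., the port can be oversubscribed).
    return sum(vnic_mbps)

# Default configuration: 4 x 2.5 Gbps exactly fills the 10 Gb port.
print(validate_port_allocation([DEFAULT_VNIC_MBPS] * VNICS_PER_PORT))  # 10000
```

The interesting design point is the oversubscription: because any single vNIC can burst to 10 Gbps, the per-port total is a provisioning guideline rather than a hard cap.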

When used as a normal Ethernet adapter (10 Gb or 1 Gb), aka “pNIC mode,” the card is seen as a standard 2-port 10 Gbps or 1 Gbps Ethernet expansion card. The big difference here is that it will work with any available 10 Gb switch or 10 Gb pass-thru module installed in I/O module bays 7 and 9.

(Image: BladeCenter H I/O module bays)

So What?
I’ve known about this adapter since VMworld, but I haven’t blogged about it because I just don’t see a lot of value. HP has had this functionality for over a year now in its VirtualConnect Flex-10 offering, so this technology is nothing new. Yes, it would be nice to set up a NIC in VMware ESX that only uses 200 Mbps of a pipe, but what’s the difference between a carved-out NIC that “thinks” it can only use 200 Mbps and one big fat 10 Gb pipe for all of your I/O traffic? I’m just not sure, but I’m open to any comments or thoughts.
