Tag Archives: VMware

(UPDATED) Blade Servers with SD Slots for Virtualization

(updated 1/13/2010 – see bottom of blog for updates)

Eric Gray at www.vcritical.com blogged today about the benefits of using a flash-based device, like an SD card, for loading VMware ESXi, so I thought I would take a few minutes to touch on the topic.

As Eric mentions, probably the biggest benefit of running VMware ESXi from an embedded device is that you don’t need local drives, which lowers your blade server’s power and cooling requirements.  While he mentions HP in his blog, both HP and Dell offer SD slots in their blade servers – so let’s take a look:

HP
HP currently offers SD slots in its BL460c G6 and BL490c G6 blade servers.  As you can see from the picture on the left (thanks again to Eric at vCritical.com), HP lets you access the SD slot from the top of the blade server.  This makes it fairly convenient to reach, although once the image is installed on the SD card, it’s probably never coming out.  HP’s QuickSpecs for the BL460c G6 offer an “HP 4GB SD Flash Media” with a current list price of $70; however, I have been unable to find any documentation that says you MUST use this SD card, so if you want to try your own personal SD card first, good luck.  It is important to note that, unlike Dell, HP does not currently offer VMware ESXi, or any other virtualization vendor’s software, pre-installed on an SD card.

Dell
Dell has been offering SD slots on select servers for quite a while.  In fact, I can remember seeing it at VMworld 2008.  Everyone else was showing “embedded hypervisors” on USB keys while Dell was using an SD card.  I don’t know that I have a personal preference of USB vs SD, but the point is that Dell was ahead of the game on this one.

Dell currently offers its SD slot only on the M805 and M905 blade servers.  These are full-height servers, which could be considered good candidates for virtualization hosts due to their redundant connectivity, large memory capacity and high I/O (but that’s for another blog post.)

Dell chose to place the SD slots on the bottom rear of its blade servers.  I’m not sure I agree with the placement, because if you needed to access the card, for whatever reason, you would have to pull the server completely out of the chassis to service it.  It’s a small thing, but it adds time and complexity to the serviceability of the server.

An advantage that Dell has over HP is that it offers VMware ESXi 4 PRE-LOADED on the SD card upon delivery.  Per the Dell website, an SD card with ESXi 4 (basic, not Standard or Enterprise) is available for $99.  It’s listed as “VMware ESXi v4.0 with VI4, 4CPU, Embedded, Trial, No Subsc, SD, NoMedia“.  Yes, it’s considered a “trial” and it’s the basic version with no bells or whistles, but it is pre-loaded, which equals time savings.  There are options to upgrade ESXi to either Standard or Enterprise as well (for additional cost, of course.)

It is important to note that this discussion was only about SD slots.  All of the blade server vendors, including IBM, have incorporated internal USB slots in their blade servers, so even where a specific server may not have an SD slot, there is still the ability to load the hypervisor onto a USB key (where supported.)

1/13/2010 UPDATE – SD slots are also available on the BL280c G6 and BL685c G6.

There is also an HP Advisory discouraging use of an internal USB key for embedded virtualization.  Check it out at:

http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c01957637&lang=en&cc=us&taskId=101&prodSeriesId=3948609&prodTypeId=3709945

HP Converged Infrastructure

In the wake of the Cisco, EMC and VMware announcement, HP today is formally announcing the HP Converged Infrastructure.  You can take a look at the full details of this design on HP’s website, but I wanted to try to simplify:

The HP Converged Infrastructure is comprised of four core areas:

  • HP Infrastructure Operating Environment
  • HP FlexFabric
  • HP Virtual Resource Pools
  • HP Data Center Smart Grid

According to HP, achieving the benefits of a “converged infrastructure” requires the following core attributes:

  1. Virtualized pools of servers, storage, networking
  2. Resiliency built into the hardware, software, and operating environment
  3. Orchestration through highly automated resources to deliver an application aligned according to policies
  4. Optimized to support widely changing workloads and different applications and usage models
  5. Modular components built on open standards to more easily upgrade systems and scale capacity

Let’s take a peek into each of the core areas that make up the HP Converged Infrastructure.

HP Infrastructure Operating Environment
This element of the converged infrastructure provides a shared services management engine that adapts and provisions the infrastructure.  The goal of this core area is to expedite delivery and provisioning of the datacenter’s infrastructure. 

The HP Infrastructure Operating Environment is comprised of HP Insight Dynamics – a command center that enables you to continuously analyze and optimize your infrastructure – and HP Insight Control, HP’s existing server management software.

HP FlexFabric
HP defines this core area as a “next-generation, highly scalable data center fabric architecture and a technology layer in the HP Converged Infrastructure.”  The goal of HP FlexFabric is to create a highly scalable, flat network domain that lets administrators provision networks easily and on demand to meet virtual machines’ requirements.

HP’s FlexFabric is made up of HP’s ProCurve line and its Virtual Connect technologies.  Beyond the familiar network components, the HP ProCurve Data Center Connection Manager is also included as a fundamental component, offering automated network provisioning.

HP Virtual Resource Pools
This core area is designed to provide a virtualized pool of storage, servers and networking that can be shared, repurposed and provisioned as needed.

Most of HP’s Enterprise products fit into this core area.  The HP 9000 and HP Integrity servers use HP Global Workload Manager to provision workloads; HP ProLiant servers can use VMware’s or Microsoft’s virtualization technologies; and the HP StorageWorks SAN Virtualization Services Platform (SVSP) enables network-based (SAN) virtualization of heterogeneous disk arrays.

HP Data Center Smart Grid
The goal of this last core area of the HP Converged Infrastructure is to “create an intelligent, energy-aware environment across IT and facilities to optimize and reduce energy use, reclaiming facility capacity and reducing energy costs.”

HP approaches this core area with a few different products.  The ProLiant G6 server line offers a “sea of sensors” that help monitor and manage power and cooling consumption.  HP also offers the Performance Optimized Datacenter (POD), a container-based datacenter that optimizes power and cooling.  Finally, HP uses its Insight Control software to manage the HP Thermal Logic technologies and control the peaks and valleys of server power consumption.

Summary
In summary, HP’s Converged Infrastructure follows suit with what many other vendors are doing – taking their existing products and technologies and re-marketing them to reflect a more coherent message.  Only time will tell whether this approach will be successful in growing HP’s business.

Cisco, EMC and VMware Announcement – My Thoughts


By now I’m sure you’ve read, heard or seen tweeted the announcement that Cisco, EMC and VMware have come together and created the Virtual Computing Environment coalition.  So what does this announcement really mean?  Here are my thoughts:

Greater Cooperation and Compatibility
Since these 3 top IT giants are working together, I expect to see greater cooperation between all three vendors, which will lead to a better understanding of what each vendor is offering.  More important, though, is that we’ll have a reference architecture that can be a starting point for designing a robust datacenter.  This will help validate that an “optimized datacenter” is a solution every customer should consider.

Technology Validation
With the introduction of Intel’s Xeon 5500 processor earlier this year and the announcement of the Nehalem EX coming in early Q1 2010, packing more and more virtual machines onto a single host server is becoming more prevalent.  No longer is the processor or memory the bottleneck – now it’s the I/O.  With the introduction of Converged Network Adapters (CNAs), servers now have access to Converged Enhanced Ethernet (CEE) or Data Center Ethernet (DCE), providing up to 10Gb of bandwidth running at 80% efficiency with lossless packets.  With this lossless Ethernet, I/O is no longer the bottleneck.

VMware offers the top selling virtualization software, so it makes sense they would be a good fit for this solution.

Cisco has a Unified Computing System that offers the ability to connect a server running a CNA to an interconnect switch that splits the data into Ethernet and storage traffic.  It also has a building-block design that makes it easy to add new servers – a key message in the coalition announcement.

EMC offers a storage platform that can receive the storage traffic from the Cisco UCS 6120XP interconnect switch, and it has a vested interest in VMware and Cisco, so this marriage of the 3 top IT vendors is a great fit.

Announcement of Vblock™ Infrastructure Packages
According to the announcement, the Vblock Infrastructure Packages “will provide customers with a fundamentally better approach to streamlining and optimizing IT strategies around private clouds.”  The packages will be fully integrated, tested and validated offerings that combine best-in-class virtualization, networking, computing, storage, security, and management technologies from Cisco, EMC and VMware with end-to-end vendor accountability.  My thought on these packages is that they are really nothing new.  Cisco’s UCS has been around, VMware vSphere has been around and EMC’s storage has been around.  The biggest message from this announcement is that there will soon be “bundles” that simplify customers’ solutions.  Will that take away from Solution Providers’ ability to implement unique solutions?  I don’t think so.  Although this announcement does not introduce any new product, it does mark the beginning of an interesting relationship between 3 top IT giants, and I think it will definitely be an industry change – it will be interesting to see what follows.

UPDATE – click here to check out a 3D model of the Vblock architecture.

IBM BladeCenter HS22 Delivers Best SPECweb2005 Score Ever Achieved by a Blade Server

According to IBM’s System x and BladeCenter x86 Server Blog, the IBM BladeCenter HS22 server has posted the best SPECweb2005 score ever from a blade server.  With a SPECweb2005 supermetric score of 75,155, IBM has reached a benchmark no other blade has hit to date.  The SPECweb2005 benchmark is designed to be a neutral, equal benchmark for evaluating the performance of web servers.  According to the IBM blog, the score is derived from three measured workloads:

  • SPECweb2005_Banking – 109,200 simultaneous sessions
  • SPECweb2005_Ecommerce – 134,472 simultaneous sessions
  • SPECweb2005_Support – 64,064 simultaneous sessions

The HS22 achieved these results using two Quad-Core Intel Xeon Processor X5570 (2.93GHz with 256KB L2 cache per core and 8MB L3 cache per processor—2 processors/8 cores/8 threads). The HS22 was also configured with 96GB of memory, the Red Hat Enterprise Linux® 5.4 operating system, IBM J9 Java® Virtual Machine, 64-bit Accoria Rock Web Server 1.4.9 (x86_64) HTTPS software, and Accoria Rock JSP/Servlet Container 1.3.2 (x86_64).

It’s important to note that these results have not yet been “approved” by SPEC, the group that publishes the results, but as soon as they are, they’ll be published at http://www.spec.org/osg/web2005

The IBM HS22 is IBM’s most popular blade server, with the following specs:

  • up to 2 x Intel Xeon 5500 series processors
  • 12 memory slots for a current maximum of 96GB of RAM
  • 2 hot swap hard drive slots capable of running RAID 1 (SAS or SATA)
  • 2 PCI Express connectors for I/O expansion cards (NICs, Fibre HBAs, 10Gb Ethernet, CNA, etc)
  • Internal USB slot for running VMware ESXi
  • Remote management
  • Redundant connectivity

IBM Announces Emulex Virtual Fabric Adapter for BladeCenter…So?

Emulex and IBM announced today the availability of a new Emulex expansion card for blade servers that allows up to 4 virtual NICs to be assigned to each physical NIC.  The “Emulex Virtual Fabric Adapter for IBM BladeCenter” (IBM part # 49Y4235) is a CFFh expansion card based on industry-standard PCIe architecture that can operate as a “Virtual NIC Fabric Adapter” or as a dual-port 10 Gb or 1 Gb Ethernet card.

When operating in Virtual NIC (vNIC) mode, each of the 2 physical ports appears to the blade server as 4 virtual NICs, for a total of 8 virtual NICs per card.  According to IBM, the default bandwidth for each vNIC is 2.5 Gbps.  The cool feature of this mode is that the bandwidth for each vNIC can be configured from 100 Mbps to 10 Gbps, up to a maximum of 10 Gb per virtual port.  The one catch with this mode is that it ONLY operates with the BNT Virtual Fabric 10Gb Switch Module, which provides independent control for each vNIC.  This means no connection to Cisco Nexus…yet.  According to Emulex, firmware updates coming later (Q1 2010??) will allow this adapter to handle FCoE and iSCSI as a feature upgrade.  Not sure if that means compatibility with the Cisco Nexus 5000 or not.  We’ll have to wait and see.
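To make the bandwidth rules concrete, here’s a minimal sketch (my own illustration, not an Emulex or IBM tool) that checks a proposed vNIC bandwidth split against the limits described above – 4 vNICs per port, each between 100 Mbps and 10 Gbps – assuming the vNICs on one port cannot together oversubscribe the 10 Gb line rate:

```python
# Sketch: sanity-checking a vNIC bandwidth allocation on one 10 Gb port.
# Hypothetical helper for illustration only; the real configuration is
# done through the BNT Virtual Fabric 10Gb Switch Module.

PORT_CAPACITY_GBPS = 10.0   # physical port line rate
MIN_VNIC_GBPS = 0.1         # 100 Mbps lower bound per vNIC
MAX_VNIC_GBPS = 10.0        # 10 Gbps upper bound per vNIC
MAX_VNICS_PER_PORT = 4      # 4 vNICs per physical port

def validate_vnic_allocation(vnic_gbps):
    """Raise ValueError if the proposed per-vNIC rates (in Gbps)
    violate the count, per-vNIC range, or total-capacity limits."""
    if len(vnic_gbps) > MAX_VNICS_PER_PORT:
        raise ValueError("at most 4 vNICs per physical port")
    for rate in vnic_gbps:
        if not (MIN_VNIC_GBPS <= rate <= MAX_VNIC_GBPS):
            raise ValueError(f"vNIC rate {rate} Gbps outside 100 Mbps-10 Gbps")
    if sum(vnic_gbps) > PORT_CAPACITY_GBPS:
        raise ValueError("allocation exceeds the 10 Gb port capacity")
    return True

# The default configuration: four vNICs at 2.5 Gbps each, exactly 10 Gb.
validate_vnic_allocation([2.5, 2.5, 2.5, 2.5])
```

The default split shows why 2.5 Gbps is a natural starting point: four vNICs at the default exactly fill the 10 Gb pipe.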

When used as a normal Ethernet adapter (10Gb or 1Gb), aka “pNIC mode“, the card is seen as a standard 10 Gbps or 1 Gbps 2-port Ethernet expansion card.  The big difference here is that it will work with any available 10 Gb switch or 10 Gb pass-thru module installed in I/O module bays 7 and 9.

So What?
I’ve known about this adapter since VMworld, but I haven’t blogged about it because I just don’t see a lot of value.  HP has had this functionality for over a year now in its Virtual Connect Flex-10 offering, so this technology is nothing new.  Yes, it would be nice to set up a NIC in VMware ESX that only uses 200Mb of a pipe, but what’s the difference between a virtual NIC that “thinks” it can only use 200Mb and a big fat 10Gb pipe shared by all of your I/O traffic?  I’m just not sure, but am open to any comments or thoughts.
