Tag Archives: Cisco

Blade Networks Announces Industry’s First and Only Fully Integrated FCoE Solution Inside Blade Chassis

BLADE Network Technologies, Inc. (BLADE) “officially” announces today the delivery of the industry’s first and only fully integrated Fibre Channel over Ethernet (FCoE) solution inside a blade chassis. This integration significantly reduces power, cost, space and complexity compared to external FCoE implementations.

You may recall that I blogged about this the other day (click here to read); however, I left off one bit of information. The BNT (BLADE Network Technologies) Virtual Fabric 10Gb Switch Module does not require the QLogic Virtual Fabric Extension Module to function. It will work with an existing Top-of-Rack (TOR) convergence switch from Brocade or Cisco, acting as a 10Gb switch module and feeding the converged 10Gb link up to the TOR switch. Since it is a switch module, you can connect as few as 1 uplink to your TOR switch, therefore saving connectivity costs, as opposed to a pass-thru option (click here for details on the pass-thru option.)

Yes – this is the same architectural design that the Cisco Nexus 4001i provides; however, there are a couple of differences:

BNT Virtual Fabric Switch Module (IBM part #46C7191) – 10 x 10Gb Uplinks, $11,199 list (U.S.)
Cisco Nexus 4001i Switch (IBM part #46M6071) – 6 x 10Gb Uplinks, $12,999 list (U.S.)

While BNT provides 4 extra 10Gb uplinks, I can’t really picture anyone using all 10 ports. It does have a lower list price, but I encourage you to check your actual price with your IBM partner, as the actual pricing may be different. Regardless of whether you choose BNT or Cisco to connect into your TOR switch, don’t forget the transceivers! They add much more $$ to the overall cost, and without them you are hosed.
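To see how quickly transceivers change the math, here’s a minimal sketch. The switch list prices come from above, but the SFP+ price and the number of uplinks you actually cable are made-up placeholders – plug in your own numbers.

```python
# A rough cost sketch, not a quote: switch list prices are from this post,
# but the SFP+ transceiver price and uplink count are hypothetical placeholders.

switch_list_price = {
    "BNT Virtual Fabric 10Gb Switch Module": 11_199,  # 10 x 10Gb uplinks
    "Cisco Nexus 4001i Switch": 12_999,               # 6 x 10Gb uplinks
}

assumed_sfp_plus_price = 1_000  # hypothetical per-transceiver cost (USD)
uplinks_in_use = 4              # hypothetical number of uplinks you populate

for switch, price in switch_list_price.items():
    total = price + uplinks_in_use * assumed_sfp_plus_price
    print(f"{switch}: ${total:,} list with {uplinks_in_use} transceivers")
```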

About the BNT Virtual Fabric 10Gb Switch Module
The BNT Virtual Fabric 10Gb Switch Module includes the following features and functions:

  • Form-factor
    • Single-wide high-speed switch module (fits in IBM BladeCenter H bays #7 and 9.) 
  • Internal ports
    • 14 internal auto-negotiating ports: 1 Gb or 10 Gb to the server blades
    • Two internal full-duplex 100 Mbps ports connected to the management module
  • External ports
    • Up to ten 10 Gb SFP+ ports (also designed to support 1 Gb SFP if required, flexibility of mixing 1 Gb/10 Gb)
    • One 10/100/1000 Mb copper RJ-45 used for management or data
    • An RS-232 mini-USB connector for serial port that provides an additional means to install software and configure the switch module
  • Scalability and performance
    • Autosensing 1 Gb/10 Gb internal and external Ethernet ports for bandwidth optimization

To read the extensive list of details about this switch, please visit the IBM Redbook located here.

Cisco Takes Top 8 Core VMware VMmark Server Position

Cisco is getting some (more) recognition with their UCS blade server product, as they recently achieved the top position for “8 Core Server” on VMware’s VMmark benchmark tool. VMmark is the industry’s first (and only credible) virtualization benchmark for x86-based computers. According to the VMmark website, the Cisco UCS B200 blade server reached a score of 25.06 @ 17 tiles. A “tile” is simply a collection of virtual machines (VMs) executing a set of diverse workloads designed to represent a typical work environment. The total number of tiles that a server can handle provides a detailed measurement of that server’s consolidation capacity.
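If memory serves, each VMmark 1.x tile is made up of six workload VMs, so a tile count translates roughly into a VM count. Here’s a minimal sketch – treat the six-VMs-per-tile figure as an assumption and check the VMmark run rules for the exact tile composition.

```python
# Back-of-the-envelope translation of a VMmark tile count into a VM count.
# Assumes the VMmark 1.x convention of 6 workload VMs per tile.

vms_per_tile = 6   # assumed tile size (workload VMs per tile)
tiles = 17         # the Cisco UCS B200 result referenced above
score = 25.06

print(f"Score {score} @ {tiles} tiles -> roughly {tiles * vms_per_tile} workload VMs on one blade")
```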

Cisco’s Winning Configuration
So – how did Cisco reach the top server spot?  Here’s the configuration:

server config:

  • 2 x Intel Xeon X5570 Processors
  • 96GB of RAM (16 x 8GB)
  • 1 x Converged Network Adapter (Cisco UCS M71KR-Q)

storage config:

  • EMC CX4-240
  • Cisco MDS 9130
  • 1154.27GB Used Disk Space
  • 1024MB Array Cache
  • 41 disks used across 4 enclosures/shelves (1 with 14 disks, 3 with 9 disks)
  • 37 LUNs used
    • 17 LUNs at 38GB (file server + mail server) over 20 x 73GB SSDs
    • 17 LUNs at 15GB (database) + 2 LUNs at 400GB (misc) over 16 x 450GB 15k disks
    • 1 LUN at 20GB (boot) over 5 x 300GB 15k disks
  • RAID 0 for VMs, RAID 5 for the VMware ESX 4.0 O/S

While first place on the VMmark page (8 cores) shows Fujitsu’s RX300, it’s important to note that it was reached using Intel’s W5590 processor – a processor designed for “workstations,” not servers. Second place among server-class processors currently shows HP’s BL490 at 24.54 (@ 17 tiles).

Thanks to Omar Sultan (@omarsultan) for Tweeting about this and to Harris Sussman for blogging about it.

384GB RAM in a Single Blade Server? How Cisco Is Making it Happen (UPDATED 1-22-10)

UPDATED 1/22/2010 with new pictures 
Cisco UCS B250 M1 Extended Memory Blade Server

Cisco’s UCS server line is already getting lots of press, but one of the biggest points of interest is their upcoming Cisco UCS B250 M1 Blade Server. This server is a full-width server occupying two of the 8 server slots available in a single Cisco UCS 5108 blade chassis. The server can hold up to 2 x Intel Xeon 5500 Series processors and 2 x dual-port mezzanine cards, but the magic is in the memory – it has 48 memory slots.

This means it can hold 384GB of RAM using 8GB DIMMS.  This is huge for the virtualization marketplace, as everyone knows that virtual machines LOVE memory.  No other vendor in the marketplace is able to provide a blade server (or any 2 socket Intel Xeon 5500 server for that matter) that can achieve 384GB of RAM. 

So what’s Cisco’s secret?  First, let’s look at what Intel’s Xeon 5500 architecture looks like.

[Diagram: Intel Xeon 5500 memory architecture]

As you can see above, each Intel Xeon 5500 CPU has its own memory controller, which in turn has 3 memory channels. Intel’s design limitation is 3 memory DIMMs (DDR3 RDIMM) per channel, so the most a traditional server can have is 18 memory slots, or 144GB of RAM with 8GB DDR3 RDIMMs.

With the UCS B250 M1 blade server, Cisco adds an additional 15 memory slots per CPU – 30 more slots per server – for a total of 48 memory slots, which works out to 384GB of RAM with 8GB DDR3 RDIMMs.
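If you want to check the arithmetic yourself, here’s a quick sketch of the slot math; the per-channel counts come straight from the description above.

```python
# Quick sanity check on the memory math above -- a sketch, not a spec sheet.

dimm_gb = 8  # 8GB DDR3 RDIMMs

# Traditional Xeon 5500 design: 2 CPUs x 3 channels x 3 DIMMs per channel.
traditional_slots = 2 * 3 * 3
print(f"Traditional 2-socket server: {traditional_slots} slots = "
      f"{traditional_slots * dimm_gb}GB")

# Cisco UCS B250 M1: 8 DIMM slots per channel behind the ASICs.
b250_slots = 2 * 3 * 8
print(f"Cisco UCS B250 M1: {b250_slots} slots = {b250_slots * dimm_gb}GB")
```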

[Diagram: Cisco UCS B250 M1 memory layout]

How do they do it? Simple – they put in 5 more memory DIMM slots per channel, then present all 24 memory DIMMs across the 3 channels to ASICs that sit between the memory controller and the memory channels. Each ASIC handles 8 memory DIMMs and presents them to the memory controller as a single larger logical DIMM (1 x 32GB DIMM). That makes 3 ASICs per CPU, representing 192GB of RAM per CPU (or 384GB in a dual-CPU config.)

It’s quite an ingenious approach, but don’t get caught up in thinking about 384GB of RAM – think about 48 memory slots. In the picture below I’ve grouped each ASIC with its 8 DIMMs in a green square (click to enlarge.)

Cisco UCS B250 ASICs Grouped with 8 Memory DIMMs

With that many slots, you can get to 192GB of RAM using 4GB DDR3 RDIMMs – which currently cost about 1/5th of the 8GB DIMMs. That’s the real value in this server.
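To put a (hypothetical) dollar figure on that point, here’s a sketch; the “roughly 1/5th the cost” ratio is from above, but the absolute 8GB DIMM price is invented for illustration.

```python
# Why 48 slots matter even if you skip 8GB DIMMs. The ~1/5th price ratio comes
# from this post; the absolute 8GB DIMM price below is a hypothetical placeholder.

assumed_8gb_dimm_price = 1_000          # hypothetical (USD)
price_4gb = assumed_8gb_dimm_price / 5  # ~1/5th the cost, per the post
slots = 48

print(f"384GB with 8GB DIMMs: ${slots * assumed_8gb_dimm_price:,.0f}")
print(f"192GB with 4GB DIMMs: ${slots * price_4gb:,.0f}")
```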

Cisco has published a white paper on this patented technology at http://www.cisco.com/en/US/prod/collateral/ps10265/ps10280/ps10300/white_paper_c11-525300.html so if you want to get more details, I encourage you to check it out.

Interesting HP Server Facts (from IDC)

As you can see from my blog title, I try to focus on “all things blade servers”; however, I came across this bit of information that I thought would be fun to blog about. An upfront warning – this is an HP-biased blog post, so sorry for those of you who are Cisco, Dell or IBM fans.

Market research firm IDC released a quarterly update to its Worldwide Quarterly Server Tracker, citing market share figures for the 3rd calendar quarter of 2009 (3Q09). From this report, there are a few fun HP server facts (thanks to HP for passing these along to me):

HP is the #1 vendor in worldwide server shipments for the 30th consecutive quarter (7.5 years). HP shipped more than 1 out of every 3 servers worldwide and captured 36.5 percent total unit shipment share.

According to IDC:

  • HP shipped over 161,000 more servers than #2 Dell.
  • HP shipped 2.6 times as many servers as #3 IBM.
  • HP shipped 9.0 times as many servers as #4 Fujitsu.
  • HP shipped 12.9 times as many servers as #5 Sun.
  • HP ended up in a statistical tie with IBM for #1 in total server revenue market share with 30.9 percent. This includes all server revenue (UNIX and x86.)

HP leads the blade server market, with a 50.7 percent revenue share, and a 47.7 percent unit share.

I blogged about this in early December (see this link for details), but it’s no surprise that HP is leading the pack in blade sales. Their field sales team is actively promoting blades for nearly every server opportunity, and they continue to make innovative additions to their blades (like 10Gb NICs standard on G6 blades.) HP Integrity blades claimed the #1 position in revenue share for the RISC+EPIC blade segment with a 53.2 percent share, gaining 1.8 points year over year.

For the 53rd consecutive quarter, more than 13 years, HP ProLiant is the x86 server market share leader in both factory revenue and units, shipping more than 1 out of every 3 servers in this market with a 36.9 percent unit share.

HP’s x86 revenue share was 14.6 points higher than that of its nearest competitor, Dell, and 19.2 percentage points higher than IBM’s.

For the 3 major operating environments combined – UNIX®, Windows and Linux, representing 99.3 percent of all servers shipped worldwide – HP is number 1 worldwide in server unit shipment and revenue market share.

HP holds a 36.5 percent unit market share worldwide, which is 2.6 times more than IBM’s unit market share and 12.9 times the unit share of Sun.

HP holds a 35.4 percent revenue market share worldwide which is 2.2 times the revenue share of Dell and 4.0 times the revenue share of Sun.

FINAL NOTE: All of the preceding market share figures are for the 3rd quarter (unless otherwise noted) and represent worldwide results as reported by the IDC Worldwide Quarterly Server Tracker for Q309, December 2009.

Cisco Wants IBM’s Blade Servers??

In an unusual move Tuesday, Cisco CEO John Chambers commented that Cisco is still open to a blade server “partnership” with IBM. “I still firmly believe that it’s in IBM’s best interests to work with us. That door will always be open,” Chambers told the audience at Cisco’s financial analyst conference yesterday at Cisco’s HQ in San Jose.

John Chambers and other executives spent much of the day talking with financial analysts about Cisco’s goal to become the preeminent IT and communications vendor because of the growing importance of virtualization, collaboration and video, a move demonstrated by their recent partnership announcement with EMC and VMware.  According to reports, analysts at the event said they think Chambers is sincere about his willingness to work with IBM. The two companies have much in common, such as their enterprise customer base, and Cisco’s products could fit into IBM’s offerings, said Mark Sue of RBC Capital Markets.

So – is this just a move for Cisco to tighten their relationship with IBM in the hopes of growing to an entity that can defeat HP and their BladeSystem sales, or has Cisco decided that the server market is best left to manufacturers who have been selling servers for 20+ years?  What are your thoughts?  Please feel free to leave some comments and let me know.

Cisco, EMC and VMware Announcement – My Thoughts


By now I’m sure you’ve read, heard or seen Tweeted the announcement that Cisco, EMC and VMware have come together and created the Virtual Computing Environment coalition. So what does this announcement really mean? Here are my thoughts:

Greater Cooperation and Compatibility
Since these 3 top IT giants are working together, I expect to see greater cooperation between all three vendors, which will lead to a better understanding of what each vendor is offering. More important, though, is that we’ll have reference architectures that can be a starting point for designing a robust datacenter. This will help validate that an “optimized datacenter” is a solution every customer should consider.

Technology Validation
With the introduction of the Xeon 5500 processor from Intel earlier this year and the announcement of the Nehalem EX coming in early Q1 2010, packing more and more virtual machines onto a single host server is becoming more prevalent. No longer is the processor or memory the bottleneck – now it’s the I/O. With the introduction of Converged Network Adapters (CNAs), servers now have access to Converged Enhanced Ethernet (CEE), or Data Center Ethernet (DCE), providing up to 10Gb of bandwidth running at roughly 80% efficiency with lossless packet delivery. With this lossless Ethernet, I/O is no longer the bottleneck.

VMware offers the top selling virtualization software, so it makes sense they would be a good fit for this solution.

Cisco has a Unified Computing System that offers the ability to connect a server running a CNA to an interconnect switch, which splits the data out into Ethernet and storage traffic. It also has a building-block design that makes it easy to add new servers – a key message in the coalition announcement.

EMC offers a storage platform that will take the storage traffic from the Cisco UCS 6120XP interconnect switch, and they have a vested interest in both VMware and Cisco, so this marriage of the 3 top IT vendors is a great fit.

Announcement of Vblock™ Infrastructure Packages
According to the announcement, the Vblock Infrastructure Packages “will provide customers with a fundamentally better approach to streamlining and optimizing IT strategies around private clouds.” The packages will be fully integrated, tested and validated, combining best-in-class virtualization, networking, computing, storage, security, and management technologies from Cisco, EMC and VMware with end-to-end vendor accountability. My thought on these packages is that they are really nothing new. Cisco’s UCS has been around, VMware vSphere has been around and EMC’s storage has been around. The biggest message from this announcement is that there will soon be “bundles” that simplify customers’ solutions. Will that take away from Solution Providers’ ability to implement unique solutions? I don’t think so. Although this announcement does not provide any new product, it does mark the beginning of an interesting relationship between 3 top IT giants, and I think it will definitely change the industry – it will be interesting to see what follows.

UPDATE – click here to check out a 3D model of the Vblock architecture.

Cisco’s Unified Computing System Management Software

Cisco’s own Omar Sultan and Brian Schwarz recently blogged about Cisco’s Unified Computing System (UCS) Manager software and offered up a pair of videos demonstrating its capabilities. In my opinion, the management software is the magic that is going to push Cisco out of the Visionaries quadrant of the Gartner Magic Quadrant for Blade Servers and into the Leaders quadrant.

The Cisco UCS Manager is the centralized management interface that integrates the entire set of Cisco Unified Computing System components. The management software not only participates in UCS blade server provisioning, but also in device discovery, inventory, configuration, diagnostics, monitoring, fault detection, auditing, and statistics collection.

On Omar’s Cisco blog, located at http://blogs.cisco.com/datacenter, Omar and Brian posted two videos. Part 1 offers a general overview of the management software, whereas Part 2 highlights the capabilities of profiles.

I encourage you to check out the videos – they did a great job with them.

Cisco’s New Virtualized Adapter (aka “Palo”)

Previously known as “Palo,” Cisco’s virtualized adapter allows a server to split its 10Gb pipes into numerous virtual pipes – multiple NICs or multiple Fibre Channel HBAs (see the Palo adapter image). Although the card shown in the image is a normal PCIe card, the initial launch of the card will be in the Cisco UCS blade servers.

So, What’s the Big Deal?

When you look at server workloads, their needs vary – web servers need a pair of NICs, whereas database servers may need 4+ NICs and 2+ HBAs. By having the ability to split the 10Gb pipe into virtual devices, you can set up profiles inside Cisco’s UCS Manager and apply them to match a specific server’s needs. An example of this would be a server used for VMware VDI (6 NICs and 2 HBAs) during the day that is repurposed at night as a computational server needing only 4 NICs.
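To make the day/night example concrete, here’s a toy sketch of profiles expressed as plain data. These dictionaries are purely illustrative – they are not UCS Manager syntax or API calls.

```python
# A toy illustration of the profile idea described above: the same blade gets
# a different mix of virtual devices depending on which profile is applied.

profiles = {
    "vdi_daytime":   {"vnics": 6, "vhbas": 2},  # VMware VDI during the day
    "compute_night": {"vnics": 4, "vhbas": 0},  # computational work at night
}

def apply_profile(blade: str, name: str) -> None:
    p = profiles[name]
    print(f"{blade}: carving the 10Gb pipe into "
          f"{p['vnics']} vNICs and {p['vhbas']} vHBAs ({name})")

apply_profile("blade-1", "vdi_daytime")
apply_profile("blade-1", "compute_night")
```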

Another thing to note: although the image shows 128 virtual devices, that is only the theoretical limit. In reality, the number of virtual devices depends on the number of connections to the Fabric Interconnects. As I previously posted, the server chassis has a pair of 4-port Fabric Extenders (aka FEX) that uplink to the UCS 6100 Fabric Interconnect. If only 1 of the 4 ports is uplinked to the UCS 6100, then only 13 virtual devices will be available. If 2 FEX ports are uplinked, 28 virtual devices will be available. If 4 FEX uplink ports are used, 58 virtual devices will be available.
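Here’s that scaling summarized as a quick lookup – the numbers are simply the ones quoted above, nothing more authoritative; check Cisco’s documentation for hard limits.

```python
# Virtual-device counts quoted in this post, keyed by how many FEX ports are
# uplinked to the UCS 6100 Fabric Interconnect.

virtual_devices_by_fex_uplinks = {1: 13, 2: 28, 4: 58}

for uplinks, devices in sorted(virtual_devices_by_fex_uplinks.items()):
    print(f"{uplinks} FEX uplink(s) -> {devices} virtual devices")
```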

Will the ability to carve up your 10Gb pipes into smaller ones make a difference?  It’s hard to tell.  I guess we’ll see when this card starts to ship in December of 2009.

(UPDATED) Officially Announced: IBM’s Nexus 4000 Switch: 4001I (PART 2)

I’ve gotten a lot of response from my first post, “REVEALED: IBM’s Nexus 4000 Switch: 4001I,” and more information is coming out quickly, so I decided to post a part 2. IBM officially announced the switch on October 20, 2009, so here’s some additional information:

  • The Nexus 4001I Switch for the IBM BladeCenter is part # 46M6071 and has a list price of $12,999 (U.S.) each
  • In order for the Nexus 4001I switch for the IBM BladeCenter to connect to an upstream FCoE switch, an additional software purchase is required. This item will be part # 49Y9983, “Software Upgrade License for Cisco Nexus 4001I.” This license upgrade allows the Nexus 4001I to handle FCoE traffic. It has a U.S. list price of $3,899.
  • The Cisco Nexus 4001I for the IBM BladeCenter will be compatible with the following blade server expansion cards:
    • 2/4 Port Ethernet Expansion Card, part # 44W4479
    • NetXen 10Gb Ethernet Expansion Card, part # 39Y9271
    • Broadcom 2-port 10Gb Ethernet Exp. Card, part # 44W4466
    • Broadcom 4-port 10Gb Ethernet Exp. Card, part # 44W4465
    • Broadcom 10 Gb Gen 2 2-port Ethernet Exp. Card, part # 46M6168
    • Broadcom 10 Gb Gen 2 4-port Ethernet Exp. Card, part # 46M6164
    • QLogic 2-port 10Gb Converged Network Adapter, part # 42C1830
  • (UPDATED 10/22/09) The newly announced Emulex Virtual Adapter WILL NOT work with the Nexus 4001I IN VIRTUAL NIC (vNIC) mode.  It will work in pNIC mode according to IBM.

The Cisco Nexus 4001I switch for the IBM BladeCenter is a new approach to getting converged network traffic. As I posted a few weeks ago in “How IBM’s BladeCenter works with Cisco Nexus 5000,” before the Nexus 4001I was announced, in order to get your blade servers to communicate with a Cisco Nexus 5000 you had to use a CNA and a 10Gb Pass-Thru Module, as shown in that post’s diagram. The pass-thru module used in that solution requires a direct connection from the pass-thru module to the Cisco Nexus 5000 for every blade server that requires connectivity. This means that for 14 blade servers, 14 connections are required to the Cisco Nexus 5000. This solution definitely works – it just eats up 14 Nexus 5000 ports. At $4,999 list (U.S.), plus the cost of the GBICs, the “pass-thru” scenario may be a good solution for budget-conscious environments.

In comparison, with the IBM Nexus 4001I switch, we can now have as few as 1 uplink from the Nexus 4001I to the Cisco Nexus 5000. This allows you to have more open ports on the Cisco Nexus 5000 for connections to other IBM BladeCenters with Nexus 4001I switches, or for connectivity from your rack-based servers with CNAs.

Bottom line: the Cisco Nexus 4001I switch will reduce your port requirements on your Cisco Nexus 5000 or Nexus 7000 switch by allowing up to 14 servers to uplink via 1 port on the Nexus 4001I.
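To see the port savings across more than one chassis, here’s a minimal sketch; the chassis count and per-chassis uplink count are made up for illustration.

```python
# Sketch of Nexus 5000 port consumption: pass-thru burns one upstream port per
# blade, while the Nexus 4001I only needs as many uplinks as you choose to cable.
# The chassis count and per-chassis uplink count below are hypothetical.

chassis_count = 4
blades_per_chassis = 14
uplinks_per_4001i = 2  # hypothetical uplinks cabled per chassis

passthru_ports = chassis_count * blades_per_chassis
nexus4001i_ports = chassis_count * uplinks_per_4001i

print(f"Pass-thru:   {passthru_ports} Nexus 5000 ports consumed")
print(f"Nexus 4001I: {nexus4001i_ports} Nexus 5000 ports consumed")
```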

For more details on the IBM Nexus 4001I switch, I encourage you to go to the newly released IBM Redbook for the Nexus 4001I Switch.

IBM Announces Emulex Virtual Fabric Adapter for BladeCenter…So?

Emulex and IBM announced today the availability of a new Emulex expansion card for blade servers that allows up to 8 virtual NICs per adapter. The “Emulex Virtual Fabric Adapter for IBM BladeCenter” (IBM part # 49Y4235) is a CFF-H expansion card based on industry-standard PCIe architecture that can operate as a “Virtual NIC Fabric Adapter” or as a dual-port 10 Gb or 1 Gb Ethernet card.

When operating as a Virtual NIC (vNIC), each of the 2 physical ports appears to the blade server as 4 virtual NICs, for a total of 8 virtual NICs per card. According to IBM, the default bandwidth for each vNIC is 2.5 Gbps. The cool feature of this mode is that the bandwidth for each vNIC can be configured anywhere from 100 Mbps to 10 Gbps, up to a maximum of 10 Gb per virtual port. The one catch with this mode is that it ONLY operates with the BNT Virtual Fabric 10Gb Switch Module, which provides independent control for each vNIC. This means no connection to Cisco Nexus…yet. According to Emulex, firmware updates coming later (Q1 2010??) will allow this adapter to handle FCoE and iSCSI as a feature upgrade. Not sure if that means compatibility with the Cisco Nexus 5000 or not. We’ll have to wait and see.
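For a feel of how those vNIC allocations add up per physical port, here’s a small sketch. The specific splits and the “must fit within 10Gb” check are my own assumptions for illustration, not documented Emulex/BNT behavior.

```python
# A sketch of the vNIC bandwidth model described above: two 10Gb physical
# ports, four vNICs each, 2.5Gbps default per vNIC. The oversubscription check
# is my own illustration -- I haven't verified how the BNT switch enforces or
# shares per-vNIC limits.

physical_port_gbps = 10.0

vnic_allocations_gbps = {
    "port0": [2.5, 2.5, 2.5, 2.5],  # default layout
    "port1": [5.0, 2.5, 1.0, 0.1],  # a custom mix between 100Mbps and 10Gbps
}

for port, allocations in vnic_allocations_gbps.items():
    total = sum(allocations)
    status = "fits" if total <= physical_port_gbps else "oversubscribed"
    print(f"{port}: {len(allocations)} vNICs, {total:.1f}Gbps allocated ({status})")
```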

When used as a normal Ethernet adapter (10Gb or 1Gb), aka “pNIC mode,” the card is viewed as a standard 10 Gbps or 1 Gbps 2-port Ethernet expansion card. The big difference here is that it will work with any available 10 Gb switch or 10 Gb pass-thru module installed in I/O module bays 7 and 9.

[Diagram: BladeCenter H I/O module bays]

So What?
I’ve known about this adapter since VMworld, but I haven’t blogged about it because I just don’t see a lot of value. HP has had this functionality for over a year now in their Virtual Connect Flex-10 offering, so this technology is nothing new. Yes, it would be nice to set up a NIC in VMware ESX that only uses 200Mb of a pipe, but what’s the difference between a fake NIC that “thinks” it’s only able to use 200Mb and a big fat 10Gb pipe for all of your I/O traffic? I’m just not sure, but am open to any comments or thoughts.
