Tag Archives: blade servers

IBM to Acquire BLADE Network Technologies

IBM announced on Sept. 27, 2010 that it has entered into a definitive agreement to acquire BLADE Network Technologies (BLADE), a privately held company based in Santa Clara, CA. BLADE specializes in software and devices that route data and transactions to and from servers. The acquisition is anticipated to close in the fourth quarter of 2010, subject to the satisfaction of customary closing conditions and applicable regulatory reviews. Financial terms were not disclosed.

To read the full press release, continue reading.

REVISED HP Loses Blade Market Share in Q2

Revised 9/29/2010 – I previously titled this blog post, “HP Loses Blade Server Market Share to IBM…” however I have since re-thought that statement. The report from IDC does not specify where HP’s blade server market share went from Q1 2010 – it only provides Q2 2010 market share numbers. I regret any confusion I may have caused.
Continue reading

One Stop Shop for Blade Server Links

I continuously find myself revisiting the same links to find additional information regarding blade servers, so I finally had a revelation: why not consolidate the links and put them on my site? Introducing a new addition to my blog – the "Helpful Links" page. Located at the top of every page, the Helpful Links page is designed to be a one-stop shop for the best links related to blade servers. My goal is to continue to update this page, so if you have links you want to see, or if something is broken, please let me know. This is YOUR site. I built this site for you, so in the words of Jerry Maguire, "Help me, help you." Thanks for your continued support.

Go to Helpful Links Section »


New Study Shows Dell M1000e the Most Power-Efficient Blade Chassis

A white paper released today by Dell shows that the Dell M1000e blade chassis infrastructure offers significant power savings compared to equivalent HP and IBM blade environments. In fact, the results were audited by an outside party, Enterprise Management Associates (http://www.enterprisemanagement.com). After the controversy with the Tolly Group report comparing HP and Cisco, I decided to take the time to investigate these findings a bit deeper. Continue reading

VMworld 2010: Up Close and Personal with HP and Dell Blades

Last week at VMworld 2010 I had the opportunity to get some great pictures of HP and Dell's newest blade servers: the HP ProLiant BL620c G7, the HP ProLiant BL680c G7, and the Dell PowerEdge M610x and M710HD. These newest blade servers are exciting offerings from HP and Dell, so I encourage you to take a few minutes to look. Continue reading

My Interview with HP Vice President of Converged Infrastructure, Doug Oathout

One of the biggest things revealed at the HP Technology Forum in Las Vegas last month was the introduction of the HP Virtual Connect FlexFabric module. According to HP, this new module allows "one module" to access "any data or storage network" and is a key piece of HP's Converged Infrastructure. Thanks to my friends at SDR News, I had the opportunity to discuss HP's converged infrastructure with the guy in charge of designing HP's strategy: Doug Oathout, Vice President of Converged Infrastructure at HP. Continue reading

LEFT BEHIND in The Venetian Casino Data Center (Really!)

They make it look so complicated in the movies: detailed covert operations to hack into a casino's mainframe, preceded by weeks of staged rehearsals. But I'm here to tell you it's much easier than that.

This is my story of how I had 20 seconds of complete access to The Venetian Casino’s data center, and lived to tell about it.

Continue reading

Will LIGHT Replace Cables in Blade Servers?

Part of the Technology Behind Lightfleet's Optical Interconnect Technology (courtesy Lightfleet.com)

CNET.com recently reported that for the past 7 years, a company called Lightfleet has been working on a way to replace the cabling and switches used in blade environments with light, and in fact has already delivered a prototype to Microsoft Labs. Continue reading

Details on Intel’s Nehalem EX (Xeon 7500 and Xeon 6500)

Intel is scheduled to "officially" announce the details of its Nehalem EX CPU platform today. Although the details have been out for quite a while, I wanted to highlight some key points.

Intel Xeon 7500 Processor
This processor family will be the flagship replacement for the existing Xeon 7400 architecture. Enhancements include:
•Nehalem microarchitecture
•8 cores per CPU
•24MB shared L3 cache
•4 memory buffers per CPU
•16 DIMM slots per CPU, for a total of 64 DIMM slots supporting up to 1 terabyte of memory (across 4 CPUs)
•72 PCIe Gen2 lanes
•Scaling from 2 to 256 sockets
•Intel Virtualization Technologies

Intel Xeon 6500 Processor
Perhaps the coolest addition to Intel's Nehalem EX announcement is the ability for certain vendors to cut the architecture in half and deliver the same caliber of horsepower across 2 CPUs. The Xeon 6500 series offers 2-socket scaling, with each CPU having the same qualities as its bigger brother, the Xeon 7500. See below for details on both offerings.

Additional Features
Since the Xeon 6500/7500 processors are modeled on the familiar Nehalem microarchitecture, certain well-known features carry over. Both Turbo Boost and HyperThreading are included and will give users better performance in their high-end servers.

Memory
Probably the biggest win among the features Intel is bringing with the Nehalem EX announcement is the ability to have more memory and bigger memory pipes. Each CPU will have 4 high-speed "Scalable Memory Interconnects" (SMIs) that serve as the highways for the memory to communicate with the CPU. As with the existing Nehalem architecture, each CPU has a dedicated memory controller that provides access to the memory. In the Nehalem EX design, each CPU has 4 pathways, each with a Scalable Memory Buffer (SMB) that provides access to 4 memory DIMMs. So, in total, each CPU will have access to 16 DIMMs across 4 pathways, and by simple math a server with 4 CPUs will be able to hold up to 64 memory DIMMs. Some other key facts:
•It will support up to 16GB DDR3 DIMMs
•It will support up to 1TB of memory with 16GB DIMMs
•It will support DDR3 DIMMs up to 1066MHz, in Registered, Single-Rank, Dual-Rank and Quad-Rank flavors.
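
To make that arithmetic concrete, here is a minimal Python sketch of the topology described above (the constant names are my own, not Intel's; the values come straight from this post):

```python
# Minimal sketch of the Nehalem EX memory topology described above.
# Constant names are illustrative; values come from the post.
SMI_LINKS_PER_CPU = 4   # Scalable Memory Interconnects (pathways) per CPU
DIMMS_PER_SMB = 4       # DIMMs behind each Scalable Memory Buffer
MAX_DIMM_GB = 16        # largest supported DDR3 DIMM size

def max_memory_gb(cpus: int) -> int:
    """Maximum memory, in GB, for a server with the given CPU count."""
    dimms_per_cpu = SMI_LINKS_PER_CPU * DIMMS_PER_SMB  # 4 x 4 = 16 DIMMs/CPU
    total_dimms = cpus * dimms_per_cpu                 # 4 CPUs -> 64 DIMMs
    return total_dimms * MAX_DIMM_GB                   # 64 x 16GB = 1024GB

print(max_memory_gb(4))  # 1024GB, i.e. the 1TB ceiling quoted above
```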

Another important note: the actual system memory speed will depend on the specific processor's capabilities (see the reference table below for max SMI link speeds per CPU):
•6.4GT/s SMI link speed, capable of running memory speeds up to 1066MHz
•5.86GT/s SMI link speed, capable of running memory speeds up to 978MHz
•4.8GT/s SMI link speed, capable of running memory speeds up to 800MHz
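
As a quick sketch of that mapping (a simple lookup of my own construction, using only the values listed above):

```python
# SMI link speed (GT/s) -> maximum memory speed (MHz), per the list above.
SMI_TO_MEMORY_MHZ = {
    6.4: 1066,
    5.86: 978,
    4.8: 800,
}

def max_memory_speed_mhz(smi_link_gt_s: float) -> int:
    """Look up the fastest supported memory speed for a given SMI link."""
    return SMI_TO_MEMORY_MHZ[smi_link_gt_s]

print(max_memory_speed_mhz(6.4))  # 1066
```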

Here’s a great reference chart from Intel on the features across the individual CPU offerings:

Finally, take a look at some comparisons between the Nehalem EX (Xeon 7500) and the previous generation, Xeon 7400:

That’s it for now.  Check back later for more specific details on Dell, HP, IBM and Cisco’s new Nehalem EX blade servers.