Digging around the web tonight, I stumbled upon an interesting Czech website, ExtraHardware.cz, that appears to offer some details on Intel’s upcoming E5-2600 v2 processor. I’m not sure of the timeline for when the E5-2600 v2 will be released, but I imagine we can expect it to be available sometime in the next few months. Here is a quick summary of what was revealed.
- Up to 12 cores per CPU
- Shared cache of up to 30MB
- Memory speeds of up to 1866MHz
- Socket compatibility with existing blade servers with Intel Xeon E5-2600 CPUs
As an added bonus, here’s the Czech website roughly translated:
A 2-socket blade server with 24 CPU cores would make a powerful virtualization host, but I’m curious to know your thoughts.
Kevin Houston is the founder and Editor-in-Chief of BladesMadeSimple.com. He has over 15 years of experience in the x86 server marketplace. Since 1997 Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization. Kevin works for Dell as a Server Sales Engineer covering the Global Enterprise market.
Disclaimer: The views presented in this blog are personal views and may or may not reflect any of the contributors’ employer’s positions. Furthermore, the content is not reviewed, approved or published by any employer.
My opinion is that with this amount of CPU power in a blade, it will quite often be I/O limited; if it runs 60-100 VMs, it can generate huge network and storage I/O. The network and storage back-end should be designed properly to make sure it can handle the load.
I disagree with your comment about I/O being limited in a blade server. I can show you a 2-CPU blade with 8 x 10GbE ports, which should be plenty of bandwidth for 100 VMs. Thanks for the comment, though.
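For context, a rough back-of-envelope sketch of that bandwidth claim (assuming, purely for illustration, that all eight 10GbE ports are available to VM traffic and that it spreads evenly across 100 VMs):

```python
# Back-of-envelope bandwidth-per-VM estimate (illustrative assumptions,
# not a measurement from any specific blade or vendor datasheet).
ports = 8                # 8 x 10GbE ports on the blade
port_speed_gbps = 10     # line rate per port
vms = 100                # VMs on the host

aggregate_gbps = ports * port_speed_gbps        # total host bandwidth
per_vm_mbps = aggregate_gbps * 1000 / vms       # even split across VMs

print(f"Aggregate: {aggregate_gbps} Gbps")
print(f"Per VM:    {per_vm_mbps:.0f} Mbps")
```

Under those assumptions, that works out to roughly 800 Mbps per VM, which supports the "plenty of bandwidth" argument for typical workloads; real traffic is bursty and uneven, so actual headroom varies.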
Depends on the vendor and the form factor.
I don’t see any problem with that. I run a capacity platform for an outsourcing company with about 10,000 VMs. Our hosts hold 50-150 VMs each, with 2 x 10Gb Ethernet and 2 x 8Gb FC. I seldom see more than 1 Gbit of network traffic (other than vMotion) per host. Storage is more, but still far from the max.
The real question is whether someone wants to have hundreds of VMs go down when an ESXi blade is having issues. It can also take a serious amount of time to evacuate all the VMs during maintenance.