Earlier this month, Intel's Sandy Bridge architecture finally made its way to the company's dual- and quad-socket-capable server processors with the new Xeon E5 product family. The launch is important to Intel's business, not only because of the growing server market, fueled by cloud computing initiatives and Internet-based companies, but also because of the company's push to expand into storage and networking equipment.

We had the opportunity to chat with Ajay Chandramouly, Cloud Computing and Data Center Industry Engagement Manager at Intel IT, who gave us some insight on how they have leveraged the Xeon processor family across their global data centers to drive performance and cost savings.

TS: First of all, thank you for taking the time to chat with us about Intel's latest server chip and its strategy for this market. Can you tell us a little bit about yourself and your job at Intel?

Sure, my name is Ajay Chandramouly, and I've been with Intel now for 10 years. Currently I'm in Intel IT, serving as Intel IT's Cloud Computing and Data Center Industry Engagement Manager. So, for this interview, what I can do is provide some insights from an actual end-user IT perspective on how the new Xeon E5 2600 product family could actually benefit a real IT shop.

TS: Back in November we reviewed your consumer-level flagship CPU, the Sandy Bridge-E based Core i7-3960X. We understand the new Xeon E5-2600 is based on very similar technology. Please go ahead and give us your full pitch on what's new and what the key features of the incoming Xeons are.

The new Xeon E5 2600 product family is based on the Sandy Bridge architecture. It offers the best combination of performance, capabilities and cost effectiveness, which can really benefit the entire data center, not just in servers and workstations, but also in storage devices and network switches. Because the new Xeon brings benefits to compute, storage and networking we feel it will become the foundation for public and private clouds.

Speaking from an Intel IT perspective, we intend to deploy the Xeon E5 widely across our entire environment, including our silicon design environment, manufacturing and our office and enterprise private cloud.

To address the question about what's new and what's so impressive about this, in summary the E5 2600 offers more of everything that affects performance. In other words, more cores, more cache, more memory, more integration with bandwidth across the entire platform to get your data where it needs to be faster than before.

Compared to our previous generation Xeon 5600, we now have up to two additional cores, up to 8MB more LLC (last-level cache), a fourth DDR3 memory channel, and I/O integrated onto the die itself, which reduces latency by 30%. This E5 is really a bandwidth machine, and the reason that's so important is that as you add more cores to the processor, you have more data that has to get in and out of the CPU as quickly as possible. So now that we have up to eight cores, each of them hyper-threaded, you effectively have up to sixteen threads, and the bottleneck has shifted to I/O.

And if you look at where the CPU gets its bandwidth from it comes primarily from three sources: one is the last-level cache, secondly from the memory bandwidth, and thirdly from access to remote memory.

Last-level cache is on-die memory with very low latency and very high bandwidth. We also implemented a ring topology interconnect, which allows all the cores to access the data simultaneously; logically it's one cache, but physically it's divided into multiple slices.

TS: You mention performance and cost effectiveness as key points for the new Xeon. Can you cite specific figures for improvements compared to the previous generation?

Absolutely. I can speak from actual test results using real workloads from our own Intel IT perspective. When we tested the Xeon E5 in our silicon design environment we found – using what we call our electronic design automation (EDA) workloads – up to a 55% performance improvement over the prior generation Xeon 5600.


On the energy efficiency side of things, the new Xeon E5 2600 is about 50% more energy efficient than our previous generation Xeon processors when you look at the total ratio of power to performance. We were able to achieve this reduction in power by scaling memory, cache and I/O to match core needs.

So essentially, the processor is designed to scale power depending on how many cores are in use. Now that we are delivering up to 16 effective threads, versus up to 12 in the prior generation 5600 series, the E5 2600 can deliver a lot more throughput in roughly the same power envelope.

The other thing about the E5 2600 is that the processor can tune interfaces to match the performance and power consumption across 23 different points of control, so that systems do not consume power unnecessarily but instead tightly link performance to the amount of energy that is consumed.

We feel that for energy efficiency this is really important.

From our own Intel IT perspective, our proactive technology refreshes are a critical part of our overall data center strategy and have been one of the single largest drivers for delivering business value back to Intel Corporation. For example, our proactive refresh strategy has helped Intel IT save hundreds of millions of dollars in efficiencies. By adopting latest-generation Xeon servers we've been able to achieve server consolidation ratios of up to 20:1 (when upgrading from a single-core Xeon), and that strategy will allow us to further reduce our data centers by another 35% over the next few years.

For other enterprise IT shops, whether you are short on data center space or are pushing the power and cooling limits of your facilities, it really becomes a no-brainer when you can replace roughly 20 old servers with one new E5 2600 system.

TS: And how does it compare to the competition?

We don't use AMD at all in our environment so I wouldn't know.

TS: What about actual cash savings from refreshing an entire data center with the latest Xeons? What can companies look forward to in order to justify the investment?

In our own private cloud we've actually saved up to $9 million to date, and expect another $6 million annually over the next four years. So that's real hard cash savings that we've been able to deliver back to the corporation. And that doesn't include non-cash benefits that we've seen through our private cloud efforts.

For example, provisioning a server used to take us 90 days back in 2009, before we began implementing our private cloud in our office and enterprise environment; now it takes as little as 45 minutes. That's a huge boost in productivity and helps our engineers go from idea to implementation within a day.

We've also found that moving to a blade form factor has helped improve our cloud TCO by 29%. As I mentioned before, it's not just about compute anymore, but refreshing across storage, networking and facilities is also critical to drive overall benefits. We found that upgrading to 10Gbit Ethernet has helped reduce our network costs by up to 25%. The new Xeon E5 is also going to be playing a big role in storage devices and we found that our storage refresh and optimization strategy has helped drive $9.2 million in savings.

These are all real-world benefits Intel IT has delivered.

TS: What are the biggest challenges in the data center today and what are you doing to address those needs?

The biggest challenges today can probably be summarized in five areas: performance, I/O bottlenecks, security, energy efficiency, and lastly storage and switching constraints. At Intel we recognize that just increasing compute performance alone does not address all the challenges that prevent IT from achieving the scale that they need. That's why we not only focus on improving performance, but on improving all aspects of the E5 2600. So that means supporting more platforms in the data center such as storage and networking, addressing the security risks that IT faces every day, helping to reduce operating costs and energy use, and improving I/O to get data where it needs to be when it needs to be there.

TS: The newly announced Xeon E5 chips are divided between server-oriented and workstation processors. We've talked a bit about benefits for servers and data centers. What are some of the benefits that businesses will see from upgrading workstations with new Xeon E5 processors?

We've tested the new E5 both in servers and workstations. At Intel IT we have tested the E5 2600 in high performance workstations in our silicon design environment, and we found very compelling benefits.


In the past our design engineers had to stagger design tasks due to limitations in processing power and the number of cores that were available. Now, with the new E5 2600 they can create and test designs more quickly using multiple EDA applications concurrently.

What that means from an end-user benefit or business value benefit is that this now allows for faster design iterations with more demanding design workloads, and helps us accelerate product time to market, impacting Intel's top-line.

We're seeing IT not only impacting bottom-line results, but IT now has an opportunity to affect the top-line, in this case by helping get products to market faster. In addition to that, these E5-based workstations have allowed us to do more validation cycles earlier so that we can identify and fix problems sooner in product development, and therefore improve product quality as well. So faster time to market and improved product quality are two of the biggest reasons why we are adopting the E5 2600 and why it will in fact become the standard for our Intel IT workstation deployments, including refreshing our older systems.

TS: In the past, clock speed was the most straightforward way to measure a processor's superiority. More recently that race shifted to number of cores and today the focus seems to be increasingly on efficiency. With Ivy Bridge you will soon move to a 22nm fabrication process and will also debut a new transistor design paradigm. What will this mean for Intel's server strategy once we see these advancements join the Xeon line-up?

I'd like to focus on the Xeon E5, because it is the latest and greatest product that OEMs have just delivered to market, so it's a bit premature to talk about the Ivy Bridge platform. The E5 just launched March 6 and we're really excited about deploying this product in our data centers.

TS: On the topic of efficiency. The dense server market is growing as companies look to curb electricity costs by deploying low-power servers for cloud-centric data centers. AMD recently made headlines with its acquisition of SeaMicro, which up until now has been selling "micro servers" using Intel chips. Give us your thoughts on this move and how it affects your high-density server strategy.

I'd rather not comment about the competition and their acquisitions. From my perspective it really hasn't affected anything. 

TS: Off the top of your head, what other industries that you wouldn't necessarily think of as technology-driven are using the Xeon platform to drive innovation or serve their customers?

Let me start by first describing how the new Xeon E5 will actually impact our own Intel IT environment, from a practical perspective, and then extend that to other vertical industries that could also benefit.

Like I mentioned earlier, we plan to refresh and deploy the new Xeon E5 across our entire environment. That includes our silicon design environment, our manufacturing environment, and of course our office and enterprise private cloud, so this will have a broad, pervasive impact across all of our data centers.

When we did early testing in our silicon design environment – which is where we tested this first, where much of our high performance compute environment lives, and in particular where simulation and verification are a large part of the workflow – we found a lot of benefit, particularly from the use of a technology called AVX (Advanced Vector Extensions). That was one of the contributing factors that helped us achieve up to a 55% performance improvement over the previous generation Xeon 5600.

What AVX does is offer up to double the floating point operations per clock cycle, which helps improve software performance by delivering more agility and a greater user experience across a wide range of uses, especially in compute intensive applications.

So, we feel like other industries that could take advantage of these types of workflows are things like aeronautical industries, automobile industries, oil and gas. Our testing, of course, focuses on silicon design and electronic design automation workloads but these sorts of throughput improvements that we've seen could also benefit other compute intensive applications, such as climate modeling, financial analysis and video creation. So those are just a few examples, off the top of my head.


This content is sponsored by Intel. Follow up on this and other innovations at Intel IT's website and learn about how Intel is working to solve some of today's most demanding and complex technology issues.