AMD wants to make its chips 30 times more energy-efficient by 2025

nanoguy

In brief: AMD has been stealing Intel's performance thunder for years, and now it wants to hold the energy-efficiency crown as well. To that end, the company will have to innovate fast enough to outpace the entire industry's efficiency gains by 150 percent over the next three years.

For years, Intel had been sitting on its CPU throne, gradually losing its drive to innovate process technology. Instead, the company chose to take the "abnormally bad" Skylake platform and tweak it for several generations, right up until this year's Rocket Lake lineup. This has prompted Apple to transition its Mac products from x86 CPUs to its Arm-based custom silicon and left a lot of room for underdog AMD to make a comeback.

At this point, anyone who's been running a Ryzen-powered PC and following the progress of the Zen microarchitecture knows that AMD has so far delivered on its promise of significant performance improvements with each new generation of Ryzen CPUs.

However, the Lisa Su-led Team Red isn't stopping here. Today, the company announced its most ambitious goal yet: to increase the energy efficiency of its Epyc CPUs and Instinct AI accelerators 30-fold by 2025. This would help data centers and supercomputers achieve high performance with significant power savings over current solutions.

If AMD achieves this goal, it would add up to billions of kilowatt-hours of electricity saved in 2025 alone, and the power required to perform a single calculation in high-performance computing tasks will have decreased by roughly 97 percent.
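That 97 percent figure is simply the arithmetic consequence of a 30x gain; a quick back-of-the-envelope check (our sketch, not AMD's own math):

```python
# A 30x efficiency gain means each calculation takes 1/30th the energy
efficiency_gain = 30
reduction = 1 - 1 / efficiency_gain
print(f"Energy per calculation falls by {reduction:.1%}")  # 96.7%, i.e. ~97%
```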

Increasing energy efficiency this much will involve a lot of engineering wizardry, including AMD's stacked 3D V-Cache chiplet technology. The company acknowledges the difficult task ahead of it, now that "energy-efficiency gains from process node advances are smaller and less frequent."

To put things in perspective, AMD is talking about outperforming the rest of the computing industry in energy efficiency improvements by 150 percent. The company has been stealing Intel's server lunch as of late, and it currently holds the performance crown in the desktop and notebook departments. For now, Team Red has earned the benefit of the doubt, as it has been pumping innovation into CPUs and GPUs for almost five years with no signs of slowing down.


 
The 5800U gives Apple's M1 a run for its money, and that's while being a node behind and using really old GCN tech. It's really not AMD's best in any way, and more of an afterthought for ultrabook-style computers. Most of these devices are already Intel devices, and even though AMD outdoes Intel in every way, getting manufacturers to switch over to AMD is slow going. That clearly shows in the 5800U's dated options.

We already know that AMD could be the performance-per-watt leader even with Zen 3 if it had access to the best node available along with a design focused on that goal. I don't see AMD doing so for another two years. Zen 4 with RDNA 2 in a 15W max package would be the dream for an ultrabook. Even better if they move the RAM onto the package and force the use of a high-speed SSD.
 
I think the race is on; AMD is only stating what is happening already, and that's why the Nvidia-Arm deal is probably blocked. Other architectures are coming online or being revamped, like RISC-V.
Intel, Apple, Nvidia, Qualcomm, MediaTek, Google, Samsung, a multitude of Chinese players, and lots of others will be competing.
I'm actually quite excited to see what the next 5 years bring. Will the iPad Pro 2021 CPUs seem slow and clunky next to a no-name brand in 5 years?
 
I think the race is on; AMD is only stating what is happening already, and that's why the Nvidia-Arm deal is probably blocked. Other architectures are coming online or being revamped, like RISC-V.
Intel, Apple, Nvidia, Qualcomm, MediaTek, Google, Samsung, a multitude of Chinese players, and lots of others will be competing.
I'm actually quite excited to see what the next 5 years bring. Will the iPad Pro 2021 CPUs seem slow and clunky next to a no-name brand in 5 years?
I've wet myself already; 5 days later is 5 years later.
Pass me a calendar yesterday.
 
Most of these devices are already Intel devices, and even though AMD outdoes Intel in every way, getting manufacturers to switch over to AMD is slow going.

What are you smoking?

Intel has outdone AMD consistently (per core) at roughly the same process node.

You are only looking at what AMD can do with a much better process node than Intel has been using.

When AMD and Intel are on the exact same TSMC node, Intel will kick AMD to the curb.
 
What are you smoking?

Intel has outdone AMD consistently (per core) at roughly the same process node.

You are only looking at what AMD can do with a much better process node than Intel has been using.

When AMD and Intel are on the exact same TSMC node, Intel will kick AMD to the curb.

Intel's 10nm is pretty competitive with TSMC's 7nm.

AMD simply has more IPC and better performance per watt, which is really important for mobile. The 4800U, which is older than the current 5800U, still wipes the floor with Intel's best 15-watt chips.

Intel's biggest advantage during the Phenom days was largely its fab advantage. It allowed for higher clock speeds that Phenom just couldn't achieve until near the end of the product cycle. But AMD wasn't able to scale Phenom down like it wanted; mobile products were always behind Core 2.

Unlike then, AMD now has equal or better footing than Intel in terms of node advantage, plus a better architecture, and that doesn't seem likely to change for the next few years at least. Zen 3 has a decent leg up on the best Intel has, even with the higher-clocking Intel chip as the comparison. Zen 4 should close the clock speed gap even more, as well as increase not only IPC but performance per watt overall.

If only we get some really decent AMD laptops.
 
What are you smoking?

Intel has outdone AMD consistently (per core) at roughly the same process node.

You are only looking at what AMD can do with a much better process node than Intel has been using.

When AMD and Intel are on the exact same TSMC node, Intel will kick AMD to the curb.

You do know that up until Zen 2 on TSMC 7nm, Intel was almost always ahead by one or two nodes?
 
Purely on the laws of physics, I think it's just marketing talk. A worthwhile goal in itself, but with the x86 arch? Nope...

30x energy efficiency across 5 years (of which 2 are almost up), when in the previous 5 years they achieved only 12x on much easier, bigger nodes. I'll believe it in 2025 when we see a 4GHz all-core, 64-core Epyc cluster sipping 50W of power. Pipe dream.
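For scale, here's the average yearly improvement each figure implies, assuming steady compounding over five years (a rough sketch using the numbers above):

```python
def yearly_gain(total, years=5):
    """Average per-year multiplier implied by a total gain over `years`."""
    return total ** (1 / years)

print(f"30x goal:  {yearly_gain(30):.2f}x per year")  # ~1.97x
print(f"Prior 12x: {yearly_gain(12):.2f}x per year")  # ~1.64x
```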
 
30x energy efficiency across 5 years (of which 2 are almost up), when in the previous 5 years they achieved only 12x on much easier, bigger nodes. I'll believe it in 2025 when we see a 4GHz all-core, 64-core Epyc cluster sipping 50W of power. Pipe dream.
That's what I'm thinking. Pipe dream! However, I would welcome this in reality. Unfortunately, they would also charge 10x more, because I'm not seeing anything from Intel (let's just say my bubble is popped).
 
"For years, Intel had been sitting on its CPU throne, gradually losing its drive to innovate process technology. Instead, the company chose to take the "abnormally bad" Skylake platform and tweak it for several generations, right up until this year's Rocket Lake lineup"

That's just not true. The truth is that Intel's fabs couldn't keep up with its designs, and TSMC wouldn't expand capacity to meet Intel's order volume because, as soon as Intel fixed its fabrication issues, it would leave TSMC, and TSMC would be stuck with the extra overhead from expanding its business without clients to pay for it.
 
Intel's 10nm is pretty competitive with TSMC's 7nm.

AMD simply has more IPC and better performance per watt, which is really important for mobile. The 4800U, which is older than the current 5800U, still wipes the floor with Intel's best 15-watt chips.

Intel's 10nm laptop CPUs aren't as mature as the process TSMC uses for AMD's CPUs. Also, the 4800U is an up-to-25W CPU, so I should hope it can beat a 15W CPU. And "wipes the floor" is a bit of an exaggeration. Who's relying on a low-power laptop to do real work that needs to be done as fast as possible?
 
I hope AMD goes through with it.

Nvidia announced its 2x energy-efficiency gain with Maxwell to much fanfare (and excelled with Pascal), but quickly abandoned it with Turing and Volta, then sort of made a little comeback with Ampere.
 
Intel's 10nm laptop CPUs aren't as mature as the process TSMC uses for AMD's CPUs. Also, the 4800U is an up-to-25W CPU, so I should hope it can beat a 15W CPU. And "wipes the floor" is a bit of an exaggeration. Who's relying on a low-power laptop to do real work that needs to be done as fast as possible?

Intel rates chips based on base speed, while AMD rates chips based on overall speed. The result is that a 15W chip from Intel will generally draw 25-40W, while a 15W chip from AMD will really tend to use 15-20W. How high and for how long to boost is decided differently by each company, with AMD's ratings tending to be closer to real-world usage.
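To make the rating difference concrete, here's a minimal sketch; the wattages and the 28-second boost window are illustrative assumptions, not official Intel or AMD figures:

```python
def avg_power(sustained_w, boost_w, boost_secs, duration_secs):
    """Average package power for a load that boosts first, then settles."""
    boost = min(boost_secs, duration_secs)
    return (boost_w * boost + sustained_w * (duration_secs - boost)) / duration_secs

# A "15W" chip boosting to 40W for 28s averages far above its rating...
print(f"{avg_power(15, 40, 28, 60):.1f}W")  # ~26.7W over a one-minute load
# ...while one boosting only to 20W stays close to it.
print(f"{avg_power(15, 20, 28, 60):.1f}W")  # ~17.3W over the same load
```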

As far as "real work" goes, the AMD chips mentioned are 8-core chips, while Intel doesn't have 8-core laptop chips that aren't "H" series (meaning high performance/high power). You think an 8-core laptop can't be used for real work?

Yes, rendering and true high-end work won't be done on a laptop, but the 4700U, 4800U, 5700U, and 5800U are very solid for doing real work, including driving a 2-3 monitor setup where you want to be doing multiple things at once.
 
Intel's 10nm laptop CPUs aren't as mature as the process TSMC uses for AMD's CPUs. Also, the 4800U is an up-to-25W CPU, so I should hope it can beat a 15W CPU. And "wipes the floor" is a bit of an exaggeration. Who's relying on a low-power laptop to do real work that needs to be done as fast as possible?
The 4800U is a 15W part with the option to configure up to 25W.

But pretty much all ultrabook configs using the 4800U keep it locked to 15W. Most 4800U notebooks that also pack Nvidia graphics bring the power limit up to 25W.

Also, Intel's 10nm mobile chips have gotten pretty mature, considering the process has been in use since late 2019 with Ice Lake. The upcoming Alder Lake will use an enhanced 10nm process (10ESF), which should actually make it slightly better than TSMC's 7nm.

Too bad Intel's architecture is so outdated at this point that we probably won't see big performance-per-watt gains. But Intel's move to big.LITTLE on the mobile front should help get it back closer to AMD when it comes to low-power devices, as right now Intel is a good deal behind.

Also, the Apple M1 handles real workloads just fine, and it's a sub-15W chip. The only one able to even compete with the M1 on any level is AMD. Period.
 
"For years, Intel had been sitting on its CPU throne, gradually losing its drive to innovate process technology. Instead, the company chose to take the "abnormally bad" Skylake platform and tweak it for several generations, right up until this year's Rocket Lake lineup"

That's just not true. The truth is that Intel's fabs couldn't keep up with its designs, and TSMC wouldn't expand capacity to meet Intel's order volume because, as soon as Intel fixed its fabrication issues, it would leave TSMC, and TSMC would be stuck with the extra overhead from expanding its business without clients to pay for it.
That is only partially true. Intel had locked the 14nm fab process to Skylake for whatever reason, so it had significant problems putting a different design on that process. Looking back at Skylake, Intel had a problem with overinflated egos. Confidence that the move to 10nm would go smoothly led to the decision that a chip's design and the fab process used to make it should go hand in hand, rather than design being its own thing that then gets implemented on whichever fab process is available.

So Intel had a problem getting 10nm working, but egos would not allow its engineers to just say, "OK, new design, old fab process" and be done with it. And so, four years of Skylake happened. Even the idea that the density target was too high for the 10nm process to work well was something Intel couldn't bring itself to rethink until a year and a half ago; easing off on density was something its engineers simply refused to consider.

Beating your head against a problem again and again, rather than taking a step back, considering where the problem comes from, and then finding a way forward, is another problem Intel has. Designs should never be entirely reliant on the fab process to move things forward.

All of the IPC improvements that AMD put into place from Zen to Zen+ to Zen 2 and then Zen 3 could have been done on GlobalFoundries 14nm, but TSMC's 7nm process allowed them to be implemented better and with lower power requirements. I don't think we would have seen 8-core laptop chips if AMD were stuck on GlobalFoundries, and clock speeds would also have been limited, but most of the other improvements would have come.

It's the IPC that has really allowed AMD to make such huge strides since 2017. Remember, first-generation Ryzen could reach 4.0GHz all-core, so the IPC improvements alone (5% to Zen+, 13% to Zen 2, 19% to Zen 3) would have gotten AMD that much better performance even if clock speeds were limited to 4.0GHz. Intel in the same period went from 4-core to 8-core chips, but clock speeds and IPC haven't really improved by all that much.
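Compounding the generational figures cited above (using the percentages as given, just to make the cumulative effect explicit):

```python
from math import prod

# Per-generation IPC gains: Zen+ (+5%), Zen 2 (+13%), Zen 3 (+19%)
gains = [1.05, 1.13, 1.19]
print(f"Cumulative IPC uplift over Zen 1: {prod(gains):.2f}x")  # ~1.41x
```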
 
Intel rates chips based on base speed, while AMD rates chips based on overall speed. The result is that a 15W chip from Intel will generally draw 25-40W, while a 15W chip from AMD will really tend to use 15-20W. How high and for how long to boost is decided differently by each company, with AMD's ratings tending to be closer to real-world usage.

Yeah, they really made a mess of power ratings.

The older chips made a bit more sense.

My 35-watt dual-core Sandy Bridge chips are all rated at 35 watts.

Using an 80+ Bronze ATX power supply (Seasonic, Corsair, or EVGA), I get 40-45 watts average at the wall under normal use.

Using a 12-volt Pico power supply, I'm getting 20-23 watts average draw.

The max draw under a stress test will briefly peak around 32-33 watts on the Pico.
 
This does not mean power usage will drop; far from it, it will still skyrocket. It just means FLOPS per watt will be better, but FLOPS will be so much higher that power usage will continue to soar. Do you think an RDNA 5 9900XT that can do 4K at 480fps with ultra settings will use less energy than a 6900XT? Of course not; it'll be maybe 4x more efficient but ~8x faster, so it'll use 2x as much energy. No one ever clocks the cards much lower to get much lower energy use at the same performance as the previous gen. An RDNA 3 7900XT will use more power than the 6900XT as surely as the sun rises, and the Lovelace 4090 is already said to be a 400W+ card.
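The arithmetic behind that claim, with the hypothetical 4x/8x figures from above plugged in:

```python
# Power scales as performance divided by efficiency (FLOPS / (FLOPS/W) = W),
# so a card 8x faster but only 4x more efficient draws twice the power.
perf_gain = 8.0        # hypothetical performance vs. previous gen
efficiency_gain = 4.0  # hypothetical FLOPS-per-watt vs. previous gen
print(f"Power draw vs. previous gen: {perf_gain / efficiency_gain:.1f}x")  # 2.0x
```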
 
Lisa Su likes to hear herself make grandiose proclamations. Oh well, at least we now know she's like any other CEO.

Don't mind me, I still have a very bad taste in my mouth from Intel's "road map"... (which they apparently couldn't read themselves).
 