AMD predicts GPUs will reach 600-700W consumption by 2025

So many governments around the world are passing climate change policies and going after energy use, climate change is in the news every damn day, and yet a company thinks it is okay to just keep increasing power draw. And in Europe they are talking about rolling blackouts in the fall because of the energy crisis.

How wonderful capitalism is when you only have one or two companies, with no incentive (yes, no incentive) from the threat of bankruptcy or a major drop in market share.

Why should I get the 4090? I should keep the 3090, or buy a used, cheaper 3090, just increase the clocks and call it a 4090.

The way GPUs and CPUs are going these days, with only 10% to 15% gains each year, ever more power draw, and machines turning into space heaters, I'm done, yes done, buying or building computers.

A computer in 2030 will not be two or three times faster than a computer from 2020.

For GPUs it is way more than 15%, and we already know this next gen is an 80-110% increase on the top cards. RDNA 2 and Ampere weren't 15% either. CPUs are also gaining more than 15% now, and the gap will increase by 2025. Both AMD and Intel are claiming ~75% by 2025, which means 30%+ gen-on-gen.
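
As a rough sanity check on that last figure (just a sketch, assuming the ~75% claim is cumulative over two generations between now and 2025):

```python
# Compounding check: ~30%+ per generation over two generations
# lands close to a ~75% cumulative gain.
per_gen_gain = 0.32   # assumed per-generation uplift
generations = 2       # assumed number of generations by 2025
cumulative = (1 + per_gen_gain) ** generations - 1
print(f"{cumulative:.0%}")  # ~74%
```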

You do realise you don't have to run a next-gen card at full power. A 50% increase in performance per watt means you can stay at the same power you currently draw and average 50% more fps. If a 6700XT can run the game you want, with the settings you want, at 60fps in 1440p, say, then a 7700XT could run it at 90fps for the same power, rather than 120fps for 33% more power. Obviously it's hard to set up direct comparisons with architectural changes etc., but the point stands.
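To put rough numbers on that (a minimal sketch, assuming fps scales linearly with board power at a fixed perf-per-watt level, which real cards only approximate; the 6700XT/7700XT case is the hypothetical example above):

```python
def next_gen_fps(base_fps, perf_per_watt_gain, power_scale=1.0):
    """fps on the newer card, given a perf/W uplift and a power budget
    relative to the old card (1.0 = same wattage as before)."""
    return base_fps * (1 + perf_per_watt_gain) * power_scale

base = 60  # the hypothetical 6700XT at 1440p from above
print(next_gen_fps(base, 0.5, power_scale=1.0))   # ~90 fps at the same power
print(next_gen_fps(base, 0.5, power_scale=1.33))  # ~120 fps at ~33% more power
```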
 
I'll stick to my 2070 Super then; mind, most of the games I play on PC are old retro jobbies anyway, so it's always just plodding along.
 
Why does AMD need to predict? By now, they should already know what sort of power draw their next-gen GPUs will require, since they should already be deep in the RDNA4 development cycle.
Anyway, I think laptops are the way to go for me in the future. While the power consumption of laptop CPUs and GPUs is also going up, it is at least limited by the cooling capability of the laptop, so 400W GPUs are not going to cut it there.
 
I'm not going to buy anything with a power draw like that. It's not needed to play games and it will probably cost way too much anyway.
 
Wow....that is ridiculous. So basically they are going to get the gains by throwing power at it.
 
Why this difference between the advancement of GPUs and CPUs?

In the CPU world they optimise a lot and stay within nearly the same power requirements while gaining much better performance. For example, a high-end Pentium 4 from 2004/2005 (single core, or maybe with HT) used around 103 W, and now a 12900K with 16 cores uses 125 W, not 500 W!
 
GPUs are substantially larger chips than CPUs: not only in terms of physical dimensions, but also in transistor count. For example, the EPYC 7773X has 8 CCDs, each 81 square millimetres in die area, and a single IO chip 416 square mm in area, for a total of 33.1 billion transistors. Nvidia's A100 has a chip with 54.6 billion transistors in an area of 826 square mm.

That said, both products have similar TDPs: 280W for the 7773X and 250W for the A100, although the former runs at roughly double the clock speed of the Nvidia GPU (2.2 to 3.5 GHz compared with 1.1 to 1.4 GHz).

It's also easy to pick products from the past that seem to agree and disagree with the notion that GPUs have been less optimised in their design, over the years, than CPUs. Ten years ago, AMD released the Radeon HD 7970 X2, a double GPU card, with a TDP of 500W - its peak theoretical floating point performance was 3.8 TFLOPS. The GeForce RTX 3080 Ti has a TDP of 350W and 30.6 TFLOPS. So in ten years, that's roughly eight times the performance, and for less power.

But, also ten years ago, AMD released the HD 7850 with a TDP of 130W and 1.8 TFLOPS; the recent RTX 3050 has the same TDP but a theoretical peak performance of 9.1 TFLOPS. Ten years, same power, but just 5 times more throughput.
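
Putting those four cards side by side as throughput per watt (a back-of-envelope sketch using the peak FP32 and TDP figures quoted above; real-world efficiency will differ):

```python
# TFLOPS per watt from the peak-FP32 and TDP figures quoted above.
cards = {
    "Radeon HD 7970 X2":   (3.8, 500),   # (peak TFLOPS, TDP in W)
    "GeForce RTX 3080 Ti": (30.6, 350),
    "Radeon HD 7850":      (1.8, 130),
    "GeForce RTX 3050":    (9.1, 130),
}

for name, (tflops, tdp) in cards.items():
    print(f"{name}: {tflops / tdp * 1000:.0f} GFLOPS per watt")
```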

I did a short analysis of the improvement of GPU efficiency, and the evidence is quite clear:

[chart: analysis of GPU efficiency over time]


Modern GPUs are more efficient than older ones in terms of utilising their die area and power window.

Anyway, for retrospective interest, the Pentium 4 HT 3.4 GHz, released in 2004, was 131 square mm in size, for a TDP of 89W. The Core i9-12900KS is 216 square mm, for a TDP of 150 to 240W (depending on the power level). Intel haven't indicated what the transistor count is, but it will be in the billions - roughly a factor of 100 more than the Pentium 4.

From the same era, Nvidia's GeForce 6800 Ultra has the NV45 GPU in it: 220 or so million transistors in 287 square mm, for a TDP of 80W. The aforementioned 3080 Ti has a 628 square mm chip, with 28.3 billion transistors, with a 350W TDP.

So, the CPU has grown in area by a factor of roughly 1.6, in transistor count by a factor of 100 or so, and in TDP by a factor of 1.7 to 2.7; the GPU has grown in area by a factor of 2.2, in transistor count by a factor of 129, and in TDP by a factor of 4.4 - so yes, GPUs have developed a taste for power at a greater rate than CPUs have. But that doesn't mean they're not optimised for what they do.
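
For anyone who wants to check the arithmetic, here are those growth factors computed from the figures above (the 12900KS transistor count is unpublished, so that factor remains an estimate):

```python
# Growth factors implied by the numbers quoted above.
# CPU: Pentium 4 HT 3.4 GHz vs Core i9-12900KS; GPU: NV45 (6800 Ultra) vs the 3080 Ti's GA102.
p4    = {"area_mm2": 131, "tdp_w": 89}
i9    = {"area_mm2": 216, "tdp_w_min": 150, "tdp_w_max": 240}
nv45  = {"area_mm2": 287, "transistors": 220e6,  "tdp_w": 80}
ga102 = {"area_mm2": 628, "transistors": 28.3e9, "tdp_w": 350}

print("CPU area:        x", round(i9["area_mm2"] / p4["area_mm2"], 1))          # ~1.6
print("CPU TDP:         x", round(i9["tdp_w_min"] / p4["tdp_w"], 1),
      "to x", round(i9["tdp_w_max"] / p4["tdp_w"], 1))                           # ~1.7 to 2.7
print("GPU area:        x", round(ga102["area_mm2"] / nv45["area_mm2"], 1))      # ~2.2
print("GPU transistors: x", round(ga102["transistors"] / nv45["transistors"]))   # ~129
print("GPU TDP:         x", round(ga102["tdp_w"] / nv45["tdp_w"], 1))            # ~4.4
```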
 
Fantastic, accurate information, but that doesn't change the fact that a hot 450W GPU is NOT something I'd want for my rig, regardless of its performance.
 