Next-gen graphics cards rumored to offer significant leap in performance

nanoguy

Rumor mill: Intel's Arc A-series GPUs are expected to be energy-efficient designs, but AMD and Nvidia's next-gen dreams could be headed in the opposite direction to squeeze the most out of their upcoming GPU architectures. There are some fears that we're about to see four-slot graphics cards in the enthusiast segment, but this could be the price that needs to be paid to unlock up to 100 teraflops of single-precision performance for the first time in consumer GPUs.

The rumor mill is replete with hints that the next generation of GPUs from AMD and Nvidia are going to be power-hungry and offer a huge leap in performance over the current crop. Some Nvidia offerings might even require 900 watts in certain workloads, and AMD is expected to once again divide their graphics silicon into chiplets.

There's a lot we still don't know about these upcoming GPUs, but industry watchers have been doing some diligent digging that surfaced a few interesting new details. For instance, popular leaker and Twitter dweller Greymon55 says they've found evidence of an upcoming AMD RDNA 3 GPU that will offer as much as 92 teraflops of FP32 compute performance.

For reference, that's four times the performance of the Navi 21 GPU at the heart of the RX 6900 XT and RX 6950 XT graphics cards, and also higher than a previously leaked figure of 75 teraflops. At this point, it's not clear how AMD might achieve this, but it could be higher GPU clocks (in the neighborhood of 3,000 MHz), the use of special instructions to accelerate FP32 data processing similar to Nvidia's Ampere, or a combination of both.
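For those curious where these teraflop figures come from, here is a quick back-of-the-envelope sketch of the standard math: shader count × 2 FP32 operations per clock (one fused multiply-add) × clock speed. The Navi 21 numbers below are the commonly cited RX 6950 XT specs (5,120 shaders, ~2.31 GHz boost); the 92-teraflop target is just the rumored figure, not a confirmed spec.

```python
# Rough FP32 throughput estimate: shaders x 2 ops/clock (one FMA) x clock speed.
# Navi 21 figures are the commonly cited RX 6950 XT specs; the RDNA 3 number
# is only the rumored target quoted in the article, not a confirmed spec.

def fp32_tflops(shaders: int, clock_ghz: float, ops_per_clock: int = 2) -> float:
    """Theoretical single-precision throughput in teraflops."""
    return shaders * ops_per_clock * clock_ghz / 1000

navi21 = fp32_tflops(5120, 2.31)  # ~23.7 TFLOPS for the RX 6950 XT
print(f"Navi 21 (RX 6950 XT): {navi21:.1f} TFLOPS")
print(f"Rumored RDNA 3 target: 92 TFLOPS (~{92 / navi21:.1f}x Navi 21)")
```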

Team Green might also be working on a monster GPU, at least according to Twitter leaker Kopite7kimi. The full-fat AD102 die could pack enough compute power to more than double the FP32 performance afforded by the RTX 3090 Ti and almost triple that of the RTX 3090.

As usual, take these rumors with a healthy dose of salt. If true, however, we could soon witness a teraflop war between Nvidia and AMD while Intel is warming up with its Arc Alchemist GPUs. It certainly makes sense to expect a significant generational leap in performance if the power requirements are truly north of 450 watts.

Masthead credit: Caspar Camille Rubin


 
It'd be interesting to look at another statistic: how many gamers who own a 3080 or better would actually notice if we put them on medium and high settings, capped their framerate at "only" 120 Hz, and told them "this is a 3080" while they were actually running a 3060 Ti or 3070? In a practical sense, would they even notice or care about the actual performance tier they bought?

I bet it would be the majority, because so many responses overvalue high refresh rates and are dead set on staying on 1440p monitors; they don't even need to drive 4K or refresh rates past 90-120 Hz anyway (I maintain that 99% of people don't have fast enough reflexes in competitive gaming to benefit from more than 120-144 Hz refresh rates).

So tl;dr: I don't think nearly as many people *need* a 100-teraflop product, no matter how much they think they do. It's just the clever marketing the PCMR crowd, likely unbeknownst to them, has been doing to help Nvidia's arms race.
 
Again, if this is true, both companies are completely failing to read the room regarding the power efficiency of their cards, as if they're completely oblivious to the rampaging energy costs across many areas of the world. It seems obscene to think we could soon have graphics cards drawing the same power as kitchen appliances. We already try to buy the most efficient appliances we can, since a lot of them are needed for day-to-day life, and yet Nvidia/AMD think we'll just gloss over that because it's a graphics card...
 
I personally love ultra-wide monitors approaching 40". 34" and 38" to be exact.

Anyone with one of these monitors is gaming in 1080p or 1440p. If they buy a 4K card capable of 60fps, they'll likely be underutilizing it but getting high FPS at up to 1440p.

I guess the next generation of cards will be offering 120fps at 4K on the high end.

Bottom Line: I don't care about the energy use. I just want performance. Just get a 1500 W PSU.
 
This is the first article where I’ve seen it mentioned that Intel’s offering is expected to be energy efficient. Maybe I’ve missed the others.

That got me wondering if Intel is going to spin their (expected) lower performance as a good thing from nature’s point of view: “We may be slow, but we don’t warm the planet.” It might actually work.
 
If this is real, then we might just see one of the biggest generational performance increases of all time.
But personally, this seems too good to be true.
Chances are, the real numbers will be significantly lower.
 
Again, if this is true, both companies are completely failing to read the room regarding the power efficiency of their cards, as if they're completely oblivious to the rampaging energy costs across many areas of the world. It seems obscene to think we could soon have graphics cards drawing the same power as kitchen appliances. We already try to buy the most efficient appliances we can, since a lot of them are needed for day-to-day life, and yet Nvidia/AMD think we'll just gloss over that because it's a graphics card...

You are NOT the intended audience for these cards. "Quantum Physics" isn't the intended audience for these cards.

Miners are, and miners either don't pay for their electricity at all or pay very low prices by registering themselves as an "industrial consumer" with the power company.
 
Not listed: the doubling of MSRPs alongside the performance.
Again, if this is true, both companies are completely failing to read the room regarding the power efficiency of their cards, as if they're completely oblivious to the rampaging energy costs across many areas of the world. It seems obscene to think we could soon have graphics cards drawing the same power as kitchen appliances. We already try to buy the most efficient appliances we can, since a lot of them are needed for day-to-day life, and yet Nvidia/AMD think we'll just gloss over that because it's a graphics card...
Again, if $5 a month extra in power is an issue for you, you can't afford a $2000 GPU in the first place.
It'd be interesting to look at another statistic: how many gamers who own a 3080 or better would actually notice if we put them on medium and high settings, capped their framerate at "only" 120 Hz, and told them "this is a 3080" while they were actually running a 3060 Ti or 3070? In a practical sense, would they even notice or care about the actual performance tier they bought?

I bet it would be the majority, because so many responses overvalue high refresh rates and are dead set on staying on 1440p monitors; they don't even need to drive 4K or refresh rates past 90-120 Hz anyway (I maintain that 99% of people don't have fast enough reflexes in competitive gaming to benefit from more than 120-144 Hz refresh rates).

So tl;dr: I don't think nearly as many people *need* a 100-teraflop product, no matter how much they think they do. It's just the clever marketing the PCMR crowd, likely unbeknownst to them, has been doing to help Nvidia's arms race.
TL;DR: "I don't enjoy high refresh rate monitors, so nobody really enjoys them; they're just being fooled".
 
This is the first article where I’ve seen it mentioned that Intel’s offering is expected to be energy efficient. Maybe I’ve missed the others.

That got me wondering if Intel is going to spin their (expected) lower performance as a good thing from nature’s point of view: “We may be slow, but we don’t warm the planet.” It might actually work.
They could, but you know what Nvidia has on their unpublicized and greatly de-emphasized mid-tier offerings? Great software optimization. That's something Intel would need to address; otherwise, yes, their cards will technically use less power, but they'll compete so badly against the much more optimized Nvidia offerings (and the somehow even more optimized AMD offerings, because even AMD will probably outmatch Intel's software and driver support for at least a couple more years) that it won't matter. On paper Intel's cards might be more efficient, but once you factor in Nvidia's optimization and tricks, a 3050 is likely to punch far above its chip's raw compute power and effectively end up being the more "eco-friendly" card to own.
 
If this is real, then we might just see one of the biggest generational performance increases of all time.
But personally, this seems too good to be true.
Chances are, the real numbers will be significantly lower.
I actually think Nvidia will probably use the chance to push something new and obnoxious like "ultra-sampling," so even without 8K you can get an 8K-like experience on 4K monitors using a brand new flavor of magic-ML-sauce AA.

They're always trying to introduce these kinds of proprietary software+hardware features, and if they're at all ready, it's always better to introduce them when they have a significant, not just incremental, lead in raster performance to, well, spend on the new feature to mitigate the overhead.

But then again, they did the exact opposite when they launched ray tracing support, so that would imply they learned from it. Not sure they did, tbh.
 
Again, if this is true, both companies are completely failing to read the room regarding the power efficiency of their cards, as if they're completely oblivious to the rampaging energy costs across many areas of the world...
No one is buying a $1000 gaming GPU because they care about how efficient it is. Nothing has changed in the enthusiast market. ARM and 65 W x86 CPUs have no influence here.
 
Not listed: the doubling of MSRPs alongside the performance.

Again, if $5 a month extra in power is an issue for you, you can't afford a $2000 GPU in the first place.
TL;DR: "I don't enjoy high refresh rate monitors, so nobody really enjoys them; they're just being fooled".

Just remember electricity prices have jumped in many places, especially in the EU. Some folks have seen their electricity costs double this year compared to 2020. It might not sound like much, but if you were paying $100 a month for electricity in 2020, you're now paying upwards of $200 a month; that's an extra $1,200 a year.

You're right that a few extra dollars a month for running a high-power-draw GPU might not be much, but when electricity costs are going up, it just means less money in your pocket for everything else that's also getting more expensive.
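To put rough numbers on that, here's a minimal sketch of the per-month math for a single high-draw card. The 450 W figure comes from the article; the daily hours and tariff values are hypothetical placeholders, so plug in your own.

```python
# Back-of-the-envelope monthly electricity cost for one component.
# Wattage taken from the rumored 450 W+ cards; hours and $/kWh are
# illustrative assumptions only -- substitute your own values.

def monthly_cost(watts: float, hours_per_day: float, price_per_kwh: float) -> float:
    """Electricity cost over a 30-day month for a single component."""
    kwh = watts / 1000 * hours_per_day * 30
    return kwh * price_per_kwh

# e.g. a 450 W card gamed on 3 hours a day, at $0.20/kWh vs a doubled $0.40/kWh
print(f"${monthly_cost(450, 3, 0.20):.2f} per month at $0.20/kWh")
print(f"${monthly_cost(450, 3, 0.40):.2f} per month at $0.40/kWh")
```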
 
So tl;dr: I don't think nearly as many people *need* a 100-teraflop product, no matter how much they think they do. It's just the clever marketing the PCMR crowd, likely unbeknownst to them, has been doing to help Nvidia's arms race.

VR still struggles on the best of the current cards so I do see the need for them.
I agree with you that most people don't need anything like what these cards will be able to deliver.
 
I actually think Nvidia will probably use the chance to push something new and obnoxious like "ultra-sampling," so even without 8K you can get an 8K-like experience on 4K monitors using a brand new flavor of magic-ML-sauce AA.

They're always trying to introduce these kinds of proprietary software+hardware features, and if they're at all ready, it's always better to introduce them when they have a significant, not just incremental, lead in raster performance to, well, spend on the new feature to mitigate the overhead.

But then again, they did the exact opposite when they launched ray tracing support, so that would imply they learned from it. Not sure they did, tbh.

I think the big focus on this next generation is going to be Ray-Tracing performance.
Currently, RT performance sucks on every GPU. nVidia has a decent margin over AMD, but the moment we turn on RT, performance drops to half, even on Ampere.
We need RT performance to improve to the point where turning it on has a smaller impact on performance.
 
This is ridiculous. Ten teraflops ought to be enough for anyone. What are they planning on doing, rendering a Pixar movie in real time?
 
We will have to wait to see benchmarks, as this could just be marketing blather to keep potential customers interested. If the cards deliver less than what is being promised, that marketing blather might just backfire, IMO.
 
VR still struggles on the best of the current cards
Much like the majority of people, I truly don't care about VR. Make what you will of my lack of faith in VR, but honestly I don't think it's going to be a selling point for GPUs. It'd be fabulous to be proven wrong a few months down the line with an explosive surge in VR popularity, but the chances of that happening are close to 0%.
 
I think the big focus on this next generation is going to be Ray-Tracing performance.
Currently, RT performance sucks on every GPU. nVidia has a decent margin over AMD, but the moment we turn on RT, performance drops to half, even on Ampere.
We need RT performance to improve to the point where turning it on has a smaller impact on performance.
Well, that would be the wise thing to do, but to answer your question with another one: do you think Nvidia would be happy just staying on the current iteration of ray tracing?

Because while I think you might be right, their solution wouldn't be to just go "Hey kids! Leather Jacket Jack*** here, remember ray tracing!? It's back! But this time it doesn't suck... as much!"

What's more likely is the introduction of "Ray Tracing 2.0," which will be a proprietary implementation using Nvidia hardware, or rather, only Nvidia hardware and only the newest cards. If they (think they) can finally get ray tracing right and have it become an actual selling point and a widely implemented, publicized feature, I don't think they'd pass up the opportunity to introduce proprietary elements, like "ray tracing works a lot better... with our magic machine-learning sauce, that is!" even if the truth behind the claims is mostly just finally having enough raster performance to spare to enable RT.
 
I agree. In fact, personally, as long as the minimums exceed 30 fps, it's perfect for me. All the available computing power can go to visual quality, physics, and AI.
 
Well, that would be the wise thing to do, but to answer your question with another one: do you think Nvidia would be happy just staying on the current iteration of ray tracing?

Because while I think you might be right, their solution wouldn't be to just go "Hey kids! Leather Jacket Jack*** here, remember ray tracing!? It's back! But this time it doesn't suck... as much!"

What's more likely is the introduction of "Ray Tracing 2.0," which will be a proprietary implementation using Nvidia hardware, or rather, only Nvidia hardware and only the newest cards. If they (think they) can finally get ray tracing right and have it become an actual selling point and a widely implemented, publicized feature, I don't think they'd pass up the opportunity to introduce proprietary elements, like "ray tracing works a lot better... with our magic machine-learning sauce, that is!" even if the truth behind the claims is mostly just finally having enough raster performance to spare to enable RT.

I doubt nVidia will make some RT implementation that is proprietary. They could have done it with Turing, but chose not to. Instead, they just use APIs.

I would think they can improve on their BVH processing and sorting.
This would make their BVH structures much more efficient than on Ampere and Turing. But this might need some kind of new DXR 1.2.
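For readers wondering what "BVH processing" actually involves: a bounding volume hierarchy lets a ray skip whole groups of triangles by testing one box per node instead of every primitive. The sketch below shows the core ray-vs-box "slab test" a traversal repeats at each node; it's a generic illustration only, not how Nvidia's RT cores or any particular DXR revision implement it.

```python
# Minimal ray-vs-AABB "slab test" -- the primitive check a BVH traversal runs
# at every node to decide whether a ray can skip that node's whole subtree.
# Generic illustration; real GPU RT hardware does this in fixed-function units.

def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    """Return True if a ray (origin, per-axis 1/direction) intersects the box."""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

# A ray from the origin heading along (1, 1, 1) hits a box spanning (4,4,4)-(6,6,6)
print(ray_hits_aabb((0, 0, 0), (1.0, 1.0, 1.0), (4, 4, 4), (6, 6, 6)))  # True
```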
 
While this is interesting in theory, I'm more interested in the performance of the next batch of graphics cards under $500. I paid MSRP for my 3060 Ti ($440), and it was still the most expensive single PC component I have ever purchased. It's a great card, but with the supply shortages after COVID, many people paid MUCH more than I did. I'm glad to see GPU pricing returning to near MSRP, but I'm even more excited to see how performance improvements trickle down to the mainstream market with the next generation. RTX 3060 Ti performance cost $400+ in 2020, so I'd love to see that same $400+ get RTX 3080 performance in 2022/2023.
 