AMD Radeon RX 7000 flagship graphics card will reportedly come with 24 GB of VRAM

Tudor Cibean

Posts: 172   +11
Staff
Highly anticipated: AMD will unveil its new Radeon RX 7000 series graphics cards based on the new RDNA 3 architecture next week. The company claims that the new GPUs will offer over 50 percent higher performance per watt compared to its current lineup thanks to a 5nm process node and chiplet design.

According to the latest rumors, Team Red will announce two high-end models next week, the Radeon RX 7900 XTX and RX 7900 XT. Both will utilize AMD's flagship Navi 31 GPU, which might feature a multi-chip module design with one graphics compute die and six memory dies. The latter should allow for up to 96 MB of Infinity Cache in a standard configuration (or more if the company decides to use its 3D stacking technology).

The flagship Radeon RX 7900 XTX will reportedly use a full-fat version of the Navi 31 GPU with 12,288 stream processors and a 384-bit memory bus width. It will come with twelve 16 Gbit GDDR6 memory chips running at 20 Gbps for a total of 24 GB of VRAM with a bandwidth of 960 GB/s.

Meanwhile, the Radeon RX 7900 XT should make use of a cut-down Navi 31 GPU with 10,752 stream processors and a 320-bit wide memory bus. As a result, it will only have ten memory chips of the same capacity and speeds for a total of 20 GB of GDDR6 VRAM and 800 GB/s bandwidth.
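The quoted capacity and bandwidth figures fall straight out of the bus width and per-pin speed. A quick sketch of the arithmetic in Python, using the rumored specs above (the helper names are my own):

```python
def gddr6_bandwidth_gbps(bus_width_bits: int, speed_gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s: total pins * per-pin rate / 8 bits per byte."""
    return bus_width_bits * speed_gbps_per_pin / 8

def gddr6_capacity_gb(bus_width_bits: int, chip_density_gbit: int = 16) -> float:
    """Total VRAM in GB, assuming one 32-bit-wide GDDR6 chip per 32 bits of bus."""
    chips = bus_width_bits // 32
    return chips * chip_density_gbit / 8  # 16 Gbit chips = 2 GB each

# Rumored RX 7900 XTX: 384-bit bus, 20 Gbps GDDR6
print(gddr6_bandwidth_gbps(384, 20))  # 960.0 GB/s
print(gddr6_capacity_gb(384))         # 24.0 GB

# Rumored RX 7900 XT: 320-bit bus, same chips
print(gddr6_bandwidth_gbps(320, 20))  # 800.0 GB/s
print(gddr6_capacity_gb(320))         # 20.0 GB
```

The 16 Gbit chip density and 20 Gbps speed come from the rumor itself; everything else follows mechanically.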

Unfortunately, TDP values are still a mystery at this time. However, AMD SVP and GM Scott Herkelman recently confirmed that the company's upcoming cards won't use the 12VHPWR connector. Therefore, they should be compatible with current PSUs without requiring any adapters, possibly saving users from some headaches. Custom designs from board partners will probably ship with up to three 8-pin power connectors for increased overclocking headroom.

AMD will officially announce the specs, availability, and pricing of its Radeon RX 7000 cards on November 3. They will go up against Nvidia's GeForce RTX 4090 and (now sole) RTX 4080, with lower-end cards from both companies only arriving next year.


 

Geralt

Posts: 1,320   +2,149
Why not 7900 XT and 7800 as usual? Hopefully this isn't another scam like Nvidia's two 4080s. And they're talking about 450 W for the cards. I don't like it at all if it's true. We'll see.
 

Puiu

Posts: 5,957   +5,003
TechSpot Elite
Why not 7900 XT and 7800 as usual? Hopefully this isn't another scam like Nvidia's two 4080s. And they're talking about 450 W for the cards. I don't like it at all if it's true. We'll see.
This is more akin to the 4090/4090ti. I think they may be reserving the 7950 for the 3D stacked GPU to counter the 4090ti.

While AMD's naming is also really bad, Nvidia's naming made it look like the only difference was the VRAM.
 

NeoMorpheus

Posts: 1,502   +3,248
I wonder why AMD is adding so much memory to their GPUs?

I mean, my 6900 XT rarely gets above 12 GB of memory usage at 4K.

On another note, I hate that name (XTX). Yes, I know, they used it before and I hated it then.

And regarding that model, I feel it's a mistake to release it right away. They need to wait for Nvidia to release the Ti or Titan and then trash it, so the cult members suffer a bit more.
 

wiyosaya

Posts: 8,410   +7,845
And regarding that model, I feel it's a mistake to release it right away. They need to wait for Nvidia to release the Ti or Titan and then trash it, so the cult members suffer a bit more.
All I have to say on that is "Karma is a B!tch"; be careful what you wish for.

However, if they do hit those performance/W targets, it may very well be a welcome success and alternative to the RTX Flamethrower series from Nvidia. 🤣
 

R00sT3R

Posts: 773   +2,403
I really don't see why it needs this. In the professional workspace, where many Nvidia 90-class cards go, AMD isn't in the same league when it comes to rendering and productivity performance.

They'd be much better off dropping some of the VRAM (down to 16 GB), reducing the cost of their top-end card in the gaming segment, and picking up market share.

4090 class performance for $1200 would be pretty tempting for gamers looking to leave the abusive relationship they have with Nvidia.


 

neeyik

Posts: 2,420   +2,964
Staff member
I wonder why AMD is adding so much memory to their GPUs?
They'd be much better off dropping some of the VRAM (down to 16GB)
It's about memory bus width. At 256 bits, using GDDR6 or 6X, the memory will have to be 8 GB or 16 GB; at 320 bits, it's either 10 GB or 20 GB; and at 384 bits, 12 GB or 24 GB.

Given that AMD's last top end models all came with 16GB, it would look rather poor for them to offer 10 or 12GB -- that's assuming the rumors of the wider bus are correct. If it's still 256-bits, then it will be 16GB, but then the bandwidth won't be anything to shout about.

However, the general assumption now is that the Navi 31 GPU will comprise one compute die and six memory controller+cache dies, on the same package. Six MCs equates to 384-bits (AMD typically rates their MCs as 64-bits wide, in most of their technical documents).
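The capacity options above can be sketched in a couple of lines of Python (the helper name is my own; this assumes one GDDR6 chip per 32-bit channel with 8 Gbit or 16 Gbit densities, and no clamshell mode):

```python
def vram_options_gb(bus_width_bits: int) -> tuple[int, int]:
    """VRAM capacity options for a GDDR6 bus: one 32-bit chip per channel,
    at either 8 Gbit (1 GB) or 16 Gbit (2 GB) per chip."""
    chips = bus_width_bits // 32
    return (chips * 1, chips * 2)

for bus in (256, 320, 384):
    print(bus, vram_options_gb(bus))
# 256 (8, 16)
# 320 (10, 20)
# 384 (12, 24)
```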
 

hahahanoobs

Posts: 4,718   +2,683
I'd like to get excited about more VRAM, but 16GB didn't give AMD an advantage last time.

I'll wait for reviews for confirmation.
 

Theinsanegamer

Posts: 3,957   +6,999
I wonder why AMD is adding so much memory to their GPUs?

I mean, my 6900 XT rarely gets above 12 GB of memory usage at 4K.
Well, you just answered it: it's using 12 GB now. Tomorrow, next year, or three years from now, it may be 16 GB. Just a year ago everyone said 8 GB was plenty; now we're up to 12. You don't want to end up like Nvidia's flagships that got kneecapped by low VRAM (680, 980, 3080, etc.).

When you are building $1000+ GPUs, you don't skimp on RAM. Unless you're Nvidia, then you can do it all day. Especially if the 7900 XTX is the rumored MCM design with two 6144-core dies, that thing is gonna be a performance monster.
 

samlebon2306

Posts: 19   +24
Why not 7900 XT and 7800 as usual? Hopefully this isn't another scam like Nvidia's two 4080s. And they're talking about 450 W for the cards. I don't like it at all if it's true. We'll see.

The problem with beefed-up GPUs is created mostly by Nvidia. How could AMD compete against a 600-watt GPU with just 300 watts?

Nvidia pushed their 4090 to the limits because they knew what AMD was cooking with their revolutionary chiplet GPU.
 

toooooot

Posts: 1,827   +983
The best time to upgrade is right after it's available. New CPUs, new video cards. It's the time for those of us who haven't upgraded in years.
 

tellmewhy

Posts: 230   +127
No one can compete with Nvidia from a less advanced process node. At the very least they should double the VRAM to 48 GB (and maybe add 3D cache too) so buyers have a real reason to change their minds. RAM has a very simple structural design; almost all of the chip's surface is the same. So if they can't find cheap outside RAM suppliers, they should fab their own RAM chips, like Intel back in the day. People don't buy hardware every day, so price alone isn't a first-tier factor in the decision. It doesn't matter what fps it produces; most people will buy Nvidia regardless. It's behavior bordering on religion.

If it falls short, it should at least have lots of RAM.
 

nnguy2

Posts: 649   +1,488
I wonder why AMD is adding so much memory to their GPUs?

I mean, my 6900 XT rarely gets above 12 GB of memory usage at 4K.

On another note, I hate that name (XTX). Yes, I know, they used it before and I hated it then.

And regarding that model, I feel it's a mistake to release it right away. They need to wait for Nvidia to release the Ti or Titan and then trash it, so the cult members suffer a bit more.

They should bring back "Pro" for flagship and "SE" for cut down...
 

Mr Majestyk

Posts: 1,564   +1,469
It's about memory bus width. At 256 bits, using GDDR6 or 6X, the memory will have to be 8 GB or 16 GB; at 320 bits, it's either 10 GB or 20 GB; and at 384 bits, 12 GB or 24 GB.

Given that AMD's last top end models all came with 16GB, it would look rather poor for them to offer 10 or 12GB -- that's assuming the rumors of the wider bus are correct. If it's still 256-bits, then it will be 16GB, but then the bandwidth won't be anything to shout about.

However, the general assumption now is that the Navi 31 GPU will comprise one compute die and six memory controller+cache dies, on the same package. Six MCs equates to 384-bits (AMD typically rates their MCs as 64-bits wide, in most of their technical documents).


The bus widths are true: 384-bit for top-tier Navi 31 (79xx), 320-bit for mid-tier Navi 31 (78xx), and 256-bit for Navi 32 (77xx). Interesting that AMD increased bus widths after banging on about not needing to because of Infinity Cache, but the reality was that at 4K AMD's performance dropped substantially. The 6900 XT was very strong at 1440p but often trailed even the 3080 at 4K. They aren't making this mistake again. Also, Nvidia is gimping the 4070 with a 192-bit bus (possibly only for the Ti) and possibly 160-bit for the regular 4070.
 

Theinsanegamer

Posts: 3,957   +6,999
I dunno... Radeon 9700 Pro was a special card that outperformed Nvidia's best for a few generations around the turn of the millennium.
True, but that was the perfect storm: ATi had just bought ArtX, who were on the verge of creating their new GPU, which became the R300, right as Nvidia jumped the gun on pixel shader 2.0 bit length, resulting in the FX series having registers that were far too small, all launching in the middle of a console generation that stalled future GPU demand until the next gen arrived.

I miss my 9800 XT; that sucker served me well for many years. Probably the best era of gaming ever.
 

Geralt

Posts: 1,320   +2,149
The problem with beefed-up GPUs is created mostly by Nvidia. How could AMD compete against a 600-watt GPU with just 300 watts?

Nvidia pushed their 4090 to the limits because they knew what AMD was cooking with their revolutionary chiplet GPU.
More wattage doesn't necessarily imply more performance. You can compete with better engineering.
 

neeyik

Posts: 2,420   +2,964
Staff member
The bus widths are true.
It's only true when AMD officially says so, though. At this point, it's really just an interesting conjecture (although I agree that it's likely to be true).
Interesting that AMD increased bus widths after banging on about not needing to because of Infinity Cache, but the reality was that at 4K AMD's performance dropped substantially.
They did so on the basis that they wanted to double the CU count but not rob the units of bandwidth. Two ways of achieving this: lots of MCs and high speed RAM (Nvidia's choice for Ampere) or increasing the amount of cache. Given that they'd already developed a dense L3 cache system for Zen 2, it made sense for them to leverage it into RDNA 2 by adding a fourth cache level.

The reason why this is better than simply adding more MCs/faster VRAM is that the latter doesn't help much with cache misses, when you've got a shed load of CUs that require feeding. This is why Nvidia's AD102 has the same MC count as the GA102 but 16 times more L2 cache.
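The bandwidth-amplification argument can be put into a toy formula (my own simplification, not AMD's or Nvidia's math): if a fraction h of memory requests hit the cache, only (1 − h) of them reach VRAM, so the shaders effectively see vram_bw / (1 − h) of usable bandwidth.

```python
def effective_bandwidth_gbps(vram_bw_gbps: float, hit_rate: float) -> float:
    """Toy model: with cache hit rate h, only (1 - h) of requests reach VRAM,
    so the compute units see roughly vram_bw / (1 - h) of usable bandwidth."""
    assert 0.0 <= hit_rate < 1.0
    return vram_bw_gbps / (1.0 - hit_rate)

# Illustrative RDNA 2 example: 512 GB/s of GDDR6 with the ~58% 4K hit rate
# AMD quoted for its 128 MB Infinity Cache behaves like ~1.2 TB/s.
print(round(effective_bandwidth_gbps(512, 0.58)))  # 1219
```

This ignores cache latency and bandwidth limits, but it shows why a large last-level cache can substitute for extra memory controllers, and why hit rate (which falls as resolution rises) matters so much.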
 

PEnnn

Posts: 1,010   +1,361
AMD: we present to you our flagship GPU, they can...
Customers: hold on, does its connector melt ?
AMD: ofc not, but they have...
Customers: say no more, I want one.

Just put the slogan "It doesn't melt!" in the marketing campaign, and that's all.

AMD ad:

AMD: because none of our GPUs melt, especially not our top of the line!!
 

Puiu

Posts: 5,957   +5,003
TechSpot Elite
I wonder why AMD is adding so much memory to their GPUs?

I mean, my 6900 XT rarely gets above 12 GB of memory usage at 4K.

On another note, I hate that name (XTX). Yes, I know, they used it before and I hated it then.

And regarding that model, I feel it's a mistake to release it right away. They need to wait for Nvidia to release the Ti or Titan and then trash it, so the cult members suffer a bit more.
The extra memory is more important than you might think, especially if you do any 3D work or game development (workstations more often than not have regular GPUs in them) and for high end GPUs it's to be expected.

And it's also a result of adding more bandwidth (increasing the memory interface from 256-bit to 384-bit).