AMD mid-range and entry-level RDNA 4 GPUs could match RTX 4080 and 4060 Ti

Daniel Sims

Staff
Rumor mill: Sources have previously told YouTuber Moore's Law is Dead that AMD plans to release only mid-range and entry-level graphics cards in its upcoming RDNA 4 series, likely launching sometime in 2024. A new report from the channel provides more concrete details regarding the die size, processors, and release window.

AMD is reportedly preparing to unveil two next-generation GPUs in the fourth quarter of 2024: Navi 48 and Navi 44. Although neither of these graphics processors will match the competition's flagships, they could compare favorably to some currently available high-end chips at much lower prices.

Such early information should always be taken with a healthy grain of salt, but a new report from Moore's Law is Dead alleges that a graphics card based on Navi 48 will reach performance somewhere between the Radeon RX 7900 XT and AMD's current flagship, the 7900 XTX, with a much smaller die. While the 7900 XT has a die size of 529 mm2, Navi 48's die is projected to be between 300 and 350 mm2 – closer to the 379 mm2 die size of Nvidia's RTX 4070 Ti Super.

Furthermore, the initial shipments of the upcoming AMD chip might feature 256-bit memory, enabling 20 Gbps GDDR6 VRAM. Overall, Navi 48 could potentially rival the performance of the GeForce RTX 4080, but while the high-end GeForce product hovers around $1,000, the upcoming RDNA 4 chip is not expected to exceed $600.
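If the 256-bit bus and 20 Gbps GDDR6 figures hold, the memory bandwidth is simple to work out. Here is a back-of-the-envelope sketch using only the rumored numbers above (neither is a confirmed specification):

```python
# Back-of-the-envelope memory bandwidth for the rumored Navi 48 configuration.
# Both figures come from the rumor, not confirmed specifications.
bus_width_bits = 256   # rumored memory bus width
data_rate_gbps = 20    # rumored GDDR6 speed, gigabits per second per pin

# Peak bandwidth (GB/s) = (bus width in bits / 8 bits per byte) * per-pin data rate
bandwidth_gb_s = bus_width_bits / 8 * data_rate_gbps
print(f"Theoretical peak bandwidth: {bandwidth_gb_s:.0f} GB/s")  # 640 GB/s
```

That works out to 640 GB/s of theoretical peak bandwidth, a bit below the RTX 4080's roughly 717 GB/s.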

Meanwhile, the lower-end Navi 44 could perform similarly to the Radeon RX 7700 and easily surpass the RTX 4060 Ti, with a die size not exceeding 210 mm2. It is too early to speculate on Navi 44's pricing, but for context, the RX 7600 XT, which uses a 204 mm2 die, is priced at $329.

Engineering samples of RDNA 4 desktop GPUs have been reported to boost at clock speeds between 3.0 and 3.3 GHz. Additionally, the lineup will feature monolithic dies based on TSMC's 5nm N4P process.

It is presumed that RDNA 4 will compete with Intel's upcoming Battlemage lineup, scheduled for release in the second half of 2024. Previous reports have indicated that Battlemage will utilize TSMC's 5nm N4 process. Unlike RDNA 4 and Intel's initial Alchemist series, Battlemage will include an enthusiast-class GPU.

However, the main graphics competitor in late 2024 might be Nvidia's RTX 5000 series. Based on TSMC's 3nm node, it is anticipated to offer a significant performance improvement over the RTX 4000 series and could be launched in Q4 2024 or early 2025.


 
As stated, a big grain of salt. This lets Nvidia's GPU department skim the cream at the high end to subsidise the middle (whether Nvidia will actually do that is another matter). Both need the middle to pay for R&D and drivers, and AMD also has the PS6 coming.

The 4060 and its AMD equivalent should be the main market.

The other reason for maybe not going after the top end is using the best of TSMC's capacity for CPUs and AI, i.e. a much better return on investment and engagement with the brand.
 
I'd consider upgrading from my RX 6750 XT if the rumors actually came true. But I'm not holding my breath.
 
As stated, a big grain of salt. This lets Nvidia's GPU department skim the cream at the high end to subsidise the middle (whether Nvidia will actually do that is another matter). Both need the middle to pay for R&D and drivers, and AMD also has the PS6 coming.

The 4060 and its AMD equivalent should be the main market.

The other reason for maybe not going after the top end is using the best of TSMC's capacity for CPUs and AI, i.e. a much better return on investment and engagement with the brand.

Nvidia is making so much money in the data center and AI space that they can effectively operate as a company that doesn't even have to sell a single GPU to gamers.

My guess is they'll make top-of-the-line halo cards for the mindshare and sell them at a premium, while giving a meh value proposition to everything below and price gouging their fanbois and white knights.
 
It’s as good a time as any for them to “give up” on the high end. (They’re not really giving up; that effort is just going to be focused on data center GPUs.) The question is, will nVidia compete with the 5070 or with the 5060?
 
It’s as good a time as any for them to “give up” on the high end. (They’re not really giving up; that effort is just going to be focused on data center GPUs.) The question is, will nVidia compete with the 5070 or with the 5060?
The "highend" has become so absurd. And despite what all the the "influncers" would have us believe, everyone in the world isn't running around with a 4090. The most sold cards are in the $300-500 price bracket. We need to stop looking at the high-end and start focusing on what im going to call the "mainstream highend." I think the 7900xt and the 40 super series have shown the upper limit that most people are willing to spend is about $700 and even then, do so reluctantly.

If AMD can hit the $600 price point for their "high end" card(like the good old days) they might have a real winner. I've also been told that they have made significant improvements in RT performance bringing them more in line nVidia
 
The "highend" has become so absurd. And despite what all the the "influncers" would have us believe, everyone in the world isn't running around with a 4090. The most sold cards are in the $300-500 price bracket. We need to stop looking at the high-end and start focusing on what im going to call the "mainstream highend." I think the 7900xt and the 40 super series have shown the upper limit that most people are willing to spend is about $700 and even then, do so reluctantly.

If AMD can hit the $600 price point for their "high end" card(like the good old days) they might have a real winner. I've also been told that they have made significant improvements in RT performance bringing them more in line nVidia
I remember when $600 used to be considered a halo product. How things have changed. But I went into just why that was in my response to you not too long ago.

As for AMD catching up to nVidia in RT, they seem to be consistently one generation behind when it comes to RT performance (more precisely, the hit to performance), so this much is honestly believable. They almost certainly won't be a match for RTX 5000 though, unless nVidia decides RT is suddenly no longer important (maybe AI takes center stage? But I doubt RT will be neglected by NV).
 
If AMD can hit the $600 price point for their "high end" card (like the good old days) they might have a real winner. I've also been told that they have made significant improvements in RT performance, bringing them more in line with nVidia.

How would that ever happen? The good old days are a lot of inflation ago. That $600 in 2010 is nearly $900 now in real terms. The modern cards look like a bargain compared to the 2013 700-series Titan models...
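For anyone who wants to check that figure, here is a rough sketch of the inflation adjustment; the CPI values are approximate annual averages, not official data:

```python
# Rough inflation adjustment for the "$600 in 2010" claim.
# CPI-U annual averages below are approximate -- check official BLS data for exact values.
cpi_2010 = 218.1   # approx. US CPI-U annual average, 2010
cpi_2024 = 313.7   # approx. US CPI-U annual average, 2024

price_2010 = 600
price_today = price_2010 * cpi_2024 / cpi_2010
print(f"${price_2010} in 2010 is roughly ${price_today:.0f} in 2024 dollars")  # ~$863
```

That lands in the mid-$800s, so "nearly $900" is in the right ballpark.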
 
How would that ever happen? The good old days are a lot of inflation ago. That $600 in 2010 is nearly $900 now in real terms. The modern cards look like a bargain compared to the 2013 700-series Titan models...
The issue that I have with the inflation argument is that wages haven't kept up. On top of that, companies are using inflation as an excuse to increase prices and profit margins with people not only falling for it, but defending them. nVidia didn't 20x their stock price in 3 years by just matching inflation.

And sales numbers are starting to show that people are tired of it. GPU shipments are the lowest they've been in 20 years. I bet a lot of people here didn't even know what a GPU was 20 years ago. So consumers are finally starting to push back.
 
The "highend" has become so absurd. And despite what all the the "influncers" would have us believe, everyone in the world isn't running around with a 4090...
No, however, wasn't it the case not that long ago that more 4090s were active than the entire AMD 7000 series combined?
Or at least, GPU-Z's database was claiming that for a while; obviously, not all people immediately open up GPU-Z when buying a GPU.

Some interesting statistics out there though: the 4090 sold a lot better than you'd have thought it could for such an expensive product.
 
No, however, wasn't it the case not that long ago that more 4090s were active than the entire AMD 7000 series combined?
Or at least, GPU-Z's database was claiming that for a while; obviously, not all people immediately open up GPU-Z when buying a GPU.

Some interesting statistics out there though: the 4090 sold a lot better than you'd have thought it could for such an expensive product.
So a lot of the 4090s were sold when the H100 peaked at about $60k each. Those trying to save money were buying 4090s instead of a rack full of H100s, since one rack would cost $500,000 for six and you could buy ~30 4090s at the time for that much. If you didn't need access to all of nVidia's software and other features, it was a great option. Something similar went on with the 7900 XTX. The extra 4 GB of memory on the 7900 XTX makes a large difference over the 7900 XT, hence the price drop. The 7900 XTX is being sold by the pallet to startups on a budget, but the 7900 XT is being sold almost entirely to gamers.

But the H100 is down to about $20k each now, and many of the 4090s are $2k+.

So I don't know if what you're saying is true, but the Steam hardware survey has different information than GPU-Z, so that would be my explanation for the discrepancy.

As a side note, I'm going to try to daily drive a Steam Deck for a month. I'm tired of chasing performance and spending money for the privilege. I remember with the GTX 10 series we had reached what seemed like a peak in performance, and I still strongly feel nVidia decided to push ray tracing as a way to sell more units and make us feel like we needed to upgrade. I have heard some very interesting things about AM6 and how AMD will start targeting 7800 XT performance in their APUs. A lot of this has to do with the fact that that's their performance target for the PS6, so making a PC counterpart isn't going to be that difficult.

I also think that the power requirements of some of the hardware being used for gaming are getting out of hand. Reminds me of the two-power-supply days back in the 2000s, except the largest one you could get was like 600 watts. Now it feels like I would need to wire a separate breaker just for the PC to run on. It will be really interesting to see what can be done to lower power requirements and make smaller form factors in the future. If we could get to 7800 XT levels of performance in APUs within the next 4 years, I will probably never buy another GPU again.
 
I also think that the power requirements of some of the hardware being used for gaming are getting out of hand. Reminds me of the two-power-supply days back in the 2000s, except the largest one you could get was like 600 watts. Now it feels like I would need to wire a separate breaker just for the PC to run on. It will be really interesting to see what can be done to lower power requirements and make smaller form factors in the future. If we could get to 7800 XT levels of performance in APUs within the next 4 years, I will probably never buy another GPU again.

I think they are doing that because it is no longer possible to produce "significantly" faster chips from reducing size alone, so they do it at the expense of power and temperature, and make the chip more resistant to these effects :x
 
Moore's Law is Dead is clickbait trash. It is a HUGE discredit to TechSpot that you quote it, when you can piece the information together yourself and reach the same conclusions.

MLID's sources are simply users posting around on the internet (Chinese forums, Twitter/X, etc...) whose content adds up in a "yeah, that might just make sense and even if it doesn't it makes great clickbait" kind of way.

 
"While the 7900 XT has a die size of 529 mm2"

305 mm2 to be more precise.
This ignores the memory controllers. By this logic the RTX 4090 would be a 400 mm2 die as well.
No, however, wasn't it the case not that long ago that more 4090s were active than the entire AMD 7000 series combined?
Or at least, GPU-Z's database was claiming that for a while; obviously, not all people immediately open up GPU-Z when buying a GPU.

Some interesting statistics out there though: the 4090 sold a lot better than you'd have thought it could for such an expensive product.
Yes, it also has a major market share vs. all of the 7000 series on Steam.
The issue that I have with the inflation argument is that wages haven't kept up. On top of that, companies are using inflation as an excuse to increase prices and profit margins with people not only falling for it, but defending them. nVidia didn't 20x their stock price in 3 years by just matching inflation.

And sales numbers are starting to show that people are tired of it. GPU shipments are the lowest they've been in 20 years. I bet a lot of people here didn't even know what a GPU was 20 years ago. So consumers are finally starting to push back.
Failure of wages to keep up doesn't mean inflation isn't happening.

Go look at Nvidia's margins for GeForce in 2023 vs 2019, for instance. They've climbed 10%. So yes, some greed is there. But much of the price increase is NOT coming from margins; it's the increased cost of doing business.
 
Failure of wages to keep up doesn't mean inflation isn't happening.

Go look at Nvidia's margins for GeForce in 2023 vs 2019, for instance. They've climbed 10%. So yes, some greed is there. But much of the price increase is NOT coming from margins; it's the increased cost of doing business.
Failure of wages to keep up is why people feel like a card that is adjusted for inflation is more expensive than it should be. But let's not forget the criticism that was originally leveled at the 40 series. I remember people saying "nVidia has become the scalpers," with that being cited as the reason why EVGA stopped doing business with nVidia. I've been hearing whispers about MSI taking the EVGA route because they can't profitably sell nVidia cards at MSRP.

While we don't know exactly what nVidia's margins are, the 4070 Ti Super is made with the same die as the 4080 Super, so we know that the card is still profitable for nVidia at $800.

So I'm looking at these products and thinking, "well that's ****ing stupid."
 
This ignores the memory controllers. By this logic the RTX 4090 would be a 400 mm2 die as well.

And cache. The point is that:

1. MCDs are so small that yields are excellent.
2. MCDs are manufactured on a cheaper process.
3. A bigger single die is always more expensive to make, and there is a practical limit on die size depending on the process.

So just adding the MCD area to the GCD area and making comparisons is pointless.

Another thing that invalidates the comparison is the fact that a chiplet design also means AMD could always beat Nvidia on maximum performance if Nvidia sticks with a monolithic design. Right now AMD is nearly as fast as Nvidia (7900 XTX vs 4090) while AMD's largest die is only around half the size of Nvidia's.
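To put rough numbers on this argument, here is a sketch that adds up the commonly cited Navi 31 die areas and applies a simple Poisson yield model; the defect density is an illustrative assumption, not an actual foundry figure:

```python
import math

# Commonly cited (approximate) Navi 31 die areas
gcd_area = 304       # mm^2, graphics compute die (N5)
mcd_area = 37.5      # mm^2, each memory cache die (N6)
num_mcds = 6

total_area = gcd_area + num_mcds * mcd_area
print(f"Combined Navi 31 area: {total_area:.0f} mm^2")  # ~529 mm^2, the figure in the article

# Simple Poisson yield model: yield = exp(-D0 * A), with D0 = defects per mm^2.
# D0 below is an assumed illustrative value, not real TSMC data.
d0 = 0.001

def die_yield(area_mm2: float, defect_density: float = d0) -> float:
    """Estimated fraction of defect-free dies for a given die area."""
    return math.exp(-defect_density * area_mm2)

print(f"GCD  (~{gcd_area} mm^2) yield: {die_yield(gcd_area):.0%}")
print(f"MCD  (~{mcd_area} mm^2) yield: {die_yield(mcd_area):.0%}")
print(f"Monolithic ~{total_area:.0f} mm^2 yield: {die_yield(total_area):.0%}")
```

Whichever area you prefer to quote, the small MCDs barely dent yields, while a hypothetical monolithic die of the combined size would yield noticeably worse at the same defect density, which is the heart of the chiplet argument here.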
 
Why is this relevant? Next-gen mainstream graphics cards have been delivering last-gen flagship performance for as long as I can remember.

For instance, the RTX 3070 was as fast as the RTX 2080, the 4070 as fast as the RTX 3080, and so on...
 
Failure of wages to keep up is why people feel like a card that is adjusted for inflation is more expensive than it should be. But let's not forget the criticism that was originally leveled at the 40 series. I remember people saying "nVidia has become the scalpers," with that being cited as the reason why EVGA stopped doing business with nVidia. I've been hearing whispers about MSI taking the EVGA route because they can't profitably sell nVidia cards at MSRP.

While we don't know exactly what nVidia's margins are, the 4070 Ti Super is made with the same die as the 4080 Super, so we know that the card is still profitable for nVidia at $800.

So I'm looking at these products and thinking, "well that's ****ing stupid."
Let's all just agree that it's good that you can now actually buy these at MSRP, shall we :)
 