AMD Radeon RX 6900 XT Review: Can AMD Take the Performance Crown?

Absolutely agree. My point was that my kid prefers the Java version because it offers more to him in terms of playability (mods, online play)....
The newer version may look better but he doesn't care - it's function over form. And this actually makes me happy.

I think you mentioned he was using integrated graphics, and there are some performance mods out there which double the Java version's FPS even on an iGPU. I use them on an 11-year-old MacBook Pro to get 50 fps and higher (16-chunk render distance), so they really work. Maybe you're using them already, but there are two options:

Optifine
Fabric (with or without the Sodium renderer)

Fabric + Sodium gives higher FPS than Optifine on a dedicated GPU (like a GTX 1050 Ti), while both seem to work similarly on an iGPU, so you may want to stick with Optifine as it has more options.

In fact it works so well that I've started using heavier mods even on the iGPU, such a slippery slope...

One observation I've made playing Minecraft: on a gaming PC where you can use Shaders and Resource Packs to make Minecraft look way better, I find that setup actually looks better overall than Minecraft RTX. I watched a YouTuber do an MC RTX playthrough and found that the presentation didn't have as good a sense of space as regular MC with Shaders. I was wondering if I was just misremembering, so I set it up again on my PC and yes, I prefer Shaders to RTX.

I was really looking forward to getting a 3070 or similar to get MC RTX working, but now that desire has evaporated. I still want a 3070 or a 6800, but luckily (?) I won't need to make that choice for many months...
 
If AMD didn't buy ATI, someone else would have; they were bleeding money.
Yes they were, but they knew that they wouldn't have been bleeding for long. They had already begun work on the RV770 before AMD bought them. ATi's RV770 was going to be a serious gut-punch to nVidia's Tesla and ATi knew it. The RV770 was a bigger blindside hit to nVidia than Big Navi because nVidia didn't even have an inkling of what ATi was up to until about a week before launch (and it wasn't a paper launch either; there were tonnes of these cards available).

When the RV770 was released (as the Radeon HD 4000 series), the entire industry was completely stunned. Like, take the surprise of Big Navi and multiply it by five. Then take away all the negative aspects of Big Navi and you'll know how excited people were to see the HD 4000 series. The GTX 260 was $400 and the 9800 GTX was $300. Well, the HD 4870 1GB was a little faster than the GTX 260 for only $300 ($285 for the 512MB variant) and the HD 4850 was definitely faster than the 9800 GTX. ATi's future was looking bright indeed:
An Oldie but a Goodie by Steve Walton - VisionTek Radeon HD 4870 Review
AnandTech - The Radeon HD 4850 & 4870: AMD Wins at $199 and $299
Tom's Hardware - Radeon HD 4870: Better Than GTX 260!
Guru3D - ATi Radeon HD 4870 1GB: That's a lot of RED
Guru3D - HIS Radeon 4870 ICEQ4+ TURBO 1GB: From Cool to Cooler
What came next was the "Evergreen" Radeon HD 5000 series, which won the single-GPU performance crown outright for ATi (and was the last series to carry the ATi name), with the top-performing single-GPU card in the HD 5870:
ATI Radeon HD 5870 Review by Steve Walton
Still, nVidia had the total performance crown with the GTX 295 (a dual-GPU card) but then ATi released the HD 5970. Hilbert Hagedoorn, the Guru of 3D, used the word "sodomizes" when describing what the HD 5970 did to the GTX 295 (Man, that was funny!):
Guru3D - HIS Radeon HD 5970 2048MB: The Hemlock is Here
ATi then rounded it out with a fantastically performing value card in the HD 5850:
HIS Radeon HD 5850 Review by Steve Walton
In my opinion, ATi would have been fine. They weren't exactly rookies at this and they had great things coming through the pipeline.
 
I think you mentioned he was using integrated graphics, and there are some performance mods out there which double the Java version's FPS even on an iGPU... Maybe you're using them already, but there are two options: Optifine, or Fabric (with or without the Sodium renderer).
I'm actually using a 4GB 5500 XT with a Ryzen 2700X. The GPU was bought as an intermediary solution while waiting for RTX 3xxx or RX 6xxx. I'm aiming for the 3060 Ti (tried to get the FE) or the 6700 XT.

My kid did install the Optifine mod (just asked him) with the Sildur's shader (I think that's what it's called).

Thanks a lot for your suggestion.
 
I think RT only matters in point 'n click adventure games, where you can admire every scene in all its glory and it takes half a minute for the protagonist to walk from one end of the screen to the other after clicking, so you can slowly enjoy all the reflections.
Puzzle-based adventure games may also benefit from RT (think of the puzzle pieces in 7th Guest, 11th Hour, Myst, etc.).

It would be counter-productive to argue it matters in all games, especially FPS games, where you can't appreciate it at high speed. Besides, it's severely taxing in games that run at high fps.
 
Hi, just to say that I bought a 3090 and I'm not trying to compensate for anything. Nor am I pushing it in anyone's face that I have a 3090.

I was going to buy a 3080 but I got fed up waiting to replace my nearly five-year-old 1080, so I bought the 3090 instead. Is it overpriced? You betcha. Would I have bought it under normal circumstances? Not a chance...

However, these are not normal circumstances, and graphics cards (and tech in general) at all levels are very hard to come by - almost everything is sold out, at least in the UK. Life is short (global pandemics have a way of sharpening one's focus) and I did not want to wait until, say, April next year to turn up settings in the games I play (I'm at 5120x1440). I wanted an upgrade now and I could afford it, so I bought the 3090. I know quite a few people in the same boat as me who bought for similar reasons.

BTW I started playing on the ZX Spectrum so that gives my age away - I'm no Millennial with an e-peen.
I don't know. It has been shown that the 3090 is just about 10% faster than the 3080, yet it costs a whopping 114% more!! And that's just the MSRP difference.

Someone who has owned a ZX Spectrum, especially, should have known better.

As I have mentioned before, people who buy the cards at the top of the benchmark charts are not buying the cards; they are buying the benchmarks. When you look closer, the difference between the top few cards is sometimes only in the single digits or low double digits. That does not translate into a substantial difference in perception unless it straddles the 60 fps mark. And if you're buying based on benchmarks, even the 6800 XT took the lead over the 3090 in a few of them at both 1440p and 4K, for about a third of the price.
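To put rough numbers on that, here's a back-of-the-envelope sketch using launch MSRPs ($699/$1,499 for the 3080/3090, $649/$999 for the 6800 XT/6900 XT); the relative-performance figures are ballpark assumptions for illustration, not results taken from this review:

# Back-of-the-envelope price/performance comparison at launch MSRP.
# Performance is normalized to the RTX 3080 = 100; the perf numbers below are
# rough assumptions for illustration, not measured results.
cards = {
    "RTX 3080":   {"msrp": 699,  "perf": 100},
    "RTX 3090":   {"msrp": 1499, "perf": 110},  # ~10% faster, yet ~114% more expensive
    "RX 6800 XT": {"msrp": 649,  "perf": 100},
    "RX 6900 XT": {"msrp": 999,  "perf": 105},
}

baseline = cards["RTX 3080"]
for name, card in cards.items():
    price_delta = 100 * (card["msrp"] / baseline["msrp"] - 1)
    perf_delta = 100 * (card["perf"] / baseline["perf"] - 1)
    dollars_per_point = card["msrp"] / card["perf"]
    print(f"{name:<11} price {price_delta:+5.0f}%  perf {perf_delta:+4.0f}%  "
          f"${dollars_per_point:.2f} per performance point")

On those assumed numbers, the halo cards cost roughly 50-100% more per unit of performance than the cards one rung down.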

Life is short, but doesn't have to be short-sighted.

Anyway, to each his own.
 
Just because the card is faster now doesn't mean we should think of it in terms of performance and price value. The top end was $500 ($650 after inflation, as you said), period.
Are you actually suggesting we should compare graphics cards simply based on price, without taking into account their performance? :eek:

FYI you don't know the margins NVIDIA makes so commenting on them is a moot point.
I know the margin on the 3090 is higher than that of the 3080 and below. Are you actually going to attempt to dispute that?

the RTX 2080 Ti is rumoured to have had a 40% profit margin for NVIDIA...Don't say "WTG" when NVIDIA was ripping you off
Why do you feel a 40% margin is a "ripoff"? Do you not realize your average mall store has a 100% markup? And most restaurants a 300-400% markup?

More importantly, you're still failing to acknowledge that NVidia doesn't pull new GPU designs from thin air, and they don't come down the chimney every Christmas either. NVidia spends billions of dollars every year designing them. Once again: where do you suggest those billions come from?
 
I have no problems with halo products having halo pricing. I think it's great that the 3090 and 6900 XT exist, and in fact I think both are better at their halo job than the 2080 Ti was. The 2080 Ti offered a functional performance increase over the 2080 for a lot more money and was the price-performance loser, but you did get something tangible for those excessive dollars.

The 3090 and 6900 XT offer minimal gains over their smaller brothers for an even bigger price delta than the 2080 Ti enjoyed, clearly pushing them well into vanity-purchase territory, and that's fine:

If the product sells, then the price is reasonable. I encourage anyone who can afford those products to buy them and bear the brunt of the R&D costs so my 3070 or 6800 is a cheaper product (whenever the hell that actually happens). And future 3060 and 6700 owners will thank you as well.
 
The value gap between the top-end cards and the next best ones is just absurd.

Not as much for AMD as for NVIDIA, but that's not saying much.
 
The RT performance hit in Watch Dogs Legion is small on Ampere, and RT in this game looks noticeably better.

Even the new consoles are using low-quality RT in it, so if you turn off RT you will be behind the consoles in some respects.
 
4K gaming for $500 less is a win in my book. Unless RT is a priority, I imagine any high-end gamer would take either the NV or AMD card if they could find one lol.


Watch Dogs and Cyberpunk look noticeably better with RT. Even the new consoles use RT in Watch Dogs.

I would even say that it's better to sacrifice some unimportant settings (use optimized settings) and then enable RT + DLSS; that should fix the performance issues. On AMD you can't do that, because the RT performance hit is bigger and there's no DLSS.
 
On average it's the fastest 1080p and 1440p gaming GPU for rasterization, can't be denied.

And who buys a $1,000 GPU to play at 1080p or 1440p with RT off?? It's overkill.

Even if I had a 1440p display, I would still use supersampling (render at higher than 1440p and then downscale to 1440p), which should give you even better image quality.
 
Watch Dogs and Cyberpunk look noticeably better with RT. Even the new consoles use RT in Watch Dogs.

I would even say that it's better to sacrifice some unimportant settings (use optimized settings) and then enable RT + DLSS; that should fix the performance issues. On AMD you can't do that, because the RT performance hit is bigger and there's no DLSS.

You have to bake RT into every game. Stop pretending it's universal across games. By the way, there are a huge number of gamers who play at 1440p. Are you claiming the 6900 XT can't game at 4K? 😒
 
Am I right that if AMD hadn't decided to offer these, the slim fraction of chips that came out defect-free enough would just have been sold as 6800 XTs with the extra cores or whatever turned off?

If the underlying physical reality is that a tiny supply of near-perfect chips will be produced and something ought to be done with them, I don't see any problem letting a handful of whales get their trophy and contribute extra industry revenue. I don't see how it takes anything away from the rest of us. There are plenty of rare items that sell for high prices that don't even have any performance or VRAM at all (although most of those hold their value much better and are prettier to look at).


While I philosophically agree with your second paragraph, my issue is with the trend of across-the-board price increases that have been led by nvidia. My desire for AMD to launch at lower prices has nothing to do with any cry for fair play or fair pricing, it is wholly based in greed. I want (note the subject) a better part at a lower price, and I (again, note subject) see no reason in the supply chain for that to be unreasonable. (Ignoring demand. Demand is such that the prices logically should be about double.)

Now that I've explained my greed, I also want to say that I'm not mad at either company for the prices. People are paying those prices and that is entirely their prerogative. I do hope AMD is able to make some money and develop even better products (and keep nvidia in check) and if these over-inflated prices lead to that, so be it. Just don't expect me to buy at these ridiculous prices.
 
I'm actually using a 4GB 5500 XT with a Ryzen 2700X... I'm aiming for the 3060 Ti (tried to get the FE) or the 6700 XT.
If I were you, I wouldn't bother with the RX 6000 series unless you're in dire straits. I have a feeling that RDNA3 (which isn't really all that far away) will be far more worth it than this generation. This generation has been a (not-so-funny) comedy of errors from both nVidia and AMD and probably won't be looked back on too fondly. AMD's ray tracing sucks (not important, but it's still there), their pricing is the worst that I've seen in 25 years and their performance isn't all that compelling.

As a funny aside, I tried playing Godfall on one of my old R9 Furies at 1080p medium and 1080p low with V-Sync, without issue. Now, Godfall isn't that great a game overall, but it sure looks amazing no matter what graphics settings you're using.
 
But suppose you were in the market for a new card, would you buy the one that has substandard RT performance and compatibility, for no other particular benefits to speak of?
I've heard the Navi cards' rasterisation offers smoother gameplay. Also, in one review comparing Navi to Ampere with ray tracing on (I think it was in Watch Dogs), I noticed that Navi brought out more detail than Ampere: in the same scene, the guy had a backpack and other items that weren't there in the Ampere capture. Regarding RT, on Ampere the guy's trousers reflected in the puddle were so glittery you couldn't even tell they were trousers, and the water tank looked like it was coated in jelly, while on Navi the scene was realistic and the detail was on point.
 
My desire for AMD to launch at lower prices has nothing to do with any cry for fair play or fair pricing, it is wholly based in greed. I want (note the subject) a better part at a lower price...
That's true for everyone advocating against the price of these halo cards. You're simply more honest than most. :)
 
I think there are two main reasons for the 6900XT to exist:
- to show that AMD can make a high-end card
- to keep nVidia from charging too much at the high end while lowering prices in the low and mid range to combat AMD's cards.

Understand that the 6900 XT is a full die and the 6800 series is the same die but with some CUs disabled. The reason for this is not just quality standards but to get higher yields from a wafer. The design itself is inferior. Nvidia knew what AMD was cooking up; hence why it priced the new 3x00 series so aggressively.
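To illustrate why harvesting partial dies matters, here's a minimal sketch using a simple Poisson defect model; Navi 21's roughly 520 mm² die area is real, but the defect density and the "up to two defects is salvageable" cutoff are assumptions purely for illustration, not AMD's actual figures:

import math

# Simple Poisson defect model: toy numbers, not AMD's real yield data.
die_area_cm2 = 5.2          # Navi 21 is roughly 520 mm^2
defect_density = 0.1        # assumed defects per cm^2
mean_defects = die_area_cm2 * defect_density

def p_defects(k):
    """Probability that a die has exactly k random defects (Poisson)."""
    return math.exp(-mean_defects) * mean_defects**k / math.factorial(k)

perfect = p_defects(0)                           # full-die 6900 XT candidates
salvageable = sum(p_defects(k) for k in (1, 2))  # assume these can ship with CUs fused off

print(f"fully functional dies:                {perfect:.1%}")
print(f"salvageable with some CUs disabled:   {salvageable:.1%}")
print(f"usable dies when cut-down SKUs exist: {perfect + salvageable:.1%}")

With those made-up numbers, allowing partially disabled dies takes usable output from about 60% of dies to nearly all of them, which is the whole point of the 6800-series bins.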


 
Very level headed adult response there.

And yes, children and those who still live at home just don't have the life experience yet to get it. You will see pages and pages of children arguing in forums over one GPU getting 10 fps more than another. I find that as we progress and computers get easier to use and build, you have far fewer technical users and more children. And anything CPU vs CPU or GPU vs GPU attracts them like flies to honey.

It also doesn't help that some review sites run clickbait titles; they just seem to want the fighting for the extra clicks and ad revenue it brings, I guess.

With age comes wisdom: a lot of this stuff is just not worth the time to argue over. I get it, people are bored in lockdown due to COVID, but there are better things one can do with their time than obsess over hardware and what other people choose to buy, etc.
Very level headed adult response there... With age comes wisdom: a lot of this stuff is just not worth the time to argue over.

The thing is, to me, the 3080 is the big winner here. The fractional improvement in raster you get with the 6800 XT is totally consumed by the huge RT performance difference. And it's not like AMD can just enable their own DLSS; NV's DLSS uses its own hardware acceleration (Tensor cores).

But with the 3090/6900 issue, what really fries my brain is how tiny the difference is. I have a 3080 FTW3 Ultra from EVGA. At factory clocks it's within 4% of the 3090 FE, and it can match it with a small OC. And I paid $770-ish for it (less, actually, since I sold my 1080 Ti for $450 the day of the announcement, so I only paid $350 or so to upgrade, and used my 970 for a few weeks until the 3080 came through on backorder).

I'm super excited to see AMD compete at the high end again, even if they're not likely to really gain any market share (it looks like they're prioritizing fab space for CPUs right now). That's going to be AMD's weak link going forward: their reliance on TSMC's leading-edge node availability.

If Nvidia and Apple buy up a big chunk of capacity (and Intel too, with its GPU using TSMC), and then Intel actually manages to get its cutting-edge fabs online, AMD will get squeezed. It's part of why they're still staying focused on CPUs: as chiplets, they have a much higher yield.
 
If Nvidia and Apple buy up a big chunk of capacity (and Intel too, with its GPU using TSMC), and then Intel actually manages to get its cutting-edge fabs online, AMD will get squeezed. It's part of why they're still staying focused on CPUs: as chiplets, they have a much higher yield.

Out of the four here, which two do you think TSMC will favor?
 
Out of the four here, which two do you think TSMC will favor?

AMD or Apple. AMD is their biggest customer right now, but Apple just went all in on ARM, so it's not like they can go back and use Intel's fabs. So if Apple wants first pick and all of the capacity that comes with it, they're going to get it (like they did with 5nm).

The problem is that these things are not very flexible. All of those companies bought up that capacity a while ago, based on data from 2019 or even 2018. If something happens that vastly increases demand, it's much harder for AMD to just increase output of the product that's in demand (Intel could just cut back on Atoms, or desktop CPUs, or whatever was lower value, and backfill with the product that was in higher demand). It's harder for AMD or Apple to bump each other to make extra room for whatever is in demand.

I suspect Samsung will play a larger role in 2021 and 2022, and hopefully Intel will get it together and put pressure on AMD and Apple again by 2022 at least.

What happens after 2022 is going to be interesting. We have RISC-V arriving with a bang in a way most of the industry didn't expect (performance competitive, rather than simply price/power competitive). But on the gaming side we will soon have two fully integrated CPU/GPU makers, and possibly a third (Intel). Intel and AMD are something we are familiar with, but Nvidia's acquisition of ARM changes things. What if Nvidia decides to work with Microsoft to make a real and useful Windows ARM SoC? What if they take Apple's idea and run with it? Could we see a fully integrated CPU/GPU combo where the ARM CPU(s) share very fast VRAM with the GPU, or other tricks?

Right now AMD is building with a legacy, CPU-centric and x86-centric mindset. The GPU is designed to work in the x86 CPU's playground. But what happens if we reverse that? What happens if we make the GPU the central player and design the CPU and the playground to best suit the GPU's goals of gaming, rendering, multimedia, etc.? Even the new consoles are not really doing that to any extent; at their core, they are still low-to-medium-grade PCs.
 
AMD or Apple. AMD is their biggest customer right now, but Apple just went all in on ARM, so it's not like they can go back and use Intel's fabs... But Nvidia's acquisition of ARM changes things. What if Nvidia decides to work with Microsoft to make a real and useful Windows ARM SoC? What if they take Apple's idea and run with it?

I'm not too worried about AMD at the moment; they already have some ARM designs, and from what I've heard they will have a product competitive with Apple's M1 next year.
 
Where did you see bias? The RX 6900 XT may be better value than the RTX 3090, but that's not saying much. It is still terrible value, as you can see in the graphs you probably didn't even look at; if you had, everything would be clear as day. You get almost exactly the same performance from the RX 6800 XT for way less money.

The bias is quite simple to see, unless you're blind. Better scores, lower price, yet it gets a lower rating than the 3090? Not sure how else to spell it out. The numbers don't lie.
 
The two that bid the highest, of course. There are no illusions of sentimental attachments between multinational corporations.
I agree with you on there not being room for sentimental attachments, but smart companies do long-term planning.
Doing so means that you prioritize stable long-term revenue streams over limited-time opportunities, especially if the latter hurt the former.

In this case, Intel has their own fabs, and unless they close them down, any deal with TSMC will be temporary (until Intel gets their own processes in order or eliminates AMD). Would it make business sense for TSMC to hurt a loyal long-term customer for a short-term gig?
 
In this case, Intel has their own fabs, and unless they close them down, any deal with TSMC will be temporary... Would it make business sense for TSMC to hurt a loyal long-term customer for a short-term gig?
Intel has no 5nm fab. It's already been reported that Intel will be using some of TSMC's 5nm capacity, starting in 2022.

I don't know Intel's plans, but they can certainly fab all their 5nm chips at TSMC, now and forever, without ever shuttering any of their current facilities. The gig very well may last forever.
 