GeForce RTX 3080 vs. Radeon RX 6800 XT: 30 Game Benchmark

At the nearest Microcenter, there is only a 6800 XT in stock, for over a grand

I'll be waiting on that card upgrade until either more stock shows up closer to list price or Cyberpunk irons out its bugs, whichever comes first.

Too bad, seeing as I have all this Eth burning a hole in my pocket.
 
Yup, looking at Raja's work at both AMD and Intel, you can see he was overrated; I never liked him from day one. Lisa Su is another matter, and since day one I was right: Lisa knows what she's doing.
 
But then what do you play with?
Currently not playing anything, as I've sold my house and am temporarily living at my partner's until we find a property to purchase together. The house I'm living in right now doesn't have space for a desktop gaming setup.
 
"The RTX 3080 and RX 6800 XT are evenly matched" ?

AMD are ~50% slower in RT and have no alternative to DLSS. They may not support machine learning before RDNA3. In the biggest title launch in memory, CP2077, AMD again refuse to support RT due to their performance being so bad. Hitman 3, AMD-partnered, just launched looking like a throwback to 2015, but will add RT later. Again we see AMD trying to keep the discussion away from next-gen tech.

If you want RDNA2, buy a console, which does have some extra features over the PC RDNA2 chips.
 
What I find interesting is that the 6800 XT's additional VRAM doesn't seem to help. The 6800 XT is clearly a better performer at 1080P. But, as you increase resolution, the 3080 erases that advantage. One would think that additional frame buffer would benefit higher resolutions, but apparently not. Maybe 10GB is enough for 4K, at least for now.

Because RDNA2 is a budget console chip, they had to cheap out with slower GDDR6 and then use up die space on the GPU for a cache.
 
AMD are ~50% slower in RT and have no alternative to DLSS. They may not support machine learning before RDNA3
While the RX 6000 series are indeed a lot slower in games using DXR, the chips fully support all data formats involved in machine learning and support DirectML. No game currently uses it for a DLSS-like temporal upscaling system, but the option to do so is there.

Because RDNA2 is a budget console chip, they had to cheap out with slower GDDR6 and then use up die space on the GPU for a cache.
That argument suggests that the Ampere architecture is a "budget console chip" too, because the RTX 3070 uses GDDR6. But it can't be, because the RTX 3080 and 3090, which use the same architecture as the 3070, are using GDDR6X.

At the nearest Microcenter, there is only a 6800 XT in stock, for over a grand
In the UK, the likes of Scan have/had some RX 6800s in stock, but for £300 over the MSRP (same with eBuyer); no GeForce 3000 series whatsoever.
 
While the RX 6000 series are indeed a lot slower in games using DXR, the chips fully support all data formats involved in machine learning and support DirectML. No game currently uses it for a DLSS-like temporal upscaling system, but the option to do so is there.


That argument suggests that the Ampere architecture is a "budget console chip" too, because the RTX 3070 uses GDDR6. But it can't be, because the RTX 3080 and 3090, which use the same architecture as the 3070, are using GDDR6X.


In the UK, the likes of Scan have/had some RX 6800s in stock, but for £300 over the MSRP (same with eBuyer); no GeForce 3000 series whatsoever.

The 6800 XT is no budget console chip like you state. Yes, the 6000 series does support DirectML like you state; we are waiting for MS/AMD to finish super-resolution, which will run on every card that supports DirectML. AMD support DirectML via compute and Nvidia via tensor cores. A DLSS-like feature will come for everyone, we just have to wait for it to hit Xbox and then it will be carried over to Windows 10 as both use DX12U.

GDDR6/GDDR6X is rumored to be in short supply. https://www.pcinvasion.com/gddr6-shortage-nvidia-amd-supply/ https://www.hardwaretimes.com/nvidia-rtx-3080-shortage-reportedly-due-to-low-gddr6x-memory-supply/

Love how the outcome changed with the higher sample size; it just goes to show how close these cards are in raster. Not everyone is after 4K or DXR, but DXR is part of the latest version of DX (DX12U). Its performance is thus still important, though games will keep a raster fallback for some time.
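As a rough illustration of that sample-size point, here's a toy Python sketch. Every per-game number is made up, purely to show how a small lead over a handful of games can wash out once more titles are added:

```python
# Toy illustration of why the head-to-head outcome can shift as the game
# sample grows: with two closely matched cards, a handful of outliers can
# swing a small sample either way. Every number here is made up.
from statistics import geometric_mean

# Relative result per game: hypothetical 6800 XT fps divided by 3080 fps.
small_sample = [1.04, 0.98, 1.06, 1.01, 0.97, 1.05]            # 6 games
extra_games  = [0.95, 0.99, 0.96, 1.02, 0.94, 0.98, 1.00,
                0.97, 1.03, 0.96, 0.99, 0.95]                   # 12 more games

print(round(geometric_mean(small_sample), 3))                   # ~1.018: 6800 XT looks ahead
print(round(geometric_mean(small_sample + extra_games), 3))     # ~0.991: effectively a tie
```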

Check out this spreadsheet; the 10900K results agree with the TechSpot review. https://1drv.ms/x/s!Aokcqwdcgmie-Fv2m41CtvxT1Vc2?e=bPVx7O

A draw at 1080p and 1440p, then the 3080 pulls ahead at 4K. It's a great result for AMD, who have in the past been far behind.
 
No adaptive refresh rate on mine, just a straight 60 Hz, but I found turning motion blur to high helps; it looks natural too, and tearing becomes less distracting. If G-Sync is needed on a 4K TV, LG OLED seems to be the only way to go.

To me Cyberpunk alone was worth it... more games should have RTX/reflection/HDR implementations as great as Cyberpunk's. And yeah, DLSS is definitely necessary for 4K RTX Cyberpunk too; the Balanced setting works great. Haven't tried Medium. Control is alright, but the RTX in it is just OK. Watch Dogs Legion seems legit, and I'm looking forward to Dying Light 2 now, but no hurry, because I really need some rest after Cyberpunk, heh...
I have the LG Nano (49in, 120 Hz); I'm cheap, can't afford the OLEDs haha. The Nano does have variable refresh though, it's just not officially supported by Nvidia (at least not yet, maybe they'll add it soon). From what I have read, the OLEDs have the same "flickering" issue with G-Sync, though they are certainly a superior TV.
 
we are waiting for MS/AMD to finish super-resolution, which will run on every card that supports DirectML
It's less to do with Microsoft and/or AMD, and more down to game developers. The tools for doing a DLSS-like routine are already available; they're just waiting for somebody to implement them in their work. The upscaling can't be tacked on or run via the drivers, as it must be part of the rendering process, set after the main shading has been done, but before any post-processing is run through.
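For illustration only, here's a minimal Python sketch of the pass ordering described above; the pass names and the frame structure are hypothetical stand-ins, not any engine's actual API:

```python
# Minimal sketch of the pass ordering described above: the ML upscaler has to
# sit inside the frame, after the main shading but before any post-processing,
# so it can't simply be bolted on by the driver afterwards.
# All pass functions and the frame dict are hypothetical stand-ins.

def geometry_pass(scene, res):
    return {"scene": scene, "resolution": res, "passes": ["geometry"]}

def main_shading_pass(frame):
    frame["passes"].append("main shading")        # lighting at the lower render resolution
    return frame

def ml_upscale_pass(frame, output_res):
    frame["resolution"] = output_res              # DirectML / DLSS-style temporal upscale
    frame["passes"].append("ml upscale")
    return frame

def post_processing_pass(frame):
    frame["passes"].append("post-processing")     # bloom, tone mapping, film grain, UI
    return frame

def render_frame(scene, render_res=(2560, 1440), output_res=(3840, 2160)):
    frame = geometry_pass(scene, render_res)
    frame = main_shading_pass(frame)
    frame = ml_upscale_pass(frame, output_res)    # must come before post-processing
    return post_processing_pass(frame)

print(render_frame("demo scene")["passes"])
# ['geometry', 'main shading', 'ml upscale', 'post-processing']
```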

AMD support DirectML via compute and Nvidia via tensor cores.
On this point, any DirectX12-compliant GPU will support DirectML and by default, the standard shader units are used for doing the operations. Nvidia's Tensor Cores only kick in if specific coding requirements have been met (e.g. mixed precision data formats, matrix dimensions must be in multiples of 8 for FP16 and 16 for INT8) - if the upscaling model doesn't comply with these conditions, at any stage in the whole process, then the usual CUDA cores are used.
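A rough sketch of that eligibility rule, treating the dimension multiples mentioned above as the whole story (which they aren't; real Tensor-core dispatch depends on more than this):

```python
# Rough sketch of the Tensor-core eligibility rule described above.
# The dimension rules (multiples of 8 for FP16, 16 for INT8) are taken
# from the post above and are illustrative, not an exhaustive list of
# Nvidia's actual requirements.

def uses_tensor_cores(m: int, n: int, k: int, dtype: str) -> bool:
    """Return True if an m x k by k x n matrix multiply in the given data
    type would be eligible for Tensor-core execution; otherwise it falls
    back to the regular shader (CUDA) cores."""
    multiple = {"fp16": 8, "int8": 16}.get(dtype)
    if multiple is None:
        return False  # FP32 etc. run on the standard shader units
    return all(dim % multiple == 0 for dim in (m, n, k))

# Example: an upscaling layer with an awkward inner dimension misses out
print(uses_tensor_cores(1920, 1080, 64, "fp16"))  # True  (all multiples of 8)
print(uses_tensor_cores(1920, 1080, 63, "fp16"))  # False (falls back to CUDA cores)
```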
 
It's less to do with Microsoft and/or AMD, and more down to game developers. The tools for doing a DLSS-like routine are already available; they're just waiting for somebody to implement them in their work. The upscaling can't be tacked on or run via the drivers, as it must be part of the rendering process, set after the main shading has been done, but before any post-processing is run through.

On this point, any DirectX12-compliant GPU will support DirectML and by default, the standard shader units are used for doing the operations. Nvidia's Tensor Cores only kick in if specific coding requirements have been met (e.g. mixed precision data formats, matrix dimensions must be in multiples of 8 for FP16 and 16 for INT8) - if the upscaling model doesn't comply with these conditions, at any stage in the whole process, then the usual CUDA cores are used.

MS are the source for my post. You can run a DLSS-like model via DirectML using tensor cores; there is never going to be an issue with using tensor cores. Nvidia will either ignore super-resolution or update their tensor cores. Nvidia don't need super-resolution.

AMD will use compute because they have no other choice: https://www.guru3d.com/news-story/microsoft-eying-directml-as-dlss-alternative-on-xbox.html "NVIDIA however is utilizing dedicated hardware for DLSS through the Tensor cores, whereas AMD would need to run it over the compute engine". The reason for the use of compute, as you well know and understand, is the lack of tensor cores. A compute operation on an Nvidia card takes 4 cycles to execute, but tensor cores take 1 cycle. Compute is far slower, and this is a big deal for real-time rendering, as you fully understand. This means that, performance-wise, a complex compute model won't be fast enough.
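Taking the 4-cycles-vs-1-cycle figure in that post at face value, here's a quick back-of-the-envelope sketch; the clock speed is purely illustrative:

```python
# Back-of-the-envelope illustration of the 4:1 cycle claim in the post above.
# The numbers are that post's assumption plus an illustrative clock speed,
# not measured figures.

CYCLES_PER_MATRIX_OP = {"tensor_cores": 1, "shader_compute": 4}

def frame_budget_ops(gpu_clock_hz: float, fps: int, path: str) -> float:
    """How many of these matrix operations fit into one frame's time budget."""
    cycles_per_frame = gpu_clock_hz / fps
    return cycles_per_frame / CYCLES_PER_MATRIX_OP[path]

clock = 1.8e9  # 1.8 GHz boost clock, purely illustrative
print(frame_budget_ops(clock, 60, "tensor_cores"))    # 30000000.0
print(frame_budget_ops(clock, 60, "shader_compute"))  # 7500000.0 (4x fewer per frame)
```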

MS are researching super-resolution: https://wccftech.com/xbox-series-di...n-area-of-very-active-research-for-microsoft/ "Microsoft has actually been working on a 'super-resolution' AI upscaling technique based on DirectML for quite some time." MS are using compute because they have AMD as partners on Xbox.

The tools for doing a DLSS-like feature were provided to MS by Nvidia, as per the slide openly available all over the web.

You have made my post all about DLSS when the core part of it was about something else. Your whole post is you talking to your strawman. On my screen my post is more about sample size and game choice for reviews.

Yet my post is still, "This message is awaiting moderator approval, and is invisible to normal visitors."

DLSS part " 6800xt is no budget console chip like you state. Yes 6000 series does support DirectML like you state, we are waiting for MS/AMD to finish super-resolution which will run on every card that supports DirectML. AMD support DirectML via compute and NVidia via tensor cores. A DLSS will come for everyone, we just have to wait for it to hit xbox and then it will be carried over to windows 10 as both use DX12u. "

I expect this post will be, "This message is awaiting moderator approval, and is invisible to normal visitors." I won't be trying to post again.
 
On this point, any DirectX12-compliant GPU will support DirectML and by default, the standard shader units are used for doing the operations. Nvidia's Tensor Cores only kick in if specific coding requirements have been met (e.g. mixed precision data formats, matrix dimensions must be in multiples of 8 for FP16 and 16 for INT8) - if the upscaling model doesn't comply with these conditions, at any stage in the whole process, then the usual CUDA cores are used.

Tensor cores were developed the way they were specifically to boost machine learning applications, especially the neural networks that are used in DLSS and would presumably be the best way for a DirectML super resolution to work as well. Since tensor cores were developed to accelerate the math, and the math wasn't developed to take advantage of the tensor cores, it seems logical that developers would code a DirectML model in a way that lets the tensor cores provide a significant boost compared to regular shader cores. Just as Nvidia has an advantage in DirectX RT, I would expect Nvidia to have an advantage in DirectML. At least with the current GPUs.
 
Two comments:

At least in Germany, the 6800 XT is available (in stock) at several etailers (a good bit above MSRP), whereas the RTX 3080 is completely MIA and has been since launch. As for being "listed" at MSRP: stores can list all they want, but if the cards are not in stock, that's worthless.

Good to see you are getting increasingly positive towards RT and DLSS (unlike the apparently worthless SAM, which is supported on Intel mainboards now, as well as on 3xx and 4xx chipsets, and does work with Zen 2 and Zen+), so I guess there won't be an issue with Nvidia review samples going forward. It gives the impression (although I may be completely wrong) that their little email served its intended purpose.

I'm not contesting your overall summary, as that's supported by the benchmark results and DLSS plus RT are bonus features that you get, but rather the tone.
This is also a very lazy comment. We evaluate every product individually, and if you take a proper look at our review history you'll find significantly more positive Nvidia product reviews than AMD ones; that's just a fact, and it's why your comment is very lazy.

It's the same old crap and we often heard the opposite during the AMD FX era, "ohh they've got a strong Intel bias". Didn't matter that Intel made significantly better products, nope that couldn't be it.

Anyway your AMD favoritism argument falls apart when you start to look at our content. Go find the pro Radeon VII coverage, what about the Vega 56 and 64 coverage? How about the RX 590 day one review? Don't be lazy with your opinions, do some actual research first.

TechSpot/HUB has always been very positive about DLSS 2.0.
 
"The RTX 3080 and RX 6800 XT are evenly matched" ?

AMD are ~50% slower in RT and have no alternative to DLSS. They may not support machine learning before RDNA3. In the biggest title launch in memory, CP2077, AMD again refuse to support RT due to their performance being so bad. Hitman 3, AMD-partnered, just launched looking like a throwback to 2015, but will add RT later. Again we see AMD trying to keep the discussion away from next-gen tech.

If you want RDNA2, buy a console, which does have some extra features over the PC RDNA2 chips.
Is Nvidia paying illiterate people to post dumb **** on forums?
 
The 6800 XT and 3080 are evenly matched only at 1440p; at 4K the 3080 wins by a landslide (14 games vs. 3).
Without a DLSS alternative, 4K gaming is simply not viable for the 6800 XT in the near future.
The 6800 XT is already expensive at its $650 MSRP, not to mention the inflated street pricing. Had the 6800 XT been priced under $600 and produced in sufficient quantity, it would pose a real threat to the 3080.
 
Nvidia does seem to pull ahead in software features, and it's interesting to see the 3080 pull ahead in 4k with less VRAM.

On the other hand it's great to see AMD finally competing broadly on par again at least in raw performance metrics.

The elephant in the room here is that these are pretty much phantom products for all intents and purposes. The author makes a salient point in the article - they're just not worth the scalper prices going at the moment, if you can even find one...
 
Yes, you have a dedicated article about RT + DLSS, but in a comparison you can't compare something AMD doesn't have, and RT performance on Radeon is mediocre at best.
For me that's a BIG point, in 2021...
 
You're not trolling, you actually believe the things you say...which is oh my god even worse.
I do. And I am right. At some point TechSpot is going to have to test ray tracing; other tech outlets already do.

Imagine thinking we shouldn’t test a feature that gives the highest image quality on flagship GPUs lmao!
 
This article inspired me to make a bad joke:

Q: Why is comparing the RX 6800 XT and RTX 3080 like comparing a Warbird to a Star Destroyer?
A: Neither of them are real so comparisons are irrelevant anyway.

SUFFER!!! :laughing:
 
Trolling impressively @Shadowboxer. There's a whole context to the article and the conclusion. The GPUs are evenly matched in performance/cost per frame, except in the games where DLSS/RT is a factor (not hundreds of games, not even dozens) and for streaming. It's all there.
I'm pretty sure that he's the first user here that I ever put on ignore. I could feel my IQ dropping every time I read one of his posts. It got painful.
 
You're not trolling, you actually believe the things you say...which is oh my god even worse.
You're talking about someone who is a shill for Intel, nVidia and Donald Trump. There's nothing that would surprise me from him anymore. :laughing:
 
This is also a very lazy comment. We evaluate every product individually, and if you take a proper look at our review history you'll find significantly more positive Nvidia product reviews than AMD ones; that's just a fact, and it's why your comment is very lazy.

It's the same old crap and we often heard the opposite during the AMD FX era, "ohh they've got a strong Intel bias". Didn't matter that Intel made significantly better products, nope that couldn't be it.

Anyway your AMD favoritism argument falls apart when you start to look at our content. Go find the pro Radeon VII coverage, what about the Vega 56 and 64 coverage? How about the RX 590 day one review? Don't be lazy with your opinions, do some actual research first.
Steve, I don't know why you get so much flak. Maybe these people haven't been around long enough to see that you've been doing benchmarking and reviews for decades like I have. Back in 2008 (Jesus, how old are we now? :laughing:), your review of the VisionTek HD 4870 was the article that cemented my decision to buy my XFX HD 4870 1GB.

Through the years, I've always found you to be completely focused on two things, value and reality (which are the most important things to me). You always maintained your objectivity whether people liked it or not (and they often didn't like it). What you never did was become arrogant, which was both impressive and endearing. You always encouraged your readers to look at other reviews as well, not only accepting the fact that there were variables that you may not have accounted for, but actually pointing it out to the reader. Not everyone is so talented and so humble at the same time. In that regard, you have the wisdom of Sokrates.

This is the reason that you're one of (if not the) most respected reviewers in the English-speaking world today. Don't change a thing about yourself because your success and long-time fans like me prove that you've been doing it right.
 
I wonder what that job position looked like.

Nvidia: Seeking blithering ***** to defend our honor using the most piss poor arguments you can concoct.

(lol this is a joke)
I know you're joking Steve, but Shadowboxer does exactly that in every single post that I've ever read by him. :laughing:
 