AMD at Computex 2021: FSR vs. DLSS, Ryzen 5000 APUs, and Radeon RX 6000M GPUs

I'm skeptical about FSR. What I keep thinking is: if it's really possible to have something close to DLSS 2.0 quality without machine learning, and it can be implemented even on hardware that is 6+ years old... why is it only happening now? Honestly, this could have been a huge boost for the PS4 and XBO, and it sounds like it could have been implemented even on that hardware. The XOX and PS4 Pro could both have been pretty capable 4K machines with this tech; it certainly would have been better than the checkerboard rendering used so often on the PS4 Pro. Yet the tech only arrives now, after DLSS 2.0 is starting to be rather successful. The idea is not new, though, and I'm not aware of any new math. The idea of superior upscaling tech has been around for a very long time.

I'm not saying it's impossible that this will be as good as DLSS 2.0, I'm just saying I'm extremely skeptical. This is not an "Nvidia is better" rant. In fact, this tech sounds great: if it can be implemented on any modern GPU, we all benefit, whether you have an Nvidia or AMD GPU or you are a console gamer. But when things sound too good to be true, they usually are. Nvidia invested a lot in DLSS 2.0 and of course swears by the machine-learning aspect of it. Maybe they'll have egg on their face (though they still deserve a pat on the back, because FSR is obviously a response to DLSS 2.0), but, knowing how much hype comes out of this industry, I'm going to have to say "show me" before I get excited about this one.
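(For anyone who hasn't run into the checkerboard rendering mentioned above, here is a minimal conceptual sketch in Python/NumPy of the core idea: shade only half the pixels each frame in a checkerboard pattern and fill in the rest from the previous frame. The render_full callback is a hypothetical stand-in for the renderer's expensive per-pixel shading; real console implementations are far more sophisticated, reprojecting the reused pixels along motion vectors rather than copying them in place, which is why fast motion is where checkerboarding artifacts tend to show up.)

```python
import numpy as np

def checkerboard_frame(render_full, prev_frame, frame_idx):
    """Conceptual checkerboard rendering: shade only half the pixels
    each frame (alternating the checkerboard phase between frames)
    and fill the missing half from the previous frame's output.

    render_full(y, x) -> RGB triple; a stand-in for real shading work.
    prev_frame: (h, w, 3) float array holding the last frame's result.
    """
    h, w, _ = prev_frame.shape
    frame = np.empty((h, w, 3), dtype=np.float32)
    phase = frame_idx % 2  # which half of the board gets fresh pixels
    for y in range(h):
        for x in range(w):
            if (x + y) % 2 == phase:
                frame[y, x] = render_full(y, x)  # freshly shaded
            else:
                frame[y, x] = prev_frame[y, x]   # reused from history
    return frame
```

Only about half the pixels are shaded per frame, which is where the performance win comes from; the quality question is entirely about how the other half gets filled in.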
 
The biggest issue for them is that it still needs per-game support; it's not something as simple as just turning it on.

With DLSS years ahead of them in support, that lead will likely only grow, and with time more and more generations of Nvidia GPUs will support it.

Sure, I guess it's nice for those guys holding on to hardware from 5-10 years ago, trying to milk everything they can out of it, but for most people it's going to be a pretty simple choice when upgrading: get the one with more support and more benefit.
Yes, it needs per-game implementation, but the question is which is easier to implement with a bigger effect (in terms of target audience) if devs / studios have to foot the bill themselves?

From my understanding, Nvidia has mostly done the DLSS implementation for games itself, or at least supported it in one way or another.

Apart from that, I cannot overstate how much I have come to dislike the term 'milk' on IT-related forums.

Also, what's wrong with wanting things you bought to stay relevant as long as possible? And for a GPU like the GTX 1650 Super, which was released a year and a half ago, it's not really asking too much.
 
I'm skeptical about FSR. What I keep thinking is: if it's really possible to have something close to DLSS 2.0 quality without machine learning, and it can be implemented even on hardware that is 6+ years old... why is it only happening now? Honestly, this could have been a huge boost for the PS4 and XBO, and it sounds like it could have been implemented even on that hardware. The XOX and PS4 Pro could both have been pretty capable 4K machines with this tech; it certainly would have been better than the checkerboard rendering used so often on the PS4 Pro. Yet the tech only arrives now, after DLSS 2.0 is starting to be rather successful. The idea is not new, though, and I'm not aware of any new math. The idea of superior upscaling tech has been around for a very long time.

I'm not saying it's impossible that this will be as good as DLSS 2.0, I'm just saying I'm extremely skeptical. This is not an "Nvidia is better" rant. In fact, this tech sounds great: if it can be implemented on any modern GPU, we all benefit, whether you have an Nvidia or AMD GPU or you are a console gamer. But when things sound too good to be true, they usually are. Nvidia invested a lot in DLSS 2.0 and of course swears by the machine-learning aspect of it. Maybe they'll have egg on their face (though they still deserve a pat on the back, because FSR is obviously a response to DLSS 2.0), but, knowing how much hype comes out of this industry, I'm going to have to say "show me" before I get excited about this one.
Would it satisfy you to hear that such a move could only be done by AMD, and that AMD lacked the resources to make it?

And now that it has acquired some resources, it can comfortably hire engineers to challenge innovations from bigger companies.
 
The article here already mentions that FSR produces inferior image quality and blur compared to native resolution, and that's based on screenshots provided by AMD. It will likely look even worse when tested by third-party reviewers.

So it's safe to assume it's worse than DLSS 2.0. Also, it works on both Nvidia and AMD.

Nvidia is the only winner here
For consoles I could see FSR being a huge boon. On the PC side of things, gamers tend to be much pickier. Well, for one thing they have the option to see the difference between native and upscaled. If the quality is not there, then what is the actual point to FSR? Nvidia's DLSS 1.0 was lambasted for not really delivering on its claims. Now, if you have an older GPU and FSR provides some benefit, like letting you actually play newer games with decent quality, that's one thing, but for the high-end market that wants both high fps and ultra visuals, blur or lower image quality than DLSS 2.0 won't be well received.
 
I'm skeptical about FSR. What I keep thinking is: if it's really possible to have something close to DLSS 2.0 quality without machine learning, and it can be implemented even on hardware that is 6+ years old... why is it only happening now? Honestly, this could have been a huge boost for the PS4 and XBO, and it sounds like it could have been implemented even on that hardware. The XOX and PS4 Pro could both have been pretty capable 4K machines with this tech; it certainly would have been better than the checkerboard rendering used so often on the PS4 Pro. Yet the tech only arrives now, after DLSS 2.0 is starting to be rather successful. The idea is not new, though, and I'm not aware of any new math. The idea of superior upscaling tech has been around for a very long time.

I'm not saying it's impossible that this will be as good as DLSS 2.0, I'm just saying I'm extremely skeptical. This is not an "Nvidia is better" rant. In fact, this tech sounds great: if it can be implemented on any modern GPU, we all benefit, whether you have an Nvidia or AMD GPU or you are a console gamer. But when things sound too good to be true, they usually are. Nvidia invested a lot in DLSS 2.0 and of course swears by the machine-learning aspect of it. Maybe they'll have egg on their face (though they still deserve a pat on the back, because FSR is obviously a response to DLSS 2.0), but, knowing how much hype comes out of this industry, I'm going to have to say "show me" before I get excited about this one.
I have a fair degree of skepticism too. I had initially assumed, given the improvements AMD had made in tensor math support with RDNA2, that FSR would involve the use of motion vectors and a DNN. However, with neither being used and it simply being a spatial, rather than temporal, upscaling set of algorithms, AMD will need to have some shader magic going on to avoid the image quality issues that spatial upscaling always generates. They may well have cracked all or some of these problems, but until one can see the code and the end results, I think everyone just needs to be a little mindful of these matters.
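(To make the spatial-versus-temporal distinction above concrete, here is a deliberately naive spatial upscaler in Python/NumPy: every output pixel is interpolated purely from pixels of the current low-resolution frame, with no frame history and no motion vectors. This is only an illustration of the class of algorithm, not AMD's actual filter.)

```python
import numpy as np

def upscale_bilinear(src, scale):
    """Naive spatial upscale of a float image in [0, 1].

    Each output pixel is blended from the four nearest pixels of the
    *current* frame only -- the defining trait of spatial upscaling."""
    h, w, c = src.shape
    out_h, out_w = int(h * scale), int(w * scale)
    out = np.empty((out_h, out_w, c), dtype=np.float32)
    for oy in range(out_h):
        for ox in range(out_w):
            # Map the output pixel back into source coordinates,
            # clamped so the 2x2 neighbourhood stays in bounds.
            sy = min(oy / scale, h - 1.0001)
            sx = min(ox / scale, w - 1.0001)
            y0, x0 = int(sy), int(sx)
            fy, fx = sy - y0, sx - x0
            top = src[y0, x0] * (1 - fx) + src[y0, x0 + 1] * fx
            bot = src[y0 + 1, x0] * (1 - fx) + src[y0 + 1, x0 + 1] * fx
            out[oy, ox] = top * (1 - fy) + bot * fy
    return out
```

A temporal upscaler like DLSS 2.0 takes the previous output frame and per-pixel motion vectors as additional inputs and reprojects that history into the new frame; that extra data is where most of the quality, and most of the per-game integration work, comes from.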
 
Would it satisfy you to hear that such a move could only be done by AMD, and that AMD lacked the resources to make it?

And now that it has acquired some resources, it can comfortably hire engineers to challenge innovations from bigger companies.
Well, not really, because any developer, including MS or Sony, could have worked on similar tech, since it's open source and hardware agnostic. It's not like it's specific to AMD if they can run it on a GTX 1060. Why didn't MS and/or Sony implement something like this on previous-gen consoles? It would have been huge if they had tech like this and were suddenly doubling the performance of their rival with little to no loss of visual fidelity. So no, that answer doesn't satisfy, but we'll see in a few weeks, I suppose.
 
I think everyone just needs to be a little mindful of these matters.
That is always true until independent reviews are done.

What I like about this and other situations: DLSS was absolute crap when it was released; now it's OK.

AMD has its 1.0 release, and people are expecting, actually demanding, that from the get-go it must be superior to Nvidia's DLSS 2.1.

And overall, I personally hate how people keep touting RT and DLSS when, as of right now, they're still only available on some titles, and it will take time for them to be available in all games.

Which would take at least a good couple of years, and by that time both AMD and Nvidia will have released newer GPUs.

TL;DR: Worry about what you can play now and perhaps in a couple of years; until then, this is marketing fluff.
 
Very interesting how they decided that the APUs will be the lower-cost versions of the chips this time around: this is likely because they trust their more aggressive auto-overlocking enough that they don't really have many cut-down regular CPUs.

And while I realize the APUs are fairly unpopular among these parts, the performance we've seen from the OEM versions is really quite good for integrated graphics: it can easily compete with a GT 1030 now and can play many games at 1080p on medium to low settings. It only really struggles with things like Cyberpunk 2077, but AMD is confirming they might be able to enable FSR on at least some of these higher-end games. It has the potential to be the best "better than nothing" option out there, unless of course Nvidia manages to get a steady supply of 3050s out there, which they still might, though not without its own set of disadvantages (if it's not attractive to miners, it's because of limited VRAM, which is also not desirable for things like DLSS 2.0).
"auto-overlocking" -- is that a word? But here is a definition: "overlock is a kind of stitch that sews over the edge of one or two pieces of cloth for edging, hemming, or seaming." Good to know!
 
...it isn't possible for Radeon to be more disappointing to me as a product.

Dude, the original Radeon came out in 2000; how are you even still using it in games two decades later? Maybe move on from playing Black & White and go all the way to Skyrim. It's time to upgrade and stop beating yourself up with 20-year-old tech.
 
For consoles I could see FSR being a huge boon. On the PC side of things, gamers tend to be much pickier. Well, for one thing they have the option to see the difference between native and upscaled. If the quality is not there, then what is the actual point to FSR? Nvidia's DLSS 1.0 was lambasted for not really delivering on its claims. Now, if you have an older GPU and FSR provides some benefit, like letting you actually play newer games with decent quality, that's one thing, but for the high-end market that wants both high fps and ultra visuals, blur or lower image quality than DLSS 2.0 won't be well received.
The actual point? Increasing, or in some cases enabling, playability.
Personally, I find upsampling much more relevant for lower-end / older products than for new high-end ones. For someone who's stuck with an older system and cannot afford, or does not want, to spend big on the latest hardware, this is great.

If Steam stats are to be believed, 1060-class GPUs are still the most widespread.

And tbh, if I were on an iGPU or an older low-end card and FSR allowed me to play games that previously would stutter at 720p low, I could not care less if trees in the distance looked blurry, as long as the game ran smoothly and did not look crappy (particularly compared to the lowest possible settings that may have been necessary before).

But yes, if I spent big on a high-end card that would not be acceptable; in fact, I would expect it to play all games with high quality and fps, not just a select few.
 
Well, not really, because any developer, including MS or Sony, could have worked on similar tech, since it's open source and hardware agnostic. It's not like it's specific to AMD if they can run it on a GTX 1060. Why didn't MS and/or Sony implement something like this on previous-gen consoles? It would have been huge if they had tech like this and were suddenly doubling the performance of their rival with little to no loss of visual fidelity. So no, that answer doesn't satisfy, but we'll see in a few weeks, I suppose.
That's the thing. MSFT, Sony, etc. cannot commit resources to open source the same way AMD can. That's why they never attempted this.
In a way, AMD's ideals of business operations differ from those of most other companies, which operate on a strongly self-centered paradigm.
 
FSR is going to take a lot more work if that Godfall shot is anything to go by. Open source is of course important, but if it's still considerably inferior to a rival's alternative, it doesn't add a great deal of value.

DLSS manages to end up looking like native resolution or even better; quite frankly, that shot makes FSR look worse than the checkerboarding employed on consoles. Basic spatial upscaling. We'll wait and see how it works out.

Looking forward to AMD's mobile GPUs. The way the GPU market has been this past year, I have contemplated just buying a gaming notebook over chasing desktop upgrades!
 
That is always true until independent reviews are done.

What I like about this and other situations: DLSS was absolute crap when it was released; now it's OK.

AMD has its 1.0 release, and people are expecting, actually demanding, that from the get-go it must be superior to Nvidia's DLSS 2.1.

And overall, I personally hate how people keep touting RT and DLSS when, as of right now, they're still only available on some titles, and it will take time for them to be available in all games.

Which would take at least a good couple of years, and by that time both AMD and Nvidia will have released newer GPUs.

TL;DR: Worry about what you can play now and perhaps in a couple of years; until then, this is marketing fluff.
I don't think it has to be as good as DLSS 2.1. But it does have to be significantly better than the traditional upscaling techniques that already exist but are still rarely implemented. For example, 4A implemented an upscaling technique for AMD's 6000 series cards in the recently released Metro Exodus Enhanced Edition. They also implemented DLSS 2.1. It's very obvious that DLSS is the superior tech. Interestingly, 4A's stated reason for not implementing FSR instead was that they did not expect it to be much better than their own technique, so either they are guessing, or they have some inside information on what AMD has to offer. If AMD's solution isn't much better than 4A's, then it might actually backfire for AMD, as it will justify the use of ML and tensor cores in Nvidia hardware.
 
I have a fair degree of skepticism too. I had initially assumed, given the improvements AMD had made in tensor math support with RDNA2, that FSR would involve the use of motion vectors and a DNN. However, with neither being used and it simply being a spatial, rather than temporal, upscaling set of algorithms, AMD will need to have some shader magic going on to avoid the image quality issues that spatial upscaling always generates. They may well have cracked all or some of these problems, but until one can see the code and the end results, I think everyone just needs to be a little mindful of these matters.
Everyone is talking about how this is huge for AMD, and it might very well be. But if it falls significantly short of DLSS 2.1 in terms of quality, it might actually justify RTX and tensor cores. AMD had to respond to DLSS; it is getting too much traction, and of course it's proprietary Nvidia tech. But it's much trickier, I think, than people realize. It kind of has to be good. If the selling point is simply that it works on more GPUs and gives you higher fps, but the image quality falls significantly short of DLSS, then does it really live up to the hype? Is it any better than 4A's solution in Metro Exodus Enhanced? The Godfall image from the GTX 1060 seems to be the only high-quality image available, and there is a massive difference in sharpness between the native and upscaled image, and that is in "quality mode". This is most notable on the trees with pink leaves; they're a lot blurrier on the right. If this image represents what we can expect from FSR, then I think it's going to lag quite far behind DLSS.

What AMD has going for it more than anything is its lock on the console market. This ensures that whatever solution can be implemented on the PS5 and XSX will likely make it to the PC versions of those games. Because of that, I do believe AMD's solution will be more widely used, but it's possible that many developers will still implement DLSS, as 4A did with Metro EE, if DLSS still offers visual benefits.
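(For what it's worth, the kind of sharpness gap described above can be checked objectively rather than by eyeballing screenshots. Below is a minimal sketch of one common, crude metric, the variance of the Laplacian; load_crops is a hypothetical helper, and this is just one way a reviewer might quantify the native-versus-upscaled difference.)

```python
import numpy as np

def sharpness_score(gray):
    """Variance of a 3x3 Laplacian response -- a common, if crude,
    proxy for perceived sharpness (higher = stronger edges)."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

# Hypothetical usage: compare matching greyscale crops of the native
# and FSR-upscaled Godfall shots (loaded elsewhere as 2D float arrays).
# native_crop, upscaled_crop = load_crops(...)
# print(sharpness_score(native_crop), sharpness_score(upscaled_crop))
```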
 
Of course the biggest disappointment was that no Zen 3 Threadrippers were announced.
However, given the timeframe provided for the stacked cache that was previewed as a future technology, could it be that the Zen 3 Threadripper will be one of the first products to have that tech when it finally comes out?
 
Of course the biggest disappointment was that no Zen 3 Threadrippers were announced.
However, given the timeframe provided for the stacked cache that was previewed as a future technology, could it be that the Zen 3 Threadripper will be one of the first products to have that tech when it finally comes out?
I'm wondering if they're just gonna skip it and go to Zen 4 Threadrippers in 18 months (or whenever they're ready)... sad, as this was the only thing I was interested in...
 
The article here already mentions that FSR produces inferior image quality and blur compared to native resolution, and that's based on screenshots provided by AMD. It will likely look even worse when tested by third-party reviewers.

So it's safe to assume it's worse than DLSS 2.0. Also, it works on both Nvidia and AMD.

Nvidia is the only winner here

And it has not even come out yet; give it the 18 months Nvidia had, and then we can compare apples to apples.
 
I just wonder if FSR will be utilised on the new AMD-driven consoles.

All of this has to be implemented by the developers; they make those decisions, NOT AMD. I am certain that AMD is going to be pushing this for newer games only. Old consoles are VERY unlikely, and it was never mentioned in the presentation.
 
I'm skeptical about FSR. What I keep thinking is: if it's really possible to have something close to DLSS 2.0 quality without machine learning, and it can be implemented even on hardware that is 6+ years old... why is it only happening now? Honestly, this could have been a huge boost for the PS4 and XBO, and it sounds like it could have been implemented even on that hardware. The XOX and PS4 Pro could both have been pretty capable 4K machines with this tech; it certainly would have been better than the checkerboard rendering used so often on the PS4 Pro. Yet the tech only arrives now, after DLSS 2.0 is starting to be rather successful. The idea is not new, though, and I'm not aware of any new math. The idea of superior upscaling tech has been around for a very long time.

I'm not saying it's impossible that this will be as good as DLSS 2.0, I'm just saying I'm extremely skeptical. This is not an "Nvidia is better" rant. In fact, this tech sounds great: if it can be implemented on any modern GPU, we all benefit, whether you have an Nvidia or AMD GPU or you are a console gamer. But when things sound too good to be true, they usually are. Nvidia invested a lot in DLSS 2.0 and of course swears by the machine-learning aspect of it. Maybe they'll have egg on their face (though they still deserve a pat on the back, because FSR is obviously a response to DLSS 2.0), but, knowing how much hype comes out of this industry, I'm going to have to say "show me" before I get excited about this one.

One major point you failed to understand is that FSR doesn't need to be better; it just needs to be competitive. There are A LOT of people who want choices, and this is an excellent choice to have.
 
One major point you failed to understand is that FSR doesn't need to be better; it just needs to be competitive. There are A LOT of people who want choices, and this is an excellent choice to have.
My first sentence says "..something that is close to DLSS 2.0 quality..". I never said it has to be better. But here's the thing: it does have to be close. If it turns out to be just spatial upscaling, which has been available, but rarely used, for years, then it's not a competitor to DLSS. This is pretty obvious with 4A's solution for AMD in Metro EE. They used a spatial upscaling technique and the results were okay, but they were not DLSS; that's why they still implemented DLSS for Nvidia cards.

I think the point being missed is actually this: AMD is responding to DLSS 2.0 because DLSS is starting to become a real selling point for Nvidia's RTX cards. They are responding with a purely spatial upscaling technique that can be used on old hardware. I think that is their selling point here (again), that it can be utilized on a much wider range of hardware, but it will likely yield inferior results to DLSS. This worked for FreeSync, but the gamble might not pay off here. FreeSync is inferior to G-Sync, but G-Sync required the purchase of expensive monitors to take advantage of it. Here, though, we are dealing with image quality, and we saw what happened when DLSS 1.0 didn't live up to expectations. So there is a chance, if FSR isn't that great, that it actually justifies RTX and tensor cores.
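(To illustrate what a purely spatial "upscale then sharpen" pipeline of the sort described above might look like, here is a toy Python/NumPy sketch: a nearest-neighbour resample followed by an unsharp mask. The unsharp mask is a stand-in for a smarter contrast-adaptive filter along the lines of AMD's existing Radeon Image Sharpening; none of this is FSR's actual code, which had not been published at the time.)

```python
import numpy as np

def unsharp_mask(img, amount=0.5):
    """Sharpen a float image in [0, 1] by boosting each pixel's
    difference from a local 3x3 box blur."""
    pad = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    blur = np.zeros_like(img, dtype=np.float32)
    for dy in range(3):          # sum the nine shifted copies...
        for dx in range(3):
            blur += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blur /= 9.0                  # ...to get the box blur
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

def spatial_upscale_pipeline(low_res_frame, scale):
    """Hypothetical spatial pipeline: resample up (nearest-neighbour,
    integer scale), then sharpen to restore apparent crispness."""
    up = np.repeat(np.repeat(low_res_frame, scale, axis=0), scale, axis=1)
    return unsharp_mask(up.astype(np.float32))
```

The sketch also shows the limitation at issue: each frame is reconstructed from its own pixels alone, so detail that was never rendered can only be accentuated, never recovered.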
 
I don't want to debate something that is not yet released and tested, but if FSR supports a wide variety of hardware, that's good news. Mobile and even handheld devices could benefit a lot, and even weaker hardware could make for a basic gaming station.
 
This thread is fun to catch up on: a healthy blend of good constructive discussion of the tech, some delightful premature conclusions on the success or death of FSR or DLSS, and it wouldn't be complete without a pinch of sanctimonious arrogance. Some of y'all are overthinking this, to a fault.

At the very least it means toxic users with literally nothing constructive to add to the conversation have outed themselves, bless the ignore button.
 
Remember, Betamax was superior to VHS:

FSR = works on 200 million pieces of hardware.
DLSS = works on 350,000 RTX cards.
That argument doesn't really work when the reality here would be Betamax players that could play both Betamax and VHS, while VHS players could ONLY play VHS.

Do you really think it would have gone the same way if that had been the situation?

I think not.

Again, I don't care about the one that supports everything old but weaker; I care about the one that supports everything (including the better option).

Again, maybe for old holdouts it's appreciated, and heck, even for RTX cards having ANOTHER option is always welcome...

But tell me how a future buyer, given the choice between one $600 card and another, where both have very similar performance in most regards but one has the option of two "resolution boosts" and the other can only use one, would feel that the one with FEWER options is the better choice?

Again, comparing this to FreeSync vs. G-Sync doesn't work, because FreeSync displays were cheaper and still offered similar performance elsewhere, while Nvidia needed that extra hardware and had higher costs.

We all know that Nvidia and AMD GPUs are effectively priced "the same", or close enough for it to be less important.

I've always said AMD would be killer if they could match Nvidia in most regards while being significantly less expensive.

But as we've seen with the 6000 series (and their CPUs), they'd much rather have even pricing and make more profit than be the absolute "best buy".
 
Yes, it needs per-game implementation, but the question is which is easier to implement with a bigger effect (in terms of target audience) if devs / studios have to foot the bill themselves?

From my understanding, Nvidia has mostly done the DLSS implementation for games itself, or at least supported it in one way or another.

Apart from that, I cannot overstate how much I have come to dislike the term 'milk' on IT-related forums.

Also, what's wrong with wanting things you bought to stay relevant as long as possible? And for a GPU like the GTX 1650 Super, which was released a year and a half ago, it's not really asking too much.
And look at the 5700 XT and all the RDNA1 cards released in a similar time frame that are effectively dead-end cards for where games are moving (you can't even run Metro Exodus Enhanced on any of them).
 
And look at the 5700 XT and all the RDNA1 cards released in a similar time frame that are effectively dead-end cards for where games are moving (you can't even run Metro Exodus Enhanced on any of them).

Metro Exodus Enhanced is a ray-traced graphical mode for Metro Exodus. It's not like it's a different game or anything. Games are not moving towards RT-only, as eliminating 80% of your previous customers would be an extremely stupid decision that would put your game studio out of business.
 