FidelityFX Super Resolution is AMD's answer to Nvidia's DLSS, set to debut this year

tellmewhy

Posts: 102   +50
If it’s not based on neural networks, and it’s not an innovation bigger than neural networks, then it will not succeed.
Trained neural networks have dense external information embedded in them, which they use to correct the picture. Without access to that kind of information, AMD can’t produce a noticeably better image, no matter what algorithms they use (algorithms are also external information, but generic rather than dense). It’s a waste of time; they have to put tensor cores into the next generation and use neural networks.
If they can’t implement tensor cores (the hardware interface for handling neural networks efficiently) because of patents, then they are in really BIG trouble. In a few years that method will be the de facto standard.
 

Shadowboxer

Posts: 1,717   +1,322

Every single post in this article is from an AMD apologist who STILL doesn't understand this technology.

Image reconstruction requires much more than upsampling, and it's not going to come through drivers, only through decoding/accelerating hardware. You fools fault NVIDIA for "releasing it too soon", and now they're on top. Why? Because DLSS 1.0 did what it needed to do: provide Nvidia with hard data about which parts of NON-GAME-SPECIFIC DATA need to be decoded. That was the whole game; YOU CAN'T DO THIS MASS TESTING INTERNALLY.

Now, DLSS is automated, without any human training required, AND it's accelerated by tensor cores (which 1.0 was NOT). It literally adds image data in a same-resolution scenario; essentially, a 1080p screen looks better than it could without DLSS at that same resolution, while providing better-than-TAA supersampling FOR FREE, because the image data it's working with is already supersampled to 16K-32K (I think up to 32K?).

THEN, there are high refresh rate monitors, which are the true beneficiaries of this tech. Everyone talks about 4K this and that, yet DLSS is the difference between 120 Hz and 165 Hz. That is absolutely enormous, especially with VRR/HDMI 2.1.

You people sound like FOSS-only proponents: this tech MUST be open-sourced for it to be widespread and "good"... HA. Did you ever consider that Nvidia has had a clear vision of what they're doing with this tech? Because clearly they have.

Sorry you chose AMD this time around (if you could), but without a REAL answer to machine learning image reconstruction, you made the wrong choice.
Don’t you dare label me an AMD apologist ever again! I get enough stick from the members of this forum for not subscribing to their fantasies about how AMD are more “ethical”, or how Nvidia **** their pants whenever AMD releases another GPU. Or that DLSS 1.0 was shafting customers, etc. The funniest yet was how DLSS locks consumers into GeForce.

But by all means continue to speak the truth. I completely agree with everything else you just posted.

Nvidia’s 10-year, multi-billion-dollar RTX project paid off, and all AMD seem to have is some vague press releases with half-promises. It’s beginning to look bad for them; they may not be able to leverage as much money out of Sony and MS using the Radeon brand next time because of this.
 

meric

Posts: 320   +334
While I find AMD's hardware stronger and better in terms of architecture, I think Nvidia always finds better ways to exploit the strengths of its hardware, supporting it with better software. AMD needs a more refined software dev team and a bigger R&D budget; they can't go on like this...
 

hahahanoobs

Posts: 3,600   +1,710
They already stated that it will be available to everyone who is interested.

This is why I like AMD. For whatever reason, they always do this, and it's a valid effort, unlike the d!cks at Nvidia, where everything is proprietary and a lock-in trap for customers.
AMD does it because their software is mediocre at best, which makes sense, since Nvidia is primarily a software company.

You like open tech?
Freesync started a while back, following the G-Sync module, and some Freesync monitors still have PATHETICALLY small VRR windows, as narrow as 20 Hz, while G-Sync Compatible monitors support 40-144 Hz and beyond. Seems to me that validating each monitor against the same high spec is far superior to AMD just leaving it up to monitor manufacturers. No thanks. That's not the kind of openness I want. That's just being lazy.
 

EdmondRC

Posts: 141   +118
"Won't necessarily be based on machine learning"
"so the new upscaling tech could very well be based on Microsoft's DirectML API"

Doesn't the ML stand for machine learning?
 

Lounds

Posts: 893   +796
AMD does it because their software is mediocre at best, which makes sense, since Nvidia is primarily a software company.

You like open tech?
Freesync started a while back, following the G-Sync module, and some Freesync monitors still have PATHETICALLY small VRR windows, as narrow as 20 Hz, while G-Sync Compatible monitors support 40-144 Hz and beyond. Seems to me that validating each monitor against the same high spec is far superior to AMD just leaving it up to monitor manufacturers. No thanks. That's not the kind of openness I want. That's just being lazy.
What are you basing that on? Freesync gen 1?
Freesync has come a long way, and it's definitely perfectly fine nowadays.
 

EdmondRC

Posts: 141   +118
I could be wrong, but I can't see how AMD could bring anything but an inferior solution to market, as it will demand more from the FP32 cores than DLSS does, thanks to the tensor cores in RTX cards. I'm sure it will be better than nothing, but I wouldn't expect doubled framerates with hardly any fidelity loss. The real issue for Nvidia is that since AMD's solution doesn't require tensor cores and is already going to be incorporated into so many console games, devs might skip DLSS, so pushing DLSS to be supported natively in more of the major game engines is essential for Nvidia at the moment. There is always the chance that AMD's solution will still be enhanced by tensor cores, but something tells me they'll try their best to avoid that.
 

HardReset

Posts: 1,237   +892
If it’s not based on neural networks, and it’s not an innovation bigger than neural networks, then it will not succeed.
Trained neural networks have dense external information embedded in them, which they use to correct the picture. Without access to that kind of information, AMD can’t produce a noticeably better image, no matter what algorithms they use (algorithms are also external information, but generic rather than dense). It’s a waste of time; they have to put tensor cores into the next generation and use neural networks.
If they can’t implement tensor cores (the hardware interface for handling neural networks efficiently) because of patents, then they are in really BIG trouble. In a few years that method will be the de facto standard.
DLSS and FidelityFX are not about making image quality better. Both are about sacrificing image quality to get more speed. Why AMD would need neural networks or any extra hardware (like tensor cores) when the target is to sacrifice image quality is beyond me.
I could be wrong, but I can't see how AMD could bring anything but an inferior solution to market, as it will demand more from the FP32 cores than DLSS does, thanks to the tensor cores in RTX cards. I'm sure it will be better than nothing, but I wouldn't expect doubled framerates with hardly any fidelity loss. The real issue for Nvidia is that since AMD's solution doesn't require tensor cores and is already going to be incorporated into so many console games, devs might skip DLSS, so pushing DLSS to be supported natively in more of the major game engines is essential for Nvidia at the moment. There is always the chance that AMD's solution will still be enhanced by tensor cores, but something tells me they'll try their best to avoid that.
Just because someone spends transistors on something does not necessarily mean those transistors have a real use. I cannot see why AMD's solution couldn't be better in every way.

To illustrate: we have a robot that we would like to pick boxes up from the ground and put them on a shelf.

AMD tries to write a program for the robot, and then the robot does the job. If AMD can manage this, the robot can also do the job under different conditions (AMD's solution will work in every game).

Nvidia takes a dumb robot that can't do anything and tries to teach it the job. After many trials and errors, the robot finally learns how to do it. However, under different conditions the robot must be retrained (DLSS needs new training for every game).

Basically, Nvidia's solution is a brute-force solution that also needs quite a lot of processing power (tensor cores). If AMD can find an algorithm-based solution for sacrificing image quality, it will be both more efficient and faster than Nvidia's. Also, when image quality is sacrificed, that means LESS load on the compute units.
 

tellmewhy

Posts: 102   +50
DLSS and FidelityFX are not about making image quality better. Both are about sacrificing image quality to get more speed. Why AMD would need neural networks or any extra hardware (like tensor cores) when the target is to sacrifice image quality is beyond me.

Just because someone spends transistors on something does not necessarily mean those transistors have a real use. I cannot see why AMD's solution couldn't be better in every way.

To illustrate: we have a robot that we would like to pick boxes up from the ground and put them on a shelf.

AMD tries to write a program for the robot, and then the robot does the job. If AMD can manage this, the robot can also do the job under different conditions (AMD's solution will work in every game).

Nvidia takes a dumb robot that can't do anything and tries to teach it the job. After many trials and errors, the robot finally learns how to do it. However, under different conditions the robot must be retrained (DLSS needs new training for every game).

Basically, Nvidia's solution is a brute-force solution that also needs quite a lot of processing power (tensor cores). If AMD can find an algorithm-based solution for sacrificing image quality, it will be both more efficient and faster than Nvidia's. Also, when image quality is sacrificed, that means LESS load on the compute units.
If you want to increase speed by sacrificing quality, you can simply play the game at a lower resolution (e.g. 720p).
But DLSS is not about that; it’s about rendering the image at a lower resolution and upscaling it via a neural network. When you upscale an image with a well-trained neural network, you increase its quality, because the network fills the gaps with high quality information it has stored inside it and recognizes as matching the local conditions of the image. The power of a well-trained neural network can be seen in the paradox that you get a higher quality 4K image by upscaling from a lower resolution (e.g. 720p) than from a higher one (e.g. 1080p), because that way you use more of the information inside the network. The reason they do this is that rendering an image with fewer pixels is faster, especially for ray traced lighting. It’s like how cache memory works, except the cache (the neural net) is pre-filled and very agile.
A traditional upscaling algorithm fills the gaps with information produced by applying mathematical functions to nearby pixels; in that case we lose quality.
I hope I’ve helped you understand a little better how this works. Don’t underestimate neural network technology. When it fits a problem well, it works very well.
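The “traditional” approach described above can be sketched in a few lines. This is a minimal, purely illustrative bilinear upscaler (not AMD’s or Nvidia’s actual method): every new pixel is just a weighted average of nearby source pixels, which is exactly why it cannot invent detail that was never rendered.

```python
def bilinear_upscale(img, scale):
    """Upscale a grayscale image (list of rows) by averaging nearby pixels.

    Each new pixel is a weighted average of its four nearest source pixels,
    so this approach cannot add information that was never in the image.
    """
    h, w = len(img), len(img[0])
    out_h, out_w = h * scale, w * scale
    out = []
    for oy in range(out_h):
        # Map the output coordinate back to a fractional source coordinate.
        sy = oy * (h - 1) / (out_h - 1) if out_h > 1 else 0.0
        y0 = int(sy)
        y1 = min(y0 + 1, h - 1)
        wy = sy - y0
        row = []
        for ox in range(out_w):
            sx = ox * (w - 1) / (out_w - 1) if out_w > 1 else 0.0
            x0 = int(sx)
            x1 = min(x0 + 1, w - 1)
            wx = sx - x0
            top = img[y0][x0] * (1 - wx) + img[y0][x1] * wx
            bot = img[y1][x0] * (1 - wx) + img[y1][x1] * wx
            row.append(top * (1 - wy) + bot * wy)
        out.append(row)
    return out

low = [[0.0, 1.0], [1.0, 0.0]]
print(len(bilinear_upscale(low, 2)))  # 4 rows
```

A neural upscaler replaces those fixed interpolation weights with learned ones, drawing on patterns stored in the trained network rather than only on the nearby pixels.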
 

MaxSmarties

Posts: 501   +294
Is this supposed to be good news? Months to release something they should have released a year ago (or, at least, with the Radeon 6800)?
 

MaxSmarties

Posts: 501   +294
If you want to increase speed by sacrificing quality, you can simply play the game at a lower resolution (e.g. 720p).
But DLSS is not about that; it’s about rendering the image at a lower resolution and upscaling it via a neural network. When you upscale an image with a well-trained neural network, you increase its quality, because the network fills the gaps with high quality information it has stored inside it and recognizes as matching the local conditions of the image. The power of a well-trained neural network can be seen in the paradox that you get a higher quality 4K image by upscaling from a lower resolution (e.g. 720p) than from a higher one (e.g. 1080p), because that way you use more of the information inside the network. The reason they do this is that rendering an image with fewer pixels is faster, especially for ray traced lighting. It’s like how cache memory works, except the cache (the neural net) is pre-filled and very agile.
A traditional upscaling algorithm fills the gaps with information produced by applying mathematical functions to nearby pixels; in that case we lose quality.
I hope I’ve helped you understand a little better how this works. Don’t underestimate neural network technology. When it fits a problem well, it works very well.

Great explanation, but unfortunately you are just wasting your time.
He is a hardcore AMD supporter in denial about DLSS, and every link provided to him has been ignored.
DLSS is Nvidia technology, so according to his agenda it can’t be good.
 

HardReset

Posts: 1,237   +892
If you want to increase speed by sacrificing quality, you can simply play the game at a lower resolution (e.g. 720p).
But DLSS is not about that; it’s about rendering the image at a lower resolution and upscaling it via a neural network. When you upscale an image with a well-trained neural network, you increase its quality, because the network fills the gaps with high quality information it has stored inside it and recognizes as matching the local conditions of the image. The power of a well-trained neural network can be seen in the paradox that you get a higher quality 4K image by upscaling from a lower resolution (e.g. 720p) than from a higher one (e.g. 1080p), because that way you use more of the information inside the network. The reason they do this is that rendering an image with fewer pixels is faster, especially for ray traced lighting. It’s like how cache memory works, except the cache (the neural net) is pre-filled and very agile.
A traditional upscaling algorithm fills the gaps with information produced by applying mathematical functions to nearby pixels; in that case we lose quality.
I hope I’ve helped you understand a little better how this works. Don’t underestimate neural network technology. When it fits a problem well, it works very well.
Bolded the most interesting part. When you take information away and then fill it in with other information, you simply cannot get a better end result. You can only get "better", not better.

To illustrate: MP3 compression is a way to make an audio file smaller. It basically takes away "useless" information. The end result is surely worse than the original. Now, we could also take the original audio file, strip a large amount of information from it, and then fill the gaps with a machine learning algorithm trained on information from the original. The result would be a smaller, "better quality" audio file. I wonder why this is not yet widely implemented, and why instead we use lossy and lossless compression for audio files. Would I gladly take "better" sound quality via machine learning? Honestly, no. Because it's not the original. Same with DLSS: no matter if it's "better", it's not the original.
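The strip-and-reconstruct idea can be made concrete with a toy example (purely illustrative, not how MP3 or DLSS actually work): throw away every other sample of a signal, rebuild the gaps by linear interpolation, and measure how far the reconstruction drifts from the original.

```python
def decimate(signal):
    """Keep every other sample (the lossy step)."""
    return signal[::2]

def reconstruct(kept, n):
    """Rebuild n samples by linearly interpolating between the kept ones.

    Assumes the kept samples came from a 2x decimation of the original.
    """
    out = []
    for i in range(n):
        pos = i / 2                         # position in the kept sequence
        lo = min(int(pos), len(kept) - 1)
        hi = min(lo + 1, len(kept) - 1)
        frac = pos - int(pos)
        out.append(kept[lo] * (1 - frac) + kept[hi] * frac)
    return out

original = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
rebuilt = reconstruct(decimate(original), len(original))
error = max(abs(a - b) for a, b in zip(original, rebuilt))
print(error)  # 1.0 -- the alternating detail is gone and cannot be recovered
```

The fast-alternating part of the signal vanishes entirely in the kept samples, so no deterministic fill-in can restore it; a learned model can only guess something plausible in its place.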

Same thing with DLSS. When information is taken away from the original image, the reconstructed image cannot be better. It may be "better", but it cannot be better, because the original is the best one available. Basically, DLSS is, and FidelityFX probably will be, nothing more than a sophisticated way to cheat on image quality. We have seen drivers that degraded image quality a bit to make games/benchmarks run faster; those were quickly condemned, because the image was no longer the original but something else. Same with these techniques: once worsening image quality is permitted, it's only a question of how much it can be worsened. Soon we'll have something much worse than the original, but because it was made with "machine learning" (and other BS terms), it's "better than the original" and "you won't notice the difference", so it's supposed to be a great thing. "Yeah."

With FidelityFX we are not talking about just traditional upscaling. How AMD actually makes it work is still somewhat unknown, but since it takes time to develop, they surely have something more than a basic upscaler coming. But as with DLSS, I do not approve of it if it sacrifices image quality (which it likely will).

Great explanation, but unfortunately you are just wasting your time.
He is a hardcore AMD supporter in denial about DLSS, and every link provided to him has been ignored.
DLSS is Nvidia technology, so according to his agenda it can’t be good.
Yeah, right. Saying that DLSS makes image quality better is simply not true.

And as for agendas: as I said above, I do not approve of the FidelityFX stuff either if image quality is sacrificed. No matter whether it's AMD or not.
 

Puiu

Posts: 4,851   +3,737
TechSpot Elite
This feels too late to me. They haven’t got any demos, no promised list of games, no firm dates or anything; I’m guessing they don’t have much to show the press yet. And creating the technology is not even half the battle. Developers need to adopt it, and studios need to release games for it. It’s taken Nvidia, with their enormous market share and wealth, years to get to the point we are at now.

Hardly anyone is buying AMD cards these days, so why would developers spend the time and money implementing this new super resolution solution? It’s going to be a long uphill battle, and I have a feeling DLSS isn’t going to stop at 2.0.

Oh, and of course it’s open source. AMD wants to create this software and then let everyone else work on it, because they don’t have the expertise to own a project like this. They can’t even provide basic customer support for their CPUs, instructing users to go to Reddit! Believe me, no executive likes to spend large amounts of R&D money creating a technology and then floating it out there for free.

Of course, at this point it’s all just really obvious that AMD doesn’t give a damn about Radeon. They use the brand to win lucrative contracts with Sony and MS. They don’t give a damn about the tiny percentage of their customers buying dedicated GPUs.
It's not too late.
 

hahahanoobs

Posts: 3,600   +1,710
More affordable to more gamers, making it mainstream. I hardly know anyone who has a G-Sync monitor. Freesync is for the everyday gamer; G-Sync is for snobs, and that's fine if you're willing to pay the most for minimal differences.
You should really Google "G-Sync Compatible monitors".
 

tellmewhy

Posts: 102   +50
Bolded the most interesting part. When you take information away and then fill it in with other information, you simply cannot get a better end result. You can only get "better", not better.

To illustrate: MP3 compression is a way to make an audio file smaller. It basically takes away "useless" information. The end result is surely worse than the original. Now, we could also take the original audio file, strip a large amount of information from it, and then fill the gaps with a machine learning algorithm trained on information from the original. The result would be a smaller, "better quality" audio file. I wonder why this is not yet widely implemented, and why instead we use lossy and lossless compression for audio files. Would I gladly take "better" sound quality via machine learning? Honestly, no. Because it's not the original. Same with DLSS: no matter if it's "better", it's not the original.

Same thing with DLSS. When information is taken away from the original image, the reconstructed image cannot be better. It may be "better", but it cannot be better, because the original is the best one available. Basically, DLSS is, and FidelityFX probably will be, nothing more than a sophisticated way to cheat on image quality. We have seen drivers that degraded image quality a bit to make games/benchmarks run faster; those were quickly condemned, because the image was no longer the original but something else. Same with these techniques: once worsening image quality is permitted, it's only a question of how much it can be worsened. Soon we'll have something much worse than the original, but because it was made with "machine learning" (and other BS terms), it's "better than the original" and "you won't notice the difference", so it's supposed to be a great thing. "Yeah."

With FidelityFX we are not talking about just traditional upscaling. How AMD actually makes it work is still somewhat unknown, but since it takes time to develop, they surely have something more than a basic upscaler coming. But as with DLSS, I do not approve of it if it sacrifices image quality (which it likely will).


Yeah, right. Saying that DLSS makes image quality better is simply not true.

And as for agendas: as I said above, I do not approve of the FidelityFX stuff either if image quality is sacrificed. No matter whether it's AMD or not.
Please don’t blend different fields. Sound is not the same as picture. The part of the brain that processes sound is older in evolutionary history than the part that processes pictures.
To understand why the quality is better, visit the website thispersondoesnotexist . com; it shows a neural network that creates high quality random faces pixel by pixel (it doesn’t use textures). Imagine you have a portrait photo, but it’s low quality. You can take that neural network and, instead of letting it generate random high quality faces, give it the low quality portrait as input and tell it to use that as the input pattern. It will fill in the gaps around the pixels of the photo you have, and the end result will be high quality, as you can see on the website. There is no deterministic way to accomplish that result, and because of that it’s impossible to do with classic algorithms, since algorithms are deterministic. Maybe it’s not a 100% match with the original, but this is about reconstructing photos, not DNA. If they look good, nobody cares whether they match the original 100% or 99%.
 

HardReset

Posts: 1,237   +892
Please don’t blend different fields. Sound is not the same as picture. The part of the brain that processes sound is older in evolutionary history than the part that processes pictures.
To understand why the quality is better, visit the website thispersondoesnotexist . com; it shows a neural network that creates high quality random faces pixel by pixel (it doesn’t use textures). Imagine you have a portrait photo, but it’s low quality. You can take that neural network and, instead of letting it generate random high quality faces, give it the low quality portrait as input and tell it to use that as the input pattern. It will fill in the gaps around the pixels of the photo you have, and the end result will be high quality, as you can see on the website. There is no deterministic way to accomplish that result, and because of that it’s impossible to do with classic algorithms, since algorithms are deterministic. Maybe it’s not a 100% match with the original, but this is about reconstructing photos, not DNA. If they look good, nobody cares whether they match the original 100% or 99%.
Sound is not the same as picture. Still, with both picture and sound there is a difference between something being taken away and something being replaced with something else. You can compare this to reality TV shows. Usually, production decides what is shown on TV (something is taken away); everything shown on TV has actually happened. But what if production shows something that has NOT happened and claims it happened? That is the problem with DLSS and the like: it contains something that originally was not there.

That page makes high quality pictures, yes. But the question remains: how is that AI-created high quality face better than the original? Yes, it looks better, but is it better compared to the original, when the original is the best possible? I'd say: no.
 

Shadowboxer

Posts: 1,717   +1,322
I know what that is; it's literally a term for Freesync so uneducated GeForce owners know that their card will work with VRR on the monitor.
An uneducated person would believe that Freesync and G-Sync are the same. They are not. G-Sync has a much tighter standard. Most Freesync monitors do not meet the minimum criteria for G-Sync certification.
 

HardReset

Posts: 1,237   +892
An uneducated person would believe that Freesync and G-Sync are the same. They are not. G-Sync has a much tighter standard. Most Freesync monitors do not meet the minimum criteria for G-Sync certification.
Not a single "Freesync only" monitor meets the criteria for G-Sync, because G-Sync needs extra hardware.

Then again, G-Sync Compatible is the same as adaptive sync, which is the same as Freesync. Whether or not Nvidia certifies a given Freesync monitor as G-Sync Compatible, we are talking about the same thing.