AMD FSR 4 upscaling shows much improved image quality and fewer artifacts, but it will be exclusive to newer GPUs

Daniel Sims

The big picture: With FSR 4, AMD has finally integrated machine learning into its upscaler, following in the footsteps of Nvidia DLSS and Intel XeSS. Previous versions relied on spatial and temporal upsampling, which supports most hardware but ultimately delivers inferior results. Our good friends at Hardware Unboxed have provided an early look at FSR 4 running Ratchet & Clank: Rift Apart. Although the analysis is preliminary, the visual improvements over FSR 3.1 are immediately noticeable.
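A rough way to picture the difference, as a minimal Python/NumPy sketch – this is not AMD's actual algorithm, and real pipelines also consume motion vectors, depth, and camera jitter:

```python
import numpy as np

def heuristic_temporal_upscale(current_lowres, history_highres, blend=0.1):
    # FSR 2/3-style idea: upsample the new frame with a fixed filter,
    # then blend it into the reprojected history with a hand-tuned weight.
    upsampled = np.kron(current_lowres, np.ones((2, 2)))  # naive 2x upsample
    return blend * upsampled + (1.0 - blend) * history_highres

def ml_temporal_upscale(current_lowres, history_highres, model):
    # FSR 4 / DLSS / XeSS-style idea: a trained network (any callable here)
    # decides how to combine the same inputs instead of a fixed heuristic.
    # Running that network every frame is what calls for dedicated ML hardware.
    upsampled = np.kron(current_lowres, np.ones((2, 2)))
    return model(np.stack([upsampled, history_highres]))
```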

While AMD has yet to provide detailed specifications for the Radeon RX 9070 GPUs introduced at CES 2025, various outlets have had access to systems running them on the show floor. One such exhibit highlights that the company's FSR 4 upscaling technology dramatically surpasses its predecessor.

In Ratchet & Clank: Rift Apart, FSR 3.1 struggles with ghosting, transparency issues, and other visual artifacts. To showcase FSR 4's improvements, AMD had two monitors running the game side by side in 4K performance mode – a setting that upscales from a 1080p internal render, and one where FSR has traditionally struggled against DLSS.

Although Hardware Unboxed captured the footage by recording the monitors – far from ideal – the improvements are quite noticeable, even when viewed through YouTube compression. Visual elements such as particles, transparent surfaces, Ratchet's fur, and distant details appear much sharper with FSR 4. The artifacts that marred FSR 3 are almost entirely eliminated.

Unfortunately, the demo did not include frame rate counters, so proper scrutiny of any performance impact must wait until we can thoroughly test FSR 4.

AMD has, however, confirmed one significant limitation: FSR 4 requires a Radeon RX 9000 GPU. When AMD launches the RX 9070 and 9070 XT in the coming weeks, these cards will go head-to-head with Nvidia's GeForce RTX 50 series and DLSS 4 technology.

A recent preview by Digital Foundry showed DLSS 4's notable improvements to image quality and frame generation. The new transformer-based AI model reduces visual flaws, and multi-frame generation can triple or quadruple framerates with a relatively minor detriment to latency.

While Nvidia's new multi-frame rendering technology is exclusive to RTX 50 series GPUs, all GeForce RTX graphics card owners will benefit from the image quality enhancements in games that support DLSS. Meanwhile, AMD's vague details on FSR 4 suggest that the update might apply to any game that uses FSR 3.1.

At TechSpot, we plan to provide a detailed comparison of FSR 4, DLSS 4, and their predecessors when the time comes – that is, once we get our hands on the new GPUs and can fully benchmark them. Until then, the only third-party performance data we have comes from a Call of Duty: Black Ops 6 benchmark, which suggests that the RX 9070 could compete with Nvidia's RTX 4080 Super – and possibly the RTX 5070 / Ti.


 
If FSR 4 depends on machine learning hardware, and older GPUs do not have that hardware, how do you expect them to support it?
Do you think AMD should never move to ML-based upscaling because it wouldn't be fair to their older GPUs?
It is also slightly disingenuous to say all RTX cards support DLSS 4, because DLSS 4 is a basket of different technologies that are not all supported by all RTX cards. It would be the equivalent of AMD merging FSR 2–4 under the same FSR 4 banner, still keeping ML upscaling an RDNA 4-exclusive feature, but touting that all RDNA cards support FSR 4.
 
Love it. All these ML and AI-based tech upgrades really are the future of GPU advancements. It's only logical. I just wish AMD also competed at the high-end with the RTX Blackwell.
 
It's a shame it took AMD several generations to get to this point. I truly thought the 7000 series was going to move the game on, but better late than never, I guess.
 
Whatever happened to the “up to 192 AI cores” of the rdna3 series? This stinks of artificial limitation…
There are no AI cores in RDNA 3. RDNA 3 supports WMMA, which are just instructions to accelerate certain operations used for AI stuff, but those run on the regular shader cores (stream processors) in the CUs, not on dedicated hardware.
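For the curious, here's what a single WMMA operation boils down to, as an illustrative NumPy sketch. The 16x16x16 FP16-in/FP32-accumulate shape is one of the variants described in AMD's GPUOpen write-up; the point is that it's tile math, not a separate hardware unit on RDNA 3:

```python
# Sketch of the tile math behind one RDNA 3 WMMA operation:
# D = A @ B + C, with 16x16 FP16 input tiles and an FP32 accumulator.
# On RDNA 3 this runs on the CU's regular SIMD lanes; CDNA's matrix
# cores and Nvidia's tensor cores execute it on dedicated units instead.
import numpy as np

A = np.random.rand(16, 16).astype(np.float16)  # input tile A (FP16)
B = np.random.rand(16, 16).astype(np.float16)  # input tile B (FP16)
C = np.zeros((16, 16), dtype=np.float32)       # accumulator tile (FP32)

# Multiply in FP16 precision inputs, accumulate in FP32
D = A.astype(np.float32) @ B.astype(np.float32) + C
```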
 
Whatever happened to the “up to 192 AI cores” of the rdna3 series? This stinks of artificial limitation…

RDNA3 does not have dedicated Matrix cores like CDNA; in RDNA3, WMMA is used for AI acceleration, but it is not as efficient. https://gpuopen.com/learn/wmma_on_rdna3/
 
There are no AI cores in RDNA 3. RDNA 3 supports WMMA, which are just instructions to accelerate certain operations used for AI stuff, but those run on the regular shader cores (stream processors) in the CUs, not on dedicated hardware.
“Dedicated AI cores”, that's what it says. Machine learning is mentioned specifically, and it all sounds very impressive.

 
RDNA3 does not have dedicated Matrix cores like CDNA; in RDNA3, WMMA is used for AI acceleration, but it is not as efficient. https://gpuopen.com/learn/wmma_on_rdna3/
rdna3 doesn't have dedicated RT cores either; it still has hardware RT, just like it has hardware matrix multiplication. And they state WMMA is very efficient. And they certainly can run the dp4a version of xess, which presumably doesn't even make use of wmma. They even advertise upscaling in their own presentation slides for rdna3 AI. It's hard to imagine all of this isn't enough for fsr4.
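For reference, the dp4a path mentioned above comes down to an instruction that performs four INT8 multiply-adds into an INT32 accumulator. A tiny illustrative sketch (the values are arbitrary):

```python
# What one DP4A instruction computes: a 4-element INT8 dot product
# accumulated into INT32. XeSS falls back to this on GPUs without
# Intel's XMX matrix hardware, which is part of why it runs slower there.
import numpy as np

a = np.array([12, -3, 7, 100], dtype=np.int8)
b = np.array([-5, 20, 1, 4], dtype=np.int8)
acc = 0

result = acc + sum(int(x) * int(y) for x, y in zip(a, b))
print(result)  # one instruction's worth of work: just 4 multiply-adds
```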
 
Just hoping they sprinkle some fairy dust without AI on the older GPUs, as a lot of the artifacts could probably be corrected with static profiling of certain features or adjustments, and the GCN family has plenty of computing resources.
 
Great way of boosting cards, happy it's improving.
Remember, the only negative here is advertising performance increases with a misleading comparison; the technology itself is useful, and you can disable it if you aren't a fan of frame generation.
You can argue it makes dev optimisation lazier, but you can't really blame AMD for that – every new generation of graphics cards makes brute-forcing bad optimisation easier.

My main hope for FSR4+ is the improved handheld gaming performance. Steam Deck can't quite handle Space Marine 2 (tbf it's a big ask), but I'd love to be able to play that on the go. Unfortunately, server connectivity on Steam Deck also feels a bit inconsistent – no issues on my PC.
 
If FSR 4 depends on machine learning hardware, and older GPUs do not have that hardware, how do you expect them to support it?
Do you think AMD should never move to ML-based upscaling because it wouldn't be fair to their older GPUs?
Hmmm... as far as I understand, the ML stuff is happening in training, not in real time on your local hardware. But it is entirely possible the older Radeon HW is not powerful enough for computing in general, hence the limitation.
 
Hmmm... as far as I understand, the ML stuff is happening in training, not in real time on your local hardware.
It's both. In order for you to get AI upscaling on your PC, you have two stages. First, Nvidia/AMD create and train their AI model to make it capable of producing good results. That part happens on Nvidia/AMD servers. Once the model is trained enough that it's producing good results, it's shipped to you via driver updates (or .dll files). Then your GPU runs that model while you're playing a game, giving it a low-res input image and receiving a high-res output image to be displayed. That part happens locally in your GPU.
Training is what Nvidia and AMD do to create the model. That's different from you running that model in your PC to upscale your games. That's the part that requires tensor cores to run DLSS in Nvidia's case, and whatever ML hardware AMD put in RDNA 4 to run FSR 4.
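To make the split concrete, here's a minimal PyTorch sketch of the two stages. Everything in it is illustrative – the real FSR 4 and DLSS networks, training pipelines, and file formats are proprietary:

```python
import torch
import torch.nn as nn

# Stand-in for a real upscaling network (vastly simplified)
upscaler = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)

# Stage 1 – training, done once on the vendor's servers:
opt = torch.optim.Adam(upscaler.parameters())
low_res = torch.rand(1, 3, 540, 960)    # low-res input frame
reference = torch.rand(1, 3, 540, 960)  # ground-truth target image
loss = nn.functional.mse_loss(upscaler(low_res), reference)
loss.backward()
opt.step()
torch.save(upscaler.state_dict(), "model.pt")  # shipped via drivers/.dll files

# Stage 2 – inference, done locally on your GPU every frame:
upscaler.load_state_dict(torch.load("model.pt"))
with torch.no_grad():
    output_frame = upscaler(low_res)  # this is the part that needs ML hardware
```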

But it is entirely possible the older Radeon HW is not powerful enough for computing in general, hence the limitation.
The 9070 XT is certainly going to be slower than the 7900 XTX, possibly even the 7900 XT. So that's not it. There's also most certainly going to be 9060 cards later on, and those will be even weaker but still support FSR 4.
 
The 9070 XT is certainly going to be slower than the 7900 XTX, possibly even the 7900 XT. So that's not it. There's also most certainly going to be 9060 cards later on, and those will be even weaker but still support FSR 4.

Well in that case some backwards compatibility for fast/old GPUs does make sense... ?
 
Well in that case some backwards compatibility for fast/old GPUs does make sense... ?
Maybe the new rdna4 ML hardware is different enough that rdna3 code isn't compatible with it. So they need time to write a different version, or they just won't bother with one at all.
 
I feel kind of sorry for anyone who bought a 7900XT(X) model in the last 6 months outside of some unmissable bargain. The fine wine seems to have run out, unfortunately.
 
So AMD’s strategy is “fix ghosting and artifacts, but also ghost older GPUs”? Bold move.

Because RDNA4 has new AI hardware that does AI upscaling.

Older GPUs will take a hit in FPS if they do AI upscaling. Look at XeSS, for example: it performs worse on Nvidia and AMD GPUs because it does not take advantage of Intel's AI hardware.
 
It is said that FSR 4 will be featured in the newer consoles coming out this year...!

So game developers who are already using FSR will have even more fidelity at their disposal.

 