Nvidia DLSS Frame Generation works surprisingly well with AMD FSR and Intel XeSS

mongeese

Why it matters: When Nvidia announced the RTX 4000-series and its new software stack, it said DLSS 3 would come equipped with the ability to generate whole frames. What it didn't say was that frame generation can be decoupled from DLSS and even works with upscalers from the competition.

According to their write-up, the team at Igor's Lab was fooling around with Spider-Man Remastered when they noticed that the game's settings gave them some strange options: to switch frame generation on without enabling DLSS and to pair frame generation with AMD FSR and Intel XeSS.

Igor's Lab went straight to Nvidia to ask if those options were meant to be there. Nvidia confirmed that they were and explained that frame generation functions separately from upscaling, but added that it had been optimized to work alongside DLSS upscaling. Igor's Lab then grabbed an RTX 4090 and started testing to see what frame generation was capable of without DLSS.
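To make that decoupling concrete, here is a minimal sketch of what such a settings model might look like, assuming the upscaler choice and the frame-generation toggle are stored as independent options; the names and structure below are hypothetical and are not taken from Spider-Man Remastered or Nvidia's SDK.

```python
from dataclasses import dataclass
from enum import Enum

class Upscaler(Enum):
    NONE = "off"
    DLSS = "dlss"
    FSR = "fsr"
    XESS = "xess"

@dataclass
class RenderSettings:
    # The two features are independent toggles: any upscaler (or none)
    # can be combined with frame generation.
    upscaler: Upscaler = Upscaler.NONE
    frame_generation: bool = False  # still requires an RTX 40-series GPU

# For example, AMD's FSR upscaling paired with Nvidia's frame generation:
settings = RenderSettings(upscaler=Upscaler.FSR, frame_generation=True)
print(settings)
```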

Paired with an Intel Core i9-12900K and running Spider-Man Remastered at 4K with the maximum visual quality preset, the RTX 4090 reached a respectable 125 fps. With any of DLSS, FSR, or XeSS enabled at their performance settings, the game was bottlenecked by the CPU to about 135 fps.

With frame generation enabled but no upscaling, the framerate jumped to 168 fps, and it climbed to about 220 fps with either DLSS or FSR added (again at their performance settings). XeSS lagged a little behind, managing 204 fps in tandem with frame generation.

Igor's Lab also tested the impact of frame generation with other DLSS, FSR, and XeSS quality settings, and the story stayed the same: frame generation helped all three reach much higher framerates, but DLSS and FSR outdid XeSS by a wide margin, as they usually do on non-Intel hardware.

XeSS also struggled to match the visual quality of DLSS and FSR. In my opinion, the difference between DLSS and FSR was mostly a matter of taste: I preferred the slightly sharper look of FSR, but DLSS seemed to have fewer artifacts. XeSS was blurrier and had some trouble with antialiasing.

Igor's Lab has some great tools to help you inspect the difference between frames generated with each of the three upscalers. But, once again, the story here is pretty familiar: all three tools work the same as they usually do and treat the phony frames like engine-generated ones, so whatever you're already a fan of will probably be your favorite here, too.

It's great that Nvidia isn't locking frame generation to DLSS 3 and is giving consumers some form of choice, even if the feature is limited to the RTX 4000-series. There are suspicions that FSR 3 will have frame generation that works in a similar way to Nvidia's implementation and have much broader compatibility. It'll be interesting to see if that also works with DLSS and XeSS, and which tool produces the best results when FSR 3 arrives next year.


 
I would argue yes, but I would also argue that you shouldn't.
I like this question a lot and realize I have no idea what the answer is. If the real-world impact is that gaming looks & feels better, I wouldn't care that it relied on optical illusions with garbage frames that I'd never actually see. Lots of pleasing visual effects have worked that way since the dawn of movies. On the other hand, if it is just artificial frame counts for the main purpose of inflating review scores with no actual increased enjoyment (or even decreased enjoyment), then it should be reported on as such, or just ignored.

In general, I'm starting to worry about attention to review metrics driving manufacturers in the wrong direction. I like objective data as much as the next person, but this recent CPU generation is an example of tech being shipped with default behaviors that are probably not what most people want in their homes, all for the purpose of pushing a review number a few percent higher, when that performance gain is probably rarely visible in the real world.
 
Can you really call garbage frame generation a technology?
All frames are generated imagery; I find it highly amusing that this is where people draw the line. I also find it fascinating that those most vocally opposed to it typically have zero hands-on experience with it, which seems to have become par for the course.

Actually, many already drew the line at DLSS ("Native or bust!", "I've been able to reduce the render res for years", "tensor cores are useless"), at least until FSR came out; then upscaling was the best thing since sliced bread. Will have to check back in 2-3 years and see how this is all going.
 
All frames are generated imagery; I find it highly amusing that this is where people draw the line. I also find it fascinating that those most vocally opposed to it typically have zero hands-on experience with it, which seems to have become par for the course.

Actually, many already drew the line at DLSS ("Native or bust!", "I've been able to reduce the render res for years", "tensor cores are useless"), at least until FSR came out; then upscaling was the best thing since sliced bread. Will have to check back in 2-3 years and see how this is all going.
We might have a similar situation with DLSS 3.0 in its current state, where no one really wants it. When FSR 3 comes out next year and Nvidia has naturally improved DLSS 3.0, it will be the second best thing since sliced bread. Until now, the 4090's rasterization and RT performance have pretty much marketed the card without DLSS 3.0, while the 4080 and lower-performance cards will need a better implementation of DLSS 3.0 if Nvidia wants to sell those cards with the DLSS 3.0 premium attached. A similar thing happened with DLSS 2.0: Nvidia needed to fortify its position in order for the lower-tier Ampere cards to be more attractive. Nvidia will have no choice but to improve DLSS 3.0 if it wants to sell lower-end 4000-series cards, especially after all the market-manipulation steam and the Ampere stock dry up/run out.
 
Any chance we can call a spade a spade, and just call this technology 'interpolation'?
I don't think so. Interpolation requires two points to guess the middle value. Frame generation here is more like predicting the future based on the last few frames.
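For what it's worth, here is a toy numeric illustration of the distinction being drawn, using three pixel values in place of whole frames. It is purely conceptual and assumes simple linear blending; it is not how Nvidia, AMD, or Intel actually generate frames.

```python
import numpy as np

frame_a = np.array([10.0, 20.0, 30.0])  # earlier rendered frame
frame_b = np.array([14.0, 18.0, 36.0])  # later rendered frame

# Interpolation: needs BOTH neighbouring frames and guesses the one in between.
interpolated = 0.5 * (frame_a + frame_b)      # -> [12. 19. 33.]

# Extrapolation ("predicting the future"): needs only past frames and
# carries their motion forward.
extrapolated = frame_b + (frame_b - frame_a)  # -> [18. 16. 42.]

print(interpolated, extrapolated)
```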
 
I remember a discussion here in the comments that AMD might create some kind of frame generation technology to catch up to Nvidia. Someone was arguing that this was impossible due to too much overhead and stuff. Aaand here we are, discussing frame generation technology.
 
Nvidia: yes, we always intended this to be a feature available to others.

Also Nvidia, yelling at its driver programmers: make sure the next driver update does a better job of detecting anything from AMD present, so our tech can be disabled and we can then blame AMD!
 
"There are suspicions that FSR 3 will have frame generation that works in a similar way to Nvidia's implementation and have much broader compatibility."

This has pretty much been confirmed. AMD is working on something that will run on more than just the latest GPUs, as with FSR 2.
 
Personally, I think DLSS 3 is a solution looking for a problem. Only if your framerate is so low that adding frames makes a meaningful difference to motion fluidity does it offer any value, but then the high latency (worse than at the native framerate) would make it a horrible experience. If your framerate is higher than, say, 100fps, DLSS 3.0 would give no noticeable improvement in fluidity even if it boosted you to 200fps, but it would leave you stuck with latency worse than native 100fps. Given that the whole point of higher FPS is reduced latency, boosting fps while worsening latency makes no sense.
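To put rough numbers on that trade-off, here is a toy model that assumes frame generation doubles the displayed framerate while a real frame reaches the screen about one extra base frame later, plus a few milliseconds of processing; the figures are illustrative assumptions, not measurements.

```python
def frame_time_ms(fps: float) -> float:
    """Milliseconds per natively rendered frame."""
    return 1000.0 / fps

def with_frame_generation(base_fps: float, overhead_ms: float = 3.0):
    """Assumed model: displayed framerate doubles, while latency grows to
    roughly two base frame times plus some processing overhead."""
    displayed_fps = base_fps * 2
    latency_ms = 2 * frame_time_ms(base_fps) + overhead_ms
    return displayed_fps, latency_ms

for base in (40, 60, 100):
    shown, latency = with_frame_generation(base)
    print(f"{base:>3} fps native -> {shown:>3} fps shown, "
          f"~{latency:.0f} ms latency (vs ~{frame_time_ms(base):.0f} ms without)")
```

Under those assumptions, the higher the starting framerate, the smaller the fluidity gain and the more the latency penalty dominates, which is roughly the point above.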
 
I don’t mind “fake” frames, but what I do mind is the increase in latency. I tried this out, and even though you get a massive frame increase, you still feel like something isn't right with your input. Since higher framerates normally lower input latency, you expect that smoothness to come with better response, but this tech makes everything look smoother while feeling sluggish. They have to get rid of the latency problem for me to even use this tech.
 
The vast majority of reviews concerning FGT tell me that I NEVER want to use it. It has been said that if you want to avoid noticeable input lag, you should have at least 60FPS in the first place. Well, if I have 60FPS to begin with, I'm not going to use a frame generator because that's fine for me just as it is. If I were to boost it to 120FPS (assuming that I had a 120Hz monitor), it would REALLY screw me up because I'd be seeing 120FPS and I'd have the input lag of 60FPS.

No thanks, I don't like handicapping myself in games that I play.
 
The vast majority of reviews concerning FGT tell me that I NEVER want to use it. It has been said that if you want to avoid noticeable input lag, you should have at least 60FPS in the first place. Well, if I have 60FPS to begin with, I'm not going to use a frame generator because that's fine for me just as it is. If I were to boost it to 120FPS (assuming that I had a 120Hz monitor), it would REALLY screw me up because I'd be seeing 120FPS and I'd have the input lag of 60FPS.

No thanks, I don't like handicapping myself in games that I play.
Looks like you haven't been reading all the reviews about GPUs lately.

All they seem to care about is how many FPS you get when using RT; everything else can pound sand.

Nothing else is important anymore.

That said, I agree: you need either 60 fps or 120 fps if your monitor/TV allows it, even though, I must admit, I can't really see a difference past 60 fps. Or perhaps I haven't had the pleasure of observing a proper sample of something working correctly above 60.

What I do really hate and want fixed are the damned jaggies.

How the heck do you have a native image at 4K and still have jaggies??
 
Looks like you haven't been reading all the reviews about GPUs lately.
You're joking, right? What I'm referring to has nothing to do with GPUs themselves; it's about FGT (frame-generation technology). You know very well that I'm always reading/watching tech reviews of ALL kinds. I think maybe you haven't seen FGT reviews, but Daniel Owen explains it pretty well:
That said, I agree: you need either 60 fps or 120 fps if your monitor/TV allows it, even though, I must admit, I can't really see a difference past 60 fps. Or perhaps I haven't had the pleasure of observing a proper sample of something working correctly above 60.
I was talking about needing at least 60FPS to use FGT (frame-generation technology), which renders it redundant: if I already have 60FPS, why would I want extra frames? Most people use 60Hz panels and I am one of them. The tech channel "Not an Apple Fan" put out a video a while ago saying "If you've never had a 120Hz display, DON'T GET ONE!" because it's better to be happy with 60Hz than to crave the more expensive 120Hz panels. I agree with this, and I just game happily at 60Hz as I always have.
 
Of course! Must be Monday where you are. :)



My bad, I was talking in general. With extra help or not, the idea is that at least 60 fps gives you a smooth experience, and the same goes for 120, but personally, my old eyes don't seem to be able to see the difference past 60.
That's OK. It's a brand-new tech, so nobody's used to talking about it yet.

I don't see much difference beyond 60fps either, and in non-multiplayer games it really doesn't matter. Linus proved that 120fps does help with aiming in CS:GO, but that's because 120fps reduces input lag. With frame generation, input lag is increased, and if you have to be getting 60fps to begin with anyway, frame generation is literally useless.
 
Bonus points for calling out use cases it was never intended for, and regurgitating only the negative aspects reviewers have covered. This optional setting sure got a lot of panties in a twist, from people who won't buy a 40 series card anyway lol.
 