History of the Modern Graphics Processor, Part 5

The tensor cores were promoted as being a key element behind this feature, but the first version of DLSS didn't use them in the consumer's graphics cards. Instead, this was all done by Nvidia's own computer grids, which were used to analyze each game, frame by frame, to work out what the upscaling routine would be.
Never heard of that and I couldn't find any source that supports this claim. While it's true that DLSS 1.0 required a per-game model trained on Nvidia's compute grid, the resulting model was still run on the tensor cores. The only game that used DLSS without leveraging tensor cores was Control, which featured the prototype version of what became DLSS 2.0. DLSS 2.0 no longer requires a per-game model, but that has nothing to do with tensor cores - both 1.0 and 2.0 use tensor cores; only "1.9" did not. The model in DLSS 2.0 is still generated by Nvidia, not locally on the GPU's tensor cores; it's just that the technique no longer requires a separate model for each game.
 
Never heard of that and I couldn't find any source that supports this claim.
The information was passed on to me by some game developers (I won't say who or which titles were involved, as it would put them in a difficult position). I suppose I should really have said that DLSS 1.0 didn't necessarily use the tensor cores, i.e. it was dependent on the implementation. This is true of any tensor work being done on an RTX GPU: there is no code or flag to activate them; instead, the scheduler will use them to process operations as long as specific conditions are met. If those conditions aren't met, the instructions are issued to the standard CUDA cores.
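For anyone curious what those conditions look like in practice, here is a minimal sketch using CUDA's warp matrix (WMMA) intrinsics - to be clear, this is not DLSS code, just an illustration, and the kernel names and file name are made up for the example. Tensor cores only handle warp-wide matrix tiles in supported formats, such as a 16x16x16 tile of FP16 inputs with FP32 accumulation on an sm_70 or newer GPU; work that isn't expressed this way ends up on the ordinary CUDA cores.

// Compile with: nvcc -arch=sm_70 wmma_sketch.cu
#include <cuda_runtime.h>
#include <cuda_fp16.h>
#include <mma.h>
#include <cstdio>

using namespace nvcuda;

// One warp computes a single 16x16x16 tile: FP16 inputs, FP32 accumulation.
// These fragment shapes and types are the "specific conditions" - only maths
// expressed like this lands on the tensor cores; anything else is executed
// by the regular CUDA cores.
__global__ void tile_mma(const half *a, const half *b, float *c)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);
    wmma::load_matrix_sync(a_frag, a, 16);               // leading dimension = 16
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);      // the tensor core operation
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}

// Helper kernel just to put some FP16 test data on the GPU.
__global__ void fill(half *p, int n, float v)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i] = __float2half(v);
}

int main()
{
    half *a, *b;
    float *c;
    cudaMalloc((void **)&a, 16 * 16 * sizeof(half));
    cudaMalloc((void **)&b, 16 * 16 * sizeof(half));
    cudaMalloc((void **)&c, 16 * 16 * sizeof(float));

    fill<<<1, 256>>>(a, 256, 1.0f);
    fill<<<1, 256>>>(b, 256, 1.0f);
    tile_mma<<<1, 32>>>(a, b, c);                        // one warp per tile

    float result;
    cudaMemcpy(&result, c, sizeof(float), cudaMemcpyDeviceToHost);
    printf("c[0][0] = %.1f (expected 16.0)\n", result);  // 1 x 1 summed over k = 16

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}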
 
The information was passed on to me by some game developers (I won't say who or which titles were involved, as it would put them in a difficult position). I suppose I should really have said that DLSS 1.0 didn't necessarily use the tensor cores, i.e. it was dependent on the implementation. This is true of any tensor work being done on an RTX GPU: there is no code or flag to activate them; instead, the scheduler will use them to process operations as long as specific conditions are met. If those conditions aren't met, the instructions are issued to the standard CUDA cores.
Ok, that explains most of it. Still, the "Instead, this was all done by Nvidia's own computer grids" part seems wrong - the computer grids trained the per-game models, but those models were then run locally. The training stage has nothing to do with players' GPUs, whether early DLSS used tensor cores or not. The only thing that changed with DLSS 2.x is that the model is now general, not prepared per game. It's basically a small wording issue, but the "Instead" at the beginning changes the meaning of the whole sentence.
 
Just to point out that Nvidia didn't actually go to court; the cases were settled and, as part of the settlement, Nvidia did not admit any wrongdoing.
 
Fury X with 4069 cores? DX11 with compute shaders in 2008? Proof reading? Compute shaders were already available in DX10 back in 2016, just very limited.
 
Fury X with 4069 cores? DX11 with compute shaders in 2008? Proof reading? Compute shaders were already available in DX10 back in 2016, just very limited.
Thanks for catching the Fury X typo. With regard to the matter of Direct3D and compute shaders, they formally appeared with DirectX 11 (unveiled in 2008, released in 2009) - they can be run on D3D10-compliant hardware, though, as Shader Model 5 added downlevel profiles for SM4 (albeit in a limited form, as you pointed out).
 
Thanks for catching the Fury X typo. With regard to the matter of Direct3D and compute shaders, they formally appeared with DirectX 11 (unveiled in 2008, released in 2009) - they can be run on D3D10-compliant hardware, though, as Shader Model 5 added downlevel profiles for SM4 (albeit in a limited form, as you pointed out).

Yeah, I also wrote a typo there - the 2016 should in reality have been 2006-2008, lol. Good article, it brought back memories.
 