Yes, but that's simply not the purpose of this article. Another way of looking at it is how much visual quality is sacrificed, if any, when using upscaling.
Not within the same GPU generation, but as DSirius has pointed out, there's scope for better results with Ada Lovelace chips compared to Turing ones due to improved hardware. For example, this document from Nvidia highlights the differences in optical flow acceleration between the three architectures. Whether there are any visible differences in output, though, is another matter, as Nvidia may well use the extra capability purely for performance.
Edit: Having done a brief scan of the DLSS SDK documentation, there's nothing inherently different between the three architectures when it comes to quality; i.e. a Turing chip can produce the same quality of output as an Ada Lovelace one. The biggest factors affecting quality are how developers implement it and what the rest of the rendering pipeline looks like. For example, DLSS is sensitive to the precision and resolution of the motion vector buffer.
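To make the precision point concrete, here's a small sketch (not DLSS SDK code, just a generic illustration using NumPy) of why a half-precision motion vector buffer loses accuracy for fast-moving pixels: FP16 has a fixed number of mantissa bits, so the spacing between representable values grows with magnitude.

```python
import numpy as np

def quantization_error(motion_px: np.ndarray) -> np.ndarray:
    """Absolute error introduced by storing per-pixel motion (in pixels)
    in a half-precision (FP16) buffer instead of full precision."""
    stored = motion_px.astype(np.float16)            # what an FP16 target keeps
    return np.abs(motion_px - stored.astype(np.float64))

# Small, slow motion is represented almost exactly...
small = np.array([0.25, 1.5, 3.75])
print(quantization_error(small))   # all zeros here

# ...but large per-frame motion is snapped to a coarser grid, so the
# sub-pixel information a temporal upscaler relies on starts to erode.
large = np.array([512.3, 1024.7, 2049.4])
print(quantization_error(large))   # errors of a large fraction of a pixel
```

The same reasoning applies to resolution: motion vectors rendered at a lower resolution than the color buffer have to be dilated or interpolated, which blurs exactly the sub-pixel detail the upscaler needs to reproject history samples accurately.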