DLSS applies its own temporal anti-aliasing and can sometimes make textures look sharper, which can result in slightly better image quality on some game assets. Overall, though, DLSS tends to slightly degrade image quality; it's just a much better solution than simply turning down the resolution. The tech is impressive, and I almost always turn it on if it's available and the game is averaging under 90 fps. But even with the best machine learning algorithms, you can't upscale a lower-res image to contain more information than the native-res image.
Why? Information theory. The mutual information between the low-res input X and the native-res target Y (i.e., how much of the native image the upscale can actually recover) is bounded by the entropy of either one: I(X;Y) ≤ min[H(X), H(Y)]. The quality of the upscale is always limited by the information in the input plus whatever priors the training dataset baked into the upscaler. Right now even the best upscalers running on dedicated hardware like tensor cores (oooh, sounds fancy) are nowhere close to that upper limit, where the upscale would be indistinguishable from the original.
Simply put, you cannot create more information than what's in the native image.
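Here's a toy demo of that bound (a sketch, not anything like a real upscaler: it uses random 8-bit samples as a stand-in for a frame and nearest-neighbour repetition as the "upscale"). Because the upscale is a pure function of the low-res samples, H(f(X)) ≤ H(X), so the upscaled output's empirical entropy can't exceed the low-res input's:

```python
import math
import random
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of the empirical value distribution."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(0)
# stand-in for a native-res frame: a flat list of 4096 8-bit samples
native = [random.randrange(256) for _ in range(64 * 64)]
low = native[::4]  # naive downscale: keep 1 sample in 4

# deterministic "upscale": repeat each low-res sample 4 times
up = [s for s in low for _ in range(4)]

# H(f(X)) <= H(X): the upscale is a pure function of the low-res input,
# so it cannot carry more information than that input.
print(round(entropy(native), 3), round(entropy(low), 3), round(entropy(up), 3))
assert entropy(up) <= entropy(low) + 1e-9
```

A learned upscaler adds priors from its training data instead of repeating pixels, so it *looks* far better than this, but the same inequality still caps how much of the true native image it can recover.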
The tensor cores are simply better than the FP32 cores at the tensor math doing the grunt work of upscaling, and they matter because they also free the FP32 cores to keep doing their primary job. Any solution AMD comes up with to rival DLSS, at least on the current generation of GPUs, will put extra workload on the FP32 cores, which more than likely means the returns won't be nearly as big as they are on Nvidia cards. It will be somewhat of a balancing act. I don't think AMD will be able to compete with DLSS on RDNA2. Ampere > RDNA2, but we'll see with RDNA3 and Nvidia's next gen. AMD made up a country mile on Nvidia this go-around, there's no denying that, but they're still behind.