So if one knows what a 'better looking' 4K set of data should look like, it's possible to replicate this from a smaller set of data. This 'upscaled' data set won't be the same as the 'native' one, but that's not important - after all, the set produced by sampling the larger data set isn't the same either.
This example demonstrates it quite nicely. The following image is the original 'native' one:
View attachment 87739
The upscaler is given the following to work with:
View attachment 87740
And the result is:
View attachment 87741
If one subtracts the original from the DNN-upscaled image, it becomes clear that the data sets are not the same:
View attachment 87742
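The subtraction step above can be sketched in a few lines of numpy. This is only a toy illustration, using a small random array as a stand-in for an image and a nearest-neighbour upscale in place of a DNN; the point is just that the upscaled set differs from the native one everywhere except at the sampled positions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a 'native' image: an 8x8 grayscale array.
native = rng.integers(0, 256, size=(8, 8)).astype(np.uint8)

# Sample it down 2x (keep every other pixel) - the upscaler's input.
small = native[::2, ::2]

# Naive upscale back to full size: repeat each pixel into a 2x2 block.
upscaled = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

# Subtract the original from the upscaled image (signed, to avoid
# uint8 wrap-around). Non-zero entries are where the sets disagree.
diff = upscaled.astype(int) - native.astype(int)

print(np.count_nonzero(diff))  # plenty of non-zero differences
```

The sampled pixels themselves match exactly (`diff[::2, ::2]` is all zeros here), but everything in between is the upscaler's reconstruction, not the native data.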
However, the DNN one is visually better than the original, and that's what the likes of DLSS, or any other DNN-based upscaling algorithm, aim to achieve. They're not designed to produce a bit-for-bit replica of a native data set (although, in theory, the DNN could be trained to do so); instead, the goal is to obtain as visually pleasing a result as possible.