Well, I've been doing a bit of exploratory coding with DirectML for a month now, so I'm well aware that it's perfectly possible to do temporal upscaling via methods other than DLSS. At the moment, since I can't use TensorFlow through DML on a GeForce RTX (until Nvidia sort it out in the drivers), I can't judge the relative speed-up that the tensor cores offer over doing the scaling on the CUDA cores (although my bigger problem is stopping Nsight from crashing whenever I try to do GPU profiling on the code). And you're uninformed if you don't think DirectML can do what DLSS is attempting to do.
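To illustrate what I mean by temporal upscaling without DLSS: the core idea is just accumulating successive low-resolution frames into a higher-resolution history buffer. Here's a deliberately minimal NumPy sketch (my own toy, not DLSS or any shipping TAA-upsampler — real implementations also reproject the history with motion vectors and clamp against neighbourhood colours, which I've left out):

```python
import numpy as np

def upscale_nearest(frame, scale):
    # Nearest-neighbour upsample of an (H, W) frame by an integer factor.
    return np.repeat(np.repeat(frame, scale, axis=0), scale, axis=1)

def temporal_accumulate(history, low_res_frame, scale=2, alpha=0.2):
    """Blend the upscaled current frame into the high-res history buffer.
    alpha controls how quickly new samples replace accumulated ones."""
    current = upscale_nearest(low_res_frame, scale)
    if history is None:
        return current
    return (1.0 - alpha) * history + alpha * current

# Feed a constant 'scene' for a number of frames: the history buffer
# converges on the upscaled image, no neural network required.
rng = np.random.default_rng(0)
scene = rng.random((4, 4)).astype(np.float32)
history = None
for _ in range(50):
    history = temporal_accumulate(history, scene, scale=2, alpha=0.2)
```

The interesting part in practice is the sample-rejection heuristic (what DLSS replaces with a learned network); the accumulation itself is this cheap.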
DirectML isn't automatically 'better' than DLSS, because it's just not the same thing - the former is an API, whereas the latter is a very specific compute routine. If one knew the exact neural network used in DLSS 2.0, it would be possible to run it on any DX12 graphics card, but obviously Nvidia are never going to release that information. Even if one did know it, the performance wouldn't be as good, as the tensor operations would be done on the FP32 SIMD units that all GPUs have, rather than on dedicated units.
Unfortunately, unlike RT cores, which are automatically utilised in any DXR workload involving acceleration structures (the API leaves it entirely to the GPU and its drivers), using the tensor cores requires a flag to be enabled (and the data to be in a set format) that's not part of DirectML. Not yet, at least.