Nvidia launches RTX Video Super Resolution to boost streaming video quality

AI-based video upscaling using certain Intel GPUs has been in Chrome since last April, but Google hasn't made any noticeable effort to finalize the feature and support it officially, due to power concerns and other matters. If AMD is planning to do likewise, they've given no indication yet.


Yes, that's what I read 3 days ago....

Microsoft said they will be using AI for many Windows features in the future... upscaling is just one of them. In theory, GPU manufacturers don't have to do anything; Windows will just utilize whatever hardware you have.


Modern TVs all do this... so it's prudent for your PC's OS to do it too.
 
In theory, GPU manufacturers don't have to do anything; Windows will just utilize whatever hardware you have.
Unfortunately, that's not the case. DirectML, for example, can't tell what tensor structure best suits a given GPU and uses the TensorFlow default for Nvidia processors but a more CPU-favored one for AMD, Intel, etc. So either one has to manually configure this for every D3D12-compatible GPU that one wishes to support, or one gets the GPU vendor to manage this in the drivers.
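
To make that concrete, here's a rough Python sketch (assuming the onnxruntime-directml package; the model file name is just a placeholder, not a real upscaling model) of how an application asks for the DirectML backend. Note that nothing here says anything about tensor layout -- that's exactly the part the runtime has to guess at, or the vendor has to handle in the driver:

```python
# Minimal sketch: ask for the DirectML backend if it's available, otherwise
# fall back to the CPU. The model path "upscaler.onnx" is a placeholder.
import onnxruntime as ort

providers = ort.get_available_providers()
print("Available providers:", providers)

if "DmlExecutionProvider" in providers:
    # DirectML dispatches to whatever D3D12 adapter Windows exposes;
    # it doesn't automatically pick the tensor layout best suited to it.
    session = ort.InferenceSession("upscaler.onnx",
                                   providers=["DmlExecutionProvider"])
else:
    session = ort.InferenceSession("upscaler.onnx",
                                   providers=["CPUExecutionProvider"])
```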

Modern TVs all do this... so it's prudent for your PC's OS to do it too.
It's certainly long overdue for PCs to have better upscaling systems available for video streams, but TV vendors have it easy. They only have to stick a single chip into the display chain to do this (e.g. Sony uses its X1 and XR processors for this task) and then hike the price up to cover the cost.

Microsoft will need to work with AMD, Intel, and Nvidia to figure out the best way of making this function on as many GPUs as possible, because the only compatibility requirement for Windows 11 in terms of graphics cards is that they need to be DirectX 12 compliant and have a WDDM 2.0 driver. That covers well over two hundred different GPUs, some of which date back over 7 years.

Of course, they could offer it only for Arc, RDNA 2 or newer, and Turing or newer, to take advantage of those processors' superior tensor handling compared to the older chips. But even then, one is back to the problem at the start of this reply -- different GPUs handle tensors in different ways.
 
Does DirectML need to know what GPU...?

Or is that just for an efficiency thing... so you don't fire up the whole card while running AI matrix math.

So when you say the best way... do you mean the most efficient way?
 
Does DirectML need to know what GPU...?

Or is that just for an efficiency thing... so you don't fire up the whole card while running AI matrix math.

So when you say the best way... do you mean the most efficient way?
Yes, in terms of compute efficiency, but there's a big difference in efficiency if the tensor/matrix formats aren't correct for the GPU. In some cases, it may not even work. Microsoft is likely to restrict the use of AI scaling to recent GPUs only, to keep the efficiency problem as small as possible.
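
As a rough illustration of what "format" means here (this is just NumPy, not DirectML, and purely to show the idea): the same 1080p frame can be stored in the two common layouts, NCHW and NHWC, and which one a GPU prefers depends on its architecture and the backend driving it.

```python
# Illustrative only: one 1080p RGB frame in the two common tensor layouts.
# Which layout is faster depends on the hardware and the ML backend.
import numpy as np

frame_nchw = np.random.rand(1, 3, 1080, 1920).astype(np.float32)  # batch, channels, height, width
frame_nhwc = np.transpose(frame_nchw, (0, 2, 3, 1))               # batch, height, width, channels

print(frame_nchw.shape)  # (1, 3, 1080, 1920)
print(frame_nhwc.shape)  # (1, 1080, 1920, 3)
```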

In terms of power efficiency, tensor calculations will fire up the whole GPU for AMD cards, as all that math is done using the SPs. For Intel Arc and Nvidia Turing and newer, less of the GPU will get used, as the majority of the work gets done on those GPUs' tensor cores; older Intel and Nvidia GPUs will be like AMD's.

Nvidia's RTX Super Resolution seems to have so-so efficiency — at the lowest quality settings it bumps my AD103 up by 20-30W, which isn't so bad, but max quality sometimes pushes it to just over 100W. To me, that's not great at all, but I've not had a chance to properly profile the GPU when it's doing the upscaling to see what the shader and video loads are actually like.
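
If anyone wants to check this on their own card, something as simple as polling nvidia-smi will capture board power and GPU load over time (this assumes nvidia-smi is on the PATH; adjust the sample count and interval to taste):

```python
# Simple power/load logger: polls nvidia-smi once a second and prints the
# board power draw and GPU utilization. Assumes nvidia-smi is on the PATH.
import subprocess
import time

for _ in range(30):  # sample for roughly 30 seconds
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=power.draw,utilization.gpu",
         "--format=csv,noheader"],
        capture_output=True, text=True)
    print(result.stdout.strip())
    time.sleep(1)
```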
 
Re: the before/after photo, I'm looking at the building with the green line down the center of it, and I see no real difference between the left and right sides. Am I missing something or is the issue just that Techspot shrank the provided image down so much that you can no longer see the benefits?
Watch the video. Major difference, especially the gloved hand in the foreground.
 
Yes, in terms of compute efficiency, but there's a big difference in efficiency if the tensor/matrix formats aren't correct for the GPU. In some cases, it may not even work. Microsoft is likely to restrict the use of AI scaling to recent GPUs only, to keep the efficiency problem as small as possible.

In terms of power efficiency, tensor calculations will fire up the whole GPU for AMD cards, as all that math is done using the SPs. For Intel Arc and Nvidia Turing and newer, less of the GPU will get used, as the majority of the work gets done on those GPUs' tensor cores; older Intel and Nvidia GPUs will be like AMD's.

Nvidia's RTX Super Resolution seems to have so-so efficiency — at the lowest quality settings it bumps my AD103 up by 20-30W, which isn't so bad, but max quality sometimes pushes it to just over 100W. To me, that's not great at all, but I've not had a chance to properly profile the GPU when it's doing the upscaling to see what the shader and video loads are actually like.

So essentially a moot point overall.

Sounds like no dGPU will be efficient enough to make this viable, and most likely specialized CPUs and APUs in the near future will be used by Windows to AI upscale (agnostically).
 
Sounds like no dGPU will be efficient enough to make this viable, and most likely specialized CPUs and APUs in the near future will be used by Windows to AI upscale (agnostically).
Efficiency is going to be a subjective matter in this scenario, though, as one has to determine whether the improvements to the video quality are worth the additional energy requirements.

When I first checked out RTX SR, I didn't do a particularly thorough job of it, especially when recording the power usage. So I made a quick and ugly 480p test video (here) and recorded %GPU Load and Average Board Power Consumption on a 4070 Ti:

Desktop idle = 15W
No upscaling = 17W avg, 9% GPU load
RTX SR quality 1 = 20W avg, 25% GPU load
RTX SR quality 4 = 30W avg, 36% GPU load

While it looks a lot better at the highest quality setting, I don't think it's worth a 76% increase in energy consumption to do this. Microsoft is adding a similar system to Edge that uses DirectML, so it will be interesting to see what the energy figures are like for that once it's publicly available.
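
For the record, that 76% figure comes straight from comparing the raw averages above against the no-upscaling figure:

```python
# Sanity-checking the percentage increases, using the raw average board
# power figures (watts) from the 4070 Ti test above.
baseline = 17   # no upscaling
quality1 = 20   # RTX SR quality 1
quality4 = 30   # RTX SR quality 4

print(f"Quality 1: +{(quality1 - baseline) / baseline:.0%}")  # ~+18%
print(f"Quality 4: +{(quality4 - baseline) / baseline:.0%}")  # ~+76%
```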
 
Edge://flags/#edge-video-super-resolution
Only in the Canary Beta release of Edge, and even if you enable it in the settings, it's only going to work for a certain number of users - i.e., you have to be part of the test sample; otherwise nothing will actually happen.
 