Explainer: What Are Tensor Cores?

Julio Franco

I'm just a dumb engineer (as in not a mathematician, so I might be totally off), but in the math I know, the dimensionality of matrices is determined by their variable (column) count, and I've never seen a matrix stacked on another matrix; you would just add more rows or columns.
 
I honestly see no difference between RTX on and RTX off with my 2080 Ti, considering the games I play don't support RTX (DCS World, CS:GO, Chess.com), but CoD MW did look really good.

I hope the lowest 3000-series GPU is powerful enough to actually justify the ray tracing difference.
 
I'm just a dumb engineer (as in not a mathematician, so I might be totally off), but in the math I know, the dimensionality of matrices is determined by their variable (column) count, and I've never seen a matrix stacked on another matrix; you would just add more rows or columns.
Matrices are a specific type of 1D x 1D tensor, so the 3D representation isn't really a bunch of matrices stacked on top of each other - it was an attempt to visualise it, rather than formally state 'this is what a 3D tensor looks like.' Depicting multidimensional arrays isn't easy!

Edit: In hindsight, order would probably be a better term to have used rather than dimensions - the order is determined by the sum of the dimensions of each index that forms the array, so in the case of a matrix, it's a 2nd order tensor because no matter how many rows and columns you have, they're always 1D each.

So the tensor cores in Nvidia’s GPUs do GEMM on 4x4 2nd order tensors.
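To put some rough notation on that (a sketch of the operation as it is usually described for Volta and Turing, not a full hardware spec): each tensor core performs a fused multiply-add on small matrices, multiplying two 4x4 FP16 matrices and adding a 4x4 accumulator held in FP16 or FP32.

\[
D = AB + C, \qquad d_{ij} = \sum_{k=1}^{4} a_{ik}\, b_{kj} + c_{ij}, \qquad i, j \in \{1, 2, 3, 4\}
\]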
 
DLSS 2.0 basically means that 4K gaming at high refresh rates is a reality, whereas before, the hit of rendering 4K natively would lower performance below the standard 60 Hz refresh rate of a bog-standard 4K monitor. DLSS 2.0 and a high variable refresh rate 4K monitor is definitely the future for PC gaming. It does make me wonder if the next-gen consoles will be able to do other 'cheats' to boost performance. I remember back in the PS3 days, a lot of the bigger, better-looking first-party titles moved other tasks, such as AA and audio, off the GPU and onto the Cell, as it was faster at those specific workloads.
 
They are server GPUs for scientists to run calculations on.
The fact they came up with this funky upscaling is pure luck.
You would never have built this GPU with this upscaling in mind; it's just random luck they found a use for it on desktop.
 
I honestly see no difference between RTX on and RTX off with my 2080 Ti, considering the games I play don't support RTX (DCS World, CS:GO, Chess.com), but CoD MW did look really good.

I hope the lowest 3000-series GPU is powerful enough to actually justify the ray tracing difference.
How do you expect to see a difference in games that do not have ray tracing?
 
They are server GPUs for scientists to run calculations on.
The fact they came up with this funky upscaling is pure luck.
You would never have built this GPU with this upscaling in mind; it's just random luck they found a use for it on desktop.
‘Random luck’ is highly unlikely, given that large-scale data handling was a fundamental part of Volta's design from the very start. It's not clear when Nvidia chose to include dedicated units for GEMMs, but it wouldn't have been any later than 2016. Turing appeared two years after this and, again, the decision for the architecture to sport tensor cores would have been made earlier than that. So Nvidia would have had years in which to research applications of the capabilities across all of the platforms the hardware was intended for. Both versions of DLSS are heavily based around inference, an area in which Nvidia has invested considerable sums of R&D budget.
 
An excellent article for a novice like me; these explainers, as well as procrastination from my actual work, are the reasons I come to TechSpot.
 
It's always amazing to see mathematics find its way into real-world applications.
Now the CSI Miami "can you enhance that" meme can rest in peace.

 
Given that there are TV sets that upscale TV programs, what AMD is doing - making it a "toggle in the drivers" that can be turned on for any game - seems to be a better way to go. Nvidia's approach may yield better results, but in that case, why not offer both choices?
 
Given that there are TV sets that upscale TV programs, what AMD is doing - making it a "toggle in the drivers" that can be turned on for any game - seems to be a better way to go. Nvidia's approach may yield better results, but in that case, why not offer both choices?

What AMD is doing (like TVs are doing) causes the image to have worse quality, and the performance gain is not that big (in order not to look horrible, it can only reduce the resolution a bit, to something like 1800p or 1600p, before upscaling to 4K; and even then, it still steals some power from the shaders to do the upscale).

Nvidia's DLSS not only gets better quality than AMD's solution, it can even give better quality than native resolution (let that sink in), and with much bigger performance gains (it can lower the resolution to 1440p or even 1080p and still upscale well to 4K).
And by using dedicated hardware, doing that upscale is much faster than using the general shaders for a normal upscale.
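As a back-of-the-envelope illustration of where that headroom comes from (my own numbers, not figures from the article, and the internal resolutions are just representative), here is the fraction of pixels the shaders actually have to render at common DLSS-style internal resolutions compared with native 4K:

```python
# Rough pixel-count comparison: internal render resolution vs. native 4K.
# The resolutions below are illustrative; real DLSS quality modes vary per game.

def pixel_count(width: int, height: int) -> int:
    """Total pixels shaded per frame at a given render resolution."""
    return width * height

native_4k = pixel_count(3840, 2160)

internal_resolutions = {
    "1800p internal": (3200, 1800),
    "1440p internal": (2560, 1440),
    "1080p internal": (1920, 1080),
}

for label, (w, h) in internal_resolutions.items():
    share = pixel_count(w, h) / native_4k
    print(f"{label}: {share:.0%} of the pixels shaded at native 4K")

# Prints roughly 69%, 44% and 25% - which is why dropping the internal
# resolution frees up so much shader time, provided the upscale itself
# runs on separate hardware.
```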
 
I'm just a dumb engineer (as in not a mathematician, so I might be totally off), but in the math I know, the dimensionality of matrices is determined by their variable (column) count, and I've never seen a matrix stacked on another matrix; you would just add more rows or columns.

scaler in motion...
 
DLSS 2.0 basically means that 4K gaming at high refresh rates is a reality, whereas before, the hit of rendering 4K natively would lower performance below the standard 60 Hz refresh rate of a bog-standard 4K monitor. DLSS 2.0 and a high variable refresh rate 4K monitor is definitely the future for PC gaming. It does make me wonder if the next-gen consoles will be able to do other 'cheats' to boost performance. I remember back in the PS3 days, a lot of the bigger, better-looking first-party titles moved other tasks, such as AA and audio, off the GPU and onto the Cell, as it was faster at those specific workloads.

Yes, and so does Microsoft's DirectML... which is an industry standard. DirectML is what the Series X will be using... and thus most game developers.



DLSS is Nvidia's attempt to get game developers to use Nvidia's "special hardware" that is left over from Nvidia using their enterprise architectures (Turing, Pascal, Ampere) as gaming architectures.

But with AMD's RDNA, these game developers no longer have to cater to Nvidia, which is why Jensen's RTX and DLSS promise has never taken off. Devs were already looking ahead to DirectML. As proof, some 20 months after RTX's release, we only have 6 RTX games and 4 DLSS games. The industry has spoken...


It's because Nvidia's gaming cards are going to have to support DirectML anyway (i.e. DX12 Ultimate), so there is not much of a market for DLSS-only games or for spending extra money/time on Nvidia's DLSS... when it is inferior to DirectML anyway.
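For anyone curious what the "just use DirectML" path actually looks like from an application's side, here is a minimal sketch of a vendor-neutral, DirectML-backed inference call. It uses ONNX Runtime's DirectML execution provider (the onnxruntime-directml package), which is my choice of illustration rather than anything mentioned in the article or this thread, and the model file and input shape are placeholders:

```python
# Hypothetical sketch: running a trained model through DirectML via ONNX Runtime,
# so the same code runs on any DX12-capable GPU (AMD, Nvidia or Intel).
# Requires: pip install onnxruntime-directml numpy
import numpy as np
import onnxruntime as ort

# "model.onnx" and the 1x3x720x1280 input are placeholders for this sketch.
session = ort.InferenceSession("model.onnx", providers=["DmlExecutionProvider"])

input_name = session.get_inputs()[0].name
frame = np.zeros((1, 3, 720, 1280), dtype=np.float32)  # stand-in for a rendered frame

outputs = session.run(None, {input_name: frame})
print(outputs[0].shape)
```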
 
Yes, and so does Microsoft's DirectML... which is an industry standard. DirectML is what the Series X will be using... and thus most game developers.



DLSS is Nvidia's attempt to get game developers to use Nvidia's "special hardware" that is left over from Nvidia using their enterprise architectures (Turing, Pascal, Ampere) as gaming architectures.

But with AMD's RDNA, these game developers no longer have to cater to Nvidia, which is why Jensen's RTX and DLSS promise has never taken off. Devs were already looking ahead to DirectML. As proof, some 20 months after RTX's release, we only have 6 RTX games and 4 DLSS games. The industry has spoken...


It's because Nvidia's gaming cards are going to have to support DirectML anyway (i.e. DX12 Ultimate), so there is not much of a market for DLSS-only games or for spending extra money/time on Nvidia's DLSS... when it is inferior to DirectML anyway.
Is this the world seen through red-tinted glasses? DLSS 2.0 is much better than the first iteration, and we will see if it gets implemented in future AAA titles.
 
DLSS 2.0 is insanely good

Far better than all this ray tracing crap.

I hope DLSS 3.0 will support all TAA games like the rumours are saying. Game changer, then.
 
Nvidia's gaming cards are going to have to support DirectML
They already do:


What’s yet to be implemented in Nvidia drivers is support for TensorFlow with DirectML.

Nvidia's DLSS... when it is inferior to DirectML anyway.
They’re completely different things - it’s like saying AMD’s VSR is inferior to Vulkan.
 
Nowadays it is impossible to speak about technology with AMD supporters. :dizzy:
Everything made by Intel or Nvidia must be bad, according to their narrative.

I have been an AMD user since the K6 200, and I'm planning to upgrade my 9600K to a Ryzen as soon as Zen 3 is available, but they are making me hate AMD nevertheless due to this attitude.
 
This is interesting and all, but do I really need to know what goes on inside a processor? I've built countless systems and troubleshot all kinds of computer issues, and I don't see the point in knowing this. Is it because I'm not a gamer? How does understanding what tensors are figure into improving readers' gaming experience?
 
This is interesting and all, but do I really need to know what goes on inside a processor? I've built countless systems and troubleshot all kinds of computer issues, and I don't see the point in knowing this. Is it because I'm not a gamer? How does understanding what tensors are figure into improving readers' gaming experience?
It is called general knowledge, and it has nothing to do with gaming.
This is one of the things that makes this website better than most.
But you don't have to read it if you don't see the point. The title is a good indication of what's inside. Skip it if you wish.
 
They already do:


What’s yet to be implemented in Nvidia drivers is support for TensorFlow with DirectML.


They’re completely different things - it’s like saying AMD’s VSR is inferior to Vulkan.


Yes, we know they already do. I meant in upcoming games... and in that game developers won't bother with DLSS when RDNA is what the consoles will be using... therefore RDNA = the gaming standard.

Again, nobody cares that Nvidia supports their own tensors... but devs don't. And they won't waste time on such a small portion of the gaming market.


Post script:
And you are uninformed if you don't think DirectML can do what DLSS is attempting to do. Even a Microsoft junior exec laughed at DLSS and said Jensen is trying too hard... knowing he got his idea for DLSS from Microsoft's early DirectML testing (on Nvidia hardware). But over the last 20 months since Turing's RTX release, DLSS has flopped.

And now DirectML is here. It was just implemented in the latest build of Windows. And games are coming in 2021...

Obvious is obvious...
 