The State of Nvidia RTX Ray Tracing: One Year Later

Funny that COD: Modern Warfare supported DXR on Pascal on day 1 without any patch, and the performance on the 1080 Ti with DXR is horrible.

Without DXR


With DXR


Source https://www.pcgameshardware.de/Call...tracing-Test-Release-Anforderungen-1335580/2/

Yeah, the 1080 Ti is pretty much slower than or equal to the 2060 in any game with DXR on. Let's say AMD has the resources to support DXR: without the dedicated hardware there is just no hope for Navi to compete with RTX cards on DXR performance. That's why AMD is ignoring DXR for now and hoping that people prefer cheap rasterization performance. Good luck spending more money on a higher-end Navi that can't use DXR, though.


What does any of that have to do with what I said?

I am not refuting DXR in games! I am laughing at how no games (except a few) make actual use of Turing's RT cores (i.e. "RTX On"). As most games are just straight DXR code... and do not use the RT cores found in RTX cards. Developers are not going to write both RTX ray tracing and DXR ray tracing paths for each game... (Nvidia was paying engineers to work with these devs; many opted out... leaving egg on Jensen's face.)

Again, there is no such thing as RTX On when using DXR... because you are not turning on the RT Cores found within RTX cards. Ironically, GTX cards can do ray tracing (DXR), but they don't have RT cores, and therefore can't turn them on, so they don't have "RTX On".

Nvidia is using marketing to hide the fact that Turing isn't much more powerful than Pascal... and that these datacenter AI chips' RT cores are a hoax, because they require special software to turn them on and make use of them, and developers are not going to spend their resources doing that when they have DXR that works on all brands (Navi, Pascal and Turing).




Lastly, Navi can do DXR.

And better than Pascal, because Navi (& Turing) have async compute, while Pascal does not. There are benchmarks supporting everything I have said... and the mere fact that GTX cards are doing ray tracing proves my point about what a gimmick Nvidia's "RTX On" moniker really is, regarding Turing's RT cores.

 

So have a look at this article
https://www.techpowerup.com/254528/...ing-dxr-support-to-many-geforce-gtx-gpus?cp=5
DXR calculations can be done via CUDA cores (shaders) or RT cores. It's true that Turing and Navi shaders are more advanced than Pascal's, but not having RT cores is like watching a slide show. If you are familiar with bitcoin mining, ASICs basically killed off GPU mining; the RT core is the ASIC for DXR calculations.

[benchmark chart]

(dark green is DLSS on so ignore that)

If DXR on Navi could be done easily, AMD would have done it already; or maybe the performance numbers just aren't worth disclosing anyway.
 
I have so much pity for those who believe that ray tracing is some kind of gimmick or that it can be mimicked with rasterisation tricks.

If you're a 3D gaming enthusiast of the PC variety, this is the most awesome thing to come out since widescreen monitors came along. I don't think first gen is a smart buy, but then that's always the case. In a few years' time this will be normal and we will all be better off for it. Personally I can't wait for Minecraft RTX support; it's the most popular game of all time and should make ray tracing a lot more mainstream. Also I can't wait to see my server with RTX on!
 
"RTX is a lie" (C) HardOCP

I agree with the article's "the ugly" part; I think right now RTX/DXR and DLSS are more nGREEDia marketing BS than anything else. The current state of RTX proves that:

  • only 2 of the 5 released (out of 11 promised) RTX titles show any (hardly visible) advantage with DXR on, after more than a year of DXR existing
  • Nvidia is selling GTX 16xx cards without RTX hardware, which proves that even NV treats RTX as optional
  • no other GPU vendor supports RTX/DXR in hardware so far
  • RTX support has been cancelled in several new/unreleased AAA titles (Assetto Corsa Competizione, RDR2 on PC), and Atomic Heart will actually never be released

So right now it is true that RTRT is "a demo and the future of gaming graphics". Regarding the (nearest) future I also have some doubts: we have not seen any evidence that RT visual quality could be so massively superior to traditional rasterization that anyone could say "rasterized graphics is obsolete". Some believe that RT is more convenient for developers than manual rasterisation tricks ("it just works"): I suspect that's not true; partial/hybrid RT/rasterization is very non-trivial to get right, which is the reason for so many issues in BF5, for example. A pure RT rendering engine may really be more straightforward to implement - see Q2 RTX, where we get a ~20-100x FPS drop because the full scene (every pixel) is rendered using RT, with no rasterization used at all. But for that to become usable in games we will have to wait another 10 years, until GPUs 10-100x more powerful than Turing arrive.
 
As most games are just straight DXR code... and do not use the RT cores found in RTX cards...
Again, there is no such thing as RTX On when using DXR...
because you are not turning on the RT Cores found within RTX cards....
Direct3D is effectively blind with regard to the physical structure of the processor carrying out the necessary operations; the only thing it knows about is what the processor is capable of doing/supporting (and this is found out by the programmer writing suitable queries). The API knows nothing about the differences between the shader units in an AMD GPU and an Nvidia GPU, nor how they function, nor how they process instructions. That is the job of the GPU drivers - they take the incoming commands and decode them for the relevant hardware to process.

The same is all true for DXR - the pipeline is called as part of the game's code, then various shaders are run to do the triangle intersection checks and build the acceleration structures to maximise performance. The GPU drivers on RTX graphics cards will take some of the instructions being sent to the processor and distribute them to the RT cores, because this portion of the hardware is designed to accelerate those specific calculations; the rest, as normal, are distributed across the shader clusters in the rest of the GPU.

In short, one doesn't code for RT cores in DXR, as the drivers handle that automatically. Other than calling the specific VK_NV_ray_tracing extension in Vulkan, there are no commands specific to the use of RT cores in that API either.
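To make that concrete, here is a minimal sketch (the helper name is mine, and device creation is assumed to have happened elsewhere) of the only ray tracing question an application can actually ask D3D12 - a feature tier, not whether RT cores exist:

```cpp
#include <windows.h>
#include <d3d12.h>

// Hypothetical helper; the CheckFeatureSupport call itself is standard D3D12.
bool DeviceSupportsDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
    {
        return false;
    }

    // Tier 1.0 or above means the driver exposes DXR. Whether the rays end up
    // on dedicated RT cores (Turing) or on the shader cores (Pascal's driver
    // fallback) is entirely the driver's business - the API never says.
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```

A Pascal card on the DXR-enabled drivers reports Tier 1.0 here just like a Turing card does, which is why the same game code can run on both.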
 

So you know as well as I do that Nvidia needs to work with EACH game developer, because that "driver" you speak of needs to be specifically coded for. Nvidia's own drivers don't decode DXR and make RT cores work... because DXR doesn't work that way.

That is why Nvidia themselves worked directly with certain developers.
 
Nvidia's own drivers don't decode DXR and make RT cores work... because DXR doesn't work that way.
That's how all drivers work - they decode the commands issued via the API, be it Direct3D, OpenGL, OpenCL, CUDA, Vulkan, etc. It makes no difference what pipeline D3D is running (graphics, compute, DXR); it cannot make a graphics card use any specific part of its hardware. For example, just read through the code in these DXR examples:


There are no RTX or RT core-specific commands within the code. Strictly speaking, the instructions that are accelerated by the RT cores are identified and sent to those hardware parts by the streaming multiprocessor's decode and dispatch unit, rather than by the GPU drivers; however, it's still the latter that decodes the API instructions in the first place.
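As a rough illustration (the command list, state object and shader-table layout here are assumed and simplified, not taken from any particular sample), the dispatch side of a DXR renderer looks something like this - note that nothing in it refers to RT cores or any other piece of hardware:

```cpp
#include <windows.h>
#include <d3d12.h>

// Sketch only: 'cmdList', 'rtPipeline' and the shader-table address are
// assumed to have been created earlier in the frame setup. The identical
// call runs on any DXR-capable device; the driver (and the GPU's own
// scheduling hardware) decide where the ray work actually executes.
void TraceFrame(ID3D12GraphicsCommandList4* cmdList,
                ID3D12StateObject* rtPipeline,
                D3D12_GPU_VIRTUAL_ADDRESS shaderTable,
                UINT64 recordSize,   // assumed to satisfy DXR alignment rules
                UINT width, UINT height)
{
    D3D12_DISPATCH_RAYS_DESC desc = {};
    desc.RayGenerationShaderRecord.StartAddress = shaderTable;
    desc.RayGenerationShaderRecord.SizeInBytes  = recordSize;
    desc.MissShaderTable.StartAddress           = shaderTable + recordSize;
    desc.MissShaderTable.SizeInBytes            = recordSize;
    desc.MissShaderTable.StrideInBytes          = recordSize;
    desc.HitGroupTable.StartAddress             = shaderTable + 2 * recordSize;
    desc.HitGroupTable.SizeInBytes              = recordSize;
    desc.HitGroupTable.StrideInBytes            = recordSize;
    desc.Width  = width;
    desc.Height = height;
    desc.Depth  = 1;

    cmdList->SetPipelineState1(rtPipeline);  // the DXR state object (shaders, hit groups)
    cmdList->DispatchRays(&desc);            // vendor-agnostic: no RT core commands here
}
```

Everything about how those rays are scheduled onto RT cores, or onto plain shader cores on a GTX card, happens below this level, in the driver and the GPU itself.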

That is why Nvidia themselves worked directly with certain developers.
This was necessary because at the time of release of the RTX line of cards, DXR was still in beta (as, of course, were the GPU drivers, effectively). One can hardly expect a processor manufacturer to release a product that accelerates specific instructions without offering some guidance on how to program for it via an abstraction layer. Now that DXR is fully out, so to speak, there's no need for the layer, as the core API and GPU drivers handle everything else.
 
Well, Crytek came out with the Neon Noir ray tracing benchmark and Wccftech benchmarked it; here are the results:
[benchmark chart]

Source
Watching the clip, I think this is only ray-traced reflections; global illumination and ambient occlusion still use the traditional method. Had it used ray-traced GI, the 5700 XT's performance would have tanked even harder.
 
Those results look pretty low @krizby
I get 69 fps on average over 3 runs at 1920x1080 Ultra, using the remote actions average fps counter. The Vega 64 scores there are about 17% off compared to mine.
 

I get 162 fps average and 118 fps 1% lows at 1080p Ultra. The benchmark is not intensive at all, since my 2080 Ti can reach its highest clock without being power-limit constrained, which doesn't happen with the 3DMark Port Royal benchmark.

 
Strange, you seem to be missing the rain particles at 1 min 12. It might be that you had a lower rendering quality selected and the detail was missed.
 

Good eyes there, the rain in my clip is right to left while in yours it's left to right. With YouTube compression it's harder to spot, but in my original clip the raindrops are very obvious.
 
Good eyes there, the rain in my clip is right to left while in yours it's left to right. With YouTube compression it's harder to spot, but in my original clip the raindrops are very obvious.
Did you see the difference in the rendering?
 
That's how all drivers work - they decode the commands issued via the API, be it Direct3D, OpenGL, OpenCL, CUDA, Vulkan, etc. It makes no difference what pipeline D3D is running (graphics, compute, DXR); it cannot make a graphics card use any specific part of its hardware. For example, just read through the code in these DXR examples:


There are no RTX or RT core-specific commands within the code. Strictly speaking, the instructions that are accelerated by the RT cores are identified and sent to those hardware parts by the streaming multiprocessor's decode and dispatch unit, rather than by the GPU drivers; however, it's still the latter that decodes the API instructions in the first place.

This was necessary because at the time of release of the RTX line of cards, DXR was still in beta (as, of course, were the GPU drivers, effectively). One can hardly expect a processor manufacturer to release a product that accelerates specific instructions without offering some guidance on how to program for it via an abstraction layer. Now that DXR is fully out, so to speak, there's no need for the layer, as the core API and GPU drivers handle everything else.

You are explaining how it all works... (*sigh). I am telling you that is not how Nvidia marketed it, nor what they said... "It just works".

You are going the very long way around to try and dismiss what Jensen and Nvidia insinuated about RTX: that the very expensive RTX graphics cards with the VERY EXPENSIVE RT CORES are needed for ray tracing and to accelerate games (i.e. the RTX premium).

When in fact those "expensive" RT cores are not needed, and in most of the DXR games the RT cores are not even used...!



No need to keep going on defending Nvidia; Jensen isn't even defending himself on this. Matter of fact, Jensen has not been seen in public since he made those false statements that "it just works"...
 
Yup, seems like Nvidia's reflections are rendered at higher quality there (just play the clips at 0.25x speed), thus the bullet casing reflections are sharper and more distorted due to the water surface's imperfections, while the water puddles on AMD are just flat with no impurities at all.
Furthermore
AMD


Nvidia


AMD's reflections are less detailed and suffer from some quite jarring render errors there. I also checked the rain part: on every Nvidia GPU it's right to left, and on AMD it's left to right.
 

I don't see it.
What I do see is Nvidia losing in the quality of the reflections.
Even the rain is missing and the reflections flicker really badly.
 
Who's to say it isn't meant to be that way, considering it's a radar returning info?
 