The State of Nvidia RTX Ray Tracing: One Year Later

Yup, it seems like the Nvidia reflections are rendered at higher quality there (just play the clips at 0.25 speed): the bullet casing reflections are sharper and more distorted due to the water surface's imperfections, while the water puddles on AMD are just flat, with no impurities at all.
Furthermore, comparing the two clips:

AMD: [embedded clip]

Nvidia: [embedded clip]

AMD's reflections are less detailed and suffer from some quite jarring render errors there. I also checked the rain: on every Nvidia GPU it falls right to left, and on AMD it's left to right.

Glad you pointed that out, because it only did that at 1080p, not at 2560x1440.
sooooooo yeahhhhhhh
 
Good eyes there: the rain in my clip falls right to left, while in yours it's left to right. With YouTube compression it's harder to spot, but in my original clip the raindrops are very obvious.
Busted. I'm so glad I checked; it made me think you had the upper hand when you didn't.
Nice try though, keep it up.
 
Preamble: I've worked with numerous 3D graphics apps (3ds Max, Modo, Terragen, V-Ray, etc.), so I know a thing or two about ray tracing in its various forms.

First, a very general and positive thought: I'm glad that *some form* of ray tracing is being pushed forward as tech used in game development. These "fringe/pioneer" technologies have to start somewhere, if they are to emerge at all. And yes, only the very wealthy will buy this tech early in its adoption. Just a general thought.

Now the negatives, which are more numerous:
1. As almost everyone else has already mentioned, these games aren't applying a full ray tracing approach to the entire render pipeline. All of them use a quasi-ray-tracing method blended with traditional rendering, so we are not seeing the benefits that a holistic approach to ray tracing would grant. You have to squint to tell the difference at times; mostly you can tell by shadows and by light reflecting off diffuse surfaces (radiosity), especially in corners. But given the quality of standard rendering methods with RTX off, the difference in experience is negligible.
2. The high price, compared to the negligible experiential difference and combined with the lower performance, is not conducive to mass appeal and adoption of the product.
3. The above shows us that the "tech-to-vision" and "tech-to-market" equations are premature.

I still have a regular GeForce GTX 1080, and what I personally want in a GPU is *still* raw power. I won't care about ray tracing until it becomes widely adopted everywhere. Until ray tracing matters to me, I'll care far more about being able to render 80+ fps across two 4K VR screens (Pimax 8KX). I have no trouble motivating myself to buy a very expensive high-end card, but I would want a card *without* the RTX hardware, with the same raw power, at a lower price. Nvidia should make a version of the next 3080 Ti that does not contain the RT cores, at a lower price. I also hope AMD steps onto the stage with a killer GPU to beat the 3080 Ti, as they are killing it with their high-end CPUs recently (despite high power draw). If AMD is able to deliver a cheaper competitor to the 3080 Ti in terms of raw power, without the ray tracing, I'll go for AMD.
 
I found this article very negative towards ray tracing, and I think that's quite unfair. Nvidia have clearly invested a lot into this technology, and because of this developers are beginning to as well. And let's face it, this is no gimmick: ray-traced 3D images are superior to rasterisation or anything that came before. AMD need to get something out soon before their cards get classed as second rate in image quality as well as in frame rates and power efficiency.

My opinion is that it's been quite a good first year for a new tech. Oh, and people who whine about it being too expensive must be new here. New tech always costs loads; what were people expecting, ray tracing on a budget GPU? Come on...
 
...
My opinion is that it’s been quite a good first year for a new tech.

What good things happened during the year for RTRT? Was there a single title that looked way better (produced a 'wow' effect) with RT on than off? I don't see any: you have to carefully examine every pixel in most scenes to tell whether RT is on or off, but you immediately notice the performance drop, even with the most advanced 2080 Ti.

Oh, and people who whine about it being too expensive must be new here. New tech always costs loads; what were people expecting, ray tracing on a budget GPU?

That's fair, but it has almost never been this expensive: even 20 years ago the GeForce 256 didn't cost that much (even if you take inflation into account). And since then, not a single new consumer-grade GPU has cost more than $1000. Of course, full RT is a whole different rendering method, but we'll have to wait about 10 years (or more) for it to become practically useful. For full RT, the uplift in rendering quality is absolutely not worth it, and hybrid rendering shows mixed results so far and is not that impressive.
 
What good things happened during the year for RTRT? Was there a single title that looked way better (produced a 'wow' effect) with RT on than off? I don't see any: you have to carefully examine every pixel in most scenes to tell whether RT is on or off, but you immediately notice the performance drop, even with the most advanced 2080 Ti.



That's fair, but it has almost never been this expensive: even 20 years ago the GeForce 256 didn't cost that much (even if you take inflation into account). And since then, not a single new consumer-grade GPU has cost more than $1000. Of course, full RT is a whole different rendering method, but we'll have to wait about 10 years (or more) for it to become practically useful. For full RT, the uplift in rendering quality is absolutely not worth it, and hybrid rendering shows mixed results so far and is not that impressive.
What good things happened? We actually got real-time ray tracing in games!

I think it's a case of managing expectations. What do you expect? A complete library of games supporting this very expensive bleeding-edge technology and maximising its potential in its first year or two? If so, that's very unrealistic. If, however, you aren't quite so delusional, then you might expect developer support to start slow and then, as the cards get cheaper, for developers to announce more and more ray-tracing games. And that's more or less what we are getting.

In a few years' time it will be normal for a new AAA game to support ray tracing, and developers will get better and better at using it.

The real positive is the news that next-gen consoles are getting it. If that isn't an endorsement of the technology, then I don't know what is.
 
That's fair, but it has almost never been this expensive: even 20 years ago the GeForce 256 didn't cost that much (even if you take inflation into account).
The NV10 was a relatively basic chip, in terms of manufacturing, for the time of its launch (October 1999), having just 17 million transistors, built on a standard 220 nm stop-gap process node, and with a chunky 140 mm2 die size. The SDR version launched at $250, with the DDR and Quadro versions appearing later. The relative lack of SKUs off that single chip is indicative of how little binning the manufacturing went through (this did change with the GeForce 2).

Compare that to CPUs of the same period - for example, Coppermine Pentium IIIs at the time were 28 million transistors, 180 nm and 90 mm2; the 733 MHz version (released at the same time as the GeForce 256) cost $775. There were 9 SKUs off this one chip in October, with the cheapest being just $250. A further 16 appeared over the next 12 months, with the most expensive being just under $1000.

Fast forward to September 2018 and the tables have somewhat turned. The GeForce RTX 2080 Ti has 18,600 million transistors, built on a custom TSMC 12 nm process, with a 754 mm2 die - all for a $1000 MSRP at launch (some of which has to cover 11 GiB of GDDR6). The TU102 chip is found in 6 SKUs.

Compare that to the Intel Core i9-9900K: more than 2500 million transistors (Intel are shy about exact figures these days), Intel's own refined 14++ process, 175 mm2 die. Launch price around $490. The 8 core Coffee Lake Refresh appears in 10 SKUs.

Certainly the very best top/high-end GPUs are now very expensive (and Nvidia have clearly played a game of "people will still pay silly prices if there is no competition"), but they're also utterly massive chips, and the likes of the TU102 still only appear in a relatively small number of SKUs compared to high-end desktop CPUs.
 
What do you expect?

1-3 games with next-gen RT graphics of such quality that once you look at them, you say to yourself: I don't want to play this rasterized ... any more :) And without a 50-1000% performance drop. I don't see a single title of that quality right now, nor do I expect one in the near future. Today, RT looks to me like tech used by some not very successful projects (Control, and to some extent Metro Exodus) to get some attention (and ... money from Nvidia).
 
1-3 games with next-gen RT graphics of such quality that once you look at them, you say to yourself: I don't want to play this rasterized ... any more :) And without a 50-1000% performance drop. I don't see a single title of that quality right now, nor do I expect one in the near future. Today, RT looks to me like tech used by some not very successful projects (Control, and to some extent Metro Exodus) to get some attention (and ... money from Nvidia).
There are more than 3 games that are playable at above 60fps with RTX on.

I advise doing some more research on this subject as you clearly don’t know anything about it if you think we don’t even have 1-3 games with RTX features.
 
There are more than 3 games that are playable at above 60fps with RTX on.

Of course there are, but not in 4K; you're limited to 1440p at best, and 60 fps is not enough for every situation (some people own 144-240 Hz monitors). The main problem is that you have to carefully examine each frame to notice the better quality with "RTX on". We don't find any striking uplift in visual quality when turning RTX on in any of the 5-7 titles we have right now. The only obvious differences are in the reflections; other details (lighting/shadows) you have to hunt for carefully, or they look a bit different but not super realistic (Metro Exodus, for example). The original article is exactly about this issue. Even Quake II RTX with full RT is not night and day compared to the original: RTX can't fix ugly models (like 3-5 polygons each), and it also ruins the dark, grim atmosphere of the original by adding more light to some locations. So I find nothing exciting enough about RTX to justify the $1200-1500 price (2080 Ti) compared to my (used) $350-400 1080 Ti. The performance uplift outside of RTX is also not quite there: +25-35% FPS at 1440p. So I'm skipping Turing and waiting for the next generation of RTX hardware.
 
Certainly the very best top/high-end GPUs are now very expensive (and Nvidia have clearly played a game of "people will still pay silly prices if there is no competition"), but they're also utterly massive chips

I agree that manufacturing costs are very high, but I'm looking at it from the perspective of the consumer, not the manufacturer. I don't care how many billions of transistors are in there if they don't bring any noticeable value. RTX is an example of that: why waste transistor budget on RT/tensor cores if they're not (properly) utilized by 90% of gaming software? Tensor cores could potentially be useful for some machine learning tasks, but we don't buy a gaming GPU for that (I consider DLSS a failure and not generally useful; it could be replaced by conventional AA/supersampling techniques). Nvidia partially confirmed the RT/tensor problem by adding the Turing GTX chips: no RT/tensor cores, but a reasonable price. That's OK, but the main feature of Turing is dropped from the GTX 16XX :(
 
The tensor cores are used for denoising in RT, but you're right in that they're otherwise underutilised in games. However, this isn't the fault of the hardware: the games industry has always been slow to adopt developing technologies beyond token implementations (the reason being that if only a small percentage of a game's potential userbase has hardware to support such features, then the cost of developing them isn't going to be recouped). Game AI/pathfinding and procedural generation of landscapes, textures, sounds, etc. are prime areas for ML development.

I'd also agree that for a good number of people, the tensor and RT cores in RTX models might seem to be a waste of transistors, but without knowing exactly how many are used for each structure, it's hard to judge how costly they are (especially since Nvidia altered the structures of the SM units for the TU116, compared to the other TU chips).

But just in terms of pure transistor count (which I know is a silly thing to use in any kind of analysis), AMD's Radeon VII is a 13.23 billion transistor chip; Nvidia's GeForce RTX 2080 is a 13.6 billion one. They perform pretty similarly, so just on that basis, the addition of the tensor and RT cores is certainly no loss.

Designing a single architecture for multiple market sectors that have wildly different needs is always going to be a risk, and both AMD and Nvidia have taken this path. For Nvidia it's worked out alright, but somewhat less so for AMD. Intel will not be following suit with the Xe range, as they're going with 3 different architectures altogether.
 
There are more than 3 games that are playable at above 60fps with RTX on.

I advise doing some more research on this subject as you clearly don’t know anything about it if you think we don’t even have 1-3 games with RTX features.


Again, posters like you clearly show that you haven't figured out Jensen lied to you.

There are only 6 "RTX On" games in existence. Most games and gamers will be using DXR, which is not "RTX On"... as Jensen misled a good many people to believe. As neeyik just mentioned, RTX Turing chips have special cores, called RT cores. Those "special" cores are not used in DXR games.

We all know this; you have been duped. It is what it is... and Nvidia tried to pull another PhysX move, using a sleight-of-hand marketing hoax, imo.
 
As neeyik just mentioned, RTX Turing chips have special cores, called RT cores. Those "special" cores are not used in DXR games
RT cores are used by code run through the DirectX Raytracing (DXR) pipeline, if that code is doing ray-triangle intersection calculations and working with BVH acceleration structures. The drivers compile those shaders for the GPU, but it's the SM scheduler and dispatch units that determine which parts of the GPU process the instructions. For Volta, Pascal, and Turing GTX, it's the normal shader cores; for Turing RTX, it's the shader and RT cores.
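
For anyone curious what that work actually looks like, here's a rough CPU-side sketch in plain C++ of the two jobs the RT cores accelerate: walking a bounding volume hierarchy and testing rays against triangles (the Möller-Trumbore test). The structure and names are mine for illustration; this isn't how Nvidia's hardware or the DXR runtime actually lays things out.

```cpp
// Rough CPU-side sketch of the work an RT core accelerates:
// (1) pruning via a bounding volume hierarchy (BVH) and
// (2) ray-triangle intersection (Moller-Trumbore).
// Structure and names are illustrative, not Nvidia's or DXR's.
#include <algorithm>
#include <cmath>
#include <optional>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

struct Ray      { Vec3 origin, dir; };
struct Triangle { Vec3 v0, v1, v2; };

// Slab test for one axis of an axis-aligned bounding box.
static bool slab(float o, float d, float lo, float hi, float& tmin, float& tmax) {
    float inv = 1.0f / d;
    float t0 = (lo - o) * inv, t1 = (hi - o) * inv;
    if (t0 > t1) std::swap(t0, t1);
    tmin = std::max(tmin, t0);
    tmax = std::min(tmax, t1);
    return tmax >= tmin;
}

struct AABB {
    Vec3 lo, hi;
    bool hit(const Ray& r) const {           // "does the ray touch this box at all?"
        float tmin = 0.0f, tmax = 1e30f;
        return slab(r.origin.x, r.dir.x, lo.x, hi.x, tmin, tmax) &&
               slab(r.origin.y, r.dir.y, lo.y, hi.y, tmin, tmax) &&
               slab(r.origin.z, r.dir.z, lo.z, hi.z, tmin, tmax);
    }
};

// Moller-Trumbore: distance along the ray to the triangle, if it is hit.
static std::optional<float> intersect(const Ray& r, const Triangle& tri) {
    const float eps = 1e-7f;
    Vec3 e1 = sub(tri.v1, tri.v0), e2 = sub(tri.v2, tri.v0);
    Vec3 p  = cross(r.dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return std::nullopt;   // ray parallel to triangle
    float invDet = 1.0f / det;
    Vec3 s = sub(r.origin, tri.v0);
    float u = dot(s, p) * invDet;
    if (u < 0.0f || u > 1.0f) return std::nullopt;
    Vec3 q = cross(s, e1);
    float v = dot(r.dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return std::nullopt;
    float t = dot(e2, q) * invDet;
    if (t > eps) return t;
    return std::nullopt;
}

// A BVH node is either an inner node (two children) or a leaf (a few triangles).
struct BVHNode {
    AABB bounds;
    int left = -1, right = -1;      // child indices; -1 means leaf
    std::vector<Triangle> tris;     // only filled in leaves
};

// Traversal: skip every subtree whose bounding box the ray misses, then test the
// triangles in the leaves. This is roughly the work the RT cores do in hardware.
static std::optional<float> trace(const std::vector<BVHNode>& nodes, int idx, const Ray& r) {
    const BVHNode& n = nodes[idx];
    if (!n.bounds.hit(r)) return std::nullopt;
    std::optional<float> best;
    if (n.left < 0) {
        for (const Triangle& t : n.tris)
            if (auto h = intersect(r, t); h && (!best || *h < *best)) best = h;
    } else {
        for (int child : {n.left, n.right})
            if (auto h = trace(nodes, child, r); h && (!best || *h < *best)) best = h;
    }
    return best;
}
```

A GPU has to do this for millions of rays every frame, which is why running it on general-purpose shader cores (Volta, Pascal, Turing GTX) is so much slower than running it on dedicated units.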
 
RT cores are used by code run through the DirectX Raytracing (DXR) pipeline, if that code is doing ray-triangle intersection calculations and working with BVH acceleration structures. The drivers compile those shaders for the GPU, but it's the SM scheduler and dispatch units that determine which parts of the GPU process the instructions. For Volta, Pascal, and Turing GTX, it's the normal shader cores; for Turing RTX, it's the shader and RT cores.

RT cores are not part of the front end. That is why you take such a massive hit in performance, waiting for the RT cores to speed things up in certain games: it's not native.
 
RT cores are not part of the front end. That is why you take such a massive hit in performance, waiting for the RT cores to speed things up in certain games: it's not native.
In Nvidia GPUs, the Gigathread Engine is the front end, so no, the RT cores aren't front end. But the same is true for the shader units, triangle setup, ROPs, TMUs, etc. The fact that the RT cores aren't responsible for warp scheduling and dispatch has nothing to do with the performance, nor is it even relevant to the question of whether or not they're supported in DXR.
 
There is another DXR + DLSS title on the block: Deliver Us the Moon. It's an indie game and not well known, but the DXR and DLSS implementation here is comparable to Control's. I just tried it, and it even has 3 levels of RTX (Medium, High and Epic) and 3 levels of DLSS (Performance, Balanced and Quality). So far the ray-traced reflections and transparent reflections are more discernible than the shadows. DLSS can go from a 50% FPS improvement in Quality mode to a 100% improvement in Performance mode. RTX and DLSS are progressing nicely; hopefully Cyberpunk 2077 can make great use of RTX.
 
Of course there are, but not in 4K; you're limited to 1440p at best, and 60 fps is not enough for every situation (some people own 144-240 Hz monitors). The main problem is that you have to carefully examine each frame to notice the better quality with "RTX on". We don't find any striking uplift in visual quality when turning RTX on in any of the 5-7 titles we have right now. The only obvious differences are in the reflections; other details (lighting/shadows) you have to hunt for carefully, or they look a bit different but not super realistic (Metro Exodus, for example). The original article is exactly about this issue. Even Quake II RTX with full RT is not night and day compared to the original: RTX can't fix ugly models (like 3-5 polygons each), and it also ruins the dark, grim atmosphere of the original by adding more light to some locations. So I find nothing exciting enough about RTX to justify the $1200-1500 price (2080 Ti) compared to my (used) $350-400 1080 Ti. The performance uplift outside of RTX is also not quite there: +25-35% FPS at 1440p. So I'm skipping Turing and waiting for the next generation of RTX hardware.
You really ought to actually try using RTX before writing essays on tech forums slamming it. You clearly haven't; it's quite obvious when it's turned on or off. Either that or you need glasses.

I don’t understand why you would come on here writing essays on how awful RTX is when you quite blatantly have never even tried it.
 
In Nvidia GPUs, the Gigathread Engine is the front end, so no, the RT cores aren't front end. But the same is true for the shader units, triangle setup, ROPs, TMUs, etc. The fact that the RT cores aren't responsible for warp scheduling and dispatch has nothing to do with the performance, nor is it even relevant to the question of whether or not they're supported in DXR.

RT cores are not natively used, and they can't be used in real time; they're software-driven, which causes latency. RT cores are excellent in workstations, because frames aren't rendered in real time there: the image is rendered out fully over a few seconds after moving it, etc.

Games are different, and Jensen tried to make use of Turing's RT cores. Jensen has been trying to masquerade 6 "RTX On" games as some industry standard (and it has failed). Developers are not working with Nvidia on RTX On games. Nvidia's next approach will be much different. Turing is broken for DXR gaming.
 
RT cores are not natively used, and they can't be used in real time
For OpenGL and Vulkan, an extension has to be used (but this is true for lots of GPU functions). Direct3D 12, however, has an integrated pipeline with specific hardware requirements for it to be used, yet the actual implementation of the architecture is transparent to the pipeline. For example, the shader architecture employed by Nvidia is different to that used by AMD, but vertex, pixel, compute, etc. shaders programmed in D3D12 are oblivious to this difference. The reason is that the GPU drivers compile the code for the hardware, not the API. So when using the DXR pipeline, the respective GPU running the code will use whatever hardware is available to process the instructions; in the case of RTX graphics cards, that means the RT cores are used for processing triangle-ray intersection calculations and BVH algorithms, and the tensor cores are used for denoising calculations. For GTX cards running the code, the shader cores are used for all such calculations, and since they aren't specialised for that work, the performance is clearly a lot slower.
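
To make the "transparent to the pipeline" point concrete, here's a minimal sketch (Windows/C++, assuming the standard D3D12 headers from a recent Windows SDK) of how an application asks whether DXR is available at all. The API only reports a raytracing tier; whether that tier is met with dedicated RT cores or with plain shader cores is entirely the driver's and the hardware's business.

```cpp
// Minimal check for DXR support via the D3D12 feature query.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0, IID_PPV_ARGS(&device)))) {
        std::puts("No D3D12 device available.");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &options5, sizeof(options5))) &&
        options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0) {
        // From here, the application records BuildRaytracingAccelerationStructure
        // and DispatchRays calls; how those map onto the silicon is the driver's job.
        std::puts("DXR tier 1.0 or better is supported.");
    } else {
        std::puts("No DXR support reported by this device/driver.");
    }
    return 0;
}
```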

Now if you want to believe something entirely different, then go ahead and think that way; the reality of the situation will remain the same, regardless of your thoughts and statements on the matter.
 
"that means the RT cores are used for processing triangle-ray intersection calculations and BVH algorithms, and the tensor cores are used for denoising calculations."

I think you are incorrect. That is how it works for RTX games.
 
The RT cores contain two specific units: one determines which bounding volume a triangle resides in, within a bounding volume hierarchy; the second determines exactly where in screen space a ray intersects with the triangle in question. Now, if the DXR code doesn't use BVHs to accelerate the ray tracing process (and Microsoft strongly suggest that you do), then that aspect of the RT core won't do anything. It's similar with the tensor cores and denoising: these units do nothing more than multiply and accumulate 4x4 matrices. Since denoising generally performs such calculations, the GPU drivers will pick up the D3D12 instructions and issue the calculations to the tensor cores; however, if no such operations are requested, then they won't get used.
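
Written out longhand, that tensor core primitive is just the following (plain C++ purely for illustration; the real units work on FP16 inputs with FP16/FP32 accumulation and do the whole thing as a single operation):

```cpp
#include <array>

using Mat4 = std::array<std::array<float, 4>, 4>;

// D = A * B + C for 4x4 matrices: the multiply-accumulate a tensor core performs
// in one hardware operation, spelled out as plain loops here.
Mat4 fma4x4(const Mat4& A, const Mat4& B, const Mat4& C) {
    Mat4 D{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            float acc = C[i][j];              // start from the accumulator matrix
            for (int k = 0; k < 4; ++k)
                acc += A[i][k] * B[k][j];     // multiply-accumulate
            D[i][j] = acc;
        }
    return D;
}
```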
 
A bit late but Ray Tracing reminds me of 32-bit color back in the late '90s. You had to stay still and focus on the right parts of the screen to see the results, the game had to be specifically programmed to use it, and once you started playing you were very unlikely to get even mildly enjoyable frame rates.
 
A bit late but Ray Tracing reminds me of 32-bit color back in the late '90s.

Yes, to some extent: the difference between

4 bit -> 8 bit color is huge,
8 -> 15/16 bit is also very evident,
16 -> 24/32 bit more limited, but clearly noticeable.

I think 32-bit color was not so aggressively marketed as the only "big thing"; there was much more than that. Remember Riva TNT/TNT2 vs 3dfx, for example: a resolution increase, more megapixel/triangle throughput, higher-quality textures, (limited) OpenGL support, AGP, 2x more VRAM, etc. It was a very innovative product for its time. By contrast, RTRT by itself was too much of a promise from NV when Turing launched, but even now (1.5 years later!!) how many titles with a proper RTX/DXR implementation do we have? Only Control, Metro Exodus, and QIIRTX (not new)? Upcoming titles: CP 2077 and ...? BTW, you could use 32-bit color in almost every title. I think maybe when Ampere is delivered we'll get more value from RT. But Turing will be obsolete by then.

And my GTX 1080 Ti is still fine in 2020, but the RTX 20 series is still marketing BS - don't waste your money.
 