The Ray Tracing Slideshow: DXR on Nvidia Pascal Tested

Julio Franco

And again we see why TechSpot is awesome, getting us the actual benchmarks. Thanks for doing this article!


So long as you have an i9-9900K...

I kinda think the i7 8700 would have made more sense... most people can't afford the i9.

I have the i9-7980XE. I'm not sure if the 9900K is better or not. It is so much newer.
 
The Titan model is always at the forefront of the present technology.

I had the Titan and the Titan XP before getting the 2080Ti. Many people claimed the Titan wasn't worth the money and we didn't need the extra memory, but I looked upon it as future proofing.

It's really only future proofing if you don't get every new card that comes out.

LOL I was thinking exactly the same.
 
Titan V should have been tested since it has lots of tensor cores.
I'm thinking of testing the Titan V but I only own Battlefield V - not the other games that currently support the technology.
 
Nvidia only released RT on Pascal to show that it only really works on RTX cards, in the hopes it would push users to upgrade to RTX. Given how much an RTX card with sufficient RT performance costs, and how little adoption of this new tech there has been from game devs, this seems like a pretty desperate move on Nvidia's part.
 
What would be the real-world use case for enabling this? Adding ray tracing to older games? I don't really see that happening, at least not in enough games to justify buying one of these cards. What company would go back to an old game and add this?
 
Honestly, not questioning TS's methods, but these numbers don't really make all that much sense to me. I'm thinking Nvidia gimped the ray tracing on Pascal cards; it wouldn't be in their interest for GTX cards to be able to keep up with even the low end of the RTX cards.

Not really. As briefly mentioned in this article (& mentioned to death in the many other RTX-related articles), ray tracing is primarily designed to work with RT cores, not the mainstream CUDA cores, because the RT cores are optimized for that particular type of work. CUDA cores can ray-trace -- just like CPUs can also ray-trace -- but it requires a more brute-force method, & dedicating CUDA cores to the ray-tracing portion means those cores aren't available for the rest of the rendering workload.

Whether the implementation for the GTX cards is "allocate X% of the CUDA cores to the ray-tracing workload, leaving fewer cores for everything else" or "add the ray-tracing workload on top of the rendering workload the CUDA cores already have to complete", it's basically increasing the CUDA workload without providing additional resources. Hence the performance hit they take.

And as this article pointed out, one of the likely reasons for Nvidia to provide it is to allow developers to test ray tracing with their existing GTX cards; unless you're testing the actual gameplay experience, developers don't need the game to run at 30 FPS, let alone 60 FPS, while they're writing code or verifying that models & textures are built correctly. The idea is that the more games that come out or get updated with ray-tracing support, the more gamers will buy RTX products to take advantage of it.
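To make "brute force" concrete, here's a rough sketch of what ray tracing on plain CUDA cores looks like; this is my own toy example (not anything from the article, and nothing like the actual DXR fallback path): one thread per pixel fires a primary ray against a single hard-coded sphere. A real game traces against a BVH holding millions of triangles, and that traversal is exactly what RT cores accelerate in hardware.

```cuda
// Toy brute-force ray tracer: one CUDA thread per pixel, each thread
// intersecting its primary ray against one hard-coded sphere. Purely
// illustrative of doing RT work on general-purpose CUDA cores.
#include <cstdio>
#include <cuda_runtime.h>

struct Vec3 { float x, y, z; };

__device__ Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
__device__ float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Distance along the ray to the sphere, or -1.0f on a miss.
__device__ float hitSphere(Vec3 center, float radius, Vec3 origin, Vec3 dir) {
    Vec3 oc = sub(origin, center);
    float a = dot(dir, dir);
    float b = 2.0f * dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - 4.0f * a * c;
    return disc < 0.0f ? -1.0f : (-b - sqrtf(disc)) / (2.0f * a);
}

__global__ void render(unsigned char *image, int width, int height) {
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= width || py >= height) return;

    // Pinhole camera at the origin looking down -z.
    Vec3 origin = {0.0f, 0.0f, 0.0f};
    Vec3 dir = {(px - width * 0.5f) / height, (py - height * 0.5f) / height, -1.0f};

    // Shade by hit distance; the ALU cycles spent here are cycles these
    // same CUDA cores cannot spend on the normal rasterization workload.
    float t = hitSphere({0.0f, 0.0f, -3.0f}, 1.0f, origin, dir);
    image[py * width + px] = t > 0.0f ? (unsigned char)(255.0f / t) : 0;
}

int main() {
    const int width = 1280, height = 720;
    unsigned char *d_image;
    cudaMalloc(&d_image, width * height);

    dim3 block(16, 16);
    dim3 grid((width + 15) / 16, (height + 15) / 16);
    render<<<grid, block>>>(d_image, width, height);
    cudaDeviceSynchronize();

    printf("traced %d primary rays on CUDA cores\n", width * height);
    cudaFree(d_image);
    return 0;
}
```

Even in this toy case the trade-off is visible: every thread doing intersection math here is a CUDA core that isn't shading the rest of the frame.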
 
So this raises the question: if CUDA can do ray tracing, would two CUDA cards make a significant difference? As in, could one be "dedicated", in a sense, to ray tracing? Or, at least, with double the CUDA count, would there be a significant impact on performance? Games themselves rarely benefit from SLI these days, but I'd be curious to see if ray tracing can take better advantage of the additional CUDA count.

Heck, how does SLI affect RTX cards in RT titles, while we're at it?

Would especially like to see what a pair of 1660 series cards can do and maybe a pair of 1070s.
 
I tried the demos with a 1080 Ti SLI setup but they only ever used 1 GPU. :( I'll try forcing some AFR later to see if it's better.
 
So long as you have an i9-9900K...

I kinda think the i7 8700 would have made more sense... most people can't afford the i9.

I have the i9-7980XE. I'm not sure if the 9900K is better or not. It is so much newer.

When ray tracing is turned on, the results are most likely GPU bound, so we would get similar results with lesser CPUs. TechSpot probably used an i9-9900K just so no one could dispute that it was CPU bound (or because they had already tested the RTX cards on the i9-9900K and didn't want to retest everything on a different CPU).
 
Now the i9-9900KF can be bought, not too expensive either: https://www.netonnet.no/art/datakomponenter/prosessor/intelsocket1151/intel-core-i9-9900k/1004994.11758/

For RT/RTX support we'll have to wait at least 1-4 generations to get good performance in games, and when 8K comes in we'll have to wait another 5-7 generations. Gen 1 is just a test of how well it will run in the future. But once we get PCIe 4.0 and beyond, faster CPUs, better RAM and cooling, and PCIe 3.0 is no longer being made, then we can talk about good FPS in games, CAD, and so on. 3DMark now has a PCIe 4.0 test.
 
TSMC is on track for 2020 production @ 5nm

Translation:
Nvidia could massively outperform AMD and skip 7nm next year with greatly improved 2nd-gen ray tracing @ 5nm

But does anyone really want RTX 2080 Ti performance and the "equivalent" of an i9-9900K next year, running comfortably on a 450-500 watt SFX power supply in a tiny case?

Oh hell yeah

and throw in some HBM memory while yer at it
 
So this raises the question: if CUDA can do ray tracing, would two CUDA cards make a significant difference? As in, could one be "dedicated", in a sense, to ray tracing? Or, at least, with double the CUDA count, would there be a significant impact on performance? Games themselves rarely benefit from SLI these days, but I'd be curious to see if ray tracing can take better advantage of the additional CUDA count.

You're still brute-forcing ray tracing in that case, and will still lose performance relative to using RT cores.

Heck, how does SLI affect RTX cards in RT titles, while we're at it?

I would imagine so; the default is for each card in SLI to render alternate frames, so I'd expect it to scale. That being said, SLI is basically dead now that each developer is responsible for implementing it (thanks, Vulkan/DX12!).

Would especially like to see what a pair of 1660 series cards can do and maybe a pair of 1070s.

Putting aside the recent lack of SLI support, you still aren't going to get acceptable FPS. Let me be clear: ray tracing is *very* hard to compute in a traditional manner; that's why Nvidia is adding specialized hardware to its cards to speed up the workload with a minimum of die space.
 
You're still brute-forcing ray tracing in that case, and will still lose performance relative to using RT cores.

Yes, I know. I'm very much aware of this...

I would imagine so; the default is for each card in SLI to render alternate frames, so I'd expect it to scale. That being said, SLI is basically dead now that each developer is responsible for implementing it (thanks, Vulkan/DX12!).

I'm also aware that SLI is dead, but I'd still be curious to see what would happen if it were used, and even more so with RTX cards with actual RT cores: does ray tracing itself scale, and how well, if at all?

Putting aside the recent lack of SLI support, you still aren't going to get acceptable FPS. Let me be clear: ray tracing is *very* hard to compute in a traditional manner; that's why Nvidia is adding specialized hardware to its cards to speed up the workload with a minimum of die space.

Dude, I know RT is hard for non-RTX cards that don't have dedicated hardware to calculate it. You aren't clarifying anything; you're stating the obvious, which everyone already knows...

The fact of the matter is, CUDA cores can do it. Let me be clear: more GPUs with more CUDA cores could eventually do it at an acceptable frame rate. I also understand the concept of diminishing returns, but as an experiment I think it would be interesting nonetheless to see what would happen.
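For what it's worth, here's roughly what that experiment could look like; a hypothetical sketch only (the kernel below is a stand-in for a real per-pixel ray-tracing workload, and none of the current DXR demos expose anything like this): split the frame's scanlines between two GPUs and trace both halves concurrently.

```cuda
// Hypothetical split-frame experiment: give each GPU half of the image's
// scanlines and run both halves concurrently. This is NOT how the DXR
// demos work; it's just a sketch of the "more CUDA cores via more cards"
// idea discussed above.
#include <cstdio>
#include <cuda_runtime.h>

// Stand-in for a per-pixel ray-tracing workload.
__global__ void traceRows(unsigned char *half, int width, int rows, int rowOffset) {
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= width || py >= rows) return;
    // Fake "shading": real code would trace a ray for pixel (px, py + rowOffset).
    half[py * width + px] = (unsigned char)((px ^ (py + rowOffset)) & 0xFF);
}

int main() {
    const int width = 1920, height = 1080;
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount < 2) { printf("this sketch needs 2 GPUs\n"); return 1; }

    const int rowsPerGpu = height / 2;
    unsigned char *halves[2];

    // Launch half the frame on each GPU; the kernels run concurrently.
    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);
        cudaMalloc(&halves[dev], width * rowsPerGpu);
        dim3 block(16, 16);
        dim3 grid((width + 15) / 16, (rowsPerGpu + 15) / 16);
        traceRows<<<grid, block>>>(halves[dev], width, rowsPerGpu, dev * rowsPerGpu);
    }

    // Wait for both GPUs; a real renderer would then copy and stitch the
    // two halves back into one frame before presenting it.
    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);
        cudaDeviceSynchronize();
        cudaFree(halves[dev]);
    }
    printf("traced %d rows on each of 2 GPUs\n", rowsPerGpu);
    return 0;
}
```

The synchronize-and-stitch step at the end is where naive multi-GPU scaling usually loses its gains, which is part of why diminishing returns kick in.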
 
I tried the demos with a 1080 Ti SLI setup but they only ever used 1 GPU. :( I'll try forcing some AFR later to see if it's better.
Unfortunately, it's highly possible Nvidia will intentionally not allow you to use a second card with any DXR features, because it would hurt sales of their RTX cards; it would essentially make it meaningless for someone in your situation to upgrade if acceptable frame rates could be achieved. It could be a limitation in the demos, but I find that equally unlikely.
 
720p though; surely they could make a great showing, or even 900p... not too shabby.
I still haven't seen any difference between RT and non-RT. And could these tensor cores and DLSS be re-purposed to give:
a) better non-RT performance?
b) some mining functions (Ethereum, etc.)?
c) some AI/ML functions?
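On (c), at least, the answer is a qualified yes: tensor cores are essentially fast mixed-precision matrix-multiply units, which is exactly what AI/ML workloads need (by the same token, they likely do nothing for (b), since Ethereum mining is memory-bound integer hashing, not matrix math). As a hedged illustration, here's a minimal CUDA WMMA sketch, my own example rather than anything from the article; it assumes a Volta/Turing-class GPU and compilation with -arch=sm_70 or newer. One warp multiplies a 16x16 half-precision tile on the tensor cores:

```cuda
// Minimal tensor-core GEMM via the CUDA WMMA API: one warp computes
// C = A * B for a single 16x16x16 tile in half precision, accumulating
// in float. This is the same hardware RTX uses for DLSS inference.
#include <cstdio>
#include <cuda_fp16.h>
#include <mma.h>

using namespace nvcuda;

__global__ void tileGemm(const half *a, const half *b, float *c) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> aFrag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> bFrag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> cFrag;

    wmma::fill_fragment(cFrag, 0.0f);           // start the accumulator at zero
    wmma::load_matrix_sync(aFrag, a, 16);       // 16 = leading dimension
    wmma::load_matrix_sync(bFrag, b, 16);
    wmma::mma_sync(cFrag, aFrag, bFrag, cFrag); // executes on the tensor cores
    wmma::store_matrix_sync(c, cFrag, 16, wmma::mem_row_major);
}

int main() {
    half hostA[256], hostB[256];
    float hostC[256];
    for (int i = 0; i < 256; ++i) {
        hostA[i] = __float2half(1.0f);  // A = all ones
        hostB[i] = __float2half(2.0f);  // B = all twos -> every C entry = 32
    }

    half *devA, *devB; float *devC;
    cudaMalloc(&devA, sizeof(hostA));
    cudaMalloc(&devB, sizeof(hostB));
    cudaMalloc(&devC, sizeof(hostC));
    cudaMemcpy(devA, hostA, sizeof(hostA), cudaMemcpyHostToDevice);
    cudaMemcpy(devB, hostB, sizeof(hostB), cudaMemcpyHostToDevice);

    tileGemm<<<1, 32>>>(devA, devB, devC);  // exactly one warp
    cudaMemcpy(hostC, devC, sizeof(hostC), cudaMemcpyDeviceToHost);

    printf("C[0][0] = %f (expected 32)\n", hostC[0]);
    cudaFree(devA); cudaFree(devB); cudaFree(devC);
    return 0;
}
```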
 