Shadow of the Tomb Raider: A Ray Tracing Investigation

Why do I get a feeling people just can't accept that hardware acceleration of ray tracing is a good thing?

The Anti-RTX crowd wants to pound RTX with logical-sounding reasons why it isn't good or worth it (such as saying it slows down the framerate), but the truth is that software-based ray tracing would drop frame rates by 75-90%, as you can see in this video where they attempt it on the GTX 1080 and GTX 1080 Ti and get less than 10 FPS --> https://www.reddit.com/r/nvidia/comments/9h62xl/ray_tracing_comparison_gtx_1080_1080_ti_2080_2080/

The truth is that the Anti-RTX movement is based not in logic but in emotion. Nvidia really messed up the marketing. They introduced their RTX ray tracing technology with that Star Wars real-time ray tracing demo...except it was originally rendered on 4 x Tesla V100 cards (which cost more than $10,000 each). Nvidia set expectations for RTX so high that people could only be deeply disappointed when Battlefield V's reflections finally shipped in November 2018. Battlefield V's ray tracing obviously looks nothing like the Star Wars demo, so a lot of haters were created. Nvidia set expectations wildly out of line with what one Turing-based GPU could actually render.

Then there is the price. Had the RTX 2080 Ti launched at the same $699 as the GTX 1080 Ti instead of $1,199, I believe there would also be far fewer haters. The price has turned the RTX generation into a luxury item, just like the iPhone X when it came out, and people will emotionally hate anything priced out of their reach and start doing cost/benefit analysis. You can blame AMD for this one; AMD has nothing that would force the price down, so Nvidia will charge $1,199 for its latest and greatest, just like Mercedes charges two arms and a leg for the S-Class with its latest and greatest technology.

So I understand why so many people are Anti-RTX. But don't buy the supposedly logical arguments, such as the claim that it cuts the framerate, since a software-based solution drops frame rates by 75-90%, into the low single digits, even on a GTX 1080 Ti. The backlash is purely emotional, brought on by Nvidia's marketing and pricing missteps.
 

Saying that software solutions would have a 75-90% drop in frame rates is factually incorrect. The software implementations we've seen have in fact been far more efficient.
 

This is for a demo. Not an actual game. You and I can't play this tech demo. In the article you posted it even says "In all likelihood, it will be a long time before anyone can play around with Crytek’s ray tracing and unfortunately the tech demo shown in the video cannot be downloaded so we can try it on our own hardware, but we’re all waiting with bated breath."

Techspot has actual frame rate data for hardware-accelerated ray tracing through Nvidia RTX, like the article we are commenting on. I can play all the games Techspot is benchmarking and verify the data on my own system.

Please post actual data and/or articles on games using software-based ray tracing (such as this CryTek engine) showing that it doesn't cause a 75-90% drop in performance. You said "the software implementations we've seen have in fact been far more efficient"; those implementations are what I am asking about.
 

That's not how this works, buddy. You made the claim that software-based ray tracing would drop frames by 75-90%. You didn't stipulate "in games" or anything. I disproved that and now, unsurprisingly, you are trying to move the goalposts.

Second, your claim of a 75-90% drop in performance is based on Nvidia marketing material comparing unaccelerated Pascal cards against accelerated Turing cards. If you are going to cry about a demo disproving your claim, then I'm of course going to point out that the comparison you cited is not only unfair, it was made specifically to sell the new cards, if the big Nvidia logo on it didn't give that away already.

I find it hard to believe you are being objective when you have a problem with a demo yet are completely fine citing Nvidia marketing material as legitimate "data". In addition, just like the very thing you criticized the linked CryTek demo for, Nvidia still has not released the Star Wars demo.

https://www.nvidia.com/coolstuff/demos

At best you are being hypocritical. The basis of your argument has the very flaws you are criticizing the CryTek demo for.

In addition, and as I pointed out earlier, we don't need to compare ray tracing only to ray tracing. There are rasterization-based techniques that can achieve the same visual effect with a fraction of the performance hit. I'd recommend you read over the prior comments you skimmed. There's no point in excluding rasterization simply because Nvidia says real-time ray tracing is the next big thing.
 

Actually, this is what you said: "The software implementations we've seen have in fact been far more efficient." You also sent me a link to a CryTek engine ray tracing algorithm from 2009, 10 years ago now. Yet there are no games that I've seen (or played) that feature such "far more efficient" software-based implementations. Why is this? Something doesn't add up in your argument if CryTek already had the algorithm ready in 2009.

I posted the Nvidia Star Wars demo because that was the ONLY data I could find on software-based real-time ray tracing after searching Google for over an hour. You posted that CryTek demo most likely because that is all you could find as well, and then you prance it around as if it were actual proof, yet it doesn't even have an FPS counter. It is even weaker data than the Nvidia demo I posted. For both our sakes, let's just discard both tech demos as they aren't valid evidence, especially since Nvidia refuses to release its tech demo and who knows when CryTek will actually release this ray tracing API.

In April, though, Nvidia will open ray tracing up to all GTX cards from the GTX 1060 onward through DirectX 12's DXR. We only have to wait a little while and then real data will come out, and we will see how a software-based algorithm performs on non-RTX hardware.

https://www.techspot.com/news/79256-nvidia-adding-ray-tracing-support-gtx-cards.html

We only have to wait until April for Techspot to run the numbers; I guarantee they will be one of the first to publish an article on DirectX 12 DXR running on GTX hardware. Or, if you have a GTX card like I do, we can do our own benchmarking and post our own results.
 

You didn't follow that link to the 2009 paper did you? It was for soft shadows, not ray traced shadows.

Here's my comment


"Here's a link to a 2009 tech document showing that CryEngine added Soft Shadows way back then.

http://www.klayge.org/material/4_1/SSR/S2011_SecretsCryENGINE3Tech_0.pdf"

No idea where you got the idea that this was ray tracing, but either you never actually followed the link or you failed to grasp the subject matter. Soft shadows emulate the way a shadow appears less sharp the further it is from its caster.



A marketing demo is weaker than a tech demo by a third party? No, there is no doubt in my mind that the Nvidia demo is flawed, for what should be obvious reasons:

1. The Pascal cards had zero RT acceleration. They weren't showing off how software-based RT works, because RTX is hardware-based ray tracing acceleration. It provides exactly zero data on how Pascal or other Nvidia GPUs would perform with a software-based solution. If you were searching for a software-based solution, then you completely missed the mark, as RTX is hardware-based. This should be clear, as it requires specific video cards with specific hardware designed for it. Clear-cut case.

2. Nvidia created the absolute worst scenario to trump up and sell their latest cards plain and simple.


DX12 DXR is an API, not a solution. Developers still need to implement their own solution that uses DXR for their game. It's essentially "Hey, we let you guys use ray tracing, but we aren't putting any work into it like we did with our RTX cards!" Nvidia spent a lot of time creating the RTX API, and developers spent months implementing solutions on their end. A fair comparison would be if Nvidia provided a CUDA-accelerated GTX API for ray tracing, in addition to allowing developers to create solutions to best take advantage of that API. After all, those are exactly the benefits that RTX has gotten. There is no "real data" to be had comparing an RTX card that runs on an API and game optimized for it against a GTX card that runs on an API that provides it zero acceleration and zero game-level optimizations. The clear difference in optimization only serves as marketing material for Nvidia.
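
To make the distinction concrete: all DXR tells an application is whether the driver exposes ray tracing at all, not whether dedicated hardware is doing the work. Here is a rough C++ sketch of such a capability check (an illustrative assumption of a minimal Windows 10 / Direct3D 12 setup, not code from any of these games):

#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main() {
    // Create a D3D12 device on the default adapter; no swap chain is needed
    // just to query features.
    Microsoft::WRL::ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12-capable adapter found.\n");
        return 1;
    }

    // Ask the runtime whether DXR is exposed. This reports a tier, but it
    // says nothing about whether rays are traced by dedicated RT cores or
    // by a compute-shader fallback running on the regular shader cores.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts, sizeof(opts))) &&
        opts.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0) {
        std::printf("DXR exposed (tier %d).\n",
                    static_cast<int>(opts.RaytracingTier));
    } else {
        std::printf("DXR not exposed on this device/driver.\n");
    }
    return 0;
}

Everything beyond that check (building acceleration structures, writing the ray generation/hit/miss shaders, denoising) is per-game work on the developer's side, which is exactly the optimization effort the RTX titles got and a bare DXR driver for GTX cards does not provide.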
 

Anyways, maybe hardware-accelerated ray tracing is the future. Or maybe Nvidia is wrong, it isn't, and rasterization continues onward with better software-based algorithms running on the compute cores of AMD and Nvidia GPUs.

Regardless, all I know is that I am enjoying the ray-traced reflections, global illumination, and shadow effects that RTX makes possible right now, in the games I am currently playing. That is the only point I came here to make.
 
The giant blur is actually more accurate. Look at the shadows around you when you are outdoors. Tree shadows cast from far above the ground, like those of tall trees, are blurry, while close ones are sharper. Sharper does not always mean more life-like.
Have you been outside in the park lately? Leaf shadows look exactly like the OFF version; back to reality.
 
Why do I get a feeling people just can't accept that hardware acceleration of ray tracing is a good thing?

No, we think it's awesome. What we *don't* think is awesome is paying $500 extra for half the framerate. I own a 1440p 144Hz monitor and want to buy a 3440x1440 120Hz ultrawide. RTX is useless on those monitors and likely will be for at least two more generations (at least with these implementations).

I, like many, didn't see 4K as worth the frame loss, and I consider 60 FPS an absolute minimum for anything other than turn-based games. RTX is in the same boat. But it's even worse, because many of us see those RT cores (and especially the Tensor cores) as wasted die space that could have gone toward cores that speed up everything, instead of only benefiting a handful of games with marginal visual improvements.

I own a 1080 Ti and am sitting this one out. I suspect the new player and new hardware arriving in 2020 will change the playing field and likely drive prices back down to sanity.
 

I mean, you also spent $1,200-1,300 on a card to play those three games with, so I'd want to argue for its superiority too! I'd be quite concerned about the CryTek demo showing similar visuals at similar framerates on a card costing a third as much.

Honestly, the worst part isn't the RT cores; those will at least have some (possibly significant) use in the future, though by then this card will be a generation or two old. No, what would concern me are those Tensor cores. So far DLSS has looked no better, and in many cases *worse*, than in-game resolution scaling. And so far Nvidia has no other applications for those cores. Nothing. How much of the die of that GPU is being consumed by math ASICs that aren't even relevant to gaming?

Turing is a repurposed enterprise core being marketed to gamers.

If Intel or AMD pushes hard into Nvidia's performance lead in 2020, expect to see Nvidia respond with a GPU die with far fewer Tensor cores and possibly far fewer RT cores.

Just look at the cards they are marketing toward the majority of their gamer base (90% of their gamer customers): the 1660 Ti and below. Notice what those dies are missing? The 1660 Ti is a 284mm2 die and the 2060 is a 445mm2 die! My 1080 Ti is a 471mm2 die. So without any of the 20-series architectural improvements, and just using the 12nm die shrink, Nvidia could have given the 2060 1080 Ti levels of performance at the same $350 price point!! With those architecture improvements it would have been another 10% at least. Instead they filled all that die space with Tensor cores and RT cores. Is that because they strongly believe in the tech, or because that happens to be what they had already developed for AI and enterprise workloads? Who knows; I strongly suspect the latter. But what I do know is that right now you can have high resolutions and frame rates, or RTX, but not both. And given that I'll choose resolution and framerate over RTX, from what I've seen the 20-series is a bad buy for me.

Maybe next gen.
 

Your analysis is pretty spot on. It really is my belief that because AMD fell so far behind, Nvidia decided to take a chance and gamble on combining workstation/data-center tech like Tensor cores with "engineering dream" tech like RT cores. If you are Jen-Hsun Huang, your engineering team finally gets to play with some really cool ideas, such as the hardware-accelerated ray tracing everyone has been dreaming of since the first 3D graphics were drawn on a GPU, instead of just fighting for survival. At worst, if the gamble doesn't pay off, people don't embrace hardware-accelerated ray tracing, and AMD does catch up, Nvidia can fall back on a souped-up GTX 1080 Ti variant using Turing's instructions-per-clock (IPC) improvements and the 12nm die shrink, as you describe. But with AMD so far behind, now was the time for Nvidia to try some "pie in the sky" ideas.
 
To the guy who wrote this article: are you sure you didn't mix up the pictures? Because RTX OFF shows no difference, or in some cases looks much more natural, like this one:
http://puu.sh/D4Kse/86456a423b.jpg
With it OFF, the leaf shadows look like leaf shadows, but with ULTRA it's just a big blur.

The giant blur is actually more accurate. Look at the shadows around you when you are outdoors. Tree shadows cast from far above the ground, like those of tall trees, are blurry, while close ones are sharper. Sharper does not always mean more life-like.

True. Raytracing means that light will bounce off objects, providing more ambient lighting from other areas that can soften a shadow. It still looks like crap here, though.
 

I'd just like to point out that what you are describing here, light bouncing off objects, is called indirect lighting. It isn't the effect responsible for softer shadows, and Tomb Raider doesn't even feature indirect lighting. For example, look at the tree branches on the ground. Notice how the side facing the ground is very dark? If ray tracing were being used for indirect lighting here, the side of the branch facing the ground would be receiving light bounced off the ground, just like in real life.
 
Have you been outside in the park lately? Leaf shadows look exactly like the OFF version; back to reality.
Actually, it depends a lot on atmospheric conditions. If it's a cloudy, overcast day, shadows will tend to be blurry and indistinct, since sunlight is getting bounced around by the clouds, creating a more diffuse light source. On a clear, sunny day, light from the sun hits objects more directly though, resulting in shadows being much sharper. So, either way could be considered "realistic", depending on what the weather is like. Rainforests also tend to have a lot of water vapor in the air though, similarly reducing the sharpness of shadows, along with a dense canopy that blocks most direct sunlight. Try doing an image search for "rainforest" and note how few images show distinct shadows on the forest floor.
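
For a rough sense of scale on the "blurrier the further from the caster" point, here is a small back-of-the-envelope sketch (plain C++, with my own illustrative heights; it assumes direct sunlight with an angular diameter of about half a degree, and overcast or hazy skies make the effective light source far larger and the penumbra far wider still):

#include <cmath>
#include <cstdio>

int main() {
    // The sun subtends roughly 0.53 degrees in a clear sky. A shadow edge
    // blurs over a penumbra of width ~ caster distance * tan(angular size).
    const double kPi = 3.14159265358979323846;
    const double sunAngleRad = 0.53 * kPi / 180.0;

    const double casterHeightsMeters[] = {0.5, 2.0, 10.0, 30.0};
    for (double h : casterHeightsMeters) {
        const double penumbraMeters = h * std::tan(sunAngleRad);
        std::printf("caster %5.1f m above the ground -> shadow edge blurs over ~%.1f cm\n",
                    h, penumbraMeters * 100.0);
    }
    return 0;
}

So a leaf half a meter off the ground keeps a sharp outline (well under a centimeter of blur), while branches 10-30 m up smear over roughly 10-30 cm, which is broadly the behavior the RTX Ultra shots are reproducing.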

Id be quite concerned with the crytek demo showing similar visuals at similar framerates on a card costing 1/3rd as much.
It's a bit difficult to say from a tech demo that's been purpose-built to show off their engine. It's also likely that their game engine will utilize dedicated raytracing hardware when available to boost performance further. Perhaps raytraced effects can be done nearly as well without dedicated raytracing hardware, but I suspect that having that hardware should provide at least some performance benefit for those effects.

If intel or AMD pushes hard into nvidias performance lead in 2020 expect to see nvidia respond with a GPU die with far less Tensor cores and possibly far less RT cores.
Any cards Nvidia releases in 2020 would likely be built on the 7nm node, so they wouldn't need to reduce the number of cores to reduce the amount of space they require. I suspect they will retain at least the same number of RT cores, and will probably increase their number, if anything. As for the Tensor cores, considering DLSS is more or less useless in its current implementation, unless they find some way to better utilize them, I could see them potentially getting rid of those for their consumer cards, and just performing upscaling on the regular cores instead.

Notice what those dies are missing? The 1660ti is a 284mm2 die and the 2060 is a 445mm2 die! My 1080ti is a 471mm2 die. So without any of the 20 series architectural improvements and just using the 12nm die shrink invidia could have given the 2060 1080ti levels of performance for the same 350$ price point!!
In addition to the Tensor and RT cores, it should be noted that the 2060 also has 25% more traditional graphics cores compared to the 1660 Ti. And the 2060 is actually a cut-down 2070, which has 50% more cores and 33% more ROPs than a 1660 Ti, which likely account for most of that size difference. If a 2070 didn't have RT or Tensor cores, it would still likely be nearly 50% larger than a 1660 Ti, or more than 400mm2. So, I would estimate that the 2070's Tensor and RT cores combined don't take up much more than 10% of the chip. Adding the same number of RT and Tensor cores to the 1660 Ti probably wouldn't have increased its die size by much more than 15%. In terms of cost added to the card as a whole, they could have probably given the 1660 Ti the same level of raytracing performance as a 2070 without increasing the card's cost by more than $20. And with a process shrink, the additional cost for that level of raytracing performance could be even smaller.
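
Spelling that estimate out with the die sizes quoted above (a back-of-the-envelope sketch only; the scaling assumption is mine):

#include <cstdio>

int main() {
    // Die sizes quoted above: TU116 (1660 Ti) has no RT or Tensor cores,
    // TU106 (2070, cut down for the 2060) has both.
    const double tu116_mm2 = 284.0;
    const double tu106_mm2 = 445.0;

    // If everything on TU116 scaled with the ~50% extra shader resources, a
    // hypothetical "2070 without RTX hardware" would be ~426 mm^2; the more
    // conservative figure used above is a bit over 400 mm^2.
    const double nonRtxHigh_mm2 = tu116_mm2 * 1.5;  // ~426
    const double nonRtxLow_mm2  = 400.0;

    const double rtTensorLow_mm2  = tu106_mm2 - nonRtxHigh_mm2;  // ~19
    const double rtTensorHigh_mm2 = tu106_mm2 - nonRtxLow_mm2;   // ~45
    std::printf("RT + Tensor blocks: roughly %.0f-%.0f mm^2, i.e. about %.0f-%.0f%% of the die\n",
                rtTensorLow_mm2, rtTensorHigh_mm2,
                100.0 * rtTensorLow_mm2 / tu106_mm2,
                100.0 * rtTensorHigh_mm2 / tu106_mm2);
    return 0;
}

Either way it lands in the single-digit to roughly 10% range, which is why the RTX hardware by itself shouldn't have added more than a small amount to the cost of the card.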

I agree that the performance gains at a given price level for Nvidia's new cards are rather mediocre given the length of time since the 10-series launched, but I think that's only partially due to the specialized cores that were added, and more down to them taking advantage of the current lack of competition at the high end. And even the 16-series cards are a little underwhelming, despite lacking those new cores, since the existing Vega cards are too expensive to manufacture for more mid-range price points, and it will likely be some months before Navi is out.
 
The biggest draw of DLSS is not the graphics but the performance. What framerate improvements do you get by enabling it? Obviously it's not as refined as it could be, as we've seen from the ray-traced shadow quality, but again the big benefit of DLSS is that you should be able to get better frame rates while preserving some graphical enhancements like ray tracing.
 
Good points. By all accounts these are early days for RTX, and developers are just starting to take advantage of its features. I have an RTX 2080 and have been most impressed by Metro Exodus so far. I was able to buy in at launch for $750, and considering that was close to the going price of a 1080 Ti, I am okay with the cost, given the performance is equivalent plus the ray tracing options and the (potential) promise of DLSS enhancements. From a price and performance perspective I get why anyone who currently owns a 1080 Ti would hold off, but to each his own.

As you mentioned, you can currently go for performance or quality, and with Exodus I love the ray tracing, even though it's not a smooth 4K or 60 FPS experience. DLSS was the feature promised to mesh performance and quality, yet so far most experiences just come off with a softer image. Both Tomb Raider and Metro Exodus only recently released either the game itself or its ray tracing support, so again I believe there are improvements to be made. It's just too bad that Tomb Raider's initial RTX offering is not that great at all. I find it strange that even though the article mentions the DLSS support, it doesn't mention what DLSS does to the framerate. I get that the image is a little softer, but you should see a performance gain from using it over leaving it off. Ideally you wouldn't get a softer image from its use, but perhaps that can be addressed in future updates.
 

The big problem with DLSS (aside from the drop in image quality) is it doesn't work well when you are already getting 60 FPS.

https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-your-questions-answered/

" DLSS is designed to boost frame rates at high GPU workloads (I.e. when your framerate is low and your GPU is working to its full capacity without bottlenecks or other limitations). If your game is already running at high frame rates, your GPU’s frame rendering time may be shorter than the DLSS execution time. In this case, DLSS is not available because it would not improve your framerate. However, if your game is heavily utilizing the GPU (e.g. FPS is below ~60), DLSS provides an optimal performance boost. You can crank up your settings to maximize your gains. (Note: 60 FPS is an approximation -- the exact number varies by game and what graphics settings are enabled)"

Just a heads up, it's impossible for DLSS to fix the softer image issue. The image appears softer due to the way DLSS works.

https://www.tomshardware.com/reviews/dlss-upscaling-nvidia-rtx,5870-5.html

"One observation is clear, though: DLSS starts by rendering at a lower resolution and then upscaling by 150% to reach its target output"

During that upscaling process, you are taking a lower-resolution frame and adding a bunch of pixels to it. You obviously cannot add detail that isn't there, so you get that softer look as a result. The only way Nvidia could fix this is by initially rendering at the full resolution so that all the detail is in place before performing the AA. Nvidia would have to add more Tensor cores to handle the additional load, and at that point they are investing way too much into DLSS when other AA methods produce results just as good. That die space would be better utilized by CUDA cores.
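
To put numbers on that, here is an illustrative sketch using the 150% figure from the quote above (the 4K target is my own example; the exact internal resolution varies per game and per output resolution):

#include <cstdio>

int main() {
    // Per the observation quoted above, DLSS renders internally at a lower
    // resolution and then upscales by 150% to reach the target output.
    const double upscale = 1.5;
    const int outW = 3840, outH = 2160;                 // example 4K target
    const int inW = static_cast<int>(outW / upscale);   // 2560
    const int inH = static_cast<int>(outH / upscale);   // 1440

    const double shadedShare =
        static_cast<double>(inW) * inH / (static_cast<double>(outW) * outH);
    std::printf("Internal render %dx%d -> only %.0f%% of the output pixels are actually shaded;\n"
                "the remaining ~%.0f%% are inferred by the upscaler.\n",
                inW, inH, shadedShare * 100.0, (1.0 - shadedShare) * 100.0);
    return 0;
}

That missing ~56% of the pixel data is where both the performance gain and the softer look come from; no amount of tuning the network changes the fact that the detail was never rendered in the first place.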
 
A comment from the future!! Raytracing Ultra is now great in Shadow of the Tomb Raider on the more powerful RTX 3070 - and I got the entire Definitive Edition game for FREE on some Epic Games giveaway!
 