Nvidia Turing is here: Next-gen architecture is the first real-time ray tracing GPU

LemmingOverlrd

Forward-looking: Nvidia CEO Jen-Hsun Huang has demoed Turing and brought ray-traced graphics processing into real time. Turing is Nvidia's new GPU architecture for AI, Deep Learning and Ray Tracing.

SIGGRAPH has long lost its exclusively "cinematic arts" roots and evolved into a mix of industry, prosumer and consumer technologies. While last year saw AMD make all the noise, this year... well, this year it sounds like the trophy goes to Nvidia, if CEO Jen-Hsun Huang is to be believed.

Taking to the stage in traditional garb (i.e. leather jacket), Jen-Hsun casually introduced the world to Nvidia's next generation of Quadro graphics cards, the Quadro RTX, powered by Nvidia's next-gen GPU architecture, Turing. It took just a few minutes to build up the narrative, the one about lighting being key to photorealism, from J. Turner Whitted's early ray tracing experiments on a VAX-11/780 to the holy grail Nvidia is calling real-time Global Illumination: the ability to replicate light effects in an environment accurately enough to look photorealistic.
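For a sense of the scale of the problem, here is a minimal, purely illustrative sketch of the Whitted-style recursion mentioned above (a toy example of our own, not Nvidia's implementation): every pixel fires a ray, every reflective hit fires another, and the work multiplies with resolution and bounce depth.

```python
# Toy Whitted-style ray tracer (illustrative only, not Nvidia's code).
# Two spheres, one level of reflection, and a counter to show how quickly
# the ray count grows with resolution and recursion depth.
import math

SPHERES = [  # (center, radius, reflectivity, base colour)
    ((0.0, 0.0, -5.0), 1.0, 0.5, (1.0, 0.2, 0.2)),
    ((2.0, 0.0, -6.0), 1.0, 0.8, (0.2, 0.2, 1.0)),
]
rays_cast = 0

def hit_sphere(origin, direction, center, radius):
    """Return the distance to the nearest intersection, or None on a miss."""
    oc = tuple(o - ctr for o, ctr in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def trace(origin, direction, depth=0):
    """Whitted recursion: shade the nearest hit, then follow the reflection."""
    global rays_cast
    if depth > 2:                       # recursion cut-off
        return (0.0, 0.0, 0.0)
    rays_cast += 1
    nearest = None
    for center, radius, refl, colour in SPHERES:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (nearest is None or t < nearest[0]):
            nearest = (t, center, radius, refl, colour)
    if nearest is None:
        return (0.2, 0.2, 0.3)          # background
    t, center, radius, refl, colour = nearest
    point = tuple(o + t * d for o, d in zip(origin, direction))
    normal = tuple((p - ctr) / radius for p, ctr in zip(point, center))
    d_dot_n = sum(d * n for d, n in zip(direction, normal))
    reflected = tuple(d - 2.0 * d_dot_n * n for d, n in zip(direction, normal))
    bounce = trace(point, reflected, depth + 1)      # secondary ray
    return tuple((1 - refl) * c + refl * b for c, b in zip(colour, bounce))

# Even a thumbnail-sized 160x90 frame fires well over ten thousand rays.
for y in range(90):
    for x in range(160):
        d = ((x - 80) / 90.0, (45 - y) / 90.0, -1.0)
        norm = math.sqrt(sum(v * v for v in d))
        trace((0.0, 0.0, 0.0), tuple(v / norm for v in d))
print(f"rays cast for one 160x90 frame: {rays_cast}")
```

Scale that count up to many rays per pixel, 4K resolutions and 30-60 frames per second, and it is clear why this kind of rendering has traditionally been batch work.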

This is something which, until today, was considered server farm material for processing over hours or even days. Then the Nvidia CEO whipped out the goods.

Nvidia formally introduced three new Quadro cards: the Quadro RTX 5000, RTX 6000 and RTX 8000.

Jen-Hsun hailed it as the biggest achievement since the introduction of CUDA, ten years ago. And possibly with good reason. It is the first commercially available real-time ray tracing graphics processor.

At the core of these cards, Jen-Hsun explained, is the Turing GPU, consisting of:

  • The Streaming Multiprocessor (SM) core which provides compute and shading power, all in one;
  • The Real Time Raytracing (RTRT) core, which provides, well... real-time ray tracing;
  • The Tensor Core for Deep Learning and AI;
  • Video subsystem, which provides HEVC 8K Encode;
  • Memory subsystem, with a 384-bit bus and GDDR6 running at 14Gb/sec;
  • NVLink subsystem which now shares framebuffer across all cards at 100GB/sec;
  • Display subsystem which powers 4 displays + VirtualLink.

So what does Turing bring to the table? Well, if you know Volta, some parts are familiar, while one major architectural change stands out: what Nvidia is calling the Real-time Ray Tracing (RTRT) core.

Very little is known about it thus far -- except maybe a metric Nvidia is trying to establish, 'Gigarays' -- but we assume a lot more will be known over the coming days. Nvidia did discuss some techniques it is using in ray tracing, and how it is leveraging Deep Learning to teach the GPU to do better lighting effects. Jen-Hsun claims the real-time ray tracing performance on Turing is 6x that of Pascal (although we'd hardly use Pascal for comparison).
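There is no official detail behind the 'Gigarays' figure yet, but a back-of-the-envelope sketch shows what any such number means in practice: divide the ray throughput by resolution and frame rate to get a per-pixel ray budget. The 10 Gigarays/sec used below is a purely hypothetical input for illustration, not a number taken from the keynote.

```python
# Back-of-the-envelope ray budget: rays per pixel per frame for an assumed
# throughput. The 10 Gigarays/sec figure is purely hypothetical, used only
# to show how the metric translates to a per-pixel budget.
def rays_per_pixel(gigarays_per_sec, width, height, fps):
    rays_per_frame = gigarays_per_sec * 1e9 / fps
    return rays_per_frame / (width * height)

for width, height in [(1920, 1080), (2560, 1440), (3840, 2160)]:
    budget = rays_per_pixel(10, width, height, fps=60)
    print(f"{width}x{height} @ 60 fps: ~{budget:.0f} rays per pixel per frame")
```

Even under that generous assumption, the per-pixel budget is tiny compared with offline rendering, which is presumably where the deep-learning-assisted lighting mentioned above comes in.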

As for actual CUDA performance, we won't risk predicting much of what is going to happen at the Streaming Multiprocessor core, as it is a re-engineered pipeline of FP and INT units. Apart from this, Turing also introduces a new NVLink, which removes the looming limitation on framebuffer sharing.

Yes. With the new NVLink, framebuffers are now cumulative instead of mirrored. This means two RTX 8000 cards are effectively sharing 96GB of GDDR6.
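As a quick sanity check on that claim, here is the arithmetic of mirrored versus pooled framebuffers; the 48GB-per-card figure simply follows from the statement that two RTX 8000s share 96GB.

```python
# Mirrored vs pooled framebuffer capacity across an NVLink pair.
# The 48GB-per-card figure follows from the article's "two RTX 8000s share 96GB".
def usable_memory_gb(per_card_gb, num_cards, pooled):
    # Mirrored (classic SLI behaviour): every card holds a copy, capacity stays flat.
    # Pooled (new NVLink): capacities add up across the link.
    return per_card_gb * num_cards if pooled else per_card_gb

print("mirrored pair:", usable_memory_gb(48, 2, pooled=False), "GB")  # 48 GB
print("pooled pair:  ", usable_memory_gb(48, 2, pooled=True), "GB")   # 96 GB
```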

The Tensor cores are rated at 500 trillion operations per second; however, we've noticed that Nvidia seems to have 'dumbed down' the presentation from Volta. Jen-Hsun made no reference to single-precision (FP32) or double-precision (FP64) operations, focusing only on FP16 (125 TFLOPS), INT8 (250 TOPS) and INT4 (500 TOPS). This may prove controversial, and Nvidia will take flak for the design, we're sure.
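Those three figures follow the familiar pattern of throughput doubling each time the operand precision halves; the small sketch below simply reproduces the quoted numbers from the FP16 baseline.

```python
# Tensor throughput scaling: each halving of operand precision doubles ops/sec,
# anchored to the 125 TFLOPS FP16 figure quoted on stage.
fp16_tflops = 125
rates = {
    "FP16": fp16_tflops,        # 125 TFLOPS
    "INT8": fp16_tflops * 2,    # 250 TOPS
    "INT4": fp16_tflops * 4,    # 500 TOPS
}
for precision, trillions_per_sec in rates.items():
    print(f"{precision}: {trillions_per_sec} trillion operations per second")
```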

Out of curiosity, and as a matter of comparison, Turing has a massive die size of 741mm² (Volta was 815mm²) and packs 18.6 billion transistors under the hood (Volta had 21.1 billion). These remain massive GPUs, and we cannot help but wonder what happens with defective dies.
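Taking the quoted die sizes and transistor counts at face value, a quick calculation of our own suggests the two chips have almost identical transistor density, so the smaller Turing die mostly reflects a smaller transistor budget rather than a denser process.

```python
# Transistor density from the quoted die sizes and transistor counts.
chips = {"Turing": (18.6e9, 741), "Volta": (21.1e9, 815)}  # (transistors, mm^2)
for name, (transistors, area_mm2) in chips.items():
    density_millions = transistors / area_mm2 / 1e6
    print(f"{name}: ~{density_millions:.1f} million transistors per mm^2")
# Turing ~25.1 MTr/mm^2 vs Volta ~25.9 MTr/mm^2: essentially the same density.
```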

Availability for Turing will be limited during Q4 2018, but is expected to be generally available going into 2019.


 
Damn! $10K? Although anyone with a need for this card will have the $10K. One thing is for sure, I don't in either case.
 
Ultra realistic realtime raytraced VR ARCADE games. Would love to pay to experience it.

I'm not entirely sure we'll see many games utilize real-time ray tracing this GPU generation. Given that Nvidia is going the proprietary approach again, and with dedicated hardware for it (at least on these non-consumer cards), it isn't really an appealing option to me as a consumer, as it means game devs will be forced to code for only one set of graphics cards. Just to be clear, Nvidia's real-time ray tracing is a combination of rasterization and ray tracing; it makes quality trade-offs to speed things up (a rough sketch of that kind of hybrid pipeline follows below).

Only time will tell if the die space trade-off is worth it, especially since AMD are using Async compute for their realtime ray tracing which doesn't require any additional hardware.
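To make the hybrid approach described above concrete, here is a rough, purely illustrative sketch of such a frame loop: rasterize primary visibility, spend a limited ray budget only on selected effects, then denoise and composite. The function names are placeholders, not Nvidia's or any engine's real API.

```python
# Rough sketch of a hybrid frame: rasterize first, ray trace only selected
# effects within a small budget, then denoise and composite. The function
# names are placeholders, not Nvidia's or any engine's real API.

def rasterize_gbuffer(scene):
    # Classic rasterization pass: depth, normals and albedo for every pixel.
    return {"depth": [], "normals": [], "albedo": []}

def trace_effect(scene, gbuffer, effect, rays_per_pixel):
    # Spend a small, fixed ray budget per pixel on one effect
    # (reflections, shadows, ambient occlusion), producing a noisy buffer.
    return {"effect": effect, "rays_per_pixel": rays_per_pixel, "noisy": True}

def denoise(buffer):
    # A filtered or learned denoiser cleans up the sparse ray results.
    return {**buffer, "noisy": False}

def composite(gbuffer, layers):
    # Blend the ray-traced layers over the rasterized base image.
    return {"base": gbuffer, "layers": layers}

def render_frame(scene, ray_budget_per_pixel=2):
    gbuffer = rasterize_gbuffer(scene)
    layers = [denoise(trace_effect(scene, gbuffer, effect, ray_budget_per_pixel))
              for effect in ("reflections", "shadows", "ambient_occlusion")]
    return composite(gbuffer, layers)

frame = render_frame(scene={"objects": []})
print(frame["layers"][0])   # e.g. {'effect': 'reflections', 'rays_per_pixel': 2, 'noisy': False}
```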
 
Only time will tell if the die space trade-off is worth it, especially since AMD are using Async compute for their realtime ray tracing which doesn't require any additional hardware.

I remember when PhysX was released as a standalone card. I'm surprised this hasn't been done for ray tracing.
 
I remember when PhysX was released as a standalone card. I'm surprised this hasn't been done for ray tracing.

PhysX really isn't a good example given that Nvidia still can't sort out its performance issues, nor does it really seem to care. If Nvidia's Ray Tracing turns out to be like any of Nvidia's GameWorks APIs, it will be as much of an issue as it is every time GameWorks is included in a game. Rarely does the performance hit from those justify the cost, especially considering other games have demonstrated the same effects with a smaller performance impact.
 
Only time will tell if the die space trade-off is worth it, especially since AMD are using Async compute for their realtime ray tracing which doesn't require any additional hardware.

I remember when PhysX was released as a standalone card. I'm surprised this hasn't been done for ray tracing.

There's been dedicated ray tracing hardware available for decades, but it's always been offline processing. Real-time ray tracing was going to be "the next big thing" 10+ years ago, but the graphics card makers diverged a bit and started concentrating on traditional rasterization GPUs for gaming - mostly because it was far easier to accomplish, I'm sure. But it's taken this long to come back around and get to this point now. Real-time ray tracing could be as disruptive for CGI and film as things like the VideoToaster were for video editing and production, back in the day.

And just FYI, PhysX started out as a standalone card developed by a company called Ageia. Then Ageia was bought by Nvidia, Physx hardware was taken off the market and incorporated directly into their GPUs, forcing you to buy Nvidia if you wanted to use the power of Physx. So it's not surprising to me that they wouldn't do a standalone ray tracing card, it's the exact opposite direction of their normal marketing behavior. Why sell the milk separately when you can make somebody buy the whole cow if they want milk?
 
Nvidia's proprietary ray tracing hardware acceleration will no doubt be included in its next-gen consumer cards this year. Interestingly, it has to be widely supported in software or it's a big ol' waste of die space. It could have serious implications and segment the GPU market dramatically. It's a bold move, but Nvidia must feel they are in a position to take an even bigger market share and hold it, tempting developers to its standard and potentially isolating AMD GPUs from the mainstream. If you buy AMD, you forgo the tech.

Unless of course AMD have a resurgence with something better, manage to make market share gains, or simply cave and pay Nvidia to license the technology. The latter would be the last thing AMD will want; they will fight to the death over that.

Then there is Intel, who are building their own GPU at the moment that we could see in the next couple of years. I am curious to see how this all pans out. It's a fairly big gamble for Nvidia one would think. If it fails to take off quickly then they hand major die size and cost advantages back to AMD/Intel GPUs.
 
There's been dedicated ray tracing hardware available for decades, but it's always been offline processing. Real-time ray tracing was going to be "the next big thing" 10+ years ago, but the graphics card makers diverged a bit and started concentrating on traditional rasterization GPUs for gaming - mostly because it was far easier to accomplish, I'm sure. But it's taken this long to come back around and get to this point now. Real-time ray tracing could be as disruptive for CGI and film as things like the VideoToaster were for video editing and production, back in the day.

And just FYI, PhysX started out as a standalone card developed by a company called Ageia. Then Ageia was bought by Nvidia, Physx hardware was taken off the market and incorporated directly into their GPUs, forcing you to buy Nvidia if you wanted to use the power of Physx. So it's not surprising to me that they wouldn't do a standalone ray tracing card, it's the exact opposite direction of their normal marketing behavior. Why sell the milk separately when you can make somebody buy the whole cow if they want milk?

Well, this isn't technically real-time ray tracing; it's a rasterization hybrid. In addition, Nvidia didn't incorporate PhysX into the hardware, it just made it so it can run on CUDA cores and crippled performance on any non-Nvidia card by forcing PhysX to run on an extremely un-optimized CPU code path. I would not be surprised if they pull the same thing for any GameWorks title that decides to use Nvidia ray tracing as well. Nvidia will roll up to development studios and offer "help" with optimizing their code, Nvidia engineers will implement an Nvidia black box in the game's code that no one but Nvidia can touch, and boom, you have poor performance on previous-gen Nvidia cards and even worse performance on AMD cards. Another bonus is that there is nothing AMD can do to optimize for that black box either, other than what they did for Crysis 2, where they simply forced a lower tessellation level.

Don't expect that Nvidia will advance computer graphics out of their own good will.
 
Only time will tell if the die space trade-off is worth it, especially since AMD are using Async compute for their realtime ray tracing which doesn't require any additional hardware.

I remember when PhysX was released as a standalone card. I'm surprised this hasn't been done for ray tracing.

There's been dedicated ray tracing hardware available for decades, but it's always been offline processing. Real-time ray tracing was going to be "the next big thing" 10+ years ago, but the graphics card makers diverged a bit and started concentrating on traditional rasterization GPUs for gaming - mostly because it was far easier to accomplish, I'm sure. But it's taken this long to come back around and get to this point now. Real-time ray tracing could be as disruptive for CGI and film as things like the VideoToaster were for video editing and production, back in the day.

And just FYI, PhysX started out as a standalone card developed by a company called Ageia. Then Ageia was bought by Nvidia, Physx hardware was taken off the market and incorporated directly into their GPUs, forcing you to buy Nvidia if you wanted to use the power of Physx. So it's not surprising to me that they wouldn't do a standalone ray tracing card, it's the exact opposite direction of their normal marketing behavior. Why sell the milk separately when you can make somebody buy the whole cow if they want milk?
Don't you mean Nvidia wants the milk and we are the cash cows? lol
 
In addition, Nvidia didn't incorporate PhysX into the hardware, it just made it so it can run on CUDA cores and crippled performance on any non-Nvidia card by forcing PhysX to run on an extremely un-optimized CPU code path.

Well, I was over-simplifying, but the basic gist is there. PhysX hardware was deliberately phased out as the proprietary code was integrated into the Nvidia CUDA driver set. It only runs optimally on CUDA cores, so while it's not physically incorporated into the hardware, it's Nvidia hardware dependent. Semantics, but achieving the same goal - you need an Nvidia card to run PhysX well. It's too bad support for the old hardware fell off; I still have one of the original Ageia cards.

I would not be surprised if they pull the same thing for any GameWorks title that decides to use Nvidia ray tracing as well. Nvidia will roll up to development studios and offer "help" with optimizing their code, Nvidia engineers will implement an Nvidia black box in the game's code that no one but Nvidia can touch, and boom, you have poor performance on previous-gen Nvidia cards and even worse performance on AMD cards. Another bonus is that there is nothing AMD can do to optimize for that black box either, other than what they did for Crysis 2, where they simply forced a lower tessellation level.

Don't expect that Nvidia will advance computer graphics out of their own good will.

Exactly right. Nvidia are masters of forcing the competitive edge by cuddling up to game developers to whisper sweet nothings in their ears and promote incorporation of Nvidia exclusive features that will make other hardware appear crippled in comparison.
 
Real time raytracing is a fever dream. Even these cards can't push 30-60fps at the quality every modern rendering engine outputs, even using GPUs (Redshift, iRay) along with the CPUs (V-ray RT, for example). This new Nvidia chip might add in a nice dithered raytracing calculation for say Unreal Engine or CryEngine, but the technology is narrowly focused and is nowhere near the quality of actual raytraced or path-traced solutions.

So what does "real time" mean to Nvidia in this case? For every 30 frames rendered, one is raytraced and blended in to enhance lighting and reflections, refractions and caustics? At what resolution? Using what engine?

Raytracing is a backwards method to begin with - our own eyes don't shoot OUT rays. These guys keep putting the cart before the horse.
 
I can't wait. I've been waiting to upgrade my workstation for an age now.
For my workload, I will probably go for 2 or 4 of the RTX 5000s, as 16GB is easily enough and I prefer to get more CUDA cores for the money. It all depends on how Octane Render and V-Ray use the cards.
I may just get the GeForce equivalent versions to save even more money.
 
Raytracing is a backwards method to begin with - our own eyes don't shoot OUT rays. These guys keep putting the cart before the horse.
Oh this had me pat my legs hard.

Indeed, the real future is "charge tracing", where every shader emits a certain amount of light - or sucks it in. Processing the equivalent of real photons, not the GI-style photons used in mental ray, Vray, Mantra, Arnold, Maxwell, Redshift, Octane, and any other current rendering solution.

So the photons themselves will come TO the camera, just like with our eyes or a real camera. The emission/reflection/refraction will be physically accurate this way, not the mere approximation raytracing does, which is effectively the reverse, with no physical foundations at all.

I know you were trying to be funny, but this is how raytracing currently works. The "rays" are data-rays only, since real light isn't a ray at all. The data-rays return color/value based on the shader on the geometry they hit (or particles/vertices/fluids, etc.), and that color/value is saved as a raster pixel. It's inefficient, inaccurate, and physically wrong as well. Thus all the constant Global Illumination, caustics, Final Gather, and path-tracing "solutions" to attempt to make things look more realistic. If the rendering engine were already physically accurate, we wouldn't need all that additional calculation overhead.
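For what the "data-ray" description above boils down to, here is a toy sketch: a ray is cast from the camera, the shader on whatever it hits decides the colour, and that colour is stored as a raster pixel. Everything here is illustrative; the names are not from any real renderer.

```python
# Toy version of the "data-ray" description above: cast a ray from the camera,
# evaluate the shader of whatever it hits, and store the returned colour as a
# raster pixel. Names are placeholders, not any real renderer's API.

def shader_red_plastic(hit_point, view_dir):
    # The shader alone decides the returned colour; the "ray" carries no physics.
    return (0.8, 0.1, 0.1)

SCENE = [
    # (object id, distance along the ray, shader evaluated on a hit)
    ("sphere", 4.0, shader_red_plastic),
]

def cast_data_ray(view_dir):
    # Find the nearest object along the ray and ask its shader for a colour.
    if not SCENE:
        return (0.0, 0.0, 0.0)                     # background
    _, distance, shader = min(SCENE, key=lambda obj: obj[1])
    hit_point = tuple(distance * d for d in view_dir)
    return shader(hit_point, view_dir)

framebuffer = {}
for x in range(4):
    for y in range(4):
        framebuffer[(x, y)] = cast_data_ray(view_dir=(0.0, 0.0, -1.0))
print(framebuffer[(0, 0)])   # the colour the shader returned, stored as a pixel value
```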
 
It's a damned shame that Nvidia and their fearless leader Jen-Hsun Huang don't pay any attention at all to getting rid of the bugs in their existing crappy hardware. Specifically, laptop GPUs! I've got a Dell E6530 with an i7 processor, 16 GB of RAM, and a dual-GPU setup of an Intel HD 4000 and an Nvidia NVS 5200M. The Nvidia is as worthless as frog boobs. Nvidia's alleged "updates" are ALWAYS incompatible, despite THEIR GeForce "Experience" BS updater putting the drivers in; the control panel NEVER works post-installation; and the stupid chip, despite having 2 GB of video RAM, can't run anything, if it runs at all! And I mean ANYTHING!!! It can't even tie into VLC! I'm a repair tech and I DO know what I'm doing, ever since the DOS/pre-Windows days, so don't go there! I will never get anything Nvidia ever again, and will seriously tell my customers to go with AMD every time. Nvidia's apathetic support is pure unadulterated excrement!
 