AMD patent describes 'hybrid' approach to real-time ray tracing

Polycount

Staff
In context: Ray tracing has often been referred to as the holy grail of video game graphics, and for good reason. The rendering technology is capable of simulating lifelike reflections, shadows, and lighting -- and recently, it's finally arrived on consumer PCs, courtesy of Nvidia's powerful (but expensive) RTX 20-series GPUs.

The cards pair onboard hardware acceleration with regular software updates to accurately display ray-traced effects in games. Unfortunately, as we've noted many times in the past, Nvidia's implementation of real-time ray tracing still needs quite a bit of work before it will be practical, affordable, and performance-friendly enough for the average user.

For now, Nvidia is the only company in the GPU industry that has actually shipped consumer-grade cards with ray tracing support. However, that may not be the case for long. As reported by Tom's Hardware, AMD filed a patent application back in 2017 which details a "hybrid" ray tracing solution that wouldn't rely quite as heavily on hardware acceleration to function (compared to Nvidia's newest GPUs).

It utilizes both existing shader units and "fixed function" hardware to provide users with improved performance, while also ensuring "flexibility is preserved" for developers. According to AMD, this approach could solve the performance and processing issues associated with both hardware- and software-based ray tracing systems.
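The split the patent describes can be pictured with a toy example: a shader program walks the acceleration structure (BVH) and hands each candidate node to a fixed-function unit that performs the actual ray/box test. The sketch below is purely illustrative and assumes nothing from the patent text itself; it shows the standard slab-method test such a unit would accelerate, with hypothetical function names.

```python
# Illustrative only: the slab-method ray vs. bounding-box test -- the kind
# of fixed-function intersection work described in the patent, with the
# shader program steering which BVH nodes get tested.

def ray_aabb_intersect(origin, inv_dir, box_min, box_max):
    """Return True if a ray hits an axis-aligned bounding box.

    inv_dir holds 1/direction per axis (use a large value for axes
    where the direction component is zero, to avoid dividing by 0).
    """
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))   # latest entry across all slabs
        t_far = min(t_far, max(t1, t2))     # earliest exit across all slabs
    return t_near <= t_far                  # hit if the intervals overlap

# A ray marching along +X hits a box straddling the X axis...
print(ray_aabb_intersect((0, 0, 0), (1.0, 1e9, 1e9), (1, -1, -1), (3, 1, 1)))  # True
# ...but misses the same box shifted off to the side.
print(ray_aabb_intersect((0, 0, 0), (1.0, 1e9, 1e9), (1, 5, -1), (3, 7, 1)))   # False
```

In a hybrid design, the traversal loop calling a test like this stays programmable, which is where the patent's "flexibility is preserved" claim comes from.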

The full explanation is quite technical in nature, but if you feel equipped to dive into the nitty-gritty of hardware and software design, you can read AMD's full patent for yourself or take a look at the image above for a brief summary.

Either way, we'd advise our readers not to get too excited about anything they see in this document. A hybrid approach to real-time ray tracing sounds exciting on paper, but patents like these are usually early drafts of ideas, often intended as roadblocks to any competitors that may wish to swipe the concept before it's been fully developed.

With that said, we know both Microsoft and Sony's upcoming gaming consoles will feature AMD hardware with ray tracing support, so perhaps we won't have to wait long to find out what the hardware maker has up its sleeve.


 
This may turn into a zoo, since ray tracing has no standard behind it. Especially with Intel likely to parade something of their own later this year or next.
 
AMD is really picking up the pace lately. I am pretty sure it was back in the early 2000s when AMD and Intel were neck and neck through tech advancements. I'm really curious to see how all of this pans out before I build my next gaming rig.
 
DXR (DirectX Raytracing) is a standard.
A Microsoft-invented extension on top of their own DirectX 12, specifically for Windows 10, is hardly a standard. OpenCL is a standard, DirectShow is a standard, to name a few, and they work across platforms, supported on the hardware level and not tied to one specific OS.
 
Ray tracing is a rendering method like rasterisation, not just lights, shadows, and such. What we have in that method now is hybrid graphics, like using vectors for images while still using sprites for text, as many early console games did. Right now all we can do is hybrid ray tracing, because the computational power of current GPUs is nowhere near what is needed for completely ray-traced real-time imaging in games. Nvidia recognized that but wanted to get the ball rolling, and added specialized workstation hardware they call tensor cores to consumer GPUs in order to bridge the gap. Is it perfect? No. Is it cheap? No. But they were able to do it, despite what detractors would believe. AMD has no hardware implemented and thus is only relying on software emulation, using the DX standard of unified shaders to process the image, which is a poor way of currently doing it.
 
Nvidia mis-sold their server GPU, saying the tensor cores are for ray tracing, when in fact it's server-market hardware reconstituted to run on desktop.
Why do you think the CUDA core performance tanks when the tensor cores ray trace?
Because it wasn't designed for it.
 
AMD has no hardware implemented and thus is only relying on software emulation, using the DX standard of unified shaders to process the image, which is a poor way of currently doing it.
Both are hardware implementations. The difference lies in that AMD's GPUs have no units specifically for ray intersection tests, so those routines are run on the 'shader cores' within the GPU. Naturally, while those cores are being used for the tests, they can't be used for any other calculations required in the pixel processing stage. By having separate dedicated units for those tests, the shaders in NVIDIA's RTX GPUs can carry out other pixel shader routines while the intersection tests are being processed.
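To make the trade-off concrete, here is a sketch of the widely used Möller-Trumbore ray/triangle test. This is illustrative Python, not vendor code, but it shows the per-ray arithmetic that ties up shader ALUs on a GPU with no dedicated intersection units.

```python
# A sketch (not AMD's or Nvidia's actual code) of the Moller-Trumbore
# ray/triangle test: the per-ray workload that, absent dedicated
# intersection units, has to run on the same shader cores used for
# pixel shading.

def ray_triangle_intersect(orig, d, v0, v1, v2, eps=1e-8):
    """Return the hit distance t along the ray, or None on a miss."""
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    cross = lambda a, b: (a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0])

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(d, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv_det      # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(d, qvec) * inv_det         # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv_det        # distance along the ray
    return t if t > eps else None

# A ray fired down +Z at a triangle in the z=0 plane hits at t = 1.0.
print(ray_triangle_intersect((0, 0, -1), (0, 0, 1),
                             (-1, -1, 0), (1, -1, 0), (0, 1, 0)))  # 1.0
```

Every cross product and dot product here is an ALU operation; run millions of these per frame on shader cores and they compete directly with pixel shading, which is exactly the contention the dedicated units avoid.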
 
DXR (DirectX Raytracing) is a standard.
A Microsoft-invented extension on top of their own DirectX 12, specifically for Windows 10, is hardly a standard. OpenCL is a standard, DirectShow is a standard, to name a few, and they work across platforms, supported on the hardware level and not tied to one specific OS.

DirectShow is also developed by Microsoft, is part of the Windows SDK, and actually requires DirectX to run. OpenCL, as the name suggests - Open Computing Language - is "just" a programming language used to take advantage of different computing units - GPU+CPU, for example. Hardly an industry standard, since CUDA is more widely used in GPGPU environments due to more mature tools and better support. It would be like calling OpenGL a standard but not DirectX, even though DX is more widely used.

As for DXR, it's too early to say whether it becomes the industry standard in regards to RT. We shall see. But to simply dismiss it as some MS "extension" while calling DirectShow a standard is just weird.

I guess it depends on the definition of a standard one might use but your statement gives the impression that you have some beef with MS (which makes your DS statement all the more puzzling).
 
Nvidia mis-sold their server GPU, saying the tensor cores are for ray tracing, when in fact it's server-market hardware reconstituted to run on desktop.
Why do you think the CUDA core performance tanks when the tensor cores ray trace?
Because it wasn't designed for it.

Have they (NV) actually said that tensor cores are used for RT, though? I don't think so. Tensor cores are used for DLSS and other post-processing techniques, not RT. Straight from the horse's mouth:

"Tensor Cores for AI Acceleration

Turing features new Tensor Cores, processors that accelerate deep learning training and inference, providing up to 500 trillion tensor operations per second. This level of performance dramatically accelerates AI-enhanced features—such as denoising, resolution scaling, and video re-timing—creating applications with powerful new capabilities."
 
This is very interesting. Back when DX11 was something new, we had 2 competing solutions: AMD with their fixed-function tessellator and NVIDIA with their approach to do tessellation on the compute cores. I remember heated arguments on the Net about which implementation was the "correct" one. Some even said that AMD used "real" tessellation :)

We now have a reverse where NV is trying to use special-function HW and AMD trying what seems to be a more general-purpose approach to doing RT.

It's going to be interesting to see how this technology evolves. Fun times ahead :)
 
DXR (DirectX Raytracing) is a standard.
A Microsoft-invented extension on top of their own DirectX 12, specifically for Windows 10, is hardly a standard. OpenCL is a standard, DirectShow is a standard, to name a few, and they work across platforms, supported on the hardware level and not tied to one specific OS.
And RTX, which NVIDIA made for themselves, is more of a "standard" to you? We all have DirectX by default; RTX, only a few.
 
AMD is really picking up the pace lately. I am pretty sure it was back in the early 2000's when AMD and Intel were neck and neck through tech advancements. I'm really curious to see how all of this pans out before I build my next gaming rig.

And they would probably never stop being neck and neck if Intel didn't bribe everyone and their dog to stop buying AMD :)
 
DXR (DirectX Raytracing) is a standard.
A Microsoft-invented extension on top of their own DirectX 12, specifically for Windows 10, is hardly a standard. OpenCL is a standard, DirectShow is a standard, to name a few, and they work across platforms, supported on the hardware level and not tied to one specific OS.
And RTX, which NVIDIA made for themselves, is more of a "standard" to you? We all have DirectX by default; RTX, only a few.
RTX works with DXR, not solo.

And yes, DXR can be viewed as a standard.
 