Microsoft DirectX Raytracing 1.2 update promises to boost game performance significantly

Daniel Sims

Forward-looking: Recent big game releases appear to confirm that ray tracing and path tracing are the future of graphics rendering. However, these techniques remain prohibitively computationally expensive. The next major update to Microsoft's DirectX API aims to address this and facilitate broader adoption of the technology.

Microsoft's GDC presentation this week offered a glimpse into the future of DirectX ray tracing support. The company claims that DirectX Raytracing 1.2 could help developers double the performance of ray tracing and path tracing.

Two key features underpin the update: opacity micromaps (OMM) and shader execution reordering (SER). OMM can help path-traced games run up to 2.3 times faster by reducing shader invocations and optimizing opacity data to enhance rendering efficiency. Meanwhile, SER intelligently groups shader execution to minimize divergence, improving performance by up to 2 times.
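To make the SER idea concrete: the gain comes from grouping rays by the shader they are about to invoke, so neighboring GPU threads stay on the same code path instead of diverging. The C++ sketch below mimics that grouping on the CPU purely for illustration; the `Hit` struct and function names are invented here and are not part of the DirectX 12 API, which exposes reordering through HLSL instead.

```cpp
// Conceptual CPU-side illustration of shader execution reordering (SER).
// NOT the DXR 1.2 API: the types and names below are made up for this sketch.
#include <algorithm>
#include <cstdint>
#include <vector>

struct Hit {
    uint32_t shaderId;  // which closest-hit/material shader this ray will invoke
    uint32_t rayIndex;  // original launch index
};

// Group hits by the shader they need, so consecutive "threads" shading this
// list run the same code path - the divergence reduction SER provides in hardware.
void ReorderForCoherentShading(std::vector<Hit>& hits) {
    std::stable_sort(hits.begin(), hits.end(),
                     [](const Hit& a, const Hit& b) { return a.shaderId < b.shaderId; });
}

int main() {
    std::vector<Hit> hits = {{3, 0}, {1, 1}, {3, 2}, {0, 3}, {1, 4}};
    ReorderForCoherentShading(hits);  // now ordered 0,1,1,3,3 by shaderId
    return 0;
}
```

Roughly speaking, OMM attacks the same cost from the other direction: by encoding opacity per micro-triangle up front, traversal can resolve opaque or transparent hits on alpha-tested geometry (such as foliage) without invoking any-hit shaders at all.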

Although ray tracing first emerged as a flashy extra feature in games like Battlefield V and Cyberpunk 2077, recent titles have made it mandatory, indicating that it will soon become standard. Examples include Star Wars Outlaws, Indiana Jones and the Great Circle, and Assassin's Creed Shadows.

Path tracing is a more advanced form of ray tracing that significantly improves the accuracy of dynamic lighting and shadows but comes with substantial performance costs. It typically requires high-end GPUs to run effectively in games such as Cyberpunk 2077, Alan Wake 2, Black Myth: Wukong, Indiana Jones and the Great Circle, and the recently released Half-Life 2 RTX demo. Doom: The Dark Ages, launching in May, will also require ray tracing and offer optional path tracing.

Also see: Path Tracing vs. Ray Tracing, Explained

DirectX Raytracing 1.2 will become available to developers starting next month. As a result, ray tracing and path tracing could become significantly less demanding in new games over the next few years. Unsurprisingly, Nvidia's RTX graphics cards will support the API update first, while Microsoft is collaborating with AMD, Intel, and Qualcomm to expand support.

Microsoft also shared more details on cooperative vectors and neural rendering, which aim to integrate AI workloads into real-time graphics rendering. A new foundational feature, Neural Block Texture Compression, significantly reduces memory usage – potentially benefiting users with GPUs that have 12 GB of VRAM or less. Additionally, neural supersampling and denoising are expected to enhance image quality in path-traced games.

DirectX neural rendering and cooperative vector support first appeared in January. Microsoft explained that neural rendering optimizes matrix-vector operations for AI training and enables smaller neural networks to run efficiently during GPU shading processes.
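As a rough illustration of what those matrix-vector operations look like, the core workload of these small in-shader neural networks is a dense layer: a matrix-vector product followed by an activation. The plain C++ version below (function and variable names are invented for illustration) only shows the math; cooperative vectors are about expressing this same operation in HLSL so it can run on the GPU's matrix-acceleration hardware rather than scalar shader ALUs.

```cpp
// Illustrative dense layer (matrix-vector product + ReLU), the kind of
// operation cooperative vectors accelerate inside shaders. Names are
// invented for this sketch; this is not the HLSL/D3D12 feature itself.
#include <cstddef>
#include <vector>

std::vector<float> DenseLayer(const std::vector<float>& weights,  // rows*cols, row-major
                              const std::vector<float>& bias,     // rows
                              const std::vector<float>& input,    // cols
                              std::size_t rows, std::size_t cols) {
    std::vector<float> out(rows, 0.0f);
    for (std::size_t r = 0; r < rows; ++r) {
        float acc = bias[r];
        for (std::size_t c = 0; c < cols; ++c)
            acc += weights[r * cols + c] * input[c];
        out[r] = acc > 0.0f ? acc : 0.0f;  // ReLU activation
    }
    return out;
}

int main() {
    // A 2x3 layer applied to a 3-element input vector.
    std::vector<float> w = {0.5f, -1.0f, 0.25f, 1.0f, 0.0f, -0.5f};
    std::vector<float> b = {0.1f, -0.2f};
    std::vector<float> x = {1.0f, 2.0f, 3.0f};
    std::vector<float> y = DenseLayer(w, b, x, 2, 3);
    return static_cast<int>(y.size());
}
```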


 
Lol, isn't optimizing ray tracing bad for Nvidia's upscaling business?
They will just move the goalposts by adding more rays and light bounces. For example, Cyberpunk 2077 uses 2 rays and 2 bounces for its default path tracing, and there are mods that increase it to 4 rays and 4 bounces for a more realistic image, but at an added cost. From what I read there is a plateau effect, so they know they can't move the goalposts forever, and maybe this is exactly why they are milking their gravy train, imo.
 
I feel existing hardware is clearly not ready for ray tracing even though it’s been around for some years now. Current implementations are getting increasingly tough for the hardware to run, and Nvidia keeps pushing ray tracing harder just to give its hardware an “advantage” over competitors. But in reality, even the top-end RTX 5090 struggles with path tracing at higher settings without resorting to software to circumvent the challenge. Anything below the RTX 5090 generally means cutbacks in ray tracing and graphical settings.
 
I feel existing hardware is clearly not ready for ray tracing even though it’s been around for some years now. Current implementations are getting increasingly tough for the hardware to run, and Nvidia keeps pushing ray tracing harder just to give its hardware an “advantage” over competitors. But in reality, even the top-end RTX 5090 struggles with path tracing at higher settings without resorting to software to circumvent the challenge. Anything below the RTX 5090 generally means cutbacks in ray tracing and graphical settings.

I think the latest Unreal Engine demo, with its MegaLights for shadows, Lumen, and Nanite, ran on a PlayStation 5.
Afaik the latest AMD and Nvidia GPUs can run any game now with ray tracing on ultra, but the resolution has to be lower, e.g. 1440p with some upscaler.

 
I feel existing hardware is clearly not ready for ray tracing even though it’s been around for some years now. Current implementations are getting increasingly tough for the hardware to run, and Nvidia keeps pushing ray tracing harder just to give its hardware an “advantage” over competitors. But in reality, even the top-end RTX 5090 struggles with path tracing at higher settings without resorting to software to circumvent the challenge. Anything below the RTX 5090 generally means cutbacks in ray tracing and graphical settings.
I don't think that ray tracing is the issue. Frankly, I think it's lazy devs and the cost of hardware due to greed and inflation outpacing wages.

Before, people thought the idea of Titan cards was absurd, but many accepted that "those are top-tier cards and only the absurdly rich will buy them." Now we live in an era where you basically need a 90-class card, and if you don't get one, whatever card you DID get is engineered to be obsolete within 2 years of purchase. The idea that the 5080 is a 16 GB card while we're already having issues with 12 GB cards being throttled is wild. I think people are salty about the idea of paying $1,300 for a 5080 that will start to be throttled by VRAM starvation in new games while the GPU itself still has plenty of horsepower to run them.

At least AMD has done an OKAY job of keeping the price of the 9070 XT around $750 with all the market nonsense that's going on.

Frankly, I think we need to bring the cost of hardware down to levels where "compromise" is acceptable. Currently, even "luxury" items aren't providing a luxury experience, so what are people buying the 5090 actually receiving? Most of them are getting stamped with a big fat "you paid scalper prices" label. Once prices reach a "compromise is acceptable" level, I don't think people will mind playing RT titles at 60 Hz with upscaling, turning down the settings or, god forbid, using frame gen.
 
I'm OK holding out for Proton, Mesa, etc. to add it to their translation layers and experiencing the benefits on Linux with a slight delay.
As I understand it, they're just using AI to help render the original image, pre-DLSS and frame gen. I can't wait until we're just playing 100% AI-generated slop. This is closer to "AI ray tracing" than actual RT, where an AI generates what it thinks the rays should look like rather than rendering actual rays.

But I'm with you, been daily driving Linux for 2 years now. Yeah, I miss out on some games, but it only makes me want to give devs who support Linux my money more. Then there is the philosophical aspect of computing that I never thought about before. What is it that I want FROM a computer? What limits am I willing to work within? What compromises am I willing to make? Considering the only multiplayer games I play are ESO, EvE and SM2, I'm not really missing out on much.
 
As I understand it, they're just using AI to help render the original image, pre-DLSS and frame gen. I can't wait until we're just playing 100% AI-generated slop. This is closer to "AI ray tracing" than actual RT, where an AI generates what it thinks the rays should look like rather than rendering actual rays.
This article largely skips over any details -sigh- but TomsHW didn't. The ray/path tracing improvements come from more quickly determining which rays don't matter, so you don't waste compute on them, plus other efficiencies from grouping certain things together. The numbers are all strongly "up to" because how much efficiency you gain depends on the scene. Still, 30-40% more efficiency is great for future games that use this.
 
This article largely skips over any details -sigh- but TomsHW didn't. The ray/path tracing improvements come from more quickly determining which rays don't matter, so you don't waste compute on them, plus other efficiencies from grouping certain things together. The numbers are all strongly "up to" because how much efficiency you gain depends on the scene. Still, 30-40% more efficiency is great for future games that use this.
Thanks for the clarification. When the 40 series came out I thought we were still 2 generations away from practical ray tracing. After seeing the 50 series, I think we're still another 2 generations away from practical ray tracing. However, it seems like a lot of the problems in modern games aren't really the RT; it's developers rendering things that can't be seen by the player, or giving flat surfaces way more polygons than, well, just 1. People like to put all the blame on RT, but there is no reason a flat surface needs 10,000 polygons, or needs to be rendered when it can't even be seen.

I think a lot of poor optimization in games is what I'm going to call "secondary laziness". Lots of these things would have been gone over by engine experts and eliminated, but since ray tracing handles everything, they just expect the engine to do everything for them.
 
Thanks for the clarification. When the 40 series came out I thought we were still 2 generations away from practical ray tracing. After seeing the 50 series, I think we're still another 2 generations away from practical ray tracing. However, it seems like a lot of the problems in modern games aren't really the RT; it's developers rendering things that can't be seen by the player, or giving flat surfaces way more polygons than, well, just 1. People like to put all the blame on RT, but there is no reason a flat surface needs 10,000 polygons, or needs to be rendered when it can't even be seen.

I think a lot of poor optimization in games is what I'm going to call "secondary laziness". Lots of these things would have been gone over by engine experts and eliminated, but since ray tracing handles everything, they just expect the engine to do everything for them.
I agree with the 2-more-generations assessment. It's unclear to me if we are hitting a bit of a technological wall for RT compute, or if it is an architectural focus on AI (rather than RT) that is producing such lackluster RT improvements.

There were some HW improvements in the 5000 series, but they seem to have minimal impact on RT performance. Perhaps they were designed to work with these new APIs and performance will be better in games that use them. Or perhaps those were really AI improvements that happen to improve part of RT, while the other part of RT is bottlenecking increased performance. -shrug-
 