Benchmarks of Wolfenstein II with RTX Adaptive Shading show performance boost

mongeese

Why it matters: In addition to ray tracing, Nvidia’s Turing architecture came with a bag of tricks for developers to play with, and Wolfenstein II: The New Colossus is the first game to implement one of them: Content Adaptive Shading (CAS). CAS analyzes the content of each rendered frame in a post-processing pass, then focuses the graphics card’s shading power on the areas of the next frame that need it the most. By reducing the time spent shading plain blue sky and fast-moving parts of the environment, CAS can improve the frame rate by as much as 7%.

After Wolfenstein II received the update enabling CAS last week, The Tech Report benchmarked the game using the three CAS presets, “Quality,” “Balanced” and “Performance.” They tested all three RTX cards using an i9-9980XE and 32GB of RAM at 3840 x 2160 and saw noticeable performance gains.

The RTX 2080 Ti achieves 106 fps with CAS off but reaches 107 (1% better), 109 (3% better) and 112 fps (6% better) at the different presets. Meanwhile, the RTX 2080 normally pulls in an average of 87 fps but gets 90 (3% better), 91 (5% better) and 93 fps (7% better).

The RTX 2070 also shows some performance gains, going from 72 fps to 74 (3% better), 75 (4% better) and 77 fps (7% better). Similar trends occur with the 99th percentile frames.
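Those percentages are simply the frame-rate ratios rounded to the nearest whole number. As a quick sanity check of the best-case (“Performance” preset) figures, a few lines of C++ (my own arithmetic, not part of Tech Report's methodology):

```cpp
#include <cstdio>

int main() {
    // Best-case ("Performance" preset) averages quoted above, CAS off vs. on.
    struct Row { const char* card; double off, on; };
    const Row rows[] = {
        {"RTX 2080 Ti", 106, 112},
        {"RTX 2080",     87,  93},
        {"RTX 2070",     72,  77},
    };
    for (const Row& r : rows)
        std::printf("%-12s %+.1f%%\n", r.card, 100.0 * (r.on - r.off) / r.off);
    // Prints roughly +5.7%, +6.9% and +6.9%, matching the 6-7% gains above.
}
```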

While the comparison image is provided by Nvidia and thus isn’t an entirely impartial source, it does demonstrate minimal image quality loss. Despite his experience as a graphics reviewer, Tech Report's Jeff Kampman admitted he “saw practically no difference in image quality at 4K between the different presets. If there’s a catch to using CAS on at this resolution, I didn’t see it.” Of course, the difference might be more obvious at lower resolutions.

Rather than finely shading the whole frame, CAS lets developers pick which sections to shade at a higher rate, based on the object, the screen region or whatever criteria the developer chooses. Less detailed regions receive “coarse shading,” where the pixel shader is executed once for a group of neighboring pixels rather than once per pixel. This is particularly useful for VR, where the same shading result can be applied to an object for both eyes, leading to substantial performance improvements according to Nvidia.
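To make the idea concrete, here is a minimal, hypothetical C++ sketch of the kind of per-tile decision content-adaptive shading makes: measure how much detail a tile of the previous frame contains and assign a coarser shading rate where there is little to lose. The tile size, contrast thresholds, detail metric and rate names here are illustrative assumptions, not id Software's or Nvidia's actual implementation.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical shading rates, mirroring the coarse modes described below.
enum class ShadingRate { Full1x1, Coarse2x1, Coarse2x2, Coarse4x4 };

// Choose a shading rate for one 16x16 tile of the previous frame's luminance.
// Flat regions (clear sky, smooth walls) get coarse shading; high-contrast
// regions (edges, text, specular highlights) keep full per-pixel shading.
ShadingRate rateForTile(const std::vector<float>& luma, int frameWidth,
                        int tileX, int tileY, int tileSize = 16) {
    float minL = 1.0f, maxL = 0.0f;
    for (int y = 0; y < tileSize; ++y) {
        for (int x = 0; x < tileSize; ++x) {
            int px = tileX * tileSize + x;
            int py = tileY * tileSize + y;
            float l = luma[py * frameWidth + px];
            minL = std::min(minL, l);
            maxL = std::max(maxL, l);
        }
    }
    float contrast = maxL - minL;                        // crude detail metric
    if (contrast < 0.02f) return ShadingRate::Coarse4x4; // one shade per 16 px
    if (contrast < 0.05f) return ShadingRate::Coarse2x2; // one shade per 4 px
    if (contrast < 0.10f) return ShadingRate::Coarse2x1; // one shade per 2 px
    return ShadingRate::Full1x1;                         // shade every pixel
}
```

In the shipping feature, the resulting per-tile rate map is handed to Turing's variable-rate shading hardware, which then runs the pixel shader once per group of pixels inside the coarse tiles.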

In detailed areas, the shader is applied more than once per pixel, which is known as supersampling. Supersampling is commonly used at 2x, 4x, 8x and 16x, while coarse shading groups pixels into 1x2, 2x1, 2x2 or 4x4 blocks that each share a single shader result (1x1 being normal per-pixel shading). That means some areas of a scene can receive only a few percent of the shading work that other areas get, which makes the near-identical image quality with CAS on and off even more impressive.
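To put the extremes of that range into numbers (my own arithmetic, based only on the rates listed above):

```cpp
#include <cstdio>

int main() {
    // Shader invocations per pixel at the rates mentioned above.
    const double ssaa16x   = 16.0;        // 16x supersampling: 16 shades per pixel
    const double full1x1   = 1.0;         // normal shading: one shade per pixel
    const double coarse4x4 = 1.0 / 16.0;  // 4x4 coarse: one shade per 16 pixels

    std::printf("4x4 coarse vs. per-pixel: %.1f%%\n", 100.0 * coarse4x4 / full1x1);  // ~6.3%
    std::printf("4x4 coarse vs. 16x SSAA:  %.2f%%\n", 100.0 * coarse4x4 / ssaa16x);  // ~0.39%
}
```

So the coarsest regions do only a few percent of the work of normally shaded regions, and a tiny fraction of the work of heavily supersampled ones.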

Nvidia promised many improvements would come with the RTX graphics cards, but so far, they’ve mostly been disappointing in practical terms. Ray tracing does look good in Battlefield V, but the massive performance cost is ridiculous.

DLSS doesn’t look too promising in the short term either, so it’s great to see that CAS can offer some tangible benefits. Like DLSS, though, it will depend on developers choosing to implement and take advantage of it, making these benchmarks more of a glimpse into the future than anything else.


 
Could this simply be because Wolfenstein is actually using Nvidia's ray tracing implementation versus Battlefield's DirectX ray tracing? From the looks of both articles it seems so.
 
There were already lots of reasons to buy RTX, DLSS and real-time fricking actual ray tracing in games being two. Of course the writers at TechSpot have written off RTX because it happens not to be very good value, which is baffling, as new tech such as real-time ray tracing has always come in at poor value.
 
There were already lots of reasons to buy RTX, DLSS and real-time fricking actual ray tracing in games being two. Of course the writers at TechSpot have written off RTX because it happens not to be very good value, which is baffling, as new tech such as real-time ray tracing has always come in at poor value.

Well, both technologies obviously need to prove their worth before smart people jump on the bandwagon and empty their pockets.
 
Adaptive shading is and has been a staple in offline CGI, so it's almost hilarious that they are finally bringing it to real time. One might say the same for "raytracing" (just reflections? Seriously?), but the cards do almost nothing to actually accelerate raytracing. They should be matching the RT cores to the CUDA cores, and THEN charge the obscene prices. It's pretty sad.

But adaptive sampling is definitely a step forward.
 
There were already lots of reasons to buy RTX, DLSS and real-time fricking actual ray tracing in games being two. Of course the writers at TechSpot have written off RTX because it happens not to be very good value, which is baffling, as new tech such as real-time ray tracing has always come in at poor value.

Reviewers make judgments based on the products available now. At the time of launch there were zero games that used any of the new tech, and there is only a single game right now.

Just like how AMD flubbed primitive shaders, cool tech in a press deck doesn't mean squat without real working implementations and good performance. RTX has neither of those.

In fact, unless there are massive performance increases (400%+), the ray tracing ability of the cards is nothing more than a tech demo. Instead of adding more CUDA cores, now every RTX owner has a section of the die that won't do squat for the vast majority of games. Not only do you get terrible performance with ray tracing on, you are also giving up die space that could have gone to a ton of extra CUDA cores.

Real-time ray tracing may one day come, but this first attempt has got to be the worst possible implementation.
 
Adaptive shading is and has been a staple in offline CGI, so it's almost hilarious that they are finally bringing it to real time. One might say the same for "raytracing" (just reflections? Seriously?), but the cards do almost nothing to actually accelerate raytracing. They should be matching the RT cores to the CUDA cores, and THEN charge the obscene prices. It's pretty sad.

But adaptive sampling is definitely a step forward.

In this day and age you pay for the features, not the actual value of the product. Apple has been doing that for a long time; Nvidia just caught on recently.
 
Nvidia have invested billions of dollars into ray tracing technology and they are trying to recoup that investment. I for one am glad that they are investing this money and selling these products. If you personally can’t afford an RTX card, then don’t buy one. Of course, if Nvidia don’t recoup the money they spent, then they probably won’t bother developing new tech in the future.

But just because you can’t afford an RTX card doesn’t mean that others can’t. People need to start looking at the bigger picture and not just their own bank balance.
 