Crytek releases its impressive 'Neon Noir' ray tracing tech demo as a free benchmark tool


When it comes to video game graphics, ray tracing is the current holy grail. It's a form of rendering technology that is arguably best known for its ability to significantly boost the quality and realism of reflections and global illumination (among other things).

However, the current implementation of this tech in consumer PCs has some drawbacks. For starters, you'll need a pricey RTX-series GPU from Nvidia to take advantage of it in the latest games, and even then, enabling RTX features often results in a nasty performance hit.

Not too long ago, Crytek sought to show the world that beautiful, real-time ray-traced content is possible without being locked into Nvidia's ecosystem, by way of its hardware-agnostic "Neon Noir" tech demo (capable of running on both AMD and Nvidia systems).

We've covered the demo in detail a couple of times in the past, so we won't retread the specifics here. Suffice it to say that it was (and is) very impressive, and it nicely showcased a wide range of ray-traced effects -- feel free to view it in 4K above.

Still, attractive or not, Neon Noir wasn't particularly functional. It ran well enough in Crytek's controlled testing environment, but what if an ordinary user attempted to run the same demo? Would the results look or feel just as smooth? Crytek wants you to answer those questions for yourself today, as the company has converted the demo into a real benchmark utility. The utility is available for download right now at no cost via the CryEngine Marketplace.

The benchmark lets you see how well your system might theoretically run ray traced content, and like many other similar tools out there, you can tweak a handful of settings to customize your experience. For now, there are two main ray tracing presets -- Ultra and Very High -- as well as the option to tweak your desired benchmark resolution.

Like the original Neon Noir tech demo, the ray tracing technology contained in this benchmark is hardware- and API-agnostic. While this means you won't need an RTX card to run the demo effectively, you'll still want a reasonably powerful system to get the most out of it. The minimum requirements for the Neon Noir benchmark can be found below:

  • AMD Ryzen 5 2500X or Intel Core i7-8700
  • AMD Vega 56 (8GB) or Nvidia GeForce GTX 1070 (8GB)
  • 16GB RAM
  • Windows 10 64-bit
  • DirectX 11

I ran the "Ultra" benchmark myself at 1920x1080 (equipped with a 1080 Ti and an i7-8700K), and my frame rate managed to stay at a relatively consistent 85 FPS throughout the entire demo. It did drop to the low 70s once or twice, but given our previous experiences with ray-traced titles, these results are still fairly impressive. My final score came in at 7980 (all of the screenshots in this article were taken during the benchmark).

Comparing the benchmark to the official video above, the former doesn't look nearly as pretty overall. It's apparent that some compromises had to be made to get this demo to run on RTX-free systems. However, the reflections are still quite stunning, and if this technology were fully implemented in a modern game, I'm confident users would be pleased.

While we're still probably a ways off from truly performance-friendly, hardware-agnostic ray tracing effects in our AAA PC games, Crytek's benchmark does manage to make that future feel just a little bit closer.

If you want to test the demo out for yourself, visit Crytek's official website, sign up for a free account, snag the CryEngine launcher, return to the website and search for the Neon Noir benchmark, and click "Add to library." At this point, the demo should show up in your Launcher's library, allowing you to install it.

If you do decide to give the test a shot, feel free to sound off in the comments with your results, impressions, and system specs.


 
Before anyone says RIP RTX, just know that the RTX Turing lineup destroys everything else in this benchmark: the RTX 2060 is as fast as or faster than the 1080 Ti and 5700 XT. The RTX 2060 scores around 7500 at 1080p Ultra. My 2080 Ti scores 15400 at 1080p Ultra and 9900 at 1440p Ultra.

Also, the reflections from the red team look less detailed and just odd:
[comparison screenshots: AMD (5700 XT) vs Nvidia (2080 Ti) reflections -- watch at 1:28 at 0.25x speed]
 
krizby said: "Before anyone says RIP RTX, just know that the RTX Turing lineup destroys everything else in this benchmark... The RTX 2060 scores around 7500 at 1080p Ultra..."
I think you mean that the 5700 XT has a score around 7500. Found a YouTube video of an RTX 2060 that scored 4389.
 
krizby said: "Also, the reflections from the red team look less detailed and just odd... Watch at 1:28 at 0.25x speed."
I'm probably blind, but I don't see that much of a difference lol
 
No, I'm pretty sure he meant RTX 2060. In the video you linked, at 1:44 the result for the RTX 2060 @ 1080p Ultra is 7585. Your quoted 4389 score (at the end) is @ 1440p Ultra.

abracadaver is correct. Just to confirm, both of these are at 1080p ultra.

[two benchmark result screenshots, both at 1080p Ultra]


krizby said: "...the RTX Turing lineup destroys everything else in this benchmark: the RTX 2060 is as fast as or faster than the 1080 Ti and 5700 XT..."

1. No, the numbers (as shown above) clearly show the 5700 XT putting a whupping on the 2060.

2. Stop coming into threads preemptively making accusations. No one here is saying RTX is dead. This is a single benchmark.

3. The AMD reflections in the images you linked look more detailed. Take a second look: you can see all the details present from the Nvidia card, with some extra reflections on top. If anything, I'd say the AMD card is processing more reflections from off-screen objects, or else you would not have those additional reflections.

If there is one thing you should gather from this benchmark it's that ray tracing is certainly possible without fixed function hardware.
 
The video that abracadaver posted has two runs of the benchmark. The first one is labeled "GeForce RTX 2060, 1920x1080 display resolution, ultra settings"; it says that at the beginning. That benchmark ends at 1:43 with a score of 7585. Then at 1:50 a second run is labeled "GeForce RTX 2060, 2560x1440 display resolution, ultra settings". This run ends at the end of the video, with a score of 4389. Or am I watching this video wrong?


Either way, like you said, "Ray tracing is certainly possible without fixed function hardware," and this is a great thing. I'm all for it, for sure. Pushing graphics fidelity and performance further without being forced to buy into a brand or technology, like we've seen happen in the past. Can't wait to see this and other implementations mature.
 
specs:
Windows 10 pro 64-bit
AMD FX-8350 (I know it's old)
16GB DDR3
GTX 1070 8GB
512GB SSD

ultra 1920x1080: 5195
very high 1920x1080: 6151

The main thing I noticed, going from ultra to very high, is that the reflections are more pixelated on very high than on ultra.
 
specs:
Windows 10 Pro 64-bit, version 1909
9900K @ 5.2GHz
16GB DDR4 4000 @ C18
GTX 1080
1TB NVMe

ultra 1920x1080: 7455
60-102 FPS
 
"The video that abracadaver posted has two runs of the benchmark... the first run ends at 1:43 with a score of 7585 at 1080p; the second run ends with a score of 4389 at 1440p... Can't wait to see this and other implementations mature."

Correct. I still have no idea where krizby got his information to make a statement like this:

"RTX Turing lineup destroy everything else in this benchmark"

given the 2060 is only performing on par with a 5700 XT and the AMD card has zero fixed function hardware for RT.
 
Evernessince said: "abracadaver is correct... the numbers clearly show the 5700 XT putting a whupping on the 2060... If there is one thing you should gather from this benchmark it's that ray tracing is certainly possible without fixed function hardware."

The benchmark is 1:40 long, dude; the 3:30 clip is two runs: the first run is 1080p, the second run is 1440p. Don't make a joke of yourself, please.

Nvidia has always said RT is possible on non-RTX cards; the first RT Star Wars demo was done on a Volta GPU without RT cores, it's just that the RT performance would be unplayable on a non-RTX card. Pascal has been able to use DXR since April 2019.

Anyway, this benchmark cuts many corners related to ray tracing to improve performance; here is a quote from the developer:

"
One of the key factors which helps us to run efficiently on non-RTX hardware is the ability to flexibly and dynamically switch from expensive mesh tracing to low-cost voxel tracing, without any loss in quality. Furthermore, whenever possible we still use all the established techniques like environment probes or SSAO. These two factors help to minimize how much true mesh ray tracing we need and means we can achieve good performance on mainstream GPUs. Another factor that helps us is that our SVOGI system has benefitted from five years of development.

However, RTX will allow the effects to run at a higher resolution. At the moment on GTX 1080, we usually compute reflections and refractions at half-screen resolution. RTX will probably allow full-screen 4k resolution. It will also help us to have more dynamic elements in the scene, whereas currently, we have some limitations. Broadly speaking, RTX will not allow new features in CRYENGINE, but it will enable better performance and more details.
"
Source
Sounds like it's true that RTX cards get more detailed reflections.
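To make the approach in that quote concrete, here's a minimal sketch of the decision it describes. Every name and threshold below is a hypothetical illustration, not Crytek's actual code: cheap techniques first, true mesh tracing only for the rays that need it.

```python
# Hypothetical illustration only -- not CryEngine code. It sketches the kind of
# cost-driven switch the developer describes: reuse screen-space data when you
# can, fall back to cheap voxel tracing where it loses no visible quality, and
# reserve expensive mesh tracing for the rays that actually need it.

def pick_tracer(roughness: float, hit_on_screen: bool, mesh_budget: int) -> str:
    """Choose a tracing technique for one reflection ray."""
    if hit_on_screen:
        return "screen_space"      # already-shaded pixels, nearly free
    if roughness > 0.4:            # hypothetical threshold: rough surfaces blur
        return "voxel"             # detail away, so voxel cone tracing suffices
    if mesh_budget > 0:
        return "mesh"              # sharp reflection of off-screen geometry --
                                   # the one case needing true mesh tracing
    return "environment_probe"     # budget exhausted: prefiltered fallback

# A few representative rays: (roughness, hit visible on screen?, budget left)
for ray in [(0.8, False, 4), (0.1, True, 4), (0.1, False, 4), (0.1, False, 0)]:
    print(ray, "->", pick_tracer(*ray))
```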
 
krizby said: "The benchmark is 1:40 long, dude; the 3:30 clip is two runs... Nvidia has always said RT is possible on non-RTX cards..."

Those who live in glass houses should not cast stones.

"the RTX Turing lineup destroys everything else in this benchmark"
- krizby, Nov 14th 2019

A statement easily disproven.

I've already addressed your 1st paragraph in a prior post. I suggest you go back and read it.

krizby said: "Anyway, this benchmark cuts many corners related to ray tracing to improve performance... Sounds like it's true that RTX cards get more detailed reflections."


"t's an entirely software-based solution that doesn't use DXR or the Vulkan API's ray tracing functions, so it cannot use any of the benefits of those APIs, such as the RT core in Nvidia's Turing architecture - or indeed whatever equivalent hardware AMD has in development."

This tech doesn't utilize Nvidia's RT cores, period. Your original comment:

"Before anyone says RIP RTX, just know that the RTX Turing lineup destroys everything else in this benchmark... Also, the reflections from the red team look less detailed and just odd..."
is wrong on all counts. Your preconceived bias made you believe RTX was responsible for increased detail when in fact RTX was never in play to begin with.

In addition, let me just point out all the parts of the article you quoted that you missed:

"However, RTX will allow the effects to run at a higher resolution. At the moment on GTX 1080, we usually compute reflections and refractions at half-screen resolution. RTX will probably allow full-screen 4k resolution. It will also help us to have more dynamic elements in the scene, whereas currently, we have some limitations. Broadly speaking, RTX will not allow new features in CRYENGINE, but it will enable better performance and more details."

Clearly "will" does not mean now, which was the basis for your entire first comment.
 
Evernessince said: ""the RTX Turing lineup destroys everything else in this benchmark" - krizby, Nov 24th 2019. A statement easily disproven... This tech doesn't utilize Nvidia's RT cores, period... Clearly "will" does not mean now, which was the basis for your entire first comment."

Okay, so the slowest RTX card, the 2060, is as fast as the previous-gen champion 1080 Ti and AMD's current champ, the 5700 XT (the Radeon VII is slower), which I already mentioned in my first statement and which took you half a day to figure out. My 2080 Ti scores 15400 at 1080p Ultra, which is 2x that of the 1080 Ti and 5700 XT, and that is not destroying? Sure, bud.

It doesn't really make sense that the 2080 Ti can be 2x as fast as the 1080 Ti in any case if the RT cores are not utilized lol. The article I quoted was from May 2019 and it's now Nov 2019, so it's the future by their reference, you know.

Also, my first statement was Nov 14th, not 24th; are you from the future? And please don't quote the Bible unless you want to get stoned for wearing mixed-fabric clothing or working on Sunday. Yes, I've watched a lot of videos saying that those who quote the Bible for their own convenience should not even be listened to.
 
krizby said: "Okay, so the slowest RTX card, the 2060, is as fast as the previous-gen champion 1080 Ti and AMD's current champ, the 5700 XT... and that is not destroying? Sure, bud."

You said:

"the RTX Turing lineup destroy everything else in this benchmark"

lineup, meaning the entire range of Turing based RTX cards. Don't make over-generalized statements if you know they aren't true.

My 2080 Ti scores 15400 at 1080p Ultra, which is 2x that of the 1080 Ti and 5700 XT, and that is not destroying? Sure, bud.

It doesn't really make sense that the 2080 Ti can be 2x as fast as the 1080 Ti in any case if the RT cores are not utilized lol.


1080p, Ultra: this 2080 Ti scores 13070; the one you linked is 17.8% higher.


1080p, Ultra: this 1080 Ti scores 9272.

Doing some basic math, that's a 40.9% increase. Nowhere near "2x as fast" (or 100%) as you claim, and just about on par with what you'd expect going from a 1080 Ti to a 2080 Ti. Of course, averaged over many games it's more like 32%, but new titles have made use of the 2080 Ti's extra headroom.
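For anyone who wants to check the arithmetic, here it is using only the scores quoted in this thread:

```python
# Percentage gaps between the 1080p Ultra scores quoted in this thread.
def pct_increase(base: float, new: float) -> float:
    return (new - base) / base * 100

print(f"1080 Ti (9272) -> this 2080 Ti (13070): +{pct_increase(9272, 13070):.1f}%")     # ~+41%
print(f"this 2080 Ti (13070) -> krizby's (15400): +{pct_increase(13070, 15400):.1f}%")  # +17.8%
print(f"1080 Ti (9272) -> krizby's 2080 Ti (15400): +{pct_increase(9272, 15400):.1f}%") # +66.1%
# Even the fastest 2080 Ti score quoted here is ~66% ahead of the 1080 Ti,
# not 2x (100%) -- the point made in the post above.
```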

Of course it doesn't make sense; both the article I linked and the developer article you linked plainly spell out that RTX is not used by this demo. That should be 100% clear looking at those bench numbers.

Another thing to note, CPU appears to have a large impact here. I'm seeing gains going from a 7700K to a 9900K with the same GPU.

Yes, I've watched a lot of videos saying that those who quote the Bible for their own convenience should not even be listened to.

So you take advice from videos? That explains a lot. Do you often watch videos that tell you not to listen to select people you coincidentally lose arguments to in threads?

Also, my first statement was Nov 14th, not 24th; are you from the future?

Fixed.
 
No, I'm pretty sure he meant RTX 2060. In the video you linked, at 1:44 the result for the RTX 2060 @ 1080p Ultra is 7585. Your quoted 4389 score (at the end) is @ 1440p Ultra.
I confirm that, just ran the benchmark yesterday.
7700T, 16GB DDR4, and an MSI RTX 2060 Gaming Z (curve OC with Afterburner, +97MHz core, +900MHz memory), and I got around 4800 at 1440p, Ultra settings. It was around 58fps avg.
 
Just been playing about with this too, at different resolutions. Core i7 9900K, 32 GiB DDR4, Titan X Pascal - all at default clocks but with heavy cooling to prevent throttling.

720p = 15680
1080p = 8715
1440p = 5260
4K = 2540
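As a rough sanity check on how those four scores scale, here's a back-of-the-envelope script; if the benchmark were purely per-pixel-bound, score times pixel count would be constant across resolutions:

```python
# Rough scaling check on the four scores above: for a purely per-pixel-bound
# workload, score x pixel_count would stay constant across resolutions.
results = {  # resolution: (pixel count, score)
    "720p":  (1280 * 720,  15680),
    "1080p": (1920 * 1080,  8715),
    "1440p": (2560 * 1440,  5260),
    "4K":    (3840 * 2160,  2540),
}
base = results["720p"][0] * results["720p"][1]
for name, (pixels, score) in results.items():
    product = pixels * score
    print(f"{name:>5}: score x pixels = {product:.2e} ({product / base:.2f}x the 720p figure)")
# The product rises with resolution (1.00x -> 1.46x), so the score falls more
# slowly than pure pixel count would predict -- plausibly some fixed per-frame
# cost, plus effects like the half-resolution reflections mentioned in the
# developer quote earlier in the thread.
```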

Interestingly, each resolution presented some oddities with regards to how certain visual effects looked. At 1440p, the volumetric lighting through the rain and steam was a little glitchy, but fine at other resolutions. At 1080p and 4K, some of the reflections displayed minor artifacts, which weren't present (or at least not noticeable) at the other settings.

It's an impressive piece of work, despite the obvious limitations that it's a tech demo and not a game.

Evernessince said:
Another thing to note, CPU appears to have a large impact here. I'm seeing gains going from a 7700K to a 9900K with the same GPU.
This might be due to the use of asynchronous shader compiling in CryEngine, which scales with the number of threads supported by the CPU. I'll test it out.

Edit: Well on my system, it's not that - changing the config file to limit it to two threads didn't alter the 1080p score.
 
Evernessince said: "lineup, meaning the entire range of Turing based RTX cards. Don't make over-generalized statements if you know they aren't true... This 2080 Ti scores 13070... this 1080 Ti scores 9272... that's a 40.9% increase. Nowhere near "2x as fast" (or 100%) as you claim... CPU appears to have a large impact here."

Well, in all fairness, let's just use data from the Eurogamer website then.
AVG FPS:
2080 Ti - 103 fps
2060S - 59.4 fps
1080 Ti - 58.7 fps
5700XT - 50.6 fps
2060 - 50.1 fps
source
So the 2080 Ti is 75% faster than the 1080 Ti and 103% faster than the 5700 XT. The 2060 is about the same as the 5700 XT and 15% slower than the 1080 Ti.
I don't know about you, but 7 out of 8 RTX Turing cards (2060 Super and above) sitting on top of everything else is pretty much "destroying".
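Checking those percentages against the quoted averages:

```python
# Relative performance from the Eurogamer average-FPS figures above.
fps = {"2080 Ti": 103.0, "2060S": 59.4, "1080 Ti": 58.7, "5700XT": 50.6, "2060": 50.1}

def faster(a: str, b: str) -> float:
    """How much faster card a is than card b, in percent."""
    return (fps[a] / fps[b] - 1) * 100

print(f"2080 Ti vs 1080 Ti: +{faster('2080 Ti', '1080 Ti'):.1f}%")  # +75.5%
print(f"2080 Ti vs 5700XT:  +{faster('2080 Ti', '5700XT'):.1f}%")   # +103.6%
print(f"2060 vs 1080 Ti:    {faster('2060', '1080 Ti'):.1f}%")      # -14.7%
```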

Anyway, those who selectively quote Bible verses sound something like this: [embedded video]

Using a 2,000-year-old book to judge other people seems really out of date, you know :).

Neeyik said: "Just been playing about with this too, at different resolutions. Core i7 9900K, 32 GiB DDR4, Titan X Pascal... 720p = 15680, 1080p = 8715, 1440p = 5260, 4K = 2540... It's an impressive piece of work, despite the obvious limitations that it's a tech demo and not a game."

My OCed 2080 Ti with an 8700K and 32GB RAM scores:
Ultra RT
1080p: 15400
1440p: 9769
 
It's a Titan X (Pascal) - the first one released, rather than the later Titan Xp. Here's some results with core clock changes, tested at 1440p Ultra:

1658 MHz = 4900
1860 MHz = 5310
2050 MHz = 5620

So from the lowest to the highest clock, a change of 24% results in a 15% change in score. Repeating the process with the RAM clock:

4810 MHz = 5200
5000 MHz = 5310
5200 MHz = 5350

Here a change of 8% in RAM speed (4810 to 5200) results in a 3% score change. Interestingly, the test used 6.3 GiB of video memory (or rather, that's how much the application allocated).
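Expressed as a sensitivity (relative score change per relative clock change), using only the numbers above:

```python
# Scaling sensitivity from the Titan X Pascal results above: ratio of the
# relative score change to the relative clock change.
def sensitivity(clk0: float, score0: float, clk1: float, score1: float) -> float:
    return (score1 / score0 - 1) / (clk1 / clk0 - 1)

print(f"core clock: {sensitivity(1658, 4900, 2050, 5620):.2f}")  # ~0.62
print(f"RAM clock:  {sensitivity(4810, 5200, 5200, 5350):.2f}")  # ~0.36
# The score moves ~0.6% per 1% of core clock but only ~0.35% per 1% of memory
# clock, suggesting the test leans more on compute than on bandwidth here.
```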
 


My 2080 Ti is a non-A chip, so there certainly are higher-clocked 2080 Tis out there. A 75% improvement from Pascal to RTX Turing would be impossible without the use of RT cores. Do you think that somehow the driver would delegate the ray tracing instructions to the RT cores even if that is not the way Crytek intended it anyway?
Another example: the RTX 2060 is 32% faster than the 1660 Ti, even though for non-RT workloads the difference between them is 20%.
 
My 2080 Ti is a non-A chip, so there certainly are higher-clocked 2080 Tis out there. A 75% improvement from Pascal to RTX Turing would be impossible without the use of RT cores.
Your 2080 Ti score is 89% higher than my Titan X's, but your card has a peak FP16 throughput 155% greater than mine, so if there are a lot of FP16 calculations going on, it would go a long way toward explaining the difference. FP32 performance is only 23% better, which will be the same for INT ops.

Do you think that somehow the driver would delegate the ray tracing instructions to the RT cores even if that is not the way Crytek intended it anyway?
Yes, it's possible, but only if the shader routines that Crytek are using contain instructions that can be accelerated on the RT cores.

Another example: the RTX 2060 is 32% faster than the 1660 Ti, even though for non-RT workloads the difference between them is 20%.
Some of this could be down to differences in SM count: the RTX 2060 has 30 SM units, whereas the GTX 1660 Ti has 24; even accounting for differences in clock speeds, the 2060 still has a 19% higher ops rate.

That's well short of 32%, of course, but if one factors in the 16% difference in memory bandwidth, there's enough architectural difference between the cards to explain some of the advantage the 2060 has over the 1660 Ti.
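Plugging in the SM counts from above with the cards' reference boost clocks and memory bandwidths (assumed here for illustration: 1680 MHz and 336 GB/s for the 2060, 1770 MHz and 288 GB/s for the 1660 Ti) reproduces those figures:

```python
# Theoretical throughput gap, RTX 2060 vs GTX 1660 Ti. SM counts are from the
# post above; the boost clocks (MHz) and memory bandwidths (GB/s) are
# reference-card figures, assumed here for illustration.
sm_2060,   clk_2060,   bw_2060   = 30, 1680, 336
sm_1660ti, clk_1660ti, bw_1660ti = 24, 1770, 288

ops_gap = ((sm_2060 * clk_2060) / (sm_1660ti * clk_1660ti) - 1) * 100
bw_gap  = (bw_2060 / bw_1660ti - 1) * 100
print(f"ops rate gap:  +{ops_gap:.1f}%")   # ~+19%, as stated above
print(f"bandwidth gap: +{bw_gap:.1f}%")    # +16.7%, the ~16% difference cited above
```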

There's one more difference, other than the above and RT cores, between the 2080 Ti and the Titan X Pascal (and the 2060 and the 1660 Ti), and that's Tensor cores. These accelerate matrix operations, so if there are a significant number of those taking place in the benchmark, then the RTX models will have an additional advantage there.

In Eurogamer's analysis of the benchmark, the Radeon 5700 XT ran pretty close to a 1080 Ti:


It's only marginally behind the 1080 Ti in terms of FP32 performance but murders it for FP16; fill rate, texturing, and primitive throughput are all just a few % behind too. This lends weight to the idea that RTX cards, whether through the RT or Tensor cores, are gaining an additional advantage.
 
Here’s my benchmark:


AMD Athlon X4 845 (yeah, an Excavator-architecture CPU in 2019)
16GB DDR3
256GB SSD + 1TB HDD
RX 560
600W non-80+ PSU
 