Cyberpunk 2077 DLSS + Ray Tracing Benchmark

I'll reiterate: if you have eyes, you can see a blatant and large improvement with RT turned on. Having working eyes does not make you an Nvidia fanboy.

I've realised why there are so many people lying in the comments section here and falsely claiming that turning RT on doesn't make much difference: it's because the multi-billion-dollar corporation they fanboy over doesn't have this advantage yet.

The cancer of fanboyism has literally blinded people.

Thanks for assuming that I am an AMD fanboi, but the reality is, I am not.

What I am is someone who despises companies that abuse their own customers, like Nvidia has done over and over. And since people like you need to be hand-held to the info:

PhysX blocked by Nvidia if an ATI card is found: https://hardforum.com/threads/physx-on-nvidia-cards-wont-work-if-an-ati-card-is-used.1451689/

Nvidia screws users who use VFIO: https://heiko-sieger.info/graphics-cards-amd-vs-nvidia/

Besides that point, I was clear about why it's not worth falling for the marketing hype: the hardware is simply not there, as all these reviews keep pointing out.

Nobody is lying, but it's clear that your fanboyism and your selfish attitude of "I have the money, so I will pay for an overpriced GPU that doesn't make sense" are a big part of the problem with today's world: personal and immediate satisfaction above all else.
 
Just because you don’t place value in these products doesn’t make others stupid for doing so.

In fact, calling other people stupid is usually the best way to alienate them and make yourself look desperate. And you really have.

I'm going to continue buying and enjoying cards that cost more money than you approve of, and there isn't a damn thing you can do about it.

But I do wonder why you bother coming here. Are you really that pathetic? Why not go to another forum and tell them they are stupid for spending more than you approve of? Why not a car forum? You must be shaking with rage when you see how much they spend on things like a performance exhaust. Way more than these cards cost!

If you could be so gracious as to tell us how much we should spend for you not to think we are stupid, then please enlighten me. I could use a good laugh.
 
^ It's truly ironic that you lack the self-awareness to see exactly how much the second paragraph contradicts the first... ;)
There’s a difference between enjoying something and being emotionally invested. Perhaps you are too simple to grasp this though.
 
Since the RTX 3090 gets 19 FPS at max everything without DLSS at 4K, we will need more than triple the RT performance to get to 4K 60 FPS. The best-quality DLSS setting gets 35 FPS at 4K, so there we still need close to double that performance.
I don't see a 2x performance jump coming for at least another 2 to 3 years, especially when current generational leaps are around 50% gains. E.g. RDNA 3 will likely show a similar generational improvement over RDNA 2 as RDNA 2 did over its predecessor. That kind of improvement would have to be an annual thing to hit the 2-to-3-year mark.
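As a rough sanity check of those numbers, here is a back-of-the-envelope sketch. It assumes a flat 50% RT performance gain per generation (an assumption, not a roadmap); the FPS figures are the ones quoted above:

    import math

    # Figures quoted above: RTX 3090 at 4K, everything maxed.
    fps_native = 19    # no DLSS
    fps_dlss_q = 35    # DLSS Quality
    target_fps = 60
    gen_gain = 1.5     # assumed: +50% RT performance per generation

    for label, fps in (("native 4K", fps_native), ("DLSS Quality", fps_dlss_q)):
        speedup = target_fps / fps
        gens = math.log(speedup) / math.log(gen_gain)
        print(f"{label}: need {speedup:.2f}x, i.e. ~{gens:.1f} generations at +50% each")
    # native 4K: need 3.16x, i.e. ~2.8 generations at +50% each
    # DLSS Quality: need 1.71x, i.e. ~1.3 generations at +50% each

So even with that compounding, native 4K 60 is roughly three generations out, which lines up with the 2-to-3-year estimate only if those gains come annually.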

The issue is partially that you still need a ton of processing to do the non-RT lighting effects. That's why NVIDIA is gradually adding the hardware and support for specific subsets of RT, rather than trying to replace everything at once (which, as you noted, we are nowhere near doing yet).

To me, this is like the mid-'90s, when GPUs first started to be a thing and you often had to choose what to sacrifice in-game based on whatever GPU you had. Same deal here; it'll take a good 5 years, but RT is on the way.
 

There was a reason for that. Running multiple display drivers was always a bit on the finicky side, and ATI/AMD went out of their way to mention they weren't going to offer any support for such configurations. And guess what? Stuff broke in this config, and it all ended up on NVIDIA's plate.

And I again note (for the millionth time) that even though it is proprietary NVIDIA tech, PhysX is an open standard that AMD is free to join and support any time they choose to do so.
 

Sorry, but no, and you know that.

Nvidia went out of their way to block their own cards, which was proven by the fact that rolling back to a previous driver or hacking the new one restored the feature; there was a guy doing exactly that every time Nvidia released a new driver.

And by the time Nvidia made PhysX an open standard, it was too late, since the industry had already moved on, precisely because of Nvidia's greediness.

Don't make invalid excuses on Nvidia's behalf.
 
Gotta go big or go home

So... you leaving soon?

Seriously, to me ray tracing is a distraction in games, not a benefit. I'm just not willing to put up with new technology for technology's sake if it doesn't have enough benefit. This is far from beneficial, as it's expensive both in dollars and in frame rates. Plus, I really just don't see enough difference to make it worth my time or effort. If this is "big", count me out; I'll stay put for now.
 
Well, from those images it's very clear that I wouldn't notice ray tracing (or the lack thereof) during gaming, because I have to look pretty damn hard to see the difference in the first place. There are differences, but they're so slight that I'm definitely not going to care one way or the other. I'll probably get the game in about a month's time, and hopefully by then what I call "the early adopter headaches" (aka bugs) will be ironed out and it will just run correctly.

The game looks great with or without ray tracing (which is what I expected anyway), so, good enough.
 
The good thing about RT is that it won't get old. Even in 30 years, graphics cards won't be able to do it properly, because the pipeline will change and everything will be rendered through ray tracing. Not just shadows or reflections, but every single pixel in the picture. And RT won't be limited to just one bounce, like now, but maybe 5 ray bounces per pixel, plus extra bounces for high-quality anti-aliasing.

Considering that each ray per pixel requires exponentially more computation power, we can't expect to hear "this card is too fast" in the next 30 or more years. And after that... full physics rendering is coming. Meaning everything is rendered the same way: graphics, sound, movement, everything using the same formula. Just like in the real world.
 
RT won't be limited to just one bounce, like now, but maybe 5 rays...considering that each ray per pixel requires exponentially more computation power...
I imagine you like the rolling-thunder sound of "exponentially more computation power", but that isn't the case at all: it's a simple linear function. And current raytracing isn't limited to "one bounce" (the recursion depth) now, either, but is software controllable. Finally, your statement about "full physics rendering" seems to indicate you're not exactly aware of what those terms mean, as optics and dynamics are both branches of physics ... yet there is no one all-encompassing formula that we can use to render "graphics, sound, movement, everything".
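A minimal cost model makes the point. This is a sketch assuming a non-branching, path-tracing-style renderer where each sample continues as a single ray per bounce (a classic Whitted-style tracer that spawns both a reflection and a refraction ray at every hit would indeed grow exponentially with depth, which may be where the confusion comes from):

    # Total rays per frame: linear in both samples per pixel and bounce depth.
    def rays_per_frame(width, height, samples_per_pixel, max_bounces):
        # Each sample traces 1 primary ray plus up to max_bounces secondary rays.
        return width * height * samples_per_pixel * (1 + max_bounces)

    for bounces in (1, 2, 5):
        print(bounces, rays_per_frame(3840, 2160, 1, bounces))
    # 1 -> 16588800, 2 -> 24883200, 5 -> 49766400: growth is linear, not exponential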
 
The good thing about RT is that it won't get old. Even in 30 years, graphics cards won't be able to do it properly...
I get what you're saying, but I think that 30 years is too long. I foresee the Digital Age only lasting another 20 years AT MOST. The Quantum Age is at hand, and comparing even a small quantum computer to a modern digital supercomputer is like comparing a smartphone to a mainframe of the 1960s. Remember that the computer used by NASA to land on the Moon only had 4 kB of unbelievably slow RAM, but to the people at that time it wasn't "4k"; it was probably a "whopping" 4,096 bytes. The increase in computing capability is so gigantic that it's impossible to wrap our minds around it. We're talking about a global paradigm shift here.

Quantum computers are so powerful that a full virtual-reality interface would be no more strenuous to them than operating a keyboard and mouse is to a modern digital computer. Forget ray-traced images; we're talking full ray-traced virtual 3D environments that are photorealistic.

In our lifetimes, we will see the precursor to the holodeck, and the idea of using video cards to ray-trace images on a screen will seem adorably quaint. Kind of like the way we look at an old typewriter now.
 
Okay, if I'm playing on a 4K television, I have the option of playing the game at 1440p in Quality mode, or I could play the game in 4K Performance mode. I believe 4K Performance starts from a 1080p internal render, while 1440p Quality starts from roughly 960p. Wouldn't that mean that 4K Performance should look better on a 4K television than 1440p Quality? If I do 1440p, the image is going to be upscaled again after DLSS does its thing. If DLSS is really better than plain upscaling (and it is), then 4K Performance should still look better than 1440p Quality on a native 4K screen.
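For reference, here is that arithmetic with the commonly cited DLSS 2.x per-axis scale factors, roughly 2/3 for Quality and 1/2 for Performance (treat the exact factors as an assumption):

    # Approximate internal render resolution per DLSS mode (assumed factors).
    SCALE = {"Quality": 2 / 3, "Performance": 1 / 2}

    def internal_res(width, height, mode):
        s = SCALE[mode]
        return round(width * s), round(height * s)

    print(internal_res(3840, 2160, "Performance"))  # (1920, 1080)
    print(internal_res(2560, 1440, "Quality"))      # (1707, 960)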
 
A good video showing the enormous difference RT makes in Cyberpunk. To my eyes, the difference between Low and High without RT is smaller than the jump from High to RT Ultra.

If you still think RT makes no difference, you definitely need your eyes tested.

 
I get what you're saying, but I think that 30 years is too long. I foresee the Digital Age only lasting another 20 years AT MOST. The Quantum Age is at hand, and comparing even a small quantum computer to a modern digital supercomputer is like comparing a smartphone to a mainframe of the 1960s.

I think I'm wrong too, but because 30 years is TOO SHORT. Several reasons:

1. Quantum computers rely on effects so volatile that they need to be kept in extremely isolated bubbles. Those special conditions are hard to maintain, making them very expensive.

2. To be practical enough for video games, those future quantum computers will need a whole lot of qubits. Right now, even the best quantum computers have very few qubits. Even with exponential growth, I don't think quantum computers will have enough qubits for high-quality gaming within 30 years, at a price a mortal could afford.

3. Quantum computers are very good at breaking encryption, generating cryptocoins, and all kinds of dangerous stuff. I doubt the governments and private organizations that currently use quantum computers will allow anything very advanced to be available to everyone. Not that soon.

4. For a moment, let's imagine all the above problems are solved and in 30 years everyone gets their own quantum supercomputer that has enough qubits for high-quality computing and costs as much as a normal video card (doesn't sound convincing, right?). There's still no "fear" that such a computer will be able to realistically represent our actual world, simply because our real world is way too big. And I don't mean the universe; I mean just one average city. It's too big. Every little thing in our world is made of zillions of real quantum particles. So how do you think a computer with 10^9 qubits can simulate a world of 10^999 qubits? I can tell you how: with lots of workarounds and cheating that sooner or later start looking unconvincing.

So, no matter how powerful the computers we make, they will never be able to realistically simulate our real world. If you think something looks convincing, just zoom in and you'll see degradation. Or just zoom out and you'll see the outer limits. It's a fundamental problem of simulation, regardless of the hardware we use.
 
I imagine you like the rolling-thunder sound of "exponentially more computation power", but that isn't the case at all: it's a simple linear function. And current raytracing isn't limited to "one bounce" (the recursion depth) now, either, but is software controllable. Finally, your statement about "full physics rendering" seems to indicate you're not exactly aware of what those terms mean, as optics and dynamics are both branches of physics ... yet there is no one all-encompassing formula that we can use to render "graphics, sound, movement, everything".

Actually, everything in the real world is just the movement and interaction of particles and EM waves. A small set of rules applied to a huge number of particles.

But running such a simulation on today's computers would be futile. Fortunately, humans have special receptors for only a narrow range of air vibrations and EM waves, which makes it possible to cheat by generating picture and audio separately, and using only a sub-range of that narrow range.

It works, but only as long as you're not modifying the environment. As soon as you start making a particle-based environment, you notice how weak our computers are. Just look at the "particles" in Minecraft: 1 x 1 x 1 meter. That's hardly a particle, and yet it's the smallest "particle" we can work with on our average gaming computers.

The same goes for RT. Our RT scenes aren't really fully RT-rendered; just some parts use ray tracing. Rendering an entire scene using RT or radiosity could take a few minutes, or even a few hours, per picture. That's not acceptable. We need the scene to render in 0.016 seconds, not 1,600 seconds.

So first we need to conquer rendering an entire scene in real time using high-quality ray tracing. Then we can start switching to full simulation mode, which won't happen anytime soon.
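Some rough numbers on why the frame budget is the bottleneck. The ray throughput below is a hypothetical ballpark (vendors have quoted figures on the order of 10^10 rays/second for current RT hardware), not a measured value:

    # How many rays per pixel fit in a 4K 60 FPS frame budget?
    rays_per_second = 1e10         # assumed ballpark throughput
    frame_budget_s = 1 / 60        # ~0.0167 s per frame
    pixels_4k = 3840 * 2160        # 8,294,400 pixels

    rays_per_pixel = rays_per_second * frame_budget_s / pixels_4k
    print(rays_per_pixel)          # ~20 rays per pixel per frame
    # Offline path tracers commonly spend hundreds to thousands of samples per pixel.

A couple of dozen rays per pixel is enough for a few effects plus denoising, but nowhere near a full high-quality render, which is exactly the gap described above.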
 
Technology teaches us we should never say 'never'.

Well, in this case we can freely say never, because simulating world X using something that is inside world X would mean that 1 + 2 equals 2. And I have a feeling that 1 + 2 will never be 2.
 
Well, in this case we can freely say never, because simulating world X using something that is inside world X would mean that 1 + 2 equals 2.
Your premise has multiple flaws. Virtual reality doesn't require simulation of the entire universe, but only one tiny corner of it. Even that corner doesn't require a complete simulation, but simply what one observer can see and hear.

And even that doesn't require mathematically perfect results, but only less error than a human's sense organs can perceive. We're not that far off today.
 

Your premise has many flaws too:

1. MS Flight Simulator is trying to cover our entire planet. Is that the "little corner" you talked about? Imagine if you could step out of the airplane, enter all the houses, move objects in the houses, drive cars, etc. On the entire planet. Now... that would be the biggest free-roaming game ever made. And I'm pretty sure it's out of the reach of our modern technology.

2. Now, let's expand on that. We'll create a sim world out of little spheres, which are much larger than atoms, molecules, viruses or even bacteria. Let's make them 0.1 mm in diameter; that's barely visible to the naked eye. Now, imagine a medium-size city like Stockholm, but without suburbs. That's approximately 28 km². Let's say we destroy all the houses, all the things in those houses, all the rocks and trees in the parks, all the cars, all living creatures, everything in that city. We turn the entire city into powder. It's safe to assume we would get at least a 10 cm layer of dust and material covering those 28 km². That's the stuff that makes up the entire city. Now, let's convert that stuff into our little spheres: 28 km² x 0.0001 km = 0.0028 km³, which is 2,800,000 m³, which is 2,800,000,000,000,000,000 little spheres (0.1 mm in diameter).

Good luck finding a computer that can compute collisions between those spheres. Even just storing that city would be futile with our modern or near-future tech. For each little sphere you need to store its 3D position, velocity vector, color, reflectivity, transparency, and the various coefficients of the physical forces and properties supported by the sim engine. Let's assume that's just 64 bytes per particle.

To store Stockholm (without suburbs) you'd need 162,981,450 terabytes. And that's a working model, so this would be in RAM, not on an SSD.
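Those figures do check out, assuming binary terabytes (2^40 bytes) and spheres packed on a 0.1 mm grid; here is a quick reproduction:

    # Reproducing the Stockholm back-of-the-envelope numbers.
    volume_m3 = 28e6 * 0.1         # 28 km^2 x 10 cm layer = 2,800,000 m^3
    spheres_per_m3 = 10_000 ** 3   # 0.1 mm spheres on a grid: 1e12 per m^3
    bytes_per_sphere = 64          # assumed per-particle state

    n_spheres = volume_m3 * spheres_per_m3
    tb = n_spheres * bytes_per_sphere / 2**40   # binary terabytes
    print(f"{n_spheres:.1e} spheres, {int(tb):,} binary terabytes")
    # 2.8e+18 spheres, 162,981,450 binary terabytes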

Good luck rendering that model using ray tracing, even with the best continuous LOD-reduction algorithms.

BTW, the above calculation didn't take the ground into account, but the simulation should simulate at least 50 cm of ground so you can dig a little hole in it. Although graveyards wouldn't look good with only 50 cm of depth.

And of course, that was just Stockholm. A massive multiplayer game should cover the entire Earth, where hundreds of millions of people could be building or digging in any part of the world, in parallel, and the results would be visible to everyone else instantly. So how many SSDs do you need to store the top layer of our planet at 0.1 mm precision?

And how fast would those SSDs need to be, considering the world is being constantly updated? How much RAM do you need for the working model?
 
1. MS Flight Simulator is trying to cover our entire planet. Is that the "little corner" you talked about?
Yes. Compared to the entire universe, it's an infinitesimal fraction.

We'll create a sim world out of little spheres....2,800,000,000,000,000,000 little spheres...good luck finding a computer that can compute collisions between those spheres.
If your premise had any validity, the entire concept of statistical mechanics would fail. Google the term, and you'll understand your error. We don't need to model every interaction in a macroscopic collection of objects to precisely simulate its dynamics.
 

Sure, but you don't understand that games are already doing almost every trick in the book to achieve acceptable frame rates while simulating huge worlds with graphics that are easy on the eye, pushing the limits of our modern crap computers (because, let's face it, they are still slow).

But you're trying to go beyond that. If what you claim is true, then why doesn't Microsoft simply release a new version of Minecraft where the world is made of 1 mm particles, instead of those horrible 1 m³ cubes?

Come on, give them a few pointers from your vast knowledge of statistical mechanics on how to improve their horrible 1 m resolution to 1 mm. If Minecraft had terrain made of 1 mm particles, I'd install it immediately. I don't even like games of that type, but I'd enjoy the technology.
 