So much for RTX: Crytek reveals real-time ray tracing demo for AMD and Nvidia hardware

No, you are wrong. That's what APIs are made for: abstracting the hardware so the same software can run on different hardware designs. That's why the same game can run on both NVIDIA and AMD, or why the first Unreal runs on today's hardware (which is completely different from what was available at the time).
No company does anything for charity (except actual charity initiatives ;)). Do you think AMD works for charity?
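To make the abstraction point concrete, here's a toy sketch of what an API boundary buys you (the interface and backend names are invented for illustration, not any real graphics API):

#include <iostream>
#include <memory>

// Hypothetical API boundary: the "game" only ever talks to this interface.
struct GpuDevice {
    virtual ~GpuDevice() = default;
    virtual void drawTriangles(int count) = 0;
};

// Each vendor/driver supplies its own implementation behind the interface.
struct VendorADevice : GpuDevice {
    void drawTriangles(int count) override {
        std::cout << "Vendor A path: drawing " << count << " triangles\n";
    }
};

struct VendorBDevice : GpuDevice {
    void drawTriangles(int count) override {
        std::cout << "Vendor B path: drawing " << count << " triangles\n";
    }
};

// Written once against the abstraction; never changes per vendor.
void renderFrame(GpuDevice& gpu) {
    gpu.drawTriangles(1024);
}

int main() {
    std::unique_ptr<GpuDevice> gpu = std::make_unique<VendorADevice>();
    renderFrame(*gpu);   // same game code...
    gpu = std::make_unique<VendorBDevice>();
    renderFrame(*gpu);   // ...completely different hardware behind the API
}

Swap the backend and the game code doesn't change; that's the trick Direct3D, Vulkan and OpenGL pull off at a much larger scale.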

You mean like how GameWorks is also compatible with AMD hardware but highly optimized for Nvidia hardware? Or maybe how PhysX can also run on the CPU but produces a slideshow when not run on Nvidia hardware? Maybe HairWorks is a better example?

AMD has looked like a charity these last few years, not because it is one, but because it lacks money and market share. Thanks to AMD you have Vulkan and DX12, and not an optimized DX11 that would have kept favoring Intel (high IPC) and Nvidia (drivers better optimized for high-IPC CPUs). Thanks to AMD you also, finally, have compatibility with Adaptive-Sync, and you don't have to pay a ridiculous amount of money for that G-Sync module. And let's not forget that AMD's TressFX was not killing performance when running on an Nvidia GPU.
 
Lol, back to the conspiracy theory? Remember when AMD was banking on Mantle and DX12 since Nvidia didn't have async compute on Maxwell? Yeah, Nvidia fixed that with Pascal, and now Turing is getting even more efficient with AMD tech than AMD itself (Wolfenstein II, Far Cry 5 and Rainbow Six Siege have AMD tech written all over them). If anything is to blame it's AMD and its stagnation; hopefully they can turn it around with Navi though...

And RTX is here to stay. If anything, I think DXR is easier to implement correctly than the shitty DX12, which so far has one game that runs better than in DX11 (Strange Brigade, **** game though).

It's not a conspiracy theory. It's business decisions made by Nvidia 10+ years ago. Why do you think they bought Ageia in the first place? Why were they locking PhysX when it was proven to run perfectly with an AMD GPU as primary? Do you remember all that tessellation in Crysis 2 (if I remember the game correctly)? Maybe the removal of DX10.1 from the first Assassin's Creed? Just a couple of examples.

And now that you mention it, isn't it funny how all of Nvidia's techs are proprietary, closed and highly UNoptimized for all other hardware, while AMD's techs are free and open and run as well or even better on other hardware?

You can't blame a company that is successfully fighting two huge companies with a fraction of their budget while keeping at least a duopoly alive in CPUs and GPUs. But you can blame users who support monopolistic tactics, attack others who suggest AMD hardware even when that hardware is perfect for the job, or keep asking AMD to produce something good only in the hope that Intel and Nvidia hardware will become cheaper. You can blame users who would rather cut off their hands than give money to AMD, even if that company finally produces the best hardware. Because they love to go with the biggest brand. Because they feel they are paying for prestige, not just hardware.


P.S. Just to be clear, I do understand Nvidia's monopolistic business practices and the need for them. Nvidia is a giant standing on one strong foot. A really strong foot, but just one. Cut it and Nvidia will fall on its face. Nvidia tried to build another business in AI, but unfortunately for them, AI is becoming too crowded and GPUs are not the best hardware for the job. Buying Mellanox could help them make that single foot stronger, to withstand the pressure from losing the low-end GPU market (today) and the mid-range GPU market (tomorrow) to integrated graphics.
 
A $300 GPU can do this today? Color me impressed. It took DICE and Nvidia months after launch to get a playable framerate in BFV on a $1,200 GPU. RTX is the modern-day GameWorks/PhysX. Nvidia found a way to gimp us again. Fool me once, shame on you; fool me twice, shame on me.

DICE didn't have RTX hardware until very late in development...


"It just works..." <--------
 
In my experience, CryEngine sucks. It's an overbearing, wasteful engine. I wouldn't believe anything these guys claim until you actually play it. Truth be told, CryEngine has been a voracious memory hog since its creation: good graphics, but way too much of a power hog for the end result. So their claim that they can pull off ray tracing with very few resources leaves me skeptical, at the very least. Especially with an early reveal video, very little actual information and no real in-game footage. Sounds like more BS from them, like Crysis being anything more than eye candy. I think they are showing this long before it's finished because they are afraid Nvidia is going to get too big of a head start, hoping wild promises will slow the implementation.

Well my experience is that most games these days are what you just described.
 
We've had GPU raytracing for years already (V-Ray RT, Octane, Redshift, etc.) and I've been saying so the entire time, since the RetarTX cards emerged. Sure, they are faster than previous GPUs, and sure, the cores do accelerate some math. But they were never necessary for raytracing - something NO game has in full yet, at all. Raytracing isn't just for reflections and refractions, nor just for GI, nor for shadows. A real raytracer traces everything: diffuse, specular, bump, reflection, refraction, glossiness (for reflections and refractions separately), and a dozen or more other channels. It traces depth. It even traces motion blur. We've had this tech for ten years in the CGI world.

But just not in realtime. As GPUs get faster, it gets closer and closer, but then the speed-ups allow for more and more complex scenes, which slow things down again. V-Ray Next is pretty much the best implementation so far, though Redshift is also excellent. But it's really nice to see Crytek slap Nvidia around on this topic - they deserve the mockery, even if CryEngine's implementation is still young and rough.
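For anyone who hasn't poked at an offline renderer: the core trace loop really is tiny, the cost is in how many channels, bounces and samples you push through it. A deliberately minimal sketch (one sphere, one light, one mirror bounce, made-up scene values - not code from any shipping renderer):

#include <cmath>
#include <cstdio>

struct Vec { double x, y, z; };
static Vec add(Vec a, Vec b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec mul(Vec a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec norm(Vec a) { return mul(a, 1.0 / std::sqrt(dot(a, a))); }

// Ray/sphere hit test: returns distance along the (normalized) ray, or -1 on a miss.
static double hitSphere(Vec orig, Vec dir, Vec center, double radius) {
    Vec oc = sub(orig, center);
    double b = dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - c;
    if (disc < 0) return -1.0;
    double t = -b - std::sqrt(disc);
    return t > 0 ? t : -1.0;
}

// One diffuse sphere lit by a single light, plus one mirror bounce.
static double trace(Vec orig, Vec dir, int depth) {
    const Vec center = {0, 0, -3};
    const double radius = 1.0;
    const Vec lightDir = norm({1, 1, 1});
    double t = hitSphere(orig, dir, center, radius);
    if (t < 0) return 0.1;                       // background
    Vec p = add(orig, mul(dir, t));
    Vec n = norm(sub(p, center));
    double diffuse = std::fmax(0.0, dot(n, lightDir));
    double color = 0.8 * diffuse;
    if (depth > 0) {                             // the "traced reflection" part
        Vec r = sub(dir, mul(n, 2.0 * dot(dir, n)));
        color += 0.2 * trace(add(p, mul(n, 1e-4)), norm(r), depth - 1);
    }
    return color;
}

int main() {
    // Shade one ray through the middle of the screen, just to show the flow.
    double c = trace({0, 0, 0}, norm({0, 0, -1}), 1);
    std::printf("shaded value: %.3f\n", c);
}

Every extra channel the CGI packages handle (glossy reflections, refraction, motion blur, etc.) is basically more work bolted onto that shading step, which is why realtime budgets evaporate so quickly.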
 
Lol, back to the conspiracy theory? Remember when AMD was banking on Mantle and DX12 since Nvidia didn't have async compute on Maxwell? Yeah, Nvidia fixed that with Pascal, and now Turing is getting even more efficient with AMD tech than AMD itself (Wolfenstein II, Far Cry 5 and Rainbow Six Siege have AMD tech written all over them). If anything is to blame it's AMD and its stagnation; hopefully they can turn it around with Navi though...

And RTX is here to stay. If anything, I think DXR is easier to implement correctly than the shitty DX12, which so far has one game that runs better than in DX11 (Strange Brigade, **** game though).

Actually, Nvidia did try to claim it had async compute on Maxwell, only it failed to disclose that it was emulated with zero performance boost, in typical Nvidia fashion. Thanks for reminding me of that. It's honestly hard to keep track of all the times Nvidia has made misleading statements.
 
I mean, do they have a download of the demo that we can run ourselves? Something I like about Nvidia's demos and Epic's Unreal Engine demos is that you can download them and run them on your own hardware.

Would be kinda cool to download this and see how well it runs on hardware that's faster than the Vega 56.

Also, I wonder if they could accelerate the ray tracing using Nvidia's dedicated hardware instead? When an RTX card is detected, move the ray-tracing pipeline onto it, etc...
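Something like this is the capability check I'd imagine on the Windows side - just a sketch using the standard D3D12/DXR feature query, not anything Crytek has said they actually do:

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    // Create a D3D12 device on the default adapter.
    ComPtr<ID3D12Device5> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        std::puts("No suitable D3D12 device");
        return 0;
    }

    // Ask the driver whether hardware-accelerated DXR is exposed.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    bool hasDxr = SUCCEEDED(device->CheckFeatureSupport(
                      D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5))) &&
                  opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;

    // Hypothetical engine-side switch: DXR path if present, compute fallback otherwise.
    std::puts(hasDxr ? "Use DXR-accelerated ray tracing path"
                     : "Fall back to the compute-based path");
}

If the tier comes back as 1.0 or better you'd route the reflection rays through DXR; otherwise you keep whatever voxel/compute path the demo is using on the Vega 56.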

Kinda cool if they really have pulled this off though! I'd buy the original Crysis again if they updated it with ray tracing and moved it onto a newer CryEngine that uses more than two threads.
Maybe Techspot added this to the article after seeing your post, but the final line in the article states: "In all likelihood, it will be a long time before anyone can play around with Crytek’s ray tracing and unfortunately the tech demo shown in the video cannot be downloaded so we can try it on our own hardware, but we’re all waiting with bated breath."
 
Lol, back to the conspiracy theory? Remember when AMD was banking on Mantle and DX12 since Nvidia didn't have async compute on Maxwell? Yeah, Nvidia fixed that with Pascal, and now Turing is getting even more efficient with AMD tech than AMD itself (Wolfenstein II, Far Cry 5 and Rainbow Six Siege have AMD tech written all over them). If anything is to blame it's AMD and its stagnation; hopefully they can turn it around with Navi though...

And RTX is here to stay. If anything, I think DXR is easier to implement correctly than the shitty DX12, which so far has one game that runs better than in DX11 (Strange Brigade, **** game though).

Actually, Nvidia did try to claim it had async compute on Maxwell, only it failed to disclose that it was emulated with zero performance boost, in typical Nvidia fashion. Thanks for reminding me of that. It's honestly hard to keep track of all the times Nvidia has made misleading statements.

I agree 100%. Also, anyone remember when Nvidia used wood screws? Anyone remember some of their video cards melting and some catching fire? Pepperidge Farm remembers. How about a driver that bricked cards, and drivers that fried them by disabling the fans? I remember being a kid and hearing so many complaints when I worked at Staples and Best Buy.
 
Maybe Techspot added this to the article after seeing your post, but the final line in the article states: "In all likelihood, it will be a long time before anyone can play around with Crytek’s ray tracing and unfortunately the tech demo shown in the video cannot be downloaded so we can try it on our own hardware, but we’re all waiting with bated breath."

Except the example files are already up, in the CryEngine launcher. ;)
 
Well, you can clearly see it's fake ray tracing because they made a simple mistake.
Instead of mirroring the reflection, they flipped it.
I don't expect anyone to understand, because up to now no one has spotted the error and the fakeness.
https://I.imgur.com/JtAOGH0.png

So wait, you scanned through the video and found a single mistake by the devs in an experimental upcoming feature? Color me surprised. If that's how low your bar is, I'm sorry to say that Nvidia's implementation would not meet your standard either.

The only thing you've proven here is that they made a single mistake. The video card still had to render that reflection no matter the orientation. The reflection is still clearly high quality and being done in real time.

So please enlighten everyone here as to why incorrect orientation of a single reflection invalidates everything else?
 
Well, you can clearly see it's fake ray tracing because they made a simple mistake.
Instead of mirroring the reflection, they flipped it.
I don't expect anyone to understand, because up to now no one has spotted the error and the fakeness.
https://I.imgur.com/JtAOGH0.png

So wait, you scanned through the video and found a single mistake by the devs in an experimental upcoming feature? Color me surprised. If that's how low your bar is, I'm sorry to say that Nvidia's implementation would not meet your standard either.

The only thing you've proven here is that they made a single mistake. The video card still had to render that reflection no matter the orientation. The reflection is still clearly high quality and being done in real time.

So please enlighten everyone here as to why incorrect orientation of a single reflection invalidates everything else?

It shows it was user error, so not the automated process they would have us believe.
 
It's manually done and not ray traced at all.
If it were, then it would be 100% correct 100% of the time.
But it only takes a single mistake to prove their method isn't ray traced at all; it's manually configured.
The whole point is to remove the need for cubemaps and planar reflections.
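And that's exactly why the flip stands out: a genuinely traced reflection direction falls straight out of the math at every hit point, so there is no orientation for an artist to get wrong by hand. Rough sketch of the idea (toy vectors, not CryEngine code):

#include <cstdio>

struct Vec { float x, y, z; };

// Reflect an incoming direction d about a surface normal n: r = d - 2(d.n)n.
// A ray tracer derives the reflected image from this per hit point,
// so the mirroring comes for free and can't be "flipped" by hand.
static Vec reflect(Vec d, Vec n) {
    float k = 2.0f * (d.x * n.x + d.y * n.y + d.z * n.z);
    return {d.x - k * n.x, d.y - k * n.y, d.z - k * n.z};
}

int main() {
    Vec d = {0.0f, -1.0f, 0.0f};   // toy example: ray going straight down
    Vec n = {0.0f, 1.0f, 0.0f};    // onto a floor facing up
    Vec r = reflect(d, n);
    std::printf("reflected: (%g, %g, %g)\n", r.x, r.y, r.z);  // (0, 1, 0)
}

Hand-placed cubemaps and planar reflections are where orientation mistakes creep in, which is the whole reason to replace them in the first place.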
 
It shows it was user error, so not the automated process they would have us believe.

Or, like I pointed out, not final technology.

"If it was then it would be 100% correct 100% of the time."

You must have missed the part where Nvidia's RT tech had issues as well. It didn't project correctly on certain surfaces because the material properties were set incorrectly, and sometimes it failed to project reflections at all.

Does this make Nvidia's technology "fake"? Nope. Your point is disproven.
 
Probably some optimization error; ray tracing is computationally expensive, after all. However, as GamersNexus demonstrated in their video, a demo with static objects can fake GI and reflections; it is only with interactive objects that you see the real benefit of global illumination and reflections.

 
No, you are wrong. That's what APIs are made for: abstracting the hardware so the same software can run on different hardware designs. That's why the same game can run on both NVIDIA and AMD, or why the first Unreal runs on today's hardware (which is completely different from what was available at the time).
No company does anything for charity (except actual charity initiatives ;)). Do you think AMD works for charity?

You mean like how GameWorks is also compatible with AMD hardware but highly optimized for Nvidia hardware? Or maybe how PhysX can also run on the CPU but produces a slideshow when not run on Nvidia hardware? Maybe HairWorks is a better example?

AMD has looked like a charity these last few years, not because it is one, but because it lacks money and market share. Thanks to AMD you have Vulkan and DX12, and not an optimized DX11 that would have kept favoring Intel (high IPC) and Nvidia (drivers better optimized for high-IPC CPUs). Thanks to AMD you also, finally, have compatibility with Adaptive-Sync, and you don't have to pay a ridiculous amount of money for that G-Sync module. And let's not forget that AMD's TressFX was not killing performance when running on an Nvidia GPU.

AMD's TressFX was literally unplayable on the GTX 680 until Nvidia optimized for it on their own; you had frame drops to 15 fps, and the GTX 680 (high end) was slower than the 7870 (mid-tier card).

https://www.techspot.com/review/645-tomb-raider-performance/page4.html


Do I need to address your other tales, or was that enough for you?

Do I have to remind you of the first AMD-backed DX11 games, like Dragon Age II, where the GTX 580 was slower than the piss-poor AMD HD 5770, which is even slower than the old GTX 260, at Full HD?

DX11 is pretty well optimized and runs better than DX9 and DX10; it is AMD's fault that their driver has serious overhead issues under DX11.

As always, AMD is never to blame, it is always the others.
 
We've had GPU raytracing for years already (V-Ray RT, Octane, Redshift, etc.) and I've been saying so the entire time, since the RetarTX cards emerged. Sure, they are faster than previous GPUs, and sure, the cores do accelerate some math. But they were never necessary for raytracing - something NO game has in full yet, at all. Raytracing isn't just for reflections and refractions, nor just for GI, nor for shadows. A real raytracer traces everything: diffuse, specular, bump, reflection, refraction, glossiness (for reflections and refractions separately), and a dozen or more other channels. It traces depth. It even traces motion blur. We've had this tech for ten years in the CGI world.

But just not in realtime. As GPUs get faster, it gets closer and closer, but then the speed-ups allow for more and more complex scenes, which slow things down again. V-Ray Next is pretty much the best implementation so far, though Redshift is also excellent. But it's really nice to see Crytek slap Nvidia around on this topic - they deserve the mockery, even if CryEngine's implementation is still young and rough.

NVIDIA never claimed RTX was "necessary" for ray tracing; they claimed RTX would make some ray tracing tasks faster. Given this demo doesn't use RTX at all, it's impossible to prove or disprove NVIDIA's claim from it.
 
No, you are wrong. That's what APIs are made for: abstracting the hardware so the same software can run on different hardware designs. That's why the same game can run on both NVIDIA and AMD, or why the first Unreal runs on today's hardware (which is completely different from what was available at the time).
No company does anything for charity (except actual charity initiatives ;)). Do you think AMD works for charity?

You mean like how GameWorks is also compatible with AMD hardware but highly optimized for Nvidia hardware? Or maybe how PhysX can also run on the CPU but produces a slideshow when not run on Nvidia hardware? Maybe HairWorks is a better example?

AMD has looked like a charity these last few years, not because it is one, but because it lacks money and market share. Thanks to AMD you have Vulkan and DX12, and not an optimized DX11 that would have kept favoring Intel (high IPC) and Nvidia (drivers better optimized for high-IPC CPUs). Thanks to AMD you also, finally, have compatibility with Adaptive-Sync, and you don't have to pay a ridiculous amount of money for that G-Sync module. And let's not forget that AMD's TressFX was not killing performance when running on an Nvidia GPU.

AMD's TressFX was literally unplayable on the GTX 680 until Nvidia optimized for it on their own; you had frame drops to 15 fps, and the GTX 680 (high end) was slower than the 7870 (mid-tier card).

https://www.techspot.com/review/645-tomb-raider-performance/page4.html


Do I need to address your other tales, or was that enough for you?

Do I have to remind you of the first AMD-backed DX11 games, like Dragon Age II, where the GTX 580 was slower than the piss-poor AMD HD 5770, which is even slower than the old GTX 260, at Full HD?

DX11 is pretty well optimized and runs better than DX9 and DX10; it is AMD's fault that their driver has serious overhead issues under DX11.

As always, AMD is never to blame, it is always the others.

DAII was an odd case, given how it was heavily VRAM bandwidth bottlenecked. That's one area ATI/AMD has always focused on, and it showed in that particular title.

As a general rule, ATI/AMD does better in titles that are bottlenecked by memory bandwidth, while NVIDIA does better in titles that are dominated by shader performance. Problem is, more and more games are biased towards the latter, not the former.
 
You mean like how GameWorks is also compatible with AMD hardware but highly optimized for Nvidia hardware? Or maybe how PhysX can also run on the CPU but produces a slideshow when not run on Nvidia hardware? Maybe HairWorks is a better example?

Translation: NVIDIA makes technologies that make certain tasks perform faster on NVIDIA's own hardware, and you're upset AMD doesn't get any benefit for free from NVIDIA's internal R&D.

Take PhysX: the CPU libraries are basically used everywhere now; it's probably the most used physics API out there. The GPU portion, though, not so much. Fact is, GPUs aren't powerful enough to handle dynamic multi-object interactions, which get massively complicated very quickly. That's why most physics engines today look more or less identical to what they were a decade ago; we simply don't have enough computational horsepower for multi-object dynamics.
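To put a rough number on "massively complicated": naive rigid-body interaction checks scale with the number of object pairs, n(n-1)/2, so 100 debris chunks is about 5,000 pair tests per tick and 10,000 chunks is about 50 million. A toy illustration of the scaling (not PhysX code):

#include <cstdio>
#include <initializer_list>

// Naive broad-phase cost: every object tested against every other object.
static long long pairCount(long long n) { return n * (n - 1) / 2; }

int main() {
    // At 60 simulation ticks per second the pair tests per second explode fast.
    for (long long n : {100LL, 1000LL, 10000LL}) {
        std::printf("%6lld objects -> %12lld pairs/tick, %15lld pairs/sec\n",
                    n, pairCount(n), pairCount(n) * 60);
    }
}

Real engines cut that down with broad-phase culling, but the interaction cost is still what keeps "everything fully dynamic" out of reach.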

Hairworks has similar problems, since it's just PhysX doing a very specific thing.

AMD has looked like a charity these last few years, not because it is one, but because it lacks money and market share. Thanks to AMD you have Vulkan and DX12, and not an optimized DX11 that would have kept favoring Intel (high IPC) and Nvidia (drivers better optimized for high-IPC CPUs). Thanks to AMD you also, finally, have compatibility with Adaptive-Sync, and you don't have to pay a ridiculous amount of money for that G-Sync module. And let's not forget that AMD's TressFX was not killing performance when running on an Nvidia GPU.

DX12 was always going to come along, regardless of what AMD claimed.

TressFX does a lot less than Hairworks did; color me shocked that the performance impact was less than trying to dynamically perform physics on every single strand of hair on an object.

Likewise, FreeSync is a technically inferior solution to G-Sync and is DOA outside of gaming monitors due to being tied to DisplayPort. Both techs are likely DOA once HDMI 2.1 hits, though; HDMI's VRR implementation likely wins by default since it's mainlined in the HDMI spec.
 
Translation: NVIDIA makes technologies that make certain tasks perform faster on NVIDIA's own hardware, and you're upset AMD doesn't get any benefit for free from NVIDIA's internal R&D.

Take PhysX: the CPU libraries are basically used everywhere now; it's probably the most used physics API out there. The GPU portion, though, not so much. Fact is, GPUs aren't powerful enough to handle dynamic multi-object interactions, which get massively complicated very quickly. That's why most physics engines today look more or less identical to what they were a decade ago; we simply don't have enough computational horsepower for multi-object dynamics.

Hairworks has similar problems, since it's just PhysX doing a very specific thing.



DX12 was always going to come along, regardless of what AMD claimed.

TressFX does a lot less than Hairworks did; color me shocked that the performance impact was less than trying to dynamically perform physics on every single strand of hair on an object.

Likewise, FreeSync is a technically inferior solution to G-Sync and is DOA outside of gaming monitors due to being tied to DisplayPort. Both techs are likely DOA once HDMI 2.1 hits, though; HDMI's VRR implementation likely wins by default since it's mainlined in the HDMI spec.


Allow me to summarize.

You have fluid-dynamics (hair, fabric, water, etc.) and structural (buildings, ragdolls, projectiles, etc.) types of PhysX.

In real-time multiplayer games such as Battlefield, CPU PhysX has to be used, because it happens for all 64 players at the same time, such as a wall blowing out. But things such as a character's hair waving around in-game (HairWorks) aren't synced to anyone else's game... (it is solely for the end user's point of view, and is used for marketing and fluff...); it is proprietary to your hardware.
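In other words, the split looks roughly like this: gameplay-affecting physics lives in the replicated state every client has to agree on, while cosmetic effects stay purely local (a made-up structure for illustration, not anything from Frostbite or PhysX):

#include <cstdio>
#include <vector>

// State every player must agree on: simulated on the CPU/server and replicated.
struct ReplicatedWallChunk {
    float x, y, z;      // where the debris ended up
    bool  destroyed;    // gameplay-relevant: you can now shoot through the hole
};

// State only you ever see: simulated locally, never sent over the network.
struct LocalHairStrand {
    float sway;         // purely cosmetic wobble
};

int main() {
    std::vector<ReplicatedWallChunk> wall(64);   // goes through the netcode to all players
    std::vector<LocalHairStrand>     hair(2000); // stays on your own machine

    std::printf("replicated chunks: %zu, local-only strands: %zu\n",
                wall.size(), hair.size());
}

That's why GPU-only effects like HairWorks can be vendor-specific eye candy without breaking a 64-player match.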


Everyone knows that Microsoft's DirectML and DirectX Raytracing are better than Nvidia's proprietary solution. And all Nvidia is doing, now that their cards have real async compute, is trying to find a way to market that fact without stepping on the 1080 Ti's toes.

DX12 is all of a sudden important because Nvidia says so..? Or because they now have async?
 
Translation: NVIDIA makes technologies that make certain tasks perform faster on NVIDIA's own hardware, and you're upset AMD doesn't get any benefit for free from NVIDIA's internal R&D.

Take PhysX: the CPU libraries are basically used everywhere now; it's probably the most used physics API out there. The GPU portion, though, not so much. Fact is, GPUs aren't powerful enough to handle dynamic multi-object interactions, which get massively complicated very quickly. That's why most physics engines today look more or less identical to what they were a decade ago; we simply don't have enough computational horsepower for multi-object dynamics.

Hairworks has similar problems, since it's just PhysX doing a very specific thing.



DX12 was always going to come along, regardless of what AMD claimed.

TressFX does a lot less than Hairworks did; color me shocked that the performance impact was less than trying to dynamically perform physics on every single strand of hair on an object.

Likewise, FreeSync is a technically inferior solution to G-Sync and is DOA outside of gaming monitors due to being tied to DisplayPort. Both techs are likely DOA once HDMI 2.1 hits, though; HDMI's VRR implementation likely wins by default since it's mainlined in the HDMI spec.

So many words to say just two things.

Everything from Nvidia is awesome.
Everything from AMD is DOA.

OK. Whatever makes you happy.
 