Minecraft definitely has the potential to be a game changer, once released.
Yup, it seems like Nvidia's reflections are rendered at higher quality there (just play the clips at 0.25 speed), so the bullet-casing reflections are sharper and more distorted by the water surface's imperfections, while the water puddles on AMD are just flat, with no impurities at all.
Furthermore, comparing the AMD and Nvidia clips: AMD's reflections are less detailed and suffer from some quite jarring rendering errors there. I also checked the rain: on every Nvidia GPU it falls right to left, while on AMD it falls left to right.
...
My opinion is that it’s been quite a good first year for a new tech.
Oh, and people who whine about it being too expensive must be new here. New tech always costs loads; what were people expecting, ray tracing on a budget GPU?
What good things happened? We actually got real-time live ray tracing in games!

What good things happened during the year for RTRT? Was there any single title that looked way better (produced a 'wow' effect) with RT on than off? I don't see any: you have to carefully examine every pixel in most scenes to tell whether RT is on or off, but you immediately notice the performance drop, even on the most advanced 2080 Ti.
This is OK, but it was almost never this expensive; even 20 years ago the GeForce 256 didn't cost this much (even if you take dollar inflation into account). And since then no single new consumer-grade GPU has cost more than $1,000. Of course, full RT is a whole different rendering method, but we'll have to wait about 10 years (or more) for it to become practically useful. The uplift in rendering quality is absolutely not worth it for full RT, and hybrid rendering shows mixed results so far and is not that impressive.
The NV10 was a relatively basic chip, in terms of manufacturing, for the time of its launch (October 1999): just 17 million transistors, built on a standard 220 nm stop-gap process node, with a chunky 140 mm² die. The SDR version launched at $250, with the DDR and Quadro versions appearing later. The relative lack of SKUs off that single chip is indicative of how little binning the manufacturing went through (this did change with the GeForce 2).
What do you expect?
1-3 games with next-gen RT graphics of such quality that, once you look at it, you say to yourself: I don't want to play this rasterized ... any more. And without a 50-1000% performance drop. I don't see a single title of that quality right now, nor do I expect one in the near future. Today, RT looks to me like tech used by some not-very-successful projects (Control, to some extent Metro Exodus) to get some attention (and ... money from Nvidia).

There are more than 3 games that are playable at above 60fps with RTX on.
Certainly the very best top/high-end GPUs are now very expensive (and Nvidia has clearly played a game of "people will still pay silly prices if there is no competition"), but they're also utterly massive chips.
I advise doing some more research on this subject as you clearly don’t know anything about it if you think we don’t even have 1-3 games with RTX features.
As neeyik just mentioned, RTX Turing chips have special cores, called RT cores. Those "special" cores are not used in DXR games.

RT cores are used by code in the DirectX Raytracing (DXR) pipeline if that code is doing ray-triangle intersection calculations and using BVH acceleration structures. The drivers compile those shaders for the GPU, but it's the SM scheduler and dispatch units that determine which parts of the GPU process the instructions. For Volta, Pascal, and Turing GTX, it's the normal shader cores; for Turing RTX, it's the shader and RT cores.
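To make the BVH acceleration structure part concrete, here is a rough C++ sketch of how a bottom-level acceleration structure gets built through the DXR API in D3D12 (a DXR-capable Windows SDK is assumed). The function name, buffer handles, and sizing are illustrative assumptions, and resource creation and synchronization are left out; the point is that the application records the same build call regardless of whether the driver ends up running it on RT cores or on the ordinary shader cores.

```cpp
// Rough sketch only: builds one bottom-level acceleration structure (BVH)
// over a triangle mesh via DXR in D3D12. Buffers are assumed to already
// exist and be large enough; names are illustrative.
#include <d3d12.h>

void BuildBlas(ID3D12Device5* device,
               ID3D12GraphicsCommandList4* cmdList,
               ID3D12Resource* vertexBuffer, UINT vertexCount,
               ID3D12Resource* indexBuffer, UINT indexCount,
               ID3D12Resource* scratchBuffer, ID3D12Resource* blasBuffer)
{
    // Describe the triangle geometry the BVH will be built over.
    D3D12_RAYTRACING_GEOMETRY_DESC geom = {};
    geom.Type = D3D12_RAYTRACING_GEOMETRY_TYPE_TRIANGLES;
    geom.Flags = D3D12_RAYTRACING_GEOMETRY_FLAG_OPAQUE;
    geom.Triangles.VertexBuffer.StartAddress  = vertexBuffer->GetGPUVirtualAddress();
    geom.Triangles.VertexBuffer.StrideInBytes = 3 * sizeof(float);
    geom.Triangles.VertexFormat = DXGI_FORMAT_R32G32B32_FLOAT;
    geom.Triangles.VertexCount  = vertexCount;
    geom.Triangles.IndexBuffer  = indexBuffer->GetGPUVirtualAddress();
    geom.Triangles.IndexFormat  = DXGI_FORMAT_R32_UINT;
    geom.Triangles.IndexCount   = indexCount;

    // Ask the driver how much memory the build needs; the answer depends
    // on the GPU and driver, not on anything the application controls.
    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_INPUTS inputs = {};
    inputs.Type = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
    inputs.Flags = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_PREFER_FAST_TRACE;
    inputs.DescsLayout = D3D12_ELEMENTS_LAYOUT_ARRAY;
    inputs.NumDescs = 1;
    inputs.pGeometryDescs = &geom;

    D3D12_RAYTRACING_ACCELERATION_STRUCTURE_PREBUILD_INFO prebuild = {};
    device->GetRaytracingAccelerationStructurePrebuildInfo(&inputs, &prebuild);
    // (scratchBuffer / blasBuffer are assumed to be at least
    //  prebuild.ScratchDataSizeInBytes / prebuild.ResultDataMaxSizeInBytes.)

    // Record the build. How it executes - RT cores on Turing RTX, shader
    // cores on GTX/Volta - is decided by the driver and hardware.
    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC build = {};
    build.Inputs = inputs;
    build.ScratchAccelerationStructureData = scratchBuffer->GetGPUVirtualAddress();
    build.DestAccelerationStructureData    = blasBuffer->GetGPUVirtualAddress();
    cmdList->BuildRaytracingAccelerationStructure(&build, 0, nullptr);
}
```

On Turing RTX, traversing that structure during DispatchRays is exactly the work the RT cores accelerate; on GTX and Volta parts the same traversal falls back to the shader cores, which is where the performance gap comes from.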
RT cores are not front end. That is why you take such a massive hit in performance, waiting for RT cores to speed things up in certain games, because it's not native.

In Nvidia GPUs, the GigaThread Engine is the front end, so no - the RT cores aren't the front end. But the same is true for the shader units, triangle setup, ROPs, TMUs, etc. The fact that the RT cores aren't responsible for warp scheduling and dispatch has nothing to do with the performance, nor is it even relevant to whether or not they're supported in DXR.
You really ought to actually try using RTX before writing essays on tech forums slamming it. You clearly haven't; it's quite obvious when it's turned on or off. Either that or you need glasses.

Of course there are. But not in 4K - you're limited to 1440p at best - and 60 fps is not enough for every situation (some people own 144-240 Hz monitors). The main problem is that you have to carefully examine each frame to notice the better quality of "RTX on". We don't find any striking uplift in visual quality when turning RTX on in any of the 5-7 titles we have right now. Maybe some obvious differences show up in reflections, but other details (light/shadows) you have to carefully explore, or it looks a bit different but not super realistic (Metro Exodus, for example). The original article is exactly about this issue. Even Quake II RTX with full RT is not day/night compared to the original: RTX is not able to fix ugly models (like 3-5 polygons each), and it also ruins the dark/grim atmosphere of the original by adding more light to some locations. So I find nothing so exciting about RTX that would justify the $1,200-1,500 price (2080 Ti) compared to my (used) $350-400 1080 Ti. The performance uplift for non-RTX workloads is also not quite there: +25-35% FPS at 1440p. So I'm skipping Turing and waiting for the next gen of RTX hardware.
RT cores are not natively used, & they can't be used in real time.

For OpenGL and Vulkan, an extension has to be used (but this is true for lots of GPU functions). Direct3D 12, however, has an integrated pipeline with specific hardware requirements for it to be used - and the actual implementation of the architecture is transparent to that pipeline. For example, the shader architecture employed by Nvidia is different to that used by AMD, yet vertex, pixel, compute, etc. shaders programmed in D3D12 are oblivious to this difference. The reason is that the GPU drivers compile the code for the hardware, not the API.

So when using the DXR pipeline, the GPU running the code will use whatever hardware is available to process the instructions: in the case of RTX graphics cards, that means the RT cores are used for processing triangle-ray intersection calculations and BVH algorithms, and the tensor cores are used for denoising calculations. For GTX cards running the same code, the shader cores are used for all such calculations, and since they're not specialised for such work, the performance is clearly a lot slower.
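As a small illustration of that transparency, here is a hedged C++ sketch of the only question an application can really ask D3D12 in this area: whether the device exposes a raytracing tier at all. The helper name is made up for this example, and nothing in the returned data says which hardware units will end up doing the work.

```cpp
// Sketch: query whether the GPU/driver pair exposes the DXR pipeline.
// The app sees only a support tier, never "RT cores" vs "shader cores".
#include <windows.h>
#include <d3d12.h>

bool SupportsDxr(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;

    // TIER_NOT_SUPPORTED means no DXR path at all; TIER_1_0 or higher means
    // DispatchRays etc. are available, however the hardware implements them.
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```

With Nvidia's 2019 drivers that enabled DXR on the GTX 10-series, a 1080 Ti and a 2080 Ti both report Tier 1.0 through a check like this; the difference the thread is arguing about only shows up in the frame times.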
A bit late but Ray Tracing reminds me of 32-bit color back in the late '90s.