Path Tracing vs. Ray Tracing, Explained

I mentioned Monte Carlo algorithms in a ray tracing story here a number of months ago - in hindsight, it should have been obvious that people would already be using such well-known techniques for such intensive calculations.
Between that, selective use of ray and path tracing, and other combined strategies, games will be looking fantastic in the next decade.
I think we will also get an explosion of 3D animation. Yes, the big Pixar productions will still take months or years, but some kids' shows will see a big improvement - given that some of them are really basic, flat, and uninspiring at the moment.

So I will have to think up something new: someone will create an ultra-mini 3D holographic light and world creator (basic shapes and textures form in it, and different fields give those objects different properties, like translucency), surrounded by as much image capture as possible. That would be used to enhance the GPU's image - real-time light effects combined with the GPU inside your PC - given tiny laser projectors, and given that electromagnetic fields can manipulate certain gases and materials very quickly.
 
Seems like it all depends on devs putting in the work to make games shine, too bad not many are willing to do that much work.

It's not only on the devs though; they have to deal not just with gamers who are cheapskates yet want cutting-edge visuals, but also with gamers who want whatever is announced ASAP and then gripe when it's wonky on release.

The fact that game makers keep making games is what truly amazes me.
 
I must be a real tech philistine. I prefer the original pictures above.

I guess I'm a bit weird, I'm not really after photo realism in games, I just want better "games".
I get your point; perhaps still images are not the best showcase for ray-traced lighting, since it's about the atmosphere of your surroundings and how those elements interact with each other.
 
Well, those images look amazing to me, but I wonder when we can really enjoy this without sacrificing fps or paying a huge amount of money for high-end GPUs?

That day will come, but in the meantime just save your money xD Sounds like a bad joke, but well, anything can happen with a new GPU generation 😉
 
Ray tracing is 15-20fps
Path tracing is 3-5fps

And yet, Teardown, a game that is nothing but a fully raytraced renderer hits 60+fps just fine...

Also, boo for no mention of Teardown, which is 100% ray traced, with no traditional rasterization and no special GPU hardware needed to run it.
Of all the titles released using ray/path tracing tech, let's ignore the only one that uses nothing but one of these technologies without special hardware.

Otherwise, good breakdown and explanation. I've tended to struggle when trying to explain the difference between the two; this will be a big help with breaking that down going forward.
 
There's an old benchmark called Catzilla, released almost 10 years ago, that has a path tracing part, and it's quite intensive to run even today.

I totally forgot about Catzilla.
Man, that benchmark used to bring my old PC to a grinding halt even at lower settings.
Didn't know it incorporated path tracing, neat.

I should find that and give it a go on newer hardware.
 
I don't want to spend $1500 on a video card to run raytraced games. Must be a major cheapskate.

And you don't have to.
My 2060 ran Control just fine with all RT effects enabled and high settings at 1080p, quality DLSS mode. A $300 GPU at the time.

The 3080 I upgraded to will push it at 4K, quality DLSS, maxed settings just fine. It dips below 60 in one hallway (the Hallway of Doom).

A 3060 is nowhere close to $1,500 and will handle modern RT titles just fine as well.
 
And yet, Teardown, a game that is nothing but a fully raytraced renderer hits 60+fps just fine...

Also, boo for no mention of Teardown, which is 100% ray traced, with no traditional rasterization and no special GPU hardware needed to run it.
Don't forget that this article was predominantly about path tracing - a full investigation of ray tracing would be a substantially larger piece. In the case of Teardown, the whole thing isn't a traditional polygonal renderer, because it's almost all done with voxels. It's not 100% ray traced, as liquid surfaces are done via screen space reflections.

It uses a clever array of different data structures (multiple voxel volumes, instead of one single volume) for determining occlusion, surface colour, and so on - it makes it much simpler for marching rays from the camera, as for each voxel (or volume of voxels) a bounding box is rendered and then cast into the scene.

This relatively simple shape requires just a few shaders to handle, and along with the occlusion data structure, it's an easy process to determine which voxels the rays will interact with.
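For readers curious what "marching rays through voxels" looks like in practice, the classic approach is a 3D grid traversal (the Amanatides & Woo DDA), which steps a ray from cell to cell until it enters an occupied voxel. This is a hypothetical Python illustration of that general technique, not Teardown's actual code:

```python
import math

def voxel_march(origin, direction, occupied, max_steps=64):
    """Walk a ray through unit voxels (3D DDA) and return the first
    voxel in `occupied` that the ray enters, or None if it misses."""
    voxel = [math.floor(o) for o in origin]
    step, t_max, t_delta = [], [], []
    for i in range(3):
        d = direction[i]
        if d > 0:
            step.append(1)
            t_max.append((voxel[i] + 1 - origin[i]) / d)  # t to next +boundary
            t_delta.append(1.0 / d)                        # t per whole cell
        elif d < 0:
            step.append(-1)
            t_max.append((voxel[i] - origin[i]) / d)
            t_delta.append(-1.0 / d)
        else:
            step.append(0)
            t_max.append(math.inf)  # never crosses this axis
            t_delta.append(math.inf)
    for _ in range(max_steps):
        if tuple(voxel) in occupied:
            return tuple(voxel)
        axis = t_max.index(min(t_max))  # cross the nearest cell boundary
        voxel[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return None
```

Because the ray only ever visits cells it actually passes through, occupancy lookups stay cheap regardless of scene size.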
 
Don't forget that this article was predominantly about path tracing - a full investigation of ray tracing would be a substantially larger piece. In the case of Teardown, the whole thing isn't a traditional polygonal renderer, because it's almost all done with voxels. It's not 100% ray traced, as liquid surfaces are done via screen space reflections.

It uses a clever array of different data structures (multiple voxel volumes, instead of one single volume) for determining occlusion, surface colour, and so on - it makes it much simpler for marching rays from the camera, as for each voxel (or volume of voxels) a bounding box is rendered and then cast into the scene.

This relatively simple shape requires just a few shaders to handle, and along with the occlusion data structure, it's an easy process to determine which voxels the rays will interact with.

Fair enough, and yeah, it's a bit off pace for the article itself. I just get sad that it seems to be an overlooked title when discussions of these technologies come up, even if the way it handles rendering differs from traditional design (having to use voxels rather than traditional triangles for performance cost and physics calculations, for example).

I do legitimately appreciate the article and the breakdown between the two in relative 'layman's terms', so to speak.
 
There's an old benchmark called Catzilla released almost 10 years ago that has a path tracing part and it's quite intensive to run even today.
It's actually raymarching, not path tracing/ray tracing as such. Essentially, a single ray per pixel is run through the scene volume, and at set distances along the ray's path, a signed distance function is evaluated in a pixel shader (well, a fragment shader, as Catzilla is OpenGL). The result of the shader determines whether or not the ray has 'hit' a surface (usually by checking whether the distance value has fallen below a small threshold). Where a ray 'hits' a surface, the normal to that surface can then be calculated, and the appropriate lighting applied.

In the case of Catzilla, I think raymarching is used to determine the relative lighting effects of the two cats' laser beams; either that, or it is used to generate the combined beams structure itself.
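A minimal sketch of the loop described above - in Python rather than a shader, with a single hard-coded sphere standing in for the scene's distance function (all names and parameters hypothetical):

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
    """Signed distance from point p to the surface of a sphere."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def raymarch(origin, direction, max_steps=128, eps=1e-4, max_dist=100.0):
    """March a single ray through the scene; return the hit distance
    along the ray, or None if nothing is hit."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sphere_sdf(p)
        if d < eps:      # 'hit': distance has fallen below a small threshold
            return t
        t += d           # safe step: can never overshoot the nearest surface
        if t > max_dist:
            break
    return None
```

Stepping forward by exactly the distance the SDF returns is the 'sphere tracing' variant of raymarching: each step is as large as possible while guaranteeing the ray cannot pass through a surface.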
 
Thanks for the compact explanation of a relevant topic! I had occasionally wondered what the difference between path tracing and ray tracing is, but the whole subject is so complex that I never tried to find out. This brings out the main points clearly, focusing on aspects and ideas that will be useful to most people while reducing the arcana. Kinda like path tracing itself, I guess.
 
I would disagree with only two summary points and the first is that you say that the primary ray first detects if it intersects an object and then fires a second ray to determine which triangle it hit.

Technically that's probably true of most raytracers/path tracers but that's an optimization not intrinsic to the process.

And usually it's not first testing to see if it hit an object, it's testing to see if it hit a bounding box.

A scene often is reduced to multiple resolutions of cubes. Cubes render really really really fast. So instead of needing to test all 300 billion triangles in a scene, you can draw a box around a tree composed of 200 million triangles and ray trace against that simplification. If it hits the box, then you can go down to like 16 cubes forming the shape of the tree. If it hits one of those 16 then trace against another 16 cubes. Finally you can trace against say 16 actual geometric triangles in a leaf against the ray.

So often the multi scale bounding box tracing step actually fires a couple dozen rays... Although firing a ray isn't really precisely accurate either.
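For illustration, the box test at the heart of that multi-scale hierarchy is usually a ray-vs-AABB 'slab' intersection. A rough Python sketch of the standard technique (hypothetical helper, not tied to any particular renderer):

```python
import math

def ray_aabb_hit(origin, direction, box_min, box_max):
    """Slab test: does a ray intersect an axis-aligned bounding box?
    Intersects the ray with each pair of parallel planes and keeps
    the overlapping interval; the box is hit if the interval is non-empty."""
    t_near, t_far = 0.0, math.inf
    for i in range(3):
        d = direction[i]
        if abs(d) < 1e-12:
            # Ray parallel to this slab: miss if the origin lies outside it.
            if not (box_min[i] <= origin[i] <= box_max[i]):
                return False
            continue
        t1 = (box_min[i] - origin[i]) / d
        t2 = (box_max[i] - origin[i]) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far
```

Because this is just a handful of multiplies and comparisons, testing a ray against a whole hierarchy of boxes is far cheaper than testing it against the millions of triangles the boxes enclose.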

Overall acceleration structures are esoteric enough (and purely optional enough) that I wouldn't bother discussing them. There are lots of other somewhat exotic optimization structures.

It's also worth mentioning that almost every real production raytracer uses a hybrid of traditional whitted raytracing and pathtracing.

My other objection would be over saying path tracing uses a "random direction". That's not really correct either. That would be true of an *unbiased* path tracer (which doesn't really exist in the wild outside of hobby projects) but fundamentally a path tracer has the same ray paths as a raytracer, the difference is the path tracer only randomly selects one of those paths per bounce.

So a whitted raytracer may have 30 reflection rays, 30 refraction rays and 5 direct light samples and 100 global illumination samples. The path tracer is still tracing all of those paths, the difference is that it randomly picks !!Reflection Ray #12!!

https://www.quora.com/Whats-the-dif...8&share=c361fcff&srid=MTnI&target_type=answer
 
I would disagree with only two summary points and the first is that you say that the primary ray first detects if it intersects an object and then fires a second ray to determine which triangle it hit.

Technically that's probably true of most raytracers/path tracers but that's an optimization not intrinsic to the process.

And usually it's not first testing to see if it hit an object, it's testing to see if it hit a bounding box.
I don't think I specifically said that a secondary ray is used to determine which triangle is hit, but it's late here and my eyes are too tired to check! Apologies if I did. I grossly glossed over the use of bounding volumes in the ray traversal loop to keep this article shorter, but I did cover it in another one.
My other objection would be over saying path tracing uses a "random direction". That's not really correct either. That would be true of an *unbiased* path tracer (which doesn't really exist in the wild outside of hobby projects) but fundamentally a path tracer has the same ray paths as a raytracer, the difference is the path tracer only randomly selects one of those paths per bounce.

So a whitted raytracer may have 30 reflection rays, 30 refraction rays and 5 direct light samples and 100 global illumination samples. The path tracer is still tracing all of those paths, the difference is that it randomly picks !!Reflection Ray #12!!
This is an interesting point you raise, because the use of random direction rays appears to be how it is done in Q2VKPT:
At each visible surface point, two rays are traced. One is a shadow ray traced toward a random point on a random light source. This point on the emitter is sampled in a way that makes it representative for the entire illumination in the scene. To recursively accumulate multi-bounce illumination, another ray points in a random direction sampled proportionally to the scattering of the shaded material.
(Source: page 794 of Ray Tracing Gems II).
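The "random direction sampled proportionally to the scattering of the shaded material" part, for a diffuse (Lambertian) surface, typically means cosine-weighted hemisphere sampling. A small sketch in local shading space (normal assumed to be +z; helper name hypothetical):

```python
import math
import random

def cosine_weighted_sample():
    """Sample a scatter direction in the local hemisphere (normal = +z)
    with density proportional to cos(theta), matching how a diffuse
    surface scatters light. Uses Malley's method: sample a unit disk
    uniformly, then project the point up onto the hemisphere."""
    r1, r2 = random.random(), random.random()
    phi = 2.0 * math.pi * r1
    x = math.cos(phi) * math.sqrt(r2)
    y = math.sin(phi) * math.sqrt(r2)
    z = math.sqrt(max(0.0, 1.0 - r2))  # always >= 0: stays in the hemisphere
    return (x, y, z)
```

Sampling in proportion to the material's scattering keeps the estimator's variance down: directions that contribute more to the shaded result are chosen more often, and the weighting cancels in the Monte Carlo estimate.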
It's also worth mentioning that almost every real production raytracer uses a hybrid of traditional whitted raytracing and pathtracing.
And every other rendering trick in the book :) Many thanks for the feedback!
 
With path tracing, multiple rays are generated for each pixel but they're bounced off in a random direction. This gets repeated when a ray hits an object, and keeps on occurring until a light source is reached or a preset bounce limit is reached.

This is an accurate description of unbiased global illumination, which is terribly inefficient, since you'll very rarely randomly find small point lights.

But as the Quake quote mentions, it helps to use some direct light samples as well.
 
I don't want to spend $1500 on a video card to run raytraced games. Must be a major cheapskate.

They invented DLSS/FSR/XeSS exactly for this reason. You don't need a ludicrous 3090 to get good performance with RT enabled.

Such a shame that the GPU companies went down the wrong path and stuck with crappy ray tracing instead of developing accelerators for beam tracing. Rays are a 100% artificial construct: they are not light rays, they have zero physics embedded in them, and this is why we have to fudge things like colour and need hundreds of millions of rays to get a clean image.

Beams, on the other hand, are actual solutions of a simplified version of the Maxwell equations, called the Helmholtz equation. As such they carry all the physics with them: refraction, diffraction, refractive index, caustics, etc. are all inherent in the beam. The equations for the beam tracing algorithm are similar to those required for rays, but more complex; however, there are no fudges required. The big thing is that you can get output with only a few hundred to a thousand beams, rather than millions of rays. You would just need to make the GPU's fp64 powerful. You don't need to solve the Helmholtz equation on the fly, just use beams that are solutions of the equation.

I worked for Canon cameras, and we developed a beam tracer intended for the Canon lens design group. They were using the program CODE V as a test bed for their own design software, but it costs a bomb. Our relatively simple beam tracing program was benchmarked against Zemax and could produce results equal to 10 million rays with 400 beams, but we could do things impossible with ray tracing. We could easily model things like vignetting and negative-refractive-index metamaterials, and the colour of surfaces occurred naturally through the inclusion of the refractive index in the equations, even for metals, which have complex refractive indices.

Beam tracing is a big deal in sonar and acoustics so disappointing to see us keep using ray tracing. Even Hollywood clings to this outdated technology.
 
Ray tracing is 15-20fps
Path tracing is 3-5fps
You, sir, are missing the point of the article. Let me help you sum it up using your own statement;

Ray tracing = 15-20fps
Path tracing = 35-40fps

Path tracing is an improvement and refinement to ray-tracing, which makes the whole process more efficient. It does not increase the workload of a given ray-tracing task.
 
You, sir, are missing the point of the article. Let me help you sum it up using your own statement;

Ray tracing = 15-20fps
Path tracing = 35-40fps

Path tracing is an improvement and refinement to ray-tracing, which makes the whole process more efficient. It does not increase the workload of a given ray-tracing task.

The article shines a light on an important topic, but could have been clearer about the fact that what everyone calls ray tracing in games today is actually path tracing. It would be easy for someone to assume that path tracing is coming to future games to "fix" current ray tracing troubles, which isn't true.

Ray tracing as implemented in current games already uses very few rays and is extremely biased about when and where those rays are cast. So it has none of the waste of the classic RT algorithm.

The article also implies that shading is necessary for rasterization but not RT. In reality rasterization and raytracing are just visibility queries. They both rely on (essentially the same) shaders to compute the actual texture and lighting of the geometry returned by those queries.
 
Yeah, ray tracing and later path tracing are definitely the future, though we are still probably 2-3 generations away from making less intensive ray tracing viable without a big FPS sacrifice. Path tracing will likely take quite a bit more time to be viable. Still, it's interesting tech, and it truly does help improve the atmosphere in those older games. Sure, the original graphics have their charm, but then, so does ray tracing. Plus I do hope they eventually use ray tracing to do something new in games that will simply require it, so it won't have to be this very optional thing they put on top for a few percent. But for now it will just remain RTX on for screenshots, RTX off for playing. :-D
 