How 3D Game Rendering Works: Anti-Aliasing

Thanks for the detailed review of anti-aliasing. But it feels like this subject has been revisited on TechSpot countless times. Or maybe I have been a TechSpot user for too long :)
 
Awesome technical read!

Thanks for the detailed review of anti-aliasing. But it feels like this subject has been revisited on TechSpot countless times. Or maybe I have been a TechSpot user for too long :)

Haha. I know what you mean, but this is a new part in a series. This is just a new entry on the same topic of 3D rendering.
 
There is also Temporal Upsampling, which is already used in a few games and can produce good results.
Unreal Engine, since version 4.19, has a version of Temporal Upsampling that is almost as good as DLSS.
 
There is also Temporal Upsampling, which is already used in a few games and can produce good results.
Unreal Engine, since version 4.19, has a version of Temporal Upsampling that is almost as good as DLSS.

The problem I've seen with Temporal Upsampling is that it only does its magic when the image is still; when it starts moving, it loses the higher detail and looks as blurry as regular TAA.
 
The problem I've seen with Temporal Upsampling is that it only does its magic when the image is still; when it starts moving, it loses the higher detail and looks as blurry as regular TAA.

The same thing happens with DLSS, more noticeably on the lower quality settings.
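
Roughly speaking, that blur comes from how the accumulated history is handled. Here's a tiny, purely illustrative sketch (not any engine's actual code - the function, weights, and motion falloff are made up for the example) of why temporal methods trade detail for stability once a pixel moves:

```python
# Toy sketch (not any engine's actual code) of why temporal methods can
# blur under motion: detail is accumulated across frames, and the history
# has to be discounted or rejected when a pixel moves.
def temporal_accumulate(history, current, motion_pixels, base_weight=0.9):
    """Blend the new frame's sample into the accumulated history.

    history, current: per-pixel values (plain floats here for brevity)
    motion_pixels:    how far this pixel moved since the last frame
    base_weight:      how much history to keep when the image is still
    """
    # The more the pixel moved, the less the old history can be trusted,
    # so the blend falls back towards the single (softer) new sample.
    history_weight = base_weight * max(0.0, 1.0 - motion_pixels / 4.0)
    return history_weight * history + (1.0 - history_weight) * current

# Still image: 90% of the accumulated detail survives each frame.
print(temporal_accumulate(1.0, 0.2, motion_pixels=0.0))   # ~0.92
# Fast motion: the history is mostly thrown away, so only the current,
# lower-detail sample remains - which reads as blur.
print(temporal_accumulate(1.0, 0.2, motion_pixels=4.0))   # 0.2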
 
AMD AA always looked better to me over the years. No idea if there is any science behind that observation or not.
 
...And one more way: Fragment AA, from Matrox.
Matrox's FAA was an interesting idea - it starts out similar to MSAA, in that for each primitive being rendered, each pixel is run through a depth test to determine which ones lie on the edges of the primitive. Those edge pixels are written to a separate buffer in memory (aka the fragment buffer), whereas all of the rest are passed along the pipeline to the final frame buffer. The fragment buffer is then sampled and blended by a dedicated ASIC in the chip, and the results are blended into the frame buffer. Nice in principle, but the need for separate hardware (and the overall lack of performance of the Parhelia) meant few developers would take the time needed to implement something that only a tiny minority of a game's user base could make use of.
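
For anyone curious, the flow described above boils down to something like this - a very rough sketch, purely illustrative, with invented function names and data layout rather than Matrox's actual pipeline:

```python
# Rough sketch of the Fragment AA idea: fully covered pixels go straight to
# the frame buffer, edge pixels are parked in a fragment buffer with their
# coverage, and a later resolve pass blends them back in.
def rasterize_faa(pixels, frame_buffer, fragment_buffer):
    for (x, y), colour, coverage in pixels:
        if coverage >= 1.0:                      # interior pixel: no AA work
            frame_buffer[(x, y)] = colour
        else:                                    # edge pixel: defer to resolve
            fragment_buffer.setdefault((x, y), []).append((colour, coverage))

def resolve_faa(frame_buffer, fragment_buffer, background=0.0):
    for (x, y), fragments in fragment_buffer.items():
        blended = frame_buffer.get((x, y), background)
        for colour, coverage in fragments:       # blend each partial fragment
            blended = coverage * colour + (1.0 - coverage) * blended
        frame_buffer[(x, y)] = blended

frame, frags = {}, {}
rasterize_faa([((0, 0), 1.0, 1.0), ((1, 0), 1.0, 0.4)], frame, frags)
resolve_faa(frame, frags)
print(frame)   # {(0, 0): 1.0, (1, 0): 0.4}
```

The appeal is that only the edge pixels pay the anti-aliasing cost; the catch, as noted above, is that the resolve step needed its own hardware.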

AMD AA always looked better to me over the years. No idea if there is any science behind that observation or not.
Depending on what era of ATi/AMD's AA techniques one is looking at, there is some science behind it. ATi experimented a lot with a variety of sampling patterns (mostly stochastic in nature, and with lots of sampling points) for both SSAA and MSAA, to improve the overall quality of the blending; Nvidia typically stuck with ordered and rotated grids, and relatively few samples.
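
To make the difference concrete, here's a rough sketch of the three kinds of sub-pixel patterns being talked about (illustrative only - the real hardware patterns and sample counts aren't public in this form, and the rotation angle is just a typical example):

```python
import math, random

# Toy generators for sub-pixel sample positions, with offsets in [0, 1)
# pixel space.
def ordered_grid(n):
    """n x n regular grid, e.g. classic 4x ordered-grid sampling for n = 2."""
    step = 1.0 / n
    return [((i + 0.5) * step, (j + 0.5) * step)
            for j in range(n) for i in range(n)]

def rotated_grid(n, angle_deg=26.6):
    """The same grid rotated about the pixel centre, which spreads samples
    more evenly across near-horizontal and near-vertical edges."""
    a = math.radians(angle_deg)
    out = []
    for x, y in ordered_grid(n):
        dx, dy = x - 0.5, y - 0.5
        out.append((0.5 + dx * math.cos(a) - dy * math.sin(a),
                    0.5 + dx * math.sin(a) + dy * math.cos(a)))
    return out

def jittered(n, seed=0):
    """Stochastic: one random sample per grid cell, avoiding the regular
    structure that makes ordered grids alias along certain edge angles."""
    rng = random.Random(seed)
    step = 1.0 / n
    return [((i + rng.random()) * step, (j + rng.random()) * step)
            for j in range(n) for i in range(n)]

print(ordered_grid(2))
print(rotated_grid(2))
print(jittered(2))
```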
 
You know you're getting old when the before and after images look identical to you :(
Depending on the game, I sometimes prefer the jaggies, so I turn off AA in all its forms. Then again, I had no problem gaming on 8-bit machines in my youth, so mayhap I'm more forgiving than those who need perfect graphics to enjoy a game.
 
There is also Temporal Upsampling, which is already used in a few games and can produce good results.
Unreal Engine, since version 4.19, has a version of Temporal Upsampling that is almost as good as DLSS.
No, it's not.
 
You guys left out several AA modes.

Just from memory, there's Quincunx, CMAA, CSAA, TrAA, Fragment AA, and CFAA.
 
You guys left out several AA modes.

Just from memory, there's Quincunx, CMAA, CSAA, TrAA, Fragment AA, and CFAA.
For good reason: most of those just aren't used these days. Quincunx and CSAA are old modified versions of MSAA from Nvidia. The former used a 2x sampling pattern but blended in results from neighboring pixels' samples too - low cost, but a poor visual result. CSAA added a sampling pattern that estimated the primitive's coverage of a sub-pixel and used those results in the blending algorithm. TrAA is another of Nvidia's old systems; it used MSAA for all non-transparent primitives and SSAA for transparent ones. Nvidia now recommends that developers use a variant of TAA, or DLSS where there's hardware support for it.
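
To illustrate why Quincunx ended up looking soft, here's a toy version of a Quincunx-style resolve (the 1/2 and 1/8 weights are the commonly quoted ones; everything else is simplified for the example and isn't taken from Nvidia documentation):

```python
# Each pixel's own sample gets half the weight, and four samples shared
# with the diagonal neighbours get an eighth each. Cheap, but effectively
# a blur filter - hence the soft results.
def quincunx_resolve(samples, x, y):
    w, h = len(samples[0]), len(samples)
    def tap(px, py):                      # clamp reads to the image edges
        return samples[min(max(py, 0), h - 1)][min(max(px, 0), w - 1)]
    return (0.5 * tap(x, y)
            + 0.125 * (tap(x - 1, y - 1) + tap(x + 1, y - 1)
                       + tap(x - 1, y + 1) + tap(x + 1, y + 1)))

image = [[0.0, 0.0, 0.0],
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 0.0]]
print(quincunx_resolve(image, 1, 1))      # 0.5: the bright pixel is smeared
```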

CFAA is AMD's name for their old MSAA implementation from about a decade ago. Depending on the settings used, it either applied a custom sampling pattern with taps outside of the source pixel, or ran an edge-detection algorithm as an extra pass. Fragment AA was mentioned above - exclusive to Matrox and no longer in use.

CMAA is the only one listed that is (a) reasonably new and (b) sort of still around. It's one from Intel - an evolution of their original MLAA and subsequent enhancements, such as SMAA - and it's another post-processing, edge-detecting algorithm. Visually superior to FXAA and roughly similar in terms of performance (sometimes worse, sometimes better), it hasn't seen widespread adoption.
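
And for a flavour of what those post-process methods do, here's a heavily simplified, FXAA/MLAA-style edge blend - not Intel's or Nvidia's actual algorithms; the threshold and weights are made up for the example:

```python
# Look at luminance contrast against the neighbours; where an edge is
# detected, blend the pixel towards the neighbours across that edge.
def postprocess_aa(luma, x, y, threshold=0.1):
    w, h = len(luma[0]), len(luma)
    def tap(px, py):
        return luma[min(max(py, 0), h - 1)][min(max(px, 0), w - 1)]

    c = tap(x, y)
    n, s = tap(x, y - 1), tap(x, y + 1)
    e, wst = tap(x + 1, y), tap(x - 1, y)
    contrast = max(c, n, s, e, wst) - min(c, n, s, e, wst)
    if contrast < threshold:
        return c                                  # flat area: leave untouched
    # Edge orientation: blend across whichever axis has the larger step.
    if abs(n - s) >= abs(e - wst):
        return 0.5 * c + 0.25 * (n + s)           # horizontal-ish edge
    return 0.5 * c + 0.25 * (e + wst)             # vertical-ish edge

stair = [[0.0, 0.0, 1.0],
         [0.0, 1.0, 1.0],
         [1.0, 1.0, 1.0]]
print(postprocess_aa(stair, 1, 1))                # 0.75: softened stair-step
```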
 
You know you're getting old when the before and after images look identical to you :(

So, older people have eyes with anti-aliasing built-in. They can spend less on a graphics card, leaving more money for the medications. That's fair.
 
So, older people have eyes with anti-aliasing built-in. They can spend less on a graphics card, leaving more money for the medications. That's fair.
A few years ago, I played Jedi Knight (I still play it every now and then) and I noticed the game was almost jaggie-free but a little bit blurry. I figured I had enabled FXAA or something. Then I went back to the desktop, noticed everything was a little bit blurry there too, and realized "ok, I need glasses" :)
 
A few years ago, I played Jedi Knight (I still play it every now and then) and I noticed the game was almost jaggie-free but a little bit blurry. I figured I had enabled FXAA or something. Then I went back to the desktop, noticed everything was a little bit blurry there too, and realized "ok, I need glasses" :)

Did you first check if your monitor is maybe broken? :)
 