Nvidia details Neural Texture Compression, claims significant improvements over traditional...

Consoles use way more of the RAM for the OS and actual GAME CODE than for graphics.
Like I said, 10 of the 16GB often goes to OS/game code, leaving the remaining 6GB available for graphics (you never max it out completely = stuttering).
If in doubt, you can look up game devs talking about this on YouTube.

A console _can_ use 14-15 of the 16GB for graphics, yes, theoretically. It just never happens, because GAME CODE usually needs way more than graphics does.

The same is true for PC.
And this is why most game requirements list way more RAM than VRAM. It's often 2:1 or so, sometimes 3:1.

More and more PC games want 32GB of RAM for max settings; it's listed in the ultra spec for many games.
I explained that the console OS uses its pre-allocated RAM (around 2.5GB). The game then has 13.5GB to use for the GPU and everything else.

Like I also said, PC needs more RAM because assets have to be loaded into RAM first and then copied into VRAM. The OS and other applications also take up a lot more memory than a typical console OS does. PC max settings are also way above console settings.
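
Just to put rough numbers on that 2:1 ratio (all of these figures are my own illustrative assumptions, not measurements from the article):

# Illustrative PC memory budget; every number here is an assumption
os_and_background = 4.0    # GB: Windows + browser/launcher/overlays
game_code_and_data = 8.0   # GB: engine, game logic, audio, streaming pool
staging_copies = 4.0       # GB: asset copies held in RAM while uploading to VRAM
vram_textures = 8.0        # GB: what actually ends up resident on the GPU

ram_needed = os_and_background + game_code_and_data + staging_copies
print(f"RAM: {ram_needed:.0f} GB vs VRAM: {vram_textures:.0f} GB "
      f"-> ratio {ram_needed / vram_textures:.1f}:1")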

I tried searching on YouTube like you said and found zero relevant info. You'll have to provide the links yourself, because nobody here is going to spend more than 5-10 minutes randomly searching for it.
 
Those compression examples don't look so great. I'm guessing the amount of blur comes down to someone's personal taste.

Pretty sure the technology is going to be pushed even harder once mobile GPUs start to catch up while PC GPUs stand still. We already see ray tracing on mobile, from the likes of the Snapdragon 8 Gen 2. We've barely seen such features used on PC; only when the smaller devices, consoles included, catch up will we really see a revolution.
 
Yeah, but it costs more than double the time to sample the textures (from 0.44ms to 1.15ms). Yes, that's 16 samples vs 4, but still.
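
If you divide those quoted timings by the sample counts (my arithmetic, not from the paper), the relative per-sample cost of the neural path actually comes out lower:

classic_ms, classic_samples = 0.44, 4
neural_ms, neural_samples = 1.15, 16
# pass time divided by sample count, in microseconds
print(f"classic: {classic_ms / classic_samples * 1000:.0f} us per sample")  # ~110
print(f"neural:  {neural_ms / neural_samples * 1000:.0f} us per sample")    # ~72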

On the other hand, it's nothing new: with an autoencoder you solve most of it, and a simple super-resolution pass at the output brings back more detail.

Of course, you need HW acceleration to make it efficient, and no, it shouldn't be done in shaders; it belongs in the TMUs, updated ones, with dedicated hardware to accelerate this new decompression/sampling.

Reading the paper, the "compression" takes a long time if you want the best quality, simply because part of the operation is training an autoencoder based on a multilayer perceptron (an old type of neural network).
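
A minimal sketch of that idea, assuming PyTorch and a toy MLP autoencoder over 8x8 RGB texture patches (this is not the paper's architecture, just an illustration of why the per-texture training step is the slow part):

import torch
import torch.nn as nn

# Toy MLP autoencoder over flattened 8x8 RGB patches (192 values each).
# Only meant to show the shape of the "train per asset" cost the post mentions.
class PatchAutoencoder(nn.Module):
    def __init__(self, patch_dim=8 * 8 * 3, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(patch_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, patch_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

patches = torch.rand(4096, 192)      # stand-in for real texture patches
model = PatchAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# This optimization loop is the "compression"; it runs once per texture/material.
for step in range(1000):
    recon = model(patches)
    loss = nn.functional.mse_loss(recon, patches)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Only the latents plus decoder weights need to ship: 16 floats per patch instead of 192.
with torch.no_grad():
    latents = model.encoder(patches)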

0 - it's an experimental technique.

1 - they are doing the operation partly in software (shaders) and partly on the tensor cores (hardware the 1650 doesn't have), but the tensor cores aren't hardware specialized for this operation either, and execution in shaders is much slower in every case. This is why texture mapping units have existed since the first GPUs: dedicated hardware specialized in decompression, sampling and texture filtering.

2 - in the case of "neural" decompression, it does so with 16 times more samples than classic decompression/sampling. So even if it takes twice as long, it delivers higher quality. If you reduced that to, say, 8 samples, the times might even out, but the quality would still be higher.

If this technique gets accelerated in the TMUs, a specialized fixed-function part, it should be much faster and more efficient in every respect, including energy.

This neural compression thing is something "old". It can be done with an autoencoder, plus some deconvolution layers at its output (essentially a super-resolution stage) to improve quality and detail. Getting that nailed down in hardware is part of what is expected.

It has to be in the TMUs, but new, modernized TMUs with specific fixed hardware units able to speed up this operation. Notice that in an entire GPU the TMUs are almost the only fixed-function units still left; everything else has been programmable for a long time.

And that is faster and more efficient than doing it in shaders, whether with CUDA, HLSL, GLSL, Cg, OpenCL, DirectML or whatever. Pure software takes time and capacity that should stay free for everything else.

Ever since texture compression has been around, textures have shipped compressed, almost always lossy, and hardware has been adapted to speed up sampling and decompression.
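
For reference, the classic lossy block formats that line refers to (BC1/BC7 style) have fixed, hardware-friendly ratios; the block sizes below are the standard ones, the arithmetic is mine:

# Fixed-rate block formats: bytes per 4x4 texel block
uncompressed_rgba8 = 4 * 4 * 4   # 64 bytes per block (4 bytes/texel)
bc1_block = 8                    # BC1/DXT1: 8 bytes per 4x4 block
bc7_block = 16                   # BC7: 16 bytes per 4x4 block

print(f"BC1 ratio vs RGBA8: {uncompressed_rgba8 / bc1_block:.0f}:1")  # 8:1
print(f"BC7 ratio vs RGBA8: {uncompressed_rgba8 / bc7_block:.0f}:1")  # 4:1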

This can help with the amount of memory, since you can compress textures harder and deliver the same quality as current methods while consuming less memory, instead of delivering more quality at the same consumption. It implies that the "Medium" texture-quality option could look like today's "Ultra", or that Ultra quality could fit in Medium's memory consumption...

Developers will be able to use more heavily compressed textures, which occupy less memory while maintaining the same quality as today, so more of them fit into the same amount of VRAM. What this does is save memory.

Suppose textures compress at least 25% more: that's like taking a current graphics card from 8 to 10GB. But it's even more than that. Reading the paper, for a texture using a current compression method to reach the same (actually slightly lower) quality as this method, it needs to be 2.83 times bigger in memory. So you could say that in the best case (not all textures compress equally) it would be like "going from 8GB to more than 16GB".
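
Putting numbers on that (my arithmetic, using the 2.83x figure quoted above; the fraction of VRAM that is actually textures is a pure assumption, since the rest of VRAM doesn't shrink):

vram = 8.0            # GB card
ratio = 2.83          # size a BC-compressed texture needs to match this quality, per the paper
texture_share = 0.6   # assumed fraction of VRAM occupied by textures (illustrative)

textures = vram * texture_share
effective = (vram - textures) + textures * ratio
print(f"Effective capacity if only textures shrink: {effective:.1f} GB")  # ~16.8 GB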

This compression method (note: it is specific to textures/materials as used in graphics) delivers slightly more than twice the visual quality on decompression while using the same amount of memory as current compression methods.

Developers will be able to keep using 1K/2K/4K textures that either consume less memory (if they keep the same visual quality) or use the same memory at higher quality, and they can apply it selectively depending on which textures they are, how they are used and how they will be seen on screen.
 
My only question is whether this new method will be backported to older cards that actually need it, or whether it will be an exclusive feature of the latest overpriced generation. Sorry, but I'm not paying 2 grand for a 10GB RTX 6090 the size of a microwave just so I can use the so-called latest and greatest. Good graphics are fine, but once I get past the "gee whiz" factor I end up judging my games on gameplay, not how they look...
 
Just more Nvidia BS to confuse and complicate the matter. Just put more VRAM on the graphics cards. Problem solved, greedy bastards.
 