AMD demo shows procedural generation slashing VRAM use from 35 GB to just 51 KB

Daniel Sims

Why it matters: Managing graphics memory has become one of the most pressing challenges in real-time 3D rendering. As visuals grow more detailed, the amount of VRAM required by modern high-end games is pushing past what average customers can afford. AMD and Nvidia are both developing remedies that shift certain rendering work away from assets stored in memory and toward on-the-fly computation on the GPU.

A new research paper from AMD explains how procedurally generating certain 3D objects in real-time-rendered scenes, like trees and other vegetation, can reduce VRAM usage by orders of magnitude. The technique could benefit hardware with small memory pools or enable future games to increase perceived detail dramatically.

Game developers already create assets like trees and bushes using procedural generation, which employs algorithms to build variations of a limited number of hand-crafted models. However, those generated models are then baked into the game data, and rendering them can significantly increase VRAM usage and storage requirements. AMD's proposed technique instead uses work graphs to procedurally generate vegetation on the fly, eliminating the need to keep it in video memory or system storage.
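To make the memory argument concrete, here is a minimal CPU-side sketch of seed-driven procedural generation in Python. This is not AMD's work-graph implementation, which runs as GPU shader nodes; the function names and branching parameters here are hypothetical. It only illustrates the core idea that an entire unique tree can be expanded on demand from a few persistent bytes.

```python
# Illustrative sketch of seed-driven procedural generation (CPU-side).
# AMD's technique runs as GPU work graph nodes; names and parameters
# below are hypothetical.
import math
import random

def generate_tree(seed: int, depth: int = 5) -> list:
    """Expand a 2D tree skeleton from a seed. Each segment is
    (x, y, angle, length); only the seed needs to persist in memory."""
    rng = random.Random(seed)
    segments = []

    def branch(x, y, angle, length, level):
        if level == 0:
            return
        nx = x + length * math.cos(angle)
        ny = y + length * math.sin(angle)
        segments.append((x, y, angle, length))
        # Two child branches with randomized spread and shrinkage.
        for sign in (-1.0, 1.0):
            spread = rng.uniform(0.3, 0.6) * sign
            shrink = rng.uniform(0.6, 0.8)
            branch(nx, ny, angle + spread, length * shrink, level - 1)

    branch(0.0, 0.0, math.pi / 2, 1.0, depth)
    return segments

# A whole forest of unique trees persists as nothing but integer seeds:
forest_seeds = range(10_000)          # ~80 KB of state for 10,000 trees
one_tree = generate_tree(42)          # geometry expanded only when needed
print(len(one_tree), "segments from an 8-byte seed")
```

Because the expansion is deterministic for a given seed, the same tree reappears identically every frame, so nothing beyond the seed and the generator's parameters ever has to live in VRAM.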

In a video demonstration, the researchers show a dense forest running smoothly on a Radeon RX 7900 XTX at 1080p. Achieving the level of quality shown using traditional methods might require almost 35GB of VRAM, far beyond the GPU's 24GB capacity. Real-time generation, however, cuts memory usage to just 51KB. The trees also maintain impressive visual detail and variety: they can shift with the seasons, sway in the wind, and efficiently manage levels of detail without visible pop-in.
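Some back-of-envelope arithmetic shows why pre-baked geometry balloons while generator state stays tiny. Only the 35GB and 51KB endpoints come from AMD's demo; every count in this sketch is an assumed, illustrative figure.

```python
# Hypothetical breakdown -- only the 35 GB / 51 KB endpoints are AMD's.
unique_trees   = 1_000      # assumed number of distinct tree meshes
verts_per_tree = 200_000    # assumed high-detail vertex count
bytes_per_vert = 48         # position + normal + UV + tangent

stored = unique_trees * verts_per_tree * bytes_per_vert
print(f"Pre-baked meshes: {stored / 2**30:.1f} GiB")   # ~8.9 GiB before LODs

# With on-the-fly generation, only the generator's parameters persist:
params_per_species = 256    # assumed bytes of tunable parameters
species = 50
print(f"Generator state: {params_per_species * species / 1024:.1f} KiB")
```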

The technique fundamentally resembles the neural texture compression system Nvidia has been developing for several years. While Nvidia's approach targets textures rather than vegetation, both methods aim to compute assets entirely on the GPU on demand instead of repeatedly shuttling them between memory and storage.

Neural Texture Compression uses machine learning to decompress textures as needed during rendering, reducing VRAM usage by up to 95 percent while potentially increasing detail, with a minor performance hit as the main downside. A recent study from Nvidia describes ongoing improvements to the technique's filtering.
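As a rough conceptual sketch, the decompression step can be pictured as a tiny neural network reconstructing each texel on demand from a coarse latent grid, rather than sampling a full-resolution texture. The array shapes and layer sizes below are illustrative assumptions, not Nvidia's published architecture.

```python
# Conceptual sketch of neural texture decompression. A texel is
# reconstructed on demand from a small latent grid plus MLP weights.
# Shapes and sizes are illustrative, not Nvidia's actual network.
import numpy as np

rng = np.random.default_rng(0)

# "Compressed" representation: coarse latent grid + tiny decoder weights.
latent = rng.standard_normal((64, 64, 8)).astype(np.float32)   # 128 KiB
w1, b1 = rng.standard_normal((8, 32)).astype(np.float32), np.zeros(32, np.float32)
w2, b2 = rng.standard_normal((32, 3)).astype(np.float32), np.zeros(3, np.float32)

def decode_texel(u: float, v: float) -> np.ndarray:
    """Fetch the nearest latent vector and decode it to RGB with a
    two-layer MLP. Real systems interpolate latents and run this
    per-texel in a shader, not per-call on the CPU."""
    x = latent[int(v * 63), int(u * 63)]
    h = np.maximum(x @ w1 + b1, 0.0)                 # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))      # sigmoid -> RGB in [0, 1]

print(decode_texel(0.5, 0.5))   # decoded only when the renderer samples it
```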

If they gain wide adoption, technologies such as work graphs and neural compression could enable next-generation hardware to deliver significant visual improvements without dramatic increases in memory capacity or storage speed.

I could see this being useful for, say, a game where you quickly move through an area, or applied to things in the background (like the outskirts of the golf course in a golf game?). But for actual level design I imagine this isn't all that useful?
 
I hate these BS comparisons with cherrypicked unoptimal extremes to make the case; keep it real FFS and the point will be much more worthy even if small. No, those stoopid trees wouldn't take 35GB of VRAM in any reasonable universe.
 
I mean, clearly today’s games are unoptimized and bloated to high heaven, but to this degree? We’ll see I guess. Always love to see innovations which improve resource utilization. Doing more with less is the bess.
 

I agree. I think you have to go out of your way and actually do an unoptimization effort to make this demo use 35GB of VRAM. With proper optimization and art direction tricks it shouldn't use more than 3GB at 1080p, a skilled dev and competent artist could probably optimize it further to around 1GB or even less. These are basically early PS4-level graphics rendered at full HD resolution after all.

If they can take a more realistic real-world example and cut vram usage to just 1/3 or 1/4 without any quality loss or artefacting, it would be a much more appealing and interesting demo...