In context: Nvidia has been playing with NeRFs. No, they haven't been shooting each other with foam darts. The term NeRF is short for Neural Radiance Field, a technique that uses AI to reconstruct a three-dimensional scene from a handful of still images (a process called inverse rendering). Depending on the fidelity required, traditional NeRF techniques generally take hours or days to produce results.

Nvidia's AI research arm has been working on inverse rendering and has developed a Neural Radiance Field it calls Instant NeRF because it can render a 3D scene up to 1,000 times faster than other NeRF techniques. The AI model needs only a few seconds to train on a few dozen stills taken from multiple angles, then just tens of milliseconds more to render a 3D view of the scene.
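Much of that speedup comes from how the scene is represented. The research behind Instant NeRF (Nvidia's "instant neural graphics primitives" work) feeds the network a multiresolution hash encoding: trainable feature vectors stored in small hash tables at several grid resolutions, so a tiny network can learn the scene quickly. A minimal sketch of the lookup idea, using a nearest-corner lookup for brevity where the published method trilinearly interpolates the eight surrounding corners; all names and sizes here are illustrative:

```python
import numpy as np

# Per-axis primes for the spatial hash, as used in the instant-ngp paper.
PRIMES = np.array([1, 2_654_435_761, 805_459_861], dtype=np.uint64)

def hash_grid_lookup(xyz, table, resolution):
    """Fetch a learned feature vector for a 3D point at one grid resolution.

    xyz:        (3,) point in [0, 1)^3
    table:      (T, F) hash table of trainable feature vectors
    resolution: grid resolution at this level
    """
    T = np.uint64(table.shape[0])
    corner = np.floor(xyz * resolution).astype(np.uint64)  # nearest grid corner
    h = np.bitwise_xor.reduce(corner * PRIMES) % T         # spatial hash
    return table[h]

def multires_encoding(xyz, tables, resolutions):
    """Concatenate features from every level; this becomes the MLP's input."""
    return np.concatenate([hash_grid_lookup(xyz, t, r)
                           for t, r in zip(tables, resolutions)])
```

During training, gradients flow back into the hash tables themselves, which is why the encoding can be both compact and fast to optimize.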

Since the process is essentially the reverse of taking a Polaroid, which instantly turns a 3D scene into a 2D image, Nvidia demonstrated the technique by recreating a photo of Andy Warhol taking an instant picture. This week, the research team presented a demo of the Instant NeRF results at Nvidia GTC.

"Instant NeRF could be used to create avatars or scenes for virtual worlds, to capture video conference participants and their environments in 3D, or to reconstruct scenes for 3D digital maps," said Nvidia. "Collecting data to feed a NeRF is a bit like being a red carpet photographer trying to capture a celebrity's outfit from every angle — the neural network requires a few dozen images taken from multiple positions around the scene, as well as the camera position of each of those shots."
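As the quote notes, each photo must be paired with its camera position. In a typical NeRF pipeline those poses are camera-to-world matrices, often recovered from the photos themselves with a structure-from-motion tool such as COLMAP, and every pixel of every photo becomes a training ray. A minimal pinhole-camera sketch of that pixel-to-ray step (function names and coordinate conventions here are illustrative assumptions, not Nvidia's code):

```python
import numpy as np

def pixel_to_ray(i, j, focal, width, height, cam_to_world):
    """Map pixel (i, j) to a world-space ray for an ideal pinhole camera.

    cam_to_world: (4, 4) camera pose, e.g. as estimated by a
                  structure-from-motion tool from the captured photos.
    """
    # Direction in camera coordinates; the camera looks down -z by convention.
    d_cam = np.array([(i - width / 2) / focal,
                      -(j - height / 2) / focal,
                      -1.0])
    d_world = cam_to_world[:3, :3] @ d_cam   # rotate into the world frame
    origin = cam_to_world[:3, 3]             # camera center in world space
    return origin, d_world / np.linalg.norm(d_world)
```

Training then amounts to rendering these rays through the network and penalizing any mismatch with the pixel colors actually observed in the photos.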

The NeRF generates the 3D image from these dozens of angles, filling in the blanks where needed. It can even compensate for occlusions: if an object blocks the view of the subject in one of the images, the AI can still reconstruct that angle even though it cannot see the subject well, or at all.
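Conceptually, each rendered pixel is produced by marching a ray through the learned field, asking the network for a color and a density at sample points along the ray, and alpha-compositing the results; occluded or unphotographed angles come out plausible because the network has learned one consistent 3D field. A toy sketch of that volume-rendering step, with a hand-written function standing in for the trained network (everything here is illustrative):

```python
import numpy as np

def toy_field(points):
    """Stand-in for the trained network: (rgb, density) per 3D point.
    A real NeRF would evaluate its neural network here."""
    d = np.linalg.norm(points, axis=-1, keepdims=True)
    sigma = np.exp(-(d - 1.0) ** 2 * 20.0)       # density: a fuzzy unit sphere
    rgb = np.clip(points * 0.5 + 0.5, 0.0, 1.0)  # color derived from position
    return rgb, sigma.squeeze(-1)

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Alpha-composite samples along one ray (the NeRF volume-rendering rule)."""
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction
    rgb, sigma = toy_field(pts)
    delta = (far - near) / n_samples
    alpha = 1.0 - np.exp(-sigma * delta)         # opacity of each segment
    # Transmittance: how much light survives to reach each segment.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)  # composited pixel color
```

Rendering a full image just repeats this for one ray per pixel, which is why cutting the per-query cost of the network matters so much for speed.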

The technology's one area of weakness is dealing with moving objects.

"In a scene that includes people or other moving elements, the quicker these shots are captured, the better," Nvidia said. "If there's too much motion during the 2D image capture process, the AI-generated 3D scene will be blurry."

For more technical details, check out Nvidia's blog post. You can also catch the rest of Jensen Huang's GTC keynote on YouTube.