Monday, May 16, 2022

Nvidia shows off AI model that converts several dozen snapshots into a 3D-rendered scene

Nvidia’s latest AI demo is pretty impressive: a tool that quickly converts a “few dozen” 2D snapshots into a 3D-rendered scene. The video below shows the method in action, with a model dressed as Andy Warhol holding an old-fashioned Polaroid camera. (Don’t overthink the Warhol connection: it’s just a little PR scene dressing.)

The tool is called Instant NeRF, referring to "neural radiance fields," a technique developed by researchers at UC Berkeley, Google Research, and UC San Diego in 2020. If you want a detailed explanation of neural radiance fields, you can read one here, but in short, the method maps the color and light intensity of different 2D shots, then generates the data needed to connect these images from different viewpoints and render a finished 3D scene. In addition to the images themselves, the system needs data about the position of each camera.
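To make that concrete, here's a minimal sketch of the core NeRF recipe in Python. This is not Nvidia's code, and the network here is untrained, with made-up random weights; it only illustrates the two moving parts: a small neural network that maps a 3D point and viewing direction to a color and density, and a renderer that composites samples along each camera ray into a pixel.

```python
import numpy as np

# A toy, untrained stand-in for the neural network at the heart of a NeRF.
# A real model is a trained MLP; random weights here just illustrate the
# interface: (3D point, view direction) -> (RGB color, volume density).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(6, 64))
W2 = rng.normal(size=(64, 4))

def radiance_field(point, view_dir):
    """Query the field at one 3D point as seen from one direction."""
    x = np.concatenate([point, view_dir])
    h = np.tanh(x @ W1)                  # hidden layer
    out = h @ W2
    rgb = 1.0 / (1.0 + np.exp(-out[:3])) # color squashed into [0, 1]
    density = np.log1p(np.exp(out[3]))   # non-negative density (softplus)
    return rgb, density

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Volume-render one camera ray by alpha-compositing samples along it."""
    ts = np.linspace(near, far, n_samples)
    dt = ts[1] - ts[0]
    color = np.zeros(3)
    transmittance = 1.0                  # fraction of light not yet absorbed
    for t in ts:
        rgb, density = radiance_field(origin + t * direction, direction)
        alpha = 1.0 - np.exp(-density * dt)
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color

# One ray from a camera at the origin, looking down the z-axis.
pixel_color = render_ray(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
print(pixel_color)
```

In a real system, the network's weights are optimized until rays rendered this way reproduce the input snapshots from their known camera positions; that training-and-rendering loop is the part Instant NeRF dramatically speeds up.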

Researchers have been improving these 2D-to-3D models for a few years now, adding more detail to finished renders and increasing rendering speed. Nvidia says its new Instant NeRF model is one of the fastest developed to date, cutting rendering time from a few minutes to a process that finishes "almost instantaneously."

As the technique becomes faster and easier to implement, it could be used for all kinds of tasks, Nvidia says in a blog post describing the work.

"Instant NeRF can be used to create avatars or scenes for virtual worlds, to capture video conference participants and their environment in 3D, or to reconstruct scenes for 3D digital maps," writes Nvidia's Isha Salian. "The technology could be used to train robots and self-driving cars to understand the size and shape of objects in the real world by capturing 2D images or video footage of them. It could also be used in architecture and entertainment to quickly create digital representations of real environments that creators can adapt and build upon." (Sounds like the metaverse is calling.)

Unfortunately, Nvidia hasn't shared many details about its method, so we don't know exactly how many 2D images are needed or how long it takes to render a finished 3D scene (which would also depend on the power of the computer doing the rendering). Still, the technology seems to be advancing rapidly and could have a real impact in the coming years.
