AI converts 2D images into 3D scenes in milliseconds

NVIDIA says its new artificial intelligence takes a handful of 2D images and works out on its own, within seconds, what the 3D scene looks like. Using neural networks, the AI predicts what colors the parts not visible in the photos might have and how the lighting would fall on them.

A 3D model of the scene is then generated in “tens of milliseconds”. According to the company, that is an improvement of several orders of magnitude over earlier versions of the technology: previous models needed hours to analyze the images and then minutes more to render the 3D scene.

NVIDIA builds on an existing technique called Neural Radiance Fields, or NeRF. The latest version, which the company calls Instant NeRF, works a thousand times faster than the original. Remarkably, the underlying technique was first developed only a few years ago.
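To make the idea behind NeRF concrete, here is a minimal illustrative sketch, not NVIDIA's actual code: a radiance field is a small neural network that maps a 3D position and a viewing direction to a color and a density, and a trained network can then be queried at any point to render the scene. The network sizes and randomly initialized weights below are placeholders standing in for a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly initialized two-layer MLP (a stand-in for a trained network).
W1 = rng.normal(size=(5, 32)) * 0.1   # input: x, y, z, theta, phi
b1 = np.zeros(32)
W2 = rng.normal(size=(32, 4)) * 0.1   # output: r, g, b, density
b2 = np.zeros(4)

def radiance_field(position, view_direction):
    """Query the field at a 3D point seen from a given direction."""
    x = np.concatenate([position, view_direction])
    h = np.maximum(W1.T @ x + b1, 0.0)      # ReLU hidden layer
    out = W2.T @ h + b2
    rgb = 1.0 / (1.0 + np.exp(-out[:3]))    # sigmoid keeps colors in [0, 1]
    density = np.log1p(np.exp(out[3]))      # softplus keeps density >= 0
    return rgb, density

# One sample query: a point near the origin, seen from one direction.
rgb, density = radiance_field(np.array([0.1, 0.2, 0.3]), np.array([0.0, 1.0]))
print(rgb, density)
```

Rendering an image means shooting a ray through each pixel, querying the field at many points along the ray, and blending the colors by density; Instant NeRF's speedup comes from how quickly the network can be trained and queried, not from changing this basic recipe.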

Instant NeRF could have a huge impact, says David Luebke, vice president of graphics research at NVIDIA. “If traditional 3D representations are like vector images, NeRFs are like bitmaps: they densely capture the way light radiates from an object. In that sense, Instant NeRF could be as important to 3D as digital cameras and JPEG compression have been to 2D photography.”

According to NVIDIA, the technology can be used to create avatars or scenes for virtual worlds, to let video conference participants capture themselves and their surroundings in 3D, or to reconstruct scenes for 3D digital maps.

