Jul 28, 2017 | By Julia

A team of scientists from Purdue University in Indiana has developed a novel process which uses artificial intelligence (AI) to transform 2D images into 3D models. The technology, called SurfNet, could have applications in the advancement of self-driving vehicles, as well as virtual and augmented reality.

In simplest terms, the process uses machine learning and deep learning to transform the 2D images it registers (through a movie camera, for instance) into 3D shapes and environments. This is achieved by “teaching” the SurfNet system with pairs of 2D images and 3D models, which enables it to predict the 3D versions of other 2D images it encounters.
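The pair-based training described above can be illustrated with a toy sketch. SurfNet itself is a deep neural network whose details are not given in the article; the hypothetical `PairedShapePredictor` below only demonstrates the general idea of learning from (2D image, 3D shape) pairs by retrieving the shape whose paired image best matches a new query image.

```python
import numpy as np

class PairedShapePredictor:
    """Toy illustration of paired 2D-to-3D supervision (not SurfNet's method)."""

    def __init__(self):
        self.images = []   # flattened 2D training images
        self.shapes = []   # corresponding 3D point sets

    def train(self, image, shape):
        """Register one (2D image, 3D shape) training pair."""
        self.images.append(np.asarray(image, dtype=float).ravel())
        self.shapes.append(np.asarray(shape, dtype=float))

    def predict(self, image):
        """Return the 3D shape whose paired image is closest to the query."""
        q = np.asarray(image, dtype=float).ravel()
        dists = [np.linalg.norm(q - im) for im in self.images]
        return self.shapes[int(np.argmin(dists))]

# Two toy pairs: a "car" silhouette and a "box" silhouette.
car_img, car_pts = np.eye(4), np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
box_img, box_pts = np.ones((4, 4)), np.array([[2.0, 2.0, 2.0], [3.0, 3.0, 3.0]])

model = PairedShapePredictor()
model.train(car_img, car_pts)
model.train(box_img, box_pts)

# A slightly noisy car image still retrieves the car's 3D points.
pred = model.predict(np.eye(4) + 0.05)
```

A real system generalizes rather than retrieves, of course, but the supervision signal is the same: each 2D input is tied to a known 3D target.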

Karthik Ramani, the Donald W. Feddersen Professor of Mechanical Engineering at Purdue, explained: "If you show it hundreds of thousands of shapes of something such as a car, if you then show it a 2D image of a car, it can reconstruct that model in 3D. It can even take two 2D images and create a 3D shape between the two, which we call 'hallucination.'"

As Ramani explains, the technology could have various applications in the future, including helping autonomous vehicles better read and understand their environments, improving 3D search on the web, and automatically creating high-quality 3D content for virtual and augmented reality environments.

Think about it: how cool would it be to show the system a photograph of a place and then be transported into a 3D version of it simply by putting on your AR/VR headset? "Pretty soon we will be at a stage where humans will not be able to differentiate between reality and virtual reality,” commented Ramani.

The AI deep learning technique can be compared to how a camera or scanner operates using RGB colors, says the research team, only it uses XYZ coordinates to create a three-dimensional spatial understanding. This method reportedly offers a greater level of accuracy than other 3D deep learning processes, which rely on voxels (volumetric pixels).
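The RGB analogy can be made concrete with a short sketch. Just as a color image stores three color channels per pixel, a surface can be stored as an H×W×3 array whose channels hold X, Y, and Z coordinates, so every "pixel" is a point on the surface. The example below (a hypothetical illustration, not the paper's code) encodes a small patch of the surface z = x² + y² this way and flattens it back into a point set.

```python
import numpy as np

# Encode a surface patch as an "image" whose channels are XYZ coordinates.
H, W = 8, 8
xs = np.linspace(-1.0, 1.0, W)
ys = np.linspace(-1.0, 1.0, H)
X, Y = np.meshgrid(xs, ys)      # per-pixel X and Y coordinates
Z = X**2 + Y**2                 # per-pixel surface height

geometry_image = np.stack([X, Y, Z], axis=-1)   # shape (H, W, 3)

# Each "pixel" is now a 3D point; flattening yields the surface point set.
points = geometry_image.reshape(-1, 3)          # 64 surface points
```

Because the surface lives in a regular 2D grid, standard image-style networks can process it directly, which is one reason a surface representation can pack in far more points than a voxel grid of comparable cost.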

"We use the surfaces instead since it fully defines the shape,” said Ramani. “It's kind of an interesting offshoot of this method. Because we are working in the 2D domain to reconstruct the 3D structure, instead of doing 1,000 data points like you would otherwise with other emerging methods, we can do 10,000 points. We are more efficient and compact.”

In practice, the technology could mean that robots and machines such as self-driving vehicles could be fitted with standard 2D cameras and still have the ability to “understand” their surroundings. At present, however, there is still much work to be done before SurfNet is a viable system.

“To move from the flatland to the 3D world we will need much more basic research,” concluded Ramani. “We are pushing, but the mathematics and computational techniques of deep learning are still being invented and largely an unknown area in 3D."

 

 

Posted in 3D Software