Aug 24, 2017 | By Benedict

Researchers from the Berkeley Artificial Intelligence Research lab have developed a way to create 3D objects from a single 2D color image. The technique uses something called “hierarchical surface prediction” to identify free space, occupied space, and—crucially—boundaries.

At the start of the month, we published an article about research taking place at Canada’s University of British Columbia, where computer scientists had developed software called FlowRep. The software was able to turn 3D shapes into visually informative 2D line sketches that conveyed all the important details of the original geometry.

It has clearly been a big month for the interface between 2D and 3D images, because a new research project has just emerged which explores how 3D objects can be generated from a single 2D image.

The research is taking place at the Berkeley Artificial Intelligence Research lab, where researchers led by Christian Häne have utilized the process of “hierarchical surface prediction” (HSP) to create accurate 3D models from simple 2D images. The new process could have applications in virtual reality, 3D printing, and other fields concerned with 3D models.

Häne explains that “humans have the ability to effortlessly reason about the shapes of objects and scenes even if we only see a single image,” but asks the important question: “How can we teach machines this ability?”

The secret, he says, is in separating the volume elements (voxels) of a scene into three categories: occupied space, free space, and boundaries. Previous attempts to create 3D models from 2D images have only dealt with the first two kinds of space, something that Häne sees as an obstacle to achieving high resolution.

Häne says that the additional use of boundaries “allows us to analyze the outputs at low resolution and only predict a higher resolution of the parts of the volume where there is evidence that it contains the surface." By iterating the refinement procedure, he says, "we hierarchically predict high-resolution voxel grids.”

In other words, rather than treating every single voxel in the volume equally—few of which actually convey useful information about the 3D shape in question—the new approach focuses explicitly on surfaces, which are key to generating the 3D model.
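The refinement idea can be illustrated with a toy sketch. The snippet below is not the authors’ system—HSP predicts voxel labels with a convolutional network trained on images—but a minimal octree-style loop in which a known signed-distance function stands in for the learned predictions. The key step is the same: only blocks classified as containing a boundary get subdivided to higher resolution.

```python
# Toy illustration (not the authors' code) of hierarchical surface
# prediction's core loop: refine only blocks that appear to contain
# a surface boundary. A signed-distance function (negative inside the
# object, positive outside) plays the role of the network's prediction.

FREE, OCCUPIED, BOUNDARY = 0, 1, 2

def classify(x, y, z, size, sdf):
    """Label a cubic block by sampling the SDF at its 8 corners and its
    center: all positive -> free, all negative -> occupied, mixed -> boundary."""
    samples = [sdf(x + dx * size, y + dy * size, z + dz * size)
               for dx in (0.0, 1.0) for dy in (0.0, 1.0) for dz in (0.0, 1.0)]
    samples.append(sdf(x + size / 2, y + size / 2, z + size / 2))
    if all(s > 0 for s in samples):
        return FREE
    if all(s < 0 for s in samples):
        return OCCUPIED
    return BOUNDARY

def hierarchical_predict(sdf, size=1.0, depth=4):
    """Recursively subdivide only BOUNDARY blocks, so fine voxels are
    produced only near the surface. Returns (x, y, z, size, label) leaves."""
    leaves = []
    def refine(x, y, z, size, level):
        label = classify(x, y, z, size, sdf)
        if label != BOUNDARY or level == depth:
            leaves.append((x, y, z, size, label))
            return
        half = size / 2
        for dx in (0.0, half):
            for dy in (0.0, half):
                for dz in (0.0, half):
                    refine(x + dx, y + dy, z + dz, half, level + 1)
    refine(0.0, 0.0, 0.0, size, 0)
    return leaves

# Example: a sphere of radius 0.4 centered in the unit cube.
sphere = lambda x, y, z: ((x - 0.5)**2 + (y - 0.5)**2 + (z - 0.5)**2)**0.5 - 0.4
cells = hierarchical_predict(sphere, depth=4)
boundary_cells = [c for c in cells if c[4] == BOUNDARY]
```

Because free and occupied blocks stop refining early, the number of fine cells grows roughly with the object’s surface area rather than its volume—which is what makes predicting high-resolution grids tractable.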

To test the HSP system, Häne and his team attempted to extract high-resolution geometries from a single color image, contrasting their method with two other predictive techniques: low-resolution hard (LR hard), a binary method; and low-resolution soft (LR soft), a fractional method.

The new technique was able to generate more accurate 3D reconstructions from the initial 2D image than the other methods could.

“The results…show the benefits in terms of surface quality and completeness of the high resolution prediction compared to the low resolution baselines,” Häne says.

And while there is still a long way to go before computational methods like this come close to what the human brain can do, this clever shortcut could be a stepping stone for further important research.

The researchers’ report, “Hierarchical Surface Prediction for 3D Object Reconstruction,” can be accessed here. Its authors were Christian Häne, Shubham Tulsiani, and Jitendra Malik.

Posted in 3D Software