Dec 1, 2015 | By Kira

Researchers at the MIT Media Lab have developed a new polarization technique, called Polarized 3D, that can increase the resolution of standard, commercial 3D scanners by a factor of 1,000. According to the researchers, not only is the technique more affordable than many high-precision, industrial-grade laser 3D scanners while delivering better results, it could also eventually lead to high-quality 3D cameras built into smartphones, incredibly high-resolution 3D prints, and even 3D scanners built into safer, more sensitive self-driving cars.

An undeniable asset to 3D modelers, 3D printing enthusiasts, and a range of other skilled professionals, 3D scanning technology has already come a long way. There are portable 3D scanners for long- and short-range projects, as well as far more advanced 3D scanners that can capture surface texture, color, and even light absorption and reflection. Yet while good 3D scanners and cheap 3D scanners already exist, MIT’s Polarized 3D technology crosses the final frontier: cheap 3D scanners whose quality isn’t just good, it’s unprecedented.

Of all things, this 3D imaging breakthrough was made possible by old-fashioned polarization and a trusty Microsoft Kinect 3D scanner. As MIT News explains, polarization is the physical phenomenon behind polarized sunglasses and most 3D movie systems. Essentially, it concerns the way in which light bounces off physical objects.

"Today, photographers use polarizing filters on 2D cameras to create stunning photos. Polarized 3D probes the question: what if a polarizing filter is used on a 3D camera? The answer: commodity depth sensors operating at millimeter quality, can be enhanced to micron quality, improving resolution to 3 orders of magnitude," explained the researchers.

In order to channel the power of polarized light for 3D scanning purposes, the team at MIT created an algorithm that exploits light’s polarization, measuring the exact orientation of the light that bounces off an object. Calculating surface orientation from measurements of polarized light is admittedly very hard to do, even with their light-calculating formulas; however, the same standard graphics chip found in most video game consoles is capable of doing just that.

The researchers thus used a Microsoft Kinect with an ordinary polarizing photographic lens placed in front of its camera. In each experiment, they took three photos of an object through three differently oriented filters, and their algorithms compared the light intensities of the resulting images. After several experiments, the results were definitive: on its own, the Kinect could already resolve physical features as small as a centimeter across. With the polarization information, however, it became possible to resolve features in the range of hundreds of micrometers, or one-thousandth the previous size.
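The authors' own implementation is not reproduced here, but the underlying arithmetic is straightforward to sketch. Assuming three intensity images captured through a linear polarizer at known angles (the 0°/45°/90° orientations, function names, and refractive index below are illustrative assumptions, not details taken from the paper), each pixel's intensity traces a sinusoid as the polarizer rotates. Fitting that sinusoid yields the angle of polarization, which tracks the surface normal's azimuth, and the degree of polarization, which constrains its zenith angle via the Fresnel equations:

    import numpy as np

    # Illustrative sketch only -- not the authors' code. The filter angles,
    # function names, and refractive index are assumptions for the example.
    POL_ANGLES = np.deg2rad([0.0, 45.0, 90.0])   # three polarizer orientations

    def polarization_cues(images, angles=POL_ANGLES):
        """images: (3, H, W) intensities, one per polarizer angle.
        Returns the per-pixel angle and degree of linear polarization."""
        # Each pixel follows I(a) = A + B*cos(2a) + C*sin(2a); three known
        # angles give a 3x3 linear system, solved once for every pixel.
        M = np.stack([np.ones_like(angles),
                      np.cos(2 * angles),
                      np.sin(2 * angles)], axis=1)
        A, B, C = np.linalg.solve(M, images.reshape(3, -1))
        phi = 0.5 * np.arctan2(C, B)                        # angle of polarization
        rho = np.sqrt(B**2 + C**2) / np.maximum(A, 1e-8)    # degree of polarization
        h, w = images.shape[1:]
        return phi.reshape(h, w), rho.reshape(h, w)

    def zenith_from_dop(rho, n=1.5):
        """Estimate the normal's zenith angle by numerically inverting the
        diffuse Fresnel relation rho(theta) for an assumed refractive index n."""
        theta = np.linspace(0.0, np.pi / 2 - 1e-3, 1000)
        s2 = np.sin(theta) ** 2
        rho_model = ((n - 1 / n) ** 2 * s2) / (
            2 + 2 * n ** 2 - (n + 1 / n) ** 2 * s2
            + 4 * np.cos(theta) * np.sqrt(n ** 2 - s2))
        return np.interp(rho, rho_model, theta)             # rho_model is monotonic

The angle of polarization pins down the normal's azimuth only up to an ambiguity, which is exactly where the coarse Kinect depth helps; and because the fit is an independent 3x3 solve per pixel, it maps naturally onto the kind of graphics chip mentioned above.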

One-upping the Kinect is one thing; as an affordable consumer option, however, it is far from being considered a ‘high-end’ 3D scanner. So, to truly test their technique, the researchers also imaged several test objects with a multi-thousand-dollar, industrial-grade laser scanner. Once again, Polarized 3D offered higher resolution.

“Today, they can miniaturize 3-D cameras to fit on cellphones,” said MIT graduate student Achuta Kadambi, who helped develop the technology. “But they make compromises to the 3D sensing, leading to very coarse recovery of geometry. That’s a natural application for polarization, because you can still use a low-quality sensor, and adding a polarizing filter gives you something that’s better than many machine-shop laser scanners.”

“The work fuses two 3D sensing principles, each having pros and cons,” said Yoav Schechner, an associate professor of electrical engineering at Technion — Israel Institute of Technology in Haifa, Israel. “One principle provides the range for each scene pixel: This is the state of the art of most 3D imaging systems. The second principle does not provide range. On the other hand, it derives the object slope, locally. In other words, per scene pixel, it tells how flat or oblique the object is… The work uses each principle to solve problems associated with the other principle.”
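To make that fusion concrete, one common way to combine the two principles is to treat the coarse depth map as a soft constraint and the polarization-derived normals as target slopes, then solve a small least-squares problem over the image. The sketch below is only an illustration of that idea under assumed names and parameters (the weight, iteration count, and plain gradient-descent solver are made up for the example); the paper's actual formulation also deals with issues such as the normals' azimuthal ambiguity.

    import numpy as np

    def refine_depth(z_coarse, normals, lam=0.1, iters=500, step=0.1):
        """Toy fusion sketch: nudge a coarse depth map so its finite-difference
        gradients agree with the slopes implied by per-pixel normals, while a
        penalty term (weight lam) keeps it close to the coarse measurement.
        z_coarse: (H, W) depth; normals: (H, W, 3) unit surface normals."""
        nx, ny = normals[..., 0], normals[..., 1]
        nz = np.clip(normals[..., 2], 1e-3, None)            # avoid division by zero
        p, q = -nx / nz, -ny / nz                            # target dz/dx, dz/dy
        z = z_coarse.astype(np.float64).copy()
        for _ in range(iters):
            zx = np.diff(z, axis=1, append=z[:, -1:])        # forward differences
            zy = np.diff(z, axis=0, append=z[-1:, :])
            rx, ry = zx - p, zy - q                          # slope residuals
            div = (np.diff(rx, axis=1, prepend=rx[:, :1])    # divergence of residuals
                   + np.diff(ry, axis=0, prepend=ry[:1, :]))
            grad = -div + lam * (z - z_coarse)               # gradient of the objective
            z -= step * grad                                 # plain gradient descent
        return z

With a proper sparse solver in place of the plain gradient descent, the refined depth keeps the coarse map's overall shape while picking up the fine relief encoded in the polarization normals.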

The full details of the research are available in a publicly released MIT Media Lab paper titled “Polarized 3D: High Quality Depth Sensing with Polarization Cues.” Authors Achuta Kadambi, Vage Taamazyan, Boxin Shi, and Ramesh Raskar will also be presenting their findings at the International Conference on Computer Vision in December.

In addition to making extremely cheap yet extremely accurate 3D scanning available to 3D printing enthusiasts, the new Polarized 3D technique could potentially aid in the development of self-driving cars. As MIT explains, many of today’s driverless cars function well under normal light conditions, but once rain, snow, or fog is thrown into the mix, they become serious road hazards. In some tests of their polarization system, however, the team was able to exploit information contained in the interfering waves of light produced by such weather conditions to cope with the resulting scattering. “Mitigating scattering in controlled scenes is a small step,” said Kadambi, “but that’s something I think will be a cool open problem.”

Posted in 3D Scanning