Sep 25, 2014 | By Alec

At this year's European Conference on Computer Vision, held in Zurich, Switzerland from the 6th to the 12th of September, a team from the University of Washington revealed a very interesting new way of making 3D scans of facial features. Not only does their technique achieve a very high level of accuracy, it also avoids the time-consuming training and capturing process that so many current 3D scanning techniques involve. Could this provide the foundation for easy, quick and accurate 3D prints of facial features?

Accurately reconstructing someone's facial features in a 3D rendering is one of the hardest jobs a 3D scanner can face. While various techniques and software packages have been developed over the years, the highly non-rigid nature of the human face, alongside our ability to discern even minute details and geometric flaws, makes it exceptionally difficult to develop a high quality rendering. Furthermore, it is a very time-consuming process and requires an elaborate setup. The person being scanned will have to be locked in a studio for quite some time, while a rig of dozens of cameras is often needed to gather the necessary data. Supermodel Karlie Kloss, for instance, faced a setup of almost a hundred cameras.


Interestingly, the team from the University of Washington avoids all that. Instead of requiring a studio and an endless supply of cameras, they simply exploit imagery that already exists: 'suppose that we had access to a large collection of other photos of the same person captured at different times, with varying pose, expression and lighting. Indeed, most people are captured in numerous photos and videos over their lifetimes; we propose to leverage the total corpus of available imagery of the same person to help reconstruct his/her face in an input video.'

Their approach is called Total Moving Face Reconstruction, and was developed by University of Washington Computer Science & Engineering graduate student Supasorn Suwajanakorn and his supervisors from the CSE department: assistant professor Ira Kemelmacher-Shlizerman and professor Steven Seitz. For those of you who'd like to read more about it, they have kindly released their conference paper online. You can find it here.

Their TMFR approach (my abbreviation) focuses on videos taken under uncontrolled and uncalibrated imaging conditions, specifically YouTube videos of Prince Charles, Arnold Schwarzenegger, Tom Hanks and so on. Data from those videos is supplemented with the large number of photos available per individual, in personal collections or on the internet – and there are thousands of photos of celebrities out there. This is combined with a new dense 3D flow estimation method coupled with shape from shading.

As can be seen in the video, this results in highly detailed and accurate 3D-rendered video sequences that account for varying lighting conditions, head poses and facial expressions.

In this respect, their approach differs markedly from commonly used techniques. 'Virtually all modern 3D face tracking and video reconstruction approaches leverage an assumption that the human face is well represented by a linear combination of blend shapes.' While this assumption does allow scientists to limit the number of parameters and measurements, it results in a low-rank model with limited expressiveness that captures few details.
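To see what that low-rank assumption means in practice, here is a minimal sketch (Python/NumPy, with placeholder data and made-up dimensions; not the team's code) of a blend-shape model: every face the tracker can express is a neutral mesh plus a weighted sum of a small, fixed set of offset meshes, so any detail outside that basis is simply unrepresentable.

```python
import numpy as np

# Hypothetical blend-shape model: a neutral face mesh plus K offset meshes.
# Real trackers fit the K weights per frame; K is small, so the model is low-rank.
n_vertices = 5000                                # assumed mesh resolution
K = 30                                           # assumed number of blend shapes

neutral = np.random.rand(n_vertices, 3)          # placeholder neutral mesh
blendshapes = np.random.rand(K, n_vertices, 3)   # placeholder offset meshes

def blend(weights):
    """Reconstruct a face as neutral + linear combination of blend shapes."""
    return neutral + np.tensordot(weights, blendshapes, axes=1)

weights = np.zeros(K)
weights[3] = 0.7        # e.g. "mouth open" at 70% strength
face = blend(weights)   # (n_vertices, 3) mesh; detail outside the basis is lost
```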

Using a giant corpus of images to compute a person-specific face model instead allows the University of Washington team to build an average model that takes possible shading and lighting variations into account, yielding 'more precise alignment and robust 3D tracking'.

One thought immediately came to mind: surely no one looks the same in every single photo? Fortunately, the answer to this is relatively simple:

While a person's face shape may be slightly different at each time instant, their rough shape (e.g., distance between eyes, nose length, overall geometry), tends to be consistent over time. Hence, we leverage all available imagery (photos and/or video frames) to reconstruct a shape and appearance model of the person that captures their average shape and appearance under a subspace of illuminations.
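One way to picture that 'average shape and appearance under a subspace of illuminations' – a loose sketch of the idea, not the paper's exact pipeline – is a PCA over aligned face crops: the mean image plays the role of the average appearance, and the top few principal components span the lighting variation seen across the corpus.

```python
import numpy as np

# Assumed inputs: N face crops of one person, already aligned and flattened.
N, H, W = 200, 64, 64
rng = np.random.default_rng(0)
photos = rng.random((N, H * W))            # placeholder for real aligned crops

mean_appearance = photos.mean(axis=0)      # the person's average appearance
centered = photos - mean_appearance

# The top-k principal components approximate the illumination subspace.
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
k = 4                                      # low-rank lighting model (assumed)
illum_basis = Vt[:k]                       # (k, H*W), orthonormal rows

def relight_average(frame):
    """Re-light the average appearance to best match a new frame's lighting."""
    coeffs = illum_basis @ (frame - mean_appearance)
    return mean_appearance + illum_basis.T @ coeffs
```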

The 3D rendering itself is achieved through a complex algorithm that mathematically inclined readers can find in Supasorn Suwajanakorn's conference paper. Crucial to it is a 'metric based on photo consistency, i.e., comparing mesh renderings with input video frames. This capability depends critically on being able to match the illumination and shading in each input frame to that of the rendered mesh, a property achieved by our appearance subspace representation.'
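The metric itself is easy to sketch. The toy below (my own simplification, not the paper's renderer) assumes the mesh has been rendered under the corpus's average illumination, so adding the fitted lighting component crudely re-lights it to match the frame before measuring the residual.

```python
import numpy as np

def photo_consistency(rendering, frame, mean_appearance, illum_basis):
    """Lower is better: match the frame's illumination in the low-rank
    appearance subspace, then measure the residual between the re-lit
    mesh rendering and the actual video frame."""
    # Least-squares fit of the frame's lighting in the subspace.
    coeffs, *_ = np.linalg.lstsq(illum_basis.T,
                                 frame - mean_appearance, rcond=None)
    relit = rendering + illum_basis.T @ coeffs   # crude illumination transfer
    return float(np.sum((relit - frame) ** 2))

# Toy usage with random stand-ins for real images and a real mesh rendering.
D, k = 64 * 64, 4
rng = np.random.default_rng(0)
score = photo_consistency(rendering=rng.random(D), frame=rng.random(D),
                          mean_appearance=rng.random(D),
                          illum_basis=rng.random((k, D)))
```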

This rendering process has two steps: in the first, the average shape (computed from the database of images) is deformed to match the motion in each video frame. In the second, the resulting shape is refined using the shading cues in each frame. This means that large-scale, non-rigid motion is first captured relative to the reference frame, after which fine detail such as wrinkles is recovered from shading. All of this, obviously, requires some serious computing.
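In outline, the per-frame loop might look like the following skeleton. Every function name here is my own invention, and the bodies are stubs; the point is only the order of the two steps, not the considerable mathematics inside each one.

```python
import numpy as np

def estimate_dense_3d_flow(frame, shape):
    """Stub for step 1's core: a per-vertex 3D displacement field that
    tracks the large-scale, non-rigid motion visible in the frame."""
    return np.zeros_like(shape)

def refine_with_shading(shape, frame):
    """Stub for step 2: adjust the deformed shape so its predicted shading
    matches the frame (shape from shading), recovering detail like wrinkles."""
    return shape

def reconstruct_frame(frame, average_shape):
    # Step 1: deform the average shape to follow this frame's motion.
    deformed = average_shape + estimate_dense_3d_flow(frame, average_shape)
    # Step 2: refine the deformed shape using the frame's shading cues.
    return refine_with_shading(deformed, frame)

average_shape = np.zeros((5000, 3))                     # assumed mesh vertices
video_frames = [np.zeros((64, 64)) for _ in range(10)]  # stand-in frames
shapes = [reconstruct_frame(f, average_shape) for f in video_frames]
```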

But the results don't lie. The algorithm behind Total Moving Face Reconstruction clearly succeeds in capturing very minute details, like wrinkles and subtly changing facial expressions, on a scale existing software can't match. As the scientists note:

Note the change in facial expression (compared to the average shape) in each frame, e.g., mouth opening, eyes close and open, wrinkles appear and disappear, detail in eye region, and so forth. The approach is robust to very large changes in pose, providing high quality results even for profile views.

This new technique for rendering 3D scans of facial features is thus very promising indeed. Not only can it develop a scan based purely on film footage and photographs – thus removing the need for someone to be personally scanned using expensive equipment – it also achieves very high quality.

While it will take some time before this scanning technique becomes available to a wider audience, it could be very good news for printing enthusiasts the world over. Perhaps in the near future we can just dip into the family archives to scan and print facial reconstructions of our loved ones?

Check out this video detailing Total Moving Face Reconstruction:



Posted in 3D Scanning
