Dec 8, 2016 | By Tess
Personalization in gaming is becoming increasingly popular, as people are now not only free to choose from different avatars in their games, but can actually import images and 3D scans to make their avatars look like themselves. High-profile games like NBA 2K17 have integrated 3D scanning technology for their players, and the trend is becoming ever more pervasive as 3D scanning gets easier.
Despite its growing popularity, 3D scanning is still much more complicated and less accessible than 2D photography. Now, however, thanks to a research project by a team from the University of Southern California, it is possible to turn a 2D photo into an exceptionally detailed 3D rendering. Using deep neural networks, the researchers have demonstrated how they transformed two-dimensional photos of faces into eerily accurate 3D models.
If Muhammad Ali’s black eyes staring out of the photo make you feel ill at ease, you’re certainly not alone, as the 3D model initially gave me chills. Despite the uncanny valley effect the image provokes, the technology is still very impressive. Traditionally, facial mapping requires a series of perfectly lit images of the subject taken from various angles. Using complex neural networks, however, and working with an in-depth “face database,” the researchers were able to generate an extremely detailed 3D facial model based on a single angled, partial photograph. The neural networks automatically generated the face model by filtering through a wide array of possible textures and skin tones.
“A complete and photorealistic texture map can then be synthesized by iteratively optimizing for the reconstructed feature correlations. Using these high-resolution textures and a commercial rendering framework, we can produce high-fidelity 3D renderings that are visually comparable to those obtained with state-of-the-art multi-view face capture systems,” reads the study.
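The “iteratively optimizing for the reconstructed feature correlations” step the study describes is in the same family as Gram-matrix texture synthesis: start from noise and nudge the texture until the channel-wise correlations of its features match a reference. Here is a toy numpy sketch of that idea, not the paper’s actual pipeline; the filter bank, patch sizes, learning rate, and iteration count are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def gram(F):
    # F: (channels, samples) feature map; Gram matrix of channel correlations
    return F @ F.T / F.shape[1]

# Stand-in "deep features": 8 random linear filters over flattened 8x8 patches
filters = rng.standard_normal((8, 64))

def features(patches):
    # patches: (8, 8, n) -> features: (8 channels, n samples)
    return filters @ patches.reshape(64, -1)

reference = rng.random((8, 8, 16))        # 16 reference texture patches
target_gram = gram(features(reference))

x = rng.random((8, 8, 16))                # start synthesis from noise
lr = 1e-4
losses = []
for _ in range(500):
    F = features(x)
    G = gram(F)
    losses.append(np.sum((G - target_gram) ** 2))
    # Gradient of ||G - target||^2 w.r.t. x, via the chain rule
    # (G and target are symmetric, so dL/dF = 4 (G - target) F / n)
    dF = 4.0 * (G - target_gram) @ F / F.shape[1]
    x -= lr * (filters.T @ dF).reshape(x.shape)

print(losses[0], "->", losses[-1])        # correlation mismatch shrinks
```

The real system does this with learned convolutional features at high resolution rather than random linear filters, but the loop is the same shape: measure the correlation mismatch, step the texture toward the target.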
While the potential applications of this new facial 3D modeling technique are manifold, some of the main ones could be in the fields of online gaming and, notably, virtual reality. As the researchers lay out in their paper, they intend to eventually use the technique to generate full-sized 3D avatars for online VR platforms. In other words, you would not need a full 3D scan of a person’s face to generate an accurate-looking 3D virtual model of them, and could feasibly create avatars of historical or famous people from a single photo.
The research project offers a new technique for creating realistic avatars in virtual environments, and who knows, it could even one day be used to create 3D printable models. Think about it: next Halloween you could show up wearing a 3D printed Donald Trump mask—you’ll just have to figure out the hair for yourself.
Posted in 3D Software