Sep 18, 2018
Engineers and scientists at Lawrence Livermore National Laboratory (LLNL), California, have applied convolutional neural networks (CNNs), a class of algorithms widely used to process images and video, to predict flaws in 3D printed parts and determine within milliseconds whether a build will be of satisfactory quality.
“This is a revolutionary way to look at the data that you can label video by video, or better yet, frame by frame,” said principal investigator and LLNL researcher Brian Giera. “The advantage is that you can collect video while you’re printing something and ultimately make conclusions as you’re printing it. A lot of people can collect this data, but they don’t know what to do with it on the fly, and this work is a step in that direction.”
Sensor analysis done after a build is often expensive, Giera explained, and part quality cannot be determined until long after printing is finished. With parts that take days to weeks to print, CNNs could prove valuable for understanding the print process, learning the quality of the part sooner and correcting or adjusting the build in real time if necessary.
LLNL researchers developed the neural networks using about 2,000 video clips of melted laser tracks produced under varying conditions, such as laser speed and power. They scanned the part surfaces with a tool that generated 3D height maps, using that information to train the algorithms to analyze sections of video frames (the filtering operation applied to each section is called a convolution). The process would be too difficult and time-consuming for a human to do manually, Giera explained.
University of California, Berkeley student and LLNL researcher Bodi Yuan developed algorithms that automatically labeled the height maps of each build, and used the same model to predict the width of the build track, whether the track was broken, and the standard deviation of its width. Using the algorithms, researchers were able to take video of in-progress builds and determine whether the part exhibited acceptable quality. The neural networks detected whether a part would be continuous with 93 percent accuracy.
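The core idea described above can be illustrated with a minimal sketch. The article does not publish LLNL's model, so everything below is hypothetical: a hand-written edge kernel stands in for the many filters a trained CNN learns, and a toy brightness rule stands in for the trained continuity classifier.

```python
import numpy as np

def convolve2d(patch, kernel):
    """Valid-mode 2D filtering of a grayscale frame section with a small
    kernel. (As in most CNN libraries, this is technically cross-correlation;
    a real network learns many such kernels from labeled data.)"""
    kh, kw = kernel.shape
    ph, pw = patch.shape
    out = np.zeros((ph - kh + 1, pw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(patch[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical vertical-edge kernel: responds where the bright melt track
# meets the darker powder bed.
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

def track_is_continuous(frame, threshold=0.5):
    """Toy stand-in for the trained classifier (assumption for illustration):
    a continuous track keeps a bright band in every frame row."""
    return bool(np.all(frame.max(axis=1) > threshold))
```

A frame with an unbroken bright track down one column would be flagged continuous, while the same frame with a dark row (a gap in the track) would not; the real system makes this call per frame, in milliseconds, during the build.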
“The key to our success is that CNNs can learn lots of useful features of videos during the training by itself,” Yuan said. “We only need to feed a huge amount of data to train it and make sure it learns well.”
LLNL researchers have spent years collecting various forms of real-time data on the laser powder-bed fusion metal 3D-printing process, including video, optical tomography and acoustic sensors.
“We were collecting video anyway, so we just connected the dots,” Giera said. “Just like the human brain uses vision and other senses to navigate the world, machine learning algorithms can use all that sensor data to navigate the 3D printing process.”
The neural networks could theoretically be used in other 3D printing systems, Giera said. Other researchers should be able to follow the same formula: create parts under different conditions, collect video, and scan the parts to generate height maps, producing a labeled video set that can be used with standard machine-learning techniques.
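That labeling recipe can be sketched as a short pipeline. The thresholds and label fields here are assumptions for illustration, not LLNL's actual scheme, but they mirror the quantities the article mentions: track width, its standard deviation, and continuity.

```python
import numpy as np

def label_from_height_map(height_map, min_height=0.2):
    """Derive a label from a height-map scan (hypothetical threshold):
    per-row track width statistics and whether the track is unbroken."""
    # Count, in each row, the columns where deposited material exceeds
    # a minimum height -- a simple proxy for local track width.
    widths = (height_map > min_height).sum(axis=1)
    return {
        "mean_width": float(widths.mean()),
        "std_width": float(widths.std()),
        "continuous": bool(np.all(widths > 0)),
    }

def build_training_set(frames, height_maps):
    """Pair each video frame with the label derived from its scan,
    yielding (frame, label) examples for a standard ML pipeline."""
    return [(f, label_from_height_map(h)) for f, h in zip(frames, height_maps)]
```

Once a set of (frame, label) pairs exists, any standard supervised-learning toolkit can train a model to predict the labels from video alone, which is the step that lets quality be judged during the print rather than after it.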
Giera said work still needs to be done to detect voids within parts that can’t be predicted with height map scans but could be measured using ex situ X-ray radiography.
The researchers also plan to create algorithms that incorporate sensing modalities beyond image and video.
“Right now, any type of detection is considered a huge win. If we can fix it on the fly, that is the greater end goal,” Giera said. “Given the volumes of data we’re collecting that machine learning algorithms are designed to handle, machine learning is going to play a central role in creating parts right the first time.”
The project was funded by the Laboratory Directed Research and Development program.
Source: LLNL