Scientists and engineers are constantly developing new materials with unique properties that can be used for 3D printing, but figuring out how to print with these materials can be a complex and expensive puzzle.
Often, an expert operator must rely on manual trial and error—perhaps making thousands of prints—to determine settings that consistently print a new material well. These settings include printing speed and how much material the printer deposits.
MIT researchers have now used artificial intelligence to streamline this process. They developed a machine learning system that monitors the manufacturing process using computer vision and then corrects errors in the way it processes the material in real time.
They used simulations to teach a neural network how to adjust print settings to reduce errors, then applied that controller to a real 3D printer. Their system printed objects more accurately than all the other 3D printing controllers they evaluated.
The work avoids the prohibitively expensive process of printing thousands or millions of real objects to train a neural network. It could also make it easier for engineers to incorporate novel materials into their prints, helping them develop objects with specific electrical or chemical properties, and could help technicians adjust the printing process on the fly if hardware or environmental conditions change unexpectedly.
“This project is really the first demonstration of building a manufacturing system that uses machine learning to learn complex control policies,” says senior author Wojciech Matusik, professor of electrical engineering and computer science at MIT, who heads the Computational Design and Fabrication Group (CDFG) within the Computer Science and Artificial Intelligence Laboratory (CSAIL). “If you have smarter manufacturing machines, they can adapt to changing workplace environments in real time, improving yield or system accuracy. You can get more out of the machine.”
Co-lead authors are Mike Foshey, a mechanical engineer and project manager at the CDFG, and Michal Piovarči, a postdoc at the Institute of Science and Technology Austria. MIT co-authors include Jie Xu, a graduate student in electrical engineering and computer science, and Timothy Erps, a former CDFG technical associate. The research will be presented at the Association for Computing Machinery’s SIGGRAPH conference.
Determining the ideal parameters for a digital manufacturing process can be one of the most expensive parts of the process because so much trial and error is required. And once a technician finds a combination that works well, those settings are only ideal for that particular situation. The technician has little data on how the material will perform in other environments, on different hardware, or if a new batch exhibits different properties.
Using a machine learning system also presents many challenges. First, the researchers had to measure what was happening at the printer in real time.
To do this, they developed a machine-vision system using two cameras aimed at the nozzle of the 3D printer. The system shines light at the material as it is deposited and, based on how much light passes through, calculates the material’s thickness.
“You can think of the vision system as a pair of eyes watching the process in real time,” Foshey explains.
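The thickness measurement the article describes, inferring how much material sits in the light path from how much light gets through, can be illustrated with the Beer-Lambert law. This is a minimal sketch, not the researchers' actual calibration: the attenuation coefficient here is a made-up placeholder that in practice would be fit per material.

```python
import math

def estimate_thickness(intensity, incident_intensity, attenuation=2.0):
    """Estimate deposited-material thickness from transmitted light.

    Assumes Beer-Lambert attenuation: I = I0 * exp(-attenuation * thickness),
    so thickness = -ln(I / I0) / attenuation. The attenuation coefficient
    (per mm) is an illustrative value, not a measured one.
    """
    transmittance = max(intensity / incident_intensity, 1e-9)  # avoid log(0)
    return -math.log(transmittance) / attenuation

# If half the light passes through, thickness = ln(2) / 2 ≈ 0.347 mm
print(round(estimate_thickness(0.5, 1.0), 3))
```

Less light reaching the camera means a thicker deposit, which is the signal the controller reacts to.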
The controller then processes the images received from the vision system and, based on any errors it detects, adjusts the feed rate and the direction of the printer.
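The closed loop described above can be sketched as a simple proportional adjustment: deposit too thickly and the feed rate drops, too thinly and it rises. This is only a hand-written stand-in for the learned controller in the article; the gain and the clamping bounds are invented for illustration.

```python
def adjust_feed_rate(measured_thickness, target_thickness, feed_rate, gain=0.5):
    """Proportional feedback on the printer's feed rate.

    Slows the feed when the vision system reports over-deposition and
    speeds it up on under-deposition. Gain and clamp limits are
    illustrative assumptions, not values from the paper.
    """
    error = measured_thickness - target_thickness
    new_rate = feed_rate * (1.0 - gain * error / target_thickness)
    return min(max(new_rate, 0.1), 2.0)  # clamp to plausible machine limits
```

The researchers' neural-network controller learns a far richer mapping than this single gain, but the direction of each correction is the same.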
But training a neural network-based controller to understand this manufacturing process is data-intensive and would require making millions of prints. So the researchers built a simulator instead.
Simulating success
To train their controller, they used a process known as reinforcement learning, in which the model learns through trial and error with a reward. The model was tasked with selecting print settings that would create a specific object in a simulated environment. After being shown the expected output, the model was rewarded when the settings it chose minimized the error between its print and the expected outcome.
In this case, an “error” means the model either dispensed too much material, placing it in areas that should have been left open, or too little, leaving open spots that should have been filled. As the model performed more simulated prints, it updated its control policy to maximize the reward, becoming more and more accurate.
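The reward structure described above can be sketched in a few lines. This toy replaces both the researchers' simulator and their reinforcement-learning algorithm with much simpler stand-ins (a one-parameter "print" and random hill climbing), purely to show the trial-and-error-with-reward loop; every name and number here is illustrative.

```python
import random
import numpy as np

def simulate_print(target, deposition_rate):
    # Toy stand-in for the researchers' simulator: the deposited
    # material simply scales with the chosen rate.
    return target * deposition_rate

def reward(printed, target):
    # Negative deposition error: penalizes material placed where the
    # target is open as well as gaps where material was expected.
    return -float(np.abs(printed - target).mean())

# Trial-and-error search over one print setting, keeping any change
# that improves the reward.
target = np.ones((4, 4))   # the expected output
rng = random.Random(0)
rate = 0.5                 # deliberately poor starting setting
best = reward(simulate_print(target, rate), target)
for _ in range(200):
    candidate = rate + rng.uniform(-0.1, 0.1)
    r = reward(simulate_print(target, candidate), target)
    if r > best:
        rate, best = candidate, r
# The searched rate drifts toward 1.0, where print and target match.
```

In the actual work the policy maps camera observations to print settings and is trained with reinforcement learning rather than this naive search, but the reward signal, minimizing the mismatch between print and target, plays the same role.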
However, the real world is messier than a simulation. In practice, the conditions change due to slight variations or noise in the printing process. The researchers therefore created a digital model that approximates the noise of the 3D printer. They used this model to add noise to the simulations, leading to more realistic results.
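Injecting noise into the simulator, as the paragraph above describes, can be sketched as perturbing each commanded deposition before the simulated result is scored. The multiplicative Gaussian form and the 5 percent standard deviation are assumptions for illustration; the researchers fit their noise model to a real printer.

```python
import numpy as np

def noisy_deposit(commanded, rng):
    # Perturb the commanded deposition with multiplicative Gaussian
    # noise, a stand-in for the fitted printer noise model; the 5%
    # standard deviation is an illustrative guess, not a measured value.
    return commanded * rng.normal(1.0, 0.05)

# A controller trained on these perturbed depositions never sees a
# perfectly repeatable printer, which is what makes the learned policy
# transfer to messy real hardware.
rng = np.random.default_rng(0)
samples = [noisy_deposit(0.3, rng) for _ in range(10_000)]
```

On average the deposits still match the command, but any single deposit is slightly off, mimicking the run-to-run variation of a physical machine.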
“What was interesting to us was that, by implementing this noise model, we were able to transfer the control policy that was purely trained in simulation onto hardware without any physical tests,” Foshey says. “Afterward, we didn’t need to do any fine-tuning on the actual equipment.”
When they tested the controller, it printed objects more accurately than any other control method they evaluated. It performed especially well on infill printing, which fills the interior of an object. Some other controllers deposited so much material that the printed object bulged upward; the researchers’ controller adjusted the print path so the object stayed level.
Their control policy can even learn how materials spread after being deposited and adjust settings accordingly.
“We were also able to design control policies that can handle a variety of materials on the fly. So if you have a manufacturing process and want to change the material, you don’t have to revalidate the process. All you have to do is load the new material and the controller adjusts automatically,” says Foshey.
Now that they have demonstrated the effectiveness of this technique for 3D printing, the researchers want to develop controllers for other manufacturing processes. They also want to see how the approach can be adapted to scenarios in which multiple layers of material, or multiple materials, are printed at the same time. Additionally, their method assumed that each material has a fixed viscosity, but a future iteration could use AI to sense and adjust for viscosity in real time.
Other co-authors of the work include Vahid Babaei, who leads the Artificial Intelligence Aided Design and Manufacturing Group at the Max Planck Institute for Informatics; Piotr Didyk, associate professor at the Università della Svizzera italiana in Lugano, Switzerland; Szymon Rusinkiewicz, the David M. Siegel ’83 Professor of Computer Science at Princeton University; and Bernd Bickel, professor at the Institute of Science and Technology Austria.
The work was supported in part by the FWF Lise-Meitner Program, a European Research Council Starting Grant, and the US National Science Foundation.