Feature Stories
Powering Sustainable Energy Research
Whereas the LHC seeks to answer fundamental questions about the universe, another global physics experiment underway has a much more practical focus: designing a fusion reactor that can meet the world's power needs without the drawbacks of current energy sources.
Fusion has been touted for decades as the holy grail of energy production: a way to generate power by merging atomic nuclei and releasing massive amounts of energy, the way the Sun does. Experiments at the EUROfusion Joint European Torus (JET) have brought fusion close to the "break-even" milestone, the point at which a reactor produces as much energy as is put in.
The next step toward net energy gain will be taken by the International Thermonuclear Experimental Reactor (ITER), a joint effort among seven member governments. Currently under construction in France, this $25 billion experimental facility is designed to produce 10 to 20 times more power than it uses. The reactor is scheduled to be operational by 2025.
An especially urgent and challenging problem facing the development of a fusion reactor is the need to reliably predict and avoid large-scale plasma disruptions, which can damage the machine.
After years of trying to predict disruptions using physics models and simulations, researchers still struggled to match the dynamics in a real reactor.
"If you try to use conventional theoretical methods, buttressed by high performance computing, you still aren't going to be able to make predictions," said William Tang, principal research physicist at the Princeton Plasma Physics Laboratory — the U.S. DOE National Lab for fusion studies. "You needed the impact of big data analytics that can deal with a lot of data that's relevant to disruptions."
To accelerate progress, the Princeton AI/Deep Learning Team led by Julian Kates-Harbeck, Alexey Svyatkovskiy, and Tang developed the Fusion Recurrent Neural Net (FRNN) code and successfully deployed deep learning, demonstrating significant advances in disruption prediction.
"We adopted a supervised machine learning approach," he said. "This means that everything involves real physics events — with the pre-disruption classifiers determined by first-principles based physics." They have demonstrated that their code can predict disruption events with better than 90 percent accuracy and less than 5 percent false positives and can do so more than 30 milliseconds before disruptions are triggered.
Furthermore, they showed for the first time that they could train neural networks on signals from one reactor and make accurate predictions on a much larger device. They can also now make predictions well over 30 milliseconds before a disruption occurs, giving the device a much larger window in which to avoid the disruption.
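In broad strokes, a disruption predictor of this kind is a recurrent network that reads a window of multi-channel diagnostic signals from a shot and outputs an alarm score. The sketch below is only illustrative and is not the team's FRNN code; the signal count, window length, network sizes, and training details are assumptions chosen to keep the example self-contained and runnable.

```python
# Illustrative sketch (not the actual FRNN code): a recurrent network that
# reads a window of multi-channel diagnostic time series and emits a single
# disruption "alarm" score. Channel count, window length, and hyperparameters
# below are placeholder assumptions, not values from the Princeton team.
import torch
import torch.nn as nn

class DisruptionPredictor(nn.Module):
    def __init__(self, n_signals=14, hidden=64):
        super().__init__()
        # LSTM consumes the time series of plasma diagnostics
        # (e.g., plasma current, density, magnetic-fluctuation amplitudes).
        self.lstm = nn.LSTM(n_signals, hidden, batch_first=True)
        # Linear head turns the final hidden state into one logit:
        # "disruption coming within the alarm window" vs. "safe".
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time_steps, n_signals)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)   # one score per window

# Supervised training on labeled shots: windows drawn from shots that later
# disrupted are labeled 1, windows from non-disruptive shots are labeled 0.
model = DisruptionPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Synthetic stand-in data: 32 windows, 200 time steps, 14 diagnostic channels.
signals = torch.randn(32, 200, 14)
labels = torch.randint(0, 2, (32,)).float()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(signals), labels)
    loss.backward()
    optimizer.step()
```

In a setup like this, the threshold applied to the alarm score is what trades off missed disruptions against false alarms, and the usable warning time depends on how far ahead of the disruption the positive labels are placed in the training data.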
For several years, Tang has used TACC systems to develop a well-known code that simulates particle behavior in burning plasmas. He intends to extend this research on Frontera to develop a control system capable of avoiding disruptions in ITER. He is particularly excited about Frontera's hybrid design, which can support both HPC simulations and machine learning/deep learning, and their possible integration.
"We'll look forward to bringing these application domains where we've had experience with TACC in the past to be integrated into the exciting area of AI and deep learning execution on Frontera," he said.