
Using artificial intelligence to speed up and improve the most computationally intensive aspects of plasma physics in fusion research

The intricate dance of atoms fusing and releasing energy has fascinated scientists for decades. Now, human ingenuity and artificial intelligence are coming together at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) to solve one of humankind’s most pressing issues: generating clean, reliable energy from fusing plasma.

Unlike traditional computer code, machine learning — a type of artificially intelligent software — isn’t simply a list of instructions. Machine learning is software that can analyze data, infer relationships between features, learn from this new knowledge and adapt. PPPL researchers believe this ability to learn and adapt could improve their control over fusion reactions in various ways. This includes perfecting the design of vessels surrounding the super-hot plasma, optimizing heating methods and maintaining stable control of the reaction for increasingly long periods.
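To make that distinction concrete, here is a toy sketch of the fit-then-predict pattern that separates machine learning from a fixed list of instructions. The variable names and data are invented for illustration and are not the Lab’s code:

```python
# Toy contrast with traditional code: no rule is written by hand; the model
# infers the input-output relationship from data. Synthetic data only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
heating_power = rng.uniform(1.0, 10.0, size=(200, 1))              # hypothetical input
core_temp = 3.2 * heating_power[:, 0] + rng.normal(0.0, 0.5, 200)  # hypothetical output

model = LinearRegression().fit(heating_power, core_temp)  # "learn" the relationship
print(model.predict(np.array([[5.0]])))                   # ~16: inferred, not hard-coded
```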

The Lab’s artificial intelligence research is already yielding significant results. In a new paper published in Nature Communications, PPPL researchers explain how they used machine learning to avoid magnetic perturbations, or disruptions, which destabilize fusion plasma.

“The results are particularly impressive because we were able to achieve them on two different tokamaks using the same code,” said PPPL Staff Research Physicist SangKyeun Kim, the lead author of the paper. A tokamak is a donut-shaped device that uses magnetic fields to hold a plasma.

“There are instabilities in plasma that can lead to severe damage to the fusion device. We can’t have those in a commercial fusion vessel. Our work advances the field and shows that artificial intelligence could play an important role in managing fusion reactions going forward, avoiding instabilities while allowing the plasma to generate as much fusion energy as possible,” said Egemen Kolemen, associate professor in Princeton’s Department of Mechanical and Aerospace Engineering, jointly appointed with the Andlinger Center for Energy and the Environment and PPPL.

Important decisions must be made every millisecond to control a plasma and keep a fusion reaction going. Kolemen’s system can make those decisions far faster than a human and automatically adjust the settings for the fusion vessel so the plasma is properly maintained. The system can predict disruptions, figure out what settings to change and then make those changes all before the instabilities occur.
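A schematic of that millisecond cycle might look like the sketch below. The function names, risk threshold and interfaces are hypothetical stand-ins, not the actual control system on any tokamak:

```python
import time

def control_loop(read_sensors, predict, apply_settings, period_s=0.001):
    """Schematic millisecond control cycle: sense the plasma, let an ML model
    estimate disruption risk and suggest settings, and act before onset."""
    while True:
        t0 = time.perf_counter()
        state = read_sensors()            # current plasma measurements
        risk, suggested = predict(state)  # ML inference: risk score + new settings
        if risk > 0.5:                    # hypothetical action threshold
            apply_settings(suggested)     # adjust the vessel before instability
        remaining = period_s - (time.perf_counter() - t0)
        if remaining > 0:                 # hold the 1 ms cadence
            time.sleep(remaining)
```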

Kolemen notes that the results are also impressive because, in both cases, the plasma was in a high-confinement mode. Also known as H-mode, this occurs when a magnetically confined plasma is heated enough that its confinement suddenly and significantly improves and turbulence at its edge effectively disappears. H-mode is the hardest mode to stabilize but also the mode that will be necessary for commercial power generation.

The system was successfully deployed on two tokamaks, DIII-D and KSTAR, which both achieved H-mode without instabilities. This is the first time researchers have achieved this feat in a reactor setting relevant to what will be needed to deploy fusion power on a commercial scale.

Machine learning code that detects and eliminates plasma instabilities was deployed in the two tokamaks shown above: DIII-D and KSTAR. (Credit: General Atomics and Korea Institute of Fusion Energy)

PPPL has a significant history of using artificial intelligence to tame instabilities. PPPL Principal Research Physicist William Tang and his team were the first to demonstrate the ability to transfer this process from one tokamak to another in 2019.

“Our work achieved breakthroughs using artificial intelligence and machine learning together with powerful, modern high-performance computing resources to integrate vast quantities of data in thousandths of a second and develop models for dealing with disruptive physics events well before their onset,” Tang said. “You can’t effectively combat disruptions in more than a few milliseconds. That would be like starting to treat a fatal cancer after it’s already too far along.”

The work was detailed in an influential paper published in Nature in 2019. Tang and his team continue to work in this area, with an emphasis on eliminating disruptions in tokamaks in real time using machine learning models trained on properly verified and validated observational data.

A new twist on stellarator design

PPPL’s artificial intelligence projects for fusion extend beyond tokamaks. PPPL’s Head of Digital Engineering, Michael Churchill, uses machine learning to improve the design of another type of fusion reactor, a stellarator. If tokamaks look like donuts, stellarators could be seen as the crullers of the fusion world, with their more complex, twisted design.

“We need to leverage a lot of different codes when we’re validating the design of a stellarator. So the question becomes, ‘What are the best codes for stellarator design and the best ways to use them?’” Churchill said. “It’s a balancing act between the level of detail in the calculations and how quickly they produce answers.”

Current simulations for tokamaks and stellarators come close to the real thing but aren’t yet twins. “We know that our simulations are not 100% true to the real world,” Churchill said. “Many times, we know that there are deficiencies. We think that it captures a lot of the dynamics that you would see on a fusion machine, but there’s quite a bit that we don’t.”

Churchill said that, ideally, you want a digital twin: a system with a feedback loop between simulated digital models and real-world data captured in experiments. “In a useful digital twin, that physical data could be used and leveraged to update the digital model in order to better predict what future performance would be like.”
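In code, the update step of such a feedback loop can be sketched as re-fitting a simulation’s free parameter against measured data. The square-root stand-in model and the parameter name below are illustrative assumptions, not a PPPL workflow:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def simulate(transport_coeff, inputs):
    """Stand-in for a physics simulation with one free parameter."""
    return transport_coeff * np.sqrt(inputs)

def update_twin(inputs, measured):
    """Re-fit the parameter so the digital model matches experiment."""
    loss = lambda c: float(np.mean((simulate(c, inputs) - measured) ** 2))
    return minimize_scalar(loss, bounds=(0.1, 10.0), method="bounded").x

# Each experiment feeds back into the model, improving future predictions.
inputs = np.linspace(1.0, 9.0, 20)
measured = 2.5 * np.sqrt(inputs) + np.random.default_rng(1).normal(0.0, 0.05, 20)
print(update_twin(inputs, measured))  # recovers ~2.5 from the "experimental" data
```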

Unsurprisingly, mimicking reality requires a lot of very sophisticated code. The challenge is that the more complicated the code, the longer it typically takes to run. For example, a commonly used code called X-Point Included Gyrokinetic Code (XGC) can only run on advanced supercomputers, and even then, it doesn’t run quickly. “You’re not going to run XGC every time you run a fusion experiment unless you have a dedicated exascale supercomputer. We’ve probably run it on 30 to 50 plasma discharges [of the thousands we have run],” Churchill said.

That’s why Churchill uses artificial intelligence to accelerate different codes and the optimization process itself. “We would really like to do higher-fidelity calculations but much faster so that we can optimize quickly,” he said.
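One common acceleration pattern, sketched here under assumptions, is the surrogate model: run the slow code offline on a limited set of inputs, then train a fast approximator on the results. The expensive_code placeholder below stands in for something like XGC but is not its real interface:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_code(x):
    """Placeholder for a slow, high-fidelity calculation (not XGC itself)."""
    return np.sin(x[:, 0]) * np.exp(-x[:, 1])

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 2.0, size=(500, 2))   # e.g., 500 precomputed runs
y = expensive_code(X)

# The surrogate trades a little fidelity for orders-of-magnitude speed,
# making it cheap enough to call inside an optimization loop.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X, y)
print(surrogate.predict(X[:3]), y[:3])     # quick sanity check of agreement
```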

Coding to optimize code

Similarly, Research Physicist Stefano Munaretto’s team is using artificial intelligence to accelerate a code called HEAT, which was originally developed by the DOE’s Oak Ridge National Laboratory and the University of Tennessee-Knoxville for PPPL’s tokamak NSTX-U.

HEAT is being updated so that the plasma simulation will be 3D, matching the 3D computer-aided design (CAD) model of the tokamak divertor. Located at the base of the fusion vessel, the divertor extracts heat and ash generated during the reaction. A 3D plasma model should enhance understanding of how different plasma configurations can impact heat fluxes, or the movement patterns of heat, in the tokamak. Understanding the movement of heat for a specific plasma configuration can provide insights into how heat will likely travel in a future discharge with a similar plasma.

By optimizing HEAT, the researchers hope to quickly run the complex code between plasma shots, using information about the last shot to decide the next.

“This would allow us to predict the heat fluxes that will appear in the next shot and to potentially reset the parameters for the next shot so the heat flux isn’t too intense for the divertor,” Munaretto said. “This work could also help us design future fusion power plants.”
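A between-shot workflow of that kind might be sketched as follows, with a hypothetical fast heat flux estimate used to screen candidate settings against a divertor limit. All names and numbers here are invented for illustration:

```python
DIVERTOR_LIMIT_MW_M2 = 10.0   # hypothetical engineering limit

def predicted_peak_flux(settings, last_shot):
    """Stand-in for an accelerated HEAT evaluation (illustrative only)."""
    return (2.0 * settings["power_mw"] / settings["flux_width_mm"]
            + 0.1 * last_shot["peak_flux"])

def choose_next_shot(candidates, last_shot):
    """Keep only candidate settings whose predicted peak heat flux is safe,
    then pick the gentlest of those for the next discharge."""
    safe = [c for c in candidates
            if predicted_peak_flux(c, last_shot) < DIVERTOR_LIMIT_MW_M2]
    return min(safe, key=lambda c: predicted_peak_flux(c, last_shot), default=None)

last_shot = {"peak_flux": 6.0}
candidates = [{"power_mw": 8.0, "flux_width_mm": 3.0},
              {"power_mw": 8.0, "flux_width_mm": 2.0}]
print(choose_next_shot(candidates, last_shot))   # picks the safer configuration
```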

PPPL Associate Research Physicist Doménica Corona Rivera has been deeply involved in the effort to optimize HEAT. The key is narrowing down a wide range of input parameters to just four or five so the code will be streamlined yet highly accurate. “We have to ask, ‘Which of these parameters are meaningful and are going to really be impacting heat?'” said Corona Rivera. Those are the key parameters used to train the machine learning program.
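One standard way to do that narrowing, though not necessarily the team’s exact method, is to rank candidate inputs by how strongly they drive the predicted output and keep only the top few. The parameter names and data below are synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
names = ["strike_angle", "flux_width", "input_power", "density", "gas_rate", "drift"]
X = rng.normal(size=(400, len(names)))
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] + X[:, 2]    # only a few inputs matter here

model = RandomForestRegressor(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranked = sorted(zip(names, result.importances_mean), key=lambda p: -p[1])
print(ranked[:4])   # the handful of parameters worth training on
```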

With support from Churchill and Munaretto, Corona Rivera has already greatly reduced the time it takes to run HEAT’s heat flux calculations while keeping the results roughly 90% consistent with those from the original version of the code. “It’s instantaneous,” she said.

Finding the right conditions for ideal heating

Researchers are also trying to find the best conditions for heating the plasma’s ions by perfecting a technique known as ion cyclotron radio frequency heating (ICRF), which uses radio waves to heat these comparatively heavy particles.

Plasma has different properties, such as density, pressure, temperature and the intensity of the magnetic field. These properties change how the waves interact with the plasma’s particles, determining both the paths the waves take and the regions where they deposit their heat. Quantifying these effects is crucial to controlling the radio frequency heating of the plasma so that researchers can ensure the waves move efficiently through the plasma to heat it in the right areas.

The problem is that the standard codes used to simulate the plasma and radio wave interactions are very complicated and run too slowly to be used to make real-time decisions.

“Machine learning brings us great potential here to optimize the code,” said Álvaro Sánchez Villar, an associate research physicist at PPPL. “Basically, we can control the plasma better because we can predict how the plasma is going to evolve, and we can correct it in real-time.”

The project focuses on trying different kinds of machine learning to speed up a widely used physics code. Sánchez Villar and his team showed multiple accelerated versions of the code for different fusion devices and types of heating. The models can find answers in microseconds instead of minutes with minimal impact on the accuracy of the results. Sánchez Villar and his team were also able to use machine learning to eliminate challenging scenarios with the optimized code.
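That kind of speedup can be sanity-checked with a simple timing harness like the one below, which measures per-call inference on a small trained network. This is a generic benchmark sketch, not Sánchez Villar’s ICRF surrogate:

```python
import time
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 5))
y = X.sum(axis=1)                                 # simple target for the demo
surrogate = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000,
                         random_state=0).fit(X, y)

x = rng.normal(size=(1, 5))
t0 = time.perf_counter()
for _ in range(1000):
    surrogate.predict(x)                          # what a real-time loop would call
per_call_us = (time.perf_counter() - t0) / 1000 * 1e6
print(f"{per_call_us:.0f} microseconds per evaluation")
```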

Sánchez Villar says the code’s accuracy, “increased robustness” and acceleration make it well suited for integrated modeling, in which many physics codes are used together, and real-time control applications, which are crucial for fusion research.

Enhancing our understanding of the plasma’s edge

PPPL Principal Research Physicist Fatima Ebrahimi is the principal investigator on a four-year project for the DOE’s Advanced Scientific Computing Research program, part of the Office of Science. The project uses experimental data from various tokamaks, plasma simulation data and artificial intelligence to study the behavior of the plasma’s edge during fusion. The team hopes their findings will reveal the most effective ways to confine a plasma in a commercial-scale tokamak.

While the project has multiple goals, the aim is clear from a machine learning perspective. “We want to explore how machine learning can help us take advantage of all our data and simulations so we can close the technological gaps and integrate a high-performance plasma into a viable fusion power plant system,” Ebrahimi said.

Tokamaks worldwide have gathered a wealth of experimental data while operating in a state free from large-scale instabilities at the plasma’s edge known as edge-localized modes (ELMs). Such momentary, explosive ELMs need to be avoided because they can damage the inner components of a tokamak, draw impurities from the tokamak walls into the plasma and make the fusion reaction less efficient. The question is how to achieve an ELM-free state in a commercial-scale tokamak, which will be much larger and run much hotter than today’s experimental tokamaks.

Ebrahimi and her team will combine the experimental results with information from plasma simulations that have already been validated against experimental data to create a hybrid database. The database will then be used to train machine learning models on plasma management, and those models can in turn be used to update the simulations.

“There is some back and forth between the training and the simulation,” Ebrahimi explained. By running a high-fidelity simulation of the machine learning model on supercomputers, the researchers can then hypothesize about scenarios beyond those covered by the existing data. This could provide valuable insights into the best ways to manage the plasma’s edge on a commercial scale.
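The hybrid-database idea can be sketched as pooling the two data sources into one training set. The feature names, toy labels and classifier choice below are illustrative assumptions, not the project’s actual schema:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(5)
# hypothetical edge features: [pedestal pressure, edge current, shaping]
X_exp = rng.normal(size=(300, 3))                     # experimental discharges
X_sim = rng.normal(size=(700, 3))                     # validated simulation runs
y_exp = (X_exp[:, 0] + X_exp[:, 1] > 0).astype(int)   # 1 = ELM-free (toy label)
y_sim = (X_sim[:, 0] + X_sim[:, 1] > 0).astype(int)

X = np.vstack([X_exp, X_sim])                         # the hybrid training set
y = np.concatenate([y_exp, y_sim])
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Promising but unexplored regimes flagged by the model can then be handed
# back to high-fidelity simulation, the "back and forth" described above.
print(model.predict_proba(np.array([[1.0, 0.5, 0.0]]))[0, 1])
```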

This research was conducted with the following DOE grants: DE-SC0020372, DE-SC0024527, DE-AC02-09CH11466, DE-AC52-07NA27344, DE-AC05-00OR22725, DE-FG02-99ER54531, DE-SC0022270, DE-SC0022272, DE-SC0019352 and DE-FC02-04ER54698. This research was also supported by the research and design program of KSTAR Experimental Collaboration and Fusion Plasma Research (EN2401-15) through the Korea Institute of Fusion Energy.

This story includes contributions by John Greenwald.

