More stories

  • Next-generation sustainable electronics are doped with air

    Semiconductors are the foundation of all modern electronics. Now, researchers at Linköping University, Sweden, have developed a new method where organic semiconductors can become more conductive with the help of air as a dopant. The study, published in the journal Nature, is a significant step towards future cheap and sustainable organic semiconductors.
    “We believe this method could significantly influence the way we dope organic semiconductors. All components are affordable, easily accessible, and potentially environmentally friendly, which is a prerequisite for future sustainable electronics,” says Simone Fabiano, associate professor at Linköping University.
    Semiconductors based on conductive plastics instead of silicon have many potential applications. Among other things, organic semiconductors can be used in digital displays, solar cells, LEDs, sensors, implants, and for energy storage.
    To enhance conductivity and modify semiconductor properties, so-called dopants are typically introduced. These additives facilitate the movement of electrical charges within the semiconductor material and can be tailored to induce positive (p-doping) or negative (n-doping) charges. The most common dopants used today are often either very reactive (unstable), expensive, challenging to manufacture, or all three.
    Now, researchers at Linköping University have developed a doping method that works at room temperature, in which an otherwise inefficient dopant, oxygen from the air, serves as the primary dopant and light activates the doping process.
    “Our approach was inspired by nature, as it shares many analogies with photosynthesis, for example. In our method, light activates a photocatalyst, which then facilitates electron transfer from a typically inefficient dopant to the organic semiconductor material,” says Simone Fabiano.
    The new method involves dipping the conductive plastic into a special salt solution — a photocatalyst — and then illuminating it with light for a short time. The duration of illumination determines the degree to which the material is doped. Afterwards, the solution is recovered for future use, leaving behind a p-doped conductive plastic in which the only consumed substance is oxygen in the air.

    This is possible because the photocatalyst acts as an “electron shuttle,” taking electrons from the material or donating them to it in the presence of sacrificial weak oxidants or reductants. Such shuttles are common in chemistry but have not previously been used in organic electronics.
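    As a schematic illustration of the cycle described above, the p-doping branch can be sketched as below. The labels PC (photocatalyst) and OSC (organic semiconductor) are generic placeholders, not the exact species reported in the paper.

```latex
% Schematic photocatalytic p-doping cycle implied by the description above.
% PC = photocatalyst, OSC = organic semiconductor; oxygen is the only species consumed.
\[
  \mathrm{PC} \xrightarrow{\;h\nu\;} \mathrm{PC}^{*}, \qquad
  \mathrm{PC}^{*} + \mathrm{OSC} \;\longrightarrow\; \mathrm{PC}^{\bullet-} + \mathrm{OSC}^{\bullet+}
  \quad (\text{p-doped semiconductor})
\]
\[
  \mathrm{PC}^{\bullet-} + \mathrm{O}_{2} \;\longrightarrow\; \mathrm{PC} + \mathrm{O}_{2}^{\bullet-}
  \quad (\text{photocatalyst regenerated})
\]
```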
    “It’s also possible to combine p-doping and n-doping in the same reaction, which is quite unique. This simplifies the production of electronic devices, particularly those where both p-doped and n-doped semiconductors are required, such as thermoelectric generators. All parts can be manufactured at once and doped simultaneously instead of one by one, making the process more scalable,” says Simone Fabiano.
    The doped organic semiconductor has better conductivity than traditional semiconductors, and the process can be scaled up. Simone Fabiano and his research group at the Laboratory of Organic Electronics showed earlier in 2024 how conductive plastics could be processed from environmentally friendly solvents like water; the new doping method is the next step in that effort.
    “We are at the beginning of trying to fully understand the mechanism behind it and what other potential application areas exist. But it’s a very promising approach showing that photocatalytic doping is a new cornerstone in organic electronics,” says Simone Fabiano, a Wallenberg Academy Fellow.

  • Tech can’t replace human coaches in obesity treatment

    A new Northwestern Medicine study shows that technology alone can’t replace the human touch to produce meaningful weight loss in obesity treatment.
    “Giving people technology alone for the initial phase of obesity treatment produces unacceptably worse weight loss than giving them treatment that combines technology with a human coach,” said corresponding study author Bonnie Spring, director of the Center for Behavior and Health and professor of preventive medicine at Northwestern University Feinberg School of Medicine.
    The need for low-cost but effective obesity treatments delivered by technology has become urgent as the ongoing obesity epidemic exacerbates burgeoning health care costs.
    But current technology is not advanced enough to replace human coaches, Spring said.
    In the new SMART study, people who initially only received technology without coach support were less likely to have a meaningful weight loss, considered to be at least 5% of body weight, compared to those who had a human coach at the start.
    Investigators intensified treatment quickly (by adding resources after just two weeks) if a person showed less than optimal weight loss, but the weight loss disadvantage for those who began their weight loss effort without coach support persisted for six months, the study showed.
    The study will be published May 14 in JAMA.

    Eventually more advanced technology may be able to supplant human coaches, Spring said.
    “At this stage, the average person still needs a human coach to achieve clinically meaningful weight loss goals because the tech isn’t sufficiently developed yet,” Spring said. “We may not be so far away from having an AI chat bot that can sub for a human, but we are not quite there yet. It’s within reach. The tech is developing really fast.”
    Previous research showed that mobile health tools for tracking diet, exercise and weight increase engagement in behavioral obesity treatment. Before this new study, it wasn’t clear whether they produced clinically acceptable weight loss in the absence of support from a human coach.
    Scientists are now trying to parse what human coaches do that makes them successful, and how AI can better imitate a human, not just in terms of content but in emotional tone and context awareness, Spring said.
    Surprising results
    “We predicted that starting treatment with technology alone would save money and reduce burden without undermining clinically beneficial weight loss, because treatment augmentation occurred so quickly once poor weight loss was detected,” Spring said. “That hypothesis was disproven, however.”
    Drug and surgical interventions also are available for obesity but have some drawbacks. “They’re very expensive, convey medical risks and side effects and aren’t equitably accessible,” Spring said. Most people who begin taking a GLP-1 agonist stop taking the drug within a year against medical advice, she noted.

    Many people can achieve clinically meaningful weight loss without antiobesity medications, bariatric surgery or even behavioral treatment, Spring said. In the SMART study, 25% of people who began treatment with technology alone achieved 5% weight loss after six months without any treatment augmentation. (In fact, the team had to take back the study technologies after three months to recycle them for new participants.)
    An unsolved problem in obesity treatment is matching treatment type and intensity to individuals’ needs and preferences. “If we could just tell ahead of time who needs which treatment at what intensity, we might start to manage the obesity epidemic,” Spring said.
    How the study worked
    The SMART Weight Loss Management study was a randomized controlled trial that compared two different stepped care treatment approaches for adult obesity. Stepped care offers a way to spread treatment resources across more of the population in need. The treatment that uses the least resources but that will benefit some people is delivered first; then treatment is intensified only for those who show insufficient response. Half of participants in the SMART study began their weight loss treatment with technology alone. The other half began with gold standard treatment involving both technology and a human coach.
    The technology used in the SMART trial was a Wireless Feedback System (an integrated app, Wi-Fi scale and Fitbit) that participants used to track and receive feedback about their diet, activity and weight.
    Four hundred adults with obesity, between the ages of 18 and 60, were randomly assigned to three months of stepped care behavioral obesity treatment beginning with either the Wireless Feedback System (WFS) alone or the WFS plus telehealth coaching. Weight loss was measured after two, four and eight weeks of treatment, and treatment was intensified at the first sign of suboptimal weight loss (less than 0.5 pounds per week).
    Treatment for both groups began with the same WFS tracking technology, but standard-of-care treatment also transmitted the participant’s digital data to a coach, who used it to provide behavioral coaching by telehealth. Those showing suboptimal weight loss in either group were re-randomized once to either of two levels of treatment intensification: modest (adding an inexpensive technology component — supportive messaging) or vigorous (adding both messaging plus a more costly traditional weight loss treatment component — coaching for those who hadn’t received it, meal replacement for those who’d already received coaching).
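    To make the stepped-care logic concrete, here is a minimal sketch of the decision rule described above (illustrative only, with made-up weights and a simplified one-time re-randomization; it is not the trial’s software):

```python
# Illustrative sketch of the stepped-care decision rule described above (made-up
# weights and a simplified one-time re-randomization; not the trial's software).
import random

SUBOPTIMAL_LBS_PER_WEEK = 0.5        # threshold named in the study description

def weekly_rate(weights, week):
    """Average pounds lost per week up to the given check-in week."""
    return (weights[0] - weights[week]) / week

def stepped_care(weights, has_coach, checkins=(2, 4, 8)):
    """Return the treatment components a participant ends up with."""
    components = ["WFS"] + (["telehealth coaching"] if has_coach else [])
    for week in checkins:
        if weekly_rate(weights, week) < SUBOPTIMAL_LBS_PER_WEEK:
            if random.random() < 0.5:                 # modest intensification
                components.append("supportive messaging")
            else:                                     # vigorous intensification
                components.append("supportive messaging")
                components.append("meal replacement" if has_coach else "telehealth coaching")
            break                                     # participants are re-randomized only once
    return components

# Example: weekly weights (lbs) for a participant who starts with technology alone.
weights = [220.0, 219.8, 219.7, 219.5, 219.4, 219.2, 219.0, 218.9, 218.8]
print(stepped_care(weights, has_coach=False))
```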

  • Simulating diffusion using ‘kinosons’ and machine learning

    Researchers from the University of Illinois Urbana-Champaign have recast diffusion in multicomponent alloys as a sum of individual contributions, called “kinosons.” Using machine learning to compute the statistical distribution of the individual contributions, they were able to model the alloy and calculate its diffusivity orders of magnitude more efficiently than computing whole trajectories. This work was recently published in the journal Physical Review Letters.
    “We found a much more efficient way to calculate diffusion in solids, and at the same time, we learned more about the fundamental processes of diffusion in that same system,” says materials science & engineering professor Dallas Trinkle, who led this work, along with graduate student Soham Chattopadhyay.
    Diffusion in solids is the process by which atoms move throughout a material. The production of steel, ions moving through a battery and the doping of semiconductor devices are all things that are controlled by diffusion.
    Here, the team modeled diffusion in multicomponent alloys, which are metals composed of five different elements — manganese, cobalt, chromium, iron and nickel in this research — in equal amounts. These types of alloys are interesting because one way to make strong materials is to mix different elements together, much as carbon is added to iron to make steel. Multicomponent alloys have unique properties, such as good mechanical behavior and stability at high temperatures, so it is important to understand how atoms diffuse in these materials.
    To get a good look at diffusion, long timescales are needed since atoms randomly move around and, over time, their displacement from the starting point will grow. “If somebody tries to simulate the diffusion, it’s a pain because you have to run the simulation for a very long time to get the full picture,” Trinkle says. “That really limits a lot of the ways that we can study diffusion. More accurate methods for calculating transition rates often can’t be used because you wouldn’t be able to do enough steps of a simulation to get the long-time trajectory and get a reasonable value of diffusion.”
    An atom might jump to the left but then it might jump back to the right. In that case, the atom hasn’t moved anywhere. Now, say it jumps left, then 1000 other things happen, then it jumps back to the right. That’s the same effect. Trinkle says, “We call that correlation because at one point the atom made one jump and then later it undid that jump. That’s what makes diffusion complicated. When we look at how machine learning is solving the problem, what it’s really doing is it’s changing the problem into one where there aren’t any of these correlated jumps.”
    Therefore, any jump that an atom makes contributes to diffusion and the problem becomes a lot easier to solve. “We call those jumps kinosons, for little moves,” Trinkle says. “We’ve shown that you can extract the distribution of those, the probability of seeing a kinoson of a certain magnitude, and add them all up to get the true diffusivity. On top of that you can tell how different elements are diffusing in a solid.”
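    The idea can be illustrated with a toy model (a hedged sketch, not the authors’ machine-learning workflow: the one-dimensional walk, reversal probability and analytic correlation factor below are stand-ins for the learned kinoson distribution):

```python
# Toy illustration of the contrast described above: estimating a 1D diffusivity
# from one long, correlated-jump trajectory versus summing independent,
# kinoson-like contributions.
import numpy as np

rng = np.random.default_rng(0)
a, rate = 1.0, 1.0          # jump length and jump rate (arbitrary units)
p_reverse = 0.6             # chance the next jump simply undoes the previous one

# --- Long-trajectory route: accumulate a correlated random walk and use its MSD ---
n_steps = 200_000
steps = np.empty(n_steps)
steps[0] = rng.choice([-a, a])
for i in range(1, n_steps):
    steps[i] = -steps[i - 1] if rng.random() < p_reverse else rng.choice([-a, a])
positions = np.cumsum(steps)
msd_per_step = np.mean(positions**2 / np.arange(1, n_steps + 1))
D_trajectory = 0.5 * rate * msd_per_step      # noisy; converges only for very long runs

# --- Kinoson-like route: sum uncorrelated contributions directly, D = (1/2) * sum(rate * length^2) ---
f_corr = (1 - p_reverse) / (1 + p_reverse)    # exact correlation factor for this toy walk
D_kinoson = 0.5 * rate * a**2 * f_corr

print(f"trajectory estimate: {D_trajectory:.3f}   kinoson-style estimate: {D_kinoson:.3f}")
```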
    Another advantage of modeling diffusion using kinosons and machine learning is that it is significantly faster than calculating long-timescale, whole trajectories. Trinkle says that with this method, simulations can be done 100 times faster than with normal methods.
    “I think this method is really going to change the way we think about diffusion,” he says. “It’s a different way to look at the problem and I hope that in the next 10 years, this will be the standard way of looking at diffusion. To me, one of the exciting things is not just that it works faster, but you also learn more about what’s happening in the system.”

  • Virtual reality becomes more engaging when designers use cinematic tools

    Cinematography techniques can significantly increase user engagement with virtual environments and, in particular, the aesthetic appeal of what users see in virtual reality.
    This was the result of a recent study conducted by computer scientists at the University of Helsinki. The results were published in May at the ACM Conference on Human Factors in Computing Systems (CHI).
    The team aimed to investigate how principles of composition and continuity, commonly used in filmmaking, could be utilized to enhance navigation around virtual environments.
    Composition refers to how the elements in a scene are oriented with respect to the viewer, whereas continuity is about how camera positions between shots can help viewers to understand spatial relationships between elements in the scene.
    “Using these ideas, we developed a new teleportation method for exploring virtual environments that subtly repositions and reorientates the user’s viewpoint after teleportation to better frame the contents of the scene,” says Alan Medlar, University Researcher in computer science at the University of Helsinki.
    The images show how this differs from regular teleportation used in modern VR games: from the same starting point (top images), regular teleportation moves the user forward while retaining the same orientation (middle images), whereas cinematic techniques can increase the visual appeal of the environment (bottom images).
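    A minimal sketch of such composition-aware reorientation (an illustrative example, not the researchers’ implementation; the rule-of-thirds offset and field of view below are assumptions) might look like this:

```python
# Minimal sketch of composition-aware reorientation after a teleport: aim the
# camera so a chosen point of interest lands on a rule-of-thirds line instead of
# dead center. Illustrative only; not the method or parameters from the paper.
import math

def framed_yaw(dest, poi, horizontal_fov_deg=90.0, thirds_offset=1.0 / 6.0):
    """Yaw (radians) for a camera at `dest` that frames `poi` off-center.

    dest, poi: (x, z) positions on the ground plane.
    thirds_offset: fraction of the view width to shift the point of interest
                   away from center (1/6 of the width reaches a thirds line).
    """
    center_yaw = math.atan2(poi[0] - dest[0], poi[1] - dest[1])   # yaw that would center it
    return center_yaw - math.radians(horizontal_fov_deg) * thirds_offset

# Example: teleport to the origin with a landmark at (3, 4).
print(math.degrees(framed_yaw((0.0, 0.0), (3.0, 4.0))))
```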
    Sense of space preserved without motion sickness
    The results also address the issue of motion sickness — a common problem for VR users. Usually, to prevent nausea, designers use teleportation as a method for moving through digital spaces. The researchers’ approach is also based on teleportation, but it aims to fix the problems associated with this technique.

    “In virtual environments, teleportation can lead to reduced spatial awareness, forcing users to reorient themselves after teleporting and can cause them to miss important elements in their surroundings,” says Medlar.
    “The cinematography techniques we used give the designers of virtual environments a way to influence users’ attention as they move around the space to affect how they perceive their surroundings,” he continues.
    Implications for gaming, museums, and movies
    The research carries substantial implications for a range of VR applications, especially as the affordability of VR headsets keeps improving. Video games, virtual museums, galleries, and VR movies could all benefit from these findings, utilizing the proposed methods to craft more engaging and coherent experiences for their users.
    Medlar believes the results will be of practical use to virtual reality designers.
    “The potential impact of improving navigation in VR and giving designers more tools to affect user experience is huge.”

  • Scientists create an ‘optical conveyor belt’ for quasiparticles

    Using interference between two lasers, a research group led by scientists from RIKEN and NTT Research has created an ‘optical conveyor belt’ that can move polaritons — a type of light-matter hybrid particle — in semiconductor-based microcavities. This work could lead to the development of new devices with applications in areas such as quantum metrology and quantum information.
    For the current study, published in Nature Photonics, the scientists used the interference between two lasers to create a dynamic potential energy landscape — imagine a landscape of valleys and hills, in constant repeating motion — for a coherent, laser-like state of polaritons known as a polariton condensate. They achieved this by introducing a new optical tool — an optical conveyor belt — that enables control of the energy landscape, specifically the lattice depth and the interactions between neighboring particles. By tuning the frequency difference between the two lasers, the researchers set the conveyor belt moving at speeds of the order of 0.1 percent of the speed of light, driving the polaritons into a new state.
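    The underlying relation is a textbook two-beam interference result, shown here as an illustration; the specific wavevectors and detunings used in the experiment are not quoted from the paper.

```latex
% Textbook two-beam interference relation, for illustration only.
% Two lasers with wavevectors k_1, k_2 and frequencies \omega_1, \omega_2 produce
\[
  I(x,t) \;\propto\; \cos\!\big[(k_1 - k_2)\,x - (\omega_1 - \omega_2)\,t\big],
\]
% i.e. an intensity lattice that moves at
\[
  v \;=\; \frac{\omega_1 - \omega_2}{k_1 - k_2} \;=\; \frac{\Delta\omega}{\Delta k},
\]
% so the tunable frequency difference between the lasers sets the conveyor-belt speed.
```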
    Non-reciprocity — a phenomenon where system dynamics are different in opposing directions — is a crucial ingredient for creating what is known as an artificial topological phase of matter. Topology is the mathematical classification of objects by counting the number of ‘holes’: a donut or a knot may have a finite number of holes, for example, while a ball has none. Quantum materials can also be engineered with a non-zero topology, which in this case is more abstractly embedded into the band structure. Such materials can exhibit behaviors such as dissipationless transport, in which charges move without energy loss, and other exotic quantum phenomena. It is extremely challenging to introduce non-reciprocity into engineered optical platforms, and this simple, extendible experimental demonstration opens new opportunities for emerging quantum technologies to incorporate functional topology.
    The research group, including first author Yago del Valle Inclan Redondo, and led by Senior Research Scientist Michael Fraser, both from RIKEN CEMS and NTT Research, together with collaborators from Germany, Singapore and Australia, have conducted a study in this direction. Fraser says, “We have created a topological state of light in a semiconductor structure by a mechanism involving rapid modulation of the energy landscape, resulting in the introduction of a synthetic dimension.” A synthetic dimension is a method of mapping a non-spatial dimension, in this case time, into a space-like dimension, such that the system dynamics can evolve in a higher number of dimensions and become better suited to realizing topological matter. This work extends upon a technique developed by the group, published last year (see “Optically Driven Rotation of Exciton-Polariton Condensates”), which similarly used temporally modulated lasers to drive the rapid rotation of polariton condensates.
    Using this simple experimental scheme involving the interference between two lasers, the scientists were able to organize polaritons in precisely the right dimensions to create an artificial band structure, meaning that the particles organized into energy bands like electrons in a material. By tuning the dimensions, depth and speed of the polariton optical lattice, control over the band structure is achieved. Thanks to this rapid motion, the polaritons see a different potential energy landscape depending on whether they are propagating with or against the flow of the lattice, an effect which is analogous to the Doppler shift for sound. This asymmetric response of the confined polaritons breaks time-reversal symmetry, driving non-reciprocity and the formation of a topological band structure.
    “Photonic states with topological properties can be used in advanced opto-electronic devices where topology might greatly improve the performance of optical devices, circuits, and networks, such as by reducing noise and lasing threshold powers, and dissipationless optical waveguiding. Further, the simplicity and robustness of our technique opens new opportunities for the development of topological photonic devices with applications in quantum metrology and quantum information,” concludes Fraser.

  • New transit station in Japan significantly reduced cumulative health expenditures

    The declining population in Osaka is related to an aging society that is driving up health expenditures. Dr. Haruka Kato, a junior associate professor at Osaka Metropolitan University, teamed up with the Future Co-creation Laboratory at Japan System Techniques Co., Ltd. to conduct natural experiments on how a new train station might impact healthcare expenditures.
    JR-Sojiji Station opened in March 2018 in a suburban city on the West Japan Railway line connecting Osaka and Kyoto. The researchers used a causal impact algorithm to analyze the medical expenditure data gathered from the time series medical dataset REZULT provided by Japan System Techniques.
    Their results indicate that opening this mass transit station was significantly associated with a decrease in average healthcare expenditures per capita of approximately 99,257.31 Japanese yen (USD 929.99) over four years, with US dollar figures based on March 2018 exchange rates. The 95% confidence interval put the four-year decrease at between JPY 62,119.02 ($582.02) and JPY 136,194.37 ($1,276.06). This study’s findings are consistent with previous studies suggesting that increased access to transit might increase physical activity among transit users. The results provided evidence for the effectiveness of opening a mass transit station from the viewpoint of health expenditures.
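    The flavor of such a before-and-after counterfactual analysis can be sketched as follows (entirely synthetic numbers and a simple linear counterfactual; the study itself applied a causal impact algorithm to the REZULT dataset):

```python
# Sketch of a before-and-after counterfactual comparison in the spirit of the
# analysis above. Everything here is synthetic and simplified (a linear pre-trend
# instead of the study's causal impact model and real medical expenditure data).
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(72)                      # 2 synthetic years pre-opening + 4 years post
opening = 24                                # station "opens" at month 24
trend = 40_000 + 150 * months               # synthetic monthly expenditure per capita (JPY)
observed = trend + rng.normal(0, 800, months.size)
observed[opening:] -= 1_500                 # synthetic post-opening reduction

# Fit the pre-opening trend only, then extrapolate it as the counterfactual.
slope, intercept = np.polyfit(months[:opening], observed[:opening], 1)
counterfactual = intercept + slope * months[opening:]

cumulative_effect = np.sum(observed[opening:] - counterfactual)
print(f"estimated cumulative change over the post period: {cumulative_effect:,.0f} JPY")
```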
    “From the perspective of evidence-based policymaking, there is a need to assess the social impact of urban designs,” said Dr. Kato. “Our findings are an important achievement because they enable us to assess this impact from the perspective of health care expenditures, as in the case of JR-Sojiji Station.”

  • Artificial intelligence tool detects male-female-related differences in brain structure

    Artificial intelligence (AI) computer programs that process MRI results show differences in how the brains of men and women are organized at a cellular level, a new study shows. These variations were spotted in white matter, tissue primarily located in the human brain’s innermost layer, which fosters communication between regions.
    Men and women are known to experience multiple sclerosis, autism spectrum disorder, migraines, and other brain issues at different rates and with varying symptoms. A detailed understanding of how biological sex impacts the brain is therefore viewed as a way to improve diagnostic tools and treatments. However, while brain size, shape, and weight have been explored, researchers have only a partial picture of the brain’s layout at the cellular level.
    Led by researchers at NYU Langone Health, the new study used an AI technique called machine learning to analyze thousands of MRI brain scans from 471 men and 560 women. Results revealed that the computer programs could accurately distinguish between biological male and female brains by spotting patterns in structure and complexity that were invisible to the human eye. The findings were validated by three different AI models designed to identify biological sex using their relative strengths in either zeroing in on small portions of white matter or analyzing relationships across larger regions of the brain.
    “Our findings provide a clearer picture of how a living, human brain is structured, which may in turn offer new insight into how many psychiatric and neurological disorders develop and why they can present differently in men and women,” said study senior author and neuroradiologist Yvonne Lui, MD.
    Lui, a professor and vice chair for research in the Department of Radiology at NYU Grossman School of Medicine, notes that previous studies of brain microstructure have largely relied on animal models and human tissue samples. In addition, the validity of some of these past findings has been called into question for relying on statistical analyses of “hand-drawn” regions of interest, meaning researchers needed to make many subjective decisions about the shape, size, and location of the regions they chose. Such choices can potentially skew the results, says Lui.
    The new study results, publishing online May 14 in the journal Scientific Reports, avoided that problem by using machine learning to analyze entire groups of images without asking the computer to inspect any specific spot, which helped to remove human biases, the authors say.
    For the research, the team started by feeding the AI programs existing brain scans from healthy men and women while telling the programs the biological sex of each scan. Since these models were designed to use complex statistical and mathematical methods to get “smarter” over time as they accumulated more data, they eventually “learned” to distinguish biological sex on their own. Importantly, the programs were restricted from using overall brain size and shape to make their determinations, says Lui.
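    A rough sketch of that kind of supervised workflow, with synthetic stand-in features, is shown below; it is not the authors’ three models, whose real inputs were white-matter diffusion measures derived from the MRIs.

```python
# Rough sketch of a supervised sex-classification workflow with synthetic
# stand-in features (not the authors' models; real inputs would be regional
# white-matter diffusion measures, with brain size and shape excluded).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects, n_features = 1031, 200                  # 471 men + 560 women in the study
X = rng.normal(size=(n_subjects, n_features))       # placeholder for diffusion-derived features
y = rng.integers(0, 2, size=n_subjects)             # biological sex label for each scan

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print("cross-validated accuracy:", scores.round(2), "mean:", scores.mean().round(2))
```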

    According to the results, all of the models correctly identified the sex of subject scans between 92% and 98% of the time. Several features in particular helped the machines make their determinations, including how easily and in what direction water could move through brain tissue.
    “These results highlight the importance of diversity when studying diseases that arise in the human brain,” said study co-lead author Junbo Chen, MS, a doctoral candidate at NYU Tandon School of Engineering.
    “If, as has been historically the case, men are used as a standard model for various disorders, researchers may miss out on critical insight,” added study co-lead author Vara Lakshmi Bayanagari, MS, a graduate research assistant at NYU Tandon School of Engineering.
    Bayanagari cautions that while the AI tools could report differences in brain-cell organization, they could not reveal which sex was more likely to have which features. She adds that the study classified sex based on genetic information and only included MRIs from cis-gendered men and women.
    According to the authors, the team next plans to explore the development of sex-related brain structure differences over time to better understand environmental, hormonal, and social factors that could play a role in these changes.
    Funding for the study was provided by the National Institutes of Health grants R01NS119767, R01NS131458, and P41EB017183, as well as by the United States Department of Defense grant W81XWH2010699.
    In addition to Lui, Chen, and Bayanagari, other NYU Langone Health and NYU researchers involved in the study were Sohae Chung, PhD, and Yao Wang, PhD.

  • Using artificial intelligence to speed up and improve the most computationally intensive aspects of plasma physics in fusion

    The intricate dance of atoms fusing and releasing energy has fascinated scientists for decades. Now, human ingenuity and artificial intelligence are coming together at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) to solve one of humankind’s most pressing issues: generating clean, reliable energy from fusing plasma.
    Unlike traditional computer code, machine learning — a type of artificially intelligent software — isn’t simply a list of instructions. Machine learning is software that can analyze data, infer relationships between features, learn from this new knowledge and adapt. PPPL researchers believe this ability to learn and adapt could improve their control over fusion reactions in various ways. This includes perfecting the design of vessels surrounding the super-hot plasma, optimizing heating methods and maintaining stable control of the reaction for increasingly long periods.
    The Lab’s artificial intelligence research is already yielding significant results. In a new paper published in Nature Communications, PPPL researchers explain how they used machine learning to avoid magnetic perturbations, or disruptions, which destabilize fusion plasma.
    “The results are particularly impressive because we were able to achieve them on two different tokamaks using the same code,” said PPPL Staff Research Physicist SangKyeun Kim, the lead author of the paper. A tokamak is a donut-shaped device that uses magnetic fields to hold a plasma.
    “There are instabilities in plasma that can lead to severe damage to the fusion device. We can’t have those in a commercial fusion vessel. Our work advances the field and shows that artificial intelligence could play an important role in managing fusion reactions going forward, avoiding instabilities while allowing the plasma to generate as much fusion energy as possible,” said Egemen Kolemen, associate professor in the department of mechanical and aerospace engineering, jointly appointed with the Andlinger Center for Energy and the Environment and the PPPL.
    Important decisions must be made every millisecond to control a plasma and keep a fusion reaction going. Kolemen’s system can make those decisions far faster than a human and automatically adjust the settings for the fusion vessel so the plasma is properly maintained. The system can predict disruptions, figure out what settings to change and then make those changes all before the instabilities occur.
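    In schematic terms, that loop looks something like the sketch below (hypothetical diagnostic names, thresholds and actuator adjustments; this is not PPPL’s controller):

```python
# Schematic sketch of a real-time, ML-assisted control cycle of the kind described
# above. All names, thresholds and adjustments are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Actuators:
    heating_power_mw: float
    rmp_coil_current_ka: float   # current in 3D field coils used to tame instabilities

def predict_instability_risk(diagnostics, actuators):
    """Stand-in for a trained ML model mapping plasma state to a 0-1 risk score."""
    # A real controller would evaluate a trained model here, well within a millisecond.
    return min(1.0, 0.2 + 0.1 * diagnostics["edge_pressure_gradient"]
                       - 0.05 * actuators.rmp_coil_current_ka)

def control_step(diagnostics, actuators, risk_limit=0.5):
    """One control cycle: act preemptively if predicted risk crosses the limit."""
    risk = predict_instability_risk(diagnostics, actuators)
    if risk > risk_limit:
        actuators.rmp_coil_current_ka += 0.1      # strengthen the stabilizing field
        actuators.heating_power_mw *= 0.98        # back off heating slightly
    return actuators

acts = Actuators(heating_power_mw=5.0, rmp_coil_current_ka=1.0)
acts = control_step({"edge_pressure_gradient": 4.0}, acts)
print(acts)
```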
    Kolemen notes that the results are also impressive because, in both cases, the plasma was in a high-confinement mode. Also known as H-mode, this occurs when a magnetically confined plasma is heated enough that the confinement of the plasma suddenly and significantly improves, and the turbulence at the plasma’s edge effectively disappears. H-mode is the hardest mode to stabilize but also the mode that will be necessary for commercial power generation.

    The system was successfully deployed on two tokamaks, DIII-D and KSTAR, which both achieved H-mode without instabilities. This is the first time that researchers achieved this feat in a reactor setting that is relevant to what will be needed to deploy fusion power on a commercial scale.
    Machine learning code that detects and eliminates plasma instabilities was deployed in the two tokamaks DIII-D and KSTAR. (Credit: General Atomics and Korean Institute of Fusion Energy)
    PPPL has a significant history of using artificial intelligence to tame instabilities. PPPL Principal Research Physicist William Tang and his team were the first to demonstrate the ability to transfer this process from one tokamak to another in 2019.
    “Our work achieved breakthroughs using artificial intelligence and machine learning together with powerful, modern high-performance computing resources to integrate vast quantities of data in thousandths of a second and develop models for dealing with disruptive physics events well before their onset,” Tang said. “You can’t effectively combat disruptions in more than a few milliseconds. That would be like starting to treat a fatal cancer after it’s already too far along.”
    The work was detailed in an influential paper published in Nature in 2019. Tang and his team continue to work in this area, with an emphasis on eliminating real-time disruptions in tokamaks using machine learning models trained on properly verified and validated observational data.
    A new twist on stellarator design
    PPPL’s artificial intelligence projects for fusion extend beyond tokamaks. PPPL’s Head of Digital Engineering, Michael Churchill, uses machine learning to improve the design of another type of fusion reactor, a stellarator. If tokamaks look like donuts, stellarators could be seen as the crullers of the fusion world with a more complex, twisted design.

    “We need to leverage a lot of different codes when we’re validating the design of a stellarator. So the question becomes, ‘What are the best codes for stellarator design and the best ways to use them?'” Churchill said. “It’s a balancing act between the level of detail in the calculations and how quickly they produce answers.”
    Current simulations for tokamaks and stellarators come close to the real thing but aren’t yet twins. “We know that our simulations are not 100% true to the real world. Many times, we know that there are deficiencies. We think that it captures a lot of the dynamics that you would see on a fusion machine, but there’s quite a bit that we don’t.”
    Churchill said ideally, you want a digital twin: a system with a feedback loop between simulated digital models and real-world data captured in experiments. “In a useful digital twin, that physical data could be used and leveraged to update the digital model in order to better predict what future performance would be like.”
    Unsurprisingly, mimicking reality requires a lot of very sophisticated code. The challenge is that the more complicated the code, the longer it typically takes to run. For example, a commonly used code called X-Point Included Gyrokinetic Code (XGC) can only run on advanced supercomputers, and even then, it doesn’t run quickly. “You’re not going to run XGC every time you run a fusion experiment unless you have a dedicated exascale supercomputer. We’ve probably run it on 30 to 50 plasma discharges [of the thousands we have run],” Churchill said.
    That’s why Churchill uses artificial intelligence to accelerate different codes and the optimization process itself. “We would really like to do higher-fidelity calculations but much faster so that we can optimize quickly,” he said.
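    A generic version of that surrogate-model strategy is sketched below (a hedged illustration with a toy stand-in for the physics code; it is not XGC or any PPPL workflow):

```python
# Hedged sketch of surrogate modeling: run the expensive simulation on a modest
# number of parameter settings, fit a fast ML regressor to those results, and use
# the regressor inside an optimization loop where the full code would be too slow.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulation(params):
    """Stand-in for a slow, high-fidelity physics code (e.g., hours per run)."""
    x, y = params
    return np.sin(3 * x) * np.exp(-y) + 0.5 * x * y

rng = np.random.default_rng(0)
train_params = rng.uniform(0, 1, size=(40, 2))          # ~40 affordable full runs
train_values = np.array([expensive_simulation(p) for p in train_params])

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
surrogate.fit(train_params, train_values)

# The surrogate can now be queried huge numbers of times, e.g. for design optimization.
candidates = rng.uniform(0, 1, size=(100_000, 2))
best = candidates[np.argmax(surrogate.predict(candidates))]
print("surrogate-optimal parameters:", best)
```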
    Coding to optimize code
    Similarly, Research Physicist Stefano Munaretto’s team is using artificial intelligence to accelerate a code called HEAT, which was originally developed by the DOE’s Oak Ridge National Laboratory and the University of Tennessee-Knoxville for PPPL’s tokamak NSTX-U.
    HEAT is being updated so that the plasma simulation will be 3D, matching the 3D computer-aided design (CAD) model of the tokamak divertor. Located at the base of the fusion vessel, the divertor extracts heat and ash generated during the reaction. A 3D plasma model should enhance understanding of how different plasma configurations can impact heat fluxes or the movement patterns of heat in the tokamak. Understanding the movement of heat for a specific plasma configuration can provide insights into how heat will likely travel in a future discharge with a similar plasma.
    By optimizing HEAT, the researchers hope to quickly run the complex code between plasma shots, using information about the last shot to decide the next.
    “This would allow us to predict the heat fluxes that will appear in the next shot and to potentially reset the parameters for the next shot so the heat flux isn’t too intense for the divertor,” Munaretto said. “This work could also help us design future fusion power plants.”
    PPPL Associate Research Physicist Doménica Corona Rivera has been deeply involved in the effort to optimize HEAT. The key is narrowing down a wide range of input parameters to just four or five so the code will be streamlined yet highly accurate. “We have to ask, ‘Which of these parameters are meaningful and are going to really be impacting heat?'” said Corona Rivera. Those are the key parameters used to train the machine learning program.
    With support from Churchill and Munaretto, Corona Rivera has already greatly reduced the time it takes to run the heat calculation while keeping the results roughly 90% in agreement with those from the original version of HEAT. “It’s instantaneous,” she said.
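    One generic way to narrow a long list of candidate inputs to the handful that matter, as described above, is a feature-importance screen (an illustrative sketch with made-up parameter names and synthetic data, not the HEAT team’s actual procedure):

```python
# Illustrative feature-importance screen for picking the few inputs that dominate
# a target quantity. Parameter names and data are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
names = ["strike_angle", "flux_expansion", "heating_power", "edge_density",
         "coil_current", "gas_puff_rate", "wall_temperature", "elongation"]
X = rng.uniform(size=(500, len(names)))          # synthetic runs of the full code
peak_heat_flux = (4.0 * X[:, 0] + 2.5 * X[:, 2]  # only a few inputs really matter here
                  + 1.5 * X[:, 1] + 0.8 * X[:, 3]
                  + rng.normal(0, 0.1, 500))

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, peak_heat_flux)
ranked = sorted(zip(model.feature_importances_, names), reverse=True)
print("keep for the fast surrogate:", [name for _, name in ranked[:4]])
```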
    Finding the right conditions for ideal heating
    Researchers are also trying to find the best conditions to heat the ions in the plasma by perfecting a technique known as ion cyclotron radio frequency heating (ICRF). This type of heating focuses on heating up the big particles in the plasma — the ions.
    Plasma has different properties, such as density, pressure, temperature and the intensity of the magnetic field. These properties change how the waves interact with the plasma particles and determine the waves’ paths and areas where the waves will heat the plasma. Quantifying these effects is crucial to controlling the radio frequency heating of the plasma so that researchers can ensure the waves move efficiently through the plasma to heat it in the right areas.
    The problem is that the standard codes used to simulate the plasma and radio wave interactions are very complicated and run too slowly to be used to make real-time decisions.
    “Machine learning brings us great potential here to optimize the code,” said Álvaro Sánchez Villar, an associate research physicist at PPPL. “Basically, we can control the plasma better because we can predict how the plasma is going to evolve, and we can correct it in real-time.”
    The project focuses on trying different kinds of machine learning to speed up a widely used physics code. Sánchez Villar and his team showed multiple accelerated versions of the code for different fusion devices and types of heating. The models can find answers in microseconds instead of minutes with minimal impact on the accuracy of the results. Sánchez Villar and his team were also able to use machine learning to eliminate challenging scenarios with the optimized code.
    Sánchez Villar says the code’s accuracy, “increased robustness” and acceleration make it well suited for integrated modeling, in which many physics codes are used together, and real-time control applications, which are crucial for fusion research.
    Enhancing our understanding of the plasma’s edge
    PPPL Principal Research Physicist Fatima Ebrahimi is the principal investigator on a four-year project for the DOE’s Advanced Scientific Computing Research program, part of the Office of Science, which uses experimental data from various tokamaks, plasma simulation data and artificial intelligence to study the behavior of the plasma’s edge during fusion. The team hopes their findings will reveal the most effective ways to confine a plasma on a commercial-scale tokamak.
    While the project has multiple goals, the aim is clear from a machine learning perspective. “We want to explore how machine learning can help us take advantage of all our data and simulations so we can close the technological gaps and integrate a high-performance plasma into a viable fusion power plant system,” Ebrahimi said.
    There is a wealth of experimental data gathered from tokamaks worldwide while the devices operated in a state free from large-scale instabilities at the plasma’s edge known as edge-localized modes (ELMs). Such momentary, explosive ELMs need to be avoided because they can damage the inner components of a tokamak, draw impurities from the tokamak walls into the plasma and make the fusion reaction less efficient. The question is how to achieve an ELM-free state in a commercial-scale tokamak, which will be much larger and run much hotter than today’s experimental tokamaks.
    Ebrahimi and her team will combine the experimental results with information from plasma simulations that have already been validated against experimental data to create a hybrid database. The database will then be used to train machine learning models about plasma management, which can then be used to update the simulation.
    “There is some back and forth between the training and the simulation,” Ebrahimi explained. By running a high-fidelity simulation of the machine learning model on supercomputers, the researchers can then hypothesize about scenarios beyond those covered by the existing data. This could provide valuable insights into the best ways to manage the plasma’s edge on a commercial scale.
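    Schematically, the hybrid-database idea might be sketched as follows (the column names, toy labeling rule and model choice are assumptions for illustration, not the project’s actual setup):

```python
# Schematic sketch of a hybrid database: pool experimental shots and validated
# simulation runs, train one model on the combined data, then query it in regimes
# the data do not yet cover. All columns, labels and the model are illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def make_records(n, source):
    density = rng.uniform(0.2, 1.0, n)           # normalized edge density
    heating = rng.uniform(1.0, 10.0, n)          # heating power (MW)
    elm_free = ((density < 0.6) & (heating > 4.0)).astype(int)   # toy stand-in label
    return pd.DataFrame({"density": density, "heating_mw": heating,
                         "source": source, "elm_free": elm_free})

# Hybrid database: experimental shots plus validated simulation runs.
db = pd.concat([make_records(300, "experiment"), make_records(700, "simulation")],
               ignore_index=True)

features = ["density", "heating_mw"]
model = GradientBoostingClassifier().fit(db[features], db["elm_free"])

# Ask the model about an operating point beyond the existing data, e.g. to pick
# which regime to simulate at high fidelity next.
query = pd.DataFrame({"density": [0.5], "heating_mw": [12.0]})
print("predicted probability of ELM-free operation:", model.predict_proba(query)[0, 1])
```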
    This research was conducted with the following DOE grants: DE-SC0020372, DE-SC0024527, DE-AC02-09CH11466, DE-SC0020372, DE-AC52-07NA27344, DE-AC05-00OR22725, DE-FG02-99ER54531, DE-SC0022270, DE-SC0022272, DE-SC0019352, DEAC02-09CH11466 and DE-FC02-04ER54698. This research was also supported by the research and design program of KSTAR Experimental Collaboration and Fusion Plasma Research (EN2401-15) through the Korea Institute of Fusion Energy.
    This story includes contributions by John Greenwald.