More stories

  • A strange quantum metal just rewrote the rules of electricity

    Quantum metals are metals where quantum effects — behaviors that normally only matter at atomic scales — become powerful enough to control the metal’s macroscopic electrical properties.
    Researchers in Japan have explained how electricity behaves in a special group of quantum metals called kagome metals. The study is the first to show how weak magnetic fields reverse tiny loop electrical currents inside these metals. This switching changes the material’s macroscopic electrical properties and reverses the direction in which current flows more easily, a property known as the diode effect.
    Notably, the research team found that quantum geometric effects amplify this switching by about 100 times. The study, published in Proceedings of the National Academy of Sciences, provides the theoretical foundation that could eventually lead to new electronic devices controlled by simple magnets.
    Scientists had observed this strange magnetic switching behavior in experiments since around 2020 but could not explain why it happened and why the effect was so strong. This study provides the first theoretical framework explaining both.
    When frustrated electrons cannot settle
    The name “kagome metal” comes from the Japanese word “kagome,” meaning “basket eyes” or “basket pattern,” which refers to a traditional bamboo weaving technique that creates interlocking triangular designs.
    These metals are special because their atoms are arranged in this unique basket-weave pattern that creates what scientists call “geometric frustration” — electrons cannot settle into simple, organized patterns and are forced into more complex quantum states that include the loop currents.

    When the loop currents inside these metals change direction, the electrical behavior of the metal changes. The research team showed that loop currents and wave-like electron patterns (charge density waves) work together to break fundamental symmetries in the electronic structure. They also discovered that quantum geometric effects, which arise from the geometric structure of the electrons’ quantum wavefunctions, significantly enhance the switching effect.
    “Every time we saw the magnetic switching, we knew something extraordinary was happening, but we couldn’t explain why,” Hiroshi Kontani, senior author and professor from the Graduate School of Science at Nagoya University, recalled.
    “Kagome metals have built-in amplifiers that make the quantum effects much stronger than they would be in ordinary metals. The combination of their crystal structure and electronic behavior allows them to break certain core rules of physics simultaneously, a phenomenon known as spontaneous symmetry breaking. This is extremely rare in nature and explains why the effect is so powerful.”
    The research method involved cooling the metals to extremely low temperatures of about -190°C. At this temperature, the kagome metal naturally develops quantum states where electrons form circulating currents and create wave-like patterns throughout the material. When scientists apply weak magnetic fields, they reverse the direction these currents spin, and as a result, the preferred direction of current flow in the metal changes.
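    The mechanism lends itself to a toy model. The sketch below is an illustrative cartoon, not the paper’s microscopic theory: a loop-current order parameter flips sign with the applied magnetic field, and with it the direction of easier conduction. The conductance g and asymmetry eta0 are invented parameters.

    ```python
    import numpy as np

    # Cartoon of a field-switchable diode effect. The loop-current order
    # parameter eta flips sign with a weak magnetic field B, reversing
    # which direction of current flow is easier. (Toy model, not the
    # study's theory; g and eta0 are illustrative assumptions.)
    def current(v, b_field, g=1.0, eta0=0.1):
        eta = eta0 * np.sign(b_field)            # loop currents reverse when B flips
        return g * v * (1.0 + eta * np.sign(v))  # nonreciprocal (diode-like) response

    for b in (+1.0, -1.0):
        i_fwd, i_rev = current(1.0, b), current(-1.0, b)
        easy = "forward" if abs(i_fwd) > abs(i_rev) else "reverse"
        print(f"B={b:+.0f}: |I(+V)|={abs(i_fwd):.2f}, |I(-V)|={abs(i_rev):.2f} -> easier: {easy}")
    ```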
    New materials meet new theory
    This breakthrough in quantum physics was not possible until recently because kagome metals were only discovered around 2020. While scientists quickly observed the mysterious electrical switching effect in experiments, they could not explain how it worked.
    The quantum interactions involved are very complex and require advanced understanding of how loop currents, quantum geometry, and magnetic fields work together — knowledge that has only developed in recent years. These effects are also very sensitive to impurities, strain, and external conditions, which makes them difficult to study.
    “This discovery happened because three things came together at just the right time: we finally had the new materials, the advanced theories to understand them, and the high-tech equipment to study them properly. None of these existed together until very recently, which is why no one could solve this puzzle before now,” Professor Kontani added.
    “The magnetic control of electrical properties in these metals could potentially enable new types of magnetic memory devices or ultra-sensitive sensors. Our study provides the fundamental understanding needed to begin developing the next generation of quantum-controlled technology,” he said.

  • Physicists just built a quantum lie detector. It works

    Can you prove whether a large quantum system truly behaves according to the weird and wonderful rules of quantum mechanics — or if it just looks like it does? In a groundbreaking study, physicists from Leiden, Beijing and Hangzhou found the answer to this question.
    You could call it a ‘quantum lie detector’: the Bell test, devised by the famous physicist John Bell. The test shows whether a machine, such as a quantum computer, is truly using quantum effects or merely mimicking them.
    As quantum technologies become more mature, ever more stringent tests of quantumness become necessary. In this new study, the researchers took things to the next level, testing Bell correlations in systems with up to 73 qubits — the basic building blocks of a quantum computer.
    The study involved a global team: theoretical physicists Jordi Tura, Patrick Emonts and PhD candidate Mengyao Hu from Leiden University, together with colleagues from Tsinghua University (Beijing) and experimental physicists from Zhejiang University (Hangzhou).
    The world of quantum physics
    Quantum mechanics is the science that explains how the tiniest particles in the universe — like atoms and electrons — behave. It’s a world full of strange and counterintuitive ideas.
    One of those is quantum nonlocality, where particles appear to instantly affect each other, even when far apart. Although it sounds strange, it’s a real effect, and it won the Nobel Prize in Physics in 2022. This research focuses on proving the occurrence of nonlocal correlations, also known as Bell correlations.

    Clever experimenting
    It was an extremely ambitious plan, but the team’s well-optimized strategy made all the difference. Instead of trying to directly measure the complex Bell correlations, they focused on something quantum devices are already good at: minimizing energy.
    And it paid off. The team created a special quantum state using 73 qubits in a superconducting quantum processor and measured energies far below what would be possible in a classical system. The difference was striking — 48 standard deviations — making it almost impossible that the result was due to chance.
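    The principle can be illustrated at the smallest scale with the two-qubit CHSH inequality: write the Bell expression as a Hamiltonian, and an energy below the best classical value certifies Bell correlations. This is a minimal sketch of the idea, not the team’s 73-qubit protocol; the measured value and error bar at the end are hypothetical numbers.

    ```python
    import numpy as np

    # Pauli matrices.
    X = np.array([[0., 1.], [1., 0.]])
    Z = np.array([[1., 0.], [0., -1.]])

    # CHSH Bell operator with optimal measurement settings:
    # A0, A1 on qubit 1; B0, B1 on qubit 2.
    A0, A1 = Z, X
    B0, B1 = (Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)
    chsh = (np.kron(A0, B0) + np.kron(A0, B1)
            + np.kron(A1, B0) - np.kron(A1, B1))

    # Treat -CHSH as a Hamiltonian: minimizing energy maximizes the violation.
    H = -chsh
    ground = np.linalg.eigvalsh(H)[0]   # -2*sqrt(2), the quantum (Tsirelson) limit
    classical = -2.0                    # lowest energy any classical model reaches
    print(f"quantum ground energy {ground:.3f} vs classical bound {classical}")

    # Significance of a violation (hypothetical measurement and error bar):
    measured, sigma = -2.7, 0.015
    print(f"violation: {(classical - measured) / sigma:.0f} standard deviations")
    ```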
    But the team didn’t stop there. They went on to certify a rare and more demanding type of nonlocality – known as genuine multipartite Bell correlations. In this kind of quantum correlation, all qubits in the system must be involved, making it much harder to generate — and even harder to verify. Remarkably, the researchers succeeded in preparing a whole series of low-energy states that passed this test up to 24 qubits, confirming these special correlations efficiently.
    This result shows that quantum computers are not just getting bigger — they are also becoming better at displaying and proving truly quantum behaviour.
    Why this matters
    This study proves that it’s possible to certify deep quantum behaviour in large, complex systems — something never done at this scale before. It’s a big step toward making sure quantum computers are truly quantum.
    These insights are more than just theoretical. Understanding and controlling Bell correlations could improve quantum communication, make cryptography more secure, and help develop new quantum algorithms.

  • Scientists accidentally create a tiny “rainbow chip” that could supercharge the internet

    A few years ago, researchers in Michal Lipson’s lab noticed something remarkable.
    They were working on a project to improve LiDAR, a technology that uses lightwaves to measure distance. The lab was designing high-power chips that could produce brighter beams of light.
    “As we sent more and more power through the chip, we noticed that it was creating what we call a frequency comb,” says Andres Gil-Molina, a former postdoctoral researcher in Lipson’s lab.
    A frequency comb is a special type of light that contains many colors lined up next to each other in an orderly pattern, kind of like a rainbow. Dozens of colors — or frequencies of light — shine brightly, while the gaps between them remain dark. When you look at a frequency comb on a spectrogram, these bright frequencies appear as spikes, or teeth on a comb. This offers the tremendous opportunity of sending dozens of streams of data simultaneously. Because the different colors of light don’t interfere with each other, each tooth acts as its own channel.
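    As a rough sketch of how comb teeth become data channels, the toy calculation below lays out evenly spaced frequencies around the standard 193.1 THz telecom reference; the tooth spacing, tooth count, and per-channel data rate are assumptions for illustration, not figures from the paper.

    ```python
    import numpy as np

    # A frequency comb: evenly spaced teeth, each usable as an independent channel.
    center_thz = 193.1      # ITU-T C-band reference frequency
    spacing_thz = 0.1       # assumed 100 GHz tooth spacing (a common DWDM grid)
    n_teeth = 32            # "dozens of colors"

    teeth = center_thz + spacing_thz * (np.arange(n_teeth) - n_teeth // 2)
    print("first three teeth (THz):", np.round(teeth[:3], 3))

    # Aggregate capacity if each tooth carries one modulated data stream:
    gbps_per_channel = 100  # assumed per-channel rate
    print(f"{n_teeth} channels x {gbps_per_channel} Gb/s "
          f"= {n_teeth * gbps_per_channel / 1000:.1f} Tb/s through one fiber")
    ```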
    Today, creating a powerful frequency comb requires large and expensive lasers and amplifiers. In their new paper in Nature Photonics, Lipson, Eugene Higgins Professor of Electrical Engineering and professor of Applied Physics, and her collaborators show how to do the same thing on a single chip.
    “Data centers have created tremendous demand for powerful and efficient sources of light that contain many wavelengths,” says Gil-Molina, who is now a principal engineer at Xscape Photonics. “The technology we’ve developed takes a very powerful laser and turns it into dozens of clean, high-power channels on a chip. That means you can replace racks of individual lasers with one compact device, cutting cost, saving space, and opening the door to much faster, more energy-efficient systems.”
    “This research marks another milestone in our mission to advance silicon photonics,” Lipson said. “As this technology becomes increasingly central to critical infrastructure and our daily lives, this type of progress is essential to ensuring that data centers are as efficient as possible.”
    Cleaning up messy light

    The breakthrough started with a simple question: What’s the most powerful laser we can put on a chip?
    The team chose a type called a multimode laser diode, which is used widely in applications like medical devices and laser cutting tools. These lasers can produce enormous amounts of light, but the beam is “messy,” which makes it hard to use for precise applications.
    Integrating such a laser into a silicon photonics chip, where the light pathways are just a few microns — even hundreds of nanometers — wide, required careful engineering.
    “We used something called a locking mechanism to purify this powerful but very noisy source of light,” Gil-Molina says. The method relies on silicon photonics to reshape and clean up the laser’s output, producing a much cleaner, more stable beam, a property scientists call high coherence.
    Once the light is purified, the chip’s nonlinear optical properties take over, splitting that single powerful beam into dozens of evenly spaced colors, a defining feature of a frequency comb. The result is a compact, high-efficiency light source that combines the raw power of an industrial laser with the precision and stability needed for advanced communications and sensing.
    Why it matters now
    The timing for this breakthrough is no accident. With the explosive growth of artificial intelligence, the infrastructure inside data centers is straining to move information fast enough, for example, between processors and memory. State-of-the-art data centers are already using fiber optic links to transport data, but most of these still rely on single-wavelength lasers.

    Frequency combs change that. Instead of one beam carrying one data stream, dozens of beams can run in parallel through the same fiber. That’s the principle behind wavelength-division multiplexing (WDM), the technology that turned the internet into a global high-speed network in the late 1990s.
    By making high-power, multi-wavelength combs small enough to fit directly on a chip, Lipson’s team has made it possible to bring this capability into the most compact, cost-sensitive parts of modern computing systems. Beyond data centers, the same chips could enable portable spectrometers, ultra-precise optical clocks, compact quantum devices, and even advanced LiDAR systems.
    “This is about bringing lab-grade light sources into real-world devices,” says Gil-Molina. “If you can make them powerful, efficient, and small enough, you can put them almost anywhere.”

  • These little robots literally walk on water

    Imagine a tiny robot, no bigger than a leaf, gliding across a pond’s surface like a water strider. One day, devices like this could track pollutants, collect water samples or scout flooded areas too risky for people.
    Baoxing Xu, professor of mechanical and aerospace engineering at the University of Virginia’s School of Engineering and Applied Science, is pioneering a way to build them. In a new study published in Science Advances, Xu’s research introduces HydroSpread, a first-of-its-kind fabrication method that has great potential to impact the growing field of soft robotics. This innovation allows scientists to make soft, floating devices directly on water, a technology that could be utilized in fields from health care to electronics to environmental monitoring.
    Until now, the thin, flexible films used in soft robotics had to be manufactured on rigid surfaces like glass and then peeled off and transferred to water, a delicate process that often caused films to tear. HydroSpread sidesteps this issue by letting liquid itself serve as the “workbench.” Droplets of liquid polymer naturally spread into ultrathin, uniform sheets on the water’s surface. With a finely tuned laser, Xu’s team can then carve these sheets into complex patterns — circles, strips, even the UVA logo — with remarkable precision.
    Using this approach, the researchers built two insect-like prototypes: HydroFlexor, which paddles across the surface using fin-like motions, and HydroBuckler, which “walks” forward on buckling legs, inspired by water striders.
    In the lab, the team powered these devices with an overhead infrared heater. As the films warmed, their layered structure bent or buckled, creating paddling or walking motions. By cycling the heat on and off, the devices could adjust their speed and even turn — proof that controlled, repeatable movement is possible. Future versions could be designed to respond to sunlight, magnetic fields or tiny embedded heaters, opening the door to autonomous soft robots that can move and adapt on their own.
    “Fabricating the film directly on liquid gives us an unprecedented level of integration and precision,” Xu said. “Instead of building on a rigid surface and then transferring the device, we let the liquid do the work to provide a perfectly smooth platform, reducing failure at every step.”
    The potential reaches beyond soft robots. By making it easier to form delicate films without damaging them, HydroSpread could open new possibilities for creating wearable medical sensors, flexible electronics and environmental monitors — tools that need to be thin, soft and durable in settings where traditional rigid materials don’t work.
    About the Researcher
    Baoxing Xu is a nationally recognized expert in mechanics, compliant structures and bioinspired engineering. His lab at UVA Engineering focuses on translating strategies from nature — such as the delicate mechanics of insect locomotion — into resilient, functional devices for human use.
    This work, supported by the National Science Foundation and 4-VA, was carried out within UVA’s Department of Mechanical and Aerospace Engineering. Graduate and undergraduate researchers in Xu’s group played a central role in the experiments, gaining hands-on experience with state-of-the-art fabrication and robotics techniques.

  • Scientists finally found the “dark matter” of electronics

    In a world-first, researchers from the Femtosecond Spectroscopy Unit at the Okinawa Institute of Science and Technology (OIST) have directly observed the evolution of the elusive dark excitons in atomically thin materials, laying the foundation for new breakthroughs in both classical and quantum information technologies. Their findings have been published in Nature Communications.
    Professor Keshav Dani, head of the unit, highlights the significance: “Dark excitons have great potential as information carriers, because they are inherently less likely to interact with light, and hence less prone to degradation of their quantum properties. However, this invisibility also makes them very challenging to study and manipulate. Building on a previous breakthrough at OIST in 2020, we have opened a route to the creation, observation, and manipulation of dark excitons.”
    “In the general field of electronics, one manipulates electron charge to process information,” explains Xing Zhu, co-first author and PhD student in the unit. “In the field of spintronics, we exploit the spin of electrons to carry information. Going further, in valleytronics, the crystal structure of unique materials enables us to encode information into distinct momentum states of the electrons, known as valleys.” The ability to use the valley dimension of dark excitons to carry information positions them as promising candidates for quantum technologies. Dark excitons are by nature more resistant to environmental factors like thermal background than the current generation of qubits, potentially requiring less extreme cooling and making them less prone to decoherence, where the unique quantum state breaks down.
    Defining landscapes of energy with bright and dark excitons
    Over the past decade, progress has been made in the development of a class of atomically thin semiconducting materials known as TMDs (transition metal dichalcogenides). As with all semiconductors, atoms in TMDs are aligned in a crystal lattice, which confines electrons to a specific level (or band) of energy, such as the valence band. When exposed to light, the negatively charged electrons are excited to a higher energy state – the conduction band – leaving behind a positively charged hole in the valence band. The electrons and holes are bound together by electrostatic attraction, forming hydrogen-like quasiparticles called excitons. If certain quantum properties of the electron and hole match, i.e. they have the same spin configuration and inhabit the same ‘valley’ in momentum space (the energy minima that electrons and holes can occupy in the atomic crystal structure), the two recombine within a picosecond (1 ps = 10⁻¹² s), emitting light in the process. These are ‘bright’ excitons.
    However, if the quantum properties of the electron and hole do not match up, the electron and hole are forbidden from recombining on their own and do not emit light. These are characterized as ‘dark’ excitons. “There are two ‘species’ of dark excitons,” explains Dr. David Bacon, co-first author who is now at University College London, “momentum-dark and spin-dark, depending on where the properties of electron and hole are in conflict. The mismatch in properties not only prevents immediate recombination, allowing them to exist up to several nanoseconds (1 ns = 10⁻⁹ s, a much more useful timescale), but also makes dark excitons more isolated from environmental interactions.”
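    The bright/momentum-dark/spin-dark taxonomy reduces to two comparisons, sketched below. This is a deliberately simplified picture: the real selection rules follow from the TMD band structure, and the hole’s spin convention is glossed over here.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Carrier:
        spin: str    # "up" or "down"
        valley: str  # "K" or "K'"

    def classify_exciton(electron: Carrier, hole: Carrier) -> str:
        if electron.valley != hole.valley:
            return "momentum-dark"  # recombination blocked by momentum mismatch
        if electron.spin != hole.spin:
            return "spin-dark"      # recombination blocked by spin mismatch
        return "bright"             # matching pair: recombines in ~1 ps, emitting light

    print(classify_exciton(Carrier("up", "K"), Carrier("up", "K")))     # bright
    print(classify_exciton(Carrier("up", "K'"), Carrier("up", "K")))    # momentum-dark
    print(classify_exciton(Carrier("down", "K"), Carrier("up", "K")))   # spin-dark
    ```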
    “The unique atomic symmetry of TMDs means that when exposed to a state of light with a circular polarization, one can selectively create bright excitons only in a specific valley. This is the fundamental principle of valleytronics. However, bright excitons rapidly turn into numerous dark excitons that can potentially preserve the valley information. Which species of dark excitons are involved and to what degree they can sustain the valley information is unclear, but this is a key step in the pursuit of valleytronic applications,” explains Dr. Vivek Pareek, co-first author and OIST graduate who is now a Presidential Postdoctoral Fellow at the California Institute of Technology.
    Observing electrons at the femtosecond scale
    Using the world-leading TR-ARPES (time- and angle-resolved photoemission spectroscopy) setup at OIST, which includes a proprietary, table-top XUV (extreme ultraviolet) source, the team tracked the evolution of all exciton species after creating bright excitons in a specific valley of a TMD semiconductor, simultaneously quantifying the momentum, spin state, and population of the electrons and holes. These properties had never been quantified simultaneously before.
    Their findings show that within a picosecond, some bright excitons are scattered by phonons (quantized crystal lattice vibrations) into different momentum valleys, rendering them momentum-dark. Later, spin-dark excitons dominate, where electrons have flipped spin within the same valley, persisting on nanosecond scales.
    With this, the team has overcome the fundamental challenge of how to access and track dark excitons, laying the foundation for dark valleytronics as a field. Dr. Julien Madéo of the unit summarizes: “Thanks to the sophisticated TR-ARPES setup at OIST, we have directly accessed and mapped how and what dark excitons keep long-lived valley information. Future developments to read out the dark excitons valley properties will unlock broad dark valleytronic applications across information systems.”

  • Scientists just recreated a wildfire that made its own weather

    On September 5, 2020, California’s Creek Fire grew so severe that it began producing its own weather system. The fire’s extreme heat produced an explosive thunderhead that spewed lightning strikes and further fanned the roaring flames, making containment elusive and endangering the lives of firefighters on the ground. These wildfire-born storms have become a growing part of fire seasons across the West, with lasting impacts on air quality, weather, and climate. Until now, scientists have struggled to replicate them in Earth system models, hindering our ability to predict their occurrence and understand their impacts on the global climate. Now, a new study provides a breakthrough by developing a novel wildfire-Earth system modeling framework.
    The research, published September 25th in Geophysical Research Letters, represents the first successful simulation of these wildfire-induced storms, known as pyrocumulonimbus clouds, within an Earth system model. Led by DRI scientist Ziming Ke, the study successfully reproduced the observed timing, height, and strength of the Creek Fire’s thunderhead, one of the largest known pyrocumulonimbus clouds seen in the U.S., according to NASA. The model also replicated multiple thunderstorms produced by the 2021 Dixie Fire, which occurred under very different conditions. Key to these results was accounting for the way terrain and winds loft moisture into the higher reaches of the atmosphere, where it aids cloud development.
    “This work is a first-of-its-kind breakthrough in Earth system modeling,” Ke said. “It not only demonstrates how extreme wildfire events can be studied within Earth system models, but also establishes DRI’s growing capability in Earth system model development — a core strength that positions the institute to lead future advances in wildfire-climate science.”
    When a pyrocumulonimbus cloud forms, it injects smoke and moisture into the upper atmosphere at magnitudes comparable to those of small volcanic eruptions, impacting the way Earth’s atmosphere receives and reflects sunlight. These fire aerosols can persist for months or longer, altering stratospheric composition. When transported to polar regions, they affect Antarctic ozone dynamics, modify clouds and albedo, and accelerate ice and snow melt, reshaping polar climate feedbacks. Scientists estimate that tens to hundreds of these storms occur globally each year, and that the trend of increasingly severe wildfires will only grow their numbers. Until now, failing to incorporate these storms into Earth system models has hindered our ability to understand this natural disturbance’s impact on global climate.
    The research team also included scientists from Lawrence Livermore National Laboratory, U.C. Irvine, and Pacific Northwest National Laboratory. Their breakthrough leveraged the Department of Energy’s (DOE) Energy Exascale Earth System Model (E3SM) to successfully capture the complex interplay between wildfires and the atmosphere.
    “Our team developed a novel wildfire-Earth system modeling framework that integrates high-resolution wildfire emissions, a one-dimensional plume-rise model, and fire-induced water vapor transport into DOE’s cutting-edge Earth system model,” Ke said. “This breakthrough advances high-resolution modeling of extreme hazards to improve national resilience and preparedness, and provides the framework for future exploration of these storms at regional and global scales within Earth system models.”
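    To get a feel for what a plume-rise calculation does, here is a classic Briggs-style back-of-the-envelope estimate. This textbook smokestack formula, pushed far beyond its calibration range with invented fire-front numbers, is not the one-dimensional plume-rise model used in the study; it only illustrates why an intense fire can loft moisture to pyrocumulonimbus altitudes.

    ```python
    # Briggs plume rise for a buoyant source in a neutral atmosphere:
    #   delta_h = 1.6 * F**(1/3) * x**(2/3) / u
    # with buoyancy flux F = g * w * r**2 * (T_plume - T_air) / T_plume  [m^4/s^3].
    g = 9.81

    def buoyancy_flux(w, r, t_plume, t_air):
        return g * w * r**2 * (t_plume - t_air) / t_plume

    def briggs_rise(F, x, u):
        return 1.6 * F ** (1 / 3) * x ** (2 / 3) / u

    # Hypothetical fire-front numbers: 20 m/s updraft over a 500 m radius core,
    # 600 K plume in 300 K air, 5 m/s ambient wind, evaluated 2 km downwind.
    F = buoyancy_flux(w=20.0, r=500.0, t_plume=600.0, t_air=300.0)
    print(f"buoyancy flux: {F:.2e} m^4/s^3")
    print(f"plume rise: {briggs_rise(F, x=2_000.0, u=5.0) / 1000:.1f} km")  # ~15 km
    ```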

  • DOLPHIN AI uncovers hundreds of invisible cancer markers

    McGill University researchers have developed an artificial intelligence tool that can detect previously invisible disease markers inside single cells.
    In a study published in Nature Communications, the researchers demonstrate how the tool, called DOLPHIN, could one day be used by doctors to catch diseases earlier and guide treatment options.
    “This tool has the potential to help doctors match patients with the therapies most likely to work for them, reducing trial-and-error in treatment,” said senior author Jun Ding, assistant professor in McGill’s Department of Medicine and a junior scientist at the Research Institute of the McGill University Health Centre.
    Zooming in on genetic building blocks
    Disease markers are often subtle changes in RNA expression that can indicate when a disease is present, how severe it may become or how it might respond to treatment.
    Conventional gene-level methods of analysis collapse these markers into a single count per gene, masking critical variation and capturing only the tip of the iceberg, said the researchers.
    Now, advances in artificial intelligence have made it possible to capture the fine-grained complexity of single-cell data. DOLPHIN moves beyond gene-level, zooming in to see how genes are spliced together from smaller pieces called exons to provide a clearer view of cell states.

    “Genes are not just one block, they’re like Lego sets made of many smaller pieces,” said first author Kailu Song, a PhD student in McGill’s Quantitative Life Sciences program. “By looking at how those pieces are connected, our tool reveals important disease markers that have long been overlooked.”
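    A toy example makes the gene-versus-exon distinction concrete: two cells can have identical gene-level counts yet use different exons, so only an exon-aware view can tell them apart. The counts below are hypothetical.

    ```python
    import numpy as np

    # Hypothetical exon counts for one gene in two cells.
    exons = ["exon1", "exon2", "exon3"]
    cell_a = np.array([10, 0, 5])   # splice variant that skips exon2
    cell_b = np.array([5, 5, 5])    # variant that includes exon2

    # Gene-level analysis collapses the exons into one number per gene...
    print("gene-level counts:", cell_a.sum(), "vs", cell_b.sum())  # 15 vs 15

    # ...while the exon-level view still distinguishes the two cell states.
    print("exon-level profiles differ:", not np.array_equal(cell_a, cell_b))  # True
    ```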
    In one test case, DOLPHIN analyzed single-cell data from pancreatic cancer patients and found more than 800 disease markers missed by conventional tools. It was able to distinguish patients with high-risk, aggressive cancers from those with less severe cases, information that would help doctors choose the right treatment path.
    A step toward ‘virtual cells’
    More broadly, the breakthrough lays the foundation for achieving the long-term goal of building digital models of human cells. DOLPHIN generates richer single-cell profiles than conventional methods, enabling virtual simulations of how cells behave and respond to drugs before moving to lab or clinical trials, saving time and money.
    The researchers’ next step will be to expand the tool’s reach from a few datasets to millions of cells, paving the way for more accurate virtual cell models in the future.
    About the study
    “DOLPHIN advances single-cell transcriptomics beyond gene level by leveraging exon and junction reads” by Kailu Song and Jun Ding et al., was published in Nature Communications.
    This research was supported by the Meakins-Christie Chair in Respiratory Research, the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada and the Fonds de recherche du Québec.

  • Princeton’s AI reveals what fusion sensors can’t see

    Imagine watching a favorite movie when suddenly the sound stops. The data representing the audio is missing. All that’s left are images. What if artificial intelligence (AI) could analyze each frame of the video and provide the audio automatically based on the pictures, reading lips and noting each time a foot hits the ground?
    That’s the general concept behind a new AI that fills in missing data about plasma, the fuel of fusion, according to Azarakhsh Jalalvand of Princeton University. Jalalvand is the lead author on a paper about the AI, known as Diag2Diag, that was recently published in Nature Communications. “We have found a way to take the data from a bunch of sensors in a system and generate a synthetic version of the data for a different kind of sensor in that system,” he said. The synthetic data aligns with real-world data and is more detailed than what an actual sensor could provide. This could increase the robustness of control while reducing the complexity and cost of future fusion systems. “Diag2Diag could also have applications in other systems such as spacecraft and robotic surgery by enhancing detail and recovering data from failing or degraded sensors, ensuring reliability in critical environments.”
    The research is the result of an international collaboration between scientists at Princeton University, the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL), Chung-Ang University, Columbia University and Seoul National University. All of the sensor data used in the research to develop the AI was gathered from experiments at the DIII-D National Fusion Facility, a DOE user facility.
    The new AI enhances the way scientists can monitor and control the plasma inside a fusion system and could help keep future commercial fusion systems a reliable source of electricity. “Fusion devices today are all experimental laboratory machines, so if something happens to a sensor, the worst thing that can happen is that we lose time before we can restart the experiment. But if we are thinking about fusion as a source of energy, it needs to work 24/7, without interruption,” Jalalvand said.
    AI could lead to compact, economical fusion systems
    The name Diag2Diag originates from the word “diagnostic,” which refers to the technique used to analyze a plasma and includes sensors that measure the plasma. Diagnostics take measurements at regular intervals, often as fast as a fraction of a second apart. But some don’t measure the plasma often enough to detect particularly fast-evolving plasma instabilities: sudden changes in the plasma that can make it hard to produce power reliably.
    There are many diagnostics in a fusion system that measure different characteristics of the plasma. Thomson scattering, for example, is a diagnostic technique used in doughnut-shaped fusion systems called tokamaks. The Thomson scattering diagnostic measures the temperature of negatively charged particles known as electrons, as well as the density: the number of electrons packed into a unit of space. It takes measurements quickly but not fast enough to provide details that plasma physicists need to keep the plasma stable and at peak performance.
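    The core idea, learning to predict one diagnostic from the others and then evaluating the model wherever the fast diagnostics have samples, can be sketched with a generic regressor on synthetic signals. This is a minimal stand-in for illustration; the paper’s Diag2Diag model is a neural network trained on DIII-D data, not the toy below.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    t_fast = np.linspace(0.0, 1.0, 2000)  # fast diagnostics: 2000 samples
    step = 50                             # the slow diagnostic fires 50x less often

    hidden = np.sin(8 * np.pi * t_fast)   # underlying plasma quantity (toy)
    fast_diags = np.stack(
        [hidden + 0.1 * rng.standard_normal(t_fast.size),
         np.gradient(hidden) + 0.1 * rng.standard_normal(t_fast.size)], axis=1)
    thomson = hidden[::step] + 0.05 * rng.standard_normal(t_fast[::step].size)

    # Train on the instants where both exist, then predict at every fast instant.
    model = RandomForestRegressor(n_estimators=100).fit(fast_diags[::step], thomson)
    synthetic = model.predict(fast_diags)  # Thomson-like data at the fast rate
    print("synthetic samples:", synthetic.size, "vs real:", thomson.size)  # 2000 vs 40
    ```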

    “Diag2Diag is kind of giving your diagnostics a boost without spending hardware money,” said Egemen Kolemen, principal investigator of the research who is jointly appointed at PPPL and Princeton University’s Andlinger Center for Energy and the Environment and the Department of Mechanical and Aerospace Engineering.
    This is particularly important for Thomson scattering because the other diagnostics can’t take measurements at the edge of the plasma, which is also known as the pedestal. It is the most important part of the plasma to monitor, but it’s very hard to measure. Carefully monitoring the pedestal helps scientists enhance plasma performance so they can learn the best ways to get the most energy out of the fusion reaction efficiently.
    For fusion energy to be a major part of the U.S. power system, it must be both economical and reliable. PPPL Staff Research Scientist SangKyeun Kim, who was part of the Diag2Diag research team, said the AI moves the U.S. toward those goals. “Today’s experimental tokamaks have a lot of diagnostics, but future commercial systems will likely need to have far fewer,” Kim said. “This will help make fusion reactors more compact by minimizing components not directly involved in producing energy.” Fewer diagnostics also frees up valuable space inside the machine, and simplifying the system also makes it more robust and reliable, with fewer chances for error. Plus, it lowers maintenance costs.
    PPPL: A leader in AI approaches to stabilizing fusion plasma
    The research team also found that the AI data supports a leading theory about how one method for stopping plasma disruptions works. Fusion scientists around the world are working on ways to control edge-localized modes (ELMs), which are powerful energy bursts in fusion reactors that can severely damage the reactor’s inner walls. One promising method to stop ELMs involves applying resonant magnetic perturbations (RMPs): small changes made to the magnetic fields used to hold a plasma inside a tokamak. PPPL is a leader in ELM-suppression research, with recent papers on AI and traditional approaches to stopping these problematic disruptions. One theory suggests that RMPs create “magnetic islands” at the plasma’s edge. These islands cause the plasma’s temperature and density to flatten, meaning the measurements become more uniform across the edge of the plasma.
    “Due to the limitation of the Thomson diagnostic, we cannot normally observe this flattening,” said PPPL Principal Research Scientist Qiming Hu, who also worked on the project. “Diag2Diag provided much more details on how this happens and how it evolves.”
    While magnetic islands can lead to ELMs, a growing body of research suggests they can also be fine-tuned using RMPs to improve plasma stability. Diag2Diag generated data that provided new evidence of this simultaneous flattening of both temperature and density in the pedestal region of the plasma. This strongly supports the magnetic island theory for ELM suppression. Understanding this mechanism is crucial for the development of commercial fusion reactors.
    The scientists are already pursuing plans to expand the scope of Diag2Diag. Kolemen noted that several researchers have already expressed interest in trying the AI. “Diag2Diag could be applied to other fusion diagnostics and is broadly applicable to other fields where diagnostic data is missing or limited,” he said.
    This research was supported by DOE under awards DE-FC02-04ER54698, DE-SC0022270, DE-SC0022272, DE-SC0024527, DE-SC0020413, DE-SC0015480 and DE-SC0024626, as well as the National Research Foundation of Korea award RS-2024-00346024 funded by the Korean government (MSIT). The authors also received financial support from the Princeton Laboratory for Artificial Intelligence under award 2025-97.