More stories

  •

    AI creates the first 100-billion-star Milky Way simulation

    Researchers led by Keiya Hirashima at the RIKEN Center for Interdisciplinary Theoretical and Mathematical Sciences (iTHEMS) in Japan, working with partners from The University of Tokyo and Universitat de Barcelona in Spain, have created the first Milky Way simulation capable of tracking more than 100 billion individual stars across 10,000 years of evolution. The team achieved this milestone by pairing artificial intelligence (AI) with advanced numerical simulation techniques. Their model includes 100 times more stars than the most sophisticated earlier simulations and was generated more than 100 times faster.
    The work, presented at the international supercomputing conference SC ’25, marks a major step forward for astrophysics, high-performance computing, and AI-assisted modeling. The same strategy could also be applied to large-scale Earth system studies, including climate and weather research.
    Why Modeling Every Star Is So Difficult
    For many years, astrophysicists have aimed to build Milky Way simulations detailed enough to follow each individual star. Such models would allow researchers to compare theories of galactic evolution, structure, and star formation directly to observational data. However, simulating a galaxy accurately requires calculating gravity, fluid behavior, chemical element formation, and supernova activity across enormous ranges of time and space, which makes the task extremely demanding.
    Scientists have not previously been able to model a galaxy as large as the Milky Way while maintaining fine detail at the level of single stars. Current cutting-edge simulations can represent systems with the equivalent mass of about one billion suns, far below the more than 100 billion stars that make up the Milky Way. As a result, the smallest “particle” in those models usually represents a group of roughly 100 stars, which averages away the behavior of individual stars and limits the accuracy of small-scale processes. The challenge is tied to the interval between computational steps: to capture rapid events such as supernova evolution, the simulation must advance in very small time increments.
    Shrinking the timestep means dramatically greater computational effort. Even with today’s best physics-based models, simulating the Milky Way star by star would require about 315 hours for every 1 million years of galactic evolution. At that rate, generating 1 billion years of activity would take over 36 years of real time. Simply adding more supercomputer cores is not a practical solution, because energy use becomes excessive and parallel efficiency falls off at scale.
    A New Deep Learning Approach
    To overcome these barriers, Hirashima and his team designed a method that blends a deep learning surrogate model with standard physical simulations. The surrogate was trained using high-resolution supernova simulations and learned to predict how gas spreads during the 100,000 years following a supernova explosion without requiring additional resources from the main simulation. This AI component allowed the researchers to capture the galaxy’s overall behavior while still modeling small-scale events, including the fine details of individual supernovae. The team validated the approach by comparing its results against large-scale runs on RIKEN’s Fugaku supercomputer and The University of Tokyo’s Miyabi Supercomputer System.

    The method offers true individual-star resolution for galaxies with more than 100 billion stars, and it does so with remarkable speed. Simulating 1 million years took just 2.78 hours, meaning that 1 billion years could be completed in approximately 115 days instead of 36 years.
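    As a quick sanity check, the quoted run times convert into the headline figures directly; a short Python back-of-envelope, using only the numbers stated above:

```python
# Figures quoted in the article.
HOURS_PER_MYR_PHYSICS = 315    # conventional star-by-star estimate
HOURS_PER_MYR_AI = 2.78        # AI-accelerated hybrid run

def wall_time_hours(gyr_simulated: float, hours_per_myr: float) -> float:
    """Wall-clock hours needed to simulate `gyr_simulated` billion years."""
    return gyr_simulated * 1_000 * hours_per_myr

physics_years = wall_time_hours(1, HOURS_PER_MYR_PHYSICS) / (24 * 365)  # ~36 years
ai_days = wall_time_hours(1, HOURS_PER_MYR_AI) / 24                     # ~115.8 days
speedup = HOURS_PER_MYR_PHYSICS / HOURS_PER_MYR_AI                      # ~113x
```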
    Broader Potential for Climate, Weather, and Ocean Modeling
    This hybrid AI approach could reshape many areas of computational science that require linking small-scale physics with large-scale behavior. Fields such as meteorology, oceanography, and climate modeling face similar challenges and could benefit from tools that accelerate complex, multi-scale simulations.
    “I believe that integrating AI with high-performance computing marks a fundamental shift in how we tackle multi-scale, multi-physics problems across the computational sciences,” says Hirashima. “This achievement also shows that AI-accelerated simulations can move beyond pattern recognition to become a genuine tool for scientific discovery — helping us trace how the elements that formed life itself emerged within our galaxy.”

  •

    Chimps shock scientists by changing their minds with new evidence

    Chimpanzees may share more with human thinkers than researchers once realized. A new study published in Science presents compelling evidence that chimpanzees can revise their beliefs in a rational way when they encounter new information.
    The study, titled “Chimpanzees rationally revise their beliefs,” was carried out by an international team that included UC Berkeley Psychology Postdoctoral Researcher Emily Sanford, UC Berkeley Psychology Professor Jan Engelmann and Utrecht University Psychology Professor Hanna Schleihauf. Their results indicate that chimpanzees, similar to humans, adjust their decisions based on how strong the available evidence is, which is a central component of rational thinking.
    At the Ngamba Island Chimpanzee Sanctuary in Uganda, the researchers designed an experiment involving two boxes, one of which contained food. The chimps were first given a hint about which box held the reward. Later, they received a clearer and more convincing clue that pointed to the other box. Many of the animals changed their choice after receiving the stronger information.
    “Chimpanzees were able to revise their beliefs when better evidence became available,” said Sanford, a researcher in the UC Berkeley Social Origins Lab. “This kind of flexible reasoning is something we often associate with 4-year-old children. It was exciting to show that chimps can do this too.”
    Testing Whether Chimps Are Reasoning or Acting on Instinct
    To confirm that the animals were truly engaging in reasoning rather than reacting on impulse, the researchers used tightly controlled experiments combined with computational modeling. These methods helped rule out simpler explanations, such as the chimps favoring the most recent clue (recency bias) or simply responding to the easiest cue to notice. The modeling showed that their decisions followed patterns consistent with rational belief revision.
    “We recorded their first choice, then their second, and compared whether they revised their beliefs,” Sanford said. “We also used computational models to test how their choices matched up with various reasoning strategies.”
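    The study’s actual computational models are not reproduced here; as a hypothetical illustration of rational belief revision, a simple Bayesian update over the two boxes behaves the way the chimps did: a weak clue favors one box, and a stronger contrary clue flips the choice.

```python
def posterior_box_a(prior_a: float, likelihood_ratio: float) -> float:
    """Posterior belief that box A holds the food, after seeing evidence
    whose strength is the likelihood ratio P(evidence | A) / P(evidence | B)."""
    odds_a = (prior_a / (1.0 - prior_a)) * likelihood_ratio
    return odds_a / (1.0 + odds_a)

# Start undecided; a weak hint (2:1 in favor of A) points to box A.
belief_a = posterior_box_a(0.5, 2.0)           # ~0.67: pick A
# A stronger later clue (5:1 in favor of B) arrives; the belief is revised.
belief_a = posterior_box_a(belief_a, 1 / 5.0)  # ~0.29: switch to B
choice = "A" if belief_a > 0.5 else "B"
```

The numbers (2:1, 5:1) are illustrative, not taken from the study; what matters is that the update weights evidence by its strength rather than its recency.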
    This work challenges long-held assumptions that rationality, defined as forming and updating beliefs based on evidence, belongs only to humans.

    “The difference between humans and chimpanzees isn’t a categorical leap. It’s more like a continuum,” Sanford said.
    Broader Implications for Learning, Childhood Development and AI
    Sanford believes these findings may influence how scientists think about a wide range of fields. Learning how primates update their beliefs could reshape ideas about how children learn and even how artificial intelligence systems are designed.
    “This research can help us think differently about how we approach early education or how we model reasoning in AI systems,” she said. “We shouldn’t assume children are blank slates when they walk into a classroom.”
    The next phase of the project will apply the same belief revision tasks to young children. Sanford’s team is now gathering data from two- to four-year-olds to see how toddlers handle changing information compared to chimps.
    “It’s fascinating to design a task for chimps, and then try to adapt it for a toddler,” she said.

    Expanding the Study to Other Primates
    Sanford hopes to broaden the work to additional primate species, creating a comparative view of reasoning abilities across evolutionary branches. Her previous research spans topics from empathy in dogs to numerical understanding in children, and she notes that one theme continues to stand out: animals often demonstrate far more cognitive sophistication than people assume.
    “They may not know what science is, but they’re navigating complex environments with intelligent and adaptive strategies,” she said. “And that’s something worth paying attention to.”
    Other members of the research team include: Bill Thompson (UC Berkeley Psychology); Snow Zhang (UC Berkeley Philosophy); Joshua Rukundo (Ngamba Island Chimpanzee Sanctuary/Chimpanzee Trust, Uganda); Josep Call (School of Psychology and Neuroscience, University of St Andrews); and Esther Herrmann (School of Psychology, University of Portsmouth).

  •

    A single beam of light runs AI with supercomputer power

    Tensor operations are a form of advanced mathematics that support many modern technologies, especially artificial intelligence. These operations go far beyond the simple calculations most people encounter. A helpful way to picture them is to imagine manipulating a Rubik’s cube in several dimensions at once by rotating, slicing, or rearranging its layers. Humans and traditional computers must break these tasks into sequences, but light can perform all of them at the same time.
    Today, tensor operations are essential for AI systems involved in image processing, language understanding, and countless other tasks. As the amount of data continues to grow, conventional digital hardware such as GPUs faces increasing strain in speed, energy use, and scalability.
    Researchers Demonstrate Single-Shot Tensor Computing With Light
    To address these challenges, an international team led by Dr. Yufeng Zhang from the Photonics Group at Aalto University’s Department of Electronics and Nanoengineering has developed a fundamentally new approach. Their method allows complex tensor calculations to be completed within a single movement of light through an optical system. The process, described as single-shot tensor computing, functions at the speed of light.
    “Our method performs the same kinds of operations that today’s GPUs handle, like convolutions and attention layers, but does them all at the speed of light,” says Dr. Zhang. “Instead of relying on electronic circuits, we use the physical properties of light to perform many computations simultaneously.”
    Encoding Information Into Light for High-Speed Computation
    The team accomplished this by embedding digital information into the amplitude and phase of light waves, transforming numerical data into physical variations within the optical field. As these light waves interact, they automatically carry out mathematical procedures such as matrix and tensor multiplication, which form the basis of deep learning. By working with multiple wavelengths of light, the researchers expanded their technique to support even more complex, higher-order tensor operations.
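    The team’s actual optical implementation is not detailed here; as a rough numerical analogy (NumPy, with hypothetical values), encoding numbers into the amplitude and phase of a complex field and passing it through one linear transfer matrix yields the entire matrix-vector product in a single pass:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
W = rng.normal(size=(4, 3))   # weights, e.g. one layer of a neural network
x = rng.normal(size=3)        # input data

# Encode real numbers in the optical field: magnitude becomes the wave's
# amplitude, sign becomes a 0-or-pi phase shift.
field_in = np.abs(x) * np.exp(1j * np.pi * (x < 0))

# A passive linear optical element acts as a complex transfer matrix, so a
# single pass of light through it computes the whole matrix-vector product.
field_out = W.astype(complex) @ field_in
```

In the physical system the multiply-accumulate happens by interference as the light propagates; here the matrix product merely stands in for that propagation. Wavelength multiplexing, which the team uses for higher-order tensor operations, is not modeled in this sketch.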

    “Imagine you’re a customs officer who must inspect every parcel through multiple machines with different functions and then sort them into the right bins,” Zhang says. “Normally, you’d process each parcel one by one. Our optical computing method merges all parcels and all machines together — we create multiple ‘optical hooks’ that connect each input to its correct output. With just one operation, one pass of light, all inspections and sorting happen instantly and in parallel.”
    Passive Optical Processing With Wide Compatibility
    One of the most striking benefits of this method is how little intervention it requires. The necessary operations occur on their own as the light travels, so the system does not need active control or electronic switching during computation.
    “This approach can be implemented on almost any optical platform,” says Professor Zhipei Sun, leader of Aalto University’s Photonics Group. “In the future, we plan to integrate this computational framework directly onto photonic chips, enabling light-based processors to perform complex AI tasks with extremely low power consumption.”
    Path Toward Future Light-Based AI Hardware
    Zhang notes that the ultimate objective is to adapt the technique to existing hardware and platforms used by major technology companies. He estimates that the method could be incorporated into such systems within 3 to 5 years.
    “This will create a new generation of optical computing systems, significantly accelerating complex AI tasks across a myriad of fields,” he concludes.
    The study was published in Nature Photonics on November 14th, 2025.

  •

    Breakthrough shows light can move atoms in 2D semiconductors

    Researchers at Rice University have found that certain atom-thin semiconductors, known as transition metal dichalcogenides (TMDs), can physically shift their atomic lattice when exposed to light. This newly observed response offers a controllable way to tune the behavior and properties of these ultrathin materials.
    The phenomenon appears in a subtype of TMDs called Janus materials, named for the Roman god associated with transitions. Their light sensitivity could support future technologies that rely on optical signals instead of electrical currents, including faster and cooler computer chips, highly responsive sensors and flexible optoelectronic systems.
    “In nonlinear optics, light can be reshaped to create new colors, faster pulses or optical switches that turn signals on and off,” said Kunyan Zhang, a Rice doctoral alumna and first author of the study. “Two-dimensional materials, which are only a few atoms thick, make it possible to build these optical tools on a very small scale.”
    What Makes Janus Materials Different
    TMDs consist of a layer of a transition metal such as molybdenum sandwiched between two layers of a chalcogen element like sulfur or selenium. Their blend of conductivity, strong light absorption and mechanical flexibility has made them key candidates for next-generation electronic and optical devices.
    Within this group, Janus materials stand apart because their top and bottom atomic layers are composed of different chemical elements, giving them an asymmetric structure. This imbalance produces a built-in electrical polarity and increases their sensitivity to light and external forces.
    “Our work explores how the structure of Janus materials affects their optical behavior and how light itself can generate a force in the materials,” Zhang said.

    Detecting Atomic Motion With Laser Light
    To investigate this behavior, the team used laser beams of various colors on a two-layer Janus TMD material composed of molybdenum sulfur selenide stacked on molybdenum disulfide. They examined how it alters light through second harmonic generation (SHG), a process in which the material emits light at twice the frequency of the incoming beam. When the incoming laser matched the material’s natural resonances, the usual SHG pattern became distorted, revealing that the atoms were shifting.
    “We discovered that shining light on Janus molybdenum sulfur selenide and molybdenum disulfide creates tiny, directional forces inside the material, which show up as changes in its SHG pattern,” Zhang said. “Normally, the SHG signal forms a six-pointed ‘flower’ shape that mirrors the crystal’s symmetry. But when light pushes on the atoms, this symmetry breaks — the petals of the pattern shrink unevenly.”
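    As a toy model only (the study’s full nonlinear response is not given here), the six-petal flower and its uneven shrinking can be sketched with a cos²(3θ) polarization-resolved intensity plus a small hypothetical strain term:

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 721)   # polarization angle

# Ideal three-fold-symmetric crystal: SHG intensity ~ cos^2(3*theta),
# a six-petal "flower" with equal petals.
i_ideal = np.cos(3.0 * theta) ** 2

# Hypothetical symmetry breaking: a small directional strain term (epsilon
# is an illustrative scale, not a measured value) makes petals unequal.
epsilon = 0.2
i_strained = (np.cos(3.0 * theta) + epsilon * np.cos(theta)) ** 2

petal_angles = np.arange(6) * np.pi / 3.0
ideal_peaks = np.cos(3.0 * petal_angles) ** 2            # all equal
strained_peaks = (np.cos(3.0 * petal_angles)
                  + epsilon * np.cos(petal_angles)) ** 2  # uneven heights
```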
    Optostriction and Layer Coupling
    The researchers traced the SHG distortion to optostriction, a process in which the electromagnetic field of light applies a mechanical force on atoms. In Janus materials, the strong coupling between layers magnifies this effect, allowing even extremely small forces to produce measurable strain.
    “Janus materials are ideal for this because their uneven composition creates an enhanced coupling between layers, which makes them more sensitive to light’s tiny forces — forces so small that it is difficult to measure directly, but we can detect them through changes in the SHG signal pattern,” Zhang said.

    Potential for Future Optical Technologies
    This high sensitivity suggests that Janus materials could become valuable components in a wide range of optical technologies. Devices that guide or control light using this mechanism may lead to faster, more energy-efficient photonic chips, since light-based circuits produce less heat than traditional electronics. Similar properties could be used to build finely tuned sensors that detect extremely small vibrations or pressure shifts, or to develop adjustable light sources for advanced displays and imaging systems.
    “Such active control could help design next-generation photonic chips, ultrasensitive detectors or quantum light sources — technologies that use light to carry and process information instead of relying on electricity,” said Shengxi Huang, associate professor of electrical and computer engineering and materials science and nanoengineering at Rice and a corresponding author of the study. Huang is also affiliated with the Smalley-Curl Institute, the Rice Advanced Materials Institute and the Ken Kennedy Institute.
    Small Structural Imbalances With Big Impact
    By demonstrating how the internal asymmetry of Janus TMDs creates new ways to influence the flow of light, the study shows that tiny structural differences can unlock significant technological opportunities.
    The research was supported by the National Science Foundation (2246564, 1943895), the Air Force Office of Scientific Research (FA9550-22-1-0408), the Welch Foundation (C-2144), the U.S. Department of Energy (DE‐SC0020042, DE-AC02-05CH11231), the U.S. Air Force Office of Scientific Research (FA2386-24-1-4049) and the Taiwan Ministry of Education. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of funding organizations and institutions.

  •

    New prediction breakthrough delivers results shockingly close to reality

    An international group of mathematicians led by Lehigh University statistician Taeho Kim has developed a new way to generate predictions that line up more closely with real-world results. Their method is aimed at improving forecasting across many areas of science, particularly in health research, biology and the social sciences.
    The researchers call their technique the Maximum Agreement Linear Predictor, or MALP. Its central goal is to enhance how well predicted values match observed ones. MALP does this by maximizing the Concordance Correlation Coefficient, or CCC. This statistical measure evaluates how closely pairs of numbers fall along the 45-degree line in a scatter plot, reflecting both precision (how tightly the points cluster) and accuracy (how close they are to that line). Traditional approaches, including the widely used least-squares method, typically try to reduce average error. Although effective in many situations, these methods can miss the mark when the main objective is to ensure strong alignment between predictions and actual values, says Kim, assistant professor of mathematics.
    “Sometimes, we don’t just want our predictions to be close — we want them to have the highest agreement with the real values,” Kim explains. “The issue is, how can we define the agreement of two objects in a scientifically meaningful way? One way we can conceptualize this is how close the points are aligned with a 45 degree line on a scatter plot between the predicted value and the actual values. So, if the scatter plot of these shows a strong alignment with this 45 degree line, then we could say there is a good level of agreement between these two.”
    Why Agreement Matters More Than Simple Correlation
    According to Kim, people often think first of Pearson’s correlation coefficient when they hear the word agreement, since it is introduced early in statistics education and remains a fundamental tool. Pearson’s method measures the strength of a linear relationship between two variables, but it does not specifically check whether the relationship aligns with the 45-degree line. For instance, it can detect strong correlations for lines that tilt at 50 degrees or 75 degrees, as long as the data points lie close to a straight line, Kim says.
    “In our case, we are specifically interested in alignment with a 45-degree line. For that, we use a different measure: the concordance correlation coefficient, introduced by Lin in 1989. This metric focuses specifically on how well the data align with a 45-degree line. What we’ve developed is a predictor designed to maximize the concordance correlation between predicted values and actual values.”
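    Lin’s CCC has a simple closed form, CCC = 2·s_xy / (s_x² + s_y² + (x̄ − ȳ)²). A short Python sketch contrasts it with Pearson’s r on data that lie on a steep straight line, exactly the case Kim describes:

```python
import numpy as np

def pearson(x, y):
    return np.corrcoef(x, y)[0, 1]

def ccc(x, y):
    """Lin's concordance correlation coefficient (Lin, 1989)."""
    mx, my = x.mean(), y.mean()
    sxy = ((x - mx) * (y - my)).mean()
    return 2.0 * sxy / (x.var() + y.var() + (mx - my) ** 2)

x = np.linspace(0.0, 10.0, 50)
y = 3.0 * x + 5.0   # perfectly linear, but tilted well away from 45 degrees

r = pearson(x, y)   # ~1.0: Pearson rewards any straight-line relationship
c = ccc(x, y)       # ~0.17: CCC penalizes departure from the identity line
```

The data here are synthetic; the point is only that a perfect Pearson correlation can coexist with poor agreement, which is the gap MALP is built to close.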
    Testing MALP With Eye Scans and Body Measurements
    To evaluate how well MALP performs, the team ran tests using both simulated data and real measurements, including eye scans and body fat assessments. One study applied MALP to data from an ophthalmology project comparing two types of optical coherence tomography (OCT) devices: the older Stratus OCT and the newer Cirrus OCT. As medical centers move to the Cirrus system, doctors need a dependable way to translate measurements so they can compare results over time. Using high-quality images from 26 left eyes and 30 right eyes, the researchers examined how accurately MALP could predict Stratus OCT readings from Cirrus OCT measurements and compared its performance with the least-squares method. MALP produced predictions that aligned more closely with the true Stratus values, while least squares slightly outperformed MALP in reducing average error, highlighting a tradeoff between agreement and error minimization.

    The team also looked at a body fat data set from 252 adults that included weight, abdomen size and other body measurements. Direct measures of body fat percentage, such as underwater weighing, are reliable but expensive, so easier measurements are often substituted. MALP was used to estimate body fat percentage and was evaluated against the least-squares method. The results were similar to the eye scan study: MALP delivered predictions that more closely matched real values, while least squares again had slightly lower average errors. This repeated pattern underscored the ongoing balance between agreement and minimizing error.
    Choosing the Right Tool for the Right Task
    Kim and his colleagues observed that MALP frequently provided predictions that matched the actual data more effectively than standard techniques. Even so, they note that researchers should choose between MALP and more traditional methods based on their specific priorities. When reducing overall error is the primary goal, established methods still perform well. When the emphasis is on predictions that align as closely as possible with real outcomes, MALP is often the stronger option.
    The potential impact of this work reaches into many scientific areas. Improved prediction tools could benefit medicine, public health, economics and engineering. For researchers who rely on forecasting, MALP offers a promising alternative, especially when achieving close agreement with real-world results matters more than simply narrowing the average gap between predicted and observed values.
    “We need to investigate further,” Kim says. “Currently, our setting is within the class of linear predictors. This set is large enough to be practically used in various fields, but it is still restricted mathematically speaking. So, we wish to extend this to the general class so that our goal is to remove the linear part and so it becomes the Maximum Agreement Predictor.”

  •

    A radical upgrade pushes quantum links 200x farther

    Quantum computers can perform certain calculations at remarkable speeds, yet connecting them over long distances has been one of the major obstacles to building large, reliable quantum networks.
    Until recently, two quantum computers could only link through a fiber cable over a span of a few kilometers. This limitation meant that a system on the University of Chicago’s South Side campus could not communicate with one in the Willis Tower, even though both are located within the same city. The distance was simply too great for current technology.
    A new study published on November 6 in Nature Communications by University of Chicago Pritzker School of Molecular Engineering (UChicago PME) Asst. Prof. Tian Zhong suggests that this boundary can be pushed dramatically farther. His team’s work indicates that quantum connections could, in theory, extend up to 2,000 km (1,243 miles).
    With this method, the UChicago quantum computer that once struggled to reach the Willis Tower could instead connect with a device located outside Salt Lake City, Utah.
    “For the first time, the technology for building a global-scale quantum internet is within reach,” said Zhong, who recently received the prestigious Sturge Prize for this research.
    Why Quantum Coherence Matters
    To create high-performance quantum networks, researchers must entangle atoms and maintain that entanglement as signals travel through fiber cables. The greater the coherence time of those entangled atoms, the farther apart the connected quantum computers can be.

    In the new study, Zhong’s team succeeded in raising the coherence time of individual erbium atoms from 0.1 milliseconds to more than 10 milliseconds. In one experiment, they achieved 24 milliseconds of coherence. Under ideal conditions, this improvement could enable communication between quantum computers separated by roughly 4,000 km, the distance between UChicago PME and Ocaña, Colombia.
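    One hedged way to see where such distance figures come from: if the entangled memory must stay coherent for roughly the photon’s one-way travel time through fiber (an assumed first-order bound that ignores heralding round trips and fiber loss), the arithmetic lines up with the quoted reach:

```python
FIBER_SIGNAL_SPEED_KM_S = 2.0e5   # light in silica fiber, roughly c / 1.5

def coherence_limited_link_km(coherence_time_s: float) -> float:
    """Crude upper bound: the memory must stay coherent at least as long
    as the one-way photon travel time across the link."""
    return FIBER_SIGNAL_SPEED_KM_S * coherence_time_s

old_limit = coherence_limited_link_km(0.1e-3)  # ~20 km: a metro-area link
new_limit = coherence_limited_link_km(10e-3)   # ~2,000 km: the quoted reach
```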
    Building the Same Materials in a New Way
    The team did not switch to unfamiliar or exotic materials. Instead, they reimagined how the materials were constructed. They produced the rare-earth doped crystals required for quantum entanglement using a method called molecular-beam epitaxy (MBE) rather than the standard Czochralski method.
    “The traditional way of making this material is by essentially a melting pot,” Zhong said, referring to the Czochralski approach. “You throw in the right ratio of ingredients and then melt everything. It goes above 2,000 degrees Celsius and is slowly cooled down to form a material crystal.”
    Afterward, researchers carve the cooled crystal chemically to shape it into a usable component. Zhong likens this to a sculptor chiseling away at marble until the final form emerges.
    MBE relies on a very different idea. It resembles 3D printing, but at the atomic scale. The process lays down the crystal in extremely thin layers, eventually forming the exact structure needed for the device.

    “We start with nothing and then assemble this device atom by atom,” Zhong said. “The quality or purity of this material is so high that the quantum coherence properties of these atoms become superb.”
    Although MBE has been used in other areas of materials science, it had not previously been applied to this type of rare-earth doped material. For this project, Zhong collaborated with materials synthesis specialist UChicago PME Asst. Prof. Shuolong Yang to adapt MBE to their needs.
    Institute of Photonic Sciences Prof. Dr. Hugues de Riedmatten, who was not part of the study, described the results as an important step forward. “The approach demonstrated in this paper is highly innovative,” he said. “It shows that a bottom-up, well-controlled nanofabrication approach can lead to the realization of single rare-earth ion qubits with excellent optical and spin coherence properties, leading to a long-lived spin photon interface with emission at telecom wavelength, all in a fiber-compatible device architecture. This is a significant advance that offers an interesting scalable avenue for the production of many networkable qubits in a controlled fashion.”
    Preparing for Real-World Tests
    The next phase of the project is to determine whether the improved coherence times can indeed support long-distance quantum communication outside of theoretical models.
    “Before we actually deploy fiber from, let’s say, Chicago to New York, we’re going to test it just within my lab,” Zhong said.
    The team plans to link two qubits housed in separate dilution refrigerators (“fridges”) inside Zhong’s laboratory using 1,000 kilometers of coiled fiber. This step will help them verify that the system behaves as expected before moving to larger scales.
    “We’re now building the third fridge in my lab. When it’s all together, that will form a local network, and we will first do experiments locally in my lab to simulate what a future long-distance network will look like,” Zhong said. “This is all part of the grand goal of creating a true quantum internet, and we’re achieving one more milestone towards that.”

  •

    Entangled spins give diamonds a quantum advantage

    The quest to create useful quantum technologies begins with a deep understanding of the strange laws that govern quantum behavior and how those principles can be applied to real materials. At the University of California, Santa Barbara, physicist Ania Jayich, Bruker Endowed Chair in Science and Engineering, Elings Chair in Quantum Science, and co-director of the NSF Quantum Foundry, leads a lab where the key material is laboratory-grown diamond.
    Working at the intersection of quantum physics and materials science, Jayich and her team study how precise atomic-scale imperfections in diamond — known as spin qubits — can be engineered for advanced quantum sensing. Among the group’s standout researchers, Lillian Hughes, who recently completed her Ph.D. and is heading to Caltech for postdoctoral work, made a major breakthrough in this field.
    Through three co-authored papers — one in PRX in March and two in Nature in October — Hughes demonstrated for the first time that not just individual qubits but two-dimensional ensembles of many quantum defects can be organized and entangled inside diamond. This achievement marks a milestone toward solid-state systems that deliver a measurable quantum advantage in sensing, opening a new path for the next generation of quantum devices.
    Engineering Quantum Defects in Diamond
    “We can create a configuration of nitrogen-vacancy (NV) center spins in the diamonds with control over their density and dimensionality, such that they are densely packed and depth-confined into a 2D layer,” Hughes explained. “And because we can design how the defects are oriented, we can engineer them to exhibit non-zero dipolar interactions.” This accomplishment formed the basis of the PRX study, “A strongly interacting, two-dimensional, dipolar spin ensemble in (111)-oriented diamond.”
    An NV center consists of a nitrogen atom replacing a carbon atom and an adjacent vacancy where a carbon atom is missing. “The NV center defect has a few properties, one of which is a degree of freedom called a spin — a fundamentally quantum mechanical concept. In the case of the NV center, the spin is very long lived,” said Jayich. “These long-lived spin states make NV centers useful for quantum sensing. The spin couples to the magnetic field that we’re trying to sense.”
    From MRI to Quantum Sensing
    The concept of using spin as a sensor dates back to the development of magnetic resonance imaging (MRI) in the 1970s. Jayich explained that MRI works by controlling the alignment and energy states of protons and detecting the signals they emit as they relax, forming an image of internal structures.

    “Previous quantum-sensing experiments conducted in a solid-state system have all made use of single spins or non-interacting spin ensembles,” Jayich said. “What’s new here is that, because Lillian was able to grow and engineer these very strongly interacting dense spin ensembles, we can actually leverage the collective behavior, which provides an extra quantum advantage, allowing us to use the phenomena of quantum entanglement to get improved signal-to-noise ratios, providing greater sensitivity and making a better measurement possible.”
    Why Diamond Matters for Quantum Sensors
    The type of entanglement-assisted sensing demonstrated by Hughes has been shown before, but only in gas-phase atomic systems. “Ideally, for many target applications, your sensor should be easy to integrate and to bring close to the system under study,” Jayich said. “It is much easier to do that with a solid-state material, like diamond, than with gas-phase atomic sensors on which, for instance, GPS is based. Furthermore, atomic sensors require significant auxiliary hardware to confine and control, such as vacuum chambers and numerous lasers, making it hard to bring an atomic sensor within nanometer-scale proximity to a protein, for instance, prohibiting high-spatial-resolution imaging.”
    Jayich’s team is especially focused on using diamond-based quantum sensors to study electronic properties of materials. “You can place material targets into nanometer-scale proximity of a diamond surface, thus bringing them really close to sub-surface NV centers,” Jayich explained. “So it’s very easy to integrate this type of diamond quantum sensor with a variety of interesting target systems. That’s a big reason why this platform is so exciting.”
    Probing Materials and Biology with Quantum Precision
    “A solid-state magnetic sensor of this kind could be very useful for probing, for instance, biological systems,” Jayich said. “Nuclear magnetic resonance [NMR] is based on detecting very small magnetic fields coming from the constituent atoms in, for example, biological systems. Such an approach is also useful if you want to understand new materials, whether electronic materials, superconducting materials, or magnetic materials that could be useful for a variety of applications.”
    Overcoming Quantum Noise

    Every measurement has a limit set by noise, which restricts precision. A fundamental form of this noise, called quantum projection noise, sets what’s known as the standard quantum limit — the point beyond which unentangled sensors cannot improve. If scientists can engineer specific interactions between sensors, they can surpass this boundary. One way to do this is through spin squeezing, which correlates quantum states to reduce uncertainty.
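    The scaling behind this can be made concrete with a minimal numeric sketch (not from the papers; the squeezing factor and ensemble size are illustrative). For N unentangled spins, projection noise limits the phase uncertainty to the standard quantum limit of 1/sqrt(N); squeezing multiplies that by a factor smaller than one:

```python
import math

def phase_uncertainty(n_spins, squeezing_db=0.0):
    """Phase uncertainty for an ensemble of n_spins sensors.

    Unentangled spins hit the standard quantum limit (SQL),
    delta_phi = 1 / sqrt(N). Spin squeezing reduces the projection
    noise by a linear factor xi < 1, expressed here in decibels.
    The numbers below are illustrative, not measured values.
    """
    xi = 10 ** (-squeezing_db / 20)  # dB of squeezing -> linear factor
    return xi / math.sqrt(n_spins)

# SQL for an ensemble of 10,000 spins:
sql = phase_uncertainty(10_000)            # 0.01
# 6 dB of spin squeezing roughly halves the uncertainty:
squeezed = phase_uncertainty(10_000, 6.0)  # ~0.005
```

    The same budget can be read the other way: reaching a given precision with squeezing requires fewer spins, or less averaging time, than at the SQL.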
    “It’s as if you were trying to measure something with a meter stick having gradations a centimeter apart; those centimeter-spaced gradations are effectively the amplitude of the noise in your measurement. You would not use such a meter stick to measure the size of an amoeba, which is much smaller than a centimeter,” Jayich said. “By squeezing — silencing the noise — you effectively use quantum mechanical interactions to ‘squish’ that meter stick, effectively creating finer gradations and allowing you to measure smaller things more precisely.”
    Amplifying Quantum Signals
    The team’s second Nature paper details another strategy for improving measurement: signal amplification. This approach strengthens the signal without increasing noise. In the meter stick analogy, amplifying the signal makes the amoeba appear larger so that even coarse measurement markings can capture it accurately.
    Looking ahead, Jayich is confident about applying these principles in real-world systems. “I don’t think the foreseen technical challenges will prevent demonstrating a quantum advantage in a useful sensing experiment in the near future,” she said. “It’s mostly about making the signal amplification stronger or increasing the amount of squeezing. One way to do that is to control the position of the spins in the 2D xy plane, forming a regular array.”
    “There’s a materials challenge here, in that, because we can’t dictate exactly where the spins will incorporate, they incorporate in somewhat random fashion within a plane,” Jayich added. “That’s something we’re working on now, so that eventually we can have a grid of these spins, each placed a specific distance from each other. That would address an outstanding challenge to realizing practical quantum advantage in sensing.”

  • in

    Brain-like learning found in bacterial nanopores

    Pore-forming proteins are widespread across living organisms. In humans, they are essential for immune defense, while in bacteria they often act as toxins that puncture cell membranes. These microscopic pores allow ions and molecules to move through membranes, controlling molecular traffic within cells. Because of their precision and control, scientists have adapted them as nanopore tools for biotechnology, such as in DNA sequencing and molecular sensing.
    Although biological nanopores have revolutionized biotechnology, they can behave in complex and sometimes erratic ways. Researchers still lack a complete understanding of how ions travel through them or why ion flow occasionally stops altogether.
    Two particularly puzzling behaviors have long intrigued scientists: rectification and gating. Rectification occurs when the flow of ions changes depending on the sign (positive or negative) of the applied voltage. Gating happens when the ion flow suddenly decreases or stops. These effects, especially gating, can disrupt nanopore-based sensing and have remained difficult to explain.
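    The two effects can be caricatured with a toy current-voltage model (an illustration only, not the authors' biophysical model; all conductance values are made up): rectification appears as different conductances for the two voltage signs, and gating as a collapse of conductance that shuts off the current.

```python
def pore_current(voltage_mV, g_pos=1.0, g_neg=0.4, gate_open=True):
    """Toy nanopore I-V relation in arbitrary units.

    Fixed charges lining the pore make ion flow easier in one
    direction, so the conductance differs with the voltage sign
    (rectification). When the pore 'gates', its conductance
    collapses and the current drops to zero until the pore resets.
    """
    if not gate_open:
        return 0.0
    g = g_pos if voltage_mV >= 0 else g_neg
    return g * voltage_mV

# Rectification ratio: |I(+100 mV)| / |I(-100 mV)|
ratio = abs(pore_current(100)) / abs(pore_current(-100))  # 2.5
```

    A real pore's rectification is of course a continuous, charge-dependent effect rather than two fixed slopes, but the asymmetry between voltage signs is the signature being measured.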
    A research team led by Matteo Dal Peraro and Aleksandra Radenovic at EPFL has now identified the physical mechanisms behind these two effects. Using a combination of experiments, simulations, and theoretical modeling, they found that both rectification and gating arise from the nanopore’s own electrical charges and the way those charges interact with the ions moving through the pore.
    Experimenting With Electric Charges
    The team studied aerolysin, a bacterial pore commonly used in sensing research. They modified the charged amino acids lining its interior to create 26 nanopore variants, each with a distinct charge pattern. By observing how ions traveled through these modified pores under different conditions, they were able to isolate key electrical and structural factors.
    To better understand how these effects evolve over time, the scientists applied alternating voltage signals to the nanopores. This approach allowed them to distinguish rectification, which occurs quickly, from gating, which develops more slowly. They then built biophysical models to interpret their data and reveal the mechanisms at work.

    How Nanopores Learn Like the Brain
    The researchers discovered that rectification happens because of how the charges along the inner surface influence ion movement, making it easier for ions to flow in one direction than the other, similar to a one-way valve. Gating, in contrast, occurs when a strong ion flow disrupts the charge balance and destabilizes the pore’s structure. This temporary collapse blocks ion passage until the system resets.
    Both effects depend on the exact placement and type of electrical charge within the nanopore. By reversing the charge “sign,” the team could control when and how gating occurred. When they increased the pore’s rigidity, gating stopped completely, confirming that structural flexibility is key to this phenomenon.
    Toward Smarter Nanopores
    These findings open new possibilities for engineering biological nanopores with custom properties. Scientists can now design pores that minimize unwanted gating for applications in nanopore sensing, or deliberately use gating for bio-inspired computing. In one demonstration, the team created a nanopore that mimics synaptic plasticity, “learning” from voltage pulses much like a neural synapse. This discovery suggests that future ion-based processors could one day harness such molecular “learning” to power new forms of computing.
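    The synaptic analogy can be sketched as a toy state-variable model (a rough illustration under assumed dynamics, not the EPFL implementation; all rate constants are invented): each voltage pulse partially collapses the pore's conductance, a small recovery term pulls it back toward rest between pulses, and the conductance after a pulse train therefore encodes stimulation history, loosely like a synaptic weight.

```python
def apply_pulses(n_pulses, g0=1.0, g_min=0.2, depress=0.3, recover=0.05):
    """Toy 'plastic' nanopore conductance after a train of pulses.

    Each pulse depresses the conductance toward g_min (gating-like
    collapse); between pulses a weak recovery term relaxes it back
    toward the resting value g0. More pulses -> lower conductance,
    so the pore 'remembers' how strongly it was stimulated.
    Parameters are illustrative, not fitted values.
    """
    g = g0
    for _ in range(n_pulses):
        g -= depress * (g - g_min)   # pulse: partial collapse
        g += recover * (g0 - g)      # inter-pulse recovery
    return g

# Conductance falls monotonically with the number of pulses applied,
# e.g. apply_pulses(1) > apply_pulses(5) > apply_pulses(20)
```

    In this caricature the conductance settles toward a steady value set by the balance of depression and recovery, which is the history-dependent behavior that makes such pores interesting for ion-based, neuromorphic-style computing.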