More stories

  • Scientists discover flaws that make electronics faster, smarter, and more efficient

    Scientists have turned a longstanding challenge in electronics — material defects — into a quantum-enhanced solution, paving the way for new-generation ultra-low-power spintronic devices.
    Spintronics, short for “spin electronics,” is a field of technology that aims to go beyond the limits of conventional electronics. Traditional devices rely only on the electric charge of electrons to store and process information. Spintronics takes advantage of two additional quantum properties: spin angular momentum, which can be imagined as a built-in “up” or “down” orientation of the electron, and orbital angular momentum, which describes how electrons move around atomic nuclei. By using these extra degrees of freedom, spintronic devices can store more data in smaller spaces, operate faster, consume less energy, and retain information even when the power is switched off.
    A longstanding challenge in spintronics has been the role of material defects. Introducing imperfections into a material can sometimes make it easier to “write” data into memory bits by reducing the current needed, but this typically comes at a cost: electrical resistance increases, spin Hall conductivity declines, and overall power consumption goes up. This trade-off has been a major obstacle to developing ultra-low-power spintronic devices.
    Now, the Flexible Magnetic-Electronic Materials and Devices Group from the Ningbo Institute of Materials Technology and Engineering (NIMTE) of the Chinese Academy of Sciences has found a way to turn this problem into an advantage. Their study, published in Nature Materials, focused on the orbital Hall effect in strontium ruthenate (SrRuO3), a transition metal oxide whose properties can be finely tuned. This quantum phenomenon causes electrons to move in a way determined by their orbital angular momentum.
    Using custom-designed devices and precision measurement techniques, the researchers uncovered an unconventional scaling law that achieves a “two birds with one stone” outcome: Defect engineering simultaneously boosts both orbital Hall conductivity and orbital Hall angle, a stark contrast to conventional spin-based systems.
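    As a quick orientation for that claim, the figure of merit can be written down from standard definitions (textbook relations, not equations quoted from the paper):

        \[ \theta_{\mathrm{OH}} = \frac{\sigma_{\mathrm{OH}}}{\sigma_{xx}}, \qquad \rho_{xx} = \sigma_{xx}^{-1}, \]

    where σ_OH is the orbital Hall conductivity, σ_xx the longitudinal conductivity, and θ_OH the orbital Hall angle that governs switching efficiency. In conventional spin-based systems, defects raise ρ_xx but also suppress the Hall conductivity, so the angle gains little. The scaling reported here instead has σ_OH rising with disorder, so the numerator grows while the denominator shrinks, boosting θ_OH on both fronts.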
    To explain this finding, the team linked it to the Dyakonov-Perel-like orbital relaxation mechanism. “Scattering processes that typically degrade performance actually extend the lifetime of orbital angular momentum, thereby enhancing orbital current,” said Dr. Xuan Zheng, a co-first author of the study.
    “This work essentially rewrites the rulebook for designing these devices,” said Prof. Zhiming Wang, a corresponding author of the study. “Instead of fighting material imperfections, we can now exploit them.”
    Experimental measurements confirm the technology’s potential: tailored conductivity modulation yielded a threefold improvement in switching energy efficiency.
    This study not only provides new insights into orbital transport physics but also redefines design strategies for energy-efficient spintronics.
    This study received support from the National Key Research and Development Program of China, the National Natural Science Foundation of China, and other funding bodies.

  • Why tiny bee brains could hold the key to smarter AI

    A new discovery of how bees use their flight movements to facilitate remarkably accurate learning and recognition of complex visual patterns could mark a major change in how next-generation AI is developed, according to a University of Sheffield study. Researchers at the University of Sheffield built a digital model of a bee’s brain that explains how these movements create clear, efficient brain signals, allowing bees to easily understand what they see. The discovery could revolutionize AI and robotics, suggesting that future robots can be smarter and more efficient by using movement to gather relevant information, rather than relying on huge computer networks. The study also highlights a big idea: intelligence comes from how brains, bodies and the environment work together, and it demonstrates how even tiny insect brains can solve complex visual tasks using very few brain cells, which has major implications for both biology and AI.
    By building a computational model — or a digital version of a bee’s brain — researchers have discovered how the way bees move their bodies during flight helps shape visual input and generates unique electrical messages in their brains. These movements generate neural signals that allow bees to easily and efficiently identify predictable features of the world around them. This ability means bees demonstrate remarkable accuracy in learning and recognizing complex visual patterns during flight, such as those found in a flower.
    The model not only deepens our understanding of how bees learn and recognize complex patterns through their movements, but also paves the way for next-generation AI. It demonstrates that future robots can be smarter and more efficient by using movement to gather information, rather than relying on massive computing power.
    Professor James Marshall, Director of the Centre of Machine Intelligence at the University of Sheffield and senior author on the study, said: “In this study we’ve successfully demonstrated that even the tiniest of brains can leverage movement to perceive and understand the world around them. This shows us that a small, efficient system — albeit the result of millions of years of evolution — can perform computations vastly more complex than we previously thought possible.
    “Harnessing nature’s best designs for intelligence opens the door for the next generation of AI, driving advancements in robotics, self-driving vehicles and real-world learning.”
    The study, a collaboration with Queen Mary University of London, was recently published in the journal eLife. It builds on the team’s previous research into how bees use active vision — the process where their movements help them collect and process visual information. While their earlier work observed how bees fly around and inspect specific patterns, this new study provides a deeper understanding of the underlying brain mechanisms driving that behavior.

    The sophisticated visual pattern learning abilities of bees, such as differentiating between human faces, have long been understood; however, the study’s findings shed new light on how pollinators navigate the world with such seemingly simple efficiency.
    Dr. HaDi MaBouDi, lead author and researcher at the University of Sheffield, said: “In our previous work, we were fascinated to discover that bees employ a clever scanning shortcut to solve visual puzzles. But that just told us what they do; for this study, we wanted to understand how.
    “Our model of a bee’s brain demonstrates that its neural circuits are optimized to process visual information not in isolation, but through active interaction with its flight movements in the natural environment, supporting the theory that intelligence comes from how brains, bodies and the environment work together.
    “We’ve learnt that bees, despite having brains no larger than a sesame seed, don’t just see the world — they actively shape what they see through their movements. It’s a beautiful example of how action and perception are deeply intertwined to solve complex problems with minimal resources. This is something that has major implications for both biology and AI.”
    The model shows that bee neurons become finely tuned to specific directions and movements as their brain networks gradually adapt through repeated exposure to various stimuli, refining their responses without relying on associations or reinforcement. This lets the bee’s brain adapt to its environment simply by observing while flying, without requiring instant rewards. This means the brain is incredibly efficient, using only a few active neurons to recognize things, conserving both energy and processing power.
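    To make the idea of reward-free tuning concrete, here is a minimal sketch in Python (not the authors’ published model; the network size, learning rate, and input statistics are invented for illustration). A single unit trained with Oja’s rule, a Hebbian update with built-in normalization, becomes selective to the dominant motion direction in its input stream purely through exposure:

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy stream of visual-motion inputs: 2-D flow vectors drawn mostly
        # from one dominant direction, standing in for the structured input
        # a bee's own flight movements generate (values are invented).
        directions = np.array([[1.0, 0.0], [0.0, 1.0]])
        idx = rng.choice(2, size=5000, p=[0.8, 0.2])
        samples = directions[idx] + 0.1 * rng.standard_normal((5000, 2))

        # One linear unit trained with Oja's rule: dw = eta * y * (x - y*w).
        # No reward or label is used; tuning emerges from exposure alone.
        w = rng.standard_normal(2)
        eta = 0.01
        for x in samples:
            y = w @ x
            w += eta * y * (x - y * w)

        print("learned preferred direction:", np.round(w / np.linalg.norm(w), 3))

    Run repeatedly, the unit’s weight vector settles along the most common flow direction, loosely mirroring how the model bee’s neurons become tuned to the stimuli its flight generates most often.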
    To validate their computational model, the researchers subjected it to the same visual challenges encountered by real bees. In a pivotal experiment, the model was tasked with differentiating between a ‘plus’ sign and a ‘multiplication’ sign. The model exhibited significantly improved performance when it mimicked the real bees’ strategy of scanning only the lower half of the patterns, a behaviour observed by the research team in a previous study.

    Even with just a small network of artificial neurons, the model successfully showed how bees can recognise human faces, underscoring the strength and flexibility of their visual processing.
    Professor Lars Chittka, Professor of Sensory and Behavioural Ecology at Queen Mary University of London, added: “Scientists have been fascinated by the question of whether brain size predicts intelligence in animals. But such speculations make no sense unless one knows the neural computations that underpin a given task.
    “Here we determine the minimum number of neurons required for difficult visual discrimination tasks and find that the numbers are staggeringly small, even for complex tasks such as human face recognition. Thus insect microbrains are capable of advanced computations.”
    Professor Mikko Juusola, Professor in System Neuroscience from the University of Sheffield’s School of Biosciences and Neuroscience Institute said: “This work strengthens a growing body of evidence that animals don’t passively receive information — they actively shape it.
    “Our new model extends this principle to higher-order visual processing in bees, revealing how behaviorally driven scanning creates compressed, learnable neural codes. Together, these findings support a unified framework where perception, action and brain dynamics co-evolve to solve complex visual tasks with minimal resources — offering powerful insights for both biology and AI.”
    By bringing together findings on how insects behave, how their brains work, and what computational models show, the study demonstrates how research on small insect brains can uncover basic rules of intelligence. These findings not only deepen our understanding of cognition but also have significant implications for developing new technologies.

  • Tiny quantum dots unlock the future of unbreakable encryption

    Physicists have developed a breakthrough concept in quantum encryption that makes private communication more secure over significantly longer distances, surpassing state-of-the-art technologies. For decades, experts believed such a technology upgrade required perfect optical hardware, namely, light sources that strictly emit one light particle (photon) at a time — something extremely difficult and expensive to build. But the new approach uses innovative encryption protocols applied to tiny, engineered materials called quantum dots to send encrypted information securely, even with imperfect light sources. Real-world tests show it can outperform even the best of current systems, potentially bringing quantum-safe communication closer to everyday use.
    A team of physicists has made a breakthrough that could bring secure quantum communication closer to everyday use — without needing flawless hardware.
    The research, led by PhD students Yuval Bloom and Yoad Ordan under the guidance of Professor Ronen Rapaport from the Racah Institute of Physics at Hebrew University, in collaboration with researchers from Los Alamos National Laboratory, and published in PRX Quantum, introduces a new practical approach that significantly improves how we send quantum-encrypted information using light particles — even when using imperfect equipment.
    Cracking a 40-Year-Old Challenge in Quantum Communication
    For four decades, the holy grail of quantum key distribution (QKD) — the science of creating unbreakable encryption using quantum mechanics — has hinged on one elusive requirement: perfectly engineered single-photon sources. These are tiny light sources that can emit one particle of light (photon) at a time. But in practice, building such devices with absolute precision has proven extremely difficult and expensive.
    To work around that, the field has relied heavily on lasers, which are easier to produce but not ideal. These lasers send faint pulses of light that contain a small, but unpredictable, number of photons — a compromise that limits both security and the distance over which data can be safely transmitted, as a smart eavesdropper can “steal” the information bits that are encoded simultaneously on more than one photon.
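    The vulnerability is easy to quantify. An attenuated laser pulse with mean photon number μ obeys Poisson statistics (standard quantum-optics background, not a result of this paper):

        \[ P(n) = e^{-\mu}\,\frac{\mu^{n}}{n!}, \qquad P(n \ge 2) = 1 - e^{-\mu}(1 + \mu). \]

    For a typical μ = 0.5, about 9% of all pulses, and nearly a quarter of the non-empty ones, carry more than one photon, and each such pulse is a potential leak to an eavesdropper.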
    A Better Way with Imperfect Tools
    Bloom, Ordan, and their team flipped the script. Instead of waiting for perfect photon sources, they developed two new protocols that work with what we have now — sub-Poissonian photon sources based on quantum dots, which are tiny semiconductor particles that behave like artificial atoms.

    By dynamically engineering the optical behavior of these quantum dots and pairing them with nanoantennas, the team was able to tweak how the photons are emitted. This fine-tuning allowed them to suggest and demonstrate two advanced encryption strategies:
    • A truncated decoy-state protocol: a new version of a widely used quantum encryption approach, tailored for imperfect single-photon sources, that weeds out potential hacking attempts due to multi-photon events.
    • A heralded purification protocol: a new method that dramatically improves signal security by “filtering” the excess photons in real time, ensuring that only true single-photon bits are recorded.
    In simulations and lab experiments, these techniques outperformed even the best versions of traditional laser-based QKD methods, raising the channel loss over which a secure key can be exchanged by more than 3 decibels — a substantial leap in the field.
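    As a rough numerical illustration of why sub-Poissonian sources help (a sketch using the standard small-μ approximation P(n ≥ 2) ≈ g2(0)·μ²/2; the quantum-dot g2 value below is hypothetical, not a number measured in this study):

        def multi_photon_prob(mu, g2):
            """Small-mu approximation P(n >= 2) ~= g2 * mu**2 / 2, where g2
            is the second-order correlation g2(0): 1 for a Poissonian laser
            pulse, well below 1 for a sub-Poissonian quantum-dot source."""
            return g2 * mu**2 / 2

        mu = 0.1  # mean photons per pulse (illustrative value)
        for label, g2 in [("attenuated laser", 1.0),
                          ("quantum dot, g2 = 0.05 (hypothetical)", 0.05)]:
            print(f"{label}: P(multi-photon) ~ {multi_photon_prob(mu, g2):.2e}")

    The multi-photon leak rate scales directly with g2(0), which is why engineering the emitter, rather than simply attenuating a laser harder, pays off in both security and range.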
    A Real-World Test and a Step Toward Practical Quantum Networks
    To prove it wasn’t just theory, the team built a real-world quantum communication setup using a room-temperature quantum dot source. They ran their new reinforced version of the well-known BB84 encryption protocol — the backbone of many quantum key distribution systems — and showed that their approach was not only feasible but superior to existing technologies.
    What’s more, their approach is compatible with a wide range of quantum light sources, potentially lowering the cost and technical barriers to deploying quantum-secure communication on a large scale.
    “This is a significant step toward practical, accessible quantum encryption,” said Professor Rapaport. “It shows that we don’t need perfect hardware to get exceptional performance — we just need to be smarter about how we use what we have.”
    Co-lead author Yuval Bloom added, “We hope this work helps open the door to real-world quantum networks that are both secure and affordable. The cool thing is that we don’t have to wait; it can be implemented with what we already have in many labs worldwide.”

  • Scientists discover forgotten particle that could unlock quantum computers

    Quantum computers have the potential to solve problems far beyond the reach of today’s fastest supercomputers. But today’s machines are notoriously fragile. The quantum bits, or “qubits,” that store and process information are easily disrupted by their environment, leading to errors that quickly accumulate.
    One of the most promising approaches to overcoming this challenge is topological quantum computing, which aims to protect quantum information by encoding it in the geometric properties of exotic particles called anyons. These particles, predicted to exist in certain two-dimensional materials, are expected to be far more resistant to noise and interference than conventional qubits.
    “Among the leading candidates for building such a computer are Ising anyons, which are already being intensely investigated in condensed matter labs due to their potential realization in exotic systems like the fractional quantum Hall state and topological superconductors,” said Aaron Lauda, professor of mathematics, physics and astronomy at the USC Dornsife College of Letters, Arts and Sciences and the study’s senior author. “On their own, Ising anyons can’t perform all the operations needed for a general-purpose quantum computer. The computations they support rely on ‘braiding,’ physically moving anyons around one another to carry out quantum logic. For Ising anyons, this braiding only enables a limited set of operations known as Clifford gates, which fall short of the full power required for universal quantum computing.”
    But in a new study published in Nature Communications, a team of mathematicians and physicists led by USC researchers has demonstrated a surprising workaround. By adding a single new type of anyon, which was previously discarded in traditional approaches to topological quantum computation, the team shows that Ising anyons can be made universal, capable of performing any quantum computation through braiding alone. The team dubbed these rescued particles “neglectons,” a name that reflects both their overlooked status and their newfound importance. This new anyon emerges naturally from a broader mathematical framework and provides exactly the missing ingredient needed to complete the computational toolkit.
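    Some standard background makes the stakes clear (textbook material, not notation from the paper): by the Gottesman-Knill theorem, circuits built only from Clifford gates can be simulated efficiently on a classical computer, so braiding Ising anyons alone offers no quantum advantage. Universality requires at least one non-Clifford operation, conventionally the T gate,

        \[ T = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix}, \]

    and in the team’s construction it is braiding around the stationary neglecton that supplies operations beyond the Clifford set.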
    From mathematical trash to quantum treasure
    The key lies in a new class of mathematical theories called non-semisimple topological quantum field theories (TQFTs). These extend the standard “semisimple” frameworks that physicists typically use to describe anyons. Traditional models simplify the underlying math by discarding objects with so-called “quantum trace zero,” effectively declaring them useless.
    “But those discarded objects turn out to be the missing piece,” Lauda explained. “It’s like finding treasure in what everyone else thought was mathematical garbage.”
    The new framework retains these neglected components and reveals a new type of anyon — the neglecton — which, when combined with Ising anyons, allows for universal computation using braiding alone. Crucially, only one neglecton is needed, and it remains stationary while the computation is performed by braiding Ising anyons around it.

    A house with unstable rooms
    The discovery wasn’t without its mathematical challenges. The non-semisimple framework introduces irregularities that violate unitarity, a fundamental principle ensuring that quantum mechanics preserves probability. Most physicists would have seen this as a fatal flaw.
    But Lauda’s team found an elegant workaround. They designed their quantum encoding to isolate these mathematical irregularities away from the actual computation. “Think of it like designing a quantum computer in a house with some unstable rooms,” Lauda explained. “Instead of fixing every room, you ensure all of your computing happens in the structurally sound areas while keeping the problematic spaces off-limits.”
    “We’ve effectively quarantined the strange parts of the theory,” Lauda said. “By carefully designing where the quantum information lives, we make sure it stays in the parts of the theory that behave properly, so the computation works even if the global structure is mathematically unusual.”
    From pure math to quantum reality
    The breakthrough illustrates how abstract mathematics can solve concrete engineering problems in unexpected ways.

    “By embracing mathematical structures that were previously considered useless, we unlocked a whole new chapter for quantum information science,” Lauda said.
    The research opens new directions both in theory and in practice. Mathematically, the team is working to extend their framework to other parameter values and to clarify the role of unitarity in non-semisimple TQFTs. On the experimental side, they aim to identify specific material platforms where the stationary neglecton could arise and to develop protocols that translate their braiding-based approach into realizable quantum operations.
    “What’s particularly exciting is that this work moves us closer to universal quantum computing with particles we already know how to create,” Lauda said. “The math gives a clear target: If experimentalists can find a way to realize this extra stationary anyon, it could unlock the full power of Ising-based systems.”
    In addition to Lauda, other authors include the study’s first author, Filippo Iulianelli, and Sung Kim of USC, and Joshua Sussan of Medgar Evers College of The City University of New York.
    The study was supported by National Science Foundation (NSF) Grants (DMS-1902092, DMS-2200419, DMS-2401375), Army Research Office (W911NF-20-1-0075), Simons Foundation Collaboration Grant on New Structures in Low-Dimensional Topology, Simons Foundation Travel Support Grant, NSF Graduate Research Fellowship (DGE-1842487) and PSC CUNY Enhanced Award (66685-00 54).

  • A star torn apart by a black hole lit up the Universe twice

    Astronomers used a UC Santa Cruz-led AI system to detect a rare supernova, SN 2023zkd, within hours of its explosion, allowing rapid follow-up observations before the fleeting event faded. Evidence suggests the blast was triggered by a massive star’s catastrophic encounter with a black hole companion, which either partially swallowed the star or tore it apart before it could explode on its own. Researchers say the same real-time anomaly-detection AI used here could one day be applied to fields like medical diagnostics, national security, and financial-fraud prevention.
    The explosion of a massive star locked in a deadly orbit with a black hole was discovered with the help of artificial intelligence by an astronomy collaboration, led by the University of California, Santa Cruz, that hunts for stars shortly after they explode as supernovae. The blast, named SN 2023zkd, was first spotted in July 2023 by a new AI algorithm designed to scan for unusual explosions in real time. The early alert allowed astronomers to begin follow-up observations immediately, an essential step in capturing the full story of the explosion.
    By the time the explosion was over, it had been observed by a large set of telescopes, both on the ground and from space. That included two telescopes at the Haleakalā Observatory in Hawaiʻi used by the Young Supernova Experiment (YSE) based at UC Santa Cruz.
    “Something exactly like this supernova has not been seen before, so it might be very rare,” said Ryan Foley, associate professor of astronomy and astrophysics at UC Santa Cruz. “Humans are reasonably good at finding things that ‘aren’t like the others,’ but the algorithm can flag things earlier than a human may notice. This is critical for these time-sensitive observations.”
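    To give a flavor of how software can flag “things that aren’t like the others” in a nightly stream of detections, here is a generic sketch using scikit-learn’s isolation forest (the light-curve features and numbers are invented, and this is not the collaboration’s actual algorithm):

        import numpy as np
        from sklearn.ensemble import IsolationForest

        rng = np.random.default_rng(1)

        # Toy summary features for a batch of transients:
        # [rise time (days), peak brightness (mag), color index].
        ordinary = rng.normal(loc=[15.0, -18.0, 0.0],
                              scale=[3.0, 0.5, 0.1], size=(500, 3))
        # One odd event: slow pre-explosion brightening, strange color.
        weird = np.array([[60.0, -19.5, 0.8]])
        batch = np.vstack([ordinary, weird])

        # Fit on the batch and flag outliers (-1) for human follow-up.
        clf = IsolationForest(contamination=0.005, random_state=0).fit(batch)
        flags = clf.predict(batch)
        print("flagged indices:", np.where(flags == -1)[0])

    Scoring each event as it arrives is what buys the hours-scale head start on follow-up observations described above.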
    Time-bound astrophysics
    Foley’s team runs YSE, which surveys an area of the sky equivalent to 6,000 times the full moon (4% of the night sky) every three days and has discovered thousands of new cosmic explosions and other astrophysical transients — dozens of them just days or hours after explosion.

    The scientists behind the discovery of SN 2023zkd said the most likely interpretation is that a collision between the massive star and the black hole was inevitable. As energy was lost from the orbit, their separation decreased until the supernova was triggered by the star’s gravitational stress as it was partially swallowed by the black hole.
    The discovery was published on August 13 in the Astrophysical Journal. “Our analysis shows that the blast was sparked by a catastrophic encounter with a black hole companion, and is the strongest evidence to date that such close interactions can actually detonate a star,” said lead author Alexander Gagliano, a fellow at the NSF Institute for Artificial Intelligence and Fundamental Interactions.
    An alternative interpretation considered by the team is that the black hole completely tore the star apart before it could explode on its own. In that case, the black hole quickly pulled in the star’s debris and bright light was generated when the debris crashed into the gas surrounding it. In both cases, a single, heavier black hole is left behind.
    An unusual, gradual glow up
    Located about 730 million light-years from Earth, SN 2023zkd initially looked like a typical supernova, with a single burst of light. But as the scientists tracked its decline over several months, it did something unexpected: It brightened again. To understand this unusual behavior, the scientists analyzed archival data, which showed something even more unusual: The system had been slowly brightening for more than four years before the explosion. That kind of long-term activity before the explosion is rarely seen in supernovae.
    Detailed analysis done in part at UC Santa Cruz revealed that the explosion’s light was shaped by material the star had shed in the years before it died. The early brightening came from the supernova’s blast wave hitting low-density gas. The second, delayed peak was caused by a slower but sustained collision with a thick, disk-like cloud. This structure — and the star’s erratic pre-explosion behavior — suggest that the dying star was under extreme gravitational stress, likely from a nearby, compact companion such as a black hole.

    Foley said he and Gagliano had several conversations about the spectra, leading to the eventual interpretation of the binary system with a black hole. Gagliano led the charge in that area, while Foley played the role of “spectroscopy expert” and served as a sounding board — and often, skeptic.
    At first, the idea that the black hole triggered the supernova almost sounded like science fiction, Foley recalled. So it was important to make sure all of the observations lined up with this explanation, and Foley said Gagliano methodically demonstrated that they did.
    “Our team also built the software platform that we use to consolidate data and manage observations. The AI tools used for this study are integrated into this software ecosystem,” Foley said. “Similarly, our research collaboration brings together the variety of expertise necessary to make these discoveries.”
    Co-author Enrico Ramirez-Ruiz, also a professor of astronomy and astrophysics, leads the theory team at UC Santa Cruz. Fellow co-author V. Ashley Villar, an assistant professor of astronomy in the Harvard Faculty of Arts and Sciences, provided AI expertise. The team behind this discovery was led by the Center for Astrophysics | Harvard & Smithsonian and the Massachusetts Institute of Technology as part of YSE.
    This work was funded by the National Science Foundation, NASA, the Moore Foundation, and the Packard Foundation. Several students, including Gagliano, are or were NSF graduate research fellows, Foley said.
    Societal costs of uncertainty
    Currently, though, Foley said the funding situation and outlook for continued support are very uncertain, forcing the collaboration to take fewer risks and reducing its science output overall. “The uncertainty means we are shrinking,” he said, “reducing the number of students who are admitted to our graduate program — many of them being forced out of the field or to take jobs outside the U.S.”
    Although predicting the path this AI approach will take is difficult, Foley said this research is cutting edge. “You can easily imagine similar techniques being used to screen for diseases, focus attention for terrorist attacks, treat mental health issues early, and detect financial fraud,” he explained. “Anywhere real-time detection of anomalies could be useful, these techniques will likely eventually play a role.”

  • Scientists just cracked the quantum code hidden in a single atom

    To build a large-scale quantum computer that works, scientists and engineers need to overcome the spontaneous errors that quantum bits, or qubits, create as they operate.
    Scientists combine many of these building blocks into error-correcting codes, using most qubits to suppress errors so that a minority can operate in a way that produces useful outcomes.
    As the number of useful (or logical) qubits grows, the number of physical qubits required grows even further. As this scales up, the sheer number of qubits needed to create a useful quantum machine becomes an engineering nightmare.
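    Typical numbers for the most-studied scheme, the surface code, show the scale of the problem (standard textbook estimates, not figures from this study): a code of distance d consumes on the order of

        \[ N_{\mathrm{phys}} \approx 2d^{2} \]

    physical qubits per logical qubit, while the logical error rate falls off only as \( p_{\mathrm{logical}} \sim (p/p_{\mathrm{th}})^{(d+1)/2} \) in the physical error rate p. Useful logical error rates are commonly projected to require hundreds to thousands of physical qubits for every logical one.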
    Now, for the first time, quantum scientists at the Quantum Control Laboratory at the University of Sydney Nano Institute have demonstrated a type of quantum logic gate that drastically reduces the number of physical qubits needed for its operation.
    To do this, they built an entangling logic gate on a single atom using an error-correcting code nicknamed the ‘Rosetta stone’ of quantum computing. It earns that name because it translates smooth, continuous quantum oscillations into clean, digital-like discrete states, making errors easier to spot and fix, and importantly, allowing a highly compact way to encode logical qubits.
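    In their idealized form, the GKP codewords make that continuous-to-discrete translation explicit (this is the textbook construction of the code, not the specific encoding used in this experiment): the logical states are combs of position eigenstates offset by √π,

        \[ |\bar{0}\rangle \propto \sum_{n\in\mathbb{Z}} |q = 2n\sqrt{\pi}\rangle, \qquad |\bar{1}\rangle \propto \sum_{n\in\mathbb{Z}} |q = (2n+1)\sqrt{\pi}\rangle, \]

    so a small drift in position or momentum shifts a state within its comb without turning a 0 into a 1, and can be measured and corrected.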
    GKP Codes: A Rosetta Stone for Quantum Computing
    This curiously named Gottesman-Kitaev-Preskill (GKP) code has for many years offered a theoretical possibility for significantly reducing the number of physical qubits needed to produce a functioning ‘logical qubit’, albeit by trading efficiency for complexity, which makes the codes very difficult to control.

    Research published on August 21 in Nature Physics demonstrates this as a physical reality, tapping into the natural oscillations of a trapped ion (a charged atom of ytterbium) to store GKP codes and, for the first time, realizing quantum entangling gates between them.
    Led by Sydney Horizon Fellow Dr Tingrei Tan at the University of Sydney Nano Institute, scientists have used their exquisite control over the harmonic motion of a trapped ion to bridge the coding complexity of GKP qubits, allowing a demonstration of their entanglement.
    “Our experiments have shown the first realization of a universal logical gate set for GKP qubits,” Dr Tan said. “We did this by precisely controlling the natural vibrations, or harmonic oscillations, of a trapped ion in such a way that we can manipulate individual GKP qubits or entangle them as a pair.”
    Quantum Logic Gate and Software Innovation
    A logic gate is an information switch that allows computers – quantum and classical – to be programmable to perform logical operations. Quantum logic gates use the entanglement of qubits to produce a completely different sort of operational system to that used in classical computing, underpinning the great promise of quantum computers.
    First author Vassili Matsos is a PhD student in the School of Physics and Sydney Nano. He said: “Effectively, we store two error-correctable logical qubits in a single trapped ion and demonstrate entanglement between them.

    “We did this using quantum control software developed by Q-CTRL, a spin-off start-up company from the Quantum Control Laboratory, with a physics-based model to design quantum gates that minimize the distortion of GKP logical qubits, so they maintain the delicate structure of the GKP code while processing quantum information.”
    A Milestone in Quantum Technology
    What Mr Matsos did is entangle two ‘quantum vibrations’ of a single atom. The trapped atom vibrates in three dimensions. Movement in each dimension is described by quantum mechanics and each is considered a ‘quantum state’. By entangling two of these quantum states realized as qubits, Mr Matsos created a logic gate using just a single atom, a milestone in quantum technology.
    This result massively reduces the quantum hardware required to create these logic gates, which allow quantum machines to be programmed.
    Dr Tan said: “GKP error correction codes have long promised a reduction in hardware demands to address the resource overhead challenge for scaling quantum computers. Our experiments achieved a key milestone, demonstrating that these high-quality quantum controls provide a key tool to manipulate more than just one logical qubit.
    “By demonstrating universal quantum gates using these qubits, we have a foundation to work towards large-scale quantum-information processing in a highly hardware-efficient fashion.”
    Across three experiments described in the paper, Dr Tan’s team used a single ytterbium ion contained in what is known as a Paul trap, which holds the single atom at room temperature using oscillating electric fields. A complex array of lasers then controls the ion’s natural vibrations, which are utilized to produce the complex GKP codes.
    This research represents an important demonstration that quantum logic gates can be built with a reduced number of physical qubits, increasing their efficiency.
    The authors declare no competing interests. Funding was received from the Australian Research Council, Sydney Horizon Fellowship, the US Office of Naval Research, the US Army Research Office, the US Air Force Office of Scientific Research, Lockheed Martin, Sydney Quantum Academy and private funding from H. and A. Harley.

  • This simple magnetic trick could change quantum computing forever

    The entry of quantum computers into society is currently hindered by their sensitivity to disturbances in the environment. Researchers from Chalmers University of Technology in Sweden, and Aalto University and the University of Helsinki in Finland, now present a new type of exotic quantum material, and a method that uses magnetism to create stability. This breakthrough can make quantum computers significantly more resilient – paving the way for them to be robust enough to tackle quantum calculations in practice.
    At the atomic scale, the laws of physics deviate from those in our ordinary large-scale world. There, particles adhere to the laws of quantum physics, which means they can exist in multiple states simultaneously and influence each other in ways that are not possible within classical physics. These peculiar but powerful phenomena hold the key to quantum computing and quantum computers, which have the potential to solve problems that no conventional supercomputer can handle today.
    But before quantum calculations can benefit society in practice, physicists need to solve a major challenge. Qubits, the basic units of a quantum computer, are extremely delicate. The slightest change in temperature, magnetic field, or even microscopic vibrations causes the qubits to lose their quantum states – and thus also their ability to perform complex calculations reliably.
    To solve the problem, researchers in recent years have begun exploring the possibility of creating materials that can provide better protection against these types of disturbances and noise in their fundamental structure – their topology. Quantum states that arise and are maintained through the structure of the material used in qubits are called topological excitations and are significantly more stable and resilient than others. However, the challenge remains to find materials that naturally support such robust quantum states.
    Newly developed material protects against disturbances
    Now, a research team from Chalmers University of Technology, Aalto University, and the University of Helsinki has developed a new quantum material for qubits that exhibits robust topological excitations. The breakthrough is an important step towards realising practical topological quantum computing by constructing stability directly into the material’s design.
    “This is a completely new type of exotic quantum material that can maintain its quantum properties when exposed to external disturbances. It can contribute to the development of quantum computers robust enough to tackle quantum calculations in practice,” says Guangze Chen, postdoctoral researcher in applied quantum physics at Chalmers and lead author of the study published in Physical Review Letters.

    ‘Exotic quantum materials’ is an umbrella term for several novel classes of solids with extreme quantum properties. The search for such materials, with special resilient properties, has been a long-standing challenge.
    Magnetism is the key in the new strategy
    Traditionally, researchers have followed a well-established ‘recipe’ for creating topological excitations based on spin-orbit coupling, a quantum interaction that links an electron’s spin to its orbital motion around the atomic nucleus. However, this ‘ingredient’ is relatively rare, and the method can therefore only be used on a limited number of materials.
    In the study, the research team presents a completely new method that uses magnetism – a much more common and accessible ingredient – to achieve the same effect. By harnessing magnetic interactions, the researchers were able to engineer the robust topological excitations required for topological quantum computing.
    “The advantage of our method is that magnetism exists naturally in many materials. You can compare it to baking with everyday ingredients rather than using rare spices,” explains Guangze Chen. “This means that we can now search for topological properties in a much broader spectrum of materials, including those that have previously been overlooked.”
    Paving the way for next-generation quantum computer platforms
    To accelerate the discovery of new materials with useful topological properties, the research team has also developed a new computational tool. The tool can directly calculate how strongly a material exhibits topological behaviour.
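    While the team’s tool is its own, the flavor of such a calculation can be sketched with a standard numerical recipe: computing the Chern number, a common measure of topological behaviour, for a toy two-band model (the Qi-Wu-Zhang model, chosen purely for illustration) by summing Berry curvature over a discretized Brillouin zone. None of the parameters below come from the study.

        import numpy as np

        def h_qwz(kx, ky, m=1.0):
            """Bloch Hamiltonian of the Qi-Wu-Zhang toy model."""
            sx = np.array([[0, 1], [1, 0]], complex)
            sy = np.array([[0, -1j], [1j, 0]], complex)
            sz = np.array([[1, 0], [0, -1]], complex)
            return (np.sin(kx) * sx + np.sin(ky) * sy
                    + (m + np.cos(kx) + np.cos(ky)) * sz)

        def chern_number(m=1.0, n=40):
            """Chern number of the lower band via the Fukui-Hatsugai-Suzuki
            lattice method: accumulate Berry phases around each plaquette."""
            ks = np.linspace(0, 2 * np.pi, n, endpoint=False)
            u = np.empty((n, n, 2), complex)
            for i, kx in enumerate(ks):
                for j, ky in enumerate(ks):
                    _, vecs = np.linalg.eigh(h_qwz(kx, ky, m))
                    u[i, j] = vecs[:, 0]  # lower-band eigenvector
            total = 0.0
            for i in range(n):
                for j in range(n):
                    u1, u2 = u[i, j], u[(i + 1) % n, j]
                    u3, u4 = u[(i + 1) % n, (j + 1) % n], u[i, (j + 1) % n]
                    # Phase of the plaquette link product = Berry flux.
                    prod = (np.vdot(u1, u2) * np.vdot(u2, u3)
                            * np.vdot(u3, u4) * np.vdot(u4, u1))
                    total += np.angle(prod)
            return round(total / (2 * np.pi))

        # Expect a nonzero integer (+/-1) in the topological phase
        # 0 < |m| < 2, and 0 in the trivial phase |m| > 2.
        print(chern_number(m=1.0), chern_number(m=3.0))

    A nonzero integer signals robust, disturbance-protected states; scanning such an invariant across candidate materials is the general spirit of the screening the researchers describe.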
    “Our hope is that this approach can help guide the discovery of many more exotic materials,” says Guangze Chen. “Ultimately, this can lead to next-generation quantum computer platforms, built on materials that are naturally resistant to the kind of disturbances that plague current systems.”

  • Cornell researchers build first ‘microwave brain’ on a chip

    Cornell University researchers have developed a low-power microchip they call a “microwave brain,” the first processor to compute on both ultrafast data signals and wireless communication signals by harnessing the physics of microwaves.
    Detailed today in the journal Nature Electronics, the processor is the first true microwave neural network, fully integrated on a silicon microchip. It performs real-time frequency domain computation for tasks like radio signal decoding, radar target tracking and digital data processing, all while consuming less than 200 milliwatts of power.
    “Because it’s able to distort in a programmable way across a wide band of frequencies instantaneously, it can be repurposed for several computing tasks,” said lead author Bal Govind, a doctoral student who conducted the research with Maxwell Anderson, also a doctoral student. “It bypasses a large number of signal processing steps that digital computers normally have to do.”
    That capability is enabled by the chip’s design as a neural network, a computer system modeled on the brain, using interconnected modes produced in tunable waveguides. This allows it to recognize patterns and learn from data. But unlike traditional neural networks that rely on digital operations and step-by-step instructions timed by a clock, this network uses analog, nonlinear behavior in the microwave regime, allowing it to handle data streams in the tens of gigahertz – much faster than most digital chips.
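    As a loose software analogy for this style of computing, where fixed analog dynamics do the heavy lifting and only a simple readout is trained, consider a reservoir-style toy in Python (the random “waveguide” mixing below is a stand-in invented for illustration, not a model of the Cornell hardware):

        import numpy as np

        rng = np.random.default_rng(2)
        N_FFT, N_MODES = 128, 64
        # Frozen random couplings, standing in for untrained analog modes.
        MIX = rng.standard_normal((N_MODES, N_FFT // 2 + 1))

        def frequency_features(signal):
            """Spectrum -> fixed random nonlinear mixing (never trained)."""
            spec = np.abs(np.fft.rfft(signal, n=N_FFT))
            return np.tanh(MIX @ spec)

        def make_signal(kind):
            """Toy 'wireless signal types': two distinct carrier bands."""
            t = np.arange(N_FFT)
            f = rng.uniform(0.05, 0.15) if kind == 0 else rng.uniform(0.25, 0.35)
            return np.sin(2 * np.pi * f * t) + 0.3 * rng.standard_normal(N_FFT)

        X = np.array([frequency_features(make_signal(k % 2)) for k in range(400)])
        y = np.array([k % 2 for k in range(400)])

        # Train only a linear readout by least squares.
        w, *_ = np.linalg.lstsq(X, 2.0 * y - 1.0, rcond=None)
        acc = np.mean((X @ w > 0) == (y == 1))
        print(f"readout accuracy: {acc:.2%}")

    Only the final linear readout is fitted; the frequency mixing itself is never trained, which is the sense in which such a system “bypasses a large number of signal processing steps” that digital pipelines perform explicitly.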
    “Bal threw away a lot of conventional circuit design to achieve this,” said Alyssa Apsel, professor of engineering, who was co-senior author with Peter McMahon, associate professor of applied and engineering physics. “Instead of trying to mimic the structure of digital neural networks exactly, he created something that looks more like a controlled mush of frequency behaviors that can ultimately give you high-performance computation.”
    The chip can perform both low-level logic functions and complex tasks like identifying bit sequences or counting binary values in high-speed data. It achieved at or above 88% accuracy on multiple classification tasks involving wireless signal types, comparable to digital neural networks but with a fraction of the power and size.
    “In traditional digital systems, as tasks get more complex, you need more circuitry, more power and more error correction to maintain accuracy,” Govind said. “But with our probabilistic approach, we’re able to maintain high accuracy on both simple and complex computations, without that added overhead.”
    The chip’s extreme sensitivity to inputs makes it well-suited for hardware security applications like sensing anomalies in wireless communications across multiple bands of microwave frequencies, according to the researchers.

    “We also think that if we reduce the power consumption more, we can deploy it to applications like edge computing,” Apsel said. “You could deploy it on a smartwatch or a cellphone and build native models on your smart device instead of having to depend on a cloud server for everything.”
    Though the chip is still experimental, the researchers are optimistic about its scalability. They are experimenting with ways to improve its accuracy and integrate it into existing microwave and digital processing platforms.
    The work emerged from an exploratory effort within a larger project supported by the Defense Advanced Research Projects Agency and the Cornell NanoScale Science and Technology Facility, which is funded in part by the National Science Foundation.