More stories

  • Caltech breakthrough makes quantum memory last 30 times longer

    While conventional computers store information in the form of bits, fundamental pieces of logic that take a value of either 0 or 1, quantum computers are based on qubits. These can have a state that is simultaneously both 0 and 1. This odd property, a quirk of quantum physics known as superposition, lies at the heart of quantum computing’s promise to ultimately solve problems that are intractable for classical computers.
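    To make the idea of superposition concrete, here is a minimal illustrative sketch (generic textbook material, not tied to the Caltech hardware): a qubit is a normalized pair of complex amplitudes, and measuring it yields 0 or 1 with probabilities given by the squared magnitudes of those amplitudes. The amplitudes below are arbitrary example values.
        import numpy as np

        # A qubit state |psi> = a|0> + b|1>, stored as a length-2 complex vector.
        # Equal superposition (illustrative values, not from the paper):
        psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

        # Born rule: probabilities of measuring 0 or 1 are |a|^2 and |b|^2.
        probs = np.abs(psi) ** 2
        print("P(0), P(1) =", probs)          # -> 0.5, 0.5

        # Simulate repeated measurements of freshly prepared copies of the state.
        rng = np.random.default_rng(0)
        outcomes = rng.choice([0, 1], size=1000, p=probs)
        print("fraction of 1s:", outcomes.mean())
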
    Many existing quantum computers are based on superconducting electronic systems in which electrons flow without resistance at extremely low temperatures. In these systems, the quantum mechanical nature of electrons flowing through carefully designed resonators creates superconducting qubits. These qubits are excellent at quickly performing the logical operations needed for computing. However, storing information — in this case quantum states, mathematical descriptors of particular quantum systems — is not their strong suit. Quantum engineers have been seeking a way to boost the storage times of quantum states by constructing so-called “quantum memories” for superconducting qubits.
    Now a team of Caltech scientists has used a hybrid approach for quantum memories, effectively translating electrical information into sound so that quantum states from superconducting qubits can survive in storage for a period up to 30 times longer than in other techniques.
    The new work, led by Caltech graduate students Alkim Bozkurt and Omid Golami, supervised by Mohammad Mirhosseini, assistant professor of electrical engineering and applied physics, appears in a paper published in the journal Nature Physics.
    “Once you have a quantum state, you might not want to do anything with it immediately,” Mirhosseini says. “You need to have a way to come back to it when you do want to do a logical operation. For that, you need a quantum memory.”
    Previously, Mirhosseini’s group showed that sound, specifically phonons, the individual particles of vibration (just as photons are the individual particles of light), could provide a convenient method for storing quantum information. The devices they tested in classical experiments seemed ideal for pairing with superconducting qubits because they worked at the same extremely high gigahertz frequencies (humans hear at hertz and kilohertz frequencies, at least a million times slower). They also performed well at the low temperatures needed to preserve quantum states in superconducting qubits and had long lifetimes.
    Now Mirhosseini and his colleagues have fabricated a superconducting qubit on a chip and connected it to a tiny device that scientists call a mechanical oscillator. Essentially a miniature tuning fork, the oscillator consists of flexible plates that are vibrated by sound waves at gigahertz frequencies. When an electric charge is placed on those plates, the plates can interact with electrical signals carrying quantum information. This allows information to be piped into the device for storage as a “memory” and be piped out, or “remembered,” later.

    The researchers carefully measured how long it took for the oscillator to lose its valuable quantum content once information entered the device. “It turns out that these oscillators have a lifetime about 30 times longer than the best superconducting qubits out there,” Mirhosseini says.
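    As a rough way to picture what a 30-fold longer lifetime buys, the toy model below assumes simple exponential decay of a stored excitation; the absolute lifetimes are hypothetical placeholders, since the article reports only the ratio.
        import numpy as np

        # Hypothetical lifetimes: the article reports only the ~30x ratio, not absolute values.
        T1_qubit = 1.0            # lifetime of a bare superconducting qubit (arbitrary units)
        T1_memory = 30.0          # mechanical-oscillator memory, ~30x longer

        t = np.linspace(0, 10, 6)
        survival_qubit = np.exp(-t / T1_qubit)     # probability the stored excitation survives
        survival_memory = np.exp(-t / T1_memory)

        for ti, sq, sm in zip(t, survival_qubit, survival_memory):
            print(f"t={ti:4.1f}  qubit={sq:.3f}  memory={sm:.3f}")
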
    This method of constructing a quantum memory offers several advantages over previous strategies. Acoustic waves travel much slower than electromagnetic waves, enabling much more compact devices. Moreover, mechanical vibrations, unlike electromagnetic waves, do not propagate in free space, which means that energy does not leak out of the system. This allows for extended storage times and mitigates undesirable energy exchange between nearby devices. These advantages point to the possibility that many such tuning forks could be included in a single chip, providing a potentially scalable way of making quantum memories.
    Mirhosseini says this work has demonstrated the minimum amount of interaction between electromagnetic and acoustic waves needed to probe the value of this hybrid system for use as a memory element. “For this platform to be truly useful for quantum computing, you need to be able to put quantum data in the system and take it out much faster. And that means that we have to find ways of increasing the interaction rate by a factor of three to 10 beyond what our current system is capable of,” Mirhosseini says. Luckily, his group has ideas about how that can be done.
    Additional authors of the paper, “A mechanical quantum memory for microwave photons,” are Yue Yu, a former visiting undergraduate student in the Mirhosseini lab; and Hao Tian, an Institute for Quantum Information and Matter postdoctoral scholar research associate in electrical engineering at Caltech. The work was supported by funding from the Air Force Office of Scientific Research and the National Science Foundation. Bozkurt was supported by an Eddleman Graduate Fellowship. More

  • Useful metals get unearthed in U.S. mines, then they’re tossed

    Many useful metals unearthed from U.S. mines are discarded.

    When mining operations dig for valuable metals, they often exhume ore containing other metals too. These by-product elements are usually treated as waste, but recovering even small fractions could offset the need to import them, researchers report August 21 in Science. For instance, recovering just 1 percent of rare earth elements from this material could replace imports.

    “We’re used to skimming cream off the top,” says Elizabeth Holley, a mining geologist from the Colorado School of Mines in Golden. “We need to be better at recovering more from what we’re using.” More

  • Google’s quantum computer just simulated the hidden strings of the Universe

    The research, published in the academic journal Nature, represents an essential step in quantum computing and demonstrates its potential by directly simulating fundamental interactions with Google’s quantum processor. In the future, researchers could use this approach to gain deeper insights into particle physics, quantum materials, and even the nature of space and time itself. The aim is to understand how nature works at its most fundamental level, described by so-called gauge theories.
    “Our work shows how quantum computers can help us explore the fundamental rules that govern our universe,” says co-author Michael Knap, Professor of Collective Quantum Dynamics at the TUM School of Natural Sciences. “By simulating these interactions in the laboratory, we can test theories in new ways.”
    Pedram Roushan, a co-author of this work from Google Quantum AI, emphasizes: “Harnessing the power of the quantum processor, we studied the dynamics of a specific type of gauge theory and observed how particles and the invisible ‘strings’ that connect them evolve over time.”
    Tyler Cochran, first author and graduate student at Princeton, says: “By adjusting effective parameters in the model, we could tune properties of the strings. They can fluctuate strongly, become tightly confined, or even break.” He explains that the data from the quantum processor reveals the hallmark behaviors of such strings, which have direct analogs to phenomena in high-energy particle physics. The results underscore the potential for quantum computers to facilitate scientific discovery in fundamental physics and beyond.
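    For intuition about the “strings” described above, here is a generic toy picture of confinement and string breaking (not the lattice gauge theory model actually run on the processor): the string’s energy grows roughly linearly with the separation of the charges it connects, and once that energy exceeds the cost of creating a new particle-antiparticle pair, the string can break. The tension and pair-creation cost below are arbitrary.
        # Toy picture of string confinement and breaking (illustrative numbers only).
        TENSION = 1.0        # energy per unit length of the string
        PAIR_COST = 4.0      # energy needed to create a new particle-antiparticle pair

        def string_energy(separation: float) -> float:
            """Linear ('confining') potential between two charges joined by a string."""
            return TENSION * separation

        for d in range(1, 8):
            e = string_energy(d)
            status = "string breaks (pair creation is cheaper)" if e > PAIR_COST else "string intact"
            print(f"separation={d}  energy={e:.1f}  -> {status}")
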
    The research was supported, in part, by the UK Research and Innovation (UKRI) under the UK government’s Horizon Europe funding guarantee [grant number EP/Y036069/1], the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy–EXC–2111–390814868, TRR 360 – 492547816, DFG grants No. KN1254/1-2, KN1254/2-1, DFG FOR 5522 Research Unit (project id 499180199), the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 851161 and No. 771537), the European Union (grant agreement No 101169765), as well as the Munich Quantum Valley, which is supported by the Bavarian state government with funds from the Hightech Agenda Bayern Plus. More

  • Scientists turn spin loss into energy, unlocking ultra-low-power AI chips

    Dr. Dong-Soo Han’s research team at the Korea Institute of Science and Technology (KIST) Semiconductor Technology Research Center, in collaboration with the research teams of Prof. Jung-Il Hong at DGIST and Prof. Kyung-Hwan Kim at Yonsei University, has developed a device principle that harnesses “spin loss,” previously written off as pure waste, as a new driving force for controlling magnetization.
    Spintronics is a technology that uses the “spin” property of electrons to store and control information. Because spintronic devices consume less power than conventional semiconductor devices and retain data even when the power is off, the field is regarded as a key foundation for next-generation information processing technologies such as ultra-low-power memory, neuromorphic chips, and hardware for stochastic computing. This research is significant because it presents a new approach that can substantially improve the efficiency of such spintronic devices.
    The team identified a new physical phenomenon that allows magnetic materials to switch their internal magnetization direction spontaneously, without an external stimulus. Magnetic materials are central to next-generation information processing devices, which store information or perform computations by changing the direction of their internal magnetization: an upward magnetization can represent ‘1’ and a downward magnetization ‘0’, allowing data to be stored and processed.
    Traditionally, to reverse the direction of magnetization, a large current is applied to force the spin of electrons into the magnet. However, this process results in spin loss, where some of the spin does not reach the magnet and is dissipated, which has been considered a major source of power waste and poor efficiency.
    Until now, researchers have focused on material design and process improvements to reduce spin loss. The team found instead that spin loss can itself alter magnetization: the lost spin induces a spontaneous magnetization switch within the magnetic material, much as a balloon is propelled by the air rushing out of it.
    In their experiments, the team demonstrated the seemingly paradoxical result that the greater the spin loss, the less power is required to switch the magnetization. The resulting energy efficiency is up to three times higher than that of conventional methods, and the effect can be realized without special materials or complex device structures, making it practical and industrially scalable.
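    One way to see how a modest gain in torque efficiency could translate into a roughly threefold energy saving is the back-of-the-envelope scaling below; it is a generic spin-orbit-torque estimate with made-up numbers, not the mechanism worked out in the paper.
        # Toy scaling for spin-orbit-torque switching energy (illustrative only).
        # Joule heating per switching pulse: E ~ Jc^2 * rho * volume * pulse_time,
        # and the critical current density Jc scales roughly as 1/efficiency.
        def switching_energy(efficiency, rho=1.0, volume=1.0, pulse_time=1.0):
            jc = 1.0 / efficiency          # higher effective torque efficiency -> lower Jc
            return jc**2 * rho * volume * pulse_time

        e_conventional = switching_energy(efficiency=1.0)
        e_spin_loss    = switching_energy(efficiency=3**0.5)   # ~1.7x higher effective efficiency

        print(f"energy ratio (conventional / spin-loss-assisted) = "
              f"{e_conventional / e_spin_loss:.1f}x")           # -> ~3x
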
    In addition, the technology uses a simple device structure that is compatible with existing semiconductor processes, making it well suited to mass production, miniaturization, and high-density integration. This opens up applications in fields such as AI semiconductors, ultra-low-power memory, neuromorphic computing, and probabilistic computing devices, and is expected in particular to accelerate the development of high-efficiency computing hardware for AI and edge computing.
    “Until now, the field of spintronics has focused only on reducing spin losses, but we have presented a new direction by using the losses as energy to induce magnetization switching,” said Dr. Dong-Soo Han, a senior researcher at KIST. “We plan to actively develop ultra-small and low-power AI semiconductor devices, as they can serve as the basis for ultra-low-power computing technologies that are essential in the AI era.”
    This research was supported by the Ministry of Science and ICT (Minister Bae Kyung-hoon) through the KIST Institutional Program, the Global TOP Research and Development Project (GTL24041-000), and the Basic Research Project of the National Research Foundation of Korea (2020R1A2C2005932). The results were published in the latest issue of the international journal Nature Communications (impact factor 15.7, top 7% of its JCR category). More

  • Scientists discover a strange new magnet that bends light like magic

    Researchers have uncovered the magnetic properties and underlying mechanisms of a novel magnet using advanced optical techniques. Their study focused on an organic crystal believed to be a promising candidate for an “altermagnet,” a recently proposed third class of magnetic materials. Unlike conventional ferromagnets and antiferromagnets, altermagnets carry no net magnetization yet still split their electronic bands by spin, giving them a distinctive combination of magnetic properties.
    Details of their breakthrough were published recently in the journal Physical Review Research.
    “Unlike typical magnets that attract each other, altermagnets do not exhibit net magnetization, yet they can still influence the polarization of reflected light,” points out Satoshi Iguchi, associate professor at Tohoku University’s Institute for Materials Research. “This makes them difficult to study using conventional optical techniques.”
    To overcome this, Iguchi and his colleagues applied a newly derived general formula for light reflection to the organic crystal, successfully clarifying its magnetic properties and origin.
    The group also comprised Yuka Ikemoto and Taro Moriwaki from the Japan Synchrotron Radiation Research Institute; Hirotake Itoh from the Department of Physics and Astronomy at Kwansei Gakuin University; Shinichiro Iwai from the Department of Physics at Tohoku University; and Tetsuya Furukawa and Takahiko Sasaki, also from the Institute for Materials Research.
    The team’s newly derived general formula for light reflection was based on Maxwell’s equations and is applicable to a wide range of materials, including those with low crystal symmetry, such as the organic compound studied here.
    This new theoretical framework also allowed the team to develop a precise optical measurement method and apply it to the organic crystal κ-(BEDT-TTF)2Cu[N(CN)2]Cl. They successfully measured the magneto-optical Kerr effect (MOKE) and extracted the off-diagonal optical conductivity spectrum, which provides detailed information about the material’s magnetic and electronic properties.
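    For readers curious how a Kerr measurement connects to the off-diagonal conductivity, the sketch below follows the textbook recipe for the simplest case, normal-incidence polar MOKE on a high-symmetry crystal; the real analysis for this low-symmetry organic crystal required the team’s more general reflection formula, and the conductivity values, frequency, and sign conventions here are illustrative assumptions.
        import numpy as np

        eps0 = 8.854e-12                      # vacuum permittivity (F/m)
        omega = 2 * np.pi * 30e12             # probe frequency (rad/s), made-up mid-infrared value

        # Made-up complex optical conductivities (S/m); sigma_xy is the off-diagonal part.
        sigma_xx = 2.0e4 + 1.0e4j
        sigma_xy = 1.0e2 + 5.0e1j

        # Dielectric tensor components (one common e^{-i omega t} convention; signs vary).
        eps_xx = 1 + 1j * sigma_xx / (eps0 * omega)
        eps_xy = 1j * sigma_xy / (eps0 * omega)

        # Circular-polarization eigenmodes and their Fresnel reflection coefficients.
        n_plus, n_minus = np.sqrt(eps_xx + 1j * eps_xy), np.sqrt(eps_xx - 1j * eps_xy)
        r_plus, r_minus = (n_plus - 1) / (n_plus + 1), (n_minus - 1) / (n_minus + 1)

        # Small-angle complex Kerr angle: real part = rotation, imaginary part = ellipticity.
        kerr = 1j * (r_plus - r_minus) / (r_plus + r_minus)
        print(f"Kerr rotation ~ {kerr.real*1e3:.2f} mrad, ellipticity ~ {kerr.imag*1e3:.2f} mrad")
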
    The results revealed three key features in the spectrum: (1) edge peaks indicating spin band splitting, (2) a real component associated with crystal distortion and piezomagnetic effects, and (3) an imaginary component linked to rotational currents. These findings not only confirm the altermagnetic nature of the material but also demonstrate the power of the newly developed optical method.
    “This research opens the door to exploring magnetism in a broader class of materials, including organic compounds, and lays the groundwork for future development of high-performance magnetic devices based on lightweight, flexible materials,” adds Iguchi. More

  • Scientists discover flaws that make electronics faster, smarter, and more efficient

    Scientists have turned a longstanding challenge in electronics — material defects — into a quantum-enhanced solution, paving the way for new-generation ultra-low-power spintronic devices.
    Spintronics, short for “spin electronics,” is a field of technology that aims to go beyond the limits of conventional electronics. Traditional devices rely only on the electric charge of electrons to store and process information. Spintronics takes advantage of two additional quantum properties: spin angular momentum, which can be imagined as a built-in “up” or “down” orientation of the electron, and orbital angular momentum, which describes how electrons move around atomic nuclei. By using these extra degrees of freedom, spintronic devices can store more data in smaller spaces, operate faster, consume less energy, and retain information even when the power is switched off.
    A longstanding challenge in spintronics has been the role of material defects. Introducing imperfections into a material can sometimes make it easier to “write” data into memory bits by reducing the current needed, but this typically comes at a cost: electrical resistance increases, spin Hall conductivity declines, and overall power consumption goes up. This trade-off has been a major obstacle to developing ultra-low-power spintronic devices.
    Now, the Flexible Magnetic-Electronic Materials and Devices Group at the Ningbo Institute of Materials Technology and Engineering (NIMTE) of the Chinese Academy of Sciences has found a way to turn this problem into an advantage. Their study, published in Nature Materials, focused on the orbital Hall effect in strontium ruthenate (SrRuO3), a transition metal oxide whose properties can be finely tuned. In this quantum phenomenon, electrons carrying opposite orbital angular momentum are deflected toward opposite sides of the material, producing a transverse orbital current.
    Using custom-designed devices and precision measurement techniques, the researchers uncovered an unconventional scaling law that achieves a “two birds with one stone” outcome: Defect engineering simultaneously boosts both orbital Hall conductivity and orbital Hall angle, a stark contrast to conventional spin-based systems.
    To explain this finding, the team linked it to the Dyakonov-Perel-like orbital relaxation mechanism. “Scattering processes that typically degrade performance actually extend the lifetime of orbital angular momentum, thereby enhancing orbital current,” said Dr. Xuan Zheng, a co-first author of the study.
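    A rough feel for why more scattering can lengthen the orbital lifetime comes from the motional-narrowing form of Dyakonov-Perel-type relaxation, 1/tau_L ≈ Omega^2 * tau_p: frequent momentum scattering interrupts the precession that would otherwise randomize the angular momentum. The sketch below is a generic toy calculation with made-up numbers, not the paper’s analysis.
        # Motional-narrowing toy model (Dyakonov-Perel-like): 1/tau_L ~ Omega^2 * tau_p,
        # so a SHORTER momentum scattering time tau_p (more defects) gives a LONGER
        # angular-momentum lifetime tau_L. Numbers are illustrative only.
        OMEGA = 1.0e12                       # effective precession frequency (rad/s), made up

        for tau_p in (1e-12, 5e-13, 1e-13):  # momentum scattering time (s): cleaner -> dirtier
            tau_L = 1.0 / (OMEGA**2 * tau_p)
            print(f"tau_p = {tau_p:.0e} s  ->  orbital lifetime tau_L = {tau_L:.1e} s")
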
    “This work essentially rewrites the rulebook for designing these devices,” said Prof. Zhiming Wang, a corresponding author of the study. “Instead of fighting material imperfections, we can now exploit them.”
    Experimental measurements confirm the technology’s potential: tailored conductivity modulation yielded a threefold improvement in switching energy efficiency.
    This study not only provides new insights into orbital transport physics but also redefines design strategies for energy-efficient spintronics.
    This study received support from the National Key Research and Development Program of China, the National Natural Science Foundation of China, and other funding bodies. More

  • Why tiny bee brains could hold the key to smarter AI

    A new discovery of how bees use their flight movements to facilitate remarkably accurate learning and recognition of complex visual patterns could mark a major change in how next-generation AI is developed, according to a University of Sheffield study. Researchers at the University of Sheffield built a digital model of a bee’s brain that explains how these movements create clear, efficient brain signals, allowing bees to easily understand what they see. The discovery could revolutionize AI and robotics, suggesting that future robots can be smarter and more efficient by using movement to gather relevant information rather than relying on huge computer networks. The study highlights a big idea: intelligence comes from how brains, bodies and the environment work together. It demonstrates how even tiny insect brains can solve complex visual tasks using very few brain cells, which has major implications for both biology and AI.
    By building a computational model — or a digital version of a bee’s brain — researchers have discovered how the way bees move their bodies during flight helps shape visual input and generates unique electrical messages in their brains. These movements generate neural signals that allow bees to easily and efficiently identify predictable features of the world around them. This ability means bees demonstrate remarkable accuracy in learning and recognizing complex visual patterns during flight, such as those found in a flower.
    The model not only deepens our understanding of how bees learn and recognize complex patterns through their movements, but also paves the way for next-generation AI. It demonstrates that future robots can be smarter and more efficient by using movement to gather information, rather than relying on massive computing power.
    Professor James Marshall, Director of the Centre of Machine Intelligence at the University of Sheffield and senior author on the study, said: “In this study we’ve successfully demonstrated that even the tiniest of brains can leverage movement to perceive and understand the world around them. This shows us that a small, efficient system — albeit the result of millions of years of evolution — can perform computations vastly more complex than we previously thought possible.
    “Harnessing nature’s best designs for intelligence opens the door for the next generation of AI, driving advancements in robotics, self-driving vehicles and real-world learning.”
    The study, a collaboration with Queen Mary University of London, was recently published in the journal eLife. It builds on the team’s previous research into how bees use active vision — the process where their movements help them collect and process visual information. While their earlier work observed how bees fly around and inspect specific patterns, this new study provides a deeper understanding of the underlying brain mechanisms driving that behavior.

    The sophisticated visual pattern learning abilities of bees, such as differentiating between human faces, have long been recognized; however, the study’s findings shed new light on how pollinators navigate the world with such seemingly simple efficiency.
    Dr. HaDi MaBouDi, lead author and researcher at the University of Sheffield, said: “In our previous work, we were fascinated to discover that bees employ a clever scanning shortcut to solve visual puzzles. But that just told us what they do; for this study, we wanted to understand how.
    “Our model of a bee’s brain demonstrates that its neural circuits are optimized to process visual information not in isolation, but through active interaction with its flight movements in the natural environment, supporting the theory that intelligence comes from how brains, bodies and the environment work together.
    “We’ve learnt that bees, despite having brains no larger than a sesame seed, don’t just see the world — they actively shape what they see through their movements. It’s a beautiful example of how action and perception are deeply intertwined to solve complex problems with minimal resources. This is something that has major implications for both biology and AI.”
    The model shows that bee neurons become finely tuned to specific directions and movements as their brain networks gradually adapt through repeated exposure to various stimuli, refining their responses without relying on associations or reinforcement. This lets the bee’s brain adapt to its environment simply by observing while flying, without requiring instant rewards. This means the brain is incredibly efficient, using only a few active neurons to recognize things, conserving both energy and processing power.
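    As a loose illustration of how a neuron can become tuned by repeated exposure alone, without any reward signal, the sketch below trains a single model neuron with Oja’s Hebbian learning rule on noisy views of a fixed “scan line.” It is a generic toy, not the authors’ bee-brain model, and the pattern and parameters are invented.
        import numpy as np

        rng = np.random.default_rng(1)

        # A fixed 1-D "scan line" standing in for the visual input repeatedly sampled
        # during flight past a pattern (toy stand-in, not real bee data).
        pattern = np.array([0., 0., 1., 1., 1., 0., 0., 0.])

        # One model neuron trained with Oja's rule (Hebbian learning + normalization).
        # There is no reward: tuning emerges purely from repeated exposure.
        w = 0.1 * rng.standard_normal(pattern.size)
        eta = 0.01
        for _ in range(5000):
            x = pattern + 0.2 * rng.standard_normal(pattern.size)  # noisy view of the scan
            y = w @ x                        # neuron response
            w += eta * y * (x - y * w)       # Oja's rule keeps the weight norm bounded

        print(np.round(w, 2))                               # aligns with the pattern (up to sign)
        print(np.round(pattern / np.linalg.norm(pattern), 2))
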
    To validate their computational model, the researchers subjected it to the same visual challenges encountered by real bees. In a pivotal experiment, the model was tasked with differentiating between a ‘plus’ sign and a ‘multiplication’ sign. The model exhibited significantly improved performance when it mimicked the real bees’ strategy of scanning only the lower half of the patterns, a behaviour observed by the research team in a previous study.

    Even with just a small network of artificial neurons, the model successfully showed how bees can recognise human faces, underscoring the strength and flexibility of their visual processing.
    Professor Lars Chittka, Professor of Sensory and Behavioural Ecology at Queen Mary University of London, added: “Scientists have been fascinated by the question of whether brain size predicts intelligence in animals. But such speculations make no sense unless one knows the neural computations that underpin a given task.
    “Here we determine the minimum number of neurons required for difficult visual discrimination tasks and find that the numbers are staggeringly small, even for complex tasks such as human face recognition. Thus insect microbrains are capable of advanced computations.”
    Professor Mikko Juusola, Professor in System Neuroscience from the University of Sheffield’s School of Biosciences and Neuroscience Institute said: “This work strengthens a growing body of evidence that animals don’t passively receive information — they actively shape it.
    “Our new model extends this principle to higher-order visual processing in bees, revealing how behaviorally driven scanning creates compressed, learnable neural codes. Together, these findings support a unified framework where perception, action and brain dynamics co-evolve to solve complex visual tasks with minimal resources — offering powerful insights for both biology and AI.”
    By bringing together findings from how insects behave, how their brains work, and what the computational models show, the study shows how studying small insect brains can uncover basic rules of intelligence. These findings not only deepen our understanding of cognition but also have significant implications for developing new technologies. More

  • Tiny quantum dots unlock the future of unbreakable encryption

    Physicists have developed a breakthrough concept in quantum encryption that makes private communication more secure over significantly longer distances, surpassing state-of-the-art technologies. For decades, experts believed such a technology upgrade required perfect optical hardware, namely, light sources that strictly emit one light particle (photon) at a time — something extremely difficult and expensive to build. But the new approach uses innovative encryption protocols applied to tiny, engineered materials called quantum dots to send encrypted information securely, even with imperfect light sources. Real-world tests show it can outperform even the best of current systems, potentially bringing quantum-safe communication closer to everyday use.
    A team of physicists has made a breakthrough that could bring secure quantum communication closer to everyday use — without needing flawless hardware.
    The research, led by PhD students Yuval Bloom and Yoad Ordan under the guidance of Professor Ronen Rapaport from the Racah Institute of Physics at the Hebrew University, in collaboration with researchers from Los Alamos National Laboratory, and published in PRX Quantum, introduces a practical new approach that significantly improves how we send quantum-encrypted information using light particles — even when using imperfect equipment.
    Cracking a 40-Year-Old Challenge in Quantum Communication
    For four decades, the holy grail of quantum key distribution (QKD) — the science of creating unbreakable encryption using quantum mechanics — has hinged on one elusive requirement: perfectly engineered single-photon sources. These are tiny light sources that can emit one particle of light (photon) at a time. But in practice, building such devices with absolute precision has proven extremely difficult and expensive.
    To work around that, the field has relied heavily on lasers, which are easier to produce but not ideal. These lasers send faint pulses of light that contain a small, but unpredictable, number of photons — a compromise that limits both security and the distance over which data can be safely transmitted, as a smart eavesdropper can “steal” the information bits that are encoded simultaneously on more than one photon.
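    The eavesdropping risk mentioned above comes down to how often a pulse contains more than one photon. The back-of-the-envelope comparison below uses a Poissonian model for a weak laser pulse and the common estimate P(2) ≈ g2 * mu^2 / 2 for a sub-Poissonian source; the mean photon number and g2 value are invented for illustration and are not the paper’s parameters.
        import math

        mu = 0.5          # mean photon number per pulse (made-up, typical-looking value)
        g2 = 0.05         # second-order correlation g2(0) of the quantum-dot source (made-up)

        # Weak coherent (laser) pulse: Poissonian photon-number statistics.
        p_multi_laser = 1 - math.exp(-mu) * (1 + mu)        # P(n >= 2)

        # Sub-Poissonian single-photon source: common estimate P(2) ~ g2 * mu^2 / 2.
        p_multi_dot = g2 * mu**2 / 2

        print(f"laser        P(n>=2) ~ {p_multi_laser:.4f}")
        print(f"quantum dot  P(n>=2) ~ {p_multi_dot:.4f}")
        print(f"reduction    ~ {p_multi_laser / p_multi_dot:.0f}x fewer multi-photon pulses")
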
    A Better Way with Imperfect Tools
    Bloom, Ordan, and their team flipped the script. Instead of waiting for perfect photon sources, they developed two new protocols that work with what we have now — sub-Poissonian photon sources based on quantum dots, which are tiny semiconductor particles that behave like artificial atoms.

    By dynamically engineering the optical behavior of these quantum dots and pairing them with nanoantennas, the team was able to tweak how the photons are emitted. This fine-tuning allowed them to propose and demonstrate two advanced encryption strategies:
      • A truncated decoy-state protocol: a new version of a widely used quantum encryption approach, tailored for imperfect single-photon sources, that weeds out potential hacking attempts exploiting multi-photon events.
      • A heralded purification protocol: a new method that dramatically improves signal security by “filtering” the excess photons in real time, ensuring that only true single-photon bits are recorded.
    In simulations and lab experiments, these techniques outperformed even the best versions of traditional laser-based QKD methods, extending the loss the link can tolerate, and with it the distance over which a secure key can be exchanged, by more than 3 decibels, a substantial leap in the field.
    A Real-World Test and a Step Toward Practical Quantum Networks
    To prove it wasn’t just theory, the team built a real-world quantum communication setup using a room-temperature quantum dot source. They ran their new reinforced version of the well-known BB84 encryption protocol — the backbone of many quantum key distribution systems — and showed that their approach was not only feasible but superior to existing technologies.
    What’s more, their approach is compatible with a wide range of quantum light sources, potentially lowering the cost and technical barriers to deploying quantum-secure communication on a large scale.
    “This is a significant step toward practical, accessible quantum encryption,” said Professor Rapaport. “It shows that we don’t need perfect hardware to get exceptional performance — we just need to be smarter about how we use what we have.”
    Co-lead author Yuval Bloom added, “We hope this work helps open the door to real-world quantum networks that are both secure and affordable. The cool thing is that we don’t have to wait; it can be implemented with what we already have in many labs worldwide.” More