More stories

  • It takes three to tangle: Long-range quantum entanglement needs three-way interaction

    A theoretical study shows that long-range entanglement can indeed survive at temperatures above absolute zero, if the correct conditions are met.
    Quantum computing has been earmarked as the next revolutionary step in computing. However, current systems are only practically stable at temperatures close to absolute zero. A new theorem from a Japanese research collaboration provides an understanding of which types of long-range quantum entanglement survive at non-zero temperatures, revealing a fundamental aspect of macroscopic quantum phenomena and guiding the way towards further understanding of quantum systems and the design of new quantum devices that are stable at room temperature.
    When things get small, right down to the scale of one-thousandth the width of a human hair, the laws of classical physics get replaced by those of quantum physics. The quantum world is weird and wonderful, and there is much about it that scientists are yet to understand. Large-scale or “macroscopic” quantum effects play a key role in extraordinary phenomena such as superconductivity, which is a potential game-changer in future energy transport, as well as for the continued development of quantum computers.
    It is possible to observe and measure “quantumness” at this scale in particular systems with the help of long-range quantum entanglement. Quantum entanglement, which Albert Einstein once famously described as “spooky action at a distance,” occurs when a group of particles cannot be described independently from each other. This means that their properties are linked: if you can fully describe one particle, you will also know everything about the particles it is entangled with.
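    The “linked outcomes” can be illustrated with a toy calculation (a sketch for intuition, not part of the study): for the two-qubit Bell state (|00⟩ + |11⟩)/√2, the joint measurement statistics show that the two particles always agree, so learning one qubit’s value fixes the other’s.

```python
import itertools
import math

# Bell state (|00> + |11>) / sqrt(2); amplitudes indexed by (qubit_a, qubit_b).
amp = {(0, 0): 1 / math.sqrt(2), (0, 1): 0.0, (1, 0): 0.0, (1, 1): 1 / math.sqrt(2)}

# The probability of each joint measurement outcome is |amplitude|^2.
probs = {outcome: a * a for outcome, a in amp.items()}

for outcome in itertools.product((0, 1), repeat=2):
    print(outcome, round(probs[outcome], 3))
# (0,0) and (1,1) each occur with probability 0.5; mismatched outcomes never occur.
```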
    Long-range entanglement is central to quantum information theory, and its further understanding could lead to a breakthrough in quantum computing technologies. However, long-range quantum entanglement is stable only under specific conditions, such as between three or more parties and at temperatures close to absolute zero (-273°C). What happens to two-party entangled systems at non-zero temperatures? To answer this question, researchers from the RIKEN Center for Advanced Intelligence Project, Tokyo, and Keio University, Yokohama, recently presented a theoretical study in Physical Review X describing long-range entanglement at temperatures above absolute zero in bipartite systems.
    “The purpose of our study was to identify a limitation on the structure of long-range entanglement at arbitrary non-zero temperatures,” explains RIKEN Hakubi Team Leader Tomotaka Kuwahara, one of the authors of the study, who performed the research while at the RIKEN Center for Advanced Intelligence Project. “We provide simple no-go theorems that show what kinds of long-range entanglement can survive at non-zero temperatures. At temperatures above absolute zero, particles in a material vibrate and move around due to thermal energy, which acts against quantum entanglement. At arbitrary non-zero temperatures, no long-range entanglement can persist between only two subsystems.”
    The researchers’ findings are consistent with previous observations that long-range entanglement survives at non-zero temperature only when three or more subsystems are involved. The results suggest this is a fundamental aspect of macroscopic quantum phenomena at room temperature, and that quantum devices will need to be engineered around multipartite entangled states.
    “This result has opened the door to a deeper understanding of quantum entanglement over large distances, so this is just the beginning,” states Keio University’s Professor Keijo Saito, the co-author of the study. “We aim to deepen our understanding of the relationship between quantum entanglement and temperature in the future. This knowledge will spark and drive the development of future quantum devices that work at room temperature, making them practical.”
    Story Source:
    Materials provided by RIKEN. Note: Content may be edited for style and length.

  • In balance: Quantum computing needs the right combination of order and disorder

    Research conducted within the Cluster of Excellence ‘Matter and Light for Quantum Computing’ (ML4Q) has analysed cutting-edge device structures of quantum computers to demonstrate that some of them are indeed operating dangerously close to a threshold of chaotic meltdown. The challenge is to walk a fine line between too much and too little disorder to safeguard device operation. The study ‘Transmon platform for quantum computing challenged by chaotic fluctuations’ has been published today in Nature Communications.
    In the race for what may become a key future technology, tech giants like IBM and Google are investing enormous resources into the development of quantum computing hardware. However, current platforms are not yet ready for practical applications. There remain multiple challenges, among them the control of device imperfections (‘disorder’).
    It’s an old stability precaution: When large groups of people cross bridges, they need to avoid marching in step to prevent the formation of resonances destabilizing the construction. Perhaps counterintuitively, the superconducting transmon qubit processor — a technologically advanced platform for quantum computing favoured by IBM, Google, and other consortia — relies on the same principle: intentionally introduced disorder blocks the formation of resonant chaotic fluctuations, thus becoming an essential part of the production of multi-qubit processors.
    To understand this seemingly paradoxical point, one should think of a transmon qubit as a kind of pendulum. Qubits interlinked to form a computing structure define a system of coupled pendulums — a system that, like classical pendulums, can easily be excited to uncontrollably large oscillations with disastrous consequences. In the quantum world, such uncontrollable oscillations lead to the destruction of quantum information; the computer becomes unusable. Intentionally introduced local ‘detunings’ of single pendulums keep such phenomena at bay.
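    The pendulum analogy can be made quantitative with the textbook two-level (Rabi) formula; this is a toy model for intuition, not the paper’s many-body analysis. The peak probability that an excitation swaps between two coupled qubits falls off sharply once their frequency detuning exceeds the coupling strength.

```python
def max_swap_probability(coupling, detuning):
    """Rabi formula (hbar = 1): peak probability that an excitation
    hops between two coupled two-level systems with the given
    coupling strength and frequency detuning."""
    return coupling**2 / (coupling**2 + (detuning / 2) ** 2)

J = 1.0  # qubit-qubit coupling strength, arbitrary units
print(max_swap_probability(J, 0.0))     # identical qubits: full resonant swap
print(max_swap_probability(J, 10 * J))  # strongly detuned qubits: swap suppressed
```

    Intentional detuning plays the role of the bridge-crossers breaking step: it pushes neighbouring qubits off resonance so excitations cannot sympathetically build up.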
    ‘The transmon chip not only tolerates but actually requires effectively random qubit-to-qubit device imperfections,’ explained Christoph Berke, final-year doctoral student in the group of Simon Trebst at the University of Cologne and first author of the paper. ‘In our study, we ask just how reliable the “stability by randomness” principle is in practice. By applying state-of-the-art diagnostics of the theory of disordered systems, we were able to find that at least some of the industrially pursued system architectures are dangerously close to instability.’
    From the point of view of fundamental quantum physics, a transmon processor is a many-body quantum system with quantized energy levels. State-of-the-art numerical tools allow one to compute these discrete levels as a function of relevant system parameters, to obtain patterns superficially resembling a tangle of cooked spaghetti. A careful analysis of such structures for realistically modelled Google and IBM chips was one out of several diagnostic tools applied in the paper to map out a stability diagram for transmon quantum computing.
    ‘When we compared the Google to the IBM chips, we found that in the latter case qubit states may be coupled to a degree that controlled gate operations may be compromised,’ said Simon Trebst, head of the Computational Condensed Matter Physics group at the University of Cologne. In order to secure controlled gate operations, one thus needs to strike a subtle balance between stabilizing qubit integrity and enabling inter-qubit coupling. In the parlance of pasta preparation, one needs to cook the quantum processor to perfection, keeping the energy states ‘al dente’ and avoiding the tangling that comes with overcooking.
    The study of disorder in transmon hardware was performed as part of the Cluster of Excellence ML4Q in a collaborative work among the research groups of Simon Trebst and Alexander Altland at the University of Cologne and the group of David DiVincenzo at RWTH Aachen University and Forschungszentrum Jülich. “This collaborative project is quite unique,” says Alexander Altland from the Institute for Theoretical Physics in Cologne. “Our complementary knowledge of transmon hardware, numerical simulation of complex many-body systems, and quantum chaos was the perfect prerequisite to understand how quantum information with disorder can be protected. It also indicates how insights obtained for small reference systems can be transferred to application-relevant design scales.”
    David DiVincenzo, founding director of the JARA-Institute for Quantum Information at RWTH Aachen University, draws the following conclusion: ‘Our study demonstrates how important it is for hardware developers to combine device modelling with state-of-the-art quantum randomness methodology and to integrate “chaos diagnostics” as a routine part of qubit processor design in the superconducting platform.’
    Story Source:
    Materials provided by University of Cologne.

  • 'Digital twins,' an aid to give individual patients the right treatment at the right time

    An international team of researchers has developed advanced computer models, or “digital twins,” of diseases, with the goal of improving diagnosis and treatment. They used one such model to identify the most important disease protein in hay fever. The study, which has just been published in the open access journal Genome Medicine, underlines the complexity of disease and the necessity of using the right treatment at the right time.
    Why is a drug effective against a certain illness in some individuals, but not in others? With common diseases, medication is ineffective in 40-70 percent of the patients. One reason for this is that diseases are seldom caused by a single “fault” that can be easily treated. Instead, in most diseases the symptoms are the result of altered interactions between thousands of genes in many different cell types. The timing is also important. Disease processes often evolve over long periods. We are often not aware of disease development until symptoms appear, and diagnosis and treatment are thus often delayed, which may contribute to insufficient medical efficacy.
    In a recent study, an international research team aimed to bridge the gap between this complexity and modern health care by constructing computational disease models of the altered gene interactions across many cell types at different time points. The researchers’ long-term goal is to develop such computational models into “digital twins” of individual patients’ diseases. Such medical digital twins might be used to tailor medication so that each patient could be treated with the right drug at the right time. Ideally, each twin could be tested against thousands of drugs in the computer before actual treatment of the patient begins.
    The researchers started by developing methods to construct digital twins of patients with hay fever. They used a technique, single-cell RNA sequencing, to determine all gene activity in each of thousands of individual immune cells — more specifically white blood cells. Since these interactions between genes and cell types may differ between different time points in the same patient, the researchers measured gene activity at different time points before and after stimulating white blood cells with pollen.
    In order to construct computer models of all the data, the researchers used network analyses. Networks can be used to describe and analyse complex systems. For example, a football team could be analysed as a network based on the passes between the players. The player that passes most to other players during the whole match may be most important in that network. Similar principles were applied to construct the computer models, or “twins,” as well as to identify the most important disease protein.
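    The football analogy can be sketched in a few lines of Python (player names and pass counts are invented for illustration): the “most important node” is simply the one involved in the most passes.

```python
from collections import Counter

# Hypothetical pass log: (sender, receiver) for each completed pass.
passes = [
    ("alice", "bob"), ("alice", "bob"), ("bob", "carol"),
    ("carol", "alice"), ("dave", "bob"), ("bob", "alice"),
]

# Weighted degree: total passes a player is involved in -- a simple
# stand-in for the network-centrality idea described above.
involvement = Counter()
for sender, receiver in passes:
    involvement[sender] += 1
    involvement[receiver] += 1

most_important, count = involvement.most_common(1)[0]
print(most_important, count)  # the player touching the most passes
```

    The disease models work analogously, with proteins as players and molecular interactions as passes; richer centrality measures exist, but the principle is the same.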
    In the current study, the researchers found that multiple proteins and signalling cascades were important in seasonal allergies, and that these varied greatly across cell types and at different stages of the disease.
    “We can see that these are extremely complicated changes that occur in different phases of a disease. The variation between different time points means that you have to treat the patient with the right medicine at the right time,” says Dr Mikael Benson, professor at Linköping University, who led the study.
    Finally, the researchers identified the most important protein in the twin model of hay fever. They show that inhibiting this protein, called PDGF-BB, in experiments with cells was more effective than using a known allergy drug directed against another protein, called IL-4.
    The study also demonstrated that the methods could potentially be applied to give the right treatment at the right time in other immunological diseases, like rheumatism or inflammatory bowel diseases. Clinical implementation will require international collaborations between universities, hospitals and companies.
    The study is based on an interdisciplinary collaboration between 15 researchers in Sweden, the US, Korea and China. The research has received financial support from the EU, NIH, the Swedish and Nordic Research Councils, and the Swedish Cancer Society.
    Story Source:
    Materials provided by Linköping University. Original written by Karin Söderlund Leifler.

  • Self-propelled, endlessly programmable artificial cilia

    For years, scientists have been attempting to engineer tiny, artificial cilia for miniature robotic systems that can perform complex motions, including bending, twisting, and reversing. Building these smaller-than-a-human-hair microstructures typically requires multi-step fabrication processes and varying stimuli to create the complex movements, limiting their wide-scale applications.
    Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a single-material, single-stimuli microstructure that can outmaneuver even living cilia. These programmable, micron-scale structures could be used for a range of applications, including soft robotics, biocompatible medical devices, and even dynamic information encryption.
    The research is published in Nature.
    “Innovations in adaptive self-regulated materials that are capable of a diverse set of programmed motions represent a very active field, which is being tackled by interdisciplinary teams of scientists and engineers,” said Joanna Aizenberg, the Amy Smith Berylson Professor of Materials Science and Professor of Chemistry & Chemical Biology at SEAS and senior author of the paper. “Advances achieved in this field may significantly impact the ways we design materials and devices for a variety of applications, including robotics, medicine and information technologies.”
    Unlike previous research, which relied mostly on complex multi-component materials to achieve programmable movement of reconfigurable structural elements, Aizenberg and her team designed a microstructure pillar made of a single material — a photoresponsive liquid crystal elastomer. Because of the way the fundamental building blocks of the liquid crystal elastomer are aligned, when light hits the microstructure, those building blocks realign and the structure changes shape.
    As this shape change occurs, two things happen. First, the spot where the light hits becomes transparent, allowing the light to penetrate further into the material, causing additional deformations. Second, as the material deforms and the shape moves, a new spot on the pillar is exposed to light, causing that area to also change shape.

  • Replacing some meat with microbial protein could help fight climate change

    “Fungi Fridays” could save a lot of trees — and take a bite out of greenhouse gas emissions. Eating one-fifth less red meat and instead munching on microbial proteins derived from fungi or algae could cut annual deforestation in half by 2050, researchers report May 5 in Nature.

    Raising cattle and other ruminants contributes methane and nitrous oxide to the atmosphere, while clearing forests for pasture lands adds carbon dioxide (SN: 4/4/22; SN: 7/13/21). So the hunt is on for environmentally friendly substitutes, such as lab-grown hamburgers and cricket farming (SN: 9/20/18; SN: 5/2/19).

    Another alternative is microbial protein, made from cells cultivated in a laboratory and nurtured with glucose. Fermented fungal spores, for example, produce a dense, doughy substance called mycoprotein, while fermented algae produce spirulina, a dietary supplement.

    Cell-cultured foods do require sugar from croplands, but studies show that mycoprotein produces fewer greenhouse gas emissions and uses less land and water than raising cattle, says Florian Humpenöder, a climate modeler at Potsdam Institute for Climate Impact Research in Germany. However, a full comparison of foods’ future environmental impacts also requires accounting for changes in population, lifestyle, dietary patterns and technology, he says.

    So Humpenöder and colleagues incorporated projected socioeconomic changes into computer simulations of land use and deforestation from 2020 through 2050. Then they simulated four scenarios, substituting microbial protein for 0 percent, 20 percent, 50 percent or 80 percent of the global red meat diet by 2050.

    A little substitution went a long way, the team found: Just 20 percent microbial protein substitution cut annual deforestation rates — and associated CO2 emissions — by 56 percent from 2020 to 2050.

    Eating more microbial proteins could be part of a portfolio of strategies to address the climate and biodiversity crises — alongside measures to protect forests and decarbonize electricity generation, Humpenöder says.

  • Scientists observe quantum speed-up in optimization problems

    A collaboration between Harvard University and scientists at QuEra Computing, MIT, the University of Innsbruck and other institutions has demonstrated a breakthrough application of neutral-atom quantum processors to solve problems of practical use.
    The study was co-led by Mikhail Lukin, the George Vasmer Leverett Professor of Physics at Harvard and co-director of the Harvard Quantum Initiative; Markus Greiner, George Vasmer Leverett Professor of Physics; and Vladan Vuletic, Lester Wolfe Professor of Physics at MIT. Titled “Quantum Optimization of Maximum Independent Set using Rydberg Atom Arrays,” it was published on May 5th, 2022, in Science.
    Previously, neutral-atom quantum processors had been proposed to efficiently encode certain hard combinatorial optimization problems. In this landmark publication, the authors not only deploy the first implementation of efficient quantum optimization on a real quantum computer, but also showcase unprecedented quantum hardware power.
    The calculations were performed on Harvard’s 289-qubit quantum processor operating in analog mode, with effective circuit depths of up to 32. Unlike in previous examples of quantum optimization, the large system size and circuit depth used in this work made it impossible to use classical simulations to pre-optimize the control parameters. A quantum-classical hybrid algorithm had to be deployed in a closed loop, with direct, automated feedback to the quantum processor.
    This combination of system size, circuit depth, and outstanding quantum control culminated in a quantum leap: problem instances were found with empirically better-than-expected performance on the quantum processor versus classical heuristics. Characterizing the difficulty of the optimization problem instances with a “hardness parameter,” the team identified cases that challenged classical computers, but that were more efficiently solved with the neutral-atom quantum processor. A super-linear quantum speed-up was found compared to a class of generic classical algorithms. QuEra’s open-source packages GenericTensorNetworks.jl and Bloqade.jl were instrumental in discovering hard instances and understanding quantum performance.
    “A deep understanding of the underlying physics of the quantum algorithm as well as the fundamental limitations of its classical counterpart allowed us to realize ways for the quantum machine to achieve a speedup,” says Madelyn Cain, Harvard graduate student and one of the lead authors. The importance of match-making between problem and quantum hardware is central to this work: “In the near future, to extract as much quantum power as possible, it is critical to identify problems that can be natively mapped to the specific quantum architecture, with little to no overhead,” said Shengtao Wang, Senior Scientist at QuEra Computing and one of the coinventors of the quantum algorithms used in this work, “and we achieved exactly that in this demonstration.”
    The “maximum independent set” problem, solved by the team, is a paradigmatic hard task in computer science and has broad applications in logistics, network design, finance, and more. The identification of classically challenging problem instances with quantum-accelerated solutions paves the path for applying quantum computing to cater to real-world industrial and social needs.
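    For intuition, a brute-force solver (a deliberately naive sketch; real instances are far too large for this) makes clear why the problem is classically hard: the search space grows exponentially with the number of nodes.

```python
from itertools import combinations

def max_independent_set(nodes, edges):
    """Brute-force maximum independent set: the largest set of nodes
    with no edge between any pair. Exponential time -- exactly why
    large instances challenge classical computers."""
    edge_set = {frozenset(e) for e in edges}
    for size in range(len(nodes), 0, -1):          # try biggest sets first
        for subset in combinations(nodes, size):
            if all(frozenset(pair) not in edge_set
                   for pair in combinations(subset, 2)):
                return set(subset)
    return set()

# A 5-node ring: at most 2 mutually non-adjacent nodes can be chosen.
nodes = [0, 1, 2, 3, 4]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(max_independent_set(nodes, edges))
```

    The neutral-atom hardware maps this structure natively: the Rydberg blockade forbids neighbouring atoms from being excited simultaneously, which is the independence constraint in physical form.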
    “These results represent the first step towards bringing useful quantum advantage to hard optimization problems relevant to multiple industries,” added Alex Keesling, CEO of QuEra Computing and co-author on the published work. “We are very happy to see quantum computing start to reach the necessary level of maturity where the hardware can inform the development of algorithms beyond what can be predicted in advance with classical compute methods. Moreover, the presence of a quantum speedup for hard problem instances is extremely encouraging. These results help us develop better algorithms and more advanced hardware to tackle some of the hardest, most relevant computational problems.”
    Story Source:
    Materials provided by Harvard University.

  • Mechanism 'splits' electron spins in magnetic material

    Holding the right material at the right angle, Cornell researchers have discovered a strategy to switch the magnetization in thin layers of a ferromagnet — a technique that could eventually lead to the development of more energy-efficient magnetic memory devices.
    The team’s paper, “Tilted Spin Current Generated by the Collinear Antiferromagnet Ruthenium Dioxide,” published May 5 in Nature Electronics. The paper’s co-lead authors are postdoctoral researcher Arnab Bose and doctoral students Nathaniel Schreiber and Rakshit Jain.
    For decades, physicists have tried to change the orientation of electron spins in magnetic materials by manipulating them with magnetic fields. But researchers including Dan Ralph, the F.R. Newman Professor of Physics in the College of Arts and Sciences and the paper’s senior author, have instead looked to using spin currents carried by electrons, which exist when electrons have spins generally oriented in one direction.
    When these spin currents interact with a thin magnetic layer, they transfer their angular momentum and generate enough torque to switch the magnetization 180 degrees. (The process of switching this magnetic orientation is how one writes information in magnetic memory devices.)
    Ralph’s group has focused on finding ways to control the direction of the spin in spin currents by generating them with antiferromagnetic materials. In antiferromagnets, every other electron spin points in the opposite direction, hence there is no net magnetization.
    “Essentially, the antiferromagnetic order can lower the symmetries of the samples enough to allow unconventional orientations of spin current to exist,” Ralph said. “The mechanism of antiferromagnets seems to give a way of actually getting fairly strong spin currents, too.”
    The team had been experimenting with the antiferromagnet ruthenium dioxide and measuring the ways its spin currents tilted the magnetization in a thin layer of a nickel-iron magnetic alloy called Permalloy, which is a soft ferromagnet. In order to map out the different components of the torque, they measured its effects at a variety of magnetic field angles.

  • Using AI to analyze large amounts of biological data

    Researchers at the University of Missouri are applying a form of artificial intelligence (AI) — previously used to analyze how National Basketball Association (NBA) players move their bodies — to now help scientists develop new drug therapies for medical treatments targeting cancers and other diseases.
    The type of AI, called a graph neural network, can help scientists speed up sifting through the large amounts of data generated by studying protein dynamics. This approach can provide new ways to identify target sites on proteins for drugs to work effectively, said Dong Xu, a Curators’ Distinguished Professor in the Department of Electrical Engineering and Computer Science at the MU College of Engineering and one of the study’s authors.
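    As a generic illustration of the message-passing idea behind graph neural networks (a minimal sketch, not the published model), each node repeatedly blends its own feature vector with the average of its neighbours’ features, so information flows along the graph’s connections:

```python
# Tiny graph: node 0 is connected to nodes 1 and 2.
neighbours = {0: [1, 2], 1: [0], 2: [0]}
features = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [0.0, 1.0]}

def message_pass(features, neighbours, keep=0.5):
    """One round of mean-aggregation message passing: each node's new
    feature vector mixes its own state with its neighbours' average."""
    updated = {}
    for node, own in features.items():
        nbrs = neighbours[node]
        mean = [sum(features[n][i] for n in nbrs) / len(nbrs)
                for i in range(len(own))]
        updated[node] = [keep * o + (1 - keep) * m for o, m in zip(own, mean)]
    return updated

print(message_pass(features, neighbours))
```

    In a trained network, learned weight matrices replace the fixed averaging, but the same propagation step is how relationships between distant parts of a structure (such as regions of a protein) are picked up.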
    “Previously, drug designers may have known about a couple places on a protein’s structure to target with their therapies,” said Xu, who is also the Paul K. and Dianne Shumaker Professor in bioinformatics. “A novel outcome of this method is that we identified a pathway between different areas of the protein structure, which could potentially allow scientists who are designing drugs to see additional possible target sites for delivering their targeted therapies. This can increase the chances that the therapy may be successful.”
    Xu said they can also simulate how proteins can change in relation to different conditions, such as the development of cancer, and then use that information to infer their relationships with other bodily functions.
    “With machine learning we can really study what are the important interactions within different areas of the protein structure,” Xu said. “Our method provides a systematic review of the data involved when studying proteins, as well as a protein’s energy state, which could help when identifying any possible mutation’s effect. This is important because protein mutations can enhance the possibility of cancers and other diseases developing in the body.”
    “Neural relational inference to learn long-range allosteric interactions in proteins from molecular dynamics simulations” was published in Nature Communications. Juexin Wang at MU, and Jingxuan Zhu and Weiwei Han at Jilin University in China, also contributed to this study. Funding was provided by the China Scholarship Council and the Overseas Cooperation Project of Jilin Province, which were used to support Jingxuan Zhu to conduct this research at MU, as well as the National Institute of General Medical Sciences of the National Institutes of Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies.
    Story Source:
    Materials provided by University of Missouri-Columbia.