More stories

  • Tiny quantum drumhead sends sound with 1-in-a-million loss—poised to rewrite tech

    When a drummer plays a drum, she sets the drumhead into vibration by hitting it. The vibration contains a signal that we can decode as music. When the drumhead stops vibrating, the signal is lost.
    Now imagine a drumhead that is ultra-thin, about 10 mm wide, and perforated with many triangular holes.
    Researchers at the Niels Bohr Institute, University of Copenhagen, in collaboration with the University of Konstanz and ETH Zurich, have managed to get vibrations to travel around this membrane almost without any loss. The loss is so small that the membrane handles signals far better than even an electronic circuit. The result is now published in the journal Nature.
    Phonons – Sound Signals or Vibrations That Spread Through a Solid Material
    The signal consists of phonons – roughly speaking, vibrations in a solid material. The atoms vibrate and push on one another, so a given signal can travel through the material. One can imagine encoding a signal and sending it through the material, and this is where signal loss comes into play.
    If the signal loses strength, or if parts of it are lost to heat or stray vibrations, it can no longer be decoded correctly.
    System Reliability is Crucial
    The signals the researchers have succeeded in sending through the membrane are almost lossless, which makes the membrane an extremely reliable platform for transmitting information.

    Loss is measured as a decrease in the amplitude of the sound wave as it moves around the membrane. When researchers direct the signal through the material and around the holes in the membrane – where the signal even changes direction – the loss is about one phonon out of a million.
    The amplitude of current fluctuations in a similar electronic circuit decreases about a hundred thousand times faster.
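    As a back-of-envelope illustration of what such a difference means in practice (the numbers below are assumptions made only for this sketch: a fractional amplitude loss of one part in a million per round trip for the membrane versus a rate a hundred thousand times higher for the circuit, per the comparison above), one can ask how many round trips a signal survives before half of its amplitude is gone:

    # Rough illustration only; the loss rates are assumptions for this sketch,
    # not figures taken from the Nature paper.
    import math

    for label, loss_per_trip in [("membrane", 1e-6), ("electronic circuit", 1e-1)]:
        # amplitude ~ exp(-loss_per_trip * trips), so the half-life in round trips is ln(2)/loss
        trips_to_half = math.log(2) / loss_per_trip
        print(f"{label}: ~{trips_to_half:,.0f} round trips until half the amplitude remains")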
    Basic Research with Perspectives
    Researchers at the Niels Bohr Institute, Assistant Professor Xiang Xi and Professor Albert Schliesser, explain that the result should not be tied to any one specific future application – but the possibilities are rich. Currently, there is a global effort to build a quantum computer, which depends on super-precise transfer of signals between its different parts.
    Another field within quantum research deals with sensors that, for example, can measure the smallest biological fluctuations in our own body – here too, signal transfer is crucial.
    But Xiang Xi and Albert Schliesser are currently most interested in exploring the possibilities even further.
    “Right now, we want to experiment with the method to see what we can do with it. For example, we want to build more complex structures and see how we can get phonons to move around them, or build structures where we get phonons to collide like cars at an intersection. This will give us a better understanding of what is ultimately possible and what new applications there are,” says Albert Schliesser. As they say: “Basic research is about producing new knowledge.”

  • AI spots deadly heart risk most doctors can’t see

    A new AI model is much better than doctors at identifying patients likely to experience cardiac arrest.
    The linchpin is the system’s ability to analyze long-underused heart imaging, alongside a full spectrum of medical records, to reveal previously hidden information about a patient’s heart health.
    The federally funded work, led by Johns Hopkins University researchers, could save many lives and also spare many people unnecessary medical interventions, including the implantation of unneeded defibrillators.
    “Currently we have patients dying in the prime of their life because they aren’t protected and others who are putting up with defibrillators for the rest of their lives with no benefit,” said senior author Natalia Trayanova, a researcher focused on using artificial intelligence in cardiology. “We have the ability to predict with very high accuracy whether a patient is at very high risk for sudden cardiac death or not.”
    The findings are published today in Nature Cardiovascular Research.
    Hypertrophic cardiomyopathy is one of the most common inherited heart diseases, affecting one in every 200 to 500 individuals worldwide, and is a leading cause of sudden cardiac death in young people and athletes.
    Many patients with hypertrophic cardiomyopathy will live normal lives, but a percentage are at significant increased risk for sudden cardiac death. It’s been nearly impossible for doctors to determine who those patients are.

    Current clinical guidelines used by doctors across the United States and Europe to identify the patients most at risk for fatal heart attacks have about a 50% chance of identifying the right patients, “not much better than throwing dice,” Trayanova says.
    The team’s model significantly outperformed clinical guidelines across all demographics.
    The new model, Multimodal AI for Ventricular Arrhythmia Risk Stratification (MAARS), predicts individual patients’ risk of sudden cardiac death by analyzing a variety of medical data and records and, for the first time, by exploring all the information contained in contrast-enhanced MRI images of the patient’s heart.
    People with hypertrophic cardiomyopathy develop fibrosis, or scarring, across their heart and it’s the scarring that elevates their risk of sudden cardiac death. While doctors haven’t been able to make sense of the raw MRI images, the AI model zeroed right in on the critical scarring patterns.
    “People have not used deep learning on those images,” Trayanova said. “We are able to extract this hidden information in the images that is not usually accounted for.”
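    The article describes MAARS only at a high level. As a purely illustrative sketch of what a multimodal fusion model of this kind looks like, the following PyTorch snippet combines an imaging branch with a clinical-records branch into a single risk score; the class name, layer sizes, and inputs are invented for illustration and are not the published MAARS architecture.

    import torch
    import torch.nn as nn

    class MultimodalRiskModel(nn.Module):
        """Hypothetical fusion of an MRI encoder and a clinical-data encoder."""
        def __init__(self, n_clinical_features: int = 32):
            super().__init__()
            # Small CNN encoder for a single-channel cardiac MRI slice
            self.image_encoder = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> (batch, 32)
            )
            # MLP encoder for tabular clinical records (history, labs, ECG features, ...)
            self.clinical_encoder = nn.Sequential(
                nn.Linear(n_clinical_features, 32), nn.ReLU(),
            )
            # Fusion head producing one risk score per patient
            self.head = nn.Linear(32 + 32, 1)

        def forward(self, mri, clinical):
            z = torch.cat([self.image_encoder(mri), self.clinical_encoder(clinical)], dim=1)
            return torch.sigmoid(self.head(z))                  # risk in [0, 1]

    model = MultimodalRiskModel()
    risk = model(torch.randn(4, 1, 128, 128), torch.randn(4, 32))   # dummy batch of 4 patients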
    The team tested the model against real patients treated with the traditional clinical guidelines at Johns Hopkins Hospital and Sanger Heart & Vascular Institute in North Carolina.

    Compared to the clinical guidelines that were accurate about half the time, the AI model was 89% accurate across all patients and, critically, 93% accurate for people 40 to 60 years old, the group of hypertrophic cardiomyopathy patients most at risk for sudden cardiac death.
    The AI model also can describe why patients are high risk so that doctors can tailor a medical plan to fit their specific needs.
    “Our study demonstrates that the AI model significantly enhances our ability to predict those at highest risk compared to our current algorithms and thus has the power to transform clinical care,” says co-author Jonathan Crispin, a Johns Hopkins cardiologist.
    In 2022, Trayanova’s team created a different multi-modal AI model that offered personalized survival assessment for patients with infarcts, predicting if and when someone would die of cardiac arrest.
    The team plans to further test the new model on more patients and expand the new algorithm to use with other types of heart diseases, including cardiac sarcoidosis and arrhythmogenic right ventricular cardiomyopathy.
    Authors include Changxin Lai, Minglang Yin, Eugene G. Kholmovski, Dan M. Popescu, Edem Binka, Stefan L. Zimmerman, and Allison G. Hays, all of Johns Hopkins; Dai-Yin Lu and M. Roselle Abraham of the Hypertrophic Cardiomyopathy Center of Excellence at University of California San Francisco; and Erica Scherer and Dermot M. Phelan of Atrium Health.

  • Climate change could separate vanilla plants and their pollinators

    Vanilla plants could have a future that’s not so sweet.

    Wild relatives of the vanilla plant — which could be essential if the original cash crop disappears — may someday live in different places than their usual pollinators, according to two climate change predictions. The result could be a major mismatch, with habitat overlap between one vanilla species and its pollinator decreasing by up to 90 percent, researchers report July 3 in Frontiers in Plant Science.

  • Scientists just simulated the “impossible” — fault-tolerant quantum code cracked at last

    Quantum computers still face a major hurdle on their path to practical use: their limited ability to correct the errors that arise during computation. To develop truly reliable quantum computers, researchers must be able to simulate quantum computations on conventional computers to verify their correctness – a vital yet extraordinarily difficult task. Now, in a world first, researchers from Chalmers University of Technology in Sweden, the University of Milan, the University of Granada, and the University of Tokyo have unveiled a method for simulating specific types of error-corrected quantum computations – a significant leap forward in the quest for robust quantum technologies.
    Quantum computers have the potential to solve complex problems that no supercomputer today can handle. In the foreseeable future, quantum technology’s computing power is expected to revolutionise fundamental ways of solving problems in medicine, energy, encryption, AI, and logistics.
    Despite these promises, the technology faces a major challenge: the need for correcting the errors arising in a quantum computation. While conventional computers also experience errors, these can be quickly and reliably corrected using well-established techniques before they can cause problems. In contrast, quantum computers are subject to far more errors, which are additionally harder to detect and correct. Quantum systems are still not fault-tolerant and therefore not yet fully reliable.
    To verify the accuracy of a quantum computation, researchers simulate – or mimic – the calculations using conventional computers. One particularly important type of quantum computation that researchers are therefore interested in simulating is one that can withstand disturbances and effectively correct errors. However, the immense complexity of quantum computations makes such simulations extremely demanding – so much so that, in some cases, even the world’s best conventional supercomputer would take the age of the universe to reproduce the result.
    Researchers from Chalmers University of Technology, the University of Milan, the University of Granada and the University of Tokyo have now become the first in the world to present a method for accurately simulating a certain type of quantum computation that is particularly suitable for error correction, but which thus far has been very difficult to simulate. The breakthrough tackles a long-standing challenge in quantum research.
    “We have discovered a way to simulate a specific type of quantum computation where previous methods have not been effective. This means that we can now simulate quantum computations with an error correction code used for fault tolerance, which is crucial for being able to build better and more robust quantum computers in the future,” says Cameron Calcluth, PhD in Applied Quantum Physics at Chalmers and first author of a study recently published in Physical Review Letters.
    Error-correcting quantum computations – demanding yet crucial
    The limited ability of quantum computers to correct errors stems from their fundamental building blocks – qubits – which have the potential for immense computational power but are also highly sensitive. The computational power of quantum computers relies on the quantum mechanical phenomenon of superposition, meaning qubits can simultaneously hold the values 1 and 0, as well as all intermediate states, in any combination. The computational capacity increases exponentially with each additional qubit, but the trade-off is their extreme susceptibility to disturbances.
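    Concretely, the joint state of an n-qubit register is a superposition over all 2^n classical bit strings (a standard textbook statement, included here for context):

    \[
      |\psi\rangle \;=\; \sum_{x \in \{0,1\}^n} c_x \, |x\rangle ,
      \qquad \sum_{x} |c_x|^2 = 1 ,
    \]

    so the number of amplitudes a conventional computer must track doubles with every added qubit – which is exactly what makes the hardware powerful and the classical simulation hard.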

    “The slightest noise from the surroundings in the form of vibrations, electromagnetic radiation, or a change in temperature can cause the qubits to miscalculate or even lose their quantum state, their coherence, thereby also losing their capacity to continue calculating,” says Calcluth.
    To address this issue, error correction codes are used to distribute information across multiple subsystems, allowing errors to be detected and corrected without destroying the quantum information. One way is to encode the quantum information of a qubit into the multiple – possibly infinite – energy levels of a vibrating quantum mechanical system. This is called a bosonic code. However, simulating quantum computations with bosonic codes is particularly challenging because of the multiple energy levels, and researchers have been unable to reliably simulate them using conventional computers – until now.
    New mathematical tool key in the researchers’ solution
    The method developed by the researchers consists of an algorithm capable of simulating quantum computations that use a type of bosonic code known as the Gottesman-Kitaev-Preskill (GKP) code. This code is commonly used in leading implementations of quantum computers.
    “The way it stores quantum information makes it easier for quantum computers to correct errors, which in turn makes them less sensitive to noise and disturbances. Due to their deeply quantum mechanical nature, GKP codes have been extremely difficult to simulate using conventional computers. But now we have finally found a unique way to do this much more effectively than with previous methods,” says Giulia Ferrini, Associate Professor of Applied Quantum Physics at Chalmers and co-author of the study.
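    For orientation, the ideal square-lattice GKP code encodes one qubit into an oscillator as a comb of evenly spaced peaks in the position quadrature q (a standard textbook form stated here for context, not an expression taken from the paper):

    \[
      |\bar{0}\rangle \;\propto\; \sum_{n \in \mathbb{Z}} \bigl|\, q = 2n\sqrt{\pi} \,\bigr\rangle ,
      \qquad
      |\bar{1}\rangle \;\propto\; \sum_{n \in \mathbb{Z}} \bigl|\, q = (2n+1)\sqrt{\pi} \,\bigr\rangle .
    \]

    Realistic, finite-energy GKP states replace these idealized delta-function peaks with finitely squeezed Gaussian peaks.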
    The researchers managed to use the code in their algorithm by creating a new mathematical tool. Thanks to the new method, researchers can now more reliably test and validate a quantum computer’s calculations.

    “This opens up entirely new ways of simulating quantum computations that we have previously been unable to test but are crucial for being able to build stable and scalable quantum computers,” says Ferrini.
    More about the research
    The article “Classical simulation of circuits with realistic odd-dimensional Gottesman-Kitaev-Preskill states” has been published in Physical Review Letters. The authors are Cameron Calcluth, Giulia Ferrini, Oliver Hahn, Juani Bermejo-Vega and Alessandro Ferraro. The researchers are active at Chalmers University of Technology, Sweden, the University of Milan, Italy, the University of Granada, Spain, and the University of Tokyo, Japan.

  • Quantum computers just beat classical ones — Exponentially and unconditionally

    Quantum computers have the potential to speed up computation, help design new medicines, break codes, and discover exotic new materials — but that’s only when they are truly functional.
    One key thing gets in the way: noise, the errors produced during computations on a quantum machine, which until recently made quantum computers less powerful than classical ones.
    Daniel Lidar, holder of the Viterbi Professorship in Engineering and Professor of Electrical and Computer Engineering at the USC Viterbi School of Engineering, has been iterating on quantum error correction and, in a new study with collaborators at USC and Johns Hopkins, has demonstrated an exponential quantum scaling advantage using two 127-qubit IBM Quantum Eagle processor-powered quantum computers accessed over the cloud. The paper, “Demonstration of Algorithmic Quantum Speedup for an Abelian Hidden Subgroup Problem,” was published in the APS flagship journal Physical Review X.
    “There have previously been demonstrations of more modest types of speedups, like a polynomial speedup,” says Lidar, who is also the cofounder of Quantum Elements, Inc. “But an exponential speedup is the most dramatic type of speedup that we expect to see from quantum computers.”
    The key milestone for quantum computing, Lidar says, has always been to demonstrate that we can execute entire algorithms with a scaling speedup relative to ordinary “classical” computers.
    He clarifies that a scaling speedup doesn’t mean that you can do things, say, 100 times faster. “Rather, it’s that as you increase a problem’s size by including more variables, the gap between the quantum and the classical performance keeps growing. And an exponential speedup means that the performance gap roughly doubles for every additional variable. Moreover, the speedup we demonstrated is unconditional.”
    What makes a speedup “unconditional,” Lidar explains, is that it doesn’t rely on any unproven assumptions. Prior speedup claims required the assumption that there is no better classical algorithm against which to benchmark the quantum algorithm. Here, the team led by Lidar used an algorithm they modified for the quantum computer to solve a variation of “Simon’s problem,” an early example of quantum algorithms that can, in theory, solve a task exponentially faster than any classical counterpart, unconditionally.

    Simon’s problem involves finding a hidden repeating pattern in a mathematical function and is considered the precursor to what’s known as Shor’s factoring algorithm, which can be used to break codes and launched the entire field of quantum computing. Simon’s problem is like a guessing game, where the players try to guess a secret number known only to the game host (the “oracle”). Once a player guesses two numbers for which the answers returned by the oracle are identical, the secret number is revealed, and that player wins. Quantum players can win this game exponentially faster than classical players.
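    To make the guessing-game picture concrete, here is a toy classical version in Python (the bit-width, oracle construction, and variable names are invented for illustration; this is not the modified algorithm the team ran on the IBM processors). A classical player generally needs a number of queries that grows exponentially with the number of bits before hitting a collision, whereas Simon’s quantum algorithm needs only about as many oracle calls as there are bits:

    import random

    n = 6                                     # number of bits in the secret
    s = random.randrange(1, 2**n)             # the oracle's hidden secret (nonzero)

    # A 2-to-1 function with f(x) == f(x ^ s): the pair {x, x ^ s} shares one answer
    _table = {}
    def oracle(x: int) -> int:
        rep = min(x, x ^ s)                   # same representative for both members of the pair
        if rep not in _table:
            _table[rep] = len(_table)
        return _table[rep]

    seen = {}                                 # answer -> input that produced it
    for queries, x in enumerate(range(2**n), start=1):
        y = oracle(x)
        if y in seen:                         # two inputs with identical answers: secret revealed
            print(f"secret s = {x ^ seen[y]:0{n}b} found after {queries} queries")
            break
        seen[y] = x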
    So, how did the team achieve their exponential speedup? Phattharaporn Singkanipa, USC doctoral researcher and first author, says, “The key was squeezing every ounce of performance from the hardware: shorter circuits, smarter pulse sequences, and statistical error mitigation.”
    The researchers achieved this in four different ways:
    First, they limited the data input by restricting how many secret numbers would be allowed (technically, by limiting the number of 1’s in the binary representation of the set of secret numbers). This resulted in fewer quantum logic operations than would be needed otherwise, which reduced the opportunity for error buildup.
    Second, they compressed the number of required quantum logic operations as much as possible using a method known as transpilation (a generic code sketch of this step appears after the four points).
    Third, and most crucially, the researchers applied a method called “dynamical decoupling,” which means applying sequences of carefully designed pulses to detach the behavior of qubits within the quantum computer from their noisy environment and keep the quantum processing on track. Dynamical decoupling had the most dramatic impact on their ability to demonstrate a quantum speedup.

    Finally, they applied “measurement error mitigation,” a method that finds and corrects certain errors that are left over after dynamical decoupling due to imperfections in measuring the qubits’ state at the end of the algorithm.
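    As a minimal, generic illustration of the second step (transpilation), the snippet below uses Qiskit’s standard transpile function on a toy three-qubit circuit; it is ordinary library usage under the assumption that Qiskit is installed, not the authors’ actual pipeline, and the circuit merely stands in for the real Simon-type circuits.

    from qiskit import QuantumCircuit, transpile

    qc = QuantumCircuit(3, 3)
    qc.h(range(3))                       # toy circuit standing in for a Simon-type circuit
    qc.cx(0, 2)
    qc.measure(range(3), range(3))

    # optimization_level=3 applies the heaviest gate-reduction passes, shrinking the
    # number of operations on which hardware errors can accumulate.
    compiled = transpile(qc, basis_gates=["rz", "sx", "x", "cx"], optimization_level=3)
    print(compiled.count_ops())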
    Says Lidar, who is also a professor of Chemistry and Physics at the USC Dornsife College of Letters, Arts and Sciences, “The quantum computing community is showing how quantum processors are beginning to outperform their classical counterparts in targeted tasks and are stepping into territory classical computing simply can’t reach. Our result shows that today’s quantum computers already lie firmly on the side of a scaling quantum advantage.”
    He adds that with this new research, “the performance separation cannot be reversed because the exponential speedup we’ve demonstrated is, for the first time, unconditional.” In other words, the quantum performance advantage is becoming increasingly difficult to dispute.
    Next steps:
    Lidar cautions that “this result doesn’t have practical applications beyond winning guessing games, and much more work remains to be done before quantum computers can be claimed to have solved a practical real-world problem.”
    This will require demonstrating speedups that don’t rely on “oracles” that know the answer in advance and making significant advances in methods for further reducing noise and decoherence in ever larger quantum computers. Nevertheless, quantum computers’ previously “on-paper promise” to provide exponential speedups has now been firmly demonstrated.
    Disclosure: USC is an IBM Quantum Innovation Center. Quantum Elements, Inc. is a startup in the IBM Quantum Network.

  • Harmful heat doesn’t always come in waves

    In recent weeks, extreme heat waves have broiled the United States, China and Europe. But scientists are warning of another hazardous form of heat: chronic heat. In places like Miami and Phoenix, temperatures can soar for months at a time without reaching heat wave levels, potentially contributing to health issues such as kidney dysfunction, sleep apnea and depression. But too little research has focused on how these impacts may compound over months of exposure, University of Miami climate and health researcher Mayra Cruz and colleagues report in the June Environmental Research Climate.

    “It’s the family that lives with conditions that are just a little bit too hot all the time and no air conditioning,” says Victoria Turner, an urban planner at UCLA who was not involved in the study. “The mother is pregnant in hot conditions, their children go to bed without air conditioning and go to schools without air conditioning, and then that’s changing their developmental physiology.”

  • AI sees what doctors miss: Fatty liver disease hidden in chest x-rays

    Fatty liver disease, caused by the accumulation of fat in the liver, is estimated to affect one in four people worldwide. If left untreated, it can lead to serious complications, such as cirrhosis and liver cancer, making it crucial to detect early and initiate treatment.
    Currently, standard tests for diagnosing fatty liver disease include ultrasounds, CTs, and MRIs, which require costly specialized equipment and facilities. In contrast, chest X-rays are performed more frequently, are relatively inexpensive, and involve low radiation exposure. Although this test is primarily used to examine the condition of the lungs and heart, it also captures part of the liver, making it possible to detect signs of fatty liver disease. However, the relationship between chest X-rays and fatty liver disease has rarely been a subject of in-depth study.
    Therefore, a research group led by Associate Professor Sawako Uchida-Kobayashi and Associate Professor Daiju Ueda at Osaka Metropolitan University’s Graduate School of Medicine developed an AI model that can detect the presence of fatty liver disease from chest X-ray images.
    In this retrospective study, a total of 6,599 chest X-ray images containing data from 4,414 patients were used to develop an AI model utilizing controlled attenuation parameter (CAP) scores. The AI model was verified to be highly accurate, with the area under the receiver operating characteristic curve (AUC) ranging from 0.82 to 0.83.
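    As a small illustration of the reported metric (not the study’s code or data), the area under the receiver operating characteristic curve can be computed with scikit-learn from true labels and predicted probabilities; an AUC of 1.0 means perfect discrimination and 0.5 means chance-level performance.

    from sklearn.metrics import roc_auc_score

    y_true  = [0, 0, 1, 1, 0, 1, 0, 1]                     # 1 = fatty liver present (made-up labels)
    y_score = [0.2, 0.6, 0.8, 0.4, 0.3, 0.9, 0.1, 0.7]     # model-predicted probabilities (made up)

    print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")   # prints 0.94 for this toy data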
    “The development of diagnostic methods using easily obtainable and inexpensive chest X-rays has the potential to improve fatty liver detection. We hope it can be put into practical use in the future,” stated Professor Uchida-Kobayashi.

  • ‘Magic’ states empower error-resistant quantum computing
