More stories

  • Researchers find a way to check that quantum computers return accurate answers

    Quantum computers are advancing at a rapid pace and are already starting to push the limits of the world’s largest supercomputers. Yet, these devices are extremely sensitive to external influences and thus prone to errors which can change the result of the computation. This is particularly challenging for quantum computations that are beyond the reach of our trusted classical computers, where we can no longer independently verify the results through simulation. “In order to take full advantage of future quantum computers for critical calculations we need a way to ensure the output is correct, even if we cannot perform the calculation in question by other means,” says Chiara Greganti from the University of Vienna.
    Let the quantum computers check each other
    To address this challenge, the team developed and implemented a new cross-check procedure that allows the results of a calculation performed on one device to be verified through a related but fundamentally different calculation on another device. “We ask different quantum computers to perform different random-looking computations,” explains Martin Ringbauer from the University of Innsbruck. “What the quantum computers don’t know is that there is a hidden connection between the computations they are doing.” Using an alternative model of quantum computing that is built on graph structures, the team is able to generate many different computations from a common source. “While the results may appear random and the computations are different, there are certain outputs that must agree if the devices are working correctly.”
    A simple and efficient technique
    The team implemented their method on 5 current quantum computers using 4 distinct hardware technologies: superconducting circuits, trapped ions, photonics, and nuclear magnetic resonance. This shows that the method works on current hardware without any special requirements. The team also demonstrated that the technique could be used to check a single device against itself. Since the two computations are so different, the two results will only agree if they are also correct. Another key advantage of the new approach is that the researchers do not have to look at the full result of the computation, which can be very time consuming. “It is enough to check how often the different devices agree for the cases where they should, which can be done even for very large quantum computers,” says Tommaso Demarie from Entropica Labs in Singapore. With more and more quantum computers becoming available, this technique may be key to making sure they are doing what is advertised.
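    As a rough illustration of the statistical idea described above, counting how often two devices agree on the outputs that should match, the following Python sketch compares two simulated noisy devices. It is a toy only: the hidden link between settings, the error rates, and the function names are invented for illustration and are not the graph-based protocol used in the study.

```python
import random

def run_device(setting, error_rate):
    """Toy 'device': returns the ideal check bit for a setting,
    flipped with some probability to mimic hardware noise."""
    ideal = hash(("hidden-link", setting)) % 2   # stands in for the hidden connection
    return ideal ^ (random.random() < error_rate)

def agreement_rate(settings, err_a, err_b):
    """Fraction of check settings on which two noisy devices agree."""
    matches = sum(run_device(s, err_a) == run_device(s, err_b) for s in settings)
    return matches / len(settings)

check_settings = range(10_000)   # settings whose ideal outputs should coincide
print(agreement_rate(check_settings, 0.02, 0.05))  # healthy devices: agreement stays high
print(agreement_rate(check_settings, 0.40, 0.40))  # faulty devices: agreement drops
```

    In the real protocol, the agreement statistic plays the same role: it can be estimated from a modest number of runs without reading out the full result of the computation.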
    Academia and industry joining forces to make quantum computers trustworthy
    The research aiming to make quantum computers trustworthy is a joint effort of university researchers and quantum computing industry experts from multiple companies. “This close collaboration of academia and industry is what makes this paper unique from a sociological perspective,” shares Joe Fitzsimons from Horizon Quantum Computing in Singapore. “While there’s a progressive shift with some researchers moving to companies, they keep contributing to the common effort making quantum computing reliable and useful.”
    Story Source:
    Materials provided by University of Vienna. Note: Content may be edited for style and length.

  • Researchers use artificial intelligence to predict which COVID-19 patients will need a ventilator to breathe

    Researchers at Case Western Reserve University have developed an online tool to help medical staff quickly determine which COVID-19 patients will need help breathing with a ventilator.
    The tool, developed through analysis of CT scans from nearly 900 COVID-19 patients diagnosed in 2020, was able to predict ventilator need with 84% accuracy.
    “That could be important for physicians as they plan how to care for a patient — and, of course, for the patient and their family to know,” said Anant Madabhushi, the Donnell Institute Professor of Biomedical Engineering at Case Western Reserve and head of the Center for Computational Imaging and Personalized Diagnostics (CCIPD). “It could also be important for hospitals as they determine how many ventilators they’ll need.”
    Next, Madabhushi said he hopes to use those results to try out the computational tool in real time at University Hospitals and Louis Stokes Cleveland VA Medical Center with COVID-19 patients.
    If successful, he said medical staff at the two hospitals could upload a digitized image of the chest scan to a cloud-based application, where the AI at Case Western Reserve would analyze it and predict whether that patient would likely need a ventilator.
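    The article does not detail the model itself, but the general workflow of such a tool can be sketched as follows: extract numerical features from each chest CT, train a classifier on patients with known outcomes, and evaluate its accuracy on held-out cases. The Python sketch below uses random placeholder data and a generic scikit-learn classifier; the features, model choice, and numbers are assumptions for illustration, not the CCIPD tool.

```python
# Hypothetical sketch of the workflow, not the actual CWRU/CCIPD model:
# CT-derived features -> classifier -> predicted ventilator need.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_patients = 900                                  # roughly the size of the study cohort
X = rng.normal(size=(n_patients, 20))             # placeholder CT features (e.g. opacity, texture)
y = rng.integers(0, 2, size=n_patients)           # 1 = patient went on to need a ventilator

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```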
    Dire need for ventilators
    Among the more common symptoms of severe COVID-19 cases is the need for patients to be placed on ventilators to ensure they will be able to continue to take in enough oxygen as they breathe.

  • Optimal lifting of COVID-19 restrictions would follow pace of vaccination, study suggests

    A new analysis suggests that, in order to boost freedoms and protect against overwhelming new waves of COVID-19, the pace at which restrictions to reduce spread are lifted must be directly tied to the pace of vaccination. Simon Bauer, Viola Priesemann, and colleagues of the Max Planck Institute for Dynamics and Self-Organization, Germany, present these findings in the open-access journal PLOS Computational Biology.
    More than a year after the COVID-19 pandemic began, vaccination programs now hold promise to ease many burdens caused by the disease — including necessary restrictions that have had negative social and economic consequences. Much research has focused on vaccine allocation and prioritization, and optimal ways to control spread. However, how to execute a smooth transition from an unprotected population to eventual population immunity has remained an open question.
    To address that question, Bauer and colleagues applied mathematical modeling to epidemiological and vaccination data from Germany, France, the U.K., and other European countries. Specifically, they quantified the pace at which restrictions could be lifted during vaccine rollout in order to mitigate the risk of rebound COVID-19 waves that overwhelm intensive care units.
    After considering various plausible scenarios, the researchers concluded that further severe waves can only be avoided if restrictions are lifted no faster than the pace dictated by vaccination progress, and that there is basically no gain in freedom if one eases restrictions too quickly. The findings suggest that, even after 80 percent of the adult population has been vaccinated, novel, more infectious variants could trigger a new wave and overwhelm intensive care units if all restrictions are lifted.
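    A minimal toy compartment model makes the study's qualitative point concrete. In the Python sketch below, the contact rate either rises in step with vaccination coverage or is released immediately; all parameter values are invented for illustration, and the actual model in the paper is considerably more detailed.

```python
# Toy SIR-with-vaccination model: illustrative only, not the published model.
def simulate(days=300, beta_min=0.12, beta_max=0.35, gamma=0.1,
             vax_rate=0.004, couple_to_vaccination=True):
    S, I, R, V = 0.99, 0.01, 0.0, 0.0            # population fractions
    peak = 0.0
    for _ in range(days):
        lift = V if couple_to_vaccination else 1.0   # how far restrictions are relaxed
        beta = beta_min + (beta_max - beta_min) * min(lift, 1.0)
        new_inf = beta * S * I
        new_rec = gamma * I
        new_vax = min(vax_rate, S)               # vaccinate susceptibles at a fixed daily pace
        S += -new_inf - new_vax
        I += new_inf - new_rec
        R += new_rec
        V += new_vax
        peak = max(peak, I)
    return peak

print("peak infections, lifting tied to vaccination:", simulate(couple_to_vaccination=True))
print("peak infections, lifting immediately:        ", simulate(couple_to_vaccination=False))
```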
    “In such an event, restrictions would quickly have to be reinstated, thus quickly vanishing the mirage of freedom,” Priesemann says. “Furthermore, an early lift would have high morbidity and mortality costs. Meanwhile, relaxing restrictions at the pace of vaccination shows almost the same progress in ‘freedom’ while maintaining low incidence.”
    The researchers say their findings suggest that, despite public pressure, policymakers should not rush relaxation of restrictions, and a high vaccination rate — especially among high-risk populations — is necessary. Further research will be needed to design optimal scenarios from a global perspective.
    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  • Surprise result for solid state physicists hints at unusual electron behavior

    While studying the behavior of electrons in iron-based superconducting materials, researchers at the University of Tokyo observed a strange signal relating to the way electrons are arranged. The signal implies a new arrangement of electrons the researchers call a nematicity wave, and they hope to collaborate with theoretical physicists to better understand it. The nematicity wave could help researchers understand the way electrons interact with each other in superconductors.
    A long-standing dream of solid state physicists is to fully understand the phenomenon of superconductivity — essentially electronic conduction without the resistance that creates heat and drains power. It would usher in a whole new world of incredibly efficient or powerful devices and is already being used on Japan’s experimental magnetic levitation bullet train. But there is much to explore in this complex topic, and it often surprises researchers with unexpected results and observations.
    Professor Shik Shin from the Institute for Solid State Physics at the University of Tokyo and his team study the way electrons behave in iron-based superconducting materials, or IBSCs. These materials show a lot of promise, as they could work at higher temperatures than some other superconducting materials, which is an important practical consideration. They also use less exotic material components, so they can be easier and cheaper to work with. To activate a sample’s superconducting ability, the material needs to be cooled down to several hundreds of degrees below zero. And interesting things happen during this cooling process.
    “As IBSCs cool down to a certain level, they express a state we call electronic nematicity,” said Shin. “This is where the crystal lattice of the material and the electrons within it appear to be arranged differently depending on the angle you look at them, otherwise known as anisotropy. We expect the way electrons are arranged to be tightly coupled to the way the surrounding crystal lattice is arranged. But our recent observation shows something very different and actually quite surprising.”
    Shin and his team used a special technique developed by their group called laser-PEEM (photoemission electron microscopy) to visualize their IBSC sample on the microscopic scale. They expected to see a familiar pattern that repeats every few nanometers (billionths of a meter). And sure enough the crystal lattice did show this pattern. But to their surprise, the team found that the pattern of electrons was repeating every few hundred nanometers instead.
    This disparity between the electron nematicity wave and the crystalline structure of the IBSC was unexpected, so its implications are still under investigation. But the result could open the door to theoretical and experimental explorations into something fundamental to the phenomenon of superconductivity, and that is the way that electrons form pairs at low temperatures. Knowledge of this process could be crucial to the development of high-temperature superconductivity. So if nematicity waves are related, it is important to know how.
    “Next, I hope we can work with theoretical physicists to further our understanding of nematicity waves,” said Shin. “We also wish to use laser-PEEM to study other related materials such as metal oxides like copper oxide. It may not always be obvious where the applications lie, but working on problems of fundamental physics really fascinates me.”
    Story Source:
    Materials provided by University of Tokyo. Note: Content may be edited for style and length.

  • Putting a new theory of many-particle quantum systems to the test

    New experiments using trapped one-dimensional gases — atoms cooled to the coldest temperatures in the universe and confined so that they can only move in a line — fit with the predictions of the recently developed theory of “generalized hydrodynamics.” Quantum mechanics is necessary to describe the novel properties of these gases. Achieving a better understanding of how such systems with many particles evolve in time is a frontier of quantum physics. The result could greatly simplify the study of quantum systems that have been excited out of equilibrium. Besides its fundamental importance, it could eventually inform the development of quantum-based technologies, which include quantum computers and simulators, quantum communication, and quantum sensors. A paper describing the experiments by a team led by Penn State physicists appears September 2, 2021 in the journal Science.
    Even within classical physics, where the additional complexities of quantum mechanics can be ignored, it is impossible to simulate the motion of all the atoms in a moving fluid. To approximate these systems of particles, physicists use hydrodynamics descriptions.
    “The basic idea behind hydrodynamics is to forget about the atoms and consider the fluid as a continuum,” said Marcos Rigol, professor of physics at Penn State and one of the leaders of the research team. “To simulate the fluid, one ends up writing coupled equations that result from imposing a few constraints, such as the conservation of mass and energy. These are the same types of equations solved, for example, to simulate how air flows when you open windows to improve ventilation in a room.”
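    For reference, the textbook conservation laws of the kind Rigol describes take the following form: a continuity equation for mass and an Euler equation for momentum (an energy equation is handled in the same spirit). These are the generic equations of classical hydrodynamics, not the specific equations of the new paper.

```latex
% Generic hydrodynamic conservation laws (density rho, velocity field v, pressure p)
\partial_t \rho + \nabla \cdot (\rho\,\mathbf{v}) = 0
\qquad \text{(conservation of mass)}

\partial_t \mathbf{v} + (\mathbf{v}\cdot\nabla)\,\mathbf{v} = -\frac{1}{\rho}\,\nabla p
\qquad \text{(momentum balance, Euler equation)}
```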
    Matter becomes more complicated if quantum mechanics is involved, as is the case when one wants to simulate quantum many-body systems that are out of equilibrium.
    “Quantum many-body systems — which are composed of many interacting particles, such as atoms — are at the heart of atomic, nuclear, and particle physics,” said David Weiss, Distinguished Professor of Physics at Penn State and one of the leaders of the research team. “It used to be that except in extreme limits you couldn’t do a calculation to describe out-of-equilibrium quantum many-body systems. That recently changed.”
    The change was motivated by the development of a theoretical framework known as generalized hydrodynamics.
    “The problem with those quantum many-body systems in one dimension is that they have so many constraints on their motion that regular hydrodynamics descriptions cannot be used,” said Rigol. “Generalized hydrodynamics was developed to keep track of all those constraints.”
    Until now, generalized hydrodynamics had been experimentally tested only under conditions where the strength of interactions among particles was weak.
    “We set out to test the theory further, by looking at the dynamics of one-dimensional gases with a wide range of interaction strengths,” said Weiss. “The experiments are extremely well controlled, so the results can be precisely compared to the predictions of this theory.”
    The research team uses one-dimensional gases of interacting atoms that are initially confined in a very shallow trap in equilibrium. They then very suddenly increase the depth of the trap by 100 times, which forces the particles to collapse into the center of the trap, causing their collective properties to change. Throughout the collapse, the team precisely measures their properties, which they can then compare to the predictions of generalized hydrodynamics.
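    To get a feel for the quench itself, the short classical sketch below evolves a cloud of non-interacting particles in a harmonic trap that is suddenly made a hundred times stiffer (a tenfold higher frequency for a harmonic potential), so the cloud collapses toward the center and then oscillates. This is only a classical toy for intuition; it contains none of the interactions or quantum effects that generalized hydrodynamics is needed to describe.

```python
# Classical toy for intuition only (no interactions, no quantum mechanics):
# particles equilibrated in a shallow harmonic trap, then the trap is suddenly
# made 100x stiffer (10x higher frequency) and the cloud width is tracked.
import numpy as np

rng = np.random.default_rng(1)
n, dt, steps = 2000, 1e-3, 5000
omega_before, omega_after = 1.0, 10.0

x = rng.normal(scale=1.0 / omega_before, size=n)   # thermal equilibrium in the shallow trap
v = rng.normal(scale=1.0, size=n)

widths = []
for _ in range(steps):
    v += -omega_after**2 * x * dt                   # quenched, stiffer trap acts from t = 0
    x += v * dt
    widths.append(x.std())

print("width before quench:", 1.0 / omega_before)
print("smallest width after quench:", min(widths))  # cloud collapses, then oscillates
```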
    “Our measurements matched the prediction of theory across dozens of trap oscillations,” said Weiss. “There currently aren’t other ways to study out-of-equilibrium quantum systems for long periods of time with reasonable accuracy, especially with a lot of particles. Generalized hydrodynamics allow us to do this for some systems like the one we tested, but how generally applicable it is still needs to be determined.”
    Story Source:
    Materials provided by Penn State. Original written by Sam Sholtis. Note: Content may be edited for style and length.

  • New ‘vortex beams’ of atoms and molecules are the first of their kind

    Like soft serve ice cream, beams of atoms and molecules now come with a swirl.

    Scientists already knew how to dish up spiraling beams of light or electrons, known as vortex beams (SN: 1/14/11). Now, the first vortex beams of atoms and molecules are on the menu, researchers report in the Sept. 3 Science.

    Vortex beams made of light or electrons have shown promise for making special types of microscope images and for transmitting information using quantum physics (SN: 8/5/15). But vortex beams of larger particles such as atoms or molecules are so new that the possible applications aren’t yet clear, says physicist Sonja Franke-Arnold of the University of Glasgow in Scotland, who was not involved with the research. “It’s maybe too early to really know what we can do with it.”

    In quantum physics, particles are described by a wave function, a wavelike pattern that allows scientists to calculate the probability of finding a particle in a particular place (SN: 6/8/11). But vortex beams’ waves don’t slosh up and down like ripples on water. Instead, the beams’ particles have wave functions that move in a corkscrewing motion as a beam travels through space. That means the beam carries a rotational oomph known as orbital angular momentum. “This is something really very strange, very nonintuitive,” says physicist Edvardas Narevicius of the Weizmann Institute of Science in Rehovot, Israel.
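    In standard notation, such a corkscrew wave function can be written as follows; this is the textbook form of a vortex state, not an equation taken from the new paper.

```latex
% Textbook vortex wave function in cylindrical coordinates (r, phi, z);
% the integer l is the winding number of the corkscrew phase.
\psi(r,\varphi,z) = f(r)\, e^{i \ell \varphi}\, e^{i k z},
\qquad \hat{L}_z\, \psi = \ell \hbar\, \psi
```

    Each particle in such a beam carries an orbital angular momentum of ℓħ about the beam axis, and for ℓ ≠ 0 the amplitude f(r) must vanish on the axis, which is why vortex beams show up as doughnut-shaped rings on a detector.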

    Narevicius and colleagues created the new beams by passing helium atoms through a grid of specially shaped slit patterns, each just 600 nanometers wide. The team detected a hallmark of vortex beams: a row of doughnut-shaped rings imprinted on a detector by the atoms, in which each doughnut corresponds to a beam with a different orbital angular momentum.

    Another set of doughnuts revealed the presence of vortex beams of helium excimers, molecules created when a helium atom in an excited, or energized, state pairs up with another helium atom.

    A pattern of rings reveals the presence of vortex beams of atoms and molecules. Each doughnut shape corresponds to a beam of helium atoms with a different angular momentum. Two hard-to-see circles from helium molecules sit in between the center dot and the first two doughnuts left and right of the center. (Image: A. Luski et al/Science 2021)

    Next, scientists might investigate what happens when vortex beams of molecules or atoms collide with light, electrons or other atoms or molecules. Such collisions are well-understood for normal particle beams, but not for those with orbital angular momentum. Similar vortex beams made with protons might also serve as a method for probing the subatomic particle’s mysterious innards (SN: 4/18/17).

    In physics, “most important things are achieved when we are revisiting known phenomena with a fresh perspective,” says physicist Ivan Madan of EPFL, the Swiss Federal Institute of Technology in Lausanne, who was not involved with the research. “And, for sure, this experiment allows us to do that.”

  • Astronomers may have seen a star gulp down a black hole and explode

    For the first time, astronomers have captured solid evidence of a rare double cosmic cannibalism — a star swallowing a compact object such as a black hole or neutron star. In turn, that object gobbled the star’s core, causing it to explode and leave behind only a black hole.

    The first hints of the gruesome event, described in the Sept. 3 Science, came from the Very Large Array (VLA), a radio telescope consisting of 27 enormous dishes in the New Mexican desert near Socorro. During the observatory’s scans of the night sky in 2017, a burst of radio energy as bright as the brightest exploding star — or supernova — as seen from Earth appeared in a dwarf star–forming galaxy approximately 500 million light-years away.

    “We thought, ‘Whoa, this is interesting,’” says Dillon Dong, an astronomer at Caltech.

    He and his colleagues made follow-up observations of the galaxy using the VLA and one of the telescopes at the W.M. Keck Observatory in Hawaii, which sees in the same optical light as our eyes. The Keck telescope caught a luminous outflow of material spewing in all directions at 3.2 million kilometers per hour from a central location, suggesting that an energetic explosion had occurred there in the past.

    The team then found an extremely bright X-ray source in archival data from the Monitor of All Sky X-ray Image (MAXI) telescope, a Japanese instrument that sits on the International Space Station. This X-ray burst was in the same place as the radio one but had been observed back in 2014.  

    Piecing the data together, Dong and his colleagues think this is what happened: Long ago, a binary pair of stars were born orbiting each other; one died in a spectacular supernova and became either a neutron star or a black hole. As gravity brought the two objects closer together, the dead star actually entered the outer layers of its larger stellar sibling.

    The compact object spiraled inside the still-living star for hundreds of years, eventually making its way down to and then eating its partner’s core. During this time, the larger star shed huge amounts of gas and dust, forming a shell of material around the duo.

    In the living star’s center, gravitational forces and complex magnetic interactions from the dead star’s munching launched enormous jets of energy — picked up as an X-ray flash in 2014 — and caused the larger star to explode. Debris from the detonation smashed with colossal speed into the surrounding shell of material, generating the optical and radio light.

    While theorists have previously envisioned such a scenario, dubbed a merger-triggered core collapse supernova, this appears to represent the first direct observation of this phenomenon, Dong says.

    “They’ve done some pretty good detective work using these observations,” says Adam Burrows, an astrophysicist at Princeton University who was not involved in the new study. He says the findings should help constrain the timing of a process called common envelope evolution, in which one star becomes immersed inside another. Such stages in stars’ lives are relatively short-lived in cosmic time and difficult to both observe and simulate. Most of the time, the engulfing partner dies before its core is consumed, leading to two compact objects like white dwarfs, neutron stars or black holes orbiting one another.

    The final stages of these systems are exactly what observatories like the Advanced Laser Interferometer Gravitational-Wave Observatory, or LIGO, detect when capturing spacetime’s ripples, Dong says (SN: 8/4/21). Now that astronomers know to look for these multiple lines of evidence, he expects them to find more examples of this strange phenomenon.

  • Scientists create a labor-saving automated method for studying electronic health records

    In an article published in the journal Patterns, scientists at the Icahn School of Medicine at Mount Sinai described the creation of a new, automated, artificial intelligence-based algorithm that can learn to read patient data from electronic health records. In a side-by-side comparison, they showed that their method, called Phe2vec (FEE-to-vek), accurately identified patients with certain diseases as well as the traditional, “gold-standard” method, which requires much more manual labor to develop and perform.
    “There continues to be an explosion in the amount and types of data electronically stored in a patient’s medical record. Disentangling this complex web of data can be highly burdensome, thus slowing advancements in clinical research,” said Benjamin S. Glicksberg, PhD, Assistant Professor of Genetics and Genomic Sciences, a member of the Hasso Plattner Institute for Digital Health at Mount Sinai (HPIMS), and a senior author of the study. “In this study, we created a new method for mining data from electronic health records with machine learning that is faster and less labor intensive than the industry standard. We hope that this will be a valuable tool that will facilitate further, and less biased, research in clinical informatics.”
    The study was led by Jessica K. De Freitas, a graduate student in Dr. Glicksberg’s lab.
    Currently, scientists rely on a set of established computer programs, or algorithms, to mine medical records for new information. The development and storage of these algorithms are managed by a system called the Phenotype Knowledgebase (PheKB). Although the system is highly effective at correctly identifying a patient’s diagnosis, the process of developing an algorithm can be very time-consuming and inflexible. To study a disease, researchers first have to comb through reams of medical records looking for pieces of data, such as certain lab tests or prescriptions, that are uniquely associated with the disease. They then program the algorithm that guides the computer to search for patients who have those disease-specific pieces of data, which constitute a “phenotype.” In turn, the list of patients identified by the computer needs to be manually double-checked by researchers. Each time researchers want to study a new disease, they have to restart the process from scratch.
    In this study, the researchers tried a different approach — one in which the computer learns, on its own, how to spot disease phenotypes and thus save researchers time and effort. This new, Phe2vec method was based on studies the team had already conducted.
    “Previously, we showed that unsupervised machine learning could be a highly efficient and effective strategy for mining electronic health records,” said Riccardo Miotto, PhD, a former Assistant Professor at the HPIMS and a senior author of the study. “The potential advantage of our approach is that it learns representations of diseases from the data itself. Therefore, the machine does much of the work experts would normally do to define the combination of data elements from health records that best describes a particular disease.”
    Essentially, a computer was programmed to scour through millions of electronic health records and learn how to find connections between data and diseases. This programming relied on “embedding” algorithms that had been previously developed by other researchers, such as linguists, to study word networks in various languages. One of the algorithms, called word2vec, was particularly effective. Then, the computer was programmed to use what it learned to identify the diagnoses of nearly 2 million patients whose data was stored in the Mount Sinai Health System.
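    The Patterns paper describes the full Phe2vec pipeline; as a rough sketch of the general embedding idea only, the Python snippet below treats each patient's sequence of medical codes as a "sentence", learns code embeddings with word2vec (via gensim), and scores patients against a disease vector built from a few seed codes. The code names, seed choices, and scoring are illustrative assumptions, not the published method.

```python
# Rough sketch of the embedding idea (not the published Phe2vec pipeline).
import numpy as np
from gensim.models import Word2Vec

# Toy patient records: chronological lists of diagnosis/lab/drug codes (invented).
patients = {
    "patient_1": ["E11.9", "HBA1C_HIGH", "METFORMIN", "I10"],
    "patient_2": ["I10", "LISINOPRIL", "E78.5"],
    "patient_3": ["E11.9", "METFORMIN", "E11.9", "HBA1C_HIGH"],
}

# Learn a vector for every code from the co-occurrence of codes in records.
model = Word2Vec(sentences=list(patients.values()),
                 vector_size=32, window=5, min_count=1, epochs=50, seed=0)

def mean_vector(codes):
    return np.mean([model.wv[c] for c in codes if c in model.wv], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Represent a disease by the average vector of a few seed codes,
# then rank patients by how close their record is to that vector.
diabetes_vec = mean_vector(["E11.9", "HBA1C_HIGH"])
for pid, codes in patients.items():
    print(pid, round(cosine(mean_vector(codes), diabetes_vec), 3))
```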
    Finally, the researchers compared the effectiveness of the new and old systems. For nine out of ten diseases tested, they found that the new Phe2vec system was as effective as, or performed slightly better than, the gold-standard phenotyping process at correctly identifying diagnoses from electronic health records. A few examples of the diseases included dementia, multiple sclerosis, and sickle cell anemia.
    “Overall our results are encouraging and suggest that Phe2vec is a promising technique for large-scale phenotyping of diseases in electronic health record data,” Dr. Glicksberg said. “With further testing and refinement, we hope that it could be used to automate many of the initial steps of clinical informatics research, thus allowing scientists to focus their efforts on downstream analyses like predictive modeling.”
    This study was supported by the Hasso Plattner Foundation, the Alzheimer’s Drug Discovery Foundation, and a courtesy graphics processing unit donation from the NVIDIA Corporation.