More stories

  • Development of an ensemble model to anticipate short-term COVID-19 hospital demand

    For the past two years, the COVID-19 pandemic has exerted pressure on the hospital system, with consequences for patients’ care pathways. To support hospital planning strategies, it is important to anticipate COVID-19 health care demand and to continue to improve predictive models.
    In this study published in the Proceedings of the National Academy of Sciences, scientists from the Mathematical Modeling of Infectious Diseases Unit at the Institut Pasteur identified the most relevant predictive variables for anticipating hospital demand and proposed using an ensemble model based on the average of the predictions of several individual models.
    The scientists began by evaluating the performance of 12 individual models and 19 predictive variables, or “predictors,” such as epidemiological data (for example the number of cases) and meteorological or mobility data (for example the use of public transport). The scientists showed that the models incorporating these early predictive variables performed better. The average prediction error was halved for 14-day-ahead predictions. “These early variables detect changes in epidemic dynamics more quickly,” explains Simon Cauchemez, Head of the Mathematical Modeling of Infectious Diseases Unit at the Institut Pasteur and last author of the study. “The models that performed best used at least one epidemiological predictor and one mobility predictor,” he continues. The addition of a meteorological variable also improved forecasts but with a more limited impact.
    The scientists then built an ensemble model, taking the average of several individual models, and tested the model retrospectively using epidemiological data from March to July 2021. This approach is already used in climate forecasting. “Our study shows that it is preferable to develop an ensemble model, as this reduces the risk of the predicted trajectory being overly influenced by the assumptions of a specific model,” explains Juliette Paireau, a research engineer in the Mathematical Modeling of Infectious Diseases Unit at the Institut Pasteur and joint first author of the study.
    This ensemble model has been used to monitor the epidemic in France since January 15, 2021.
    The study demonstrates an approach that can be used to better anticipate hospital demand for COVID-19 patients by combining different prediction models based on early predictors.
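The ensemble idea described above — averaging the forecasts of several individual models rather than trusting any single one — can be sketched in a few lines. This is a minimal illustration with made-up numbers, not the study's actual models or data:

```python
def ensemble_forecast(model_predictions):
    """Average per-day forecasts across models.

    model_predictions: dict mapping model name -> list of daily forecasts.
    Returns the element-wise mean across models for each day.
    """
    series = list(model_predictions.values())
    n_models = len(series)
    horizon = len(series[0])
    return [sum(s[day] for s in series) / n_models for day in range(horizon)]

# Hypothetical 3-day-ahead hospital-admission forecasts from three models,
# named loosely after the predictor combinations discussed in the story:
predictions = {
    "epi_only": [120, 130, 145],
    "epi_mobility": [110, 125, 140],
    "epi_mobility_weather": [115, 120, 135],
}
print(ensemble_forecast(predictions))  # [115.0, 125.0, 140.0]
```

Averaging in this way smooths out the idiosyncratic assumptions of any one model, which is the risk-reduction benefit the study describes.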
    The full results of the study can be found on the Modeling page: https://modelisation-covid19.pasteur.fr/realtime-analysis/hospital/
    Story Source:
    Materials provided by Institut Pasteur. Note: Content may be edited for style and length.

  • New computational tool to interpret clinical significance of cancer mutations

    Researchers at Children’s Hospital of Philadelphia (CHOP) have developed a new tool to help researchers interpret the clinical significance of somatic mutations in cancer. The tool, known as CancerVar, incorporates machine learning frameworks to go beyond merely identifying somatic cancer mutations and interpret the potential significance of those mutations in terms of cancer diagnosis, prognosis, and targetability. A paper describing CancerVar was published today in Science Advances.
    “CancerVar will not replace human interpretation in a clinical setting, but it will significantly reduce the manual work of human reviewers in classifying variants identified through sequencing and drafting clinical reports in the practice of precision oncology,” said Kai Wang, PhD, Professor of Pathology and Laboratory Medicine at CHOP and senior author of the paper. “CancerVar documents and harmonizes various types of clinical evidence including drug information, publications, and pathways for somatic mutations in detail. By providing standardized, reproducible, and precise output for interpreting somatic variants, CancerVar can help researchers and clinicians prioritize mutations of concern.”
    “Somatic variant classification and interpretation are the most time-consuming steps of tumor genomic profiling,” said Marilyn M. Li, MD, Professor of Pathology and Laboratory Medicine, Director of Cancer Genomic Diagnostics and co-author of the paper. “CancerVar provides a powerful tool that automates these two critical steps. Clinical implementation of this tool will significantly improve test turnaround time and performance consistency, making the tests more impactful and affordable to all pediatric cancer patients.”
    The growth of next-generation sequencing (NGS) and precision medicine has led to the identification of millions of somatic cancer variants. To better understand whether those mutations are related to or impact the clinical course of disease, researchers have established several databases that catalogue these variants. However, those databases did not provide standardized interpretations of somatic variants, so in 2017, the Association for Molecular Pathology (AMP), American Society of Clinical Oncology (ASCO), and College of American Pathologists (CAP) jointly proposed standards and guidelines for interpreting, reporting, and scoring somatic variants.
    Yet even with these guidelines, the AMP/ASCO/CAP classification scheme did not specify how to implement these standards, so different knowledge bases were providing different results. To solve this problem, the CHOP researchers, including CHOP data scientist and co-senior author of the paper Yunyun Zhou, PhD, developed CancerVar, an improved somatic variant interpretation tool implemented as a command-line program in Python with an accompanying web server. Through its user-friendly web server, CancerVar provides clinical evidence for 13 million somatic cancer variants from 1,911 cancer census genes, mined from existing studies and databases.
    In addition to including millions of somatic mutations, whether of known significance or not, the tool uses deep learning to improve clinical interpretation of those mutations. Users can query clinical interpretations for variants using information such as the chromosome position or protein change and interactively fine-tune how specific scoring features are weighted, based on prior knowledge or additional user-specified criteria. The CancerVar web server generates automated descriptive interpretations, such as whether the mutation is relevant for diagnosis or prognosis or to an ongoing clinical trial.
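The idea of fine-tuning how scoring features are weighted can be illustrated with a toy evidence-scoring sketch. The criteria names, weights, and tier thresholds below are hypothetical stand-ins, not CancerVar's actual scheme:

```python
# Illustrative weights for evidence criteria a variant might satisfy.
DEFAULT_WEIGHTS = {
    "therapeutic": 2.0,      # variant targeted by an approved drug
    "diagnostic": 1.0,       # variant supports a specific diagnosis
    "prognostic": 1.0,       # variant associated with clinical outcome
    "population_rare": 0.5,  # variant absent from population databases
}

def score_variant(evidence, weights=DEFAULT_WEIGHTS):
    """Sum the weights of the evidence criteria a variant satisfies."""
    return sum(weights[c] for c in evidence if c in weights)

def classify(score):
    """Map a score to an interpretation tier (thresholds are illustrative)."""
    if score >= 3.0:
        return "strong clinical significance"
    if score >= 1.5:
        return "potential clinical significance"
    return "uncertain significance"

s = score_variant({"therapeutic", "population_rare"})
print(s, classify(s))  # 2.5 potential clinical significance
```

Letting users adjust the weight dictionary, as CancerVar's web server does for its own criteria, makes the interpretation transparent and reproducible rather than a black box.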
    “This tool shows how we can use computational tools to automate human generated guidelines, and also how machine learning can guide decision making,” Wang said. “Future research should explore applying this framework to other areas of pathology as well.”
    The research was supported by the National Institutes of Health (NIH)/National Library of Medicine (NLM)/ National Human Genome Research Institute (NHGRI) (grant number LM012895), NIH/National Institute of General Medical Sciences (NIGMS) (grant number GM120609 and GM132713), CHOP Pathology diagnostic innovation fund, and the CHOP Research Institute.
    Story Source:
    Materials provided by Children’s Hospital of Philadelphia. Note: Content may be edited for style and length.

  • Powerful family of two-dimensional materials discovered

    A team from the Tulane University School of Science and Engineering has developed a new family of two-dimensional materials that researchers say has promising applications, including in advanced electronics and high-capacity batteries.
    Led by Michael Naguib, an assistant professor in the Department of Physics and Engineering Physics, the study has been published in the journal Advanced Materials.
    “Two-dimensional materials are nanomaterials with thickness on the nanometer scale (a nanometer is one millionth of a millimeter) and lateral dimensions thousands of times the thickness,” Naguib said. “Their flatness offers a unique set of properties compared to bulk materials.”
    The name of the new family of 2D materials is transition metal carbo-chalcogenides, or TMCC. It combines the characteristics of two families of 2D materials — transition metal carbides and transition metal dichalcogenides.
    Naguib, the Ken & Ruth Arnold Early Career Professor in Science and Engineering, said the latter is a large family of materials that has been explored extensively and found to be very promising, especially for electrochemical energy storage and conversion. But he said one of the challenges in utilizing them is their low electrical conductivity and limited stability.
    On the other hand, he said, transition metal carbides are excellent electrical conductors with much higher conductivity. Merging the two families into one is anticipated to have great potential for many applications such as batteries and supercapacitors, catalysis, sensors and electronics.
    “Instead of stacking the two different materials like Lego building blocks with many problematic interfaces, here we develop a new 2D material that has the combination of both compositions without any interface,” he said.
    “We used an electrochemical-assisted exfoliation process by inserting lithium ions in-between the layers of bulk transition metal carbo-chalcogenides followed by agitation in water,” said Ahmad Majed, the first author of the article and a doctoral candidate in Materials Physics and Engineering at Tulane working in Naguib’s group.
    Unlike other exotic nanomaterials, Majed said, the process of making these 2D TMCC nanomaterials is simple and scalable.
    In addition to Naguib and Majed, the team includes Jiang Wei, an associate professor in physics and engineering physics; Jianwei Sun, an assistant professor in physics and engineering physics; PhD candidates Kaitlyn Prenger, Manish Kothakonda and Fei Wang at Tulane; and Dr Eric N. Tseng and Professor Per O.A. Persson of Linköping University in Sweden.
    This study was supported by Naguib’s National Science Foundation CAREER Award, which he received less than a year ago.
    Story Source:
    Materials provided by Tulane University. Note: Content may be edited for style and length.

  • It takes three to tangle: Long-range quantum entanglement needs three-way interaction

    A theoretical study shows that long-range entanglement can indeed survive at temperatures above absolute zero, if the correct conditions are met.
    Quantum computing has been earmarked as the next revolutionary step in computing. However, current systems are only practically stable at temperatures close to absolute zero. A new theorem from a Japanese research collaboration provides an understanding of what types of long-range quantum entanglement survive at non-zero temperatures, revealing a fundamental aspect of macroscopic quantum phenomena and guiding the way towards further understanding of quantum systems and designing new room-temperature stable quantum devices.
    When things get small, right down to the scale of one-thousandth the width of a human hair, the laws of classical physics get replaced by those of quantum physics. The quantum world is weird and wonderful, and there is much about it that scientists are yet to understand. Large-scale or “macroscopic” quantum effects play a key role in extraordinary phenomena such as superconductivity, which is a potential game-changer in future energy transport, as well as for the continued development of quantum computers.
    It is possible to observe and measure “quantumness” at this scale in particular systems with the help of long-range quantum entanglement. Quantum entanglement, which Albert Einstein once famously described as “spooky action at a distance,” occurs when a group of particles cannot be described independently from each other. This means that their properties are linked: if you can fully describe one particle, you will also know everything about the particles it is entangled with.
    Long-range entanglement is central to quantum information theory, and its further understanding could lead to a breakthrough in quantum computing technologies. However, long-range quantum entanglement is stable only under specific conditions, such as between three or more parties and at temperatures close to absolute zero (-273°C). What happens to two-party entangled systems at non-zero temperatures? To answer this question, researchers from the RIKEN Center for Advanced Intelligence Project, Tokyo, and Keio University, Yokohama, recently presented a theoretical study in Physical Review X describing long-range entanglement at temperatures above absolute zero in bipartite systems.
    “The purpose of our study was to identify a limitation on the structure of long-range entanglement at arbitrary non-zero temperatures,” explains RIKEN Hakubi Team Leader Tomotaka Kuwahara, one of the authors of the study, who performed the research while at the RIKEN Center for Advanced Intelligence Project. “We provide simple no-go theorems that show what kinds of long-range entanglement can survive at non-zero temperatures. At temperatures above absolute zero, particles in a material vibrate and move around due to thermal energy, which acts against quantum entanglement. At arbitrary non-zero temperatures, no long-range entanglement can persist between only two subsystems.”
    The researchers’ findings are consistent with previous observations that long-range entanglement survives at a non-zero temperature only when three or more subsystems are involved. The results suggest this is a fundamental aspect of macroscopic quantum phenomena at room temperature, and that quantum devices need to be engineered to have multipartite entangled states.
    “This result has opened the door to a deeper understanding of quantum entanglement over large distances, so this is just the beginning,” states Keio University’s Professor Keiji Saito, co-author of the study. “We aim to deepen our understanding of the relationship between quantum entanglement and temperature in the future. This knowledge will spark and drive the development of future quantum devices that work at room temperatures, making them practical.”
    Story Source:
    Materials provided by RIKEN. Note: Content may be edited for style and length.

  • In balance: Quantum computing needs the right combination of order and disorder

    Research conducted within the Cluster of Excellence ‘Matter and Light for Quantum Computing’ (ML4Q) has analysed cutting-edge device structures of quantum computers to demonstrate that some of them are indeed operating dangerously close to a threshold of chaotic meltdown. The challenge is to walk a fine line between too much and too little disorder to safeguard device operation. The study ‘Transmon platform for quantum computing challenged by chaotic fluctuations’ has been published today in Nature Communications.
    In the race for what may become a key future technology, tech giants like IBM and Google are investing enormous resources into the development of quantum computing hardware. However, current platforms are not yet ready for practical applications. There remain multiple challenges, among them the control of device imperfections (‘disorder’).
    It’s an old stability precaution: When large groups of people cross bridges, they need to avoid marching in step to prevent the formation of resonances that could destabilize the structure. Perhaps counterintuitively, the superconducting transmon qubit processor — a technologically advanced platform for quantum computing favoured by IBM, Google, and other consortia — relies on the same principle: intentionally introduced disorder blocks the formation of resonant chaotic fluctuations, thus becoming an essential part of the production of multi-qubit processors.
    To understand this seemingly paradoxical point, one should think of a transmon qubit as a kind of pendulum. Qubits interlinked to form a computing structure define a system of coupled pendulums — a system that, like classical pendulums, can easily be excited to uncontrollably large oscillations with disastrous consequences. In the quantum world, such uncontrollable oscillations lead to the destruction of quantum information; the computer becomes unusable. Intentionally introduced local ‘detunings’ of single pendulums keep such phenomena at bay.
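The effect of detuning can be illustrated with the textbook two-level (coupled-pendulum) approximation: on resonance, an excitation swaps completely between two coupled qubits, while a frequency offset suppresses the exchange. This is a generic physics sketch, not the analysis from the paper:

```python
def max_transfer(coupling, detuning):
    """Peak probability that an excitation hops from one qubit to its
    coupled neighbour, per the standard two-level Rabi formula:
    P_max = g^2 / (g^2 + (delta/2)^2)."""
    return coupling**2 / (coupling**2 + (detuning / 2) ** 2)

# Resonant qubits (zero detuning): the excitation transfers fully,
# which is the uncontrolled exchange the processor must avoid.
print(max_transfer(1.0, 0.0))   # 1.0

# Strongly detuned qubits: transfer is suppressed to a few percent,
# which is how intentional 'detunings' keep chaotic exchange at bay.
print(max_transfer(1.0, 10.0))  # ~0.0385
```

The design tension the study maps out is visible even here: zero detuning invites chaos, while excessive detuning would also suppress the *wanted* inter-qubit coupling needed for gate operations.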
    ‘The transmon chip not only tolerates but actually requires effectively random qubit-to-qubit device imperfections,’ explained Christoph Berke, final-year doctoral student in the group of Simon Trebst at the University of Cologne and first author of the paper. ‘In our study, we ask just how reliable the “stability by randomness” principle is in practice. By applying state-of-the-art diagnostics of the theory of disordered systems, we were able to find that at least some of the industrially pursued system architectures are dangerously close to instability.’
    From the point of view of fundamental quantum physics, a transmon processor is a many-body quantum system with quantized energy levels. State-of-the-art numerical tools allow one to compute these discrete levels as a function of relevant system parameters, to obtain patterns superficially resembling a tangle of cooked spaghetti. A careful analysis of such structures for realistically modelled Google and IBM chips was one out of several diagnostic tools applied in the paper to map out a stability diagram for transmon quantum computing.
    ‘When we compared the Google to the IBM chips, we found that in the latter case qubit states may be coupled to a degree that controlled gate operations may be compromised,’ said Simon Trebst, head of the Computational Condensed Matter Physics group at the University of Cologne. In order to secure controlled gate operations, one thus needs to strike a subtle balance between stabilizing qubit integrity and enabling inter-qubit coupling. In the parlance of pasta preparation, one needs to cook the quantum processor to perfection, keeping the energy states ‘al dente’ and avoiding the tangling that comes from overcooking.
    The study of disorder in transmon hardware was performed as part of the Cluster of Excellence ML4Q in a collaborative work among the research groups of Simon Trebst and Alexander Altland at the University of Cologne and the group of David DiVincenzo at RWTH Aachen University and Forschungszentrum Jülich. “This collaborative project is quite unique,” says Alexander Altland from the Institute for Theoretical Physics in Cologne. “Our complementary knowledge of transmon hardware, numerical simulation of complex many-body systems, and quantum chaos was the perfect prerequisite to understand how quantum information with disorder can be protected. It also indicates how insights obtained for small reference systems can be transferred to application-relevant design scales.”
    David DiVincenzo, founding director of the JARA-Institute for Quantum Information at RWTH Aachen University, draws the following conclusion: ‘Our study demonstrates how important it is for hardware developers to combine device modelling with state-of-the-art quantum randomness methodology and to integrate “chaos diagnostics” as a routine part of qubit processor design in the superconducting platform.’
    Story Source:
    Materials provided by University of Cologne. Note: Content may be edited for style and length.

  • 'Digital twins,' an aid to give individual patients the right treatment at the right time

    An international team of researchers has developed advanced computer models, or “digital twins,” of diseases, with the goal of improving diagnosis and treatment. They used one such model to identify the most important disease protein in hay fever. The study, which has just been published in the open access journal Genome Medicine, underlines the complexity of disease and the necessity of using the right treatment at the right time.
    Why is a drug effective against a certain illness in some individuals, but not in others? With common diseases, medication is ineffective in 40-70 percent of the patients. One reason for this is that diseases are seldom caused by a single “fault” that can be easily treated. Instead, in most diseases the symptoms are the result of altered interactions between thousands of genes in many different cell types. The timing is also important. Disease processes often evolve over long periods. We are often not aware of disease development until symptoms appear, and diagnosis and treatment are thus often delayed, which may contribute to insufficient medical efficacy.
    In a recent study, an international research team aimed to bridge the gap between this complexity and modern health care by constructing computational disease models of the altered gene interactions across many cell types at different time points. The researchers’ long-term goal is to develop such computational models into “digital twins” of individual patients’ diseases. Such medical digital twins might be used to tailor medication so that each patient could be treated with the right drug at the right time. Ideally, each twin could be matched with and treated with thousands of drugs in the computer, before actual treatment on the patient begins.
    The researchers started by developing methods to construct digital twins of patients with hay fever. They used a technique, single-cell RNA sequencing, to determine all gene activity in each of thousands of individual immune cells — more specifically white blood cells. Since these interactions between genes and cell types may differ between different time points in the same patient, the researchers measured gene activity at different time points before and after stimulating white blood cells with pollen.
    In order to construct computer models of all the data, the researchers used network analyses. Networks can be used to describe and analyse complex systems. For example, a football team could be analysed as a network based on the passes between the players. The player that passes most to other players during the whole match may be most important in that network. Similar principles were applied to construct the computer models, or “twins,” as well as to identify the most important disease protein.
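The football analogy above can be sketched as a tiny network analysis: players are nodes, passes are weighted edges, and the "most important" player is the one most involved in passing. The player names and pass counts are invented for illustration:

```python
from collections import defaultdict

# Hypothetical pass records: (from_player, to_player, number_of_passes).
passes = [
    ("A", "B", 12), ("B", "C", 9), ("A", "C", 4),
    ("C", "D", 7), ("B", "D", 2),
]

# Score each player by total pass involvement (made + received),
# a simple weighted-degree centrality.
involvement = defaultdict(int)
for src, dst, n in passes:
    involvement[src] += n  # passes made
    involvement[dst] += n  # passes received

most_central = max(involvement, key=involvement.get)
print(most_central, involvement[most_central])  # B 23
```

The study applies the same principle at far larger scale: genes and proteins are the nodes, measured interactions are the edges, and the highest-ranked node is a candidate for the most important disease protein.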
    In the current study, the researchers found that multiple proteins and signalling cascades were important in seasonal allergies, and that these varied greatly across cell types and at different stages of the disease.
    “We can see that these are extremely complicated changes that occur in different phases of a disease. The variation between different times points means that you have to treat the patient with the right medicine at the right time,” says Dr Mikael Benson, professor at Linköping University, who led the study.
    Finally, the researchers identified the most important protein in the twin model of hay fever. They show that inhibiting this protein, called PDGF-BB, in experiments with cells was more effective than using a known allergy drug directed against another protein, called IL-4.
    The study also demonstrated that the methods could potentially be applied to give the right treatment at the right time in other immunological diseases, like rheumatism or inflammatory bowel diseases. Clinical implementation will require international collaborations between universities, hospitals and companies.
    The study is based on an interdisciplinary collaboration between 15 researchers in Sweden, the US, Korea and China. The research has received financial support from the EU, NIH, the Swedish and Nordic Research Councils, and the Swedish Cancer Society.
    Story Source:
    Materials provided by Linköping University. Original written by Karin Söderlund Leifler. Note: Content may be edited for style and length.

  • Self-propelled, endlessly programmable artificial cilia

    For years, scientists have been attempting to engineer tiny, artificial cilia for miniature robotic systems that can perform complex motions, including bending, twisting, and reversing. Building these smaller-than-a-human-hair microstructures typically requires multi-step fabrication processes and varying stimuli to create the complex movements, limiting their wide-scale applications.
    Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a single-material, single-stimuli microstructure that can outmaneuver even living cilia. These programmable, micron-scale structures could be used for a range of applications, including soft robotics, biocompatible medical devices, and even dynamic information encryption.
    The research is published in Nature.
    “Innovations in adaptive self-regulated materials that are capable of a diverse set of programmed motions represent a very active field, which is being tackled by interdisciplinary teams of scientists and engineers,” said Joanna Aizenberg, the Amy Smith Berylson Professor of Materials Science and Professor of Chemistry & Chemical Biology at SEAS and senior author of the paper. “Advances achieved in this field may significantly impact the ways we design materials and devices for a variety of applications, including robotics, medicine and information technologies.”
    Unlike previous research, which relied mostly on complex multi-component materials to achieve programmable movement of reconfigurable structural elements, Aizenberg and her team designed a microstructure pillar made of a single material — a photoresponsive liquid crystal elastomer. Because of the way the fundamental building blocks of the liquid crystal elastomer are aligned, when light hits the microstructure, those building blocks realign and the structure changes shape.
    As this shape change occurs, two things happen. First, the spot where the light hits becomes transparent, allowing the light to penetrate further into the material, causing additional deformations. Second, as the material deforms and the shape moves, a new spot on the pillar is exposed to light, causing that area to also change shape.

  • Scientists observe quantum speed-up in optimization problems

    A collaboration between Harvard University and scientists at QuEra Computing, MIT, the University of Innsbruck and other institutions has demonstrated a breakthrough application of neutral-atom quantum processors to solve problems of practical use.
    The study was co-led by Mikhail Lukin, the George Vasmer Leverett Professor of Physics at Harvard and co-director of the Harvard Quantum Initiative; Markus Greiner, George Vasmer Leverett Professor of Physics; and Vladan Vuletic, Lester Wolfe Professor of Physics at MIT. The paper, titled “Quantum Optimization of Maximum Independent Set using Rydberg Atom Arrays,” was published on May 5, 2022, in Science.
    Previously, neutral-atom quantum processors had been proposed to efficiently encode certain hard combinatorial optimization problems. In this landmark publication, the authors not only deploy the first implementation of efficient quantum optimization on a real quantum computer, but also showcase unprecedented quantum hardware power.
    The calculations were performed on Harvard’s quantum processor of 289 qubits operating in the analog mode, with effective circuit depths up to 32. Unlike in previous examples of quantum optimization, the large system size and circuit depth used in this work made it impossible to use classical simulations to pre-optimize the control parameters. A quantum-classical hybrid algorithm had to be deployed in a closed loop, with direct, automated feedback to the quantum processor.
    This combination of system size, circuit depth, and outstanding quantum control culminated in a quantum leap: problem instances were found with empirically better-than-expected performance on the quantum processor versus classical heuristics. Characterizing the difficulty of the optimization problem instances with a “hardness parameter,” the team identified cases that challenged classical computers, but that were more efficiently solved with the neutral-atom quantum processor. A super-linear quantum speed-up was found compared to a class of generic classical algorithms. QuEra’s open-source packages GenericTensorNetworks.jl and Bloqade.jl were instrumental in discovering hard instances and understanding quantum performance.
    “A deep understanding of the underlying physics of the quantum algorithm as well as the fundamental limitations of its classical counterpart allowed us to realize ways for the quantum machine to achieve a speedup,” says Madelyn Cain, Harvard graduate student and one of the lead authors. The importance of match-making between problem and quantum hardware is central to this work: “In the near future, to extract as much quantum power as possible, it is critical to identify problems that can be natively mapped to the specific quantum architecture, with little to no overhead,” said Shengtao Wang, Senior Scientist at QuEra Computing and one of the coinventors of the quantum algorithms used in this work, “and we achieved exactly that in this demonstration.”
    The “maximum independent set” problem, solved by the team, is a paradigmatic hard task in computer science and has broad applications in logistics, network design, finance, and more. The identification of classically challenging problem instances with quantum-accelerated solutions paves the path for applying quantum computing to cater to real-world industrial and social needs.
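To make the problem concrete: a maximum independent set is the largest set of vertices in a graph such that no two chosen vertices share an edge. The brute-force sketch below solves a toy 5-vertex ring; it is purely illustrative and bears no relation to the quantum or classical algorithms used in the study, whose instances are vastly larger and where exhaustive search becomes intractable:

```python
from itertools import combinations

def max_independent_set(vertices, edges):
    """Return a largest vertex set with no internal edge (exponential time)."""
    edge_set = {frozenset(e) for e in edges}
    for size in range(len(vertices), 0, -1):  # try the largest sets first
        for subset in combinations(vertices, size):
            if all(frozenset(pair) not in edge_set
                   for pair in combinations(subset, 2)):
                return set(subset)
    return set()

# A 5-vertex ring: no two adjacent vertices may both be chosen.
ring = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(max_independent_set(range(5), ring))  # {0, 2}
```

The exponential blow-up of this exhaustive search is exactly why hard instances of the problem are attractive targets for quantum speed-ups.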
    “These results represent the first step towards bringing useful quantum advantage to hard optimization problems relevant to multiple industries,” added Alex Keesling, CEO of QuEra Computing and co-author on the published work. “We are very happy to see quantum computing start to reach the necessary level of maturity where the hardware can inform the development of algorithms beyond what can be predicted in advance with classical compute methods. Moreover, the presence of a quantum speedup for hard problem instances is extremely encouraging. These results help us develop better algorithms and more advanced hardware to tackle some of the hardest, most relevant computational problems.”
    Story Source:
    Materials provided by Harvard University. Note: Content may be edited for style and length.