More stories

  • Researchers use artificial intelligence to predict which COVID-19 patients will need a ventilator to breathe

    Researchers at Case Western Reserve University have developed an online tool to help medical staff quickly determine which COVID-19 patients will need help breathing with a ventilator.
    The tool, developed through analysis of CT scans from nearly 900 COVID-19 patients diagnosed in 2020, was able to predict ventilator need with 84% accuracy.
    “That could be important for physicians as they plan how to care for a patient — and, of course, for the patient and their family to know,” said Anant Madabhushi, the Donnell Institute Professor of Biomedical Engineering at Case Western Reserve and head of the Center for Computational Imaging and Personalized Diagnostics (CCIPD). “It could also be important for hospitals as they determine how many ventilators they’ll need.”
    Next, Madabhushi said he hopes to use those results to try out the computational tool in real time at University Hospitals and Louis Stokes Cleveland VA Medical Center with COVID-19 patients.
    If successful, he said medical staff at the two hospitals could upload a digitized image of the chest scan to a cloud-based application, where the AI at Case Western Reserve would analyze it and predict whether that patient would likely need a ventilator.
    Dire need for ventilators
    Among the more common features of severe COVID-19 is the need to place patients on ventilators to ensure they can continue to take in enough oxygen as they breathe.
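    The article does not disclose the model behind the tool, so the sketch below is only a generic illustration of the recipe it describes — derive numeric features from imaging, then train a binary classifier on known outcomes. The feature values and names here are synthetic stand-ins, not the CCIPD method:

```python
import math
import random

def train_logistic(X, y, lr=0.1, epochs=500):
    """Train a logistic-regression classifier with plain stochastic gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-max(min(z, 35.0), -35.0)))  # clamped sigmoid
            g = p - yi                                             # log-loss gradient
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0

# Synthetic stand-ins for CT-derived features (e.g. lesion extent, texture score).
random.seed(0)
X = ([[random.gauss(1.0, 0.5), random.gauss(1.0, 0.5)] for _ in range(50)]      # needed a ventilator
     + [[random.gauss(-1.0, 0.5), random.gauss(-1.0, 0.5)] for _ in range(50)])  # did not
y = [1] * 50 + [0] * 50
w, b = train_logistic(X, y)
accuracy = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(y)
```

    On cleanly separated synthetic data like this, training accuracy is high; the study's 84% figure reflects the much harder real-patient setting.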

  • Optimal lifting of COVID-19 restrictions would follow pace of vaccination, study suggests

    A new analysis suggests that, in order to boost freedoms and protect against overwhelming new waves of COVID-19, the pace at which restrictions to reduce spread are lifted must be directly tied to the pace of vaccination. Simon Bauer, Viola Priesemann, and colleagues of the Max Planck Institute for Dynamics and Self-Organization, Germany, present these findings in the open-access journal PLOS Computational Biology.
    More than a year after the COVID-19 pandemic began, vaccination programs now hold promise to ease many burdens caused by the disease — including necessary restrictions that have had negative social and economic consequences. Much research has focused on vaccine allocation and prioritization, and optimal ways to control spread. However, how to execute a smooth transition from a largely unprotected population to eventual population immunity has remained an open question.
    To address that question, Bauer and colleagues applied mathematical modeling to epidemiological and vaccination data from Germany, France, the U.K., and other European countries. Specifically, they quantified the pace at which restrictions could be lifted during vaccine rollout in order to mitigate the risk of rebound COVID-19 waves that overwhelm intensive care units.
    After considering various plausible scenarios, the researchers concluded that further severe waves can only be avoided if restrictions are lifted no faster than the pace dictated by vaccination progress, and that there is essentially no gain in freedom from easing restrictions too quickly. The findings suggest that, even after 80 percent of the adult population has been vaccinated, novel, more infectious variants could trigger a new wave and overwhelm intensive care units if all restrictions are lifted.
    “In such an event, restrictions would quickly have to be reinstated, thus quickly vanishing the mirage of freedom,” Priesemann says. “Furthermore, an early lift would have high morbidity and mortality costs. Meanwhile, relaxing restrictions at the pace of vaccination shows almost the same progress in ‘freedom’ while maintaining low incidence.”
    The researchers say their findings suggest that, despite public pressure, policymakers should not rush relaxation of restrictions, and a high vaccination rate — especially among high-risk populations — is necessary. Further research will be needed to design optimal scenarios from a global perspective.
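    The published model is far more detailed than can be shown here, but the core mechanism — coupling how fast contact restrictions are eased to the vaccinated fraction — can be illustrated with a toy SIR-type model. All parameter values below are illustrative, not taken from the paper:

```python
def peak_infections(days, beta_max, gamma, vax_rate, lift):
    """Euler-stepped SIR model with vaccination.

    `lift(v)` maps the vaccinated fraction v to the share of normal
    contact rates currently allowed (1.0 = no restrictions).
    """
    s, i, v = 0.99, 0.01, 0.0   # susceptible, infectious, vaccinated fractions
    peak = i
    for _ in range(days):
        beta = beta_max * lift(v)       # effective transmission rate today
        new_inf = beta * s * i
        new_rec = gamma * i
        new_vax = min(vax_rate, s)
        s -= new_inf + new_vax
        i += new_inf - new_rec
        v += new_vax
        peak = max(peak, i)
    return peak

# Lifting everything at once vs. easing restrictions in step with vaccination:
fast = peak_infections(300, 0.4, 0.1, 0.003, lambda v: 1.0)
paced = peak_infections(300, 0.4, 0.1, 0.003, lambda v: 0.3 + 0.7 * v)
```

    In this toy setting the paced strategy yields a markedly smaller epidemic peak, which is the qualitative point of the study.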
    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  • Surprise result for solid state physicists hints at unusual electron behavior

    While studying the behavior of electrons in iron-based superconducting materials, researchers at the University of Tokyo observed a strange signal relating to the way electrons are arranged. The signal implies a new arrangement of electrons the researchers call a nematicity wave, and they hope to collaborate with theoretical physicists to better understand it. The nematicity wave could help researchers understand the way electrons interact with each other in superconductors.
    A long-standing dream of solid state physicists is to fully understand the phenomenon of superconductivity — essentially electronic conduction without the resistance that creates heat and drains power. It would usher in a whole new world of incredibly efficient or powerful devices and is already being used on Japan’s experimental magnetic levitation bullet train. But there is much to explore in this complex topic, and it often surprises researchers with unexpected results and observations.
    Professor Shik Shin from the Institute for Solid State Physics at the University of Tokyo and his team study the way electrons behave in iron-based superconducting materials, or IBSCs. These materials show a lot of promise, as they could work at higher temperatures than some other superconducting materials, which is an important practical concern. They also use less exotic material components, so they can be easier and cheaper to work with. To activate a sample’s superconducting ability, the material needs to be cooled down to several hundreds of degrees below zero. And interesting things happen during this cooling process.
    “As IBSCs cool down to a certain level, they express a state we call electronic nematicity,” said Shin. “This is where the crystal lattice of the material and the electrons within it appear to be arranged differently depending on the angle you look at them, otherwise known as anisotropy. We expect the way electrons are arranged to be tightly coupled to the way the surrounding crystal lattice is arranged. But our recent observation shows something very different and actually quite surprising.”
    Shin and his team used a special technique developed by their group called laser-PEEM (photoemission electron microscopy) to visualize their IBSC sample on the microscopic scale. They expected to see a familiar pattern that repeats every few nanometers (billionths of a meter). And sure enough the crystal lattice did show this pattern. But to their surprise, the team found that the pattern of electrons was repeating every few hundred nanometers instead.
    This disparity between the electron nematicity wave and the crystalline structure of the IBSC was unexpected, so its implications are still under investigation. But the result could open the door to theoretical and experimental explorations into something fundamental to the phenomenon of superconductivity, and that is the way that electrons form pairs at low temperatures. Knowledge of this process could be crucial to the development of high-temperature superconductivity. So if nematicity waves are related, it is important to know how.
    “Next, I hope we can work with theoretical physicists to further our understanding of nematicity waves,” said Shin. “We also wish to use laser-PEEM to study other related materials such as metal oxides like copper oxide. It may not always be obvious where the applications lie, but working on problems of fundamental physics really fascinates me.”
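    Laser-PEEM produces real-space images and the actual analysis is far more involved, but the core observation — one sample showing two very different repeat lengths, a few nanometers for the lattice versus hundreds for the electrons — can be mimicked with a toy Fourier analysis of two periodic signals. The length scales below are illustrative only:

```python
import math

def dominant_period(signal):
    """Return the period (in samples) of the strongest nonzero-frequency
    component, found with a naive discrete Fourier transform."""
    n = len(signal)
    best_k, best_mag = 1, -1.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(signal))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return n / best_k

n = 400  # positions along the sample, in arbitrary "nanometer" steps
lattice_signal = [math.cos(2 * math.pi * t / 4) for t in range(n)]    # lattice repeats every few nm
nematic_signal = [math.cos(2 * math.pi * t / 100) for t in range(n)]  # electron pattern: ~hundreds of nm
```

    Applied to the two signals, the transform recovers a short period for the lattice and a roughly 25-times-longer one for the electronic modulation — the kind of mismatch the team found surprising.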
    Story Source:
    Materials provided by University of Tokyo. Note: Content may be edited for style and length.

  • Putting a new theory of many-particle quantum systems to the test

    New experiments using trapped one-dimensional gases — atoms cooled to the coldest temperatures in the universe and confined so that they can only move in a line — fit with the predictions of the recently developed theory of “generalized hydrodynamics.” Quantum mechanics is necessary to describe the novel properties of these gases. Achieving a better understanding of how such systems with many particles evolve in time is a frontier of quantum physics. The result could greatly simplify the study of quantum systems that have been excited out of equilibrium. Besides its fundamental importance, it could eventually inform the development of quantum-based technologies, which include quantum computers and simulators, quantum communication, and quantum sensors. A paper describing the experiments by a team led by Penn State physicists appears September 2, 2021 in the journal Science.
    Even within classical physics, where the additional complexities of quantum mechanics can be ignored, it is impossible to simulate the motion of all the atoms in a moving fluid. To approximate these systems of particles, physicists use hydrodynamic descriptions.
    “The basic idea behind hydrodynamics is to forget about the atoms and consider the fluid as a continuum,” said Marcos Rigol, professor of physics at Penn State and one of the leaders of the research team. “To simulate the fluid, one ends up writing coupled equations that result from imposing a few constraints, such as the conservation of mass and energy. These are the same types of equations solved, for example, to simulate how air flows when you open windows to improve ventilation in a room.”
    Matters become more complicated when quantum mechanics is involved, as is the case when one wants to simulate quantum many-body systems that are out of equilibrium.
    “Quantum many-body systems — which are composed of many interacting particles, such as atoms — are at the heart of atomic, nuclear, and particle physics,” said David Weiss, Distinguished Professor of Physics at Penn State and one of the leaders of the research team. “It used to be that except in extreme limits you couldn’t do a calculation to describe out-of-equilibrium quantum many-body systems. That recently changed.”
    The change was motivated by the development of a theoretical framework known as generalized hydrodynamics.
    “The problem with those quantum many-body systems in one dimension is that they have so many constraints on their motion that regular hydrodynamics descriptions cannot be used,” said Rigol. “Generalized hydrodynamics was developed to keep track of all those constraints.”
    Until now, generalized hydrodynamics had been tested experimentally only under conditions where the strength of interactions among particles was weak.
    “We set out to test the theory further, by looking at the dynamics of one-dimensional gases with a wide range of interaction strengths,” said Weiss. “The experiments are extremely well controlled, so the results can be precisely compared to the predictions of this theory.”
    The research team uses one-dimensional gases of interacting atoms that are initially confined in a very shallow trap in equilibrium. They then very suddenly increase the depth of the trap by a factor of 100, which forces the particles to collapse into the center of the trap, causing their collective properties to change. Throughout the collapse, the team precisely measures those properties, which they can then compare to the predictions of generalized hydrodynamics.
    “Our measurements matched the prediction of theory across dozens of trap oscillations,” said Weiss. “There currently aren’t other ways to study out-of-equilibrium quantum systems for long periods of time with reasonable accuracy, especially with a lot of particles. Generalized hydrodynamics allow us to do this for some systems like the one we tested, but how generally applicable it is still needs to be determined.”
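    Rigol's description above — treat the fluid as a continuum and write coupled equations that impose conservation laws — can be made concrete with the simplest such equation, 1-D mass conservation (the continuity equation). Below is a minimal upwind finite-difference sketch; the generalized-hydrodynamics equations themselves track many more conserved quantities than this:

```python
def advect(rho, u, dx, dt):
    """One upwind step of the 1-D continuity equation
    d(rho)/dt + d(rho * u)/dx = 0 on a periodic domain (assumes u > 0)."""
    n = len(rho)
    flux = [r * u for r in rho]  # mass flux out of each cell
    return [rho[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]

n, dx, dt, u = 100, 1.0, 0.5, 1.0                       # CFL number u*dt/dx = 0.5 (stable)
rho = [1.5 if 40 <= i < 60 else 1.0 for i in range(n)]  # density bump on a uniform background
mass0 = sum(rho) * dx
for _ in range(200):
    rho = advect(rho, u, dx, dt)
mass = sum(rho) * dx  # total mass is conserved by the flux form of the update
```

    The bump travels around the periodic domain while total mass stays fixed — the kind of constraint such hydrodynamic schemes are built to enforce.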
    Story Source:
    Materials provided by Penn State. Original written by Sam Sholtis. Note: Content may be edited for style and length.

  • Scientists create a labor-saving automated method for studying electronic health records

    In an article published in the journal Patterns, scientists at the Icahn School of Medicine at Mount Sinai described the creation of a new, automated, artificial intelligence-based algorithm that can learn to read patient data from electronic health records. In a side-by-side comparison, they showed that their method, called Phe2vec (FEE-to-vek), identified patients with certain diseases as accurately as the traditional, “gold-standard” method, which requires much more manual labor to develop and perform.
    “There continues to be an explosion in the amount and types of data electronically stored in a patient’s medical record. Disentangling this complex web of data can be highly burdensome, thus slowing advancements in clinical research,” said Benjamin S. Glicksberg, PhD, Assistant Professor of Genetics and Genomic Sciences, a member of the Hasso Plattner Institute for Digital Health at Mount Sinai (HPIMS), and a senior author of the study. “In this study, we created a new method for mining data from electronic health records with machine learning that is faster and less labor intensive than the industry standard. We hope that this will be a valuable tool that will facilitate further, and less biased, research in clinical informatics.”
    The study was led by Jessica K. De Freitas, a graduate student in Dr. Glicksberg’s lab.
    Currently, scientists rely on a set of established computer programs, or algorithms, to mine medical records for new information. The development and storage of these algorithms is managed by a system called the Phenotype Knowledgebase (PheKB). Although the system is highly effective at correctly identifying a patient diagnosis, the process of developing an algorithm can be very time-consuming and inflexible. To study a disease, researchers first have to comb through reams of medical records looking for pieces of data, such as certain lab tests or prescriptions, which are uniquely associated with the disease. They then program the algorithm that guides the computer to search for patients who have those disease-specific pieces of data, which constitute a “phenotype.” In turn, the list of patients identified by the computer needs to be manually double-checked by researchers. Each time researchers want to study a new disease, they have to restart the process from scratch.
    In this study, the researchers tried a different approach — one in which the computer learns, on its own, how to spot disease phenotypes and thus save researchers time and effort. This new, Phe2vec method was based on studies the team had already conducted.
    “Previously, we showed that unsupervised machine learning could be a highly efficient and effective strategy for mining electronic health records,” said Riccardo Miotto, PhD, a former Assistant Professor at the HPIMS and a senior author of the study. “The potential advantage of our approach is that it learns representations of diseases from the data itself. Therefore, the machine does much of the work experts would normally do to define the combination of data elements from health records that best describes a particular disease.”
    Essentially, a computer was programmed to scour through millions of electronic health records and learn how to find connections between data and diseases. This programming relied on “embedding” algorithms that had been previously developed by other researchers, such as linguists, to study word networks in various languages. One of the algorithms, called word2vec, was particularly effective. Then, the computer was programmed to use what it learned to identify the diagnoses of nearly 2 million patients whose data was stored in the Mount Sinai Health System.
    Finally, the researchers compared the effectiveness of the new and old systems. For nine out of ten diseases tested, they found that the new Phe2vec system was as effective as, or slightly better than, the gold-standard phenotyping process at correctly identifying a diagnosis from electronic health records. Examples of the diseases included dementia, multiple sclerosis, and sickle cell anemia.
    “Overall our results are encouraging and suggest that Phe2vec is a promising technique for large-scale phenotyping of diseases in electronic health record data,” Dr. Glicksberg said. “With further testing and refinement, we hope that it could be used to automate many of the initial steps of clinical informatics research, thus allowing scientists to focus their efforts on downstream analyses like predictive modeling.”
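    Phe2vec itself trains word2vec-style embeddings over millions of records; as a much simpler stand-in for the same principle — codes that appear in similar patient contexts get similar vectors — here is a count-based co-occurrence embedding on a toy, entirely made-up set of records:

```python
from math import sqrt

# Toy patient records: sequences of medical codes (all hypothetical).
patients = [
    ["dx:diabetes", "rx:metformin", "lab:hba1c_high"],
    ["dx:diabetes", "rx:insulin", "lab:hba1c_high"],
    ["dx:flu", "rx:oseltamivir", "lab:fever"],
    ["dx:flu", "lab:fever"],
    ["dx:diabetes", "rx:metformin"],
]

def embed(records):
    """Represent each code by its vector of co-occurrence counts with every other code."""
    vocab = sorted({c for rec in records for c in rec})
    index = {c: i for i, c in enumerate(vocab)}
    vecs = {c: [0] * len(vocab) for c in vocab}
    for rec in records:
        for a in rec:
            for b in rec:
                if a != b:
                    vecs[a][index[b]] += 1
    return vecs

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu, nv = sqrt(sum(x * x for x in u)), sqrt(sum(y * y for y in v))
    return dot / (nu * nv) if nu and nv else 0.0

vecs = embed(patients)
sim_related = cosine(vecs["dx:diabetes"], vecs["rx:metformin"])      # shared contexts
sim_unrelated = cosine(vecs["dx:diabetes"], vecs["rx:oseltamivir"])  # disjoint contexts
```

    A diabetes diagnosis code ends up far closer to the diabetes drug than to the flu drug, which is the intuition behind learning disease phenotypes from the data itself.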
    This study was supported by the Hasso Plattner Foundation, the Alzheimer’s Drug Discovery Foundation, and a courtesy graphics processing unit donation from the NVIDIA Corporation.

  • These geckos crash-land on rainforest trees but don't fall, thanks to their tails

    A gecko’s tail is a wondrous and versatile thing.
    In more than 15 years of research on geckos, scientists at the University of California, Berkeley, and, more recently, the Max Planck Institute for Intelligent Systems in Stuttgart, Germany, have shown that geckos use their tails to maneuver in midair when gliding between trees, to right themselves when falling, to keep from falling off a tree when they lose their grip and even to propel themselves across the surface of a pond, as if walking on water.
    Many of these techniques have been implemented in agile, gecko-like robots.
    But Robert Full, UC Berkeley professor of integrative biology, and Ardian Jusufi, faculty member at the Max Planck Research School for Intelligent Systems and former UC Berkeley doctoral student, were blown away by a recent discovery: Geckos also use their tails to help recover when they take a header into a tree.
    Those head-first crashes are probably not the geckos’ preferred landing, but Jusufi documented many such hard landings in 37 glides over several field seasons in a Singapore rainforest, using high-speed video cameras to record their trajectories and wince-inducing landings. He clocked their speed upon impact at about 6 meters per second, or 21 kilometers per hour — about 20 feet per second, or some 120 gecko body lengths per second.
    “Observing the geckos from elevation in the rainforest canopy was eye-opening. Before take-off, they would move their head up-and-down, and side-to-side to view the landing target prior to jumping off, as if to estimate the travel distance,” Jusufi said.
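    As a quick check, the reported ~6 m/s impact speed converts between units as follows (the ~5 cm body length is an assumption for illustration, not a figure from the study):

```python
v_mps = 6.0                  # reported impact speed, meters per second
kmh = v_mps * 3.6            # -> kilometers per hour
fps = v_mps / 0.3048         # -> feet per second (1 ft = 0.3048 m)
body_lengths = v_mps / 0.05  # -> body lengths per second, assuming ~5 cm per gecko
```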

  • Nano ‘camera’ made using molecular glue allows real-time monitoring of chemical reactions

    Researchers have made a tiny camera, held together with ‘molecular glue’, that allows them to observe chemical reactions in real time.
    The device, made by a team from the University of Cambridge, combines tiny semiconductor nanocrystals called quantum dots and gold nanoparticles using molecular glue called cucurbituril (CB). When added to water with the molecule to be studied, the components self-assemble in seconds into a stable, powerful tool that allows the real-time monitoring of chemical reactions.
    The camera harvests light within the semiconductors, inducing electron transfer processes like those that occur in photosynthesis, which can be monitored using incorporated gold nanoparticle sensors and spectroscopic techniques. They were able to use the camera to observe chemical species which had been previously theorised but not directly observed.
    The platform could be used to study a wide range of molecules for a variety of potential applications, such as the improvement of photocatalysis and photovoltaics for renewable energy. The results are reported in the journal Nature Nanotechnology.
    Nature controls the assemblies of complex structures at the molecular scale through self-limiting processes. However, mimicking these processes in the lab is usually time-consuming, expensive and reliant on complex procedures.
    “In order to develop new materials with superior properties, we often combine different chemical species together to come up with a hybrid material that has the properties we want,” said Professor Oren Scherman from Cambridge’s Yusuf Hamied Department of Chemistry, who led the research. “But making these hybrid nanostructures is difficult, and you often end up with uncontrolled growth or materials that are unstable.”
    The new method that Scherman and his colleagues from Cambridge’s Cavendish Laboratory and University College London developed uses cucurbituril — a molecular glue which interacts strongly with both semiconductor quantum dots and gold nanoparticles. The researchers used small semiconductor nanocrystals to control the assembly of larger nanoparticles through a process they coined interfacial self-limiting aggregation. The process leads to permeable and stable hybrid materials that interact with light. The camera was used to observe photocatalysis and track light-induced electron transfer.

  • Brain-inspired memory device

    Many electronic devices today are dependent on semiconductor logic circuits based on switches hard-wired to perform predefined logic functions. Physicists from the National University of Singapore (NUS), together with an international team of researchers, have developed a novel molecular memristor, or an electronic memory device, that has exceptional memory reconfigurability.
    Unlike hard-wired standard circuits, the molecular device can be reconfigured using voltage to embed different computational tasks. The energy-efficient new technology, which is capable of enhanced computational power and speed, can potentially be used in edge computing, as well as in handheld devices and applications with limited power resources.
    “This work is a significant breakthrough in our quest to design low-energy computing. The idea of using multiple switching in a single element draws inspiration from how the brain works and fundamentally reimagines the design strategy of a logic circuit,” said Associate Professor Ariando from the NUS Department of Physics who led the research.
    The research was first published in the journal Nature on 1 September 2021, and carried out in collaboration with the Indian Association for the Cultivation of Science, Hewlett Packard Enterprise, the University of Limerick, the University of Oklahoma, and Texas A&M University.
    Brain-inspired technology
    “This new discovery can contribute to developments in edge computing as a sophisticated in-memory computing approach to overcome the von Neumann bottleneck, a delay in computational processing seen in many digital technologies due to the physical separation of memory storage from a device’s processor,” said Assoc Prof Ariando. The new molecular device also has the potential to contribute to designing next generation processing chips with enhanced computational power and speed.
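    The hardware here is a molecular device, but the design idea — a single element whose stored state, set by voltage, decides which logic function it computes — is loosely analogous to a reprogrammable lookup table. The sketch below is purely a software analogy, not a model of the memristor physics:

```python
class ReconfigurableGate:
    """A 2-input lookup table: the stored truth table, like the memristor's
    voltage-set state, determines which Boolean function the element computes."""

    def __init__(self, truth_table):
        self.table = dict(truth_table)   # maps (a, b) -> output bit

    def reconfigure(self, truth_table):  # analogous to applying a programming voltage
        self.table = dict(truth_table)

    def __call__(self, a, b):
        return self.table[(a, b)]

AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

gate = ReconfigurableGate(AND)
and_out = gate(1, 1)     # the element behaves as an AND gate
gate.reconfigure(XOR)    # same element, new function -- no rewiring
xor_out = gate(1, 1)
```

    The point of the analogy: the function is data, not wiring, which is what lets one element stand in for many hard-wired switches.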