More stories

    Early warning system forecasts who needs critical care for COVID-19

    Scientists have developed and validated an algorithm that can help healthcare professionals identify who is most at risk of dying from COVID-19 when admitted to a hospital, reports a study published today in eLife.
    The tool, which uses artificial intelligence (AI), could help doctors direct critical care resources to those who need them most, and will be especially valuable to resource-limited countries.
    “The appearance of new SARS-CoV-2 variants, waning immune protection and relaxation of mitigation measures means we are likely to continue seeing surges of infections and hospitalisations,” explains the leader of this international project and senior author David Gómez-Varela, former Max Planck Group Leader and current Senior Scientist at the Division of Pharmacology and Toxicology, University of Vienna, Austria. “There is a need for clinically valuable and generalisable triage tools to assist the allocation of hospital resources for COVID-19, particularly in places where resources are scarce. But these tools need to be able to cope with the ever-changing scenario of a global pandemic and must be easy to implement.”
    To develop such a tool, the team used biochemical data from routine blood draws performed on nearly 30,000 patients hospitalised in over 150 hospitals in Spain, the US, Honduras, Bolivia and Argentina between March 2020 and February 2022. This means they were able to capture data from people with different immune statuses — vaccinated, unvaccinated and those with natural immunity — and from people infected with every SARS-CoV-2 variant, from the virus that emerged in Wuhan, China, to the latest Omicron variant. “The intrinsic variability in such a diverse dataset is a great challenge for AI-based prediction models,” says lead author Riku Klén, Associate Professor at the University of Turku, Finland.
    The resulting algorithm — called COVID-19 Disease Outcome Predictor (CODOP) — uses measurements of 12 blood molecules that are normally collected during admission. This means the predictive tool can be easily integrated into the clinical care of any hospital.
    CODOP was developed in a multistep process, initially using data from patients hospitalised in more than 120 hospitals in Spain, to ‘train’ the AI system to predict hallmarks of a poor prognosis.
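The scoring idea can be pictured in a few lines. Everything below is purely illustrative: the biomarker names and weights are invented for the sketch, and the real CODOP model's features and coefficients are those reported in the eLife paper.

```python
import math

# Illustrative sketch of a CODOP-style risk score: 12 routine blood
# measurements are combined into one probability of a poor outcome.
# Biomarker names and weights here are invented, not from the study.

BIOMARKERS = [
    "crp", "ldh", "d_dimer", "ferritin", "lymphocytes", "neutrophils",
    "platelets", "urea", "creatinine", "glucose", "sodium", "potassium",
]

def risk_score(values, weights, bias=0.0):
    """Logistic combination of standardised biomarker values -> probability."""
    z = bias + sum(w * v for w, v in zip(weights, values))
    return 1.0 / (1.0 + math.exp(-z))

# Example: equal weights applied to one patient's standardised measurements.
weights = [0.25] * len(BIOMARKERS)
patient = [1.2, 0.8, 1.5, 0.9, -1.1, 0.7, -0.4, 1.0, 0.6, 0.3, -0.2, 0.1]
print(f"predicted risk of poor outcome: {risk_score(patient, weights):.2f}")
```

Because the inputs are routine admission blood values, a score of this shape needs no extra instrumentation, which is what makes the real tool easy to integrate into clinical workflows.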
    The next step was to ensure the tool worked regardless of patients’ immune status or COVID-19 variant, so they tested the algorithm in several subgroups of geographically dispersed patients. The tool still performed well at predicting the risk of in-hospital death during this fluctuating scenario of the pandemic, suggesting the measurements CODOP is based on are truly meaningful biomarkers of whether a patient with COVID-19 is likely to deteriorate.
To test whether the timing of blood tests affects the tool’s performance, the team compared data from blood drawn at different time points before patients either recovered or died. They found that the algorithm can predict the survival or death of hospitalised patients with high accuracy up until nine days before either outcome occurs.
    Finally, they created two different versions of the tool for use in scenarios where healthcare resources are either operating normally or are under severe pressure. Under normal operational burden, doctors may opt to use an ‘overtriage’ version, which is highly sensitive at picking up people at increased risk of death, at the expense of detecting some people who did not require critical care. The alternative ‘undertriage’ model minimises the possibility of wrongly selecting people at lower risk of dying, providing doctors with greater certainty that they are directing care to those at the highest risk when resources are severely limited.
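The over/undertriage distinction can be understood as moving the decision threshold applied to the model's probability output. A minimal sketch, with threshold values invented for illustration:

```python
# Sketch of threshold-based triage modes (threshold values are illustrative,
# not the published ones). A low threshold is highly sensitive but overtriages;
# a high threshold rarely flags low-risk patients but may undertriage.

OVERTRIAGE_THRESHOLD = 0.2   # resources available: cast a wide net
UNDERTRIAGE_THRESHOLD = 0.8  # resources scarce: flag only the highest risk

def triage(risk_probability, scarce_resources=False):
    threshold = UNDERTRIAGE_THRESHOLD if scarce_resources else OVERTRIAGE_THRESHOLD
    return "critical care" if risk_probability >= threshold else "standard care"

print(triage(0.5))                         # overtriage mode flags moderate risk
print(triage(0.5, scarce_resources=True))  # undertriage mode does not
```

The same underlying model serves both modes; only the operating point changes, which is the classic sensitivity-versus-specificity trade-off.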
    “The performance of CODOP in diverse and geographically dispersed patient groups and the ease of use suggest it could be a valuable tool in the clinic, especially in resource-limited countries,” remarks Gómez-Varela. “We are now working on a follow-up dual model tailored to the current pandemic scenario of increasing infections and cumulative immune protection, which will predict the need for hospitalisation within 24 hours for patients within primary care, and intensive care admission within 48 hours for those already hospitalised. We hope to help healthcare systems restore previous standards of routine care before the pandemic took hold.”
    The CODOP predictor is freely accessible at: https://gomezvarelalab.em.mpg.de/codop/
    Story Source:
Materials provided by eLife. Note: Content may be edited for style and length.

    New silicon nanowires can really take the heat

    Scientists have demonstrated a new material that conducts heat 150% more efficiently than conventional materials used in advanced chip technologies.
The device, an ultrathin silicon nanowire, could enable smaller, faster microelectronics with heat-transfer efficiency that surpasses current technologies. Microchips that dissipate heat efficiently would in turn let electronic devices consume less energy, an improvement that could help reduce the burning of the carbon-rich fossil fuels that contribute to global warming.
    “By overcoming silicon’s natural limitations in its capacity to conduct heat, our discovery tackles a hurdle in microchip engineering,” said Junqiao Wu, the scientist who led the Physical Review Letters study reporting the new device. Wu is a faculty scientist in the Materials Sciences Division and professor of materials science and engineering at UC Berkeley.
    Heat’s slow flow through silicon
    Our electronics are relatively affordable because silicon — the material of choice for computer chips — is cheap and abundant. But although silicon is a good conductor of electricity, it is not a good conductor of heat when it is reduced to very small sizes — and when it comes to fast computing, that presents a big problem for tiny microchips.
Within each microchip reside tens of billions of silicon transistors that direct the flow of electrons in and out of memory cells, encoding bits of data as ones and zeroes, the binary language of computers. Electrical currents run between these hard-working transistors, and these currents inevitably generate heat.

    The numbers don't lie: Australia is failing at maths and we need to find a new formula to arrest the decline

    Divide, subtract, add, multiply: whatever way you cut it, Australia is heading in one direction when it comes to global maths rankings — downwards.
From 11th place in the OECD’s world mathematics rankings 20 years ago, Australian secondary students are now languishing in 29th place out of 38 countries, according to the most recent statistics.
    The sliding maths rankings have created widespread debate over whether curriculum changes are needed in our schools, but a new international paper co-authored by University of South Australia cognitive psychologist Dr Fernando Marmolejo-Ramos could provide part of the solution.
In the latest edition of Integrative Psychological and Behavioral Science, Dr Marmolejo-Ramos and researchers from China and Iran explain why simple gestures such as hand motions are important in helping students understand mathematical concepts.
    “Many people struggle with mathematics and there is a lot of anxiety around it because it is an abstract topic,” Dr Marmolejo-Ramos says. “You see the numbers, equations and graphs, but unless you engage human motor and sensory skills, they can be very difficult to grasp.”
To get maths concepts across, it is important to bring together language, speech intonation, facial expressions and hand gestures, particularly the latter, the researchers say.

    New method melds data to make a 3-D map of cells' activities

    Just as it’s hard to understand a conversation without knowing its context, it can be difficult for biologists to grasp the significance of gene expression without knowing a cell’s environment. To solve that problem, researchers at Princeton Engineering have developed a method to elucidate a cell’s surroundings so that biologists can make more meaning of gene expression information.
    The researchers, led by Professor of Computer Science Ben Raphael, hope the new system will open the door to identifying rare cell types and choosing cancer treatment options with new precision. Raphael is the senior author of a paper describing the method published May 16 in Nature Methods.
    The basic technique of linking gene expression with a cell’s environment, called spatial transcriptomics (ST), has been around for several years. Scientists break down tissue samples onto a microscale grid and link each spot on the grid with information about gene expression. The problem is that current computational tools can only analyze spatial patterns of gene expression in two dimensions. Experiments that use multiple slices from a single tissue sample — such as a region of a brain, heart or tumor — are difficult to synthesize into a complete picture of the cell types in the tissue.
    The Princeton researchers’ method, called PASTE (for Probabilistic Alignment of ST Experiments), integrates information from multiple slices taken from the same tissue sample, providing a three-dimensional view of gene expression within a tumor or a developing organ. When sequence coverage in an experiment is limited due to technical or cost issues, PASTE can also merge information from multiple tissue slices into a single two-dimensional consensus slice with richer gene expression information.
    “Our method was motivated by the observation that oftentimes biologists will perform multiple experiments from the same tissue,” said Raphael. “Now, these replicate experiments are not exactly the same cells, but they’re from the same tissue and therefore should be highly similar.”
The team’s technique can align multiple slices from a single tissue sample, categorizing cells based on their gene expression profiles while preserving the physical location of the cells within the tissue.
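As a rough illustration of the alignment idea, and not the published algorithm (PASTE solves a joint optimal-transport problem), matching spots across two slices means balancing spatial proximity against expression dissimilarity:

```python
# Toy illustration of slice alignment: match each spot in one slice to
# the spot in the adjacent slice that best balances spatial distance with
# gene expression similarity. Real PASTE optimises this jointly via
# optimal transport; this greedy version only conveys the intuition.

def cost(spot_a, spot_b, alpha=0.5):
    """Weighted sum of spatial distance and expression dissimilarity."""
    (xa, ya), expr_a = spot_a
    (xb, yb), expr_b = spot_b
    spatial = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
    expression = sum(abs(p - q) for p, q in zip(expr_a, expr_b))
    return alpha * spatial + (1 - alpha) * expression

def align(slice_a, slice_b):
    """For each spot in slice_a, the index of its lowest-cost partner in slice_b."""
    return [min(range(len(slice_b)), key=lambda j: cost(spot, slice_b[j]))
            for spot in slice_a]

# Two tiny slices; each spot is ((x, y), expression_vector).
slice_a = [((0, 0), [1.0, 0.0]), ((1, 0), [0.0, 1.0])]
slice_b = [((0, 0.1), [0.9, 0.1]), ((1, 0.1), [0.1, 0.9])]
print(align(slice_a, slice_b))  # each spot pairs with its counterpart: [0, 1]
```

Stacking such pairwise alignments across consecutive slices is what yields the three-dimensional picture described above.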

    Scientists identify characteristics to better define long COVID

    A research team supported by the National Institutes of Health has identified characteristics of people with long COVID and those likely to have it. Scientists, using machine learning techniques, analyzed an unprecedented collection of electronic health records (EHRs) available for COVID-19 research to better identify who has long COVID. Exploring de-identified EHR data in the National COVID Cohort Collaborative (N3C), a national, centralized public database led by NIH’s National Center for Advancing Translational Sciences (NCATS), the team used the data to find more than 100,000 likely long COVID cases as of October 2021 (as of May 2022, the count is more than 200,000). The findings appeared May 16 in The Lancet Digital Health.
    Long COVID is marked by wide-ranging symptoms, including shortness of breath, fatigue, fever, headaches, “brain fog” and other neurological problems. Such symptoms can last for many months or longer after an initial COVID-19 diagnosis. One reason long COVID is difficult to identify is that many of its symptoms are similar to those of other diseases and conditions. A better characterization of long COVID could lead to improved diagnoses and new therapeutic approaches.
    “It made sense to take advantage of modern data analysis tools and a unique big data resource like N3C, where many features of long COVID can be represented,” said co-author Emily Pfaff, Ph.D., a clinical informaticist at the University of North Carolina at Chapel Hill.
    The N3C data enclave currently includes information representing more than 13 million people nationwide, including nearly 5 million COVID-19-positive cases. The resource enables rapid research on emerging questions about COVID-19 vaccines, therapies, risk factors and health outcomes.
    The new research is part of a related, larger trans-NIH initiative, Researching COVID to Enhance Recovery (RECOVER), which aims to improve the understanding of the long-term effects of COVID-19, called post-acute sequelae of SARS-CoV-2 infection (PASC). RECOVER will accurately identify people with PASC and develop approaches for its prevention and treatment. The program also will answer critical research questions about the long-term effects of COVID through clinical trials, longitudinal observational studies, and more.
    In the Lancet study, Pfaff, Melissa Haendel, Ph.D., at the University of Colorado Anschutz Medical Campus, and their colleagues examined patient demographics, health care use, diagnoses and medications in the health records of 97,995 adult COVID-19 patients in the N3C. They used this information, along with data on nearly 600 long COVID patients from three long COVID clinics, to create three machine learning models to identify long COVID patients.
    In machine learning, scientists “train” computational methods to rapidly sift through large amounts of data to reveal new insights — in this case, about long COVID. The models looked for patterns in the data that could help researchers both understand patient characteristics and better identify individuals with the condition.
    The models focused on identifying potential long COVID patients among three groups in the N3C database: All COVID-19 patients, patients hospitalized with COVID-19, and patients who had COVID-19 but were not hospitalized. The models proved to be accurate, as people identified as at risk for long COVID were similar to patients seen at long COVID clinics. The machine learning systems classified approximately 100,000 patients in the N3C database whose profiles were close matches to those with long COVID.
    “Once you’re able to determine who has long COVID in a large database of people, you can begin to ask questions about those people,” said Josh Fessel, M.D., Ph.D., senior clinical advisor at NCATS and a scientific program lead in RECOVER. “Was there something different about those people before they developed long COVID? Did they have certain risk factors? Was there something about how they were treated during acute COVID that might have increased or decreased their risk for long COVID?”
    The models searched for common features, including new medications, doctor visits and new symptoms, in patients with a positive COVID diagnosis who were at least 90 days out from their acute infection. The models identified patients as having long COVID if they went to a long COVID clinic or demonstrated long COVID symptoms and likely had the condition but hadn’t been diagnosed.
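A toy sketch of that cohort-filtering logic follows; every field name here is invented for illustration and is not drawn from the N3C schema:

```python
from datetime import date

# Hypothetical sketch of the filtering step described above: keep patients
# at least 90 days past a positive test who either visited a long COVID
# clinic or show suggestive post-acute features.

def is_long_covid_candidate(record, today=date(2021, 10, 1)):
    if (today - record["positive_test_date"]).days < 90:
        return False  # still within the acute window
    return record["long_covid_clinic_visit"] or (
        record["new_symptoms"] and record["new_medications"]
    )

patient = {
    "positive_test_date": date(2021, 1, 15),
    "long_covid_clinic_visit": False,
    "new_symptoms": True,
    "new_medications": True,
}
print(is_long_covid_candidate(patient))  # True: >90 days out, new findings
```

In the actual study this kind of candidate set is what the machine learning models were trained on, rather than a hand-written rule.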
    “We want to incorporate the new patterns we’re seeing with the diagnosis code for COVID and include it in our models to try to improve their performance,” said the University of Colorado’s Haendel. “The models can learn from a greater variety of patients and become more accurate. We hope we can use our long COVID patient classifier for clinical trial recruitment.”
This study was funded by NCATS, which contributed to the design, maintenance and security of the N3C Enclave, and the NIH RECOVER Initiative, supported by NIH OT2HL161847. RECOVER is coordinating, among others, the participant recruitment protocol to which this work contributes. The analyses were conducted with data and tools accessed through the NCATS N3C Data Enclave and supported by NCATS U24TR002306.

    Shaping the future of light through reconfigurable metasurfaces

The technological advancement of optical lenses has long been a significant marker of human scientific achievement. Eyeglasses, telescopes, cameras, and microscopes have all literally and figuratively allowed us to see the world in a new light. Lenses are also a fundamental component in the semiconductor industry’s manufacture of nanoelectronics.
One of the most impactful breakthroughs of lens technology in recent history has been the development of photonic metasurfaces — artificially engineered nano-scale materials with remarkable optical properties. Georgia Tech researchers at the forefront of this technology have now demonstrated the first-ever electrically tunable photonic metasurface platform, in a study published in Nature Communications.
    “Metasurfaces can make the optical systems very thin, and as they become easier to control and tune, you’ll soon find them in cell phone cameras and similar electronic imaging systems,” said Ali Adibi, professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology.
The pronounced tuning achieved through the new platform represents a critical advancement towards miniaturized reconfigurable metasurfaces. The study reports a record eleven-fold change in reflective properties, a large spectral tuning range, and much faster tuning speeds.
    Heating Up Metasurfaces
Metasurfaces are a class of nanophotonic materials in which a large array of miniaturized elements is engineered to affect the transmission and reflection of light at different frequencies in a controlled way.

    The way of water: Making advanced electronics with H2O

    Water is the secret ingredient in a simple way to create key components for solar cells, X-ray detectors and other optoelectronics devices.
    The next generation of photovoltaics, semiconductors and LEDs could be made using perovskites — an exciting and versatile nanomaterial with a crystal structure.
    Perovskites have already shown similar efficiency to silicon, are cheaper to make, and feature a tuneable bandgap, meaning the energy they are able to absorb, reflect or conduct can be changed to suit different purposes.
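The link between bandgap and usable light can be made concrete: the longest wavelength a material can absorb follows from lambda = h*c / E_gap. The bandgap values in this quick conversion are illustrative, not taken from the study.

```python
# Converting a bandgap (eV) to its absorption-edge wavelength (nm) shows
# why a tuneable bandgap matters: shifting the gap shifts which light the
# material can absorb. Example bandgap values are illustrative only.

PLANCK_EV_S = 4.135667e-15   # Planck constant, eV*s
LIGHT_SPEED = 2.997925e8     # speed of light, m/s

def absorption_edge_nm(bandgap_ev):
    """Cut-off wavelength (nm) for a given bandgap (eV): lambda = h*c/E."""
    return PLANCK_EV_S * LIGHT_SPEED / bandgap_ev * 1e9

print(round(absorption_edge_nm(1.6)))  # ~775 nm: absorbs most visible light
print(round(absorption_edge_nm(2.3)))  # ~539 nm: edge shifts into the green
```

A solar cell wants a gap tuned to harvest as much of the solar spectrum as possible, while an LED wants a gap matching the colour it should emit; the same material family can serve both.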
    Ordinarily, water is kept as far away as possible during the process of creating perovskites. The presence of moisture can lead to defects in the materials, causing them to fall apart more quickly when they’re being used in a device.
    That’s why perovskites for scientific research are often made via spin coating in the sealed environment of a nitrogen glove box.
    Now, though, members of the ARC Centre of Excellence in Exciton Science have found a simple way to control the growth of phase-pure perovskite crystals by harnessing water as a positive factor. This liquid-based mechanism works at room temperature, so the approach remains cost effective. More

    Electronic skin: Physicist develops multisensory hybrid material

    The “smart skin” developed by Anna Maria Coclite is very similar to human skin. It senses pressure, humidity and temperature simultaneously and produces electronic signals. More sensitive robots or more intelligent prostheses are thus conceivable.
The skin is the largest sensory organ and at the same time the human body’s protective covering. It “feels” several sensory inputs at the same time and reports information about humidity, temperature and pressure to the brain. For Anna Maria Coclite, a material with such multisensory properties is “a kind of ‘holy grail’ in the technology of intelligent artificial materials. In particular, robotics and smart prosthetics would benefit from a better integrated, more precise sensing system similar to human skin.” The ERC grant winner and researcher at the Institute of Solid State Physics at TU Graz has succeeded in developing the three-in-one hybrid material “smart skin” for the next generation of artificial, electronic skin using a novel process. The result of this pioneering research has now been published in the journal Advanced Materials Technologies.
    As delicate as a fingertip
For almost six years, the team worked on the development of smart skin as part of Coclite’s ERC project Smart Core. With 2,000 individual sensors per square millimetre, the hybrid material is even more sensitive than a human fingertip. Each of these sensors consists of a unique combination of materials: a smart polymer in the form of a hydrogel inside and a shell of piezoelectric zinc oxide. Coclite explains: “The hydrogel can absorb water and thus expands upon changes in humidity and temperature. In doing so, it exerts pressure on the piezoelectric zinc oxide, which responds to this and all other mechanical stresses with an electrical signal.” The result is a wafer-thin material that reacts simultaneously to force, moisture and temperature with extremely high spatial resolution and emits corresponding electronic signals. “The first artificial skin samples are six micrometres thin, or 0.006 millimetres. But it could be even thinner,” says Anna Maria Coclite. In comparison, the human epidermis is 0.03 to 2 millimetres thick. Human skin perceives objects from a size of about one square millimetre; the smart skin has a resolution a thousand times finer and can register objects that are too small for human skin (such as microorganisms).
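The quoted density can be turned into a spacing with a back-of-the-envelope calculation (our estimate, not a figure from the paper):

```python
# 2,000 sensors per square millimetre on a square grid implies a
# centre-to-centre pitch of roughly 22 micrometres, far below the
# ~1 mm scale resolved by human skin.

def sensor_pitch_um(sensors_per_mm2):
    """Approximate pitch in micrometres for a square grid of sensors."""
    return 1000.0 / sensors_per_mm2 ** 0.5

print(round(sensor_pitch_um(2000), 1))  # ~22.4 micrometres
```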
    Material processing at the nanoscale
The individual sensor layers are very thin and at the same time equipped with sensor elements covering the entire surface. This was made possible by a process unique worldwide, in which the researchers combined three known methods from physical chemistry for the first time: chemical vapour deposition for the hydrogel material, atomic layer deposition for the zinc oxide and nanoimprint lithography for the polymer template. The lithographic preparation of the polymer template was the responsibility of the research group “Hybrid electronics and structuring” headed by Barbara Stadlober. The group is part of Joanneum Research’s Materials Institute based in Weiz.
Several fields of application are now opening up for the skin-like hybrid material. In healthcare, for example, the sensor material could independently detect microorganisms and report them accordingly. Also conceivable are prostheses that give the wearer information about temperature or humidity, or robots that can perceive their environment more sensitively. On the path to application, smart skin scores with a decisive advantage: the sensory nanorods — the “smart core” of the material — are produced using a vapour-based manufacturing process. This process is already well established in production plants for integrated circuits, for example. The production of smart skin can thus be easily scaled and implemented in existing production lines.
    The properties of smart skin are now being optimized even further. Anna Maria Coclite and her team — here in particular the PhD student Taher Abu Ali — want to extend the temperature range to which the material reacts and improve the flexibility of the artificial skin.
    Story Source:
Materials provided by Graz University of Technology. Original written by Susanne Filzwieser. Note: Content may be edited for style and length.