More stories

  •

    New data registry collects evidence in cardiogenic shock patients

    Cardiogenic shock — a life-threatening condition in which a person’s heart can’t pump enough blood to meet the body’s needs — is most often caused by a severe heart attack or advanced heart failure. Historically, data related to cardiogenic shock have been limited, inconsistent and challenging to interpret. As a result, treatment recommendations vary and best practices remain unsettled.
    To address this need, the American Heart Association, the leading voluntary organization devoted to longer, healthier lives for all, created the Cardiogenic Shock Registry powered by Get With The Guidelines®. The new registry will help researchers, clinicians and regulators better understand the clinical symptoms of shock types, treatment patterns and outcomes. It will provide a foundation for improving the quality and consistency of care for patients with cardiogenic shock symptoms in U.S. hospitals.
    “To understand how to improve care for cardiogenic shock patients, we first need a clearer view of the landscape of existing treatment practices for cardiogenic shock in U.S.-based acute care settings,” said Mitchell Krucoff, M.D., FAHA, volunteer expert for the American Heart Association and professor of medicine at Duke University, Durham, N.C. “No organization is better positioned to advance this critical public health question than the American Heart Association, with already established networks of sites entering data on heart failure, acute cardiac syndromes, cardiac arrest and COVID — all of which involve patients at risk of progressing to cardiogenic shock.”
    The Cardiogenic Shock Registry builds on more than 20 years of quality improvement and registry experience rooted in the Association’s Get With The Guidelines® platform. Data from this no-cost registry will help inform the larger medical community on how best to treat cardiogenic shock.
    The steering committee of the American Heart Association Cardiogenic Shock Registry provides guidance and expertise for establishing the registry and managing the data. The steering committee includes leading academic surgeons and cardiologists, representatives from founding funders, as well as representatives of the U.S. Food & Drug Administration and the U.S. Centers for Medicare & Medicaid Services.
    The American Heart Association’s Cardiogenic Shock Registry is made possible through the generous financial support of founding supporters Abbott and Getinge.
    “The new Cardiogenic Shock Registry will leverage the unparalleled reach of the American Heart Association in a unique collaboration between academic clinicians and researchers, federal agencies and funding supporters’ experts to provide high-quality evidence and promote best practices for the treatment of patients with cardiogenic shock,” said David Morrow, M.D., M.P.H., FAHA, volunteer expert for the American Heart Association and professor of medicine, Harvard Medical School, Boston.
    Story Source:
    Materials provided by American Heart Association. Note: Content may be edited for style and length.

  •

    Number-crunching mathematical models may give policy makers major headache

    Mathematical models that predict policy-driving scenarios — such as how a new pandemic might spread or the future amount of irrigation water needed worldwide — may be too complex, delivering ‘wrong’ answers, a new study reveals.
    Experts are using increasingly detailed models to better predict phenomena or gain more accurate insights in a range of key areas, such as environmental/climate sciences, hydrology and epidemiology.
    But the pursuit of complex models as tools to produce more accurate projections and predictions may not deliver because more complicated models tend to produce more uncertain estimates.
    Researchers from the Universities of Birmingham, Princeton, Reading, Barcelona and Bergen published their findings today in Science Advances. They reveal that expanding models without checking how extra detail adds uncertainty limits the models’ usefulness as tools to inform policy decisions in the real world.
    Arnald Puy, Associate Professor in Social and Environmental Uncertainties at the University of Birmingham, commented: “As science keeps on unfolding secrets, models keep getting bigger — integrating new discoveries to better reflect the world around us. We assume that more detailed models produce better predictions because they better match reality.
    “And yet pursuing ever-complex models may not deliver the results we seek, because adding new parameters brings new uncertainties into the model. These new uncertainties pile on top of the uncertainties already there at every model upgrade stage, making the model’s output fuzzier at every step of the way.”
    This tendency toward less accurate results affects any model that lacks training or validation data against which to check its outputs — including global models such as those focused on climate change, hydrology, food production and epidemiology, as well as any model projecting estimates into the future, regardless of the scientific field.
    Researchers recommend that the drive to produce increasingly detailed mathematical models as a means to get sharper estimates should be reassessed.
    “We suggest that modelers should calculate the model’s effective dimensions (the number of influential parameters and their highest-order interaction) before making the model more complex. This makes it possible to check how added complexity affects the uncertainty in the output. Such information is especially valuable for models aiming to play a role in policy making,” added Dr. Puy. “Both modelers and policy makers benefit from understanding any uncertainty generated when a model is upgraded with novel mechanisms.
    “Modelers tend not to submit their models to uncertainty and sensitivity analysis, but keep on adding detail. Not many scholars are interested in running such an analysis on their model if it risks showing that the emperor has no clothes and its allegedly sharp estimates are just a mirage.”
    Excess complexity also prevents scholars and the public alike from assessing the appropriateness of the models’ assumptions, which are often highly questionable. Puy and his team note, for example, that global hydrological models assume that irrigation optimises crop production and water use — a premise at odds with the practices of traditional irrigators.
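    The compounding-uncertainty argument above can be illustrated with a toy Monte Carlo experiment. This is an illustrative sketch only, not the authors’ analysis: the two model functions and their parameter ranges are invented for demonstration, to show how adding one poorly constrained parameter widens the spread of a model’s output.

```python
import random
import statistics

def simulate(model, n=20000, seed=0):
    """Monte Carlo: propagate parameter uncertainty through a model."""
    rng = random.Random(seed)
    return [model(rng) for _ in range(n)]

# A simple two-parameter model: output = a * b, each parameter uncertain.
def simple(rng):
    a = rng.uniform(0.9, 1.1)
    b = rng.uniform(0.9, 1.1)
    return a * b

# An "upgraded" model with one extra, poorly constrained parameter.
def upgraded(rng):
    a = rng.uniform(0.9, 1.1)
    b = rng.uniform(0.9, 1.1)
    c = rng.uniform(0.8, 1.2)  # newly added mechanism, wider uncertainty
    return a * b * c

spread_simple = statistics.stdev(simulate(simple))
spread_upgraded = statistics.stdev(simulate(upgraded))
print(spread_simple < spread_upgraded)  # the extra parameter widens the output
```

    In a real study this comparison would be made with formal variance-based sensitivity analysis (e.g. Sobol indices), but the same qualitative effect appears: each upgrade that introduces an uncertain mechanism makes the output fuzzier unless data constrain it.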
    Story Source:
    Materials provided by University of Birmingham. Note: Content may be edited for style and length.

  •

    Blocking the buzz: MXene composite could eliminate electromagnetic interference by absorbing it

    A recent discovery by materials science researchers in Drexel University’s College of Engineering might one day prevent electronic devices and components from going haywire when they’re too close to one another. A special coating that they developed, using a type of two-dimensional material called MXene, has been shown to be capable of absorbing and dispersing the electromagnetic fields that are the source of the problem.
    Buzzing, feedback or static are the noticeable manifestations of electromagnetic interference, a collision of the electromagnetic fields generated by electronic devices. Aside from the noise, this phenomenon can also diminish device performance and lead to overheating and malfunctions if left unchecked.
    While researchers and technologists have progressively reduced this problem with each generation of devices, their strategy thus far has been to encase vital components with a shielding that deflects electromagnetic waves. But according to the Drexel team, this isn’t a sustainable solution.
    “Because the number of electronic devices will continue to grow, deflecting the electromagnetic waves they produce is really just a short-term solution,” said Yury Gogotsi, PhD, Distinguished University and Bach Professor in the College of Engineering, who led the research. “To truly solve this problem, we need to develop materials that will absorb and dissipate the interference. We believe we have found just such a material.”
    In the recent edition of Cell Reports Physical Science, Gogotsi’s team reported that combining MXene, a two-dimensional material they discovered more than a decade ago, with a conductive element called vanadium in a polymer solution, produces a coating that can absorb electromagnetic waves.
    While researchers have previously demonstrated that MXenes are highly effective at warding off electromagnetic interference by reflecting it, adding vanadium carbide in a polymer matrix enhances two key characteristics of the material that improve its shielding performance.
    According to the researchers, adding vanadium — a material known for its durability and corrosion resistance, used in steel alloys for space vehicles and nuclear reactors — to the MXene structure causes layers of the MXene to form a sort of electrochemical grid that is perfect for trapping ions. Using a microwave-transparent polymer also makes the material more permeable to electromagnetic waves.
    Combined, these properties produce a coating that can absorb, entrap and dissipate the energy of electromagnetic waves at greater than 90% efficiency, according to the research.
    “Remarkably, combining polyurethane, a polymer commonly used in wall paint, with a tiny amount of MXene filler — about one part MXene in 50 parts polyurethane — can absorb more than 90% of incident electromagnetic waves covering the entire band of radar frequencies — known as X-band frequencies,” said Meikang Han, PhD, who participated in the research as a post-doctoral researcher at Drexel. “Radio waves just disappear inside the MXene-polymer composite film — of course, nothing disappears completely; the energy of the waves is transformed into a very small amount of heat, which is easily dissipated by the material.”
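    The 90% absorption figure can be translated into the decibel units engineers typically use for shielding effectiveness. This is a back-of-the-envelope sketch, not a calculation from the paper, and it assumes the simplest case where all non-absorbed power is transmitted (no reflection):

```python
import math

def shielding_effectiveness_db(absorbed_fraction):
    """Shielding effectiveness in dB from the fraction of incident power
    removed, assuming the remainder is transmitted: SE = -10*log10(T)."""
    transmitted = 1.0 - absorbed_fraction
    return -10.0 * math.log10(transmitted)

print(round(shielding_effectiveness_db(0.90), 1))  # 90% absorbed -> 10.0 dB
print(round(shielding_effectiveness_db(0.99), 1))  # 99% absorbed -> 20.0 dB
```

    On this logarithmic scale, every additional 10 dB means another factor-of-ten reduction in the power that gets through.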
    A thin coating of the vanadium-based MXene material — less than the width of a human hair — could render a material impermeable to any electromagnetic waves in the X-band spectrum, which includes microwave radiation and is the band most commonly produced by devices. Gogotsi predicts that this development could be important for high-stakes applications, such as medical and military settings, where maintaining technological performance is crucial.
    “Our results show that vanadium-based MXenes could play a key role in the expansion of Internet of Things technology and 5G and 6G communications,” Gogotsi said. “This study provides a new direction for the development of thin, highly absorbent, MXene-based electromagnetic interference protection materials.”
    Story Source:
    Materials provided by Drexel University. Note: Content may be edited for style and length.

  •

    Artificial intelligence answers the call for quail information

    When states want to gauge quail populations, the process can be grueling, time-consuming and expensive.
    It means spending hours in the field listening for calls. Or leaving a recording device in the field to catch what sounds are made — only to spend hours later listening to that audio. Then, repeating this process until there’s enough information to start making population estimates.
    But a new model developed by researchers at the University of Georgia aims to streamline this process. By using artificial intelligence to analyze terabytes of recordings for quail calls, the model gives wildlife managers the ability to gather the data they need in a matter of minutes.
    “The model is very accurate, picking up between 80% and 100% of all calls even in the noisiest recordings. So, you could take a recording, put it through our model and it will tell you how many quail calls that the recorder heard,” said James Martin, an associate professor at the UGA Warnell School of Forestry and Natural Resources who has been working on the project, in collaboration with the Georgia Department of Natural Resources, for about five years. “This new model allows you to analyze terabytes of data in seconds, and what that will allow us to do is scale up monitoring, so you can literally put hundreds of these devices out and cover a lot more area and do so with a lot less effort than in the past.”
    The software represents about five years of work by Martin, postdoctoral researcher Victoria Nolan and numerous key contributors who have worked with a code writer to create the model. It’s also part of a larger shift taking place in the field of wildlife research, where computer algorithms are now assisting with work that once took humans thousands of hours to complete.
    Increasingly, computers are getting smarter at, for example, identifying specific noises or certain traits in photos and sound recordings. For researchers such as Martin, it means hours once spent on tasks such as listening to audio or looking at game camera images can now be done by a computer, freeing up valuable time to focus on other aspects of a project.
    The new tool can also be a valuable resource for state and federal agencies looking for information on their quail populations, but with limited funds to spend on any one project. “So, I think this is something states might jump on as far as replacing their current monitoring with acoustic recording devices,” added Martin.
    The software’s success was recently documented by the Journal of Remote Sensing in Ecology and Conservation.
    As the software gets more use and is exposed to sounds from new geographic areas, Martin said, it gets even “smarter.” Quail make several different kinds of calls, and when the software is exposed to a variety of sounds that aren’t quail, he said, it becomes better able to distinguish the correct calls from the ambient noise of the grasses and trees around them.
    Over time, the software will grow more discerning.
    “So that’s why you have to keep giving it training data, and when you move geographies, you encounter new sounds that you didn’t train the model for,” he added. “It’s always about adaptation.”
    Story Source:
    Materials provided by University of Georgia. Original written by Kristen Morales. Note: Content may be edited for style and length.

  •

    AI takes guesswork out of lateral flow testing

    An artificial intelligence app that reads COVID-19 lateral flow tests helped to reduce false results in a new trial published today.
    In a study published in Cell Reports Medicine, a team of researchers from the University of Birmingham, Durham University and Oxford University tested whether a machine learning algorithm could improve the accuracy of results from antigen lateral flow devices for COVID-19.
    The LFD AI Consortium team worked at UK Health Security Agency assisted test centres and with health care workers conducting self-testing to trial the AI app. More than 100,000 images were submitted as part of the study, and the team found that the algorithm increased the sensitivity of the tests (the ability to distinguish a true positive from a false negative) from 92% to 97.6%.
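    Sensitivity here is the standard diagnostic recall metric: of all truly positive samples, how many the test flags as positive. As a minimal sketch, the counts below are hypothetical, chosen only to reproduce the percentages quoted in the article, and are not actual study data:

```python
def sensitivity(true_positives, false_negatives):
    """Sensitivity (recall): share of actual positives correctly flagged."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical counts per 1,000 infected samples (illustration only).
print(round(sensitivity(920, 80) * 100, 1))   # unaided reading: 92.0
print(round(sensitivity(976, 24) * 100, 1))   # AI-assisted: 97.6
```

    Put another way, at the quoted figures the AI-assisted reading would miss roughly 24 infections per 1,000 rather than 80, which is the reduction in false negatives the researchers describe.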
    Professor Andrew Beggs, Professor of Cancer Genetics & Surgery at the University of Birmingham and lead author of the study said:
    “The widespread use of antigen lateral flow devices was a significant moment not just during the pandemic, but has also introduced diagnostic testing to many more people in society. One of the drawbacks with LFD testing for COVID-19, pregnancy and any other future use is the ‘faint line’ question — where we can’t quite tell if it’s a positive or not.
    “The study looked at the feasibility of using machine learning to take the guesswork out of the faint line tests, and we’re pleased to see that the app increased the sensitivity of the tests, reducing the number of false negatives. This type of technology holds promise for many applications, both to reduce uncertainty about test results and to provide crucial support for visually impaired people.”
    Professor Camila Caiado, Professor of Statistics at Durham University and chief statistician on the project, said:
    “The increase in sensitivity and overall accuracy is significant and it shows the potential of this app by reducing the number of false negatives and future infections. Crucially, the method can also be easily adapted to the evaluation of other digital readers for lateral flow type devices.”
    Story Source:
    Materials provided by University of Birmingham. Note: Content may be edited for style and length.

  •

    Introducing FathomNet: New open-source image database unlocks the power of AI for ocean exploration

    A new collaborative effort between MBARI and other research institutions is leveraging the power of artificial intelligence and machine learning to accelerate efforts to study the ocean.
    In order to manage impacts from climate change and other threats, researchers urgently need to learn more about the ocean’s inhabitants, ecosystems, and processes. As scientists and engineers develop advanced robotics that can visualize marine life and environments to monitor changes in the ocean’s health, they face a fundamental problem: The collection of images, video, and other visual data vastly exceeds researchers’ capacity for analysis.
    FathomNet is an open-source image database that uses state-of-the-art data processing algorithms to help process the backlog of visual data. Using artificial intelligence and machine learning will alleviate the bottleneck for analyzing underwater imagery and accelerate important research around ocean health.
    “A big ocean needs big data. Researchers are collecting large quantities of visual data to observe life in the ocean. How can we possibly process all this information without automation? Machine learning provides a pathway forward; however, these approaches rely on massive datasets for training. FathomNet has been built to fill this gap,” said MBARI Principal Engineer Kakani Katija.
    Project co-founders Katija, Katy Croff Bell (Ocean Discovery League), and Ben Woodward (CVision AI), along with members of the extended FathomNet team, detailed the development of this new image database in a recent research publication in Scientific Reports.
    Recent advances in machine learning enable fast, sophisticated analysis of visual data, but the use of artificial intelligence in ocean research has been limited by the lack of a standard set of existing images that could be used to train the machines to recognize and catalog underwater objects and life. FathomNet addresses this need by aggregating images from multiple sources to create a publicly available, expertly curated underwater image training database.

  •

    A new AI model can accurately predict human response to novel drug compounds

    The journey between identifying a potential therapeutic compound and Food and Drug Administration approval of a new drug can take well over a decade and cost upwards of a billion dollars. A research team at the CUNY Graduate Center has created an artificial intelligence model that could significantly improve the accuracy and reduce the time and cost of the drug development process. Described in a newly published paper in Nature Machine Intelligence, the new model, called CODE-AE, can screen novel drug compounds to accurately predict efficacy in humans. In tests, it was also able to theoretically identify personalized drugs for over 9,000 patients that could better treat their conditions. Researchers expect the technique to significantly accelerate drug discovery and precision medicine.
    Accurate and robust prediction of patient-specific responses to a new chemical compound is critical to discover safe and effective therapeutics and select an existing drug for a specific patient. However, it is unethical and infeasible to do early efficacy testing of a drug in humans directly. Cell or tissue models are often used as a surrogate of the human body to evaluate the therapeutic effect of a drug molecule. Unfortunately, the drug effect in a disease model often does not correlate with the drug efficacy and toxicity in human patients. This knowledge gap is a major factor in the high costs and low productivity rates of drug discovery.
    “Our new machine learning model can address the translational challenge from disease models to humans,” said Lei Xie, a professor of computer science, biology and biochemistry at the CUNY Graduate Center and Hunter College and the paper’s senior author. “CODE-AE uses biology-inspired design and takes advantage of several recent advances in machine learning. For example, one of its components uses similar techniques in Deepfake image generation.”
    The new model can provide a workaround to the problem of having sufficient patient data to train a generalized machine learning model, said You Wu, a CUNY Graduate Center Ph.D. student and co-author of the paper. “Although many methods have been developed to utilize cell-line screens for predicting clinical responses, their performances are unreliable due to data incongruity and discrepancies,” Wu said. “CODE-AE can extract intrinsic biological signals masked by noise and confounding factors and effectively alleviates the data-discrepancy problem.”
    As a result, CODE-AE significantly improves accuracy and robustness over state-of-the-art methods in predicting patient-specific drug responses purely from cell-line compound screens.
    The research team’s next challenge in advancing the technology’s use in drug discovery is developing a way for CODE-AE to reliably predict the effect of a new drug’s concentration and metabolization in human bodies. The researchers also noted that the AI model could potentially be tweaked to accurately predict human side effects to drugs.
    This work was supported by the National Institute of General Medical Sciences and the National Institute on Aging.
    Story Source:
    Materials provided by The Graduate Center, CUNY. Note: Content may be edited for style and length.

  •

    Deep learning tool identifies bacteria in micrographs

    Omnipose, a deep learning software tool, is helping to solve the challenge of identifying varied and minuscule bacteria in microscopy images. It has gone beyond this initial goal to identify several other types of tiny objects in micrographs.
    The UW Medicine microbiology lab of Joseph Mougous and the University of Washington physics and bioengineering lab of Paul A. Wiggins tested the tool. It was developed by University of Washington physics graduate student Kevin J. Cutler and his team.
    Mougous said that Cutler, as a physics student, “demonstrated an unusual interest in immersing himself in a biology environment so that he could learn first-hand about problems in need of solution in this field. He came over to my lab and quickly found one that he solved in spectacular fashion.”
    Their results are reported in the Oct. 17 edition of Nature Methods.
    The scientists found that Omnipose, trained on a large database of bacterial images, performed well in characterizing and quantifying the myriad bacteria in mixed microbial cultures and eliminated some of the errors that can occur with its predecessor, Cellpose.
    Moreover, the software wasn’t easily fooled by extreme changes in a cell’s shape due to antibiotic treatment or antagonism by chemicals produced during interbacterial aggression. In fact, the program showed that it could even detect cell intoxication in a trial using E. coli.
    In addition, Omnipose did well in overcoming recognition problems due to differences in the optical characteristics across diverse bacteria.