More stories

  • in

    Heat waves in U.S. rivers are on the rise. Here’s why that’s a problem

    U.S. rivers are getting into hot water. The frequency of river and stream heat waves is on the rise, a new analysis shows.

    Like marine heat waves, riverine heat waves occur when water temperatures creep above their typical range for five or more days (SN: 2/1/22). Using 26 years of United States Geological Survey data, researchers compiled daily temperatures for 70 sites in rivers and streams across the United States, and then calculated how many days each site experienced a heat wave per year. From 1996 to 2021, the annual average number of heat wave days per river climbed from 11 to 25, the team reports October 3 in Limnology and Oceanography Letters.
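As a rough illustration of the definition above — water temperatures above their typical range for five or more consecutive days — here is a minimal sketch in Python of counting heat wave days in a daily temperature series. The threshold, run length, and toy data are hypothetical stand-ins, not values from the study.

```python
def heat_wave_days(temps, threshold, min_run=5):
    """Count days belonging to heat waves: runs of `min_run` or more
    consecutive days with temperatures above `threshold`."""
    total = 0
    run = 0
    for t in temps:
        if t > threshold:
            run += 1
        else:
            if run >= min_run:
                total += run  # run just ended and was long enough to count
            run = 0
    if run >= min_run:  # handle a heat wave still ongoing at series end
        total += run
    return total

# Toy series: one 6-day warm spell counts; a later 3-day spell does not.
temps = [20, 21, 26, 26, 27, 26, 26, 25, 20, 26, 26, 26, 20]
print(heat_wave_days(temps, threshold=24))  # -> 6
```

Applied to 26 years of daily records per site, a tally like this yields the heat-wave-days-per-year trend the researchers report.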

    The study is the first assessment of heat waves in rivers across the country, says Spencer Tassone, an ecosystem ecologist at the University of Virginia in Charlottesville. He and his colleagues tallied nearly 4,000 heat wave events — with annual counts jumping from 82 in 1996 to 198 in 2021 — amounting to over 35,000 heat wave days. The researchers found that the frequency of extreme heat increased at sites above reservoirs and in free-flowing conditions, but not below reservoirs — possibly because dams release cooler water downstream.

    The heat waves that climbed highest above typical temperature ranges mostly occurred outside the summer months, between December and April, pointing to warmer wintertime conditions, Tassone says.

    Human-caused global warming plays a role in riverine heat waves, with heat waves partially tracking air temperatures — but other factors are probably also driving the trend. For example, less precipitation and lower water volume mean waterways warm more easily, the study says.

    “These very short, extreme changes in water temperature can quickly push organisms past their thermal tolerance,” Tassone says. Compared with a gradual increase in temperature, sudden heat waves can have a greater impact on river-dwelling plants and animals, he says. Fish like salmon and trout are particularly sensitive to heat waves because the animals rely on cold water to get enough oxygen, regulate their body temperature and spawn correctly.

    There are chemical consequences to the heat as well, says hydrologist Sujay Kaushal of the University of Maryland in College Park, who was not involved with the study. Higher temperatures can speed up chemical reactions that contaminate water, in some cases contributing to toxic algal blooms (SN: 2/7/18).

    The research can be used as a springboard to help mitigate heat waves in the future, Kaushal says, such as by increasing shade cover from trees or managing stormwater. In some rivers, beaver dams show promise for reducing water temperatures (SN: 8/9/22). “You can actually do something about this.”

  • in

    Virtual reality experiences to aid substance use disorder recovery

    Indiana University researchers are combining psychological principles with innovative virtual reality technology to create a new immersive therapy for people with substance use disorders. They’ve recently received over $4.9 million from the National Institutes of Health and launched an IU-affiliated startup company to test and further develop the technology.
    Led by Brandon Oberlin, an assistant professor of psychiatry at the IU School of Medicine, IU researchers have built a virtual environment that uses “future-self avatars” to help people recover from substance use disorders. These avatars are life-sized, fully animated and nearly photorealistic. People can converse with their avatars, which speak in the user’s own voice and draw on personal details to portray alternate futures.
    “VR technology is clinically effective and increasingly common for treating a variety of mental health conditions, such as phobias, post-traumatic stress disorder and post-operative pain, but has yet to find wide use in substance use disorders intervention or recovery,” Oberlin said. “Capitalizing on VR’s ability to deliver an immersive experience showing otherwise-impossible scenarios, we created a way for people to interact with different versions of their future selves in the context of substance use and recovery.”
    After four years of development and testing in collaboration with Indianapolis-based treatment centers, Oberlin and his colleagues published their pilot study Sept. 15 in Discover Mental Health. Their findings suggest that virtual reality simulation of imagined realities can aid substance use disorder recovery by lowering the risk of relapse and increasing participants’ future self-connectedness.
    “This experience enables people in recovery to have a personalized virtual experience, in alternate futures resulting from the choices they made,” Oberlin said. “We believe this could be a revolutionary intervention for early substance use disorders recovery, with perhaps even further-reaching mental health applications.”
    The technology is particularly well-suited for people in early recovery — a crucial period, when the risk of relapse is high — because the immersive experiences can help them choose long-term rewards over immediate gratification by deepening connections to their future selves, he said.

  • in

    New data registry collects evidence in cardiogenic shock patients

    Cardiogenic shock — a life-threatening condition in which a person’s heart can’t pump enough blood to meet the body’s needs — is most often caused by a serious heart attack or advanced heart failure. Historically, data related to cardiogenic shock have been limited, inconsistent and challenging to interpret. As a result, varying treatment recommendations exist around best practices.
    To address this need, the American Heart Association, the leading voluntary organization devoted to longer, healthier lives for all, created the Cardiogenic Shock Registry powered by Get With The Guidelines®. The new registry will help researchers, clinicians and regulators to better understand the clinical symptoms of shock types, treatment patterns and outcomes. The registry will provide a foundation for working toward improving the quality and consistency of care in patients in U.S. hospitals with cardiogenic shock symptoms.
    “To understand how to improve care for cardiogenic shock patients, we first need a clearer view of the landscape of existing treatment practices for cardiogenic shock in U.S.-based acute care settings,” said Mitchell Krucoff, M.D., FAHA, volunteer expert for the American Heart Association and professor of medicine at Duke University, Durham, N.C. “No organization is better positioned to advance this critical public health question than the American Heart Association, with already established networks of sites entering data on heart failure, acute cardiac syndromes, cardiac arrest and COVID — all of which involve patients at risk of progressing to cardiogenic shock.”
    The Cardiogenic Shock Registry builds on more than 20 years of quality improvement and registry experience rooted in the Association’s Get With The Guidelines® platform. Data from this no-cost registry will help inform the larger medical community on how best to treat cardiogenic shock.
    The steering committee of the American Heart Association Cardiogenic Shock Registry provides guidance and expertise for establishing the registry and managing the data. The steering committee includes leading academic surgeons and cardiologists, representatives from founding funders, as well as representatives of the U.S. Food & Drug Administration and the U.S. Centers for Medicare & Medicaid Services.
    The American Heart Association’s Cardiogenic Shock Registry is made possible through the generous financial support of founding supporters Abbott and Getinge.
    “The new Cardiogenic Shock Registry will leverage the unparalleled reach of the American Heart Association in a unique collaboration between academic clinicians and researchers, federal agencies and funding supporters’ experts to provide high-quality evidence and promote best practices for the treatment of patients with cardiogenic shock,” said David Morrow, M.D., M.P.H., FAHA, volunteer expert for the American Heart Association and professor of medicine, Harvard Medical School, Boston.
    Story Source:
    Materials provided by American Heart Association. Note: Content may be edited for style and length.

  • in

    Number-crunching mathematical models may give policy makers major headache

    Mathematical models that predict policy-driving scenarios — such as how a new pandemic might spread or the future amount of irrigation water needed worldwide — may be too complex and delivering ‘wrong’ answers, a new study reveals.
    Experts are using increasingly detailed models to better predict phenomena or gain more accurate insights in a range of key areas, such as environmental/climate sciences, hydrology and epidemiology.
    But the pursuit of complex models as tools to produce more accurate projections and predictions may not deliver because more complicated models tend to produce more uncertain estimates.
    Researchers from the Universities of Birmingham, Princeton, Reading, Barcelona and Bergen published their findings today in Science Advances. They reveal that expanding models without checking how extra detail adds uncertainty limits the models’ usefulness as tools to inform policy decisions in the real world.
    Arnald Puy, Associate Professor in Social and Environmental Uncertainties at the University of Birmingham, commented: “As science keeps on unfolding secrets, models keep getting bigger — integrating new discoveries to better reflect the world around us. We assume that more detailed models produce better predictions because they better match reality.
    “And yet pursuing ever-complex models may not deliver the results we seek, because adding new parameters brings new uncertainties into the model. These new uncertainties pile on top of the uncertainties already there at every model upgrade stage, making the model’s output fuzzier at every step of the way.”
    This tendency to produce less accurate results affects any model that lacks training or validation data to check the accuracy of its output — including global models of climate change, hydrology, food production and epidemiology, as well as models projecting estimates into the future, regardless of the scientific field.
    Researchers recommend that the drive to produce increasingly detailed mathematical models as a means to get sharper estimates should be reassessed.
    “We suggest that modelers should calculate the model’s effective dimensions (the number of influential parameters and their highest-order interaction) before making the model more complex. This lets them check how added complexity affects the uncertainty in the output. Such information is especially valuable for models aiming to play a role in policy making,” added Dr. Puy. “Both modelers and policy makers benefit from understanding any uncertainty generated when a model is upgraded with novel mechanisms.
    “Modelers tend not to submit their models to uncertainty and sensitivity analysis but keep on adding detail. Not many scholars are interested in running such an analysis on their model if it risks showing that the emperor has no clothes and its allegedly sharp estimates are just a mirage.”
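The sensitivity analysis Puy describes is typically variance-based: Sobol indices measure how much of a model’s output variance each input is responsible for, and counting the influential inputs gives a sense of how many parameters actually matter. The sketch below, a loose illustration rather than the paper’s method, estimates first-order Sobol indices for a toy three-input model using the standard Saltelli-style Monte Carlo estimator; the model, sample size, and 0.05 cutoff are all illustrative assumptions.

```python
import numpy as np

def first_order_sobol(f, d, n=100_000, seed=0):
    """Estimate first-order Sobol sensitivity indices for a model f
    whose d inputs are uniform on [0, 1), via Monte Carlo sampling."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))          # two independent sample matrices
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))  # total output variance
    S = np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]          # A with column i swapped from B
        S[i] = np.mean(fB * (f(AB) - fA)) / var
    return S

# Toy model: output dominated by x0; x1 matters slightly; x2 not at all.
def model(X):
    return X[:, 0] + 0.1 * X[:, 1] + 0.0 * X[:, 2]

S = first_order_sobol(model, d=3)
influential = int(np.sum(S > 0.05))  # crude count of influential inputs
print(S.round(2), influential)
```

Here the count of influential parameters is 1 even though the model has 3 inputs — the kind of gap between nominal and effective dimensionality the researchers suggest checking before adding further detail.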
    Excess complexity prevents scholars and the public alike from pondering the appropriateness of the models’ assumptions, which are often highly questionable. Puy and his team note, for example, that global hydrological models assume that irrigation optimises crop production and water use — a premise at odds with the practices of traditional irrigators.
    Story Source:
    Materials provided by University of Birmingham. Note: Content may be edited for style and length.

  • in

    Blocking the buzz: MXene composite could eliminate electromagnetic interference by absorbing it

    A recent discovery by materials science researchers in Drexel University’s College of Engineering might one day prevent electronic devices and components from going haywire when they’re too close to one another. A special coating that they developed, using a type of two-dimensional material called MXene, has been shown to be capable of absorbing and dispersing the electromagnetic fields that are the source of the problem.
    Buzzing, feedback or static are the noticeable manifestations of electromagnetic interference, a collision of the electromagnetic fields generated by electronic devices. Aside from the sounds, this phenomenon can also diminish the performance of the devices and lead to overheating and malfunctions if left unchecked.
    While researchers and technologists have progressively reduced this problem with each generation of devices, their strategy thus far has been to encase vital components with a shielding that deflects electromagnetic waves. But according to the Drexel team, this isn’t a sustainable solution.
    “Because the number of electronic devices will continue to grow, deflecting the electromagnetic waves they produce is really just a short-term solution,” said Yury Gogotsi, PhD, Distinguished University and Bach professor in the College of Engineering, who led the research. “To truly solve this problem, we need to develop materials that will absorb and dissipate the interference. We believe we have found just such a material.”
    In the recent edition of Cell Reports Physical Science, Gogotsi’s team reported that combining MXene, a two-dimensional material they discovered more than a decade ago, with a conductive element called vanadium in a polymer solution, produces a coating that can absorb electromagnetic waves.
    While researchers have previously demonstrated that MXenes are highly effective at warding off electromagnetic interference by reflecting it, adding vanadium carbide in a polymer matrix enhances two key characteristics of the material that improve its shielding performance.
    According to the researchers, adding vanadium — a material known for its durability and corrosion resistance that is used in steel alloys for space vehicles and nuclear reactors — to the MXene structure causes layers of the MXene to form a sort of electrochemical grid that is perfect for trapping ions. Using a microwave-transparent polymer also makes the material more permeable to electromagnetic waves.
    Combined, these properties produce a coating that can absorb, entrap and dissipate the energy of electromagnetic waves at greater than 90% efficiency, according to the research.
    “Remarkably, combining polyurethane, a common polymer used in wall paint, with a tiny amount of MXene filler — about one part MXene in 50 parts polyurethane — can absorb more than 90% of incident electromagnetic waves covering the entire band of radar frequencies — known as X-band frequencies,” said Meikang Han, PhD, who participated in the research as a post-doctoral researcher at Drexel. “Radio waves just disappear inside the MXene-polymer composite film — of course, nothing disappears completely, the energy of the waves is transformed to a very small amount of heat which is easily dissipated by the material.”
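Absorption efficiency figures like the one Han cites are conventionally derived from a shield’s scattering parameters: the reflected power fraction is R = |S11|², the transmitted fraction is T = |S21|², and whatever is neither reflected nor transmitted is absorbed, A = 1 − R − T. A minimal sketch of that bookkeeping, using made-up S-parameter values rather than the paper’s measurements:

```python
def absorption(s11, s21):
    """Fraction of incident power absorbed by a shield, from its
    scattering parameters: R = |S11|^2, T = |S21|^2, A = 1 - R - T."""
    R = abs(s11) ** 2  # reflected power fraction
    T = abs(s21) ** 2  # transmitted power fraction
    return 1.0 - R - T

# Illustrative values: a mostly-absorbing film reflects little and
# transmits little of the incident wave.
print(absorption(s11=0.2, s21=0.15))  # about 0.94, i.e. >90% absorbed
```

The distinction matters for the Drexel team’s argument: a conventional shield achieves a low T mainly by driving R up (reflecting interference back into the environment), whereas an absorber keeps both R and T low.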
    A thin coating of the vanadium-based MXene material — less than the width of a human hair — could render a material impermeable to any electromagnetic waves in the X-band spectrum, which includes microwave radiation and is the most common frequency range produced by devices. Gogotsi predicts that this development could be important for high-stakes applications, such as medical and military settings, where maintaining technological performance is crucial.
    “Our results show that vanadium-based MXenes could play a key role in the expansion of Internet of Things technology and 5G and 6G communications,” Gogotsi said. “This study provides a new direction for the development of thin, highly absorbent, MXene-based electromagnetic interference protection materials.”
    Story Source:
    Materials provided by Drexel University. Note: Content may be edited for style and length.

  • in

    Artificial intelligence answers the call for quail information

    When states want to gauge quail populations, the process can be grueling, time-consuming and expensive.
    It means spending hours in the field listening for calls. Or leaving a recording device in the field to catch what sounds are made — only to spend hours later listening to that audio. Then, repeating this process until there’s enough information to start making population estimates.
    But a new model developed by researchers at the University of Georgia aims to streamline this process. By using artificial intelligence to analyze terabytes of recordings for quail calls, the process gives wildlife managers the ability to gather the data they need in a matter of minutes.
    “The model is very accurate, picking up between 80% and 100% of all calls even in the noisiest recordings. So, you could take a recording, put it through our model and it will tell you how many quail calls that the recorder heard,” said James Martin, an associate professor at the UGA Warnell School of Forestry and Natural Resources who has been working on the project, in collaboration with the Georgia Department of Natural Resources, for about five years. “This new model allows you to analyze terabytes of data in seconds, and what that will allow us to do is scale up monitoring, so you can literally put hundreds of these devices out and cover a lot more area and do so with a lot less effort than in the past.”
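The “80% to 100% of all calls” figure Martin cites is a detection rate, usually scored as recall: the fraction of human-annotated calls that the model finds within some time tolerance. A minimal sketch of that scoring, with hypothetical timestamps and a tolerance of my own choosing rather than anything from the UGA model:

```python
def detection_recall(true_times, detected_times, tol=0.5):
    """Fraction of annotated call times matched by a detection
    within `tol` seconds; each detection can match at most one call."""
    matched = 0
    used = set()
    for t in true_times:
        for j, d in enumerate(detected_times):
            if j not in used and abs(d - t) <= tol:
                used.add(j)
                matched += 1
                break
    return matched / len(true_times) if true_times else 1.0

# Hypothetical timestamps (seconds): 4 annotated calls, 3 found close
# enough; one spurious detection at 20.0 s matches nothing.
truth = [1.0, 5.2, 9.8, 14.1]
dets = [1.1, 5.0, 14.3, 20.0]
print(detection_recall(truth, dets))  # -> 0.75
```

Scoring model output against hand-annotated recordings this way is also what produces the training feedback Martin describes, letting the model improve as it encounters sounds from new regions.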
    The software represents about five years of work by Martin, postdoctoral researcher Victoria Nolan and numerous key contributors who have worked with a code writer to create the model. It’s also part of a larger shift taking place in the field of wildlife research, where computer algorithms are now assisting with work that once took humans thousands of hours to complete.
    Increasingly, computers are getting smarter at, for example, identifying specific noises or certain traits in photos and sound recordings. For researchers such as Martin, it means hours once spent on tasks such as listening to audio or looking at game camera images can now be done by a computer, freeing up valuable time to focus on other aspects of a project.
    The new tool can also be a valuable resource for state and federal agencies looking for information on their quail populations, but with limited funds to spend on any one project. “So, I think this is something states might jump on as far as replacing their current monitoring with acoustic recording devices,” added Martin.
    The software’s success was recently documented in the journal Remote Sensing in Ecology and Conservation.
    As the software gets more use and is exposed to sounds from new geographic areas, Martin said, it gets even “smarter.” As it is, quail offer several different kinds of calls. But when the software is exposed to a variety of sounds that aren’t quail, he said, it’s better able to distinguish the correct calls from the ambient noises of the grasses and trees around them.
    Over time, the software will grow more discerning.
    “So that’s why you have to keep giving it training data, and when you move geographies, you encounter new sounds that you didn’t train the model for,” he added. “It’s always about adaptation.”
    Story Source:
    Materials provided by University of Georgia. Original written by Kristen Morales. Note: Content may be edited for style and length.

  • in

    AI takes guesswork out of lateral flow testing

    An artificial intelligence app that reads COVID-19 lateral flow tests helped to reduce false results in a new trial published today.
    In a study published in Cell Reports Medicine, a team of researchers from the University of Birmingham, Durham University and Oxford University tested whether a machine learning algorithm could improve the accuracy of results from antigen lateral flow devices for COVID-19.
    The LFD AI Consortium team worked at UK Health Security Agency assisted test centres and with health care workers conducting self-testing to trial the AI app. More than 100,000 images were submitted as part of the study, and the team found that the algorithm increased the sensitivity of results — the ability to distinguish a true positive from a false negative — from 92% to 97.6%.
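Sensitivity here is the standard true-positive rate: of all truly positive tests, the share correctly read as positive, TP / (TP + FN). A small sketch with hypothetical counts chosen to reproduce the reported figures (the study reports only the percentages, not these raw numbers):

```python
def sensitivity(tp, fn):
    """True-positive rate: share of real positives correctly flagged."""
    return tp / (tp + fn)

# Hypothetical: out of 1,000 truly positive tests, an unassisted read
# catches 920 while the AI-assisted read catches 976.
print(sensitivity(920, 80))  # -> 0.92
print(sensitivity(976, 24))  # -> 0.976
```

On these illustrative numbers, the jump from 92% to 97.6% corresponds to 56 fewer missed infections per 1,000 true positives — the “false negatives” reduction the researchers highlight.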
    Professor Andrew Beggs, Professor of Cancer Genetics & Surgery at the University of Birmingham and lead author of the study, said:
    “The widespread use of antigen lateral flow devices was a significant moment not just during the pandemic, but has also introduced diagnostic testing to many more people in society. One of the drawbacks with LFD testing for Covid, pregnancy and any other future use is the ‘faint line’ question — where we can’t quite tell if it’s a positive or not.
    “The study looked at the feasibility of using machine learning to take the guesswork out of the faint-line tests, and we’re pleased to see that the app increased the sensitivity of the tests, reducing the number of false negatives. This type of technology could be used in lots of applications, both to reduce uncertainty about test results and to provide crucial support for visually impaired people.”
    Professor Camila Caiado, Professor of Statistics at Durham University and chief statistician on the project, said:
    “The increase in sensitivity and overall accuracy is significant and it shows the potential of this app by reducing the number of false negatives and future infections. Crucially, the method can also be easily adapted to the evaluation of other digital readers for lateral flow type devices.”
    Story Source:
    Materials provided by University of Birmingham. Note: Content may be edited for style and length.

  • in

    Introducing FathomNet: New open-source image database unlocks the power of AI for ocean exploration

    A new collaborative effort between MBARI and other research institutions is leveraging the power of artificial intelligence and machine learning to accelerate efforts to study the ocean.
    In order to manage impacts from climate change and other threats, researchers urgently need to learn more about the ocean’s inhabitants, ecosystems, and processes. As scientists and engineers develop advanced robotics that can visualize marine life and environments to monitor changes in the ocean’s health, they face a fundamental problem: The collection of images, video, and other visual data vastly exceeds researchers’ capacity for analysis.
    FathomNet is an open-source image database that uses state-of-the-art data processing algorithms to help process the backlog of visual data. Using artificial intelligence and machine learning will alleviate the bottleneck for analyzing underwater imagery and accelerate important research around ocean health.
    “A big ocean needs big data. Researchers are collecting large quantities of visual data to observe life in the ocean. How can we possibly process all this information without automation? Machine learning provides a pathway forward; however, these approaches rely on massive datasets for training. FathomNet has been built to fill this gap,” said MBARI Principal Engineer Kakani Katija.
    Project co-founders Katija, Katy Croff Bell (Ocean Discovery League), and Ben Woodward (CVision AI), along with members of the extended FathomNet team, detailed the development of this new image database in a recent research publication in Scientific Reports.
    Recent advances in machine learning enable fast, sophisticated analysis of visual data, but the use of artificial intelligence in ocean research has been limited by the lack of a standard set of existing images that could be used to train the machines to recognize and catalog underwater objects and life. FathomNet addresses this need by aggregating images from multiple sources to create a publicly available, expertly curated underwater image training database.