More stories

  • AI models identify biodiversity from animal sounds in tropical rainforests

    Tropical forests are among the most important habitats on our planet. They are characterised by extremely high species diversity and play a pivotal role in the global carbon cycle and the world's climate. However, many tropical forest areas have already been deforested, and overexploitation continues day by day.
    Reforested areas in the tropics are therefore becoming increasingly important for the climate and for biodiversity. How well biodiversity recovers in such areas can be monitored effectively through automated analysis of animal sounds, researchers report in the journal Nature Communications.
    Recordings on Former Cocoa Plantations and Pastures
    As part of the DFG research group Reassembly, the team worked in northern Ecuador on abandoned pastures and former cacao plantations where forest is gradually reestablishing itself. There, they investigated whether autonomous sound recorders and artificial intelligence (AI) can be used to automatically recognise how the species communities of birds, amphibians and mammals are composed.
    “The results show that the sound data excellently reflect the return of biodiversity to abandoned agricultural areas,” says Professor Jörg Müller. The head of the Ecological Station Fabrikschleichach at Julius-Maximilians-Universität (JMU) Würzburg led the study together with his colleague Oliver Mitesser.
    Overall, it is the communities of vocalizing species in particular that mirror the recolonisation well, because these communities closely follow the recovery gradients. A preliminary set of 70 AI bird models was able to describe the entire species communities of birds, amphibians and some calling mammals, and even changes in nocturnal insects could be meaningfully correlated with them.
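    To illustrate how per-recording detections from acoustic AI models can be turned into a community-level recovery signal (the study's own analysis pipeline is not reproduced here), the sketch below aggregates classifier detections into a site-by-species matrix and checks whether its main ordination axis tracks time since abandonment. File names, column names, and the confidence threshold are hypothetical.

    ```python
    # Illustrative sketch only: from per-clip species detections to a community
    # matrix related to forest recovery. Paths, columns, and the 0.5 confidence
    # threshold are hypothetical placeholders, not the study's actual settings.
    import pandas as pd
    from scipy.stats import spearmanr
    from sklearn.decomposition import PCA

    det = pd.read_csv("detections.csv")    # columns: site, species, confidence
    sites = pd.read_csv("sites.csv")       # columns: site, years_since_abandonment

    # Keep confident detections and build a site x species presence/absence matrix.
    det = det[det["confidence"] >= 0.5]
    community = (det.assign(present=1)
                    .pivot_table(index="site", columns="species",
                                 values="present", aggfunc="max", fill_value=0))

    # Summarise composition with a single ordination axis (PCA as a simple stand-in).
    axis1 = PCA(n_components=1).fit_transform(community.values)[:, 0]

    # Does community composition follow the recovery gradient?
    recovery = sites.set_index("site").loc[community.index, "years_since_abandonment"]
    rho, p = spearmanr(axis1, recovery)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
    ```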
    AI Models are Being Further Refined
    The team is currently working on further improving the AI models used and expanding the set of models. The goal is to be able to automatically record even more species. The models are also to be deployed in other protected areas in Ecuador, in the Sailershausen JMU Forest, and in Germany's oldest national park, the Bavarian Forest National Park.
    “Our AI models can form the basis of a universal tool for monitoring biodiversity in reforested areas,” says Jörg Müller. The Würzburg professor sees possible applications, for example, in certification schemes and biodiversity credits. Biodiversity credits function similarly to carbon dioxide emissions trading: they are issued by projects that protect or improve biodiversity and purchased by companies or organisations that want to compensate for the negative impacts of their activities.

  • Virtual reality helps people with hoarding disorder practice decluttering

    Many people who dream of an organized, uncluttered home à la Marie Kondo find it hard to decide what to keep and what to let go. But for those with hoarding disorder — a mental condition estimated to affect 2.5% of the U.S. population — the reluctance to let go can reach dangerous and debilitating levels.
    Now, a pilot study by Stanford Medicine researchers suggests that a virtual reality therapy that allows those with hoarding disorder to rehearse relinquishing possessions in a simulation of their own home could help them declutter in real life. The simulations can help patients practice organizational and decision-making skills learned in cognitive behavioral therapy — currently the standard treatment — and desensitize them to the distress they feel when discarding.
    The study was published in the October issue of the Journal of Psychiatric Research.
    A hidden problem
    Hoarding disorder is an under-recognized and under-treated condition that has been included in the Diagnostic and Statistical Manual of Mental Disorders — referred to as the DSM-5 — as a formal diagnosis only since 2013. People with the disorder, who tend to be older, have persistent difficulty parting with possessions, resulting in an accumulation of clutter that impairs their relationships, their work and even their safety.
    “Unfortunately, stigma and shame prevent people from seeking help for hoarding disorder,” said Carolyn Rodriguez, MD, PhD, professor of psychiatry and behavioral sciences and senior author of the study. “They may also be unwilling to have anyone else enter the home to help.”
    Sometimes the condition is discovered through cluttered backgrounds on Zoom calls or, tragically, when firefighters respond to a fire, Rodriguez said. Precarious piles of stuff not only prevent people from sleeping in their beds and cooking in their kitchens, but they can also attract pests; block fire exits; and collapse on occupants, first responders and clinicians offering treatment.

  • New polymer membranes, AI predictions could dramatically reduce energy, water use in oil refining

    A new kind of polymer membrane created by researchers at Georgia Tech could reshape how refineries process crude oil, dramatically reducing the energy and water required while extracting even more useful materials.
    The so-called DUCKY polymers — more on the unusual name in a minute — are reported Oct. 16 in Nature Materials. And they’re just the beginning for the team of Georgia Tech chemists, chemical engineers, and materials scientists. They also have created artificial intelligence tools to predict the performance of these kinds of polymer membranes, which could accelerate development of new ones.
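    The article does not describe the team's AI tools in detail, so the following is only a generic sketch of the idea of predicting a membrane transport property from monomer structure, using molecular fingerprints and a random-forest regressor. The dataset, its column names, and the target property are hypothetical.

    ```python
    # Minimal sketch (not the Georgia Tech team's actual model): predict a membrane
    # transport property from monomer structure using fingerprints + random forest.
    # The CSV file, its columns, and the target property are hypothetical.
    import numpy as np
    import pandas as pd
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    data = pd.read_csv("membrane_polymers.csv")   # columns: monomer_smiles, permeability

    def fingerprint(smiles, n_bits=2048):
        """Morgan fingerprint (radius 2) as a numpy array."""
        mol = Chem.MolFromSmiles(smiles)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
        arr = np.zeros(n_bits, dtype=np.int8)
        DataStructs.ConvertToNumpyArray(fp, arr)
        return arr

    X = np.stack([fingerprint(s) for s in data["monomer_smiles"]])
    y = np.log10(data["permeability"])            # log scale is common for permeability

    model = RandomForestRegressor(n_estimators=500, random_state=0)
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print("cross-validated R^2:", scores.mean())
    ```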
    The implications are stark: the initial separation of crude oil components is responsible for roughly 1% of energy used across the globe. What’s more, the membrane separation technology the researchers are developing could have several uses, from biofuels and biodegradable plastics to pulp and paper products.
    “We’re establishing concepts here that we can then use with different molecules or polymers, but we apply them to crude oil because that’s the most challenging target right now,” said M.G. Finn, professor and James A. Carlos Family Chair in the School of Chemistry and Biochemistry.
    Crude oil in its raw state includes thousands of compounds that have to be processed and refined to produce useful materials — gas and other fuels, as well as plastics, textiles, food additives, medical products, and more. Squeezing out the valuable stuff involves dozens of steps, but it starts with distillation, a water- and energy-intensive process.
    Researchers have been trying to develop membranes to do that work instead, filtering out the desirable molecules and skipping all the boiling and cooling.
    “Crude oil is an enormously important feedstock for almost all aspects of life, and most people don’t think about how it’s processed,” said Ryan Lively, Thomas C. DeLoach Jr. Professor in the School of Chemical and Biomolecular Engineering. “These distillation systems are massive water consumers, and the membranes simply are not. They’re not using heat or combustion. They just use electricity. You could ostensibly run it off of a wind turbine, if you wanted. It’s just a fundamentally different way of doing a separation.”
    What makes the team’s new membrane formula so powerful is a new family of polymers. The researchers used building blocks called spirocyclic monomers that assemble into chains with lots of 90-degree turns, forming a kinky material that doesn’t compress easily and forms pores that selectively bind and permit desirable molecules to pass through. The polymers are not rigid, which means they’re easier to make in large quantities. They also have a well-controlled flexibility or mobility that allows pores of the right filtering structure to come and go over time.

  • Deep neural networks don’t see the world the way we do

    Human sensory systems are very good at recognizing objects that we see or words that we hear, even if the object is upside down or the word is spoken by a voice we’ve never heard.
    Computational models known as deep neural networks can be trained to do the same thing, correctly identifying an image of a dog regardless of what color its fur is, or a word regardless of the pitch of the speaker’s voice. However, a new study from MIT neuroscientists has found that these models often also respond the same way to images or words that have no resemblance to the target.
    When these neural networks were used to generate an image or a word that they responded to in the same way as a specific natural input, such as a picture of a bear, most of them generated images or sounds that were unrecognizable to human observers. This suggests that these models build up their own idiosyncratic “invariances” — meaning that they respond the same way to stimuli with very different features.
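    The synthesis procedure described here (generating a stimulus that the model treats as equivalent to a natural one) can be sketched as an optimization that matches a chosen layer's activations. A minimal PyTorch version, with an arbitrary pretrained classifier, layer choice, and hyperparameters standing in for the paper's actual setup, might look like this.

    ```python
    # Sketch of "model metamer" synthesis: optimize a noise image until a chosen
    # layer of a pretrained network responds to it the same way it responds to a
    # reference image. The network, layer, step count, and learning rate are
    # illustrative choices, not those used in the study.
    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    for p in model.parameters():
        p.requires_grad_(False)

    activations = {}
    def hook(_module, _inp, out):
        activations["feat"] = out

    model.layer3.register_forward_hook(hook)      # "layer3" is an arbitrary choice

    reference = torch.rand(1, 3, 224, 224)        # stand-in for a natural image
    with torch.no_grad():
        model(reference)
        target = activations["feat"].clone()

    metamer = torch.rand(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([metamer], lr=0.01)

    for step in range(1000):
        opt.zero_grad()
        model(metamer)
        loss = F.mse_loss(activations["feat"], target)
        loss.backward()
        opt.step()
        with torch.no_grad():
            metamer.clamp_(0, 1)                  # keep pixel values in a valid range

    # "metamer" now evokes roughly the same layer-3 response as "reference",
    # yet typically looks nothing like it to a human observer.
    ```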
    The findings offer a new way for researchers to evaluate how well these models mimic the organization of human sensory perception, says Josh McDermott, an associate professor of brain and cognitive sciences at MIT and a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines.
    “This paper shows that you can use these models to derive unnatural signals that end up being very diagnostic of the representations in the model,” says McDermott, who is the senior author of the study. “This test should become part of a battery of tests that we as a field are using to evaluate models.”
    Jenelle Feather PhD ’22, who is now a research fellow at the Flatiron Institute Center for Computational Neuroscience, is the lead author of the open-access paper, which appears today in Nature Neuroscience. Guillaume Leclerc, an MIT graduate student, and Aleksander Mądry, the Cadence Design Systems Professor of Computing at MIT, are also authors of the paper.
    Different perceptions
    In recent years, researchers have trained deep neural networks that can analyze millions of inputs (sounds or images) and learn common features that allow them to classify a target word or object roughly as accurately as humans do. These models are currently regarded as the leading models of biological sensory systems.

  • Virtual driving assessment predicts risk of crashing for newly licensed teen drivers

    New research published today in the journal Pediatrics found that driving skills measured at the time of licensure on a virtual driving assessment (VDA), which exposes drivers to common serious crash scenarios, help predict crash risk in newly licensed young drivers.
    This study, conducted by the Center for Injury Research and Prevention (CIRP) at Children’s Hospital of Philadelphia (CHOP) with colleagues at the University of Pennsylvania and the University of Michigan, brings the research community one step closer to identifying which skill deficits put young new drivers at higher risk for crashes. With this cutting-edge information, more personalized interventions can be developed to improve the driving skills that prevent crashes.
    While drivers between the ages of 15 and 20 make up only about 5% of all drivers on the road, they are involved in approximately 12% of all vehicle crashes and 8.5% of fatal crashes. The time of greatest crash risk is in the months right after these young drivers receive their license, largely due to deficits in driving skills.
    However, many of these newly licensed drivers do avoid crashes. The challenge for policymakers, clinicians, and families has been identifying which drivers are at increased risk of crashing during the learning phase before they drive on their own. Early identification of at-risk drivers offers the opportunity to intervene with training and other resources known to help prevent crashes, making the roads safer for everyone.
    Over the past two decades, CIRP researchers have systematically determined the primary reason for novice driver crashes — inadequate driving skills, such as speed management — and conducted studies that informed the development and validation of a self-guided VDA that measures performance of these driving skills in common serious crash scenarios that cannot be evaluated with on-road testing. The VDA utilizes the Ready-Assess™ platform, an AI-driven virtual driving assessment developed by Diagnostic Driving, Inc. that provides drivers with the insights and tools to improve.
    In this study, researchers examined the ability of the VDA, delivered at the time of the licensing road test, to predict crash risk in the first year after obtaining licensure in the state of Ohio. Using a unique study design, the results of the VDA were linked to police-reported crash records for the first year after obtaining a license.
    “Our previous research showed that performance on the VDA predicted actual on-road driving performance, as measured by failure on the licensing road test. This new study went further to determine whether VDA performance could identify unsafe driving performance predictive of future crash risk,” said lead study author Elizabeth Walshe, PhD, a cognitive neuroscientist and clinical researcher who directs the Neuroscience of Driving team at CIRP. “We found that drivers categorized by their performance as having major issues with dangerous behavior were at higher risk of crashing than average new drivers.”
    The researchers analyzed a unique integrated dataset of individual VDA performance results, collected at the Ohio Bureau of Motor Vehicles before the licensing road test and linked to licensing and police-reported crash records for 16,914 first-time newly licensed drivers under the age of 25. Data were collected from applicants who completed the VDA between July 2017 and December 2019 on the day they passed the on-road licensing examination in Ohio. Researchers examined crash records up to mid-March 2020.
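    As a rough illustration of how such a linked dataset can be used to estimate relative crash risk by assessment category (this is not the paper's statistical model), one could fit a simple logistic regression; the file, column names, and category labels below are hypothetical.

    ```python
    # Illustrative only: relating VDA performance class to a police-reported crash
    # outcome in the first year of licensure. Column names, category labels, and
    # the modelling choice (plain logistic regression) are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("vda_linked.csv")
    # expected columns: crash_first_year (0/1), vda_class (e.g. "No issues",
    # "Minor issues", "Major issues with dangerous behavior")

    model = smf.logit(
        "crash_first_year ~ C(vda_class, Treatment(reference='No issues'))",
        data=df,
    ).fit()

    # Odds ratios for each VDA class relative to the "No issues" group.
    print(np.exp(model.params))
    ```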

  • Photonic crystals bend light as though it were under the influence of gravity

    A collaborative group of researchers has manipulated the behavior of light as if it were under the influence of gravity. The findings, which were published in the journal Physical Review A on September 28, 2023, have far-reaching implications for the world of optics and materials science, and bear significance for the development of 6G communications.
    Albert Einstein’s theory of relativity has long established that the trajectory of electromagnetic waves — including light and terahertz electromagnetic waves — can be deflected by gravitational fields.
    Scientists have recently theoretically predicted that replicating the effects of gravity — i.e., pseudogravity — is possible by deforming crystals in the lower normalized energy (or frequency) region.
    “We set out to explore whether lattice distortion in photonic crystals can produce pseudogravity effects,” said Professor Kyoko Kitamura from Tohoku University’s Graduate School of Engineering.
    Photonic crystals possess unique properties that enable scientists to manipulate and control the behavior of light, serving as ‘traffic controllers’ for light within crystals. They are constructed by periodically arranging two or more different materials with varying abilities to interact with and slow down light in a regular, repeating pattern. Furthermore, pseudogravity effects due to adiabatic changes have been observed in photonic crystals.
    Kitamura and her colleagues modified photonic crystals by introducing lattice distortion: gradual deformation of the regular spacing of elements, which disrupted the grid-like pattern of the photonic crystals. This manipulated the photonic band structure of the crystals, resulting in a curved beam trajectory in-medium — just like a light ray passing by a massive celestial body such as a black hole.
    Specifically, they employed a distorted silicon photonic crystal with a lattice constant of 200 micrometers and terahertz waves. Experiments successfully demonstrated the deflection of these waves.
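    To make the idea of lattice distortion (a gradual deformation of the regular spacing of elements) concrete, the toy sketch below generates lattice-site coordinates whose spacing stretches linearly along one axis. The 200-micrometer base lattice constant comes from the article; the distortion profile and gradient are invented for illustration and are not the actual device geometry.

    ```python
    # Toy illustration of a gradually distorted 2D lattice (not the paper's device
    # geometry). Base lattice constant a0 = 200 um as stated in the article; the
    # linear distortion gradient "eps" is an arbitrary illustrative value.
    import numpy as np

    a0 = 200e-6        # base lattice constant in metres (200 micrometres)
    eps = 0.002        # fractional increase in spacing per row (hypothetical)
    n_rows, n_cols = 50, 50

    points = []
    y = 0.0
    for row in range(n_rows):
        a_local = a0 * (1 + eps * row)    # spacing grows gradually with row index
        for col in range(n_cols):
            points.append((col * a_local, y))
        y += a_local                      # row-to-row spacing stretched the same way

    points = np.array(points)
    print(points.shape)                   # (2500, 2) lattice-site coordinates
    ```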
    “Much like gravity bends the trajectory of objects, we came up with a means to bend light within certain materials,” added Kitamura. “Such in-plane beam steering within the terahertz range could be harnessed in 6G communication. Academically, the findings show that photonic crystals could harness gravitational effects, opening new pathways within the field of graviton physics,” said Associate Professor Masayuki Fujita from Osaka University.

  • Researchers measure global consensus over the ethical use of AI

    To examine the global state of AI ethics, a team of researchers from Brazil performed a systematic review and meta-analysis of global guidelines for AI use. Publishing October 13 in the journal Patterns, the researchers found that, while most of the guidelines valued privacy, transparency, and accountability, very few valued truthfulness, intellectual property, or children’s rights. Additionally, most of the guidelines described ethical principles and values without proposing practical methods for implementing them and without pushing for legally binding regulation.
    “Establishing clear ethical guidelines and governance structures for the deployment of AI around the world is the first step to promoting trust and confidence, mitigating its risks, and ensuring that its benefits are fairly distributed,” says social scientist and co-author James William Santos of the Pontifical Catholic University of Rio Grande do Sul.
    “Previous work predominantly centered around North American and European documents, which prompted us to actively seek and include perspectives from regions such as Asia, Latin America, Africa, and beyond,” says lead author Nicholas Kluge Corrêa of the Pontifical Catholic University of Rio Grande do Sul and the University of Bonn.
    To determine whether a global consensus exists regarding the ethical development and use of AI, and to help guide such a consensus, the researchers conducted a systematic review of policy and ethical guidelines published between 2014 and 2022. From this, they identified 200 documents related to AI ethics and governance from 37 countries and six continents and written or translated into five different languages (English, Portuguese, French, German, and Spanish). These documents included recommendations, practical guides, policy frameworks, legal landmarks, and codes of conduct.
    Then, the team conducted a meta-analysis of these documents to identify the most common ethical principles, examine their global distribution, and assess biases in terms of the type of organizations or people producing these documents.
    The researchers found that the most common principles were transparency, security, justice, privacy, and accountability, which appeared in 82.5%, 78%, 75.5%, 68.5%, and 67% of the documents, respectively. The least common principles were labor rights, truthfulness, intellectual property, and children/adolescent rights, which appeared in 19.5%, 8.5%, 7%, and 6% of the documents, respectively; the authors emphasize that these principles deserve more attention. For example, truthfulness — the idea that AI should provide truthful information — is becoming increasingly relevant with the release of generative AI technologies like ChatGPT. And since AI has the potential to displace workers and change the way we work, practical measures are needed to avoid mass unemployment or the formation of monopolies.
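    As a small illustration of the kind of tallying behind these percentages (not the authors' actual coding and analysis pipeline), each guideline document can be represented as the set of principles it mentions, and the share of documents citing each principle computed; the example data below are hypothetical.

    ```python
    # Illustrative tally of ethical principles across guideline documents.
    # The document-to-principles mapping here is a tiny hypothetical example;
    # the study coded 200 real documents.
    from collections import Counter

    documents = {
        "doc_001": {"transparency", "privacy", "accountability"},
        "doc_002": {"transparency", "security", "justice"},
        "doc_003": {"privacy", "security", "truthfulness"},
        # ... one entry per guideline document
    }

    counts = Counter(p for principles in documents.values() for p in principles)
    n_docs = len(documents)

    for principle, count in counts.most_common():
        print(f"{principle:15s} {100 * count / n_docs:5.1f}% of documents")
    ```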
    Most (96%) of the guidelines were “normative” — describing ethical values that should be considered during AI development and use — while only 2% recommended practical methods of implementing AI ethics, and only 4.5% proposed legally binding forms of AI regulation.

  • Physicists demonstrate powerful physics phenomenon

    In a new breakthrough, researchers have used a novel technique to confirm a previously undetected physics phenomenon that could be used to improve data storage in the next generation of computer devices.
    Spintronic memories, like those used in some high-tech computers and satellites, use magnetic states generated by an electron’s intrinsic angular momentum to store and read information. Depending on its physical motion, an electron’s spin can produce a magnetic current, a phenomenon known as the “spin Hall effect,” which has key applications for magnetic materials across many different fields, ranging from low-power electronics to fundamental quantum mechanics.
    More recently, scientists have found that electrons are also capable of generating electricity through a second kind of movement: orbital angular momentum, similar to how Earth revolves around the sun. This is known as the “orbital Hall effect,” said Roland Kawakami, co-author of the study and a professor in physics at The Ohio State University.
    Theorists predicted that by using light transition metals — materials that have weak spin Hall currents — magnetic currents generated by the orbital Hall effect would be easier to spot flowing alongside them. Until now, directly detecting such a thing has been a challenge, but the study, led by Igor Lyalin, a graduate student in physics, and published today in the journal Physical Review Letters, showed a method to observe the effect.
    “Over the decades, there’s been a continuous discovery of various Hall effects,” said Kawakami. “But the idea of these orbital currents is really a brand new one. The difficulty is that they are mixed with spin currents in typical heavy metals and it’s difficult to tell them apart.”
    Instead, Kawakami’s team demonstrated the orbital Hall effect by reflecting polarized light, in this case, a laser, onto various thin films of the light metal chromium to probe the metal’s atoms for a potential build-up of orbital angular momentum. After nearly a year of painstaking measurements, researchers were able to detect a clear magneto-optical signal which showed that electrons gathered at one end of the film exhibited strong orbital Hall effect characteristics.
    This successful detection could have huge consequences for future spintronics applications, said Kawakami.