More stories

  • Decoding how salamanders walk

    Researchers at Tohoku University and the Swiss Federal Institute of Technology in Lausanne, with the support of the Human Frontier Science Program, have decoded the flexible motor control mechanisms underlying salamander walking.
    Their findings were published in the journal Frontiers in Neurorobotics on July 30, 2021.
    Four-legged animals can navigate complex, unpredictable, and unstructured environments, an ability that rests on their body-limb coordination.
    The salamander is an excellent specimen for studying body-limb coordination mechanisms. It is an amphibian that walks on four legs while swaying its body from left to right in a motion known as undulation.
    Its nervous system is simpler than those of mammals, and it changes its walking pattern according to the speed at which it is moving.
    To decode the salamander’s movement, researchers led by Professor Akio Ishiguro of the Research Institute of Electrical Communication at Tohoku University modeled the salamander’s nervous system mathematically and physically simulated the model.
    In making the model, the researchers hypothesized that the legs and the body are controlled so as to support each other’s motions by sharing sensory information. They then reproduced the speed-dependent gait transitions of salamanders through computer simulations.
    “We hope this finding provides insights into the essential mechanism behind the adaptive and versatile locomotion of animals,” said Ishiguro.
    The researchers are confident their discovery will aid the development of robots that can move with high agility and adaptability by flexibly changing body-limb coordination patterns.
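    The paper’s actual model is not reproduced in this summary, but the core idea of speed-dependent gait transitions can be illustrated with a toy pair of coupled phase oscillators. All parameters below are illustrative assumptions, not the study’s: at low drive the oscillators lock in anti-phase, and past a critical speed the lock breaks and the phase relation drifts, i.e. the coordination pattern changes.

```python
import numpy as np

def phase_difference(drive, steps=20000, dt=0.001, k=2.0):
    """Toy central-pattern-generator sketch (not the paper's model).

    Two limb oscillators run at slightly detuned frequencies that grow
    with `drive` (locomotion speed) and are pulled toward anti-phase
    with coupling strength k.  The reduced dynamics of their phase
    difference D is  dD/dt = (w1 - w0) + 2*k*sin(D).  While the
    detuning stays below 2*k the oscillators phase-lock near D = pi
    (a walk-like anti-phase gait); above that, the lock breaks and D
    drifts, so the gait pattern changes with speed.
    """
    w0, w1 = drive, 1.1 * drive       # detuning grows with speed
    d = 0.1                           # initial phase difference
    traj = np.empty(steps)
    for i in range(steps):
        d += dt * ((w1 - w0) + 2.0 * k * np.sin(d))  # Euler step
        traj[i] = d
    return traj
```

    With k = 2, the lock survives while the detuning 0.1 × drive stays below 2k = 4, so a slow gait (drive = 2) settles near anti-phase while a fast one (drive = 60) never locks.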
    Story Source:
    Materials provided by Tohoku University. Note: Content may be edited for style and length.

  • Internet CBT for depression reviewed and analyzed

    Internet-based cognitive behavioral therapy (CBT) for depression is often just as effective as traditional CBT. This is clear from an international study involving scientists at the University of Gothenburg. However, some online treatments have components that can be harmful.
    Internet CBT (iCBT) as a method of delivering treatment is on the increase. Nevertheless, it has been unclear to date which parts of the treatment are most helpful against depression, which are less efficacious and which are potentially detrimental to patients.
    In an international study, researchers at the University of Gothenburg participated in a systematic literature review and meta-analysis. The study was based on 76 randomized controlled trials (RCTs) in Sweden and elsewhere. In total, the RCTs included 17,521 patients, 71% of whom were women.
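    The pooling step in such a meta-analysis is typically an inverse-variance random-effects model. As a sketch of that machinery only (the trial numbers below are invented, and the paper’s actual statistical model may differ), the classic DerSimonian-Laird estimator looks like this:

```python
import numpy as np

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-trial effects.

    `effects` are e.g. standardized mean differences and `variances`
    their within-trial variances.  Returns the pooled effect and its
    standard error.
    """
    effects = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)     # Cochran's Q heterogeneity
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)              # between-trial variance
    w_star = 1.0 / (v + tau2)                  # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se
```

    The between-trial variance term is what distinguishes a random-effects pooled estimate from a simple fixed-effect average: heterogeneous trials get down-weighted less aggressively.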
    The study, under the aegis of Kyoto University in Japan, is now published in The Lancet Psychiatry. One coauthor is Cecilia Björkelund, Senior Professor of Family Medicine at the University of Gothenburg’s Sahlgrenska Academy.
    “In mild or moderate depression, the effect of iCBT is as good as that of conventional CBT. For many, it’s a superb way of getting access to therapy without having to go to a therapist. We also saw that it was especially good for the elderly — a finding we didn’t entirely expect,” she says.
    Just as in traditional CBT, its online counterpart involves modifying patients’ thoughts, feelings and behaviors that are obstacles in their lives and impair their mood. During the treatment, which often lasts about ten weeks, they are given tasks and exercises to perform on their own.
    The factor that proved most significant for the prognosis was the depth of depression at the start of treatment: the milder the depression, the better the results. Therapist support and text-message reminders increased the proportion of patients who completed the therapy.
    “If you’re going to use iCBT in health care, the programs have to be regulated just as well as drugs are, but that’s not the case today. With this study, we’re taking a real step forward. First, the study surveys what’s most effective. Second, it provides knowledge of how to design a program and adapt its composition to patients’ problems,” Björkelund says.
    However, iCBT requires continuous therapeutic contact. One reason is that the therapist must be able to see an improvement within three to four weeks and ensure the trend is not heading in the wrong direction. Björkelund stresses that depression is a potentially dangerous condition; in severe depression, internet-mediated therapy is inappropriate.
    The study shows the danger of using iCBT with programs that include relaxation therapy. Rather than being beneficial, this may have negative effects, exacerbating depressive symptoms and causing “relaxation-induced anxiety.”
    “For a depressed person, it isn’t advisable. Relaxation programs shouldn’t be used as part of depression treatment in health care,” Björkelund says.
    Story Source:
    Materials provided by University of Gothenburg. Note: Content may be edited for style and length.

  • Greece’s Santorini volcano erupts more often when sea level drops

    When sea level drops far below the present-day level, the island volcano Santorini in Greece gets ready to rumble.

    A comparison of the activity of the volcano, which is now partially collapsed, with sea levels over the last 360,000 years reveals that when the sea level dips more than 40 meters below the present-day level, it triggers a fit of eruptions. During times of higher sea level, the volcano is quiet, researchers report online August 2 in Nature Geoscience.

    Other volcanoes around the globe are probably similarly influenced by sea levels, the researchers say. Most of the world’s volcanic systems are in or near oceans.

    “It’s hard to see why a coastal or island volcano would not be affected by sea level,” says Iain Stewart, a geoscientist at the Royal Scientific Society of Jordan in Amman, who was not involved in the work. Accounting for these effects could make volcano hazard forecasting more accurate.


    Santorini consists of a ring of islands surrounding the central tip of a volcano poking out of the Aegean Sea. The entire volcano used to be above water, but a violent eruption around 1600 B.C. caused the volcano to cave in partially, forming a lagoon. That particular eruption is famous for potentially dooming the Minoan civilization and inspiring the legend of the lost city of Atlantis (SN: 2/1/12).

    To investigate how sea level might influence the volcano, researchers created a computer simulation of Santorini’s magma chamber, which sits about four kilometers beneath the surface of the volcano. In the simulation, when the sea level dropped at least 40 meters below the present-day level, the crust above the magma chamber splintered. “That gives an opportunity for the magma that’s stored under the volcano to move up through these fractures and make its way to the surface,” says study coauthor Christopher Satow, a physical geographer at Oxford Brookes University in England.

    According to the simulation, it should take about 13,000 years for those cracks to reach the surface and awaken the volcano. After the water rises again, it should take about 11,000 years for the cracks to close and eruptions to stop.

    When the sea drops at least 40 meters below the present-day level, the crust beneath the Santorini volcano (illustrated) starts to crack. As the sea level drops even further over thousands of years, those cracks spread to the surface, bringing up magma that feeds volcanic eruptions. Image: Oxford Brookes University

    It may seem counterintuitive that lowering the amount of water atop the magma chamber would cause the crust to splinter. Satow compares the scenario to wrapping your hands around an inflated balloon, where the rubber is Earth’s crust and your hands’ inward pressure is the weight of the ocean. As someone else pumps air into the balloon — like magma building up under Earth’s crust — the pressure of your hands helps prevent the balloon from popping. “As soon as you start to release the pressure with your hands, [like] taking the sea level down, the balloon starts to expand,” Satow says, and ultimately the balloon breaks.
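    To put rough numbers on the balloon analogy (these are back-of-the-envelope figures with assumed densities, not values from the study): a 40-meter sea-level drop removes well under a megapascal of confining pressure, tiny next to the lithostatic load at the chamber’s roughly 4-kilometer depth, which is consistent with the idea that the crust sits close enough to failure for a small unloading to tip it.

```python
# Back-of-the-envelope only: how much the load on the crust changes
# when sea level drops 40 m, versus the lithostatic pressure at the
# ~4 km depth of Santorini's magma chamber.  Densities are assumed
# typical values, not figures from the paper.
RHO_SEAWATER = 1025.0   # kg/m^3
RHO_CRUST = 2700.0      # kg/m^3, typical upper-crust density (assumed)
G = 9.81                # m/s^2

def column_pressure(rho, height_m):
    """Pressure at the base of a uniform column, in pascals."""
    return rho * G * height_m

unloading = column_pressure(RHO_SEAWATER, 40.0)     # pressure removed by the drop
lithostatic = column_pressure(RHO_CRUST, 4000.0)    # load at chamber depth
print(f"unloading: {unloading / 1e6:.2f} MPa "
      f"({100 * unloading / lithostatic:.2f}% of lithostatic)")
```

    The unloading works out to about 0.4 MPa, a fraction of a percent of the overburden, so the mechanism relies on the chamber roof already being stressed near its breaking point.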

    Satow’s team tested the predictions of the simulation by comparing the Santorini Volcano’s eruption history — preserved in the rock layers of the islands surrounding the central volcano tip — with evidence of past sea levels from marine sediments. All but three of the volcano’s 211 well-dated eruptions in the last 360,000 years happened during periods of low sea level, as the simulation predicted. Such periods of low sea level occurred when more of Earth’s water was locked up in glaciers during ice ages.

    “It’s really intriguing and interesting, and perhaps not surprising, given that other studies have shown that volcanoes are sensitive to changes in their stress state,” says Emilie Hooft, a geophysicist at the University of Oregon in Eugene, who wasn’t involved in the work. Volcanoes in Iceland, for instance, have shown an uptick in eruptions after overlying glaciers have melted, relieving the volcanic systems of the weight of the ice.

    Volcanoes around the world are likely subject to the effects of sea level, Satow says, though how much probably varies. “Some will be very sensitive to sea level changes, and for others there will be almost no impact at all.” These effects will depend on the depth of the magma chambers feeding into each volcano and the properties of the surrounding crust.

    But if sea level controls the activity of any volcano in or near the ocean, at least to an extent, “you’d expect all these volcanoes to be in sync with one another,” Satow says, “which would be incredible.”

    As for Santorini, given that the last time sea level was 40 meters below the present-day level was about 11,000 years ago — and sea level is continuing to rise due to climate change — Satow’s team expects the volcano to enter a period of relative quiet right about now (SN: 3/14/12). But two major eruptions in the volcano’s history did happen amid high sea levels, the researchers say, so future violent eruptions aren’t completely off the table.

  • New research infuses equity principles into the algorithm development process

    In the U.S., where a person is born, their social and economic background, the neighborhoods in which they spend their formative years, and where they grow old account for a quarter to 60% of deaths in any given year, partly because these forces play a significant role in the occurrence and outcomes of heart disease, cancer, unintentional injuries, chronic lower respiratory diseases, and cerebrovascular diseases: the five leading causes of death.
    While data on such “macro” factors is critical to tracking and predicting health outcomes for individuals and communities, analysts who apply machine-learning tools to health outcomes tend to rely on “micro” data constrained to purely clinical settings and driven by healthcare data and processes inside the hospital, leaving factors that could shed light on healthcare disparities in the dark.
    Researchers at the NYU Tandon School of Engineering and NYU School of Global Public Health (NYU GPH), in a new perspective, “Machine learning and algorithmic fairness in public and population health,” in Nature Machine Intelligence, aim to activate the machine learning community to account for “macro” factors and their impact on health. Thinking outside the clinical “box” and beyond the strict limits of individual factors, Rumi Chunara, associate professor of computer science and engineering at NYU Tandon and of biostatistics at the NYU GPH, found a new approach to incorporating the larger web of relevant data for predictive modeling for individual and community health outcomes.
    “Research of what causes and reduces equity shows that to avoid creating more disparities it is essential to consider upstream factors as well,” explained Chunara. She noted, on the one hand, the large body of work on AI and machine learning implementation in healthcare in areas like image analysis, radiography, and pathology, and on the other the strong awareness and advocacy focused on such areas as structural racism, police brutality, and healthcare disparities that came to light around the COVID-19 pandemic.
    “Our goal is to take that work and the explosion of data-rich machine learning in healthcare, and create a holistic view beyond the clinical setting, incorporating data about communities and the environment.”
    Chunara, along with her doctoral students Vishwali Mhasawade and Yuan Zhao, at NYU Tandon and NYU GPH, respectively, leveraged the Social Ecological Model, a framework for understanding how the health, habits and behavior of an individual are affected by factors such as public policies at the national and international level and availability of health resources within a community and neighborhood. The team shows how principles of this model can be used in algorithm development to show how algorithms can be designed and used more equitably.
    The researchers organized existing work into a taxonomy of the types of tasks for which machine learning and AI are used that span prediction, interventions, identifying effects and allocations, to show examples of how a multi-level perspective can be leveraged. In the piece, the authors also show how the same framework is applicable to considerations of data privacy, governance, and best practices to move the healthcare burden from individuals, toward improving equity.
    As an example of such approaches, members of the same team recently presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society a new approach to using “causal multi-level fairness,” the larger web of relevant data for assessing fairness of algorithms. This work builds on the field of “algorithmic fairness,” which, to date, is limited by its exclusive focus on individual-level attributes such as gender and race.
    In this work Mhasawade and Chunara formalized a novel approach to understanding fairness relationships using tools from causal inference, synthesizing a means by which an investigator could assess and account for effects of sensitive macro attributes and not merely individual factors. They developed the algorithm for their approach and provided the settings under which it is applicable. They also illustrated their method on data showing how predictions based merely on data points associated with labels like race, income and gender are of limited value if sensitive attributes are not accounted for, or are accounted for without proper context.
    “As in healthcare, algorithmic fairness tends to be focused on labels — men and women, Black versus white, etc. — without considering multiple layers of influence from a causal perspective to decide what is fair and unfair in predictions,” said Chunara. “Our work presents a framework for thinking not only about equity in algorithms but also what types of data we use in them.”

  • Artificial Intelligence learns better when distracted

    How should you train your AI system? This question is pertinent, because many deep learning systems are still black boxes. Computer scientists from the Netherlands and Spain have now determined how a deep learning system well suited for image recognition learns to recognize its surroundings. They were able to simplify the learning process by forcing the system’s focus toward secondary characteristics.
    Convolutional Neural Networks (CNNs) are a form of bio-inspired deep learning in artificial intelligence. The interaction of thousands of ‘neurons’ mimics the way our brain learns to recognize images. ‘These CNNs are successful, but we don’t fully understand how they work’, says Estefanía Talavera Martinez, lecturer and researcher at the Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence of the University of Groningen in the Netherlands.
    Food
    She has made use of CNNs herself to analyse images made by wearable cameras in the study of human behaviour. Among other things, Talavera Martinez has been studying our interactions with food, so she wanted the system to recognize the different settings in which people encounter food. ‘I noticed that the system made errors in the classification of some pictures and needed to know why this happened.’
    By using heat maps, she analysed which parts of the images were used by the CNNs to identify the setting. ‘This led to the hypothesis that the system was not looking at enough details’, she explains. For example, if an AI system has taught itself to use mugs to identify a kitchen, it will wrongly classify living rooms, offices and other places where mugs are used. The solution that was developed by Talavera Martinez and her colleagues David Morales (Andalusian Research Institute in Data Science and Computational Intelligence, University of Granada) and Beatriz Remeseiro (Department of Computer Science, Universidad de Oviedo), both in Spain, is to distract the system from its primary targets.
    Blurred
    They trained CNNs using a standard image set of planes or cars and identified through heat maps which parts of the images were used for classification. Then, these parts were blurred in the image set, which was then used for a second round of training. ‘This forces the system to look elsewhere for identifiers. And by using this extra information, it becomes more fine-grained in its classification.’
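    A minimal sketch of the occlusion step described above (the heat-map computation, e.g. Grad-CAM, and the retraining loop are omitted; the function and parameter names are illustrative, not the authors’):

```python
import numpy as np

def blur_salient_regions(image, heatmap, threshold=0.7, kernel=5):
    """Occlude the regions a trained CNN relies on most, so a second
    training round is forced to find other identifying details.

    `image` is an HxW array (grayscale, for simplicity); `heatmap` is
    an HxW saliency map in [0, 1], e.g. from Grad-CAM (computed
    elsewhere).  Pixels whose saliency exceeds `threshold` are replaced
    by a local mean, a cheap stand-in for a Gaussian blur.
    """
    pad = kernel // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    blurred = np.empty_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            # mean over the kernel x kernel window centred on (i, j)
            blurred[i, j] = padded[i:i + kernel, j:j + kernel].mean()
    mask = heatmap > threshold        # the CNN's current "shortcuts"
    out = image.astype(float)
    out[mask] = blurred[mask]
    return out
```

    Feeding such occluded images back into a second round of training is what forces the network to pick up the finer-grained, secondary cues the article describes.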
    The approach worked well in the standard image sets, and was also successful in the images Talavera Martinez had collected herself from the wearable cameras. ‘Our training regime gives us results similar to other approaches, but is much simpler and requires less computing time.’ Previous attempts to increase fine-grained classification included combining different sets of CNNs. The approach developed by Talavera Martinez and her colleagues is much more lightweight. ‘This study gave us a better idea of how these CNNs learn, and that has helped us to improve the training program.’
    Story Source:
    Materials provided by University of Groningen. Note: Content may be edited for style and length.

  • New information storage and processing device

    A team of scientists has developed a means to create a new type of memory, marking a notable breakthrough in the increasingly sophisticated field of artificial intelligence.
    “Quantum materials hold great promise for improving the capacities of today’s computers,” explains Andrew Kent, a New York University physicist and one of the senior investigators. “The work draws upon their properties in establishing a new structure for computation.”
    The creation, designed in partnership with researchers from the University of California, San Diego (UCSD) and the University of Paris-Saclay, is reported in Scientific Reports.
    “Since conventional computing has reached its limits, new computational methods and devices are being developed,” adds Ivan Schuller, a UCSD physicist and one of the paper’s authors. “These have the potential of revolutionizing computing and in ways that may one day rival the human brain.”
    In recent years, scientists have sought to make advances in what is known as “neuromorphic computing” — a process that seeks to mimic the functionality of the human brain. Because of its human-like characteristics, it may offer more efficient and innovative ways to process data using approaches not achievable using existing computational methods.
    In the Scientific Reports work, the researchers created a new device that builds on the major progress already made in this area.
    To do so, they built a nanoconstriction spintronic resonator to manipulate known physical properties in innovative ways.
    Resonators are capable of generating and storing waves of well-defined frequencies, much as the sound box of a string instrument does. Here, the scientists constructed a new type of resonator, capable of storing and processing information in a way similar to synapses and neurons in the brain. The one described in Scientific Reports combines the unique properties of quantum materials with those of spintronic magnetic devices.
    Spintronic devices are electronics that use an electron’s spin in addition to its electrical charge to process information, reducing energy use while increasing storage and processing capacity relative to more traditional approaches. One widely used such device, the “spin torque oscillator,” operates at a specific frequency. Combining it with a quantum material allows this frequency to be tuned, considerably broadening the device’s applicability.
    “This is a fundamental advance that has applications in computing, particularly in neuromorphic computing, where such resonators can serve as connections among computing components,” observes Kent.
    Story Source:
    Materials provided by New York University. Note: Content may be edited for style and length.

  • Adapting roots to a hotter planet could ease pressure on food supply

    The shoots of plants get all of the glory, with their fruit and flowers and visible structure. But it’s the portion that lies below the soil — the branching, reaching arms of roots and hairs pulling up water and nutrients — that interests plant physiologist and computer scientist Alexander Bucksch, associate professor of Plant Biology at the University of Georgia.
    The health and growth of the root system has deep implications for our future.
    Growing enough food to support the population despite a changing climate, and fixing atmospheric carbon in the soil, are critical to our survival and that of other species. The solutions, Bucksch believes, lie in the qualities of roots.
    “When there is a problem in the world, humans can move. But what does the plant do?” he asked. “It says, ‘Let’s alter our genome to survive.’ It evolves.”
    Until recently, farmers and plant breeders didn’t have a good way to gather information about the root system of plants, or make decisions about the optimal seeds to grow deep roots.
    In a paper published this month in Plant Physiology, Bucksch and colleagues introduce DIRT/3D (Digital Imaging of Root Traits), an image-based 3D root phenotyping platform that can measure 18 architecture traits from mature field-grown maize root crowns excavated using the Shovelomics technique.

  • Systems intelligent organizations succeed – regardless of structures

    Matrix, process, or something else? The structure of an organisation is of little significance for its success, as long as there is systems intelligence, according to a study conducted by doctoral student Juha Törmänen together with Professor Esa Saarinen and Professor Emeritus Raimo P. Hämäläinen, based on a survey of 470 British and US citizens in 2018-2019.
    Systems Intelligence is a concept created by Saarinen and Hämäläinen that connects human sensitivity with engineering thinking and takes the comprehensive interaction between individuals and their environments into account. It examines people and organisations through systemic perception, attitude, reflection, positive engagement, attunement, spirited discovery, wise action, and effective responsiveness.
    The researchers ascertained how well these different dimensions of Systems Intelligence explained an organisation’s success.
    ‘A systems intelligent organisation is successful. By its nature, a systems intelligent organisation is also one that is capable of learning and development. The employees of a systems intelligent organisation have models of behaviour and action, which enable learning’, Törmänen says.
    About 60 percent of the respondents were employees and about 40 percent were in a managerial or leadership position. The respondents evaluated statements about their organisations, such as whether people there are warm and accepting of one another and whether they bring out the best in others. The Systems Intelligence metrics were compared with the Dimensions of the Learning Organization Questionnaire (DLOQ), the most common scale for evaluating learning organisations. Questions from both instruments were placed in random order in the questionnaire. In addition, respondents chose from among ten options to rate how successful the organisations they represent are in their fields.
    The study revealed that Systems Intelligence and the DLOQ are approximately equally good at explaining an organisation’s success. Respondents who gave their organisation the highest success rating typically also gave it higher marks both in the dimensions of Systems Intelligence and in the different areas of the DLOQ.
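    Comparing how well two questionnaire composites “explain” success amounts to comparing the variance each accounts for in a regression on the success rating. The sketch below uses synthetic stand-in data generated for illustration (not the study’s survey responses) to show the comparison itself:

```python
import numpy as np

def r_squared(x, y):
    """Share of variance in y explained by a simple linear fit on x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 470                                       # same sample size as the survey
si = rng.normal(size=n)                       # stand-in Systems Intelligence score
dloq = 0.8 * si + 0.6 * rng.normal(size=n)    # correlated stand-in DLOQ score
success = 0.5 * si + 0.5 * dloq + rng.normal(size=n)  # stand-in success rating

print(f"R^2 (Systems Intelligence): {r_squared(si, success):.2f}")
print(f"R^2 (DLOQ):                 {r_squared(dloq, success):.2f}")
```

    Because the two stand-in composites are built to relate to the outcome equally strongly, their R² values come out close, mirroring the study’s finding that the two instruments explain success about equally well.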