More stories

  •

    The Southern Ocean is still swallowing large amounts of humans’ carbon dioxide emissions

    The Southern Ocean is still busily absorbing large amounts of the carbon dioxide emitted by humans’ fossil fuel burning, a study based on airborne observations of the gas suggests. The new results counter a 2018 report that had found that the ocean surrounding Antarctica might not be taking up as much of the emissions as previously thought, and in some regions may actually be adding CO₂ back to the atmosphere.    

    It’s not exactly a relief to say that the oceans, which are already becoming more acidic and storing record-breaking amounts of heat due to global warming, might be able to bear a little more of the climate change burden (SN: 4/28/17; SN: 1/13/21). But “in many ways, [the conclusion] was reassuring,” says Matthew Long, an oceanographer at the National Center for Atmospheric Research in Boulder, Colo.  

    That’s because the Southern Ocean alone has been thought to be responsible for nearly half of the global ocean uptake of humans’ CO₂ emissions each year. That means it plays an outsize role in modulating some of the immediate impacts of those emissions. However, the 2018 estimates, based on data from ocean floats, had suggested that, over the course of a year, the Southern Ocean was actually a net source of carbon dioxide rather than a sink, ultimately emitting about 0.3 billion metric tons of the gas back to the atmosphere each year.

    In contrast, the new findings, published in the Dec. 3 Science, suggest that from 2009 through 2018, the Southern Ocean was still a net sink, taking up a total of about 0.55 billion metric tons of carbon dioxide each year.

    The 2018 study had used newly deployed deep-diving ocean floats, now numbering almost 200, that are part of a project called Southern Ocean Carbon and Climate Observations and Modeling, or SOCCOM. Calculations based on data collected from 2014 through 2017 by 35 of the floats suggested that parts of the ocean were actually releasing a great deal of carbon dioxide back into the atmosphere during winter (SN: 6/2/19). That sparked concerns that the Southern Ocean’s role in buffering the impacts of climate change on Earth might not be so robust as once thought.

    Long says he and other researchers were somewhat skeptical about that takeaway, however. The floats measure temperature, salinity and pH in the water down to about 2,000 meters, and scientists use those data to calculate the carbon dioxide concentration in the water. But those calculations rest on several assumptions about the ocean water’s properties, as actual data are still very scarce. Those assumptions may be skewing the results slightly, leading to calculations that overestimate how much carbon dioxide the water is releasing, Long suggests.
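
    To give a sense of what that pH-to-CO₂ conversion involves, the sketch below is a simplified, purely illustrative carbonate-system calculation, not the SOCCOM team’s actual procedure: it estimates surface-water pCO₂ from a measured pH plus an assumed carbonate alkalinity, using rough equilibrium constants for cold surface seawater. Real analyses use carefully calibrated, temperature- and salinity-dependent constants, which is exactly where the assumptions Long mentions enter.

    ```python
    # Illustrative sketch only (not the SOCCOM procedure): estimate surface-water
    # pCO2 from a float pH reading plus an *assumed* carbonate alkalinity, using
    # simplified carbonate-system equilibria. The constants below are rough values
    # for cold surface seawater, chosen for illustration.

    def pco2_from_ph(ph, alkalinity_carb=2.30e-3,
                     K1=10**-6.05, K2=10**-9.2, K0=0.058):
        """Estimate pCO2 (microatm) from pH and carbonate alkalinity (mol/kg).

        K1, K2 : first and second dissociation constants of carbonic acid
        K0     : CO2 solubility (mol kg^-1 atm^-1), Henry's law constant
        """
        h = 10.0 ** (-ph)                            # [H+] in mol/kg
        hco3 = alkalinity_carb * h / (h + 2.0 * K2)  # from TA ~ [HCO3-] + 2[CO3--]
        co2_aq = h * hco3 / K1                       # dissolved CO2 via K1 equilibrium
        return 1e6 * co2_aq / K0                     # convert to microatm

    print(pco2_from_ph(8.05))   # roughly 350 microatm
    print(pco2_from_ph(8.00))   # a lower pH implies a higher pCO2 (~395 microatm)
    ```

    In this toy calculation, a pH bias of just 0.01 shifts the inferred pCO₂ by roughly 10 microatmospheres, which illustrates how small, systematic measurement biases could tip the estimated balance between uptake and outgassing.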

    Another way to measure how much carbon dioxide is moving between air and sea is by taking airborne measurements. In the new study, the team amassed previously collected carbon dioxide data over large swaths of the Southern Ocean during three separate series of aircraft flights — one series lasting from 2009 to 2011, one in the winter of 2016 and a third in several periods from 2016 to 2018 (SN: 9/8/11). Then, the researchers used those data to create simulations of how much carbon dioxide could possibly be moving between ocean and atmosphere each year.

    The float-based and aircraft-based studies estimate different overall amounts of carbon dioxide moving out of the ocean, but both identified a seasonal pattern of less carbon dioxide absorbed by the ocean during winter. That indicates that both types of data are picking up a real trend, says Ken Johnson, an ocean chemist at the Monterey Bay Aquarium Research Institute in Moss Landing, Calif., who was not involved in the research. “We all go up and down together.”

    It’s not yet clear whether the SOCCOM data were off. But to better understand what sorts of biases might affect the pH calculations, researchers must compare direct measurements of carbon dioxide in the water taken from ships with pH-based estimates at the same location. Such studies are under way right now off the coast of California, Johnson says.

    The big takeaway, Johnson says, is that both datasets — as well as direct shipboard measurements in the Southern Ocean, which are few and far between — are going to be essential for understanding what role these waters play in the planet’s carbon cycle. While the airborne studies can help constrain the big picture of carbon dioxide emissions data from the Southern Ocean, the floats are much more widely distributed, and so are able to identify local and regional variability in carbon dioxide, which the atmospheric data can’t do.

    “The Southern Ocean is the flywheel of the climate system,” the part of an engine’s machinery that keeps things chugging smoothly along, Johnson says. “If we don’t get our understanding of the Southern Ocean right, we don’t have much hope for understanding the rest of the world.”

  •

    Green information technologies: Superconductivity meets spintronics

    When two superconducting regions are separated by a strip of non-superconducting material, a special quantum effect can occur that couples the two regions: the Josephson effect. If the spacer material is a half-metallic ferromagnet, novel implications for spintronic applications arise. An international team has now for the first time designed a material system that exhibits an unusually long-range Josephson effect: regions of superconducting YBa2Cu3O7 are separated by a one-micron-wide region of half-metallic, ferromagnetic manganite (La2/3Sr1/3MnO3).
    With the help of magneto-transport measurements, the researchers were able to demonstrate the presence of a supercurrent flowing through the manganite. This supercurrent arises from the superconducting coupling between the two superconducting regions and is thus a manifestation of a Josephson effect with a macroscopically long range.
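    For orientation, the two textbook Josephson relations below describe how a supercurrent is carried across any weak link between two superconductors; they are standard results, not device parameters reported in this study. Here I_c is the junction’s critical current, φ the difference between the phases of the two superconducting order parameters, V the voltage across the link, e the elementary charge and ħ the reduced Planck constant.

    ```latex
    % Standard textbook Josephson relations for a weak link between two
    % superconductors (not parameters of the YBa2Cu3O7 / manganite device).
    \begin{align}
      I &= I_c \sin\varphi
        && \text{(DC effect: supercurrent set by the phase difference)} \\
      \frac{d\varphi}{dt} &= \frac{2e}{\hbar}\, V
        && \text{(AC effect: a voltage makes the phase difference evolve in time)}
    \end{align}
    ```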
    Extremely rare: Triplet superconductivity
    In addition, the scientists explored another interesting property with profound consequences for spintronic applications. In superconductors, electrons pair up in so-called Cooper pairs. In the vast majority of superconducting materials, these pairs are composed of electrons with opposite spin, which minimises the magnetic exchange field that is detrimental to the stabilisation of superconductivity. The ferromagnet used by the international team, however, is a half-metallic ferromagnet, in which only electrons of one spin orientation can circulate. The fact that a supercurrent has been detected within this material implies that the Cooper pairs carrying it must be composed of electrons with the same spin. This so-called “triplet” superconductivity is extremely rare.
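    To make the singlet/triplet distinction concrete, the spin part of a Cooper pair’s wavefunction can be written in standard notation as below; this is a general quantum-mechanics statement, not notation taken from the paper. A half-metal transmits only one spin species, so only the equal-spin triplet components can survive inside it.

    ```latex
    % Spin states available to a Cooper pair (standard notation, for orientation).
    % Conventional superconductors pair electrons in the singlet state; a
    % supercurrent crossing a half-metal requires equal-spin triplet pairs.
    \begin{align}
      \text{singlet } (S = 0):\quad
        & \tfrac{1}{\sqrt{2}}\left( |\!\uparrow\downarrow\rangle - |\!\downarrow\uparrow\rangle \right) \\
      \text{triplet } (S = 1):\quad
        & |\!\uparrow\uparrow\rangle, \qquad
          \tfrac{1}{\sqrt{2}}\left( |\!\uparrow\downarrow\rangle + |\!\downarrow\uparrow\rangle \right), \qquad
          |\!\downarrow\downarrow\rangle
    \end{align}
    ```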
    Mapping magnetic domains at BESSY II
    “At the XMCD-PEEM station at BESSY II, we mapped and measured the magnetic domains within the manganite spacer. We observed wide regions homogeneously magnetised and connecting the superconducting regions. Triplet spin pairs can propagate freely in these,” explains Dr. Sergio Valencia Molina, HZB physicist, who supervised the measurements at BESSY II.
    Superconducting currents flow without resistance, which makes them very appealing for low-power applications. In the present case, this current is made up of electrons with equal spin. Such spin-polarised currents could be used in novel superconducting spintronic applications for transporting information over long distances and for reading/writing information, while profiting from the stability imposed by the macroscopic quantum coherence of the Josephson effect.
    The new device made of the superconducting and ferromagnetic components therefore opens up opportunities for superconducting spintronics and new perspectives for quantum computing.
    Story Source:
    Materials provided by Helmholtz-Zentrum Berlin für Materialien und Energie. Note: Content may be edited for style and length.

  •

    How digital and molecular data can be integrated and used to improve health

    Analysing molecular characteristics and their variation during lifestyle changes, by combining digital tools, classical laboratory tests and new biomolecular measurements, could enable individualised prevention of disease. This is according to a new study from Karolinska Institutet in Sweden and the University of Helsinki in Finland published in the journal Cell Systems. The researchers show what a proactive healthcare model could comprise and how it could help in maintaining good health.
    Sensors, apps and other digital alternatives for monitoring health are increasing our ability to take proactive measures to improve our health and wellbeing. Moreover, the simultaneous measurement of numerous biomolecular variables (multiomics) enables deep and comprehensive profiling of human biology.
    “Instead of focusing on the treatment of the later stages of disease, future healthcare services could focus on more proactive and individualised interventions and on the early detection of disease,” says the study’s first author Francesco Marabita, researcher at the Department of Oncology-Pathology, Karolinska Institutet and SciLifeLab in Sweden. “It might sound a little futuristic, but the technology is already there.”
    The Digital Health Revolution (DHR) project is a multicentre study set up a few years ago by researchers from, among other institutions, the Institute for Molecular Medicine Finland (FIMM) at the University of Helsinki to explore and pilot future approaches to healthcare.
    The study spanned 16 months and included 96 individuals between the ages of 25 and 59 who were registered at an occupational healthcare clinic in Helsinki, Finland. None of the participants had known serious diseases, but some had risk factors such as high blood pressure, elevated glucose or obesity.
    The molecular profiling was done in collaboration with investigators from Karolinska Institutet and SciLifeLab. In addition to extensive multiomics analyses, the serial data collection included online questionnaires, clinical laboratory measurements in blood samples, analysis of the gut microbiome, and activity and sleep data using a smart watch.

  •

    Improving perceptions of emerging technologies can help ease strain on health-care systems

    More attention must be paid to improving perceptions of emerging technologies like AI-powered symptom checkers, which could ease the strain on health-care systems, according to a recent study.
    Symptom checkers are online platforms that help with self-triage based on a range of inputted symptoms and demographic details.
    The study, led by University of Waterloo researchers, found that “tech seekers,” people who are open to technology but perceive a lack of access to it, are the most likely to want to use the technology — more than “tech acceptors,” people who are both open to it and perceive it to be accessible.
    The least likely group to adopt the tool is “tech rejectors,” those who do not view it as accessible and have a negative view of AI. In between were “skeptics,” who have concerns about trust and output quality, and “unsure acceptors,” who do not perceive access to be an issue but have negative perceptions about AI.
    “These findings should be of great interest — or concern — to the three active arms of any health-care system that intends to use AI-driven symptom checkers: prospective patients, medical experts and developers of AI-driven symptom checkers,” said co-author Ashok Chaurasia, a professor in the School of Public Health Sciences. “This study highlights the need for more collaboration between these groups to improve AI models and their perception within the general population and medical experts.”
    Stephanie Aboueid, the study’s lead author and a School of Public Health Sciences graduate, said, “This technology is very promising in the health-care sector, given that it has the potential to reduce unnecessary medical visits and address the lack of access to primary care providers.”
    The researchers surveyed 1,305 university students aged 18 to 34 who had never used a symptom checker before the study. They gathered data on trust, usefulness, credibility, demonstrability, output quality, perspectives about AI, ease of use and accessibility for the analysis.
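    The article does not say how these perception groups were derived from the survey data. Purely as a hypothetical illustration of how such groups could emerge from items like trust, perceived usefulness and perceived access, the sketch below clusters standardized (synthetic) responses with k-means; the number of clusters, the variable names and the method itself are assumptions, not the study’s analysis.

    ```python
    # Hypothetical sketch only: the study's actual grouping method is not
    # described in the article. This shows one generic way respondent groups
    # (e.g. "tech seekers", "tech rejectors") could be derived from survey items.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Stand-in data: 1,305 respondents x 4 survey scales (trust, usefulness,
    # attitude toward AI, perceived access), each scored 1-5.
    responses = rng.integers(1, 6, size=(1305, 4)).astype(float)

    scaled = StandardScaler().fit_transform(responses)            # equalize scales
    kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(scaled)

    # Inspecting the cluster centers is how one would label groups such as
    # "open to AI but lacking access" after the fact.
    print(kmeans.cluster_centers_.round(2))
    ```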
    “Symptom checkers are important because they speak to the younger generation who value timeliness and convenience,” Aboueid said. “They are not just a fad, as we’ve seen with Babylon, for example, which recently went public and has been adopted by various health institutions.”
    Aboueid said the researchers used university-aged respondents for the study because they are typically eager adopters of technology. Because the group studied was young, highly educated and generally healthy, additional studies are needed in populations with wider ranges of age, education and health status, the researchers said.
    Story Source:
    Materials provided by University of Waterloo. Note: Content may be edited for style and length.

  •

    Which role models are best for STEM? Researchers offer recommendations in new analysis

    An analysis of the effect role models have on students’ motivation in studying STEM subjects points to new ways to deploy these leaders in order to encourage learning across different populations. The recommendations provide a resource for parents, teachers, and policymakers seeking to maximize role models’ impact in diversifying the fields of science, technology, engineering, and mathematics.
    “STEM fields fail to attract and retain women as well as racial and ethnic minorities in numbers proportional to their share of the population,” explains Andrei Cimpian, a professor in New York University’s Department of Psychology and the senior author of the paper, which appears in the International Journal of STEM Education. “A popular method to diversify the STEM workforce has been to introduce students to STEM role models, but less clear is how effective this approach is — simply because it’s not certain which role models resonate with different student populations.”
    “Our recommendations, based on an analysis of over 50 studies, are aimed at ensuring that STEM role models are motivating for students of all backgrounds and demographics,” adds lead author Jessica Gladstone, an NYU postdoctoral fellow at the time of the study and now a researcher at Virginia Commonwealth University.
    Marian Wright Edelman, founder and president emerita of the Children’s Defense Fund, popularized the phrase “You can’t be what you can’t see,” which emphasized the importance of having role models with whom diverse populations could identify.
    While many have claimed that exposing students to role models is an effective tool for diversifying STEM fields, the evidence supporting this position is mixed. Moreover, the researchers note, the argument is a vague one, leaving open questions about under what conditions and for which populations role models can be useful for this purpose.
    Gladstone and Cimpian sought to bring more clarity to this important issue by reframing the question being asked. Rather than asking “Are role models effective?,” they asked a more specific — and potentially more informative — question: “Which role models are effective for which students?”
    In addressing it, they reviewed 55 studies on students’ STEM motivation as a function of several key features of role models — their perceived competence, their perceived similarity to students, and the perceived attainability of their success. They also examined how features of the students themselves, such as their gender, race/ethnicity, age, and identification with STEM, modulate the effectiveness of role models.

  •

    'My robot is a softie': Physical texture influences judgments of robot personality

    Researchers have found that the physical texture of robots influenced perceptions of robot personality. Furthermore, first impressions of robots, based on physical appearance alone, could influence the relationship between physical texture and robot personality formation. This work could facilitate the development of robots with perceived personalities that match user expectations.
    Impressions of a robot’s personality can be influenced by the way it looks, sounds, and feels. But now, researchers from Japan have found specific causal relationships between impressions of robot personality and body texture.
    In a study published in Advanced Robotics, researchers from Osaka University and Kanazawa University have revealed that a robot’s physical texture interacts with elements of its appearance in a way that influences impressions of its personality.
    Body texture, such as softness or elasticity, is an important consideration in the design of robots meant for interactive functions. In addition, appearance can modulate whether a person anticipates a robot to be friendly, likable, or capable, among other characteristics.
    However, the ways in which people perceive the physical texture and the personality of robots have only been examined independently. As a result, the relationships between these two factors are unclear, something the researchers aimed to address.
    “The mechanisms of impression formation should be quantitatively and systematically investigated,” says lead author of the study Naoki Umeda. “Because various factors contribute to personality impression, we wanted to investigate how specific robot body properties promote or deteriorate specific kinds of impressions.”
    To do this, the researchers asked adult participants to view, touch, and evaluate six different inactive robots that were humanoid to varying degrees. The participants were asked to touch the arm of the robots. For each robot, four fake arms had been constructed; these were made of silicone rubber and prepared in such a way that their elasticity varied, thus providing differing touch sensations. The causal relationships between the physical textures of the robot arms and the participant perceptions were then evaluated.
    “The results confirmed our expectations,” explains Hisashi Ishihara, senior author. “We found that the impressions of the personalities of the robots varied according to the texture of the robot arms, and that there were specific relationships among certain parameters.”
    The researchers also found that the first impressions of the robots, made before the participants touched them, could modulate one of the effects.
    “We found that the impression of likability was strengthened when the participant anticipated that the robot would engage in peaceful emotional verbal communication. This suggests that both first impressions and touch sensations are important considerations for social robot designers focused on perceived robot personality,” says Ishihara.
    Given that many robots are designed for physical interaction with humans — for instance those used in therapy or clinical settings — the texture of the robot body is an important consideration. A thorough understanding of the physical factors that influence user impressions of robots will enable researchers to design robots that optimize user comfort. This is especially important for robots employed for advanced communication, because user comfort will influence the quality of communication, and thus the utility of the robotic system.
    Story Source:
    Materials provided by Osaka University. Note: Content may be edited for style and length.

  •

    COVID-19 mobile robot could detect and tackle social distancing breaches

    A new strategy to reduce the spread of COVID-19 employs a mobile robot that detects people in crowds who are not observing social-distancing rules, navigates to them, and encourages them to move apart. Adarsh Jagan Sathyamoorthy of the University of Maryland, College Park, and colleagues present these findings in the open-access journal PLOS ONE on Dec. 1, 2021.
    Previous research has shown that staying at least two meters apart from others can reduce the spread of COVID-19. Technology-based methods — such as strategies using WiFi and Bluetooth — hold promise to help detect and discourage lapses in social distancing. However, many such approaches require participation from individuals or existing infrastructure, so robots have emerged as a potential tool for addressing social distancing in crowds.
    Now, Sathyamoorthy and colleagues have developed a novel way to use an autonomous mobile robot for this purpose. The robot can detect breaches and navigate to them using its own Red Green Blue-Depth (RGB-D) camera and 2-D LiDAR (Light Detection and Ranging) sensor, and can tap into an existing CCTV system, if available. Once it reaches the breach, the robot encourages people to move apart via text that appears on a mounted display.
    The robot uses a novel system to sort people who have breached social distancing rules into different groups, prioritize them according to whether they are standing still or moving, and then navigate to them. This system employs a machine-learning method known as Deep Reinforcement Learning and Frozone, an algorithm previously developed by several of the same researchers to help robots navigate crowds.
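    As a minimal sketch of just the first step of such a pipeline, the code below flags pairs of people standing closer than two meters apart and merges overlapping pairs into breach groups. It assumes each person’s ground-plane position has already been estimated from the robot’s sensors, and it leaves out the paper’s perception, prioritization and Deep Reinforcement Learning navigation components entirely.

    ```python
    # Minimal sketch of the breach-detection step only (not the paper's full
    # pipeline). Assumes each person's (x, y) position on the ground plane,
    # in metres, has already been estimated from the RGB-D/LiDAR or CCTV data.
    from itertools import combinations

    SOCIAL_DISTANCE_M = 2.0

    def find_breach_groups(positions):
        """Return groups (sets of indices) of people closer than 2 m to each other."""
        # Flag every pair of people closer than the threshold.
        pairs = [(i, j) for i, j in combinations(range(len(positions)), 2)
                 if ((positions[i][0] - positions[j][0]) ** 2 +
                     (positions[i][1] - positions[j][1]) ** 2) ** 0.5 < SOCIAL_DISTANCE_M]

        # Merge overlapping pairs into connected breach groups.
        groups = []
        for i, j in pairs:
            merged = {i, j}
            rest = []
            for g in groups:
                if g & merged:
                    merged |= g
                else:
                    rest.append(g)
            groups = rest + [merged]
        return groups

    # Example: persons 0 and 1 are 1 m apart (a breach); person 2 is far away.
    print(find_breach_groups([(0.0, 0.0), (1.0, 0.0), (10.0, 10.0)]))  # -> [{0, 1}]
    ```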
    The researchers tested their method by having volunteers act out social-distancing breach scenarios while standing still, walking, or moving erratically. Their robot was able to detect and address most of the breaches that occurred, and CCTV enhanced its performance.
    The robot also uses a thermal camera that can detect people with potential fevers, aiding contact-tracing efforts, while also incorporating measures to ensure privacy protection and de-identification.
    Further research is needed to validate and refine this method, such as by exploring how the presence of robots impacts people’s behavior in crowds.
    The authors add: “A lot of healthcare workers and security personnel had to put their health at risk to serve the public during the COVID-19 pandemic. Our work’s core objective is to provide them with tools to safely and efficiently serve their communities.”
    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  •

    Engineers create perching bird-like robot

    Like snowflakes, no two branches are alike. They can differ in size, shape and texture; some might be wet or moss-covered or bursting with offshoots. And yet birds can land on just about any of them. This ability was of great interest to the labs of Stanford University engineers Mark Cutkosky and David Lentink — now at University of Groningen in the Netherlands — which have both developed technologies inspired by animal abilities.
    “It’s not easy to mimic how birds fly and perch,” said William Roderick, PhD ’20, who was a graduate student in both labs. “After millions of years of evolution, they make takeoff and landing look so easy, even among all of the complexity and variability of the tree branches you would find in a forest.”
    Years of study on animal-inspired robots in the Cutkosky Lab and on bird-inspired aerial robots in the Lentink Lab enabled the researchers to build their own perching robot, detailed in a paper published Dec. 1 in Science Robotics. When attached to a quadcopter drone, their “stereotyped nature-inspired aerial grasper,” or SNAG, forms a robot that can fly around, catch and carry objects and perch on various surfaces. Showing the potential versatility of this work, the researchers used it to compare different types of bird toe arrangements and to measure microclimates in a remote Oregon forest.
    A bird bot in the forest
    In the researchers’ previous studies of parrotlets — the second smallest parrot species — the diminutive birds flew back and forth between special perches while being recorded by five high-speed cameras. The perches — representing a variety of sizes and materials, including wood, foam, sandpaper and Teflon — also contained sensors that captured the physical forces associated with the birds’ landings, perching and takeoff.
    “What surprised us was that they did the same aerial maneuvers, no matter what surfaces they were landing on,” said Roderick, who is lead author of the paper. “They let the feet handle the variability and complexity of the surface texture itself.” This formulaic behavior seen in every bird landing is why the “S” in SNAG stands for “stereotyped.”
    Just like the parrotlets, SNAG approaches every landing in the same way. But, in order to account for the size of the quadcopter, SNAG is based on the legs of a peregrine falcon. In place of bones, it has a 3D-printed structure that took 20 iterations to perfect, and motors and fishing line stand in for muscles and tendons.