More stories


    Smoke from Australia’s intense fires in 2019 and 2020 damaged the ozone layer

    Towers of smoke that rose high into the stratosphere during Australia’s “black summer” fires in 2019 and 2020 destroyed some of Earth’s protective ozone layer, researchers report in the March 18 Science.

    Chemist Peter Bernath of Old Dominion University in Norfolk, Va., and his colleagues analyzed data collected in the lower stratosphere during 2020 by a satellite instrument called the Atmospheric Chemistry Experiment. It measures how different particles in the atmosphere absorb light at different wavelengths. Such absorption patterns are like fingerprints, identifying what molecules are present in the particles.
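
    The “fingerprint” step can be illustrated with a toy spectral-matching sketch: compare a measured absorption spectrum against reference spectra and rank candidate molecules by similarity. The wavelength grid, band shapes and species below are invented for illustration; the actual ACE retrieval is far more sophisticated.

```python
import numpy as np

# Toy sketch of spectral fingerprint matching: score candidate molecules by how
# well their reference absorption spectra correlate with a measured spectrum.
# Band positions and shapes are idealised Gaussians, purely for illustration.
wavelengths = np.linspace(2.0, 13.0, 500)                      # micrometres (assumed grid)
reference = {
    "formaldehyde": np.exp(-((wavelengths - 3.6) ** 2) / 0.05),
    "formic acid":  np.exp(-((wavelengths - 5.6) ** 2) / 0.08),
    "ozone":        np.exp(-((wavelengths - 9.6) ** 2) / 0.10),
}

def identify(measured, references):
    """Rank candidates by normalised correlation with the measured spectrum."""
    scores = {name: float(np.dot(measured, ref) /
                          (np.linalg.norm(measured) * np.linalg.norm(ref)))
              for name, ref in references.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# A fake "measured" spectrum: a mixture of two species plus instrument noise.
measured = 0.7 * reference["ozone"] + 0.3 * reference["formic acid"]
measured += np.random.default_rng(0).normal(0.0, 0.01, wavelengths.size)
print(identify(measured, reference))
```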

    The team’s analyses revealed that the particles of smoke, shot into the stratosphere by fire-fueled thunderstorms called pyrocumulonimbus clouds, contained a variety of mischief-making organic molecules (SN: 12/15/20). The molecules, the team reports, kicked off a series of chemical reactions that altered the balances of gases in Earth’s stratosphere to a degree never before observed in 15 years of satellite measurements. That shuffle included boosting levels of chlorine-containing molecules that ultimately ate away at the ozone.


    Ozone concentrations in the stratosphere initially increased from January to March 2020, due to similar chemical reactions — sometimes with the contribution of wildfire smoke — that produce ozone pollution at ground level (SN: 12/8/21). But from April to December 2020, the ozone levels not only fell, but sank below the average ozone concentration from 2005 to 2019.

    Earth’s ozone layer shields the planet from much of the sun’s ultraviolet radiation. Once depleted by human emissions of chlorofluorocarbons and other ozone-damaging substances, the layer has been showing signs of recovery thanks to the Montreal Protocol, an international agreement to reduce the atmospheric concentrations of those substances (SN: 2/10/21).

    But the increasing frequency of large wildfires due to climate change — and their ozone-destroying potential — could become a setback for that rare climate success story, the researchers say (SN: 3/4/20).


    AI provides accurate breast density classification

    An artificial intelligence (AI) tool can accurately and consistently classify breast density on mammograms, according to a study in Radiology: Artificial Intelligence.
    Breast density reflects the amount of fibroglandular tissue in the breast commonly seen on mammograms. High breast density is an independent breast cancer risk factor, and its masking of underlying lesions reduces the sensitivity of mammography. Consequently, many U.S. states have laws requiring that women with dense breasts be notified after a mammogram, so that they can choose to undergo supplementary tests to improve cancer detection.
    In clinical practice, breast density is visually assessed on two-view mammograms, most commonly with the American College of Radiology Breast Imaging-Reporting and Data System (BI-RADS) four-category scale, ranging from Category A for almost entirely fatty breasts to Category D for extremely dense. The system has limitations, as visual classification is prone to inter-observer variability, or the differences in assessments between two or more people, and intra-observer variability, or the differences that appear in repeated assessments by the same person.
    To overcome this variability, researchers in Italy developed software for breast density classification based on deep learning with convolutional neural networks, a sophisticated type of AI capable of discerning subtle patterns in images beyond the capabilities of the human eye. The researchers trained the software, known as TRACE4BDensity, under the supervision of seven experienced radiologists who independently visually assessed 760 mammographic images.
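    As a rough illustration of the approach (this is not the TRACE4BDensity architecture, which is not described in detail here), a convolutional classifier for the four BI-RADS density categories might look like the following sketch; the layer sizes and the 256x256 input resolution are assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of a convolutional classifier for the four BI-RADS density
# categories (A-D). This is NOT the TRACE4BDensity architecture; layer sizes
# and input resolution are assumptions for illustration.
class DensityCNN(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                  # x: (batch, 1, H, W) grayscale mammograms
        return self.classifier(self.features(x).flatten(1))

model = DensityCNN()
logits = model(torch.randn(2, 1, 256, 256))   # two dummy images
print(logits.argmax(dim=1))                   # predicted category indices (0=A ... 3=D)
```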
    External validation of the tool was performed by the three radiologists closest to the consensus on a dataset of 384 mammographic images obtained from a different center.
    TRACE4BDensity showed 89% accuracy in distinguishing between low density (BI-RADS categories A and B) and high density (BI-RADS categories C and D) breast tissue, with an agreement of 90% between the tool and the three readers. All disagreements were in adjacent BI-RADS categories.
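    For context, these figures can be reproduced in principle by collapsing the four BI-RADS categories into low (A, B) versus high (C, D) density and scoring the tool against the reader consensus; the labels in this sketch are invented for illustration, not the study's data.

```python
import numpy as np

# Sketch of the evaluation described above: collapse the four BI-RADS categories
# into low (A, B) vs high (C, D) density and score the tool against the reader
# consensus. The labels here are invented, not the study's data.
reader_consensus = np.array(["A", "B", "C", "D", "B", "C", "C", "A"])
tool_prediction  = np.array(["A", "B", "C", "C", "C", "D", "C", "B"])  # errors only in adjacent categories

def is_high_density(labels):
    return np.isin(labels, ["C", "D"])     # True = high density (C or D)

binary_accuracy = np.mean(is_high_density(tool_prediction) == is_high_density(reader_consensus))
exact_agreement = np.mean(tool_prediction == reader_consensus)
print(f"low/high accuracy: {binary_accuracy:.0%}, exact four-category agreement: {exact_agreement:.0%}")
```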
    “The particular value of this tool is the possibility to overcome the suboptimal reproducibility of visual human density classification that limits its practical usability,” said study co-author Sergio Papa, M.D., from the Centro Diagnostico Italiano in Milan, Italy. “To have a robust tool that proposes the density assignment in a standardized fashion may help a lot in decision-making.”
    Such a tool would be particularly valuable, the researchers said, as breast cancer screening becomes more personalized, with density assessment serving as one important factor in risk stratification.
    “A tool such as TRACE4BDensity can help us advise women with dense breasts to have, after a negative mammogram, supplemental screening with ultrasound, MRI or contrast-enhanced mammography,” said study co-author Francesco Sardanelli, M.D., from the IRCCS Policlinico San Donato in San Donato, Italy.
    The researchers plan additional studies to better understand the full capabilities of the software.
    “We would like to further assess the AI tool TRACE4BDensity, particularly in countries where breast density regulations are not active, by evaluating the usefulness of such a tool for radiologists and patients,” said study co-author Christian Salvatore, Ph.D., senior researcher, University School for Advanced Studies IUSS Pavia and co-founder and chief executive officer of DeepTrace Technologies.
    The study is titled “Development and Validation of an AI-driven Mammographic Breast Density Classification Tool Based on Radiologist Consensus.” Collaborating with Drs. Papa, Sardanelli and Salvatore were Veronica Magni, M.D., Matteo Interlenghi, M.Sc., Andrea Cozzi, M.D., Marco Alì, Ph.D., Alcide A. Azzena, M.D., Davide Capra, M.D., Serena Carriero, M.D., Gianmarco Della Pepa, M.D., Deborah Fazzini, M.D., Giuseppe Granata, M.D., Caterina B. Monti, M.D., Ph.D., Giulia Muscogiuri, M.D., Giuseppe Pellegrino, M.D., Simone Schiaffino, M.D., and Isabella Castiglioni, M.Sc., M.B.A.


    Mathematical paradoxes demonstrate the limits of AI

    Humans are usually pretty good at recognising when they get things wrong, but artificial intelligence systems are not. According to a new study, AI generally suffers from inherent limitations due to a century-old mathematical paradox.
    Like some people, AI systems often have a degree of confidence that far exceeds their actual abilities. And like an overconfident person, many AI systems don’t know when they’re making mistakes. Sometimes it’s even more difficult for an AI system to realise when it’s making a mistake than to produce a correct result.
    Researchers from the University of Cambridge and the University of Oslo say that instability is the Achilles’ heel of modern AI and that a mathematical paradox shows AI’s limitations. Neural networks, the state-of-the-art tool in AI, roughly mimic the links between neurons in the brain. The researchers show that there are problems where stable and accurate neural networks exist, yet no algorithm can produce such a network. Only in specific cases can algorithms compute stable and accurate neural networks.
    The researchers propose a classification theory describing when neural networks can be trained to provide a trustworthy AI system under certain specific conditions. Their results are reported in the Proceedings of the National Academy of Sciences.
    Deep learning, the leading AI technology for pattern recognition, has been the subject of numerous breathless headlines. Examples include diagnosing disease more accurately than physicians or preventing road accidents through autonomous driving. However, many deep learning systems are untrustworthy and easy to fool.
    “Many AI systems are unstable, and it’s becoming a major liability, especially as they are increasingly used in high-risk areas such as disease diagnosis or autonomous vehicles,” said co-author Professor Anders Hansen from Cambridge’s Department of Applied Mathematics and Theoretical Physics. “If AI systems are used in areas where they can do real harm if they go wrong, trust in those systems has got to be the top priority.”
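    A toy example of the kind of instability at issue (this is not the construction from the paper): a perturbation far smaller than any realistic measurement noise flips the output of a simple linear classifier whose input sits near the decision boundary.

```python
import numpy as np

# Toy illustration of instability (not the construction from the PNAS paper):
# a tiny perturbation flips the output of a simple linear classifier whose
# input lies near the decision boundary.
w = np.array([1.0, -1.0])                 # weights of a tiny linear "network"
x = np.array([0.500001, 0.5])             # input almost exactly on the boundary

def predict(v):
    return int(np.dot(w, v) > 0)          # class 1 if w.v > 0, else class 0

perturbation = np.array([-1e-5, 1e-5])
print(predict(x), predict(x + perturbation))   # prints "1 0": the label flips
```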
    The paradox identified by the researchers traces back to two 20th-century mathematical giants: Alan Turing and Kurt Gödel. At the beginning of the 20th century, mathematicians attempted to justify mathematics as the ultimate consistent language of science. However, Turing and Gödel showed a paradox at the heart of mathematics: it is impossible to prove whether certain mathematical statements are true or false, and some computational problems cannot be tackled with algorithms. And, whenever a mathematical system is rich enough to describe the arithmetic we learn at school, it cannot prove its own consistency.


    Public transport: AI assesses resilience of timetables

    A brief traffic jam, a stuck door, or many passengers getting on and off at a stop — even small delays in the timetables of trains and buses can lead to major problems. A new artificial intelligence (AI) could help design schedules that are less susceptible to those minor disruptions. It was developed by a team from the Martin Luther University Halle-Wittenberg (MLU), the Fraunhofer Institute for Industrial Mathematics ITWM and the University of Kaiserslautern. The study was published in “Transportation Research Part C: Emerging Technologies.”
    The team was looking for an efficient way to test how well timetables can compensate for minor, unavoidable disruptions and delays. In technical terms, this is called robustness. Until now, such timetable optimisations have required elaborate computer simulations that calculate the routes of a large number of passengers under different scenarios. A single simulation can easily take several minutes of computing time. However, many thousands of such simulations are needed to optimise timetables. “Our new method enables a timetable’s robustness to be very accurately estimated within milliseconds,” says Professor Matthias Müller-Hannemann from the Institute of Computer Science at MLU. The researchers from Halle and Kaiserslautern used numerous methods for evaluating timetables in order to train their artificial intelligence. The team tested the new AI using timetables for Göttingen and part of southern Lower Saxony and achieved very good results.
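    The paper's own model is not reproduced here, but the general surrogate idea can be sketched as follows: train a fast regression model on robustness scores that were produced by slow simulations, then use it to score new timetables almost instantly. The features, synthetic data and model choice below are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical surrogate for timetable robustness. The features (mean buffer
# time, minimum transfer time, share of tight transfers, headway variance),
# the synthetic "simulation" scores and the model choice are all assumptions.
rng = np.random.default_rng(42)
X = rng.uniform(0.0, 1.0, size=(5000, 4))                       # candidate timetables as feature vectors
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] - 0.4 * X[:, 2] + rng.normal(0.0, 0.05, 5000)  # pretend simulation output

surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

candidate = np.array([[0.8, 0.5, 0.1, 0.3]])    # features of a new timetable
print(surrogate.predict(candidate))             # near-instant robustness estimate
```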
    “Delays are unavoidable. They happen, for example, when there is a traffic jam during rush hour, when a door of the train jams, or when a particularly large number of passengers get on or off at a stop,” Müller-Hannemann says. When transfers are tightly scheduled, even a few minutes of delay can lead to travellers missing their connections. “In the worst case, they miss the last connection of the day,” adds co-author Ralf Rückert. Another consequence is that vehicle rotations can be disrupted so that follow-on journeys begin with a delay and the problem continues to grow.
    There are limited ways to counteract such delays ahead of time: Travel times between stops and waiting times at stops could be more generously calculated, and larger time buffers could be planned at terminal stops and between subsequent trips. However, all this comes at the expense of economic efficiency. The new method could now help optimise timetables so that a very good balance can be achieved between passenger needs, such as fast connections and few transfers, timetable robustness against disruptions, and the external economic conditions of the transport companies.
    The study was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within the framework of the research unit “Integrated Planning for Public Transport.”
    Story Source:
    Materials provided by Martin-Luther-Universität Halle-Wittenberg. Note: Content may be edited for style and length.


    Even the sea has light pollution. These new maps show its extent

    The first global atlas of ocean light pollution shows that large swaths of the sea are squinting in the glare of humans’ artificial lights at night.

    From urbanized coastlines along the Persian Gulf to offshore oil complexes in the North Sea, humans’ afterglow is powerful enough to penetrate deep into many coastal waters, potentially changing the behaviors of creatures that live there, researchers report December 13 in Elementa: Science of the Anthropocene. Regional and seasonal differences — such as phytoplankton blooms or sediment from rivers — also affect the depth to which light penetrates.

    Artificial lights are known to affect land dwellers, such as by swelling or shrinking certain insect populations, or by making it harder for sparrows to fight off West Nile virus (SN: 3/30/21; SN: 8/31/21; SN: 1/19/18). But the bright lights of coastal cities, oil rigs and other offshore structures can also create a powerful glow in the sky over the sea.

    To assess where this glow is strongest, marine biogeochemist Tim Smyth of Plymouth Marine Laboratory in England and colleagues combined a world atlas of artificial night sky brightness created in 2016 with ocean and atmosphere data (SN: 6/10/16). Those data include shipboard measurements of artificial light, satellite data collected monthly from 1998 to 2017 to estimate the prevalence of light-scattering phytoplankton and sediment, and computer simulations of how different wavelengths of light move through the water.

    Not all species are equally sensitive to light, so to assess impact, the team focused on copepods, ubiquitous shrimplike creatures that are a key part of many ocean food webs. Like other tiny zooplankton, copepods use the sun or the winter moon as a cue to plunge en masse to the dark deep, seeking safety from surface predators (SN: 1/11/16; SN: 4/18/18).

    Humans’ nighttime light has the most impact in the top meter of the water, the team found. Here, artificial light is intense enough to cause a biological response across nearly 2 million square kilometers of ocean, an area roughly that of Mexico. Twenty meters down, the total affected area shrinks by more than half to 840,000 square kilometers.
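
    How quickly light fades with depth can be sketched with the standard exponential attenuation law E(z) = E0 * exp(-Kd * z); the surface irradiance, the attenuation coefficients and the copepod response threshold below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Illustrative depth profile of artificial light using exponential attenuation,
# E(z) = E0 * exp(-Kd * z). The surface irradiance E0, the attenuation
# coefficients Kd and the copepod response threshold are assumed values,
# not numbers from the study.
E0 = 1e-4                  # artificial irradiance just below the surface (W/m^2, assumed)
threshold = 1e-6           # hypothetical irradiance a copepod could respond to

for water_type, Kd in [("clear offshore water", 0.05),   # Kd in 1/m
                       ("turbid coastal water", 0.30)]:
    z_limit = np.log(E0 / threshold) / Kd
    print(f"{water_type}: above threshold down to roughly {z_limit:.0f} m")
```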



    BirdBot is energy-efficient thanks to nature as a model

    If a Tyrannosaurus rex living 66 million years ago had a leg structure similar to that of an ostrich running in the savanna today, then we can assume bird legs have stood the test of time — a good example of evolutionary selection.
    Graceful, elegant, powerful — flightless birds like the ostrich are a mechanical wonder. Ostriches, some of which weigh over 100 kg, run through the savanna at up to 55 km/h. The ostrich’s outstanding locomotor performance is thought to be enabled by the animal’s leg structure. Unlike humans, birds fold their feet back when pulling their legs up towards their bodies. Why do the animals do this? Why is this foot movement pattern energy-efficient for walking and running? And can the bird’s leg structure, with all its bones, muscles, and tendons, be transferred to walking robots?
    Alexander Badri-Spröwitz has spent more than five years on these questions. At the Max Planck Institute for Intelligent Systems (MPI-IS), he leads the Dynamic Locomotion Group. His team works at the interface between biology and robotics in the field of biomechanics and neurocontrol. The dynamic locomotion of animals and robots is the group’s main focus.
    Together with his doctoral student Alborz Aghamaleki Sarvestani, Badri-Spröwitz has constructed a robot leg that, like its natural model, is energy-efficient: BirdBot needs fewer motors than other machines and could, theoretically, scale to large size. On March 16th, Badri-Spröwitz, Aghamaleki Sarvestani, the roboticist Metin Sitti, a director at MPI-IS, and biology professor Monica A. Daley of the University of California, Irvine, published their research in the journal Science Robotics.
    Compliant spring-tendon network made of muscles and tendons
    When walking, humans pull their feet up and bend their knees, but their feet and toes point forward almost unchanged. Birds are different — in the swing phase, they fold their feet backward. But what is the function of this motion? Badri-Spröwitz and his team attribute this movement to a mechanical coupling. “It’s not the nervous system, it’s not electrical impulses, it’s not muscle activity,” Badri-Spröwitz explains. “We hypothesized a new function of the foot-leg coupling through a network of muscles and tendons that extends across multiple joints. These multi-joint muscle-tendon structures coordinate foot folding in the swing phase. In our robot, we have implemented the coupled mechanics in the leg and foot, which enables energy-efficient and robust robot walking. Our results demonstrating this mechanism in a robot lead us to believe that similar efficiency benefits also hold true for birds,” he explains.
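    A minimal kinematic sketch of such a multi-joint coupling, assuming a simple pulley-and-tendon geometry rather than BirdBot's actual spring-tendon network: an inextensible tendon spanning the knee and the ankle forces the foot to fold whenever the knee flexes, with no motor involved.

```python
import math

# Minimal kinematic sketch of a multi-joint tendon coupling (illustrative only;
# BirdBot's actual spring-tendon network and geometry are more elaborate).
# An inextensible tendon wraps pulleys at the knee and the ankle, so cable paid
# out at the knee must be taken up at the ankle; flexing the knee folds the foot.
R_KNEE = 0.03    # effective pulley radius at the knee, in metres (assumed)
R_ANKLE = 0.02   # effective pulley radius at the ankle, in metres (assumed)

def ankle_fold(knee_flexion_rad: float) -> float:
    """Foot-fold angle dictated purely by the constant tendon length."""
    return (R_KNEE / R_ANKLE) * knee_flexion_rad

for knee_deg in (0, 30, 60, 90):
    foot_deg = math.degrees(ankle_fold(math.radians(knee_deg)))
    print(f"knee flexed {knee_deg:3d} deg -> foot folds {foot_deg:5.1f} deg")
```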


    Scientists devise new technique to increase chip yield from semiconductor wafer

    Scientists from the Nanyang Technological University, Singapore (NTU Singapore) and the Korea Institute of Machinery & Materials (KIMM) have developed a technique to create a highly uniform and scalable semiconductor wafer, paving the way to higher chip yield and more cost-efficient semiconductors.
    Semiconductor chips commonly found in smartphones and computers are difficult and complex to make, requiring highly advanced machines and special environments to manufacture.
    They are typically fabricated on silicon wafers, which are then diced into the small chips that are used in devices. However, the process is imperfect and not all chips from the same wafer work or operate as desired. These defective chips are discarded, lowering semiconductor yield while increasing production cost.
    The ability to produce uniform wafers at the desired thickness is the most important factor in ensuring that every chip fabricated on the same wafer performs correctly.
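    A back-of-the-envelope way to see why defects and non-uniformity matter is the classical Poisson yield model, Y = exp(-D0 * A); the defect densities and die area below are illustrative, not figures from the NTU/KIMM study.

```python
import math

# Classical Poisson yield model Y = exp(-D0 * A): the fraction of dies with no
# killer defect, given defect density D0 and die area A. Numbers are illustrative,
# not figures from the NTU/KIMM study.
die_area_cm2 = 1.0                      # area of one chip (assumed)
wafer_diameter_cm = 30.0                # a 300 mm wafer
gross_dies = int(math.pi * (wafer_diameter_cm / 2) ** 2 / die_area_cm2)  # ignores edge loss

for defect_density in (0.05, 0.2, 0.5):             # killer defects per cm^2
    y = math.exp(-defect_density * die_area_cm2)
    print(f"D0 = {defect_density:4.2f}/cm^2 -> yield {y:5.1%}, about {int(gross_dies * y)} good dies per wafer")
```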
    Nanotransfer-based printing — a process that uses a polymer mould to print metal onto a substrate through pressure, or ‘stamping’ — has gained traction in recent years as a promising technology for its simplicity, relative cost-effectiveness, and high throughput.
    However, the technique uses a chemical adhesive layer, which causes negative effects such as surface defects and performance degradation when printed at scale, and also poses human health hazards. For these reasons, mass adoption of the technology and consequent chip application in devices has been limited.


    What's the prevailing opinion on social media? Look at the flocks, says researcher

    A University at Buffalo communication researcher has developed a framework for measuring the slippery concept of social media public opinion.
    These collective views on a topic or issue expressed on social media, distinct from the conclusions determined through survey-based public opinion polling, have never been easy to determine. But the “murmuration” framework developed and tested by Yini Zhang, PhD, an assistant professor of communication in the UB College of Arts and Sciences, and her collaborators addresses challenges, like identifying online demographics and accounting for opinion manipulation, that are characteristic of these digital battlegrounds of public discourse.
    Murmuration identifies meaningful groups of social media actors based on the “who-follows-whom” relationship. The actors attract like-minded followers to form “flocks,” which serve as the units of analysis. As opinions form and shift in response to external events, the flocks’ unfolding opinions move like the fluid murmuration of airborne starlings.
    The framework and the findings from an analysis of social network structure and opinion expression from over 193,000 Twitter accounts, which followed more than 1.3 million other accounts, suggest that flock membership can predict opinion and that the murmuration framework reveals distinct patterns of opinion intensity. The researchers studied Twitter because of the ability to see who is following whom, information that is not publicly accessible on other platforms.
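    The study's own clustering procedure is not reproduced in this summary, but the core idea, grouping accounts into “flocks” by who follows whom, can be sketched on a toy network with off-the-shelf community detection; the accounts and follow ties below are invented.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy "flock" detection on an invented who-follows-whom network (nine follow
# ties, not the study's 193,000-account dataset or its actual method).
follows = [
    ("user1", "outletA"), ("user1", "outletB"), ("user2", "outletA"),
    ("user2", "outletB"), ("user5", "outletA"),
    ("user3", "outletC"), ("user3", "outletD"),
    ("user4", "outletC"), ("user4", "outletD"),
]

G = nx.Graph()                 # treat follow ties as undirected for clustering
G.add_edges_from(follows)

for i, flock in enumerate(greedy_modularity_communities(G), start=1):
    print(f"flock {i}: {sorted(flock)}")
```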
    The results, published in the Journal of Computer-Mediated Communication, further support the echo chamber tendencies prevalent on social media, while adding important nuance to existing knowledge.
    “By identifying different flocks and examining the intensity, temporal pattern and content of their expression, we can gain deeper insights far beyond where liberals and conservatives stand on a certain issue,” says Zhang, an expert in social media and political communication. “These flocks are segments of the population, defined not by demographic variables of questionable salience, like white women aged 18-29, but by their online connections and response to events.
    “As such, we can observe opinion variations within an ideological camp and opinions of people that might not be typically assumed to have an opinion on certain issues. We see the flocks as naturally occurring, responding to things as they happen, in ways that take a conversational element into consideration.”
    Zhang says it’s important not to confuse public opinion, as measured by survey-based polling methods, and social media public opinion.
    “Arguably, social media public opinion is twice removed from the general public opinion measured by surveys,” says Zhang. “First, not everyone uses social media. Second, among those who do, only a subset of them actually express opinions on social media. They tend to be strongly opinionated and thus more willing to express their views publicly.”
    Murmuration offers insights that can complement information gathered through survey-based polling. It also moves away from mining social media for text from specific tweets. Murmuration takes full advantage of social media’s dynamic aspect. When text is removed from its context, it becomes difficult to accurately answer questions about what led to the discussion, when it began, and how it evolved over time.
    “Murmuration can allow for research that makes better use of social media data to study public opinion as a form of social interaction and reveal underlying social dynamics,” says Zhang.
    Story Source:
    Materials provided by University at Buffalo. Original written by Bert Gambini. Note: Content may be edited for style and length.