More stories

  • in

    Researchers developing AI to make the internet more accessible

    In an effort to make the internet more accessible for people with disabilities, researchers at The Ohio State University have begun developing an artificial intelligence agent that could complete complex tasks on any website using simple language commands.
    In the three decades since it was first released into the public domain, the world wide web has become an incredibly intricate, dynamic system. Yet because internet function is now so integral to society’s well-being, its complexity also makes it considerably harder to navigate.
    Today there are billions of websites available to help access information or communicate with others, and many tasks on the internet can take more than a dozen steps to complete. That’s why Yu Su, co-author of the study and an assistant professor of computer science and engineering at Ohio State, said their work, which uses information taken from live sites to create web agents — online AI helpers — is a step toward making the digital world a less confusing place.
    “For some people, especially those with disabilities, it’s not easy for them to browse the internet,” said Su. “We rely more and more on the computing world in our daily life and work, but there are increasingly a lot of barriers to that access, which, to some degree, widens the disparity.”
    The study was presented in December at the Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS), a flagship conference for AI and machine learning research.
    By taking advantage of the power of large language models, the agent works similarly to how humans behave when browsing the web, said Su. The Ohio State team showed that their model was able to understand the layout and functionality of different websites using only its ability to process and predict language.
    Researchers started the process by creating Mind2Web, the first dataset for generalist web agents. Whereas previous efforts to build web agents focused on toy, simulated websites, Mind2Web fully embraces the complex and dynamic nature of real-world websites and emphasizes an agent’s ability to generalize to entirely new websites it has never seen before. Su said that much of their success is due to their agent’s ability to keep up with the internet’s constant evolution. The team collected over 2,000 open-ended tasks from 137 different real-world websites, which they then used to train the agent.
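    To make that concrete, the sketch below shows what a single Mind2Web-style task record might look like; the field names and the example trace are purely illustrative, not the dataset’s actual schema.

    ```python
    # Hypothetical sketch of a Mind2Web-style record: a natural-language task
    # collected from a real website, paired with the action trace that solves it.
    # Field names and values are illustrative only.
    example_task = {
        "website": "example-airline.com",          # live site the task came from
        "task": "Book a one-way flight from Columbus to London departing March 3",
        "actions": [                               # ground-truth sequence of steps
            {"operation": "CLICK", "target": "One-way option"},
            {"operation": "TYPE",  "target": "Origin field",      "value": "Columbus"},
            {"operation": "TYPE",  "target": "Destination field", "value": "London"},
            {"operation": "CLICK", "target": "Search flights button"},
            # ... a complex booking task can run to a dozen or more such steps
        ],
    }
    ```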

    Some of the tasks included booking one-way and round-trip international flights, following celebrity accounts on Twitter, browsing comedy films from 1992 to 2017 streaming on Netflix, and even scheduling car knowledge tests at the DMV. Many of the tasks were very complex — for example, booking one of the international flights in the dataset required 14 actions. Such versatility allows for diverse coverage across a range of websites, and opens up a new landscape for future models to explore and learn in an autonomous fashion, said Su.
    “It’s only become possible to do something like this because of the recent development of large language models like ChatGPT,” said Su. Since the chatbot became public in November 2022, millions of users have used it to automatically generate content, from poetry and jokes to cooking advice and medical diagnoses.
    Still, because a single website can contain thousands of raw HTML elements, it would be too costly to feed so much information to a single large language model. To address this, the study also introduces a framework called MindAct, a two-pronged agent that uses both small and large language models to carry out these tasks. The team found that by using this strategy, MindAct significantly outperforms other common modeling strategies and is able to understand various concepts at a decent level.
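    One plausible way to organize such a two-pronged agent is sketched below: a small language model prunes the page’s thousands of raw HTML elements down to a shortlist, and a large language model picks the next action from that shortlist. The helper objects and prompts here are hypothetical stand-ins, not MindAct’s actual implementation.

    ```python
    # Sketch of a two-stage web agent in the spirit of MindAct: a small model
    # filters raw HTML elements, a large model chooses the next action.
    # The `small_lm` and `large_lm` interfaces are assumed, not a real API.

    def rank_elements(small_lm, task, html_elements, top_k=50):
        """Score each HTML element for relevance to the task; keep the top_k."""
        scored = [(small_lm.score(task=task, element=el), el) for el in html_elements]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [el for _, el in scored[:top_k]]

    def choose_action(large_lm, task, history, candidates):
        """Ask the large model to pick an element and an operation (click/type/select)."""
        prompt = (
            f"Task: {task}\n"
            f"Previous actions: {history}\n"
            "Candidate elements:\n" + "\n".join(candidates) + "\n"
            "Next action (element + operation):"
        )
        return large_lm.generate(prompt)

    def step(small_lm, large_lm, task, history, page_html_elements):
        shortlist = rank_elements(small_lm, task, page_html_elements)
        return choose_action(large_lm, task, history, shortlist)
    ```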
    With more fine-tuning, the study points out, the model could likely be used in tandem with both open- and closed-source large language models such as Flan-T5 or GPT-4. However, their work does highlight an increasingly relevant ethical problem in creating flexible artificial intelligence, said Su. While it could certainly serve as a helpful agent for humans surfing the web, the model could also be used to enhance systems like ChatGPT and turn the entire internet into an unprecedentedly powerful tool, said Su.
    “On the one hand, we have great potential to improve our efficiency and to allow us to focus on the most creative part of our work,” he said. “But on the other hand, there’s tremendous potential for harm.” For instance, autonomous agents able to translate online steps into the real world could influence society by taking potentially dangerous actions, such as misusing financial information or spreading misinformation.
    “We should be extremely cautious about these factors and make a concerted effort to try to mitigate them,” said Su. But as AI research continues to evolve, he notes that it’s likely society will experience major growth in the commercial use and performance of generalist web agents in the years to come, especially as the technology has already gained so much popularity in the public eye.
    “Throughout my career, my goal has always been trying to bridge the gap between human users and the computing world,” said Su. “That said, the real value of this tool is that it will really save people time and make the impossible possible.”
    The research was supported by the National Science Foundation, the U.S. Army Research Lab and the Ohio Supercomputer Center. Other co-authors were Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang and Huan Sun, all of Ohio State. More

  • in

    Researchers demonstrate that quantum entanglement and topology are inextricably linked

    For the first time, researchers from the Structured Light Laboratory (School of Physics) at the University of the Witwatersrand in South Africa, led by Professor Andrew Forbes, in collaboration with string theorist Robert de Mello Koch from Huzhou University in China (previously from Wits University), have demonstrated the remarkable ability to perturb pairs of spatially separated yet interconnected quantum entangled particles without altering their shared properties.
    “We achieved this experimental milestone by entangling two identical photons and customising their shared wave-function in such a way that their topology or structure becomes apparent only when the photons are treated as a unified entity,” explains lead author, Pedro Ornelas, an MSc student in the structured light laboratory.
    This connection between the photons was established through quantum entanglement, often referred to as ‘spooky action at a distance’, enabling particles to influence each other’s measurement outcomes even when separated by significant distances. The research was published in Nature Photonics on 8 January 2024.
    The role of topology in this work, and its ability to preserve properties, can be likened to how a coffee mug can be reshaped into the form of a doughnut: despite the changes in appearance and shape during the transformation, a single hole — a topological characteristic — remains constant and unaltered. In this way, the two objects are topologically equivalent. “The entanglement between our photons is malleable, like clay in a potter’s hands, but during the moulding process, some features are retained,” explains Forbes.
    The nature of the topology investigated here, termed Skyrmion topology, was initially explored by Tony Skyrme in the 1980s as field configurations displaying particle-like characteristics. In this context, topology refers to a global property of the fields, akin to a piece of fabric (the wave-function) whose texture (the topology) remains unchanged regardless of the direction in which it is pushed.
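    For readers who want the standard quantitative handle on this picture, the usual invariant is the skyrmion number, an integer counting how many times a unit vector field n(x, y) wraps the sphere. The general textbook form is shown below; it is not specific to the photonic experiment described here.

    ```latex
    N = \frac{1}{4\pi} \int \mathbf{n} \cdot \left( \partial_x \mathbf{n} \times \partial_y \mathbf{n} \right) \, \mathrm{d}x \, \mathrm{d}y
    ```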
    These concepts have since been realised in modern magnetic materials, liquid crystals, and even as optical analogues using classical laser beams. In the realm of condensed matter physics, skyrmions are highly regarded for their stability and noise resistance, leading to groundbreaking advancements in high-density data storage devices. “We aspire to see a similar transformative impact with our quantum-entangled skyrmions,” says Forbes.
    Previous research depicted these Skyrmions as localised at a single location. “Our work presents a paradigm shift: the topology that has traditionally been thought to exist in a single and local configuration is now nonlocal or shared between spatially separated entities,” says Ornelas.

    Expanding on this concept, the researchers utilise topology as a framework to classify or distinguish entangled states. They envisage that this fresh perspective can “serve as a labelling system for entangled states, akin to an alphabet!” as Dr Isaac Nape, a co-investigator, puts it.
    “Similar to how spheres, doughnuts, and handcuffs are distinguished by the number of holes they contain, our quantum skyrmions can be differentiated by their topological aspects in the same fashion,” says Nape. The team hopes that this might become a powerful tool that paves the way for new quantum communication protocols that use topology as an alphabet for quantum information processing across entanglement based channels.
    The findings reported in the article are crucial because researchers have grappled for decades with developing techniques to preserve entangled states. The fact that topology remains intact even as entanglement decays suggests a potentially new encoding mechanism that utilises entanglement, even in scenarios with minimal entanglement where traditional encoding protocols would fail.
    “We will focus our research efforts on defining these new protocols and expanding the landscape of topological nonlocal quantum states,” says Forbes. More

  • in

    Severe MS predicted using machine learning

    A combination of only 11 proteins can predict long-term disability outcomes in multiple sclerosis (MS) for different individuals. The identified proteins could be used to tailor treatments to the individual based on the expected severity of the disease. The study, led by researchers at Linköping University in Sweden, has been published in the journal Nature Communications.
    “A combination of 11 proteins predicted both short and long-term disease activity and disability outcomes. We also concluded that it’s important to measure these proteins in cerebrospinal fluid, which better reflects what’s going on in the central nervous system, compared with measuring in the blood,” says Julia Åkesson, doctoral student at Linköping University and the University of Skövde.
    In multiple sclerosis, the immune system attacks the person’s own body, damaging nerves in the brain and in the spinal cord. What is attacked primarily is a fatty compound called myelin, which surrounds and insulates the nerve axons so that signals can be transmitted. When myelin is damaged, transmission becomes less efficient.
    Disease progression in multiple sclerosis varies considerably from person to person. For those predicted to develop more severe disease, it is important not to lose valuable time at the onset of the disease but to get the right treatment quickly. The researchers behind the current study, which is a collaboration between Linköping University, the Karolinska Institute and the University of Skövde, wanted to find out whether it was possible to detect at an early stage of the disease which patients would require a more powerful treatment. Being able to do so would be relevant both to physicians and to those living with MS.
    “I think we’ve come one step closer to an analysis tool for selecting which patients would need more effective treatment in an early stage of the disease. But such a treatment may have side effects and be relatively expensive, and some patients don’t need it,” says Mika Gustafsson, professor of bioinformatics at the Department of Physics, Chemistry and Biology at Linköping University, who led the study.
    Finding markers linked to disease severity many years ahead is a complicated challenge. In their study, the researchers analysed nearly 1,500 proteins in samples from 92 people with suspected or recently diagnosed MS. Data from the protein analyses were combined with a large amount of information from the patients’ medical records, such as disability, results from MRI scans of the nervous system, and treatments received. Using machine learning, the researchers found a number of proteins that could predict disease progression.
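    A minimal sketch of how such a panel could be selected and validated is shown below; it is not the authors’ pipeline, and the placeholder data simply mimic the shape of the problem (92 samples, roughly 1,500 proteins, a binary severity label).

    ```python
    # Minimal sketch (not the authors' pipeline): pick a small panel of
    # informative proteins, fit a simple classifier, and cross-validate.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(92, 1500))     # placeholder for CSF protein levels
    y = rng.integers(0, 2, size=92)     # placeholder severe/mild outcome labels

    model = make_pipeline(
        SelectKBest(mutual_info_classif, k=11),   # keep the 11 most informative proteins
        LogisticRegression(max_iter=1000),
    )
    print(cross_val_score(model, X, y, cv=5).mean())
    # A truly independent check would repeat this on a separate cohort,
    # as the study did with 51 patients sampled at another hospital.
    ```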
    “Having a panel consisting of only 11 proteins makes it easy should anyone want to develop analysis for this. It won’t be as costly as measuring 1,500 proteins, so we’ve really narrowed it down to make it useful for others wanting to take this further,” says Sara Hojjati, doctoral student at the Department of Biomedical and Clinical Sciences at Linköping University.

    The research team also found that a specific protein, leaking from damaged nerve axons, is a reliable biomarker for disease activity in the short term. This protein is called neurofilament light chain, NfL. These findings confirm earlier research on the use of NfL to identify nerve damage and also suggest that the protein indicates how active the disease is.
    One of the main strengths of the study is that the combination of proteins found in the patient group from which samples were taken at Linköping University Hospital was later confirmed in a separate group consisting of 51 MS patients sampled at the Karolinska University Hospital in Stockholm.
    This study is the first to measure such a large number of proteins with a highly sensitive method, proximity extension assay combined with next-generation sequencing (PEA-NGS). This technology allows high-accuracy measurement even of very small amounts, which is important as these proteins are often present at very low levels.
    The study was funded by the Swedish Foundation for Strategic Research, the Swedish Brain Foundation, Knut and Alice Wallenberg Foundation, Margaretha af Ugglas Foundation, the Swedish Research Council, NEURO Sweden and the Swedish Foundation for MS research, and others. More

  • in

    New study uses machine learning to bridge the reality gap in quantum devices

    A study led by the University of Oxford has used the power of machine learning to overcome a key challenge affecting quantum devices. For the first time, the findings reveal a way to close the ‘reality gap’: the difference between predicted and observed behaviour from quantum devices. The results have been published in Physical Review X.
    Quantum computing could supercharge a wealth of applications, from climate modelling and financial forecasting, to drug discovery and artificial intelligence. But this will require effective ways to scale and combine individual quantum devices (also called qubits). A major barrier to this is inherent variability, where even apparently identical units exhibit different behaviours.
    Functional variability is presumed to be caused by nanoscale imperfections in the materials that quantum devices are made from. Since there is no way to measure these directly, this internal disorder cannot be captured in simulations, leading to the gap between predicted and observed outcomes.
    To address this, the research group used a “physics-informed” machine learning approach to infer these disorder characteristics indirectly. This was based on how the internal disorder affected the flow of electrons through the device.
    Lead researcher Associate Professor Natalia Ares (Department of Engineering Science, University of Oxford) said: ‘As an analogy, when we play “crazy golf” the ball may enter a tunnel and exit with a speed or direction that doesn’t match our predictions. But with a few more shots, a crazy golf simulator, and some machine learning, we might get better at predicting the ball’s movements and narrow the reality gap.’
    The researchers measured the output current for different voltage settings across an individual quantum dot device. The data were input into a simulation, which calculated the difference between the measured current and the theoretical current that would flow if no internal disorder were present. By measuring the current at many different voltage settings, the simulation was constrained to find an arrangement of internal disorder that could explain the measurements at all voltage settings. This approach combined mathematical and statistical methods with deep learning.
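    The sketch below illustrates the general idea with a toy model: a simulator predicts the current for a candidate disorder profile, and an optimiser adjusts that profile until the simulated currents match the measured ones across all voltage settings. The toy transport model is a stand-in, not the physics used in the Oxford study.

    ```python
    # Toy illustration of closing a "reality gap": infer hidden disorder by
    # forcing a simulator to reproduce currents measured at many voltages.
    import numpy as np
    from scipy.optimize import minimize

    voltages = np.linspace(-1.0, 1.0, 50)          # applied voltage settings

    def simulate_current(voltages, disorder):
        """Stand-in transport model: disorder distorts an ideal response."""
        ideal = np.tanh(5 * voltages)              # disorder-free prediction
        modes = np.sin(np.outer(np.arange(1, len(disorder) + 1), voltages))
        return ideal + disorder @ modes            # add disorder-induced distortion

    # "Measured" data come from a hidden disorder profile plus noise.
    true_disorder = np.array([0.30, -0.10, 0.05])
    measured = simulate_current(voltages, true_disorder) + 0.01 * np.random.randn(50)

    def mismatch(disorder):
        return np.mean((simulate_current(voltages, disorder) - measured) ** 2)

    fit = minimize(mismatch, x0=np.zeros(3))       # constrained by all settings at once
    print(fit.x)                                   # inferred disorder profile
    ```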
    Associate Professor Ares added: ‘In the crazy golf analogy, it would be equivalent to placing a series of sensors along the tunnel, so that we could take measurements of the ball’s speed at different points. Although we still can’t see inside the tunnel, we can use the data to inform better predictions of how the ball will behave when we take the shot.’
    Not only did the new model find suitable internal disorder profiles to describe the measured current values, it was also able to accurately predict voltage settings required for specific device operating regimes.
    Crucially, the model provides a new method to quantify the variability between quantum devices. This could enable more accurate predictions of how devices will perform, and also help to engineer optimum materials for quantum devices. It could inform compensation approaches to mitigate the unwanted effects of material imperfections in quantum devices.
    Co-author David Craig, a PhD student at the Department of Materials, University of Oxford, added, ‘Similar to how we cannot observe black holes directly but we infer their presence from their effect on surrounding matter, we have used simple measurements as a proxy for the internal variability of nanoscale quantum devices. Although the real device still has greater complexity than the model can capture, our study has demonstrated the utility of using physics-aware machine learning to narrow the reality gap.’ More

  • in

    Towards more accurate 3D object detection for robots and self-driving cars

    Robotics and autonomous vehicles are among the most rapidly growing domains in the technological landscape, with the potential to make work and transportation safer and more efficient. Since both robots and self-driving cars need to accurately perceive their surroundings, 3D object detection methods are an active area of study. Most 3D object detection methods employ LiDAR sensors to create 3D point clouds of their environment. Simply put, LiDAR sensors use laser beams to rapidly scan and measure the distances of objects and surfaces around the source. However, using LiDAR data alone can lead to errors due to the high sensitivity of LiDAR to noise, especially in adverse weather conditions such as rainfall.
    To tackle this issue, scientists have developed multi-modal 3D object detection methods that combine 3D LiDAR data with 2D RGB images taken by standard cameras. While the fusion of 2D images and 3D LiDAR data leads to more accurate 3D detection results, it still faces its own set of challenges, with accurate detection of small objects remaining difficult. The problem mainly lies in properly aligning the semantic information extracted independently from the 2D and 3D datasets, which is hard due to issues such as imprecise calibration or occlusion.
    Against this backdrop, a research team led by Professor Hiroyuki Tomiyama from Ritsumeikan University, Japan, has developed an innovative approach to make multi-modal 3D object detection more accurate and robust. The proposed scheme, called “Dynamic Point-Pixel Feature Alignment Network” (DPPFA-Net), is described in their paper published in IEEE Internet of Things Journal on 3 November 2023.
    The model comprises an arrangement of multiple instances of three novel modules: the Memory-based Point-Pixel Fusion (MPPF) module, the Deformable Point-Pixel Fusion (DPPF) module, and the Semantic Alignment Evaluator (SAE) module. The MPPF module is tasked with performing explicit interactions between intra-modal features (2D with 2D and 3D with 3D) and cross-modal features (2D with 3D). The use of the 2D image as a memory bank reduces the difficulty in network learning and makes the system more robust against noise in 3D point clouds. Moreover, it promotes the use of more comprehensive and discriminative features.
    In contrast, the DPPF module performs interactions only at pixels in key positions, which are determined via a smart sampling strategy. This allows for feature fusion in high resolutions at a low computational complexity. Finally, the SAE module helps ensure semantic alignment between both data representations during the fusion process, which mitigates the issue of feature ambiguity.
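    As a rough illustration of what point–pixel fusion can look like in code, the module below lets LiDAR point features attend to a sampled set of image-pixel features and fuses the result back into the point branch. It is a generic cross-attention block, not the published DPPFA-Net modules.

    ```python
    # Simplified cross-modal fusion: 3D point features attend to 2D pixel
    # features (e.g. sampled only at key positions, as the DPPF module does).
    # This is an illustrative block, not the DPPFA-Net implementation.
    import torch
    import torch.nn as nn

    class PointPixelCrossAttention(nn.Module):
        def __init__(self, dim=64, heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, point_feats, pixel_feats):
            # point_feats: (B, N_points, dim) from the LiDAR branch
            # pixel_feats: (B, N_pixels, dim) from the image branch
            fused, _ = self.attn(query=point_feats, key=pixel_feats, value=pixel_feats)
            return self.norm(point_feats + fused)   # residual fusion

    # Example: 1024 LiDAR points attending to 256 sampled image positions.
    module = PointPixelCrossAttention()
    points, pixels = torch.randn(2, 1024, 64), torch.randn(2, 256, 64)
    print(module(points, pixels).shape)             # torch.Size([2, 1024, 64])
    ```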
    The researchers tested DPPFA-Net by comparing it to the top performers for the widely used KITTI Vision Benchmark. Notably, the proposed network achieved average precision improvements as high as 7.18% under different noise conditions. To further test the capabilities of their model, the team created a new noisy dataset by introducing artificial multi-modal noise in the form of rainfall to the KITTI dataset. The results show that the proposed network performed better than existing models not only in the face of severe occlusions but also under various levels of adverse weather conditions. “Our extensive experiments on the KITTI dataset and challenging multi-modal noisy cases reveal that DPPFA-Net reaches a new state-of-the-art,” remarks Prof. Tomiyama.
    Notably, there are various ways in which accurate 3D object detection methods could improve our lives. Self-driving cars, which rely on such techniques, have the potential to reduce accidents and improve traffic flow and safety. Furthermore, the implications in the field of robotics should not be understated. “Our study could facilitate a better understanding and adaptation of robots to their working environments, allowing a more precise perception of small targets,” explains Prof. Tomiyama. “Such advancements will help improve the capabilities of robots in various applications.” Another use for 3D object detection networks is the pre-labeling of raw data for deep-learning perception systems. This would greatly reduce the cost of manual annotation, accelerating developments in the field.
    Overall, this study is a step in the right direction towards making autonomous systems more perceptive and assisting us better with human activities. More

  • in

    Climate – Science News


    Speed bumps under Thwaites Glacier could help slow its flow to the sea

    SAN FRANCISCO — Most of the news regarding the Thwaites Glacier, a Florida-sized slab of ice that is melting and currently contributing about 4 percent of global sea level rise, is bad. But a bit of good news may have emerged.

    A seismic survey of the bed beneath an upstream section of Thwaites has revealed rough high-rises of earth under the Antarctic glacier, which are comparable in height to the Manhattan skyline, glaciologist Coen Hofstede reported December 12 at a news conference during the American Geophysical Union fall meeting. These rugged rises may be snagging the glacier’s underbelly, slowing its flow toward the ocean and mitigating global sea level rise.

    Glaciers flow somewhat like rivers, but much slower. Where Thwaites outlets into the ocean, it connects to a floating shelf of ice that braces and partially restrains the glacier. That ice shelf was once pinned upon an underwater mountain, which helped stabilize it (SN: 12/13/21). But now the shelf is so deteriorated that it’s basically unhitched, Erin Pettit, a glaciologist at Oregon State University in Corvallis, said at the news event.

    Fortunately, though, the glacier “is not going to suddenly flow off land,” thanks partly to what’s been discovered upstream, said Pettit, who was not involved in the discovery.  

    To image Thwaites’ underbelly, researchers used a tractorlike vehicle (background, center) to haul a seismic vibrator truck on a sled, as well as a 1.5-kilometer-long chain of seismometers (foreground), across the glacier’s surface. A caboose-train (left) used for cooking, eating and repairs accompanied the vibrator truck across the ice. Coen Hofstede

    More than 70 kilometers inland from Thwaites’ ice shelf, Hofstede and his colleagues conducted a seismic survey to probe the glacier’s underbelly. The team attached a 1.5-kilometer-long daisy-chain of seismometers to a vehicle equipped with a vibrating plate. Then they drove a roughly 200-kilometer-long stretch of the glacier, using the plate to generate seismic waves and the seismometers to record the waves’ reflectance off layers of ice and earth below. “It’s almost like radar,” said Hofstede, of the Alfred Wegener Institute Helmholtz Center for Polar and Marine Research in Bremerhaven, Germany.

    A Pisten Bully (center left), a tracked vehicle able to maneuver on the ice, tows seismic equipment (red) across Thwaites Glacier. A second Pisten Bully (right) hauls the accommodation train with the crew’s sleeping tents. Ole Zeising

    The seismic waves revealed rises under Thwaites that are 10 to 20 kilometers long and toothed with blocks of sediment. These blocks stood up to 100 meters tall above the rises and stretched for up to several kilometers horizontally.

    The data showed that the upstream faces of these blocks appear to be under greater pressure than their downstream sides, and that there might be layers of deformed ice within the glacier above the rises. Hofstede hypothesizes that the rises and blocks are slowing Thwaites’ flow as its ice presses against them.

    Using computers to simulate the flow of Thwaites glacier shows that “it gets hung up on tall features,” said glaciologist Ben Smith of the University of Washington in Seattle, who was not involved in the work.

    The rises are probably related to a rift system, an area where tectonic forces have pulled the ground apart, Hofstede said. Under Thwaites, these rifts run roughly perpendicular to the glacier’s ice flow, sort of like speed bumps on a street.

    The findings will allow for more nuanced simulations of the glacier’s evolution, Hofstede said, which are crucial for understanding rates of sea level rise.


    Invisible comet tails of mucus slow sinking flakes of ‘marine snow’

    WASHINGTON — Tiny, sinking flakes of detritus in the ocean fall more slowly thanks to the goop that surrounds each flake, new observations reveal.

    The invisible mucus makes “comet tails” that surround each flake, physicist Rahul Chajwa of Stanford University reported November 19 at the American Physical Society’s Division of Fluid Dynamics meeting. Those mucus tails slow the speed at which the flakes fall. That could affect the rate at which carbon gets sequestered deep in the oceans, making the physics of this sticky goo important for understanding Earth’s climate.

    Although scientists knew the goo was a component of the “marine snow” that falls in the ocean, they hadn’t previously measured its impact on sinking speed.

    Marine snow is made of dead and living phytoplankton, decaying organic matter, feces, bacteria and other aquatic sundries, all wrapped up in mucus that’s produced by the organisms. Like the gunk known for clogging airways during respiratory virus season, the mucus is what’s called a viscoelastic fluid (SN: 3/17/16). That’s something that flows like a liquid but exhibits elastic behavior as well, springing back after being stretched.

    This underwater blizzard is not easy to study. When observed in the ocean, the particles sink swiftly out of view. In the laboratory, the particles can be viewed for longer periods, but the trek ashore degrades the delicate marine snow and kills the living organisms within it.

    Tiny particles (white dots) within a seawater-filled chamber were used to measure the rate at which fluid flows around this flake of marine snow as it falls. The chamber is designed to keep the sinking snowflake in view of the camera.

    So Chajwa and colleagues built a physics lab at sea. Aboard a research vessel in the Gulf of Maine, the team collected marine snow particles in traps 80 meters below the water’s surface. Then they loaded their catch into a device onboard, designed to observe the particles falling.

    Nicknamed “the gravity machine,” it’s a fluid-filled wheel that rotates in order to keep an individual flake in view of a camera. It’s a bit like a hamster wheel for falling debris. As the flake sinks, the wheel turns so as to move the snow in the opposite direction, allowing the snowfall to be observed indefinitely. The gravity machine was itself mounted on a gimbal designed to stave off sloshing from the rocking of the ship.

    “It’s a very nice compromise between the real marine snow that you get in the ocean versus what you can do practically in the lab,” says biophysicist Anupam Sengupta of the University of Luxembourg, who was not involved with the research.

    To observe how the fluid flowed around the particles, the researchers added tiny beads within the fluid in the gravity machine. That revealed the rate of fluid flow around the particles. The speed of fluid flow was slowed in a comet tail–shaped region around the particle, revealing the invisible mucus that sinks along with the particle.

    Marine snow particles (one shown) are surrounded with invisible mucus. Slower fluid speeds (yellow) reveal mucus that trails the flake in a comet tail shape (red dotted line). Rahul Chajwa and Manu Prakash/PrakashLab/Stanford University

    The particles sank at speeds up to 200 meters per day. The mucus played a big role in sinking speed. “The more the mucus, the slower the particles sink,” Chajwa says. On average, the mucus causes the marine snow particles to linger twice as long in the upper 100 meters of the ocean as they otherwise would, Chajwa and colleagues determined.

    If it falls deep enough, marine snow can sequester carbon away from the atmosphere. That’s because living phytoplankton, like plants, take in carbon dioxide and release oxygen. When phytoplankton form marine snow, they take that carbon along with them as they sink. If a flake reaches the ocean floor, it can settle into a scum at the bottom that caches that carbon over long time periods. The faster the particles sink, the more likely they are to make it to the abyss before being eaten by critters (SN: 6/23/22).

    Knowing how fast the particles sink is important for calculating the ocean’s impact on Earth’s climate, and how that might change as the climate warms, the researchers say. The oceans are major players in the planet’s carbon cycle (SN: 12/2/21), and scientists estimate that oceans have taken up roughly 30 percent of the carbon dioxide released by humans since industrialization. Chajwa and colleagues hope that their results can be used to refine climate models, which currently do not take the mucus into account.

    So this mucus is nothing to sneeze at. “We’re talking about microscopic physics,” says Stanford physicist Manu Prakash, a coauthor of the work, which is also reported in a paper submitted October 3 at arXiv.org. “But multiply that by the volume of the ocean … that’s what gives you the scale of the problem.”


    3 Antarctic glaciers show rapidly accelerated ice loss from ocean warming

    SAN FRANCISCO — Several Antarctic glaciers are undergoing dramatic acceleration and ice loss. Hektoria Glacier, the worst affected, has quadrupled its sliding speed and lost 25 kilometers of ice off its front in just 16 months, scientists say.

    The rapid retreat “is really unheard of,” says Mathieu Morlighem, a glaciologist at Dartmouth College who was not part of the team reporting these findings.

    The collapse was triggered by unusually warm ocean temperatures, which caused sea ice to retreat. This allowed a series of large waves to hit a section of coastline that is normally shielded from them. “What we’re seeing here is an indication of what could happen elsewhere” in Antarctica, says Naomi Ochwat, a glaciologist at the University of Colorado Boulder who presented the findings December 11 at the American Geophysical Union meeting.

    Hektoria Glacier, Green Glacier, and Crane Glacier sit near the tip of the Antarctic Peninsula, which reaches up toward South America. They flow into a crescent moon–shaped bay, called the Larsen B Embayment, that once seemed stable. As these glaciers oozed off the coastline, their ice used to merge into a floating slab around 200 meters thick. This slab, called the Larsen B Ice Shelf, was about the size of Rhode Island and filled the entire bay.

    Having existed for over 10,000 years, this ice shelf buttressed and stabilized the glaciers flowing into it. But during a warm summer in 2002, it suddenly fragmented into thousands of skinny icebergs (SN: 3/27/02).

    Hektoria, Green, and Crane glaciers — no longer contained by the ice shelf —  began to flow into the ocean several times faster than they had before, shedding billions of tons of ice over the next decade.

    Then starting in 2011, the hemorrhaging slowed down. The thin veneer of sea ice that forms over the bay each winter began to persist year round, preserved by a series of cold summers. This “landfast ice,” attached firmly to the coastline, grew five to 10 meters thick, stabilizing the glaciers. Their floating tongues gradually advanced back into the bay. But things changed abruptly in early 2022. On January 19 and 20, the landfast ice disintegrated into fragments, which drifted away.

    Satellite images taken just 10 days apart reveal the dramatic breakup of sea ice in Antarctica’s Larsen B Embayment. On January 16, 2022, sea ice filled the bay (left). By January 26 (right), the ice had fractured and was drifting away following a series of powerful waves that struck the bay several days earlier. Joshua Stevens, MODIS/LANCE/EOSDIS/NASA, WORLDVIEW/GIBS/NASA

    Using data from ocean buoys farther north, Ochwat and colleagues determined that a series of powerful waves, higher than 1.5 meters, had swept in from the northeast — cracking apart the landfast ice. Those waves were highly unusual for this area.

    The Southern Ocean, which encircles Antarctica, holds some of the world’s roughest waters. The Antarctic Peninsula extends up into this turbulent region, but its east side, where the Larsen B Embayment sits, rarely feels the waves. It is normally protected by several hundred kilometers of pack ice — floes of sea ice, pressed together by ocean currents — that dampen the waves, leaving the waters near Larsen as flat as a mirror.

    In 2022, water temperatures near the surface of the Southern Ocean rose several tenths of a degree Celsius higher than normal, causing pack ice to shrink and peel away from the peninsula. This exposed the area to waves, which then broke up the landfast sea ice.

    The glaciers accelerated as their floating tongues, no longer held in place, fragmented into bergs. Crane Glacier lost 11 kilometers of ice, nearly erasing its floating tongue; Green Glacier lost 18 kilometers, encompassing all of its floating ice.

    Hektoria lost all 15 kilometers of its floating ice — followed by another 10 kilometers of ice that is normally more stable, because it rests on the seafloor. That “is faster than any tidewater glacier retreat that we know of,” Ochwat says.

    The previous standout, Alaska’s Columbia Glacier, had lost 20 kilometers of ice in 30 years, records show. But Hektoria lost its 10 kilometers of nonfloating ice in just five months — including 2.5 kilometers that crumbled in a 3-day period.

    All of this suggests that people trying to predict sea level rise need to consider sea ice, Morlighem says. Up until now, “its role in [glacier] dynamics has been completely ignored.”

    Ochwat is waiting to see what will happen as the current Antarctic summer heats up between December and March. Hektoria and the other glaciers have been retreating only during summer months, when sea ice is absent; they pause during winter, when the surface of the bay freezes for a few months.

    If Antarctic sea ice continues to shrink, as it has since 2022, it could spell trouble, says study coauthor Ted Scambos, a glaciologist also at UC Boulder. “You’re going to have a longer section of coastline where wave action can act on the front of ice shelves and glaciers,” potentially accelerating glacial retreat.


    COP28 nations agreed to ‘transition’ from fossil fuels. That’s too slow, experts say

    Days of contentious wrangling in Dubai at the United Nations’ 28th annual climate summit ended December 13 with a historic agreement to “transition away” from fossil fuels and accelerate climate action over the next decade. The organization touted the agreement as a moment of global solidarity, marking “the beginning of the end” of the fossil fuel era.

    But the final agreement reached at COP28, signed by nearly 200 nations, did not include language that explicitly mandated phasing out fossil fuel energy, deeply frustrating many nations as well as climate scientists and activists.

    The agreement is considered the world’s first “global stocktake,” an inventory of climate actions and progress made since the 2015 Paris Agreement to limit global warming to “well below” 2 degrees Celsius above the preindustrial average (SN: 12/12/15).

    It acknowledges the conclusions of scientific research that greenhouse gas emissions will need to be cut by 43 percent by 2030 compared with 2019 levels, in order to limit global warming to 1.5 degrees Celsius by the end of the century. It then calls on nations to speed up climate actions before 2030 so as to reach global net zero by 2050 — in which greenhouse gases entering the atmosphere are balanced by their removal from the atmosphere. Among the actions called for are increasing global renewable energy generation, phasing down coal power and phasing out fossil fuel subsidies.

    But among many scientists gathered in San Francisco at the American Geophysical Union’s annual meeting to discuss climate change’s impacts to Earth’s atmosphere, polar regions, oceans and biosphere, the reaction to the language in the agreement was more frustrated than celebratory.

    “The beginning of the end? I wish it was the middle of the end,” says climate scientist Luke Parsons of the Nature Conservancy, who is based in Durham, N.C. “But you have to start somewhere, I guess.”

    It is a step forward, says Ted Scambos, a glaciologist at the University of Colorado Boulder. “Saying it out loud, that we are aiming to phase out fossil fuels, is huge.”

    It’s not a moment too soon: The globe is already experiencing many climate change–linked extreme weather events, including the hottest 12 months ever recorded (SN: 11/9/23). Still, Scambos says, “it’s a tribute to the science and the negotiators that we can take this step now, before the disastrous global impacts truly get underway.” But, he added, “I fear that the pace [of future climate action] will … still be driven by impacts arriving at our collective doors.”

    Other researchers had a grimmer take.

    “It was weak sauce,” says climate scientist Michael Mann of the University of Pennsylvania. “What we really need is a commitment to phase out fossil fuels, on a very specific timeline: We’re going to reduce carbon emissions by 50 percent this decade, bring them down to zero mid-century. Instead, they agreed to transition away from fossil fuels — the analogy that I use is, you’re diagnosed with diabetes, and you tell your doctor you’re going to transition away from doughnuts. That’s not going to cut it. It didn’t meet the moment.”

    Eric Rignot, a glaciologist at the University of California, Irvine, called the agreement “deeply disappointing and misleading,” noting that it didn’t include any language specifically calling for phasing out fossil fuels. Furthermore, he says, “COP28 keeps entertaining the idea that 1.5 degrees Celsius may be achievable, but everyone is offtrack to meet that goal. [And] for ice sheets and glaciers, even 1.5 degrees is not sustainable.”  There already are fears, for instance, that the melting of Greenland’s ice sheet can’t be stopped (SN: 8/9/21).

    Even if the world stays close to that average temperature, “the ice sheets are going to be retreating,” says Rob DeConto, a glaciologist at the University of Massachusetts at Amherst. “But you start getting out toward the end of the century, and all hell is going to break loose if we go much above 1.5. You’re talking about actually exceeding the limits of adaptation around so much of our coastlines.”  

    On December 12, the eighth anniversary of the signing of the Paris Agreement, the European Union’s Copernicus Climate Change Service noted that the world has, in effect, “lost” 19 years by delaying action to reduce fossil fuel emissions. Back in 2015, climate projections suggested that Earth’s average temperature would reach the 1.5 degrees C threshold by the year 2045 — then 30 years away. Now, projections show that the planet may reach that benchmark by 2034, just 11 years in the future.

    “We’ve got a shrinking window of opportunity,” Mann says. “And that window of opportunity will close if we don’t make dramatic and immediate reductions to our carbon emissions.”


    Ocean heat waves often lurk out of sight
    More

  • in

    New soft robots roll like tires, spin like tops and orbit like moons

    Researchers have developed a new soft robot design that engages in three simultaneous behaviors: rolling forward, spinning like a record, and following a path that orbits around a central point. The device, which operates without human or computer control, holds promise for developing soft robotic devices that can be used to navigate and map unknown environments.
    The new soft robots are called twisted ringbots. They are made of ribbon-like liquid crystal elastomers that are twisted — like a rotini noodle — and then joined together at the end to form a loop that resembles a bracelet. When the robots are placed on a surface that is at least 55 degrees Celsius (131 degrees Fahrenheit), which is hotter than the ambient air, the portion of the ribbon touching the surface contracts, while the portion of the ribbon exposed to the air does not. This induces a rolling motion; the warmer the surface, the faster the robot rolls.
    “The ribbon rolls on its horizontal axis, giving the ring forward momentum,” says Jie Yin, corresponding author of a paper on the work and an associate professor of mechanical and aerospace engineering at North Carolina State University.
    The twisted ringbot also spins along its central axis, like a record on a turntable. And as the twisted ringbot moves forward it travels in an orbital path around a central point, essentially moving in a large circle. However, if the twisted ringbot encounters a boundary — like the wall of a box — it will travel along the boundary.
    “This behavior could be particularly useful for mapping unknown environments,” Yin says.
    The twisted ringbots are examples of devices whose behavior is governed by physical intelligence, meaning their actions are determined by their structural design and the materials they are made of, rather than being directed by a computer or human intervention.
    The researchers are able to fine-tune the behavior of the twisted ringbot by engineering the geometry of the device. For example, they can control the direction that the twisted ringbot spins by twisting the ribbon one way or the other. Speed can be influenced by varying the width of the ribbon, the number of twists in the ribbon, and so on.

    In proof-of-concept testing, the researchers showed that the twisted ringbot was able to follow the contours of various confined spaces.
    “Regardless of where the twisted ringbot is introduced to these spaces, it is able to make its way to a boundary and follow the boundary lines to map the space’s contours — whether it’s a square, a triangle and so on,” says Fangjie Qi, first author of the paper and a Ph.D. student at NC State. “It also identifies gaps or damage in the boundary.
    “We were also able to map the boundaries of more complex spaces by introducing two twisted ringbots into the space, with each robot rotating in a different direction,” Qi says. “This causes them to take different paths along the boundary. And by comparing the paths of both twisted ringbots, we’re able to capture the contours of the more complex space.”
    “In principle, no matter how complex a space is, you would be able to map it if you introduced enough of the twisted ringbots to map the whole picture, each one giving part of it,” says Yin. “And, given that these are relatively inexpensive to produce, that’s viable.
    “Soft robotics is still a relatively new field,” Yin says. “Finding new ways to control the movement of soft robots in a repeatable, engineered way moves the field forward. And advancing our understanding of what is possible is exciting.”
    The paper, “Defected Twisted Ring Topology For Autonomous Periodic Flip-Spin-Orbit Soft Robot,” will be published the week of January 8 in Proceedings of the National Academy of Sciences. The paper was co-authored by Yanbin Li and Yao Zhao, postdoctoral researchers at NC State; Yaoye Hong, a recent Ph.D. graduate of NC State; and Haitao Qing, a Ph.D. student at NC State.
    The work was done with support from the National Science Foundation under grants 2005374 and 2126072. More

  • in

    New AI tool accurately detects COVID-19 from chest X-rays

    Researchers have developed a groundbreaking Artificial Intelligence (AI) system that can rapidly detect COVID-19 from chest X-rays with more than 98% accuracy. The study results have just been published in Nature Scientific Reports.
    Corresponding author Professor Amir H Gandomi, from the University of Technology Sydney (UTS) Data Science Institute, said there was a pressing need for effective automated tools to detect COVID-19, given the significant impact on public health and the global economy.
    “The most widely used COVID-19 test, real-time polymerase chain reaction (PCR), can be slow and costly, and can produce false negatives. To confirm a diagnosis, radiologists need to manually examine CT scans or X-rays, which can be time-consuming and prone to error,” said Professor Gandomi.
    “The new AI system could be particularly beneficial in countries experiencing high levels of COVID-19 where there is a shortage of radiologists. Chest X-rays are portable, widely available and provide lower exposure to ionizing radiation than CT scans,” he said.
    Common symptoms of COVID-19 include fever, cough, difficulty breathing and a sore throat; however, it can be difficult to distinguish COVID-19 from flu and other types of pneumonia.
    The new AI system uses a deep learning-based algorithm called a Custom Convolutional Neural Network (Custom-CNN) that is able to quickly and accurately distinguish between COVID-19 cases, normal cases, and pneumonia in X-ray images.
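    For a sense of what a three-class X-ray classifier looks like in code, here is a deliberately small convolutional network; the layer sizes are arbitrary and do not reproduce the Custom-CNN architecture described in the paper.

    ```python
    # Illustrative three-class chest X-ray classifier (COVID-19 / normal /
    # pneumonia). Layer sizes are arbitrary, not the paper's Custom-CNN.
    import torch
    import torch.nn as nn

    class SimpleXrayCNN(nn.Module):
        def __init__(self, num_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, num_classes)

        def forward(self, x):                      # x: (B, 1, H, W) grayscale X-rays
            return self.classifier(self.features(x).flatten(1))

    model = SimpleXrayCNN()
    logits = model(torch.randn(4, 1, 224, 224))    # a batch of 4 dummy images
    print(logits.shape)                            # torch.Size([4, 3])
    ```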
    “Deep learning offers an end-to-end solution, eliminating the need to manually search for biomarkers. The Custom-CNN model streamlines the detection process, providing a faster and more accurate diagnosis of COVID-19,” said Professor Gandomi.

    “If a PCR test or rapid antigen test shows a negative or inconclusive result, due to low sensitivity, patients may require further examination via radiological imaging to confirm or rule out the virus’s presence. In this situation the new AI system could prove beneficial.
    “While radiologists play a crucial role in medical diagnosis, AI technology can assist them in making accurate and efficient diagnoses,” said Professor Gandomi.
    The performance of the Custom-CNN model was evaluated via a comprehensive comparative analysis, with accuracy as the performance criterion. The results showed that the new model outperforms the other AI diagnostic models.
    Fast and accurate diagnosis of COVID-19 can ensure patients get the correct treatment, including COVID-19 antivirals, which work best if taken within five days of the onset of symptoms. It could also help them isolate and protect others from getting infected, reducing pandemic outbreaks.
    This breakthrough represents a significant step in combatting the ongoing challenges posed by the pandemic, potentially transforming the landscape of COVID-19 diagnosis and management. More