More stories

  • Engineers teach AI to navigate ocean with minimal energy

    Engineers at Caltech, ETH Zurich, and Harvard are developing an artificial intelligence (AI) that will allow autonomous drones to use ocean currents to aid their navigation, rather than fighting their way through them.
    “When we want robots to explore the deep ocean, especially in swarms, it’s almost impossible to control them with a joystick from 20,000 feet away at the surface. We also can’t feed them data about the local ocean currents they need to navigate because we can’t detect them from the surface. Instead, at a certain point we need ocean-borne drones to be able to make decisions about how to move for themselves,” says John O. Dabiri (MS ’03, PhD ’05), the Centennial Professor of Aeronautics and Mechanical Engineering and corresponding author of a paper about the research that was published by Nature Communications on December 8.
    The AI’s performance was tested using computer simulations, but the team behind the effort has also developed a palm-sized robot that runs the algorithm on a tiny computer chip, one that could power seaborne drones both on Earth and on other planets. The goal would be to create an autonomous system to monitor the condition of the planet’s oceans, for example by using the algorithm in combination with the prosthetics the team previously developed to help jellyfish swim faster and on command. Fully mechanical robots running the algorithm could even explore oceans on other worlds, such as Enceladus or Europa.
    In either scenario, drones would need to be able to make decisions on their own about where to go and the most efficient way to get there. To do so, they will likely only have data that they can gather themselves — information about the water currents they are currently experiencing.
    To tackle this challenge, the researchers turned to reinforcement learning (RL) networks. Unlike conventional neural networks, reinforcement learning networks do not train on a static data set; they train continuously on the experience they collect. This scheme allows them to run on much smaller computers — for this project, the team wrote software that can be installed and run on a Teensy, a 2.4-by-0.7-inch microcontroller that anyone can buy for less than $30 on Amazon and that uses only about half a watt of power.
    Using a computer simulation in which flow past an obstacle in water created several vortices moving in opposite directions, the team taught the AI to navigate in such a way that it took advantage of low-velocity regions in the wake of the vortices to coast to the target location with minimal power used. To aid its navigation, the simulated swimmer only had access to information about the water currents at its immediate location, yet it soon learned how to exploit the vortices to coast toward the desired target. In a physical robot, the AI would similarly only have access to information that could be gathered from an onboard gyroscope and accelerometer, which are both relatively small and low-cost sensors for a robotic platform.
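    The loop below is a minimal Python sketch of the kind of training described above; it is illustrative only, not the authors’ code. A point swimmer observes nothing but the flow velocity at its current position, picks a thrust direction, and is rewarded for progress toward a target minus a small energy cost. The toy flow field, the state discretization and the reward weights are all assumptions.

```python
# Illustrative tabular Q-learning sketch (not the authors' code): the "swimmer"
# senses only the local flow velocity, picks a thrust direction, and is rewarded
# for progress toward a target minus an energy penalty. All values are assumptions.
import numpy as np

rng = np.random.default_rng(0)
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]      # unit thrust directions
TARGET = np.array([9.0, 5.0])

def flow(pos):
    """Toy vortex-like flow field standing in for the simulated wake."""
    x, y = pos
    return np.array([0.4 * np.sin(y), 0.4 * np.sin(x)])

def observe(pos):
    """The swimmer only sees the local flow, coarsely binned into a state."""
    return tuple(np.clip(np.round(flow(pos) / 0.2), -2, 2).astype(int))

Q = {}                                            # state -> action-value table
alpha, gamma, eps = 0.1, 0.95, 0.2                # learning rate, discount, exploration

for episode in range(500):
    pos = np.array([0.0, 0.0])
    for step in range(200):
        s = observe(pos)
        q = Q.setdefault(s, np.zeros(len(ACTIONS)))
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(q.argmax())
        new_pos = pos + 0.5 * np.array(ACTIONS[a], dtype=float) + flow(pos)
        reward = (np.linalg.norm(TARGET - pos)
                  - np.linalg.norm(TARGET - new_pos)) - 0.05   # progress minus energy
        q2 = Q.setdefault(observe(new_pos), np.zeros(len(ACTIONS)))
        q[a] += alpha * (reward + gamma * q2.max() - q[a])     # Q-learning update
        pos = new_pos
        if np.linalg.norm(TARGET - pos) < 0.5:                 # reached the target
            break
```

    Because the policy is updated incrementally from each new experience, a lookup table of this sort (or a very small network in its place) could plausibly fit within the memory and power budget of a microcontroller-class device like the Teensy described above.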
    This kind of navigation is analogous to the way eagles and hawks ride thermals in the air, extracting energy from air currents to maneuver to a desired location with the minimum energy expended. Surprisingly, the researchers discovered that their reinforcement learning algorithm could learn navigation strategies that are even more effective than those thought to be used by real fish in the ocean.
    “We were initially just hoping the AI could compete with navigation strategies already found in real swimming animals, so we were surprised to see it learn even more effective methods by exploiting repeated trials on the computer,” says Dabiri.
    The technology is still in its infancy: ideally, the AI would be tested on every type of flow disturbance it could encounter on an ocean mission — for example, swirling vortices versus streaming tidal currents — before its effectiveness in the wild could be assessed. However, by incorporating their knowledge of ocean-flow physics into the reinforcement learning strategy, the researchers aim to avoid having to test every scenario. The current research demonstrates the potential of RL networks to address this challenge — particularly because they can operate on such small devices. To try this in the field, the team is placing the Teensy on a custom-built drone dubbed the “CARL-Bot” (Caltech Autonomous Reinforcement Learning Robot). The CARL-Bot will be dropped into a newly constructed two-story-tall water tank on Caltech’s campus and taught to navigate the ocean’s currents.
    “Not only will the robot be learning, but we’ll be learning about ocean currents and how to navigate through them,” says Peter Gunnarson, a graduate student at Caltech and lead author of the Nature Communications paper.
    Story Source:
    Materials provided by California Institute of Technology. Original written by Robert Perkins. Note: Content may be edited for style and length.

  • Wildfire smoke may ramp up toxic ozone production in cities

    Wildfire smoke and urban air pollution bring out the worst in each other.

    As wildfires rage, they transform their burned fuel into a complex chemical cocktail of smoke. Many of these airborne compounds, including ozone, cause air quality to plummet as wind carries the smoldering haze over cities. But exactly how — and to what extent — wildfire emissions contribute to ozone levels downwind of the fires has been a matter of debate for years, says Joel Thornton, an atmospheric scientist at the University of Washington in Seattle.

    A new study has now revealed the elusive chemistry behind ozone production in wildfire plumes. The findings suggest that mixing wildfire smoke with nitrogen oxides — toxic gases found in car exhaust — could pump up ozone levels in urban areas, researchers report December 8 in Science Advances.

    Atmospheric ozone is a major component of smog that can trigger respiratory problems in humans and wildlife (SN: 1/4/21). Many ingredients for making ozone — such as volatile organic compounds and nitrogen oxides — can be found in wildfire smoke, says Lu Xu, an atmospheric chemist currently at the National Oceanic and Atmospheric Administration Chemical Sciences Laboratory in Boulder, Colo. But a list of ingredients isn’t enough to replicate a wildfire’s ozone recipe. So Xu and colleagues took to the sky to observe the chemistry in action.

    Through a joint project with NASA and NOAA, the researchers worked with the Fire Influence on Regional to Global Environments and Air Quality flight campaign to transform a jetliner into a flying laboratory. In July and August 2019, the flight team collected air samples from smoldering landscapes across the western United States. As the plane passed headlong through the plumes, instruments onboard recorded the kinds and amounts of each molecule detected in the haze. By weaving in and out of the smoke as it drifted downwind from the flames, the team also analyzed how the plume’s chemical composition changed over time.

    Using these measurements along with the wind patterns and fuel from each wildfire sampled, the researchers created a straightforward equation to calculate ozone production from wildfire emissions. “We took a complex question and gave it a simple answer,” says Xu, who did the work while at Caltech.

    As expected, the researchers found that wildfire emissions contain a dizzying array of organic compounds and nitrogen oxide species, among other molecules that contribute to ozone formation. Yet their analysis showed that the concentration of nitrogen oxides decreases in the hours after the plume is swept downwind. Without this key ingredient, ozone production slows substantially.

    Air pollution from cities and other urban areas is chock-full of noxious gases. So when wildfire smoke wafts over cityscapes, a boost of nitrogen oxides could jump-start ozone production again, Xu says.

    In a typical fire season, mixes like these could increase ozone levels by as much as 3 parts per billion in the western United States, the researchers estimate. This concentration is far below the U.S. Environmental Protection Agency’s health safety standard of 70 parts per billion, but the incremental increase could still pose a health risk to people who are regularly exposed to smoke, Xu says.

    With climate change increasing the frequency and intensity of wildfires, this new ozone production mechanism has important implications for urban air quality, says Qi Zhang, an atmospheric chemist at the University of California, Davis, who was not involved in the study (SN: 9/18/20). She says the work provides an “important missing link” between wildfire emissions and ozone chemistry.

    The findings may also pose a challenge for environmental policy makers, says Thornton, who was not involved in the research. Though state and local authorities set strict regulations to limit atmospheric ozone, wildfire smoke may undermine those strategies, he says. This could make it more difficult for cities, especially in the western United States, to meet EPA ozone standards despite air quality regulations.

  • These tiny liquid robots never run out of juice as long as they have food

    When you think of a robot, images of R2-D2 or C-3PO might come to mind. But robots can serve up more than just entertainment on the big screen. In a lab, for example, robotic systems can improve safety and efficiency by performing repetitive tasks and handling harsh chemicals.
    But before a robot can get to work, it needs energy — typically from electricity or a battery. Yet even the most sophisticated robot can run out of juice. For many years, scientists have wanted to make a robot that can work autonomously and continuously, without electrical input.
    Now, as reported last week in the journal Nature Chemistry, scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of Massachusetts Amherst have demonstrated just that — through “water-walking” liquid robots that, like tiny submarines, dive below water to retrieve precious chemicals, and then surface to deliver chemicals “ashore” again and again.
    The technology is the first self-powered, aqueous robot that runs continuously without electricity. It has potential as an automated chemical synthesis or drug delivery system for pharmaceuticals.
    “We have broken a barrier in designing a liquid robotic system that can operate autonomously by using chemistry to control an object’s buoyancy,” said senior author Tom Russell, a visiting faculty scientist and professor of polymer science and engineering from the University of Massachusetts Amherst who leads the Adaptive Interfacial Assemblies Towards Structuring Liquids program in Berkeley Lab’s Materials Sciences Division.
    Russell said that the technology significantly advances a family of robotic devices called “liquibots.” In previous studies, other researchers demonstrated liquibots that autonomously perform a task, but just once; and some liquibots can perform a task continuously, but need electricity to keep on running. In contrast, “we don’t have to provide electrical energy because our liquibots get their power or ‘food’ chemically from the surrounding media,” Russell explained.

  • AI-powered computer model predicts disease progression during aging

    Using artificial intelligence, a team of University at Buffalo researchers has developed a novel system that models the progression of chronic diseases as patients age.
    Published in October in the Journal of Pharmacokinetics and Pharmacodynamics, the model assesses metabolic and cardiovascular biomarkers — measurable biological processes such as cholesterol levels, body mass index, glucose and blood pressure — to calculate health status and disease risks across a patient’s lifespan.
    The findings are important because the risk of developing metabolic and cardiovascular diseases increases with aging, a process that also has adverse effects on cellular, psychological and behavioral processes.
    “There is an unmet need for scalable approaches that can provide guidance for pharmaceutical care across the lifespan in the presence of aging and chronic co-morbidities,” says lead author Murali Ramanathan, PhD, professor of pharmaceutical sciences in the UB School of Pharmacy and Pharmaceutical Sciences. “This knowledge gap may be potentially bridged by innovative disease progression modeling.”
    The model could facilitate the assessment of long-term chronic drug therapies, and help clinicians monitor treatment responses for conditions such as diabetes, high cholesterol and high blood pressure, which become more frequent with age, says Ramanathan.
    Additional investigators include first author and UB School of Pharmacy and Pharmaceutical Sciences alumnus Mason McComb, PhD; Rachael Hageman Blair, PhD, associate professor of biostatistics in the UB School of Public Health and Health Professions; and Martin Lysy, PhD, associate professor of statistics and actuarial science at the University of Waterloo.
    The research examined data from three case studies within the third National Health and Nutrition Examination Survey (NHANES) that assessed the metabolic and cardiovascular biomarkers of nearly 40,000 people in the United States.
    Biomarkers, which also include measurements such as temperature, body weight and height, are used to diagnose, treat and monitor overall health and numerous diseases.
    The researchers examined seven metabolic biomarkers: body mass index, waist-to-hip ratio, total cholesterol, high-density lipoprotein cholesterol, triglycerides, glucose and glycohemoglobin. The cardiovascular biomarkers examined include systolic and diastolic blood pressure, pulse rate and homocysteine.
    By analyzing changes in metabolic and cardiovascular biomarkers, the model “learns” how aging affects these measurements. With machine learning, the system uses a memory of previous biomarker levels to predict future measurements, which ultimately reveal how metabolic and cardiovascular diseases progress over time.
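    As a rough illustration of that idea (not the published model), the sketch below fits a lag-2 autoregressive predictor to a single synthetic biomarker trajectory and rolls it forward: each new measurement is predicted from a short memory of the previous ones. The fake glucose data, the lag length and the projection horizon are all assumptions.

```python
# Illustrative sketch (not the published model): predict a biomarker's next value
# from a short memory of its previous values using a lag-2 autoregressive fit.
# The synthetic glucose trajectory, lag length and horizon are assumptions.
import numpy as np

rng = np.random.default_rng(1)
ages = np.arange(40, 80)                                        # yearly visits
glucose = 90 + 0.4 * (ages - 40) + rng.normal(0, 2, ages.size)  # fake upward trend

# Training pairs: (two previous visits) -> (next visit)
X = np.column_stack([glucose[:-2], glucose[1:-1], np.ones(ages.size - 2)])
y = glucose[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)                    # least-squares AR(2) fit

# Roll the fitted model forward to project beyond the observed ages
history = list(glucose[-2:])
for _ in range(10):                                             # ten more years
    history.append(coef[0] * history[-2] + coef[1] * history[-1] + coef[2])
print("projected glucose at age 89:", round(history[-1], 1))
```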
    Story Source:
    Materials provided by University at Buffalo. Original written by Marcene Robinson. Note: Content may be edited for style and length.

  • Physicists have coaxed ultracold atoms into an elusive form of quantum matter

    An elusive form of matter called a quantum spin liquid isn’t a liquid, and it doesn’t spin — but it sure is quantum.

    Predicted nearly 50 years ago, quantum spin liquids have long evaded definitive detection in the laboratory. But now, a lattice of ultracold atoms held in place with lasers has shown hallmarks of the long-sought form of matter, researchers report in the Dec. 3 Science.

    Quantum entanglement goes into overdrive in the newly fashioned material. Even atoms on opposite sides of the lattice share entanglement, or quantum links, meaning that the properties of distant atoms are correlated with one another. “It’s very, very entangled,” says physicist Giulia Semeghini of Harvard University, a coauthor of the new study. “If you pick any two points of your system, they are connected to each other through this huge entanglement.” This strong, long-range entanglement could prove useful for building quantum computers, the researchers say.

    The new material matches predictions for a quantum spin liquid, although its makeup strays a bit from conventional expectations. While the traditional idea of a quantum spin liquid relies on the quantum property of spin, which gives atoms magnetic fields, the new material is based on different atomic quirks.

    A standard quantum spin liquid should arise among atoms whose spins are in conflict. Spin causes atoms to act as tiny magnets. Normally, at low temperatures, those atoms would align their magnetic poles in a regular pattern: if one atom points up, its neighbors point down. But if atoms are arranged in a triangle, each atom has two neighbors that themselves point in opposite directions. That arrangement leaves the third atom with nowhere to turn — it can’t oppose both of its neighbors at once.

    So atoms in quantum spin liquids refuse to choose (SN: 9/21/21). Instead, the atoms wind up in a superposition, a quantum combination of spin up and down, and each atom’s state is linked with those of its compatriots. The atoms are constantly fluctuating and never settle down into an orderly arrangement of spins, similarly to how atoms in a normal liquid are scattered about rather than arranged in a regularly repeating pattern, hence the name.
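
    The frustration argument can be checked directly. The short, self-contained sketch below (an illustration, not part of the study) enumerates all eight up/down assignments on a triangle and confirms that at most two of the three neighboring pairs can ever be anti-aligned.

```python
# Quick check of the frustration argument (illustrative, not from the study):
# on a triangle, no assignment of up/down spins anti-aligns all three pairs.
from itertools import product

bonds = [(0, 1), (1, 2), (2, 0)]              # the three edges of the triangle
best = 0
for spins in product([+1, -1], repeat=3):     # all 8 up/down assignments
    satisfied = sum(spins[i] != spins[j] for i, j in bonds)
    best = max(best, satisfied)
print(best)                                   # prints 2: one bond is always frustrated
```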

    Conclusive evidence of quantum spin liquids has been hard to come by in solid materials. In the new study, the researchers took a different tack: They created an artificial material composed of 219 trapped rubidium atoms cooled to a temperature of around 10 microkelvins (about –273.15° Celsius). The array of atoms, known as a programmable quantum simulator, allows scientists to fine-tune how atoms interact to investigate exotic forms of quantum matter.

    In the new experiment, rather than the atoms’ spins being in opposition, a different property created disagreement. The researchers used lasers to put the atoms into Rydberg states, meaning one of an atom’s electrons is bumped to a very high energy level (SN: 8/29/16). If one atom is in a Rydberg state, its neighbors prefer not to be. That setup begets a Rydberg-or-not discord, analogous to the spin-up and -down battle in a traditional quantum spin liquid.

    The scientists confirmed the quantum spin liquid effect by studying the properties of atoms that fell along loops traced through the material. According to quantum math, those atoms should have exhibited certain properties unique to quantum spin liquids. The results matched expectations for a quantum spin liquid and revealed that long-range entanglement was present.

    Notably, the material’s entanglement is topological. That means it is described by a branch of mathematics called topology, in which an object is defined by certain geometrical properties, for example, its number of holes (SN: 10/4/16). Topology can protect information from being destroyed: A bagel that falls off the counter will still have exactly one hole, for example. This information-preserving feature could be a boon to quantum computers, which must grapple with fragile, easily destroyed quantum information that makes calculations subject to mistakes (SN: 6/22/20).

    Whether the material truly qualifies as a quantum spin liquid, despite not being based on spin, depends on your choice of language, says theoretical physicist Christopher Laumann of Boston University, who was not involved with the study. Some physicists use the term “spin” to describe other systems with two possible options, because it has the same mathematics as atomic spins that can point either up or down. “Words have meaning, until they don’t,” he quips. It all depends how you spin them.

  • Liquid crystals for fast switching devices

    Liquid crystals are not solid, but some of their physical properties are directional — as in a crystal. This is because their molecules can arrange themselves into certain patterns. The best-known applications include flat screens and digital displays. They are based on pixels of liquid crystals whose optical properties can be switched by electric fields.
    Some liquid crystals form the so-called cholesteric phases: the molecules self-assemble into helical structures, which are characterised by pitch and rotate either to the right or to the left. “The pitch of the cholesteric spirals determines how quickly they react to an applied electric field,” explains Dr. Alevtina Smekhova, physicist at HZB and first author of the study, which has now been published in Soft Matter.
    Simple molecular chain
    In this work, she and partners from the Academies of Sciences in Prague, Moscow and Chernogolovka investigated a liquid crystalline cholesteric compound called EZL10/10, developed in Prague. “Such cholesteric phases are usually formed by molecules with several chiral centres, but here the molecule has only one chiral centre,” explains Dr. Smekhova. It is a simple molecular chain with one lactate unit.
    Ultrashort pitch
    At BESSY II, the team has now examined this compound with soft X-ray light and determined the pitch and spatial ordering of the spirals. At only 104 nanometres, it is the shortest pitch reported to date, about half the previously known minimum for spiral structures in liquid crystals. Further analysis showed that in this material the cholesteric spirals form domains with characteristic lengths of about five pitches.
    Outlook
    “This very short pitch makes the material unique and promising for optoelectronic devices with very fast switching times,” Dr. Smekhova points out. In addition, the EZL10/10 compound is thermally and chemically stable and can easily be varied further to obtain structures with customised pitch lengths.
    Story Source:
    Materials provided by Helmholtz-Zentrum Berlin für Materialien und Energie. Note: Content may be edited for style and length.

  • Invasive grasses are taking over the American West’s sea of sagebrush

    No one likes a cheater, especially one that prospers as easily as the grass Bromus tectorum does in the American West. This invasive species is called cheatgrass because it dries out earlier than native plants, shortchanging wildlife and livestock in search of nutritious food.

    Unfortunately for those animals and the crowded-out native plants, cheatgrass and several other invasive annual grasses now dominate one-fifth of the Great Basin, a wide swath of land that includes portions of Oregon, Nevada, Idaho, Utah and California. In 2020, these invasive grasses covered more than 77,000 square kilometers of Great Basin ecosystems, including higher elevation habitats that are now accessible to nonnative plants due to climate change, researchers report November 17 in Diversity and Distributions.

    This invasion of exotic annual grasses is degrading one of North America’s most imperiled biomes: a vast sea of sagebrush shrubs, wildflowers and bunchgrasses where pronghorn and mule deer roam and where ranchers rely on healthy rangelands to raise cattle.

    What’s more, these invasive grasses, which are highly flammable when dry, are also linked to more frequent and larger wildfires. In parts of Idaho’s Snake River Plain that are dominated by cheatgrass, for example, fires now occur every three to five years as opposed to the historical average of 60 to 110 years. From 2000 to 2009, 39 of the 50 largest fires in the Great Basin were associated with cheatgrass.

    To add insult to injury, cheatgrass is more efficient at recolonizing burned areas after a fire than native plants, creating a vicious loop: More cheatgrass causes more fires, and more fires foster more of the weeds. This means that land managers are often behind the curve, trying to keep cheatgrass from spreading to prevent wildfires, while also attempting to restore native plant communities after fires so that the sagebrush ecosystems don’t transition into a monoculture of invasive grasses.

    “We need to get strategic spatially to pinpoint where to protect intact native plant communities rather than constantly chasing the problem,” says Joseph Smith, a rangeland ecology researcher at the University of Montana in Missoula.

    To do that, Smith and his colleagues quantified how much of the Great Basin has transitioned to invasive annual grasses over the last three decades. The researchers used the Rangeland Analysis Platform, or RAP, a remote sensing product powered by Google Earth Engine that estimates the type and percentage of vegetation at a baseball diamond–sized scale.

    While the satellite imagery that RAP relies on can show where annual grasses turn brown in late spring in the West or where perennial plants stay green longer into the summer, the technology can’t distinguish between native and nonnative plants. So the researchers cross-checked RAP data with on-the-ground vegetation surveys collected through the U.S. Bureau of Land Management’s assessment, inventory and monitoring strategy.
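
    The bookkeeping behind such maps is conceptually simple. The sketch below (an illustration, not the study’s pipeline) flags pixels of a fractional-cover raster that exceed a dominance threshold and sums their area; the random raster, the roughly 30-meter pixel size and the 20 percent threshold are assumptions.

```python
# Illustrative sketch (not the study's pipeline): flag raster pixels where
# annual-grass cover exceeds a threshold and sum their area. The raster,
# pixel size and threshold are assumptions used only to show the bookkeeping.
import numpy as np

rng = np.random.default_rng(2)
annual_cover = rng.uniform(0, 100, size=(1000, 1000))   # % cover per pixel (fake data)

PIXEL_M = 30.0        # assumed pixel edge, roughly the "baseball diamond" scale
THRESHOLD = 20.0      # assumed % cover above which a pixel counts as invaded

invaded = annual_cover >= THRESHOLD
area_km2 = invaded.sum() * PIXEL_M**2 / 1e6
total_km2 = annual_cover.size * PIXEL_M**2 / 1e6
print(f"invaded: {area_km2:,.0f} km^2 of {total_km2:,.0f} km^2 mapped")
```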

    Invasive annual grasses have increased eightfold in area in the Great Basin region since 1990, the team found. Smith and colleagues estimate that areas dominated by the grasses have grown by more than 2,300 square kilometers annually, a rate of takeover proportionally greater than the recent deforestation of the Amazon rainforest.

    Perhaps most alarmingly, the team found that annual grasses, most of which are invasive, are steadily moving into higher elevations previously thought to be at minimal risk of invasion (SN: 10/3/14). Invasive annual grasses don’t tolerate cold, snowy winters as well as native perennial plants. But as a result of climate change, winters are trending more mild in the Great Basin and summers more arid. While perennial plants are struggling to survive the hot, dry months, invasive grass seeds simply lie dormant and wait for fall rains.

    “In a lot of ways, invasive grasses just do an end run around perennials. They don’t have to deal with the harshest effects of climate change because of their different life cycle,” Smith explains.

    Though the scale of the problem can seem overwhelming, free remote sensing technology like RAP may help land managers better target efforts to slow the spread of these invasive grasses and explore their connection to wildfires. Smith, for instance, is now researching how mapping annual grasses in the spring might help forecast summer wildfires.

    “If we don’t know where the problem is, then we don’t know where to focus solutions,” says Bethany Bradley, an invasion ecologist and biogeographer at the University of Massachusetts Amherst who wasn’t involved in the research. “Mapping invasive grasses can certainly help people stop the grass-fire cycle by knowing where to treat them with herbicides.”

  • How statistics can aid in the fight against misinformation

    An American University math professor and his team created a statistical model that can be used to detect misinformation in social media posts. The model also avoids the “black box” problem that arises in machine learning.
    With the use of algorithms and computer models, machine learning is increasingly playing a role in helping to stop the spread of misinformation, but a major challenge for scientists is that black box of unknowability: researchers don’t understand how the machine arrives at the same decisions as the humans who trained it.
    Using a Twitter dataset with misinformation tweets about COVID-19, Zois Boukouvalas, assistant professor in AU’s Department of Mathematics and Statistics, College of Arts and Sciences, shows how statistical models can detect misinformation in social media during events like a pandemic or a natural disaster. In newly published research, Boukouvalas and his colleagues, including AU student Caitlin Moroney and Computer Science Prof. Nathalie Japkowicz, also show how the model’s decisions align with those made by humans.
    “We would like to know what a machine is thinking when it makes decisions, and how and why it agrees with the humans that trained it,” Boukouvalas said. “We don’t want to block someone’s social media account because the model makes a biased decision.”
    Boukouvalas’ method is a type of machine learning using statistics. It’s not as popular a field of study as deep learning, the complex, multi-layered type of machine learning and artificial intelligence. Statistical models are effective and provide another, somewhat untapped, way to fight misinformation, Boukouvalas said.
    For a test set of 112 real and misinformation tweets, the model achieved high prediction performance, classifying the tweets correctly with an accuracy of nearly 90 percent. (Using such a compact dataset was an efficient way to verify how the method detected the misinformation tweets.)
    “What’s significant about this finding is that our model achieved accuracy while offering transparency about how it detected the tweets that were misinformation,” Boukouvalas added. “Deep learning methods cannot achieve this kind of accuracy with transparency.”
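    As a rough illustration of the kind of transparent statistical classifier being described (not the authors’ actual model), the sketch below trains a logistic regression on TF-IDF word features; its coefficients can be read off directly to see which words push a post toward the misinformation label. The toy tweets, the labels and the use of scikit-learn are assumptions.

```python
# Illustrative interpretable text classifier (not the authors' model): logistic
# regression over TF-IDF features. Coefficients show which words drive predictions.
# The toy tweets and labels below are placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tweets = [
    "new covid vaccine trial results published in peer reviewed journal",
    "drinking bleach cures covid doctors do not want you to know",
    "cdc updates mask guidance based on latest transmission data",
    "5g towers spread the virus share before they delete this",
]
labels = [0, 1, 0, 1]  # 0 = reliable, 1 = misinformation (toy labels)

vec = TfidfVectorizer()
X = vec.fit_transform(tweets)
clf = LogisticRegression().fit(X, labels)

# Words with the largest positive weights push a post toward "misinformation"
terms = vec.get_feature_names_out()
top = sorted(zip(clf.coef_[0], terms), reverse=True)[:5]
print([term for weight, term in top])
```

    Unlike a deep network, every fitted weight here maps to a specific word, which is the sort of transparency the quote above is pointing to.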