More stories

  • Prediction of human movement during disasters to allow for more effective emergency response

    The COVID-19 pandemic, bigger and more frequent wildfires, devastating floods, and powerful storms have become unfortunate facts of life. With each disaster, people depend on the emergency response of governments, nonprofit organizations, and the private sector for aid when their lives are upended. However, a complicating factor in delivering that aid is that people tend to disperse during such disasters.
    In research recently published in the Proceedings of the National Academy of Sciences, a team led by Jianxi Gao, assistant professor of computer science at Rensselaer Polytechnic Institute, and Qi “Ryan” Wang, associate professor of civil and environmental engineering at Northeastern University, formulated a method to predict human movement during large-scale extreme events, with the goal of enabling more effective emergency responses. The model also revealed great disparity in movement among different economic groups.
    “Despite many possible variables, we found that changes in human mobility behavior during various extreme events exhibit a consistent hyperbolic decline,” said Gao. “We call it ‘spatiotemporal decay.’”
    Typically, people’s movements follow predictable patterns. When an extreme event disrupts the pattern, scientists refer to it as a “mobility perturbation.” For example, people may stop commuting to work, change their route, or even evacuate to a shelter. Not only do these mobility perturbations cause challenges when delivering aid, but they also lead to financial, medical, and quality-of-life repercussions. The nature, extent, and duration of mobility perturbations vary widely.
    Gao’s team tracked the anonymized movements of 90 million people in the United States over the course of six large-scale disasters, including wildfires, tropical storms, winter freezes, and pandemics, in order to develop a unified model.
    “Our model reveals the underlying uniformity across variables by incorporating heterogeneity across space and over time,” said Gao. “We found strong regularities in how much mobility behavior changes following extreme events and in how fast mobility behavior returns to normal, allowing us to predict complex human behaviors during large-scale crises.”
    Gao’s team found that people living close to the nucleus of the crisis — ground zero, or where a storm hits — limit their mobility significantly and quickly. Those living farther away do not alter their movement patterns as drastically. This is what is referred to as ‘spatial decay.’ Over time, mobility patterns either return to normal, inch towards normal, or become even more perturbed. The team accounted for these variables by considering ‘temporal decay’ as well.
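    The study’s exact equations aren’t reproduced in this story, but the idea lends itself to a small illustration. Below is a minimal sketch in Python, assuming a generic hyperbolic decline in both distance and time; the half-decay parameters d_half and t_half are hypothetical, not values from the paper.

    ```python
    def mobility_perturbation(distance_km, days_since_event,
                              peak=1.0, d_half=50.0, t_half=14.0):
        """Toy 'spatiotemporal decay' model: the drop in mobility is largest
        at ground zero and fades hyperbolically (roughly 1/x rather than
        e^-x) with both distance and elapsed time. All parameters assumed."""
        spatial = 1.0 / (1.0 + distance_km / d_half)        # spatial decay
        temporal = 1.0 / (1.0 + days_since_event / t_half)  # temporal decay
        return peak * spatial * temporal

    # Example: 20 km from ground zero, one week after the event
    print(mobility_perturbation(20.0, 7.0))  # ~0.48 of the peak mobility drop
    ```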
    When the team applied the model to the COVID-19 pandemic, it revealed great differences in movement among economic groups, which may help explain differing infection rates. People from wealthy areas were better able to immediately reduce their mobility and maintain that change longer. People living in lower-income areas exhibited a faster and greater hyperbolic decay.
    “In other words, wealthier people were able to socially distance,” Gao said. “Lower income people were forced to return to work.”
    “If events of recent years have taught us anything, it is that we must do our best to prepare for crises,” said Curt Breneman, Dean of the Rensselaer School of Science. “This work by Dr. Gao and his team can inform enhanced and proactive emergency response planning to mitigate future extreme events. It also shines a light on persistent social inequities that we must find new ways to address.”
    Story Source:
    Materials provided by Rensselaer Polytechnic Institute. Original written by Katie Malatino. Note: Content may be edited for style and length.

  • More than meets the eye: How patterns in nature arise and inspire everything from scientific theory to biodegradable materials

    Nature is full of patterns. Among them are tiling patterns, which mimic what you’d see on a tiled bathroom floor, characterized by both tiles and interfaces — such as grout — in between. In nature, a giraffe’s coloring is an example of a tiling pattern. But what makes these natural patterns form?
    A new University of Arizona study uses bacteria to understand how tiles and interfaces come to be. The findings have implications for understanding how complex, multicellular life might have evolved on Earth and how new biomaterials might be created from biological sources.
    In many biological systems, tiling patterns are functionally important. For example, a fly’s wings have tiles and interfaces. Veins, which provide stability and contain nerves, are interfaces, which break up a wing into smaller tiles. And the human retina at the back of the inner eye contains cells that are also arranged like a mosaic of tiles to process what’s in our field of view.
    A great deal of research has looked at how such patterns can be established through biochemical interactions. However, patterns can also be established through mechanical interactions. That process is not as well understood.
    A new paper published in Nature sheds new light on mechanical pattern formation. It was led by former UArizona postdoctoral fellow Honesty Kim. Ingmar Riedel-Kruse, an associate professor in the UArizona Department of Molecular and Cellular Biology, is the paper’s senior author.
    The Riedel-Kruse lab, in partnership with researchers from the Massachusetts Institute of Technology’s Applied Mathematics Department, used bacteria to model how tiling patterns can arise through mechanical interactions.

  • New programmable materials can sense their own movements

    MIT researchers have developed a method for 3D printing materials with tunable mechanical properties, which can sense how they are moving and interacting with the environment. The researchers create these sensing structures using just one material and a single run on a 3D printer.
    To accomplish this, the researchers began with 3D-printed lattice materials and incorporated networks of air-filled channels into the structure during the printing process. By measuring how the pressure changes within these channels when the structure is squeezed, bent, or stretched, engineers can receive feedback on how the material is moving.
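    The article doesn’t detail how pressure readings are converted into motion estimates, so the following Python sketch only illustrates the sensing idea, assuming a simple linear relationship between channel pressures and applied strain; the data and every variable name here are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical calibration: squeeze the lattice by known amounts (strain)
    # and record the pressure in each embedded air channel.
    n_tests, n_channels = 50, 4
    sensitivity = rng.normal(size=n_channels)            # assumed linear physics
    strain = rng.uniform(0.0, 0.1, size=n_tests)         # applied strains
    pressure = np.outer(strain, sensitivity)             # readings per test
    pressure += 0.001 * rng.normal(size=(n_tests, n_channels))  # sensor noise

    # Least-squares fit of a linear map: channel pressures -> strain estimate
    coef, *_ = np.linalg.lstsq(pressure, strain, rcond=None)

    # Estimating strain from a fresh reading
    new_reading = 0.05 * sensitivity
    print(new_reading @ coef)  # close to the true strain of 0.05
    ```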
    These lattice materials are composed of single cells in a repeating pattern. Changing the size or shape of the cells alters the material’s mechanical properties, such as stiffness or hardness. For instance, a denser network of cells makes a stiffer structure.
    This technique could someday be used to create flexible soft robots with embedded sensors that enable the robots to understand their posture and movements. It might also be used to produce wearable smart devices, like customized running shoes that provide feedback on how an athlete’s foot impacts the ground.
    “The idea with this work is that we can take any material that can be 3D-printed and have a simple way to route channels throughout it so we can get sensorization with structure. And if you use really complex materials, then you can have motion, perception, and structure all in one,” says co-lead author Lillian Chin, a graduate student in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).
    Joining Chin on the paper are co-lead author Ryan Truby, a former CSAIL postdoc who is now an assistant professor at Northwestern University; Annan Zhang, a CSAIL graduate student; and senior author Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of CSAIL. The paper is published in Science Advances.

  • AI may come to the rescue of future firefighters

    In firefighting, the worst flames are the ones you don’t see coming. Amid the chaos of a burning building, it is difficult to notice the signs of impending flashover — a deadly fire phenomenon wherein nearly all combustible items in a room ignite suddenly. Flashover is one of the leading causes of firefighter deaths, but new research suggests that artificial intelligence (AI) could provide first responders with a much-needed heads-up.
    Researchers at the National Institute of Standards and Technology (NIST), the Hong Kong Polytechnic University, and other institutions have developed a Flashover Prediction Neural Network (FlashNet) model to forecast the lethal events precious seconds before they erupt. In a new study published in Engineering Applications of Artificial Intelligence, FlashNet boasted an accuracy of up to 92.1% across more than a dozen common residential floorplans in the U.S. and came out on top when going head-to-head with other AI-based flashover-predicting programs.
    Flashovers tend to suddenly flare up at approximately 600 degrees Celsius (1,100 degrees Fahrenheit) and can then cause temperatures to shoot up further. To anticipate these events, existing research tools either rely on constant streams of temperature data from burning buildings or use machine learning to fill in the missing data in the likely event that heat detectors succumb to high temperatures.
    Until now, most machine learning-based prediction tools, including one the authors previously developed, have been trained to operate in a single, familiar environment. In reality, firefighters are not afforded such a luxury. As they charge into hostile territory, they may know little to nothing about the floorplan, the location of the fire, or whether doors are open or closed.
    “Our previous model only had to consider four or five rooms in one layout, but when the layout switches and you have 13 or 14 rooms, it can be a nightmare for the model,” said NIST mechanical engineer Wai Cheong Tam, co-first author of the new study. “For real-world application, we believe the key is to move to a generalized model that works for many different buildings.”
    To cope with the variability of real fires, the researchers beefed up their approach with graph neural networks (GNNs), a kind of machine learning algorithm that is good at making judgments based on graphs of nodes and edges, which represent different data points and their relationships with one another.
    “GNNs are frequently used for estimated time of arrival, or ETA, in traffic where you can be analyzing 10 to 50 different roads. It’s very complicated to properly make use of that kind of information simultaneously, so that’s where we got the idea to use GNNs,” said Eugene Yujun Fu, a research assistant professor at the Hong Kong Polytechnic University and study co-first author. “Except for our application, we’re looking at rooms instead of roads and are predicting flashover events instead of ETA in traffic.”
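    FlashNet’s actual architecture and inputs aren’t given in this story, so the sketch below shows only the rooms-as-nodes idea: a single graph-convolution (message-passing) step over a hypothetical four-room layout, with random stand-in weights where a trained model would have learned ones.

    ```python
    import numpy as np

    # Doorway connections for a hypothetical 4-room layout (1 = connected)
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 1],
                  [0, 1, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    A_hat = A + np.eye(4)                       # self-loops: keep own reading
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))    # normalize by node degree

    X = np.array([[300.0], [450.0], [350.0], [600.0]])  # room temperatures (°C)
    W = np.random.default_rng(0).normal(size=(1, 8))    # untrained stand-in weights

    # One message-passing layer: each room mixes in its neighbors' state
    H = np.maximum(0.0, D_inv @ A_hat @ X @ W)  # ReLU activation
    risk = H.mean(axis=1)                       # toy per-room readout
    print(risk)  # rooms with hotter neighborhoods score higher
    ```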
    The researchers digitally simulated more than 41,000 fires in 17 kinds of buildings, representing a majority of the U.S. residential building stock. In addition to layout, factors such as the origin of the fire, types of furniture, and whether doors and windows were open or closed varied throughout. They provided the GNN model with a set of nearly 25,000 fire cases to use as study material and then 16,000 for fine-tuning and final testing.
    Across the 17 kinds of homes, the new model’s accuracy depended on the amount of data it had to chew on and the lead time it sought to provide firefighters. However, the model’s accuracy — at best, 92.1% with 30 seconds of lead time — outperformed five other machine-learning-based tools, including the authors’ previous model. Critically, the tool produced the fewest false negatives, dangerous cases where the models fail to predict an imminent flashover.
    The authors threw FlashNet into scenarios where it had no prior information about the specifics of a building and the fire burning inside it, similar to the situation firefighters often find themselves in. Given those constraints, the tool’s performance was quite promising, Tam said. However, the authors still have a ways to go before they can take FlashNet across the finish line. As a next step, they plan to battle-test the model with real-world, rather than simulated, data.
    “In order to fully test our model’s performance, we actually need to build and burn our own structures and include some real sensors in them,” Tam said. “At the end of the day, that’s a must if we want to deploy this model in real fire scenarios.”

  • Ultracold atoms dressed by light simulate gauge theories

    Our modern understanding of the physical world is based on gauge theories: mathematical models from theoretical physics that describe the interactions between elementary particles (such as electrons or quarks) and explain quantum mechanically three of the fundamental forces of nature: the electromagnetic, weak, and strong forces. The fourth fundamental force, gravity, is described by Einstein’s theory of general relativity, which, while not yet understood in the quantum regime, is also a gauge theory. Gauge theories, the workhorse of modern physics, can also explain the exotic quantum behavior of electrons in certain materials and the error-correction codes that future quantum computers will need to work reliably.
    In order to better understand these theories, one possibility is to realize them using artificial and highly controllable quantum systems. This strategy is called quantum simulation and constitutes a special type of quantum computing. It was first proposed by the physicist Richard Feynman in the 1980s, more than fifteen years after he was awarded the Nobel Prize in Physics for his pioneering theoretical work on gauge theories. Quantum simulation can be seen as a quantum LEGO game in which experimental physicists give reality to abstract theoretical models. They build them in the laboratory “quantum brick by quantum brick,” using very well controlled quantum systems such as ultracold atoms or ions. After assembling one quantum LEGO prototype for a specific model, the researchers can measure its properties very precisely in the lab and use the results to better understand the theory it mimics. During the last decade, quantum simulation has been intensively exploited to investigate quantum materials. However, playing the quantum LEGO game with gauge theories is fundamentally more challenging. Until now, only the electromagnetic force could be investigated in this way.
    In a recent study published in Nature, ICFO experimental researchers Anika Frölian, Craig Chisholm, Ramón Ramos, Elettra Neri, and Cesar Cabrera, led by Leticia Tarruell, ICREA Professor at ICFO, in collaboration with Alessio Celi, a theoretical researcher from the Talent program at the Autonomous University of Barcelona, were able to simulate a gauge theory other than electromagnetism for the first time, using ultracold atoms.
    A gauge theory for very heavy photons
    The team set out to realize in the laboratory a gauge theory belonging to the class of topological gauge theories, different from the class of dynamical gauge theories to which electromagnetism belongs.
    In the gauge theory language, the electromagnetic force between two electrons arises when they exchange a photon: a particle of light that can propagate even when matter is absent. However, in two-dimensional quantum materials subjected to very strong magnetic fields, the photons exchanged by the electrons behave as if they were extremely heavy and can only move as long as they are attached to matter. As a result, the electrons have very peculiar properties: they can only flow along the edges of the material, in a direction set by the orientation of the magnetic field, and their charge becomes apparently fractional. This behavior is known as the fractional quantum Hall effect and is described by the Chern-Simons gauge theory (named after the mathematicians who developed one of its key elements). The behavior of the electrons restricted to a single edge of the material should also be described by a gauge theory, in this case called chiral BF, which was proposed in the 1990s but not realized in a laboratory until the ICFO and UAB researchers pulled it out of the freezer.

  • Sea sponges launch slow-motion snot rockets to clean their pores

    The next time you spot a sea sponge, say “gesundheit!” Some sponges regularly “sneeze” to clear debris from their porous bodies.

    As filter feeders, sponges draw in water through inlet pores — called ostia — and strain it through an internal canal system for nutrients. But there are also inedible bits in the water, like sediment. To prevent the undesirable junk from clogging up their outer pores, a Caribbean tube sponge (Aplysina archeri) uses mucus to trap and sneeze out unwanted particles, Niklas Kornder, a marine biologist at the University of Amsterdam, and colleagues report online August 10 in Current Biology. To the team’s surprise, it found that the sponge expels its snot from the same pores through which it absorbs water.

    It’s “like someone with a runny nose,” says team member Sally Leys, an evolutionary biologist at the University of Alberta in Edmonton, Canada. “It’s constantly streaming, but it’s going counterflow to the in-current.”

    Researchers knew that sponges used contractions dubbed “sneezing” to move water through their bodies in a one-way flow. Typically, water comes in through numerous ostia and leaves through the osculum, a hole near the sponges’ top.

    But when the team captured time-lapse video of A. archeri, it saw tiny specks of mucus exiting from the ostia, moving against the flow of incoming water. Sneezelike contractions appeared to expel and move the specks along a “mucus highway” across the surface of the sponge to points where they collected in stringy, gooey clumps. Unlike an explosive human sneeze, the sponges slowly and continuously secreted debris-laden mucus from their ostia, with one contraction taking between 20 and 50 minutes, the study finds.

    [Video] The Caribbean tube sponge (Aplysina archeri) uses contractions — called “sneezes” — to help eject mucus from its pores, or ostia. As the time-lapse video zooms in closer, it’s possible to see tiny specks of debris floating out of these pores and traveling along a “mucus highway,” where they collect into stringy clumps of goo floating above the surface of the sponge. In real time, this sponge takes between 20 and 50 minutes to complete a sneeze.

    Other sea critters, like brittle stars and small crustaceans, feast on these ocean boogers. Scientists view sponges primarily as habitat builders, but the mucus buffet shows they also perform an important function as food providers, says Amanda Kahn, a marine biologist at Moss Landing Marine Labs in California who was not involved with this work.

    “There’s so much to be said for a study that really spends time and watches,” Kahn says. “They let the animals show for themselves what was happening.”

    Most sponges appear to sneeze, so it’s likely not just A. archeri that uses the counterflow technique, Leys says. The team also noted similar behavior in an Indo-Pacific sponge (Chelonaplysilla sp.). But biologists need to dig deeper to figure out how widespread the mechanism is. It’s also unclear exactly what the mucus is or how it moves backward through the pores.

  • Researcher develops algorithm to track mental states through the skin

    Researchers at NYU Tandon have reached a key milestone in their quest to develop wearable technology that can measure key brain mechanisms through the skin.
    Rose Faghih, Associate Professor of Biomedical Engineering, has been working for the last seven years on a technology that can measure mental activity using electrodermal activity (EDA) — an electrical phenomenon of the skin that is influenced by brain activity related to emotional status. Internal stresses, whether caused by pain, exhaustion, or a particularly packed schedule, can cause changes in the EDA — changes that are directly correlated to mental states.
    The overarching goal — a Multimodal Intelligent Noninvasive brain state Decoder for Wearable AdapTive Closed-loop arcHitectures, or MINDWATCH, as Faghih calls it — would act as a way to monitor a wearer’s mental state and offer nudges to help them return to a more neutral state of mind. For example, if a person were experiencing a particularly severe bout of work-related stress, MINDWATCH could pick up on this and automatically play some relaxing music.
    Now Faghih — along with Rafiul Amin, her former PhD student — has accomplished a crucial task required for monitoring this information. For the first time, they have developed a novel inference engine that can monitor brain activity through the skin in real time with high scalability and accuracy. The results are featured in a new paper, “Physiological Characterization of Electrodermal Activity Enables Scalable Near Real-Time Autonomic Nervous System Activation Inference,” published in PLOS Computational Biology.
    “Inferring autonomic nervous system activation from wearable devices in real-time opens new opportunities for monitoring and improving mental health and cognitive engagement,” according to Faghih.
    Previous methods for measuring sympathetic nervous system activation through the skin took minutes, which is not practical for wearable devices. While her earlier work focused on inferring brain activity through sweat activation and other factors, the new study additionally models the sweat glands themselves. The model includes a 3D state-space representation of the direct secretion of sweat via pore opening, as well as diffusion followed by corresponding evaporation and reabsorption. This detailed model of the glands provides exceptional insight into inferring brain activity.
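    The inference engine itself is more sophisticated than anything that fits here, but the core inverse problem can be sketched in Python: treat measured skin conductance as sparse bursts of sympathetic nerve activity convolved with a sweat-response kernel, then invert under a nonnegativity constraint. The sampling rate, kernel time constants, and burst locations below are assumptions for illustration, not the paper’s values.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz
    from scipy.optimize import nnls

    fs, T = 4.0, 60.0                     # 4 Hz sampling, 60 s window (assumed)
    t = np.arange(0.0, T, 1.0 / fs)
    kernel = np.exp(-t / 10.0) - np.exp(-t / 1.0)  # assumed biexponential response

    # Ground-truth sympathetic bursts (sparse, nonnegative)
    driver = np.zeros_like(t)
    driver[[40, 90, 150]] = [1.0, 0.6, 0.8]

    # Convolution as a lower-triangular Toeplitz matrix: eda = A @ driver + noise
    A = toeplitz(kernel, np.zeros_like(kernel))
    eda = A @ driver + 0.005 * np.random.default_rng(0).normal(size=t.size)

    # Nonnegative least squares recovers when the nerve bursts occurred
    recovered, _ = nnls(A, eda)
    print(np.flatnonzero(recovered > 0.3))  # indices near 40, 90, 150
    ```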

  • New quantum whirlpools with tetrahedral symmetries discovered in a superfluid

    An international collaboration of scientists has created and observed an entirely new class of vortices: whirling masses of fluid or air.
    The team, led by researchers from Amherst College in the US and the University of East Anglia and Lancaster University in the UK, details in its new paper the first laboratory studies of these ‘exotic’ whirlpools in an ultracold gas of atoms at temperatures as low as tens of billionths of a degree above absolute zero.
    The discovery, announced this week in the journal Nature Communications, may have exciting future implications for implementations of quantum information and computing.
    Vortices are familiar objects in nature, from the whirlpools of water down a bathtub drain to the airflow around a hurricane.
    In quantum-mechanical systems, such as an atomic Bose-Einstein condensate, the vortices tend to be tiny and their circulation comes in discrete, quantized units. Such vortices have long been objects of fascination for physicists and have helped to illuminate the unusual properties of superfluidity and superconductivity.
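    For context, those discrete units are a textbook property of superfluids rather than a finding of this paper: the circulation around a quantum vortex can only take integer multiples of Planck’s constant divided by the particle mass,

    ```latex
    \oint \mathbf{v} \cdot d\boldsymbol{\ell} = n\,\frac{h}{m}, \qquad n \in \mathbb{Z},
    ```

    where v is the superfluid velocity integrated around a closed loop enclosing the vortex, h is Planck’s constant, and m is the atomic mass.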
    The unusual nature of the observed whirlpools here, however, is due to symmetries in the quantum gas. One especially fascinating property of physical theories, from cosmology to elementary particles, is the appearance of asymmetric worlds despite perfect underlying symmetries. For example, when water freezes to ice, disordered molecules in a liquid arrange themselves into a periodic array.