More stories

  • A new connection between topology and quantum entanglement

    Topology and entanglement are two powerful principles for characterizing the structure of complex quantum states. In a new paper in the journal Physical Review X, researchers from the University of Pennsylvania establish a relationship between the two.
    “Our work ties two big ideas together,” says Charles Kane, the Christopher H. Browne Distinguished Professor of Physics in Penn’s School of Arts & Sciences. “It’s a conceptual link between topology, which is a way of characterizing the universal features that quantum states have, and entanglement, which is a way in which quantum states can exhibit non-local correlations, where something that happens in one point in space is correlated with something that happens in another part in space. What we’ve found is a situation where those concepts are tightly intertwined.”
    The seed for exploring this connection came during the long hours Kane spent in his home office during the pandemic, pondering new ideas. One train of thought had him envisioning the classic textbook image of the Fermi surface of copper, which maps the boundary in momentum space between the metal’s occupied and unoccupied electron states. It’s a picture every physics student sees, and one with which Kane was highly familiar.
    “Of course, I learned about that picture back in the 1980s but had never thought about it as describing a topological surface,” Kane says.
    A classic way of thinking about topological surfaces, says Kane, is to consider the difference between a donut and a sphere. What’s the difference? A single hole. Topology considers these generalizable properties of a surface, which are not changed by deformation. Under this principle, a coffee cup, whose handle forms a single hole, and a donut have the same topology.
    Considered as a topological object, the Fermi surface of copper has four holes, a number known as its genus. Once Kane began thinking of the Fermi surface in this way, he wondered whether a relationship could exist between the genus and quantum entanglement.
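    For readers who want the hole-counting made explicit, the genus g of a closed surface is tied to its Euler characteristic χ by a standard textbook formula; the genus-4 value for copper’s Fermi surface is the only number below taken from the story, the rest is elementary topology:

      \chi = 2 - 2g:\qquad \text{sphere: } g = 0,\ \chi = 2;\qquad \text{torus (donut): } g = 1,\ \chi = 0;\qquad \text{copper Fermi surface: } g = 4,\ \chi = -6.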

  • Safe havens for cooperation

    Why do individuals from single cells to humans cooperate with each other and how do they form well-functioning networks? A research team led by Prof. Dr Thilo Gross from the University of Oldenburg has come a step closer to answering this question. According to their model, networks with a high level of cooperation can emerge if the cooperating individuals take a clear-cut position towards free riders. However, if the contributors leave an environment too quickly because others do not cooperate, this will ultimately lead to an overall lower level of cooperation. The six authors from the US, England and Germany present the results of their ecological model in the Proceedings of the National Academy of Sciences (PNAS).
    The paper focuses on a fundamental problem: How can individuals who contribute time and effort to create a cooperative environment survive in a system where they compete against free-riders who take advantage of their work?
    The researchers used game theory to analyse cooperation in networks, focusing on the so-called “snowdrift game.” “This game is based on a situation in which two drivers are surprised by a snowstorm and get stuck in the snow,” explains Gross, a professor of biodiversity theory at the University of Oldenburg’s Helmholtz Institute for Functional Marine Biodiversity. Each driver has a snow shovel and can choose between two options: to cooperate or not. A driver’s highest payoff comes from letting the other driver clear all the snow alone. Nevertheless, the driver who does the shoveling is still rewarded for the work, because they get home faster.
    The authors added a new option to an abstract model of this game: the players were able to quit the scene and relocate. This power turned out to be an important piece of the puzzle. “If hard-working contributors can abandon the environment in which they are exploited it leaves free-riders to their own devices while the contributors may prosper elsewhere,” says Gross.
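    For readers who want to see the bare bones of that setup, here is a minimal Python sketch of a snowdrift payoff table with an added “leave” option. The payoff values and the relocation payoff are illustrative assumptions chosen only to reproduce the ordering of outcomes described above; they are not the parameters used in the PNAS study.

      # Illustrative snowdrift payoffs (values are assumptions, not the study's parameters).
      # b: benefit of getting home once the snow is cleared; c: total cost of shoveling.
      b, c = 4.0, 2.0

      # Payoff to the row player for (row_action, column_action); "C" = shovel, "D" = free-ride.
      snowdrift = {
          ("C", "C"): b - c / 2,  # both shovel and split the cost
          ("C", "D"): b - c,      # you shovel alone; the free-rider still gets home
          ("D", "C"): b,          # you free-ride on the other driver's work
          ("D", "D"): 0.0,        # nobody shovels, nobody gets home
      }

      # The extra move studied in the model: a contributor who is exploited can quit
      # the site and try their luck elsewhere (expected payoff is an assumption).
      LEAVE_PAYOFF = 1.0

      def payoff(my_action, other_action):
          """Row player's payoff, including the hypothetical 'leave' option."""
          if my_action == "L":
              return LEAVE_PAYOFF
          return snowdrift[(my_action, other_action)]

      if __name__ == "__main__":
          for mine in ("C", "D", "L"):
              for theirs in ("C", "D"):
                  print(f"me={mine} vs them={theirs}: payoff={payoff(mine, theirs):.1f}")

    With these numbers the classic snowdrift ordering holds (free-riding on a shoveler pays most, mutual shoveling next, shoveling alone still beats a total standoff), and “leave” becomes attractive only when a contributor expects to be exploited repeatedly.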
    Always on the move
    But now the new paper shows that there’s a twist: If contributors use their power to quit too liberally then they are creating an environment where contributors and free-riders alike are always on the move.
    “It seems absurd, but we reach a state where everybody is constantly looking for a better place, but in fact all that moving around just means every place becomes the same,” says Ashkaan Fahimipour, a computational biologist at the University of California and lead author of the study. He completed the study as a PhD student supervised by Gross.
    The authors’ mathematical work reveals that the onset of this state happens in a sharp transition. On one side of this transition lies a world where everybody is always on the move, only to discover that it is bad everywhere. On the other side the situation is completely different: people are more lenient with their environment, enduring a little longer but quitting decisively when things become too bad. This creates enough departures to punish free-riders, but not enough to make every place the same. Thus safe havens for cooperation can form, where strong contributions to the common good create prosperous environments.
    In the new paper the authors focus mainly on the onset of cooperation among animals and in early civilizations, but their mathematical framework is transferable to a broad variety of settings. “Maybe our results also hold a message for the modern world,” says Gross.
    The work was a collaboration of researchers from the University of California and Princeton University in the US and the University of Bristol in the UK. In Germany, researchers from the Helmholtz Institute for Functional Marine Biodiversity in Oldenburg, the University of Oldenburg, the Max Planck Institute for Evolutionary Biology in Plön and the Alfred Wegener Institute for Polar and Marine Research in Bremerhaven contributed to the study.
    Story Source:
    Materials provided by the University of Oldenburg.

  • The Arctic is warming even faster than scientists realized

    The Arctic is heating up at a breakneck speed compared with the rest of Earth. And new analyses show that the region is warming even faster than scientists thought. Over the last four decades, the average Arctic temperature increased nearly four times as fast as the global average, researchers report August 11 in Communications Earth & Environment.

    And that’s just on average. Some parts of the Arctic Ocean, such as the Barents Sea between Russia and Norway’s Svalbard archipelago, are warming as much as seven times as fast, meteorologist Mika Rantanen of the Finnish Meteorological Institute in Helsinki and colleagues found. Previous studies have typically reported that the Arctic’s average temperature is increasing only two to three times as fast as the global average as human-driven climate change continues.

    To calculate the true pace of the accelerated warming, a phenomenon called Arctic amplification, the researchers averaged four sets of satellite data from 1979 to 2021 (SN: 7/1/20). Globally, the average temperature increase over that time was about 0.2 degrees Celsius per decade. But the Arctic was warming by about 0.75 degrees C per decade.
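    Those two rates are where the headline figure comes from:

      0.75\ {}^{\circ}\mathrm{C/decade} \;\div\; 0.2\ {}^{\circ}\mathrm{C/decade} \;\approx\; 3.8,

    which is the “nearly four times as fast” warming reported for the Arctic as a whole.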

    Even the best climate models are not doing a great job of reproducing that warming, Rantanen and colleagues say. The inability of the models to realistically simulate past Arctic amplification calls into question how well the models can project future changes there.

    It’s not clear where the problem lies. One issue may be that the models struggle to correctly simulate how sensitive Arctic temperatures are to the loss of sea ice. The loss of snow and ice, particularly sea ice, is one big reason why Arctic warming is on hyperspeed. Bright white snow and ice create a reflective shield that bounces incoming radiation from the sun back into space. But open ocean water and bare rock absorb that energy instead, raising the temperature.

  • Prediction of human movement during disasters to allow for more effective emergency response

    The COVID-19 pandemic, bigger and more frequent wildfires, devastating floods, and powerful storms have become unfortunate facts of life. With each disaster, people depend on the emergency response of governments, nonprofit organizations, and the private sector for aid when their lives are upended. However, a complicating factor in delivering that aid is that people tend to disperse when such disasters strike.
    In research recently published in The Proceedings of the National Academy of Sciences, a team led by Jianxi Gao, assistant professor of computer science at Rensselaer Polytechnic Institute, and Qi “Ryan” Wang, associate professor of civil and environmental engineering at Northeastern University, formulated a method to predict human movement during large-scale extreme events with the goal of enabling more effective emergency responses. The model also revealed great disparity in movement among different economic groups.
    “Despite many possible variables, we found that changes in human mobility behavior during various extreme events exhibit a consistent hyperbolic decline,” said Gao. “We call it ‘spatiotemporal decay.’”
    Typically, people’s movements follow predictable patterns. When an extreme event disrupts the pattern, scientists refer to it as a “mobility perturbation.” For example, people may stop commuting to work, or they may change their route, or even evacuate to a shelter. Not only do these mobility perturbations cause challenges when delivering aid, but they also lead to financial, medical, and quality of life repercussions. The nature, extent, and duration of mobility perturbations vary widely.
    Gao’s team tracked the anonymous movements of 90 million people in the United States over the course of six large-scale disasters including wildfires, tropical storms, winter freezes, and pandemics in order to develop a unified model.
    “Our model reveals the underlying uniformity across variables by incorporating heterogeneity across space and over time,” said Gao. “We found strong regularities in how much mobility behavior changes following extreme events and in how fast mobility behavior returns to normal, allowing us to predict complex human behaviors during large-scale crises.”
    Gao’s team found that people living close to the nucleus of the crisis — ground zero, or where a storm hits — limit their mobility significantly and quickly. Those living further away do not alter their movement patterns as drastically. This is what is referred to as ‘spatial decay.’ Over time, mobility patterns either return to normal, inch towards normal, or become even more perturbed. The team accounted for these variables by considering ‘temporal decay,’ as well.
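    As a rough illustration of what a “spatiotemporal decay” curve might look like, the toy function below combines a hyperbolic decline in time with a drop-off in distance from the epicenter. The functional form and all parameter names (m0, tau_days, d0) are assumptions made for illustration, not the model published in PNAS.

      def mobility_perturbation(t_days, distance_km, m0=0.6, tau_days=10.0, d0=50.0):
          """Toy 'spatiotemporal decay' curve (illustrative only, not the PNAS model).

          m0       : assumed initial drop in mobility at the epicenter (fraction)
          tau_days : assumed timescale of the hyperbolic recovery
          d0       : assumed distance scale over which the initial drop weakens
          """
          spatial = m0 / (1.0 + distance_km / d0)      # weaker perturbation farther away
          temporal = 1.0 / (1.0 + t_days / tau_days)   # hyperbolic (not exponential) recovery
          return spatial * temporal

      if __name__ == "__main__":
          for d in (0, 50, 200):        # km from the epicenter
              for t in (0, 10, 30):     # days after the event
                  print(f"d={d:>3} km, t={t:>2} d -> mobility drop {mobility_perturbation(t, d):.2f}")

    The key qualitative feature, matching the description above, is that the drop is largest near ground zero and fades smoothly both with distance and with time.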
    When the team applied the model to the COVID-19 pandemic, it revealed great differences in movement among economic groups, which may help to explain the different infection rates. People from wealthy areas were more able to immediately reduce their mobility and maintain that change longer. People living in lower income areas exhibited a faster and greater hyperbolic decay.
    “In other words, wealthier people were able to socially distance,” Gao said. “Lower income people were forced to return to work.”
    “If events of recent years have taught us anything, it is that we must do our best to prepare for crises,” said Curt Breneman, Dean of the Rensselaer School of Science. “This work by Dr. Gao and his team can inform enhanced and proactive emergency response planning to mitigate future extreme events. It also shines a light on persistent social inequities that we must find new ways to address.”
    Story Source:
    Materials provided by Rensselaer Polytechnic Institute. Original written by Katie Malatino.

  • More than meets the eye: How patterns in nature arise and inspire everything from scientific theory to biodegradable materials

    Nature is full of patterns. Among them are tiling patterns, which mimic what you’d see on a tiled bathroom floor, characterized by both tiles and interfaces — such as grout — in between. In nature, a giraffe’s coloring is an example of a tiling pattern. But what makes these natural patterns form?
    A new University of Arizona study uses bacteria to understand how tiles and interfaces come to be. The findings have implications for understanding how complex, multicellular life might have evolved on Earth and how new biomaterials might be created from biological sources.
    In many biological systems, tiling patterns are functionally important. For example, a fly’s wings have tiles and interfaces. Veins, which provide stability and contain nerves, are interfaces, which break up a wing into smaller tiles. And the human retina at the back of the inner eye contains cells that are also arranged like a mosaic of tiles to process what’s in our field of view.
    A great deal of research has looked at how such patterns can be established through biochemical interactions. However, patterns can also be established through mechanical interactions. That process is not as well understood.
    A new paper published in Nature shines new light on mechanical pattern formation. It was led by former UArizona postdoctoral fellow Honesty Kim. Ingmar Riedel-Kruse, an associate professor in the UArizona Department of Molecular and Cellular Biology, is the paper’s senior author.
    The Riedel-Kruse lab, in partnership with researchers from the Massachusetts Institute of Technology’s Applied Mathematics Department, used bacteria to model how tiling patterns can arise through mechanical interactions.

  • New programmable materials can sense their own movements

    MIT researchers have developed a method for 3D printing materials that can sense how they are moving and interacting with the environment, and whose mechanical properties can be tuned. The researchers create these sensing structures using just one material and a single run on a 3D printer.
    To accomplish this, the researchers began with 3D-printed lattice materials and incorporated networks of air-filled channels into the structure during the printing process. By measuring how the pressure changes within these channels when the structure is squeezed, bent, or stretched, engineers can receive feedback on how the material is moving.
    These lattice materials are built from unit cells repeated in a pattern. Changing the size or shape of the cells alters the material’s mechanical properties, such as stiffness or hardness. For instance, a denser network of cells makes a stiffer structure.
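    To make the sensing idea concrete, here is a toy sketch of how pressure readings from the embedded channels could be turned into an estimate of how much the lattice is being squeezed, using an assumed linear calibration. The numbers and the straight-line fit are illustrative stand-ins, not the MIT team’s actual processing.

      import numpy as np

      # Hypothetical calibration data: channel pressure change (kPa) recorded while the
      # lattice is compressed by known strains (%). Values are made up for illustration.
      pressure_kpa = np.array([0.0, 1.2, 2.5, 3.7, 5.1])
      strain_pct   = np.array([0.0, 1.0, 2.0, 3.0, 4.0])

      # Fit a linear pressure-to-strain map (a crude stand-in for a real calibration).
      slope, intercept = np.polyfit(pressure_kpa, strain_pct, 1)

      def estimate_strain(pressure_reading_kpa):
          """Estimate how much the lattice is being squeezed from one pressure reading."""
          return slope * pressure_reading_kpa + intercept

      print(f"Estimated strain at 3.0 kPa: {estimate_strain(3.0):.2f} %")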
    This technique could someday be used to create flexible soft robots with embedded sensors that enable the robots to understand their posture and movements. It might also be used to produce wearable smart devices, such as customized running shoes that provide feedback on how an athlete’s foot strikes the ground.
    “The idea with this work is that we can take any material that can be 3D-printed and have a simple way to route channels throughout it so we can get sensorization with structure. And if you use really complex materials, then you can have motion, perception, and structure all in one,” says co-lead author Lillian Chin, a graduate student in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).
    Joining Chin on the paper are co-lead author Ryan Truby, a former CSAIL postdoc who is now an assistant professor at Northwestern University; Annan Zhang, a CSAIL graduate student; and senior author Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of CSAIL. The paper is published in Science Advances.

  • AI may come to the rescue of future firefighters

    In firefighting, the worst flames are the ones you don’t see coming. Amid the chaos of a burning building, it is difficult to notice the signs of impending flashover — a deadly fire phenomenon wherein nearly all combustible items in a room ignite suddenly. Flashover is one of the leading causes of firefighter deaths, but new research suggests that artificial intelligence (AI) could provide first responders with a much-needed heads-up.
    Researchers at the National Institute of Standards and Technology (NIST), the Hong Kong Polytechnic University and other institutions have developed a Flashover Prediction Neural Network (FlashNet) model to forecast the lethal events precious seconds before they erupt. In a new study published in Engineering Applications of Artificial Intelligence, FlashNet boasted an accuracy of up to 92.1% across more than a dozen common residential floorplans in the U.S. and came out on top when going head-to-head with other AI-based flashover predicting programs.
    Flashovers tend to suddenly flare up at approximately 600 degrees Celsius (1,100 degrees Fahrenheit) and can then cause temperatures to shoot up further. To anticipate these events, existing research tools either rely on constant streams of temperature data from burning buildings or use machine learning to fill in the missing data in the likely event that heat detectors succumb to high temperatures.
    Until now, most machine learning-based prediction tools, including one the authors previously developed, have been trained to operate in a single, familiar environment. In reality, firefighters are not afforded such luxury. As they charge into hostile territory, they may know little to nothing about the floorplan, the location of fire or whether doors are open or closed.
    “Our previous model only had to consider four or five rooms in one layout, but when the layout switches and you have 13 or 14 rooms, it can be a nightmare for the model,” said NIST mechanical engineer Wai Cheong Tam, co-first author of the new study. “For real-world application, we believe the key is to move to a generalized model that works for many different buildings.”
    To cope with the variability of real fires, the researchers beefed up their approach with graph neural networks (GNN), a kind of machine learning algorithm good at making judgments based on graphs of nodes and lines, representing different data points and their relationships with one another.
    “GNNs are frequently used for estimated time of arrival, or ETA, in traffic where you can be analyzing 10 to 50 different roads. It’s very complicated to properly make use of that kind of information simultaneously, so that’s where we got the idea to use GNNs,” said Eugene Yujun Fu, a research assistant professor at the Hong Kong Polytechnic University and study co-first author. “Except for our application, we’re looking at rooms instead of roads and are predicting flashover events instead of ETA in traffic.”
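    As a bare-bones illustration of that graph idea, the sketch below treats rooms as nodes and doorways as edges, then runs one round of neighbor averaging followed by a per-room “flashover score.” The floorplan, the features, and the random, untrained weights are all invented for illustration; this is not the FlashNet architecture.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy floorplan: 5 rooms, edges where rooms share a doorway (undirected).
      n_rooms = 5
      edges = [(0, 1), (1, 2), (1, 3), (3, 4)]

      # Adjacency matrix with self-loops, row-normalized so each room averages
      # its own state with its neighbors' states.
      A = np.eye(n_rooms)
      for i, j in edges:
          A[i, j] = A[j, i] = 1.0
      A /= A.sum(axis=1, keepdims=True)

      # Made-up node features per room: [temperature reading, detector still alive?]
      X = np.array([
          [450.0, 1.0],
          [300.0, 1.0],
          [120.0, 1.0],
          [520.0, 0.0],   # detector already failed in this room
          [ 80.0, 1.0],
      ])

      # One graph-convolution-style layer with random weights (untrained, illustrative).
      W = rng.normal(scale=0.01, size=(2, 4))
      H = np.tanh(A @ X @ W)          # each room mixes in information from its neighbors

      # Per-room flashover score from a random readout layer.
      w_out = rng.normal(size=(4,))
      scores = 1.0 / (1.0 + np.exp(-(H @ w_out)))   # sigmoid -> pseudo-probabilities
      print(np.round(scores, 3))

    The point of the structure is that a room with a dead detector can still inherit information from its neighbors through the shared edges, which is the property that makes graph models attractive when sensors fail mid-fire.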
    The researchers digitally simulated more than 41,000 fires in 17 kinds of buildings, representing a majority of the U.S. residential building stock. In addition to layout, factors such as the origin of the fire, types of furniture and whether doors and windows were open or closed varied throughout. They provided the GNN model with a set of nearly 25,000 fire cases to use as study material and then 16,000 for fine tuning and final testing.
    Across the 17 kinds of homes, the new model’s accuracy depended on the amount of data it had to chew on and the lead time it sought to provide firefighters. However, the model, at best 92.1% accurate with 30 seconds of lead time, outperformed five other machine-learning-based tools, including the authors’ previous model. Critically, it produced the fewest false negatives, the dangerous cases in which a model fails to predict an imminent flashover.
    The authors threw FlashNet into scenarios where it had no prior information about the specifics of a building and the fire burning inside it, similar to the situation firefighters often find themselves in. Given those constraints, the tool’s performance was quite promising, Tam said. However, the authors still have a ways to go before they can take FlashNet across the finish line. As a next step, they plan to battle-test the model with real-world, rather than simulated, data.
    “In order to fully test our model’s performance, we actually need to build and burn our own structures and include some real sensors in them,” Tam said. “At the end of the day, that’s a must if we want to deploy this model in real fire scenarios.”

  • Ultracold atoms dressed by light simulate gauge theories

    Our modern understanding of the physical world is based on gauge theories: mathematical models from theoretical physics that describe the interactions between elementary particles (such as electrons or quarks) and explain quantum mechanically three of the fundamental forces of nature: the electromagnetic, weak, and strong forces. The fourth fundamental force, gravity, is described by Einstein’s theory of general relativity, which, while not yet understood in the quantum regime, is also a gauge theory. Gauge theories can also be used to explain the exotic quantum behavior of electrons in certain materials or the error correction codes that future quantum computers will need to work reliably, and are the workhorse of modern physics.
    In order to better understand these theories, one possibility is to realize them using artificial and highly controllable quantum systems. This strategy is called quantum simulation and constitutes a special type of quantum computing. It was first proposed by the physicist Richard Feynman in the 1980s, more than fifteen years after he was awarded the Nobel Prize in Physics for his pioneering theoretical work on gauge theories. Quantum simulation can be seen as a quantum LEGO game where experimental physicists give reality to abstract theoretical models. They build them in the laboratory “quantum brick by quantum brick,” using very well controlled quantum systems such as ultracold atoms or ions. After assembling a quantum LEGO prototype for a specific model, the researchers can measure its properties very precisely in the lab and use the results to better understand the theory that it mimics. During the last decade, quantum simulation has been intensively exploited to investigate quantum materials. However, playing the quantum LEGO game with gauge theories is fundamentally more challenging. Until now, only the electromagnetic force had been investigated in this way.
    In a recent study published in Nature, ICFO experimental researchers Anika Frölian, Craig Chisholm, Ramón Ramos, Elettra Neri, and Cesar Cabrera, led by ICREA Prof. at ICFO Leticia Tarruell, in collaboration with Alessio Celi, a theoretical researcher from the Talent program at the Autonomous University of Barcelona, were able to simulate a gauge theory other than electromagnetism for the first time, using ultracold atoms.
    A gauge theory for very heavy photons
    The team set out to realize in the laboratory a gauge theory belonging to the class of topological gauge theories, different from the class of dynamical gauge theories to which electromagnetism belongs.
    In the gauge theory language, the electromagnetic force between two electrons arises when they exchange a photon: a particle of light that can propagate even when matter is absent. However, in two-dimensional quantum materials subjected to very strong magnetic fields, the photons exchanged by the electrons behave as if they were extremely heavy and can only move as long as they are attached to matter. As a result, the electrons have very peculiar properties: they can only flow along the edges of the material, in a direction that is set by the orientation of the magnetic field, and their charge appears to become fractional. This behavior is known as the fractional quantum Hall effect, and is described by the Chern-Simons gauge theory (named after the mathematicians who developed one of its key elements). The behavior of the electrons restricted to a single edge of the material should also be described by a gauge theory, in this case called chiral BF, which was proposed in the 90s but not realized in a laboratory until the ICFO and UAB researchers pulled it out of the freezer.
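    For the mathematically inclined, the “very heavy photon” picture can be traced to the standard Chern-Simons action, which contains no Maxwell term, so the gauge field cannot propagate on its own; its equation of motion simply ties magnetic flux to particle density (flux attachment). The expression below is the textbook form, quoted up to sign and normalization conventions as background; it is not a formula taken from the Nature paper.

      S_{\mathrm{CS}} = \int d^3x \left( \frac{k}{4\pi}\,\epsilon^{\mu\nu\lambda} a_\mu \partial_\nu a_\lambda - a_\mu j^\mu \right),
      \qquad
      \frac{\delta S_{\mathrm{CS}}}{\delta a_0} = 0 \;\Longrightarrow\; \frac{k}{2\pi}\, b = \rho,

    where b is the “magnetic field” of the emergent gauge field and ρ is the particle density, so the gauge field is completely slaved to the matter it is attached to.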