More stories

  • Trotting robots reveal emergence of animal gait transitions

    With the help of a form of machine learning called deep reinforcement learning (DRL), the EPFL robot notably learned to transition from trotting to pronking — a leaping, arch-backed gait used by animals like springbok and gazelles — to navigate a challenging terrain with gaps ranging from 14 to 30 cm. The study, led by the BioRobotics Laboratory in EPFL’s School of Engineering, offers new insights into why and how such gait transitions occur in animals.
    “Previous research has introduced energy efficiency and musculoskeletal injury avoidance as the two main explanations for gait transitions. More recently, biologists have argued that stability on flat terrain could be more important. But animal and robotic experiments have shown that these hypotheses are not always valid, especially on uneven ground,” says PhD student Milad Shafiee, first author on a paper published in Nature Communications.
    Shafiee and co-authors Guillaume Bellegarda and BioRobotics Lab head Auke Ijspeert were therefore interested in a new hypothesis for why gait transitions occur: viability, or fall avoidance. To test this hypothesis, they used DRL to train a quadruped robot to cross various terrains. On flat terrain, they found that different gaits showed different levels of robustness against random pushes, and that the robot switched from a walk to a trot to maintain viability, just as quadruped animals do when they accelerate. And when confronted with successive gaps in the experimental surface, the robot spontaneously switched from trotting to pronking to avoid falls. Moreover, viability was the only factor that was improved by such gait transitions.
    “We showed that on flat terrain and challenging discrete terrain, viability leads to the emergence of gait transitions, but that energy efficiency is not necessarily improved,” Shafiee explains. “It seems that energy efficiency, which was previously thought to be a driver of such transitions, may be more of a consequence. When an animal is navigating challenging terrain, it’s likely that its first priority is not falling, followed by energy efficiency.”
    A bio-inspired learning architecture
    To model locomotion control in their robot, the researchers considered the three interacting elements that drive animal movement: the brain, the spinal cord, and sensory feedback from the body. They used DRL to train a neural network to imitate the spinal cord’s transmission of brain signals to the body as the robot crossed an experimental terrain. Then, the team assigned different weights to three possible learning goals: energy efficiency, force reduction, and viability. A series of computer simulations revealed that of these three goals, viability was the only one that prompted the robot to automatically — without instruction from the scientists — change its gait.
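    The weighting of competing learning goals can be pictured in a few lines. The sketch below is an illustration of the general setup, not the EPFL group's code; the weights, inputs, and penalty values in it are all hypothetical:

```python
# Illustrative sketch (not the authors' implementation): a multi-objective
# reward that weights the three learning goals named above. The weights,
# feature names, and penalty values here are hypothetical.

def locomotion_reward(energy_used, contact_force, fell,
                      w_energy=0.3, w_force=0.2, w_viability=0.5):
    """Combine energy efficiency, force reduction, and viability into one
    scalar reward for DRL training."""
    r_energy = -energy_used                # penalize energy expenditure
    r_force = -contact_force               # penalize large impact forces
    r_viability = -100.0 if fell else 1.0  # heavily penalize falling
    return (w_energy * r_energy + w_force * r_force
            + w_viability * r_viability)

# A simulation step where the robot stays upright vs. one where it falls:
r_ok = locomotion_reward(energy_used=2.0, contact_force=1.5, fell=False)
r_fall = locomotion_reward(energy_used=2.0, contact_force=1.5, fell=True)
```

    In a setup like this, raising the viability weight makes fall avoidance dominate the learned behaviour, which is the kind of trade-off the team's simulations explored.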
    The team emphasizes that these observations represent the first learning-based locomotion framework in which gait transitions emerge spontaneously during the learning process, as well as the most dynamic crossing of such large consecutive gaps for a quadrupedal robot.
    “Our bio-inspired learning architecture demonstrated state-of-the-art quadruped robot agility on the challenging terrain,” Shafiee says.
    The researchers aim to expand on their work with additional experiments that place different types of robots in a wider variety of challenging environments. In addition to further elucidating animal locomotion, they hope that ultimately, their work will enable the more widespread use of robots for biological research, reducing reliance on animal models and the associated ethics concerns.

  • Scientists harness the wind as a tool to move objects

    Researchers have developed a technique to move objects around with a jet of wind. The new approach makes it possible to manipulate objects at a distance and could be integrated into robots to give machines ethereal fingers.
    ‘Airflow or wind is everywhere in our living environment, moving around objects like pollen, pathogens, droplets, seeds and leaves. Wind has also been actively used in industry and in our everyday lives — for example, in leaf blowers to clean leaves. But so far, we can’t control the direction the leaves move — we can only blow them together into a pile,’ says Professor Quan Zhou from Aalto University, who led the study.
    The first step in manipulating objects with wind is understanding how objects move in the airflow. To that end, a research team at Aalto University recorded thousands of sample movements in an artificially generated airflow and used these to build templates of how objects move on a surface in a jet of air.
    The team’s analysis showed that even though the airflow is generally chaotic, it’s still regular enough to move objects in a controlled way in different directions — even back towards the nozzle blowing out the air.
    ‘We designed an algorithm that controls the direction of the air nozzle with two motors. The jet of air is blown onto the surface from several meters away and to the side of the object, so the generated airflow field moves the object in the desired direction. The control algorithm repeatedly adjusts the direction of the air nozzle so that the airflow moves the objects along the desired trajectory,’ explains Zhou.
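    The control loop Zhou describes can be sketched roughly as follows. This is a toy simulation under strong assumptions (the two-motor nozzle reduced to a single pan angle, and an idealised airflow response), not the published algorithm:

```python
import math

# Toy sketch of the feedback loop described above (not the authors' code):
# repeatedly re-aim an air nozzle so the induced airflow pushes an object
# toward the target. Names, gains, and the drift model are hypothetical.

def aim_nozzle(obj_pos, target_pos, gain=1.0):
    """Return a pan angle (radians) pointing from the object to the target."""
    dx = target_pos[0] - obj_pos[0]
    dy = target_pos[1] - obj_pos[1]
    return gain * math.atan2(dy, dx)

def control_step(obj_pos, target_pos, step=0.05):
    """One iteration: aim, then model the object drifting a small step in
    the commanded direction (a stand-in for the real airflow field)."""
    angle = aim_nozzle(obj_pos, target_pos)
    return (obj_pos[0] + step * math.cos(angle),
            obj_pos[1] + step * math.sin(angle))

# Drive a simulated object from the origin toward (1, 0).
pos = (0.0, 0.0)
for _ in range(100):
    pos = control_step(pos, (1.0, 0.0))
```

    The repeated re-aiming is the key design choice: because each step corrects the direction anew, the chaotic details of the airflow only need to be right on average for the object to track the trajectory.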
    ‘Our observations allowed us to use airflow to move objects along different paths, like circles or even complex letter-like paths. Our method is versatile in terms of the object’s shape and material — we can control the movement of objects of almost any shape,’ he continues.
    The technology still needs to be refined, but the researchers are optimistic about the untapped potential of their nature-inspired approach. It could be used to collect items that are scattered on a surface, such as pushing debris and waste to collection points. It could also be useful in complex processing tasks where physical contact is impossible, such as handling electrical circuits.
    ‘We believe that this technique could get even better with a deeper understanding of the characteristics of the airflow field, which is what we’re working on next,’ says Zhou.

  • Researchers develop a new way to instruct dance in Virtual Reality

    Researchers at Aalto University were looking for better ways to instruct dance choreography in virtual reality. The new WAVE technique they developed will be presented in May at the CHI conference, a major venue for human-computer interaction research.
    Previous techniques have largely relied on pre-rehearsal and simplification.
    ‘In virtual reality, it is difficult to visualise and communicate how a dancer should move. The human body is so multi-dimensional, and it is difficult to take in rich data in real time,’ says Professor Perttu Hämäläinen.
    The researchers started by experimenting with visualisation techniques familiar from previous dance games. But after several prototypes and stages, they decided to try out the audience wave, familiar from sporting events, to guide the dance.
    ‘The wave-like movement of the model dancers allows you to see in advance what kind of movement is coming next. And you don’t have to rehearse the movement beforehand,’ says PhD researcher Markus Laattala.
    In general, one cannot follow a new choreography in real time because of the delay in human perceptual motor control. The WAVE technique developed by the researchers, on the other hand, is based on anticipating future movement, such as a turn.
    ‘No one had figured out how to guide a continuous, fluid movement like contemporary dance. In the choreography we implemented, making a wave is communication, a kind of micro-canon in which the model dancers follow the same choreography with a split-second delay,’ says Hämäläinen.
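    The micro-canon idea can be sketched as follows; this is a minimal illustration, not the WAVE source code, and the keyframe pose representation is invented for the example:

```python
# Minimal sketch of the "micro-canon" described above (not the WAVE source
# code): several model dancers replay the same choreography, each offset by
# a fraction of a second, so dancers further along preview the user's
# upcoming movement. The pose representation here is hypothetical.

def pose_at(choreography, t):
    """Look up the pose at time t (seconds) in a sorted list of
    (time, pose) keyframes."""
    current = choreography[0][1]
    for time, pose in choreography:
        if time <= t:
            current = pose
    return current

def wave_poses(choreography, t, n_dancers=4, lead=0.25):
    """Poses for n model dancers; dancer i leads the user by i * lead seconds."""
    return [pose_at(choreography, t + i * lead) for i in range(n_dancers)]

choreography = [(0.0, "arms down"), (1.0, "arms up"), (2.0, "turn")]
poses = wave_poses(choreography, t=0.9)
```

    Just before the one-second mark, the nearest dancer still shows the current pose while the dancers further ahead already display the upcoming one, which is exactly the anticipation effect the wave provides.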

    From tai chi to exaggerated movements
    A total of 36 people took part in the one-minute dance test, comparing the new WAVE visualization to a traditional virtual version in which there was only one model dancer to follow. The differences between the techniques were clear.
    ‘This implementation is at least suitable for slow-paced dance styles. The dancer can just jump in and start dancing without having to learn anything beforehand. However, in faster movements, the visuals can get confusing, and further research and development is needed to adapt and test the approach with more dance styles,’ says Hämäläinen.
    In addition to virtual dance games, the new technique may be applicable to music videos, karaoke, and tai chi.
    ‘It would be optimal for the user if they could decide how to position the model dancers in a way that suits them. And if the idea were taken further, several dancers could send each other moves in social virtual reality. It could become a whole new way of dancing together’, says Laattala.
    ‘Current mainstream VR devices only track the movement of the headset and handheld controllers. On the other hand, machine learning can sometimes be used to infer from that data how the legs move,’ says Hämäläinen.

    ‘But in dance, inference is more difficult because the movements are stranger than, for example, walking,’ adds Laattala.
    On the other hand, if you have a mirror in the real dance space, you can follow the movement of your feet using machine vision. The dancer’s view could be modified using a virtual mirror.
    ‘A dancer’s virtual performance can be improved by exaggeration, for example by increasing flexibility, height of the jumps, or hip movement. This can make them feel that they are more skilled than they are, which research shows has a positive impact on physical activity motivation,’ says Hämäläinen.
    The virtual dance game has been developed using the Magics infrastructure’s motion capture kit, where the model dancer is dressed in a costume with sensors. These have been used to record the dance animation.
    The WAVE dance game can be downloaded for Meta Quest 2 and 3 VR devices here: https://github.com/CarouselDancing/WAVE. The GitHub repository also includes the open-source code that anyone can use to develop the game further.
    Reference:
    Laattala, M., Piitulainen, R., Ady, N., Tamariz, M., & Hämäläinen, P. (2024). Anticipatory Movement Visualization for VR Dancing. ACM SIGCHI Annual Conference on Human Factors in Computing Systems.

  • ‘Seeing the invisible’: New tech enables deep tissue imaging during surgery

    Hyperspectral imaging (HSI) is a state-of-the-art technique that captures and processes information across a given electromagnetic spectrum. Unlike traditional imaging techniques that capture light intensity at specific wavelengths, HSI collects a full spectrum at each pixel in an image. This rich spectral data enables the distinction between different materials and substances based on their unique spectral signatures. Near-infrared hyperspectral imaging (NIR-HSI) has attracted significant attention in the food and industrial fields as a non-destructive technique for analyzing the composition of objects. A notable aspect of NIR-HSI is over-thousand-nanometer (OTN) spectroscopy, which can be used for the identification of organic substances, their concentration estimation, and 2D map creation. Additionally, NIR-HSI can be used to acquire information deep into the body, making it useful for the visualization of lesions hidden in normal tissues.
    Various types of HSI devices have been developed to suit different imaging targets and situations, such as for imaging under a microscope or portable imaging and imaging in confined spaces. However, for OTN wavelengths, ordinary visible cameras lose sensitivity and only a few commercially available lenses exist that can correct chromatic aberration. Moreover, it is necessary to construct cameras, optical systems, and illumination systems for portable NIR-HSI devices, but no device that can acquire NIR-HSI with a rigid scope, crucial for portability, has been reported yet.
    Now, in a new study, a team of researchers, led by Professor Hiroshi Takemura from Tokyo University of Science (TUS) and including Toshihiro Takamatsu, Ryodai Fukushima, Kounosuke Sato, Masakazu Umezawa, and Kohei Soga, all from TUS, Hideo Yokota from RIKEN, and Abian Hernandez Guedes and Gustavo M. Calico, both from the University of Las Palmas de Gran Canaria, has recently developed the world’s first rigid endoscope system capable of HSI from visible to OTN wavelengths. Their findings were published in Volume 32, Issue 9 of Optics Express on April 17, 2024.
    At the core of this innovative system lies a supercontinuum (SC) light source and an acousto-optic tunable filter (AOTF) that can emit specific wavelengths. Prof. Takemura explains, “An SC light source can output intense coherent white light, whereas an AOTF can extract light containing a specific wavelength. This combination offers easy light transmission to the light guide and the ability to electrically switch between a broad range of wavelengths within a millisecond.”
    The team verified the optical performance and classification ability of the system, demonstrating its capability to perform HSI in the range of 490-1600 nm, enabling visible as well as NIR-HSI. Additionally, the results highlighted several advantages, such as the low light power of extracted wavelengths, enabling non-destructive imaging, and downsizing capability. Moreover, a more continuous NIR spectrum can be obtained compared to that of conventional rigid-scope-type devices.
    To demonstrate their system’s capability, the researchers used it to acquire the spectra of six types of resins and employed a neural network to classify the spectra pixel-by-pixel in multiple wavelengths. The results revealed that when the OTN wavelength range was extracted from the HSI data for training, the neural network could classify seven different targets, including the six resins and a white reference, with an accuracy of 99.6%, reproducibility of 93.7%, and specificity of 99.1%. This means that the system can successfully extract molecular vibration information of each resin at each pixel.
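    The pixel-by-pixel workflow can be sketched with synthetic data. The paper uses a trained neural network; the stand-in below swaps in a much simpler nearest-mean spectral classifier purely to show the per-pixel pipeline, and every spectrum in it is synthetic:

```python
import numpy as np

# Simplified stand-in for the pixel-wise classification described above.
# The paper uses a neural network; here a nearest-reference spectral
# classifier illustrates the same per-pixel workflow on a hyperspectral
# cube. All data below is synthetic.

rng = np.random.default_rng(0)
n_bands = 50

# Hypothetical reference spectra for three materials (rows = classes).
references = rng.random((3, n_bands))

# A synthetic 4x4 hyperspectral cube: each pixel is one reference
# spectrum plus a little noise.
true_labels = rng.integers(0, 3, size=(4, 4))
cube = references[true_labels] + 0.01 * rng.standard_normal((4, 4, n_bands))

# Classify every pixel: flatten to (pixels, bands), compare each pixel's
# spectrum to every reference, and pick the closest.
flat = cube.reshape(-1, n_bands)
dists = np.linalg.norm(flat[:, None, :] - references[None, :, :], axis=2)
predicted = dists.argmin(axis=1).reshape(4, 4)

accuracy = (predicted == true_labels).mean()
```

    The reshape-classify-reshape pattern is the essential point: HSI turns classification into an independent decision at every pixel, which is what yields the 2D material maps described above.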
    Prof. Takemura and his team also identified several future research directions for improving this method, including enhancing image quality and recall in the visible region and refining the design of the rigid endoscope to correct chromatic aberrations over a wide area. With these further advancements, in the coming years, the proposed HSI technology is expected to facilitate new applications in industrial inspection and quality control, working as a “superhuman vision” tool that unlocks new ways of perceiving and understanding the world around us.
    “This breakthrough, which combines expertise from different fields through a collaborative, cross-disciplinary approach, enables the identification of invaded cancer areas and the visualization of deep tissues such as blood vessels, nerves, and ureters during medical procedures, leading to improved surgical navigation. Additionally, it enables measurement using light previously unseen in industrial applications, potentially creating new areas of non-contact and non-destructive testing,” remarks Prof. Takemura. “By visualizing the invisible, we aim to accelerate the development of medicine and improve the quality of life of physicians as well as patients.”

  • When does a conductor not conduct?

    An Australian-led study has found unusual insulating behaviour in a new atomically-thin material — and the ability to switch it on and off.
    Materials that feature strong interactions between electrons can display unusual properties such as the ability to act as insulators even when they are expected to conduct electricity. These insulators, known as Mott insulators, occur when electrons become frozen because of strong repulsion they feel from other electrons nearby, preventing them from carrying a current.
    Led by FLEET at Monash University, a new study (published in Nature Communications) has demonstrated a Mott insulating phase within an atomically-thin metal-organic framework (MOF), and the ability to controllably switch this material from an insulator to a conductor. This material’s ability to act as an efficient ‘switch’ makes it a promising candidate for application in new electronic devices such as transistors.
    Electron interactions written in the stars
    The atomically thin (or ‘2D’) material at the heart of the study is a type of MOF, a class of materials composed from organic molecules and metal atoms.
    “Thanks to the versatility of supramolecular chemistry approaches — in particular applied on surfaces as substrates — we have an almost infinite number of combinations to construct materials from the bottom-up, with atomic scale precision,” explains corresponding author A/Prof Schiffrin. “In these approaches, organic molecules are used as building blocks. By carefully choosing the right ingredients, we can tune the properties of MOFs.”
    The important tailor-made property of the MOF in this study is its star-shaped geometry, known as a kagome structure. This geometry enhances the influence of electron-electron interactions, directly leading to the realisation of a Mott insulator.
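    For reference, the textbook minimal model of this physics (standard material, not a formula from the paper) is the Hubbard Hamiltonian, in which electron hopping with amplitude t competes with on-site repulsion U:

```latex
H \;=\; -t \sum_{\langle i,j \rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right) \;+\; U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```

    At half filling, when U dominates the bandwidth set by t, electrons localise and a Mott gap opens. A kagome geometry flattens the electronic bands, shrinking that bandwidth and so amplifying the relative strength of the interactions.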

    The on-off switch: electron population
    The authors constructed the star-shaped kagome MOF from a combination of copper atoms and 9,10-dicyanoanthracene (DCA) molecules. They grew the material upon another atomically thin insulating material, hexagonal boron nitride (hBN), on an atomically flat copper surface, Cu(111).
    “We measured the structural and electronic properties of the MOF at the atomic scale using scanning tunnelling microscopy and spectroscopy,” explains lead author Dr. Benjamin Lowe, who recently completed his PhD with FLEET. “This allowed us to measure an unexpected energy gap — the hallmark of an insulator.”
    The authors’ suspicion that the experimentally measured energy gap was a signature of a Mott insulating phase was confirmed by comparing experimental results with dynamical mean-field theory calculations.
    “The electronic signature in our calculations showed remarkable agreement with experimental measurements and provided conclusive evidence of a Mott insulating phase,” explains FLEET alum Dr. Bernard Field, who performed the theoretical calculations in collaboration with researchers from the University of Queensland and the Okinawa Institute of Science and Technology Graduate University in Japan.
    The authors were also able to change the electron population in the MOF by using variations in the chemical environment of the hBN substrate and the electric field underneath the scanning tunnelling microscope tip.

    When some electrons are removed from the MOF, the repulsion that the remaining electrons feel is reduced and they become unfrozen — allowing the material to behave like a metal. The authors were able to observe this metallic phase from a vanishing of the measured energy gap when they removed some electrons from the MOF. Electron population is the on-off switch for controllable Mott insulator to metal phase transitions.
    What’s next?
    The ability of this MOF to switch between Mott insulator and metal phases by modifying the electron population is a promising result that could be exploited in new types of electronic devices (for example, transistors). A promising next step towards such applications would be to reproduce these findings within a device structure in which an electric field is applied uniformly across the whole material.
    The observation of a Mott insulator in a MOF which is easy to synthesise and contains abundant elements also makes these materials attractive candidates for further studies of strongly correlated phenomena — potentially including superconductivity, magnetism, or spin liquids.

  • Computer scientists unveil novel attacks on cybersecurity

    Researchers have found two novel types of attacks that target the conditional branch predictor found in high-end Intel processors, which could be exploited to compromise billions of processors currently in use.
    The multi-university and industry research team led by computer scientists at University of California San Diego will present their work at the 2024 ACM ASPLOS Conference that begins tomorrow. The paper, “Pathfinder: High-Resolution Control-Flow Attacks Exploiting the Conditional Branch Predictor,” is based on findings from scientists from UC San Diego, Purdue University, Georgia Tech, the University of North Carolina Chapel Hill and Google.
    The researchers discovered a unique attack, the first to target a feature in the branch predictor called the Path History Register, which tracks both branch order and branch addresses. As a result, more information is exposed, with more precision, than with prior attacks that lacked insight into the exact structure of the branch predictor.
    Their research has resulted in Intel and Advanced Micro Devices (AMD) addressing the concerns raised by the researchers and advising users about the security issues. Today, Intel is set to issue a Security Announcement, while AMD will release a Security Bulletin.
    In software, frequent branching occurs as programs navigate different paths based on varying data values. The direction of these branches, whether “taken” or “not taken,” provides crucial insights into the executed program data. Given the significant impact of branches on modern processor performance, a crucial optimization known as the “branch predictor” is employed. This predictor anticipates future branch outcomes by referencing past histories stored within prediction tables. Previous attacks have exploited this mechanism by analyzing entries in these tables to discern recent branch tendencies at specific addresses.
    In this new study, researchers leverage modern predictors’ utilization of a Path History Register (PHR) to index prediction tables. The PHR records the addresses and precise order of the last 194 taken branches in recent Intel architectures. With innovative techniques for capturing the PHR, the researchers demonstrate the ability to not only capture the most recent outcomes but also every branch outcome in sequential order. Remarkably, they uncover the global ordering of all branches. Despite the PHR typically retaining the most recent 194 branches, the researchers present an advanced technique to recover a significantly longer history.
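    A toy model conveys the flavour of a path history register. This is emphatically not Intel's actual PHR hash, whose structure is what the paper reverse-engineers; the register width, shift amount, and bit mixing below are invented for illustration:

```python
# Toy model of a path history register (PHR): a shift register that folds
# in bits of each taken branch's address and target. This is NOT Intel's
# real hash; the width and mixing here are invented for illustration.

PHR_BITS = 32  # real recent Intel PHRs track far more history (194 branches)

def phr_update(phr, branch_addr, target_addr):
    """Shift the register and fold in bits of the branch and target address."""
    folded = (branch_addr ^ (target_addr >> 1)) & 0xFF
    return ((phr << 2) ^ folded) & ((1 << PHR_BITS) - 1)

# The same taken branches in a different order leave a different footprint,
# which is what lets an observer of the PHR recover branch *order*, not
# just per-branch outcomes.
phr_a = phr_b = 0
for addr in [0x401000, 0x401040, 0x401080]:
    phr_a = phr_update(phr_a, addr, addr + 0x20)
for addr in [0x401040, 0x401000, 0x401080]:  # same branches, reordered
    phr_b = phr_update(phr_b, addr, addr + 0x20)
```

    The order sensitivity shown here is the property that distinguishes PHR-based attacks from earlier ones that only read per-address prediction-table entries.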
    “We successfully captured sequences of tens of thousands of branches in precise order, utilizing this method to leak secret images during processing by the widely used image library, libjpeg,” said Hosein Yavarzadeh, a UC San Diego Computer Science and Engineering Department PhD student and lead author of the paper.

    The researchers also introduce an exceptionally precise Spectre-style poisoning attack, enabling attackers to induce intricate patterns of branch mispredictions within victim code. “This manipulation leads the victim to execute unintended code paths, inadvertently exposing its confidential data,” said UC San Diego computer science Professor Dean Tullsen.
    “While prior attacks could misdirect a single branch or the first instance of a branch executed multiple times, we now have such precise control that we could misdirect the 732nd instance of a branch taken thousands of times,” said Tullsen.
    The team presents a proof-of-concept where they force an encryption algorithm to transiently exit earlier, resulting in the exposure of reduced-round ciphertext. Through this demonstration, they illustrate the ability to extract the secret AES encryption key.
    “Pathfinder can reveal the outcome of almost any branch in almost any victim program, making it the most precise and powerful microarchitectural control-flow extraction attack that we have seen so far,” said Kazem Taram, an assistant professor of computer science at Purdue University and a UC San Diego computer science PhD graduate.
    In addition to Dean Tullsen and Hosein Yavarzadeh, other UC San Diego coauthors are Archit Agarwal and Deian Stefan. Other coauthors include Christina Garman and Kazem Taram, Purdue University; Daniel Moghimi, Google; Daniel Genkin, Georgia Tech; Max Christman and Andrew Kwong, University of North Carolina Chapel Hill.
    This work was partially supported by the Air Force Office of Scientific Research (FA9550-20-1-0425); the Defense Advanced Research Projects Agency (W912CG-23-C-0022 and HR00112390029); the National Science Foundation (CNS-2155235, CNS-1954712, and CAREER CNS-2048262); the Alfred P. Sloan Research Fellowship; and gifts from Intel, Qualcomm, and Cisco.

  • The end of the quantum tunnel

    Quantum mechanical effects such as radioactive decay, or more generally: ‘tunneling’, display intriguing mathematical patterns. Two researchers at the University of Amsterdam now show that a 40-year-old mathematical discovery can be used to fully encode and understand this structure.
    Quantum physics — easy and hard
    In the quantum world, processes can be separated into two distinct classes. One class, that of the so-called ‘perturbative’ phenomena, is relatively easy to detect, both in an experiment and in a mathematical computation. Examples are plentiful: the light that atoms emit, the energy that solar cells produce, the states of qubits in a quantum computer. These quantum phenomena depend on Planck’s constant, the fundamental constant of nature that determines how the quantum world differs from our large-scale world, but in a simple way. Despite the ridiculous smallness of this constant — expressed in everyday units of kilograms, metres and seconds it takes a value that starts only at the 34th decimal place — the fact that Planck’s constant is not exactly zero is enough to compute such quantum effects.
    Then, there are the ‘nonperturbative’ phenomena. One of the best known is radioactive decay: a process where due to quantum effects, elementary particles can escape the attractive force that ties them to atomic nuclei. If the world were ‘classical’ — that is, if Planck’s constant were exactly zero — this attractive force would be impossible to overcome. In the quantum world, decay does occur, but still only occasionally; a single uranium atom, for example, would on average take over four billion years to decay. The collective name for such rare quantum events is ‘tunneling’: for the particle to escape, it has to ‘dig a tunnel’ through the energy barrier that keeps it tied to the nucleus. A tunnel that can take billions of years to dig, and makes The Shawshank Redemption look like child’s play.
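    The exponential rarity of tunneling can be estimated with the standard WKB approximation (a textbook formula, not this paper's machinery). The barrier shape and numbers below are chosen purely for illustration:

```python
import math

# Back-of-the-envelope WKB sketch of tunneling: the escape probability
# falls exponentially with the barrier integral,
#   P ~ exp(-(2/hbar) * integral of sqrt(2 m (V(x) - E)) dx).
# The flat-barrier shape and all numbers here are for illustration only.

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def wkb_tunneling_probability(mass, energy, barrier_height, width, n=1000):
    """Numerically integrate the WKB exponent for a flat barrier of the
    given height (J) and width (m)."""
    if energy >= barrier_height:
        return 1.0
    dx = width / n
    integral = sum(math.sqrt(2 * mass * (barrier_height - energy)) * dx
                   for _ in range(n))
    return math.exp(-2 * integral / HBAR)

# An electron with 0.5 eV of energy meeting a 1 eV, 0.5 nm barrier.
eV = 1.602176634e-19
p = wkb_tunneling_probability(mass=9.109e-31, energy=0.5 * eV,
                              barrier_height=1.0 * eV, width=0.5e-9)
```

    For this half-eV electron and half-nanometre barrier the escape probability per attempt comes out at a few percent; scale the barrier up toward nuclear heights and widths and the exponent makes escape astronomically rare, which is why a decay can take billions of years.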
    Mathematics to the rescue
    Mathematically, nonperturbative quantum effects are much more difficult to describe than their perturbative cousins. Still, over the century that quantum mechanics has existed, physicists have found many ways to deal with these effects, and to describe and predict them accurately. “Still, in this century-old problem, there was work left to be done,” says Alexander van Spaendonck, one of the authors of the new publication. “The descriptions of tunneling phenomena in quantum mechanics needed further unification — a framework in which all such phenomena could be described and investigated using a single mathematical structure.”
    Surprisingly, such a structure was found in 40-year-old mathematics. In the 1980s, French mathematician Jean Écalle had set up a framework that he dubbed resurgence, and that had precisely this goal: giving structure to nonperturbative phenomena. So why did it take 40 years for the natural combination of Écalle’s formalism and the application to tunneling phenomena to be taken to their logical conclusion? Marcel Vonk, the other author of the publication, explains: “Écalle’s original papers were lengthy — over 1000 pages all combined — highly technical, and only published in French. As a result, it took until the mid-2000s before a significant number of physicists started getting familiar with this ‘toolbox’ of resurgence. Originally, it was mostly applied to simple ‘toy models’, but of course the tools were also tried on real-life quantum mechanics. Our work takes these developments to their logical conclusion.”
    Beautiful structure

    That conclusion is that one of the tools in Écalle’s toolbox, that of a ‘transseries’, is perfectly suited to describe tunneling phenomena in essentially any quantum mechanics problem, and does so always in the same way. By spelling out the mathematical details, the authors found that it became possible not only to unify all tunneling phenomena into a single mathematical object, but also to describe certain ‘jumps’ in how big the role of these phenomena is — an effect known as Stokes’ phenomenon.
    Van Spaendonck: “Using our description of Stokes’ phenomenon, we were able to show that certain ambiguities that had plagued the ‘classical’ methods of computing nonperturbative effects — infinitely many, in fact — all dropped out in our method. The underlying structure turned out to be even more beautiful than we originally expected. The transseries that describes quantum tunneling turns out to split — or ‘factorize’ — in a surprising way: into a ‘minimal’ transseries that describes the basic tunneling phenomena that essentially exist in any quantum mechanics problem, and an object that we called the ‘median transseries’ that describes the more problem-specific details, and that depends for example on how symmetric a certain quantum setting is.”
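    Schematically, and in simplified notation rather than the authors' exact formulas, a transseries augments the ordinary perturbative power series in Planck's constant with exponentially small tunneling sectors:

```latex
f(\hbar) \;\simeq\; \underbrace{\sum_{n \ge 0} a_n \hbar^n}_{\text{perturbative}} \;+\; \sum_{i} \sigma_i \, e^{-S_i/\hbar} \, \hbar^{\beta_i} \sum_{n \ge 0} a^{(i)}_n \hbar^n
```

    Each factor e^{-S_i/ħ} vanishes to all orders in ħ and is therefore invisible to perturbation theory; the Stokes phenomenon mentioned above corresponds to jumps in the coefficients σ_i as parameters are varied.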
    With this mathematical structure completely clarified, the next question is of course where the new lessons can be applied and what physicists can learn from them. In the case of radioactivity, for example, some atoms are stable whereas others decay. In other physical models, the lists of stable and unstable particles may vary as one slightly changes the setup — a phenomenon known as ‘wall-crossing’. What the researchers have in mind next is to clarify this notion of wall-crossing using the same techniques. This difficult problem has again been studied by many groups in many different ways, but now a similar unifying structure might be just around the corner. There is certainly light at the end of the tunnel.

  • New algorithm cuts through ‘noisy’ data to better predict tipping points

    Whether you’re trying to predict a climate catastrophe or mental health crisis, mathematics tells us to look for fluctuations.
    Changes in data, from wildlife population to anxiety levels, can be an early warning signal that a system is reaching a critical threshold, known as a tipping point, in which those changes may accelerate or even become irreversible.
    But which data points matter most? And which are simply noise?
    A new algorithm developed by University at Buffalo researchers can identify the data points that best signal an approaching tipping point. Detailed in Nature Communications, this theoretical framework uses the power of stochastic differential equations to observe the fluctuation of data points, or nodes, and then determine which should be used to calculate an early warning signal.
    Simulations confirmed this method was more accurate at predicting theoretical tipping points than randomly selecting nodes.
    “Every node is somewhat noisy — in other words, it changes over time — but some may change earlier and more drastically than others when a tipping point is near. Selecting the right set of nodes may improve the quality of the early warning signal, as well as help us avoid wasting resources observing uninformative nodes,” says the study’s lead author, Naoki Masuda, PhD, professor and director of graduate studies in the UB Department of Mathematics, within the College of Arts and Sciences.
    The study was co-authored by Neil Maclaren, a postdoctoral research associate in the Department of Mathematics, and Kazuyuki Aihara, executive director of the International Research Center for Neurointelligence at the University of Tokyo.

    The work was supported by the National Science Foundation and the Japan Science and Technology Agency.
    Warning signals connected via networks
    The algorithm is unique in that it fully incorporates network science into the process. While early warning signals have been applied to ecology and psychology for the last two decades, little research has focused on how those signals are connected within a network, Masuda says.
    Consider depression. Recent research has considered it and other mental disorders as a network of symptoms influencing each other by creating feedback loops. A loss of appetite could mean the onset of five other symptoms in the near future, depending on how close those symptoms are on the network.
    “As a network scientist, I felt network science could offer a unique or perhaps even improved approach to early warning signals,” Masuda says.
    By thoroughly considering systems as networks, researchers found that simply selecting the nodes with highest fluctuations was not the best strategy. That’s because some selected nodes may be too closely related to other selected nodes.
    “Even if we combine two nodes with nice early warning signals, we don’t necessarily get a more accurate signal. Sometimes combining a node with a good signal and another node with a mid-quality signal actually gives us a better signal,” Masuda says.
    While the team validated the algorithm with numerical simulations, they say it can readily be applied to actual data because it does not require information about the network structure itself; it only requires two different states of the networked system to determine an optimal set of nodes.
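    The two-state idea can be illustrated with a simplified variance-based sketch. This is synthetic data and a deliberately crude stand-in for the paper's method, which optimises the node set via stochastic differential equations rather than a plain variance ranking:

```python
import numpy as np

# Illustrative sketch of early-warning node selection from two observed
# states of a networked system: nodes whose fluctuations grow most between
# the states are treated as the informative sensors. All data is synthetic
# and the informative nodes (0 and 3) are chosen by construction.

rng = np.random.default_rng(1)
n_nodes, n_samples = 6, 500

# State 1: far from the tipping point -- all nodes fluctuate mildly.
far = rng.standard_normal((n_samples, n_nodes))

# State 2: near the tipping point -- the hypothetically informative
# nodes 0 and 3 show strongly amplified fluctuations.
scale = np.ones(n_nodes)
scale[[0, 3]] = 4.0
near = rng.standard_normal((n_samples, n_nodes)) * scale

# Rank nodes by the increase in sample variance between the two states
# and keep the top two as the sensor set.
variance_increase = near.var(axis=0) - far.var(axis=0)
selected = np.argsort(variance_increase)[-2:]
```

    As the article notes, a real selection must go further than this ranking: two high-variance nodes can be so correlated that combining them adds little, which is why the paper's framework evaluates sets of nodes rather than nodes in isolation.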
    “The next steps will be to collaborate with domain experts such as ecologists, climate scientists and medical doctors to further develop and test the algorithm with their empirical data and get insights into their problems,” Masuda says.