More stories

  • Self-organization: What robotics can learn from amoebae

    LMU researchers have developed a new model to describe how biological or technical systems form complex structures without external guidance.
    Amoebae are single-celled organisms. By means of self-organization, they can form complex structures, and they do this purely through local interactions: if they have plenty of food, they disperse evenly through a culture medium. But if food becomes scarce, they emit the messenger molecule cyclic adenosine monophosphate (cAMP). This chemical signal induces the amoebae to gather in one place and form a multicellular aggregation. The result is a fruiting body.
    “The phenomenon is well known,” says Prof. Erwin Frey from LMU’s Faculty of Physics. “Before now, however, no research group has investigated how information processing, at a general level, affects the aggregation of systems of agents when individual agents — in our case, amoebae — are self-propelled.” Deeper knowledge of these mechanisms, Frey adds, would also be valuable for translating them to artificial technical systems.
    Together with other researchers, Frey describes in Nature Communications how active systems that process information in their environment can be harnessed for technological or biological applications. The aim is not to understand every detail of the communication between individual agents, but rather the specific structures that form through self-organization. This applies to amoebae, and also to certain kinds of robots. The research was undertaken in collaboration with Prof. Igor Aronson during his stay at LMU as a Humboldt Research Award winner.
    From biological mechanism to technological application
    Background: The term “active matter” refers to biological or technical systems from which larger structures are formed by means of self-organization. Such processes are based upon exclusively local interactions between identical, self-propelled units, such as amoebae or indeed robots.
    Inspired by biological systems, Frey and his co-authors propose a new model in which self-propelled agents communicate with one another. These agents recognize chemical, biological, or physical signals at a local level and use their internal machinery to make individual decisions that result in collective self-organization. This coordination gives rise to larger structures, which can span multiple length scales.
    The new paradigm of communicating active matter forms the basis of the study. Local decisions in response to a signal, together with the transmission of information, lead to collectively controlled self-organization.
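    To make the paradigm concrete, here is a minimal sketch of communicating active matter: self-propelled agents deposit a diffusing chemical signal and steer up its local gradient, loosely mimicking cAMP-mediated aggregation. This is an illustration under simplified assumptions, not the model from the study, and every parameter value is made up for readability.

    ```python
    import numpy as np

    # Minimal sketch: self-propelled agents deposit a diffusing signal and
    # climb its gradient, so purely local interactions produce aggregation.
    # All parameters are illustrative assumptions, not values from the paper.
    rng = np.random.default_rng(0)
    L, N, steps = 64, 200, 400                  # grid size, agents, time steps
    speed, deposit, decay, D = 1.0, 1.0, 0.02, 0.2

    pos = rng.uniform(0, L, size=(N, 2))        # agent positions
    field = np.zeros((L, L))                    # chemical signal concentration

    for _ in range(steps):
        ix = pos[:, 0].astype(int) % L          # grid cell of each agent
        iy = pos[:, 1].astype(int) % L
        np.add.at(field, (ix, iy), deposit)     # agents emit the signal

        # Signal diffuses (5-point Laplacian) and decays
        lap = (np.roll(field, 1, 0) + np.roll(field, -1, 0)
               + np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4 * field)
        field += D * lap - decay * field

        # Each agent senses the local gradient and moves toward higher signal
        gx = (np.roll(field, -1, 0) - np.roll(field, 1, 0))[ix, iy]
        gy = (np.roll(field, -1, 1) - np.roll(field, 1, 1))[ix, iy]
        step = np.stack([gx, gy], axis=1) + rng.normal(scale=0.3, size=(N, 2))
        step /= np.linalg.norm(step, axis=1, keepdims=True) + 1e-9
        pos = (pos + speed * step) % L          # periodic boundaries

    # Clustering shows up as a shrinking spread of agent positions
    print("spread of agent positions:", pos.std(axis=0))
    ```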
    Frey sees a possible application of the new model in soft robots, that is, robots made of soft materials. Such robots are suitable, for example, for performing tasks inside the human body. They can communicate with other soft robots via electromagnetic waves for purposes such as administering drugs at specific sites in the body. The new model can help nanotechnologists design such robot systems by describing the collective properties of robot swarms.
    “It’s sufficient to roughly understand how individual agents communicate with each other; self-organization takes care of the rest,” says Frey. “This is a paradigm shift specifically in robotics, where researchers are attempting to do precisely the opposite — they want to obtain extremely high levels of control.” But that does not always succeed. “Our proposal, by contrast, is to exploit the capacity for self-organization.”
    Story Source:
    Materials provided by Ludwig-Maximilians-Universität München. Note: Content may be edited for style and length.

  • The interplay between epidemics, prevention information, and mass media

    When an epidemic strikes, more than just infections spread. As cases mount, information about the disease, how to spot it, and how to prevent it propagates rapidly among people in affected areas as well. Relatively little is known, however, about the interplay between the course of epidemics and this diffusion of information to the public.
    A pair of researchers developed a model that examines epidemics through two lenses — the spread of disease and the spread of information — to understand how reliable information can be better disseminated during these events. In Chaos, published by AIP Publishing, Xifen Wu and Haibo Bao report that their two-layer model can predict the effects of mass media and infection-prevention information on the epidemic threshold.
    “In recent years, epidemics spread all over the world together with preventive information. And the mass media affected the people’s attitudes toward epidemic prevention,” said Bao. “Our aim is to find how these factors influence the epidemic propagation and provide certain guidance for epidemic prevention and control.”
    To tackle this question, the researchers’ model examines the interactions between two layers. The first is the transmission of the disease itself, propagated through physical contact between people. The second occupies the information space of social networks, where different voices share the do’s and don’ts of infection prevention, called positive and negative information, respectively.
    The model provides a set of equations that can be used to calculate the epidemic threshold using a technique called microscopic Markov chains.
    Central to this calculation is the time delay between becoming infected and recovering. The longer it takes patients to recover from an infection, they found, the lower the recovery rate and the easier it is for a disease to break out.
    Disseminating effective prevention practices and using mass media, however, can increase the epidemic threshold, making it more difficult for the infection to spread. The model captures this by reducing the time delays related to recovery, which boosts recovery rates.
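    As a rough illustration of how such two-layer calculations work, the sketch below iterates a simplified microscopic Markov-chain model in which an awareness layer, fed by neighbors and a mass-media broadcast, lowers susceptibility in an SIS disease layer. This is a generic construction in the spirit of such models, not the authors’ equations; the networks and all parameter values are assumptions.

    ```python
    import numpy as np

    # Two coupled layers: a physical-contact layer spreads disease (SIS),
    # an information layer spreads awareness that reduces susceptibility.
    # Generic sketch only; not the equations from the Chaos paper.
    rng = np.random.default_rng(1)
    N = 400
    contact = np.triu(rng.random((N, N)) < 0.02, 1)   # disease layer
    social = np.triu(rng.random((N, N)) < 0.04, 1)    # information layer
    contact = (contact | contact.T).astype(float)
    social = (social | social.T).astype(float)

    beta, mu = 0.08, 0.2     # infection / recovery probabilities per step
    lam, delta = 0.15, 0.1   # awareness spreading / forgetting
    media = 0.05             # mass-media broadcast probability per step
    gamma = 0.6              # awareness cuts susceptibility by this factor

    p_inf = np.full(N, 0.05)  # P(node i is infected)
    p_aw = np.full(N, 0.05)   # P(node i is aware)

    for _ in range(500):
        # P(no neighbor infects me) and P(no neighbor informs me)
        q_inf = np.prod(1 - beta * contact * p_inf, axis=1)
        q_aw = np.prod(1 - lam * social * p_aw, axis=1)
        p_aw = p_aw * (1 - delta) + (1 - p_aw) * (1 - q_aw * (1 - media))
        p_inf = p_inf * (1 - mu) + (1 - p_inf) * (1 - gamma * p_aw) * (1 - q_inf)

    # Raising `media` or `lam` lowers prevalence, i.e., raises the threshold
    print("steady-state prevalence:", round(p_inf.mean(), 4))
    ```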
    “The major challenge in our work is how to analyze the impact of positive information, negative information, and the mass media on the recovery rate and epidemic prevalence at the same time,” Bao said. “What surprised us the most is that it is not always possible to improve the recovery rate by increasing the communication rate of mass media.”
    Bao hopes the work inspires others to use high-level mathematics to tackle such cross-disciplinary questions. The researchers next plan to analyze the impact of population mobility and vaccination.
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • Microlaser chip adds new dimensions to quantum communication

    Researchers at Penn Engineering have created a chip that outstrips the security and robustness of existing quantum communications hardware. Their technology communicates in “qudits,” doubling the quantum information space of any previous on-chip laser.
    Liang Feng, Professor in the Departments of Materials Science and Engineering (MSE) and Electrical Systems and Engineering (ESE), along with MSE postdoctoral fellow Zhifeng Zhang and ESE Ph.D. student Haoqi Zhao, debuted the technology in a recent study published in Nature. The group worked in collaboration with scientists from the Polytechnic University of Milan, the Institute for Cross-Disciplinary Physics and Complex Systems, Duke University and the City University of New York (CUNY).
    Bits, Qubits and Qudits
    While non-quantum chips store, transmit and compute data using bits, state-of-the-art quantum devices use qubits. Bits can be 1s or 0s, while qubits are units of quantum information capable of being both 1 and 0 at the same time. In quantum mechanics, this state of simultaneity is called “superposition.”
    A quantum unit whose superposition spans more than two levels is called a qudit, to signal these additional dimensions.
    “In classical communications,” says Feng, “a laser can emit a pulse coded as either 1 or 0. These pulses can easily be cloned by an interceptor looking to steal information and are therefore not very secure. In quantum communications with qubits, the pulse can have any superposition state between 1 and 0. Superposition makes it so a quantum pulse cannot be copied. Unlike algorithmic encryption, which blocks hackers using complex math, quantum cryptography is a physical system that keeps information secure.”
    Qubits, however, aren’t perfect. With only two levels of superposition, qubits have limited storage space and low tolerance for interference.
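    The capacity difference is easy to see in code. The toy sketch below represents a qubit and a four-level qudit as normalized complex amplitude vectors: a d-level system carries log2(d) bits per transmitted particle, so four levels double the information space of a qubit. This is a conceptual illustration, not the chip’s physics.

    ```python
    import numpy as np

    # A pure quantum state of a d-level system is a normalized complex
    # vector of d amplitudes; measurement probabilities are |amplitude|^2.
    rng = np.random.default_rng(2)

    def random_pure_state(d):
        """Random normalized complex amplitude vector of dimension d."""
        v = rng.normal(size=d) + 1j * rng.normal(size=d)
        return v / np.linalg.norm(v)

    qubit = random_pure_state(2)   # superposition of |0> and |1>
    qudit = random_pure_state(4)   # superposition of |0>, |1>, |2>, |3>

    for name, psi in [("qubit", qubit), ("qudit", qudit)]:
        probs = np.abs(psi) ** 2
        print(name, "levels:", len(psi),
              "bits per particle:", np.log2(len(psi)),
              "probabilities sum to:", round(probs.sum(), 6))
    ```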

  • 'Brain-like computing' at molecular level is possible

    A breakthrough discovery at University of Limerick in Ireland has revealed for the first time that unconventional brain-like computing at the tiniest scale of atoms and molecules is possible.
    Researchers at University of Limerick’s Bernal Institute worked with an international team of scientists to create a new type of organic material that learns from its past behaviour.
    The discovery of the ‘dynamic molecular switch’ that emulates synaptic behaviour is revealed in a new study in the international journal Nature Materials.
    The study was led by Damien Thompson, Professor of Molecular Modelling in UL’s Department of Physics and Director of SSPC, the UL-hosted Science Foundation Ireland Research Centre for Pharmaceuticals, together with Christian Nijhuis at the Centre for Molecules and Brain-Inspired Nano Systems in University of Twente and Enrique del Barco from University of Central Florida.
    Working during lockdowns, the team developed a two-nanometre-thick layer of molecules, 50,000 times thinner than a strand of hair, that remembers its history as electrons pass through it.
    Professor Thompson explained that the “switching probability and the values of the on/off states continually change in the molecular material, which provides a disruptive new alternative to conventional silicon-based digital switches that can only ever be either on or off.”
    The newly discovered dynamic organic switch displays all the mathematical logic functions necessary for deep learning, successfully emulating Pavlovian ‘call and response’ synaptic brain-like behaviour.
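    To illustrate what Pavlovian ‘call and response’ means for a switch, here is a purely pedagogical toy: a two-state switch whose switching probability changes with its stimulation history, so that pairing a conditioned input (the “bell”) with an unconditioned one (the “food”) gradually lets the bell alone flip the switch. It is a conceptual sketch with no connection to the molecular physics reported in the study.

    ```python
    import numpy as np

    # Toy history-dependent switch: repeated pairing of two inputs raises
    # the probability that the conditioned input alone switches it on.
    rng = np.random.default_rng(3)

    class DynamicSwitch:
        def __init__(self):
            self.w_bell = 0.0    # learned weight of the conditioned input
            self.w_food = 1.0    # innate weight of the unconditioned input
            self.rate = 0.2      # how fast history reshapes the switch

        def respond(self, bell, food):
            drive = self.w_bell * bell + self.w_food * food
            fired = rng.random() < min(drive, 1.0)
            if bell and food:    # pairing strengthens the conditioned path
                self.w_bell += self.rate * (1.0 - self.w_bell)
            return fired

    switch = DynamicSwitch()
    print("bell alone, before training:", switch.respond(1, 0))  # never fires
    for _ in range(20):
        switch.respond(1, 1)     # training: pair bell with food
    print("bell alone, after training:", switch.respond(1, 0))   # almost always fires
    print("learned weight:", round(switch.w_bell, 3))
    ```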

  • A possible game changer for next generation microelectronics

    Tiny magnetic whirlpools could transform memory storage in high performance computers.
    Magnets generate invisible fields that attract certain materials. A common example is fridge magnets. Far more important to our everyday lives, magnets also can store data in computers. Exploiting the direction of the magnetic field (say, up or down), microscopic bar magnets each can store one bit of memory as a zero or a one — the language of computers.
    Scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory want to replace the bar magnets with tiny magnetic vortices. These vortices, called skyrmions, measure just billionths of a meter and form in certain magnetic materials. They could one day usher in a new generation of microelectronics for memory storage in high performance computers.
    “The bar magnets in computer memory are like shoelaces tied with a single knot; it takes almost no energy to undo them,” said Arthur McCray, a Northwestern University graduate student working in Argonne’s Materials Science Division (MSD). And any bar magnet that malfunctions due to some disruption will affect the others.
    “By contrast, skyrmions are like shoelaces tied with a double knot. No matter how hard you pull on a strand, the shoelaces remain tied.” Skyrmions are thus extremely stable against disruption. Another important feature is that scientists can control their behavior by changing the temperature or applying an electric current.
    Scientists have much to learn about skyrmion behavior under different conditions. To study them, the Argonne-led team developed an artificial intelligence (AI) program that works with a high-power electron microscope at the Center for Nanoscale Materials (CNM), a DOE Office of Science user facility at Argonne. The microscope can visualize skyrmions in samples at very low temperatures.

  • Artificial neural networks learn better when they spend time not learning at all

    Depending on age, humans need 7 to 13 hours of sleep per 24 hours. During this time, a lot happens: Heart rate, breathing and metabolism ebb and flow; hormone levels adjust; the body relaxes. Not so much in the brain.
    “The brain is very busy when we sleep, repeating what we have learned during the day,” said Maxim Bazhenov, PhD, professor of medicine and a sleep researcher at University of California San Diego School of Medicine. “Sleep helps reorganize memories and presents them in the most efficient way.”
    In previously published work, Bazhenov and colleagues have reported how sleep builds rational memory, the ability to remember arbitrary or indirect associations between objects, people or events, and protects against forgetting old memories.
    Artificial neural networks leverage the architecture of the human brain to improve numerous technologies and systems, from basic science and medicine to finance and social media. In some ways, they have achieved superhuman performance, such as computational speed, but they fail in one key aspect: When artificial neural networks learn sequentially, new information overwrites previous information, a phenomenon called catastrophic forgetting.
    “In contrast, the human brain learns continuously and incorporates new data into existing knowledge,” said Bazhenov, “and it typically learns best when new training is interleaved with periods of sleep for memory consolidation.”
    Writing in the November 18, 2022 issue of PLOS Computational Biology, senior author Bazhenov and colleagues discuss how biological models may help mitigate the threat of catastrophic forgetting in artificial neural networks, boosting their utility across a spectrum of research interests.
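    Catastrophic forgetting is easy to reproduce in miniature. In the sketch below, a linear classifier trained sequentially on two synthetic tasks loses the first one, while interleaving the two datasets, a crude stand-in for the replay that sleep is thought to provide, preserves both. Tasks, model, and numbers are illustrative assumptions, not the experiments from the paper.

    ```python
    import numpy as np

    # Sequential training overwrites task A; interleaved training keeps it.
    rng = np.random.default_rng(4)

    def make_task(center):
        """Two Gaussian classes centered at +center and -center."""
        X = np.vstack([rng.normal(center, 1.0, (200, 2)),
                       rng.normal(-center, 1.0, (200, 2))])
        y = np.array([1] * 200 + [0] * 200)
        return X, y

    def train(w, X, y, epochs=300, lr=0.1):
        for _ in range(epochs):
            p = 1 / (1 + np.exp(-np.clip(X @ w, -30, 30)))  # logistic output
            w = w - lr * X.T @ (p - y) / len(y)             # gradient step
        return w

    def accuracy(w, X, y):
        return (((X @ w) > 0).astype(int) == y).mean()

    task_a = make_task(np.array([3.0, 0.0]))    # task A: split along x
    task_b = make_task(np.array([-2.0, 2.0]))   # task B: conflicting direction

    w = train(np.zeros(2), *task_a)
    print("task A after training A:", accuracy(w, *task_a))
    w = train(w, *task_b)                       # new task overwrites the old
    print("task A after training B:", accuracy(w, *task_a))

    X = np.vstack([task_a[0], task_b[0]])       # interleave both tasks
    y = np.concatenate([task_a[1], task_b[1]])
    w2 = train(np.zeros(2), X, y)
    print("interleaved, task A:", accuracy(w2, *task_a),
          "task B:", accuracy(w2, *task_b))
    ```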

  • 'Butterfly bot' is fastest swimming soft robot yet

    Inspired by the biomechanics of the manta ray, researchers at North Carolina State University have developed an energy-efficient soft robot that can swim more than four times faster than previous swimming soft robots. The robots are called “butterfly bots,” because their swimming motion resembles the way a person’s arms move when they are swimming the butterfly stroke.
    “To date, swimming soft robots have not been able to swim faster than one body length per second, but marine animals — such as manta rays — are able to swim much faster, and much more efficiently,” says Jie Yin, corresponding author of a paper on the work and an associate professor of mechanical and aerospace engineering at NC State. “We wanted to draw on the biomechanics of these animals to see if we could develop faster, more energy-efficient soft robots. The prototypes we’ve developed work exceptionally well.”
    The researchers developed two types of butterfly bots. One was built specifically for speed, and was able to reach average speeds of 3.74 body lengths per second. A second was designed to be highly maneuverable, capable of making sharp turns to the right or left. This maneuverable prototype was able to reach speeds of 1.7 body lengths per second.
    “Researchers who study aerodynamics and biomechanics use something called a Strouhal number to assess the energy efficiency of flying and swimming animals,” says Yinding Chi, first author of the paper and a recent Ph.D. graduate of NC State. “Peak propulsive efficiency occurs when an animal swims or flies with a Strouhal number of between 0.2 and 0.4. Both of our butterfly bots had Strouhal numbers in this range.”
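    The Strouhal number itself is simple to compute: St = fA/U, where f is the flapping frequency, A is the peak-to-peak flapping amplitude, and U is the forward speed. The values below are hypothetical, chosen only to show the calculation; they are not measurements from the butterfly bots.

    ```python
    # Strouhal number St = f * A / U; propulsive efficiency typically peaks
    # for 0.2 < St < 0.4. Parameter values here are made-up illustrations.

    def strouhal(freq_hz: float, amplitude_m: float, speed_m_s: float) -> float:
        return freq_hz * amplitude_m / speed_m_s

    f, A, U = 4.0, 0.02, 0.28   # hypothetical: 4 Hz flapping, 2 cm amplitude
    st = strouhal(f, A, U)
    print(f"St = {st:.2f}, inside efficient range: {0.2 <= st <= 0.4}")
    ```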
    The butterfly bots derive their swimming power from their wings, which are “bistable,” meaning the wings have two stable states. The wing is similar to a snap hair clip: a hair clip is stable until you apply a certain amount of energy by bending it. When the amount of energy reaches a critical point, the hair clip snaps into a different shape, which is also stable.
    In the butterfly bots, the hair clip-inspired bistable wings are attached to a soft, silicone body. Users control the switch between the two stable states in the wings by pumping air into chambers inside the soft body. As those chambers inflate and deflate, the body bends up and down — forcing the wings to snap back and forth with it.
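    The mechanics behind “bistable” can be captured by a textbook double-well energy landscape: two minima (the two stable wing shapes) separated by an energy barrier that the pneumatic actuation pushes the wing over. The quartic potential below is a generic stand-in, not the wings’ measured elastic energy.

    ```python
    import numpy as np

    # Generic double-well potential: two stable states separated by a barrier,
    # like the two shapes of a snap hair clip or the butterfly bot's wings.
    x = np.linspace(-1.5, 1.5, 601)
    energy = x**4 - 2 * x**2                 # minima at x = -1 and x = +1

    left = x[:300][energy[:300].argmin()]    # stable state in the left well
    right = x[300:][energy[300:].argmin()]   # stable state in the right well
    barrier = energy[300] - energy.min()     # hump at x = 0 minus well depth
    print(f"stable states near x = {left:.2f} and x = {right:.2f}; "
          f"barrier height = {barrier:.2f}")
    ```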
    “Most previous attempts to develop flapping robots have focused on using motors to provide power directly to the wings,” Yin says. “Our approach uses bistable wings that are passively driven by moving the central body. This is an important distinction, because it allows for a simplified design, which lowers the weight.”
    The faster butterfly bot has only one “drive unit” — the soft body — which controls both of its wings. This makes it very fast, but difficult to turn left or right. The maneuverable butterfly bot essentially has two drive units, which are connected side by side. This design allows users to manipulate the wings on both sides, or to “flap” only one wing, which is what enables it to make sharp turns.
    “This work is an exciting proof of concept, but it has limitations,” Yin says. “Most obviously, the current prototypes are tethered by slender tubing, which is what we use to pump air into the central bodies. We’re currently working to develop an untethered, autonomous version.”
    The paper, “Snapping for high-speed and high-efficient butterfly stroke-like soft swimmer,” will be published Nov. 18 in the open-access journal Science Advances. The paper was co-authored by Yaoye Hong, a Ph.D. student at NC State; and by Yao Zhao and Yanbin Li, who are postdoctoral researchers at NC State. The work was done with support from the National Science Foundation under grants CMMI-2005374 and CMMI-2126072.
    Video of the butterfly bots can be found at https://youtu.be/Pi-2pPDWC1w.
    Story Source:
    Materials provided by North Carolina State University. Original written by Matt Shipman. Note: Content may be edited for style and length.

  • With training, people in mind-controlled wheelchairs can navigate normal, cluttered spaces

    A mind-controlled wheelchair can help a paralyzed person gain new mobility by translating users’ thoughts into mechanical commands. On November 18 in the journal iScience, researchers demonstrate that tetraplegic users can operate mind-controlled wheelchairs in a natural, cluttered environment after training for an extended period.
    “We show that mutual learning of both the user and the brain-machine interface algorithm is important for users to successfully operate such wheelchairs,” says José del R. Millán, the study’s corresponding author at The University of Texas at Austin. “Our research highlights a potential pathway for improved clinical translation of non-invasive brain-machine interface technology.”
    Millán and his colleagues recruited three tetraplegic people for the longitudinal study. Each of the participants underwent training sessions three times per week for 2 to 5 months. The participants wore a skullcap that detected their brain activities through electroencephalography (EEG), which would be converted to mechanical commands for the wheelchairs via a brain-machine interface device. The participants were asked to control the direction of the wheelchair by thinking about moving their body parts. Specifically, they needed to think about moving both hands to turn left and both feet to turn right.
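    For a sense of how such thoughts become commands, the sketch below shows a generic motor-imagery pipeline: band-power-style EEG features are classified into “both hands” (turn left) versus “both feet” (turn right). The features are synthetic and the classifier is a common default; the study’s actual decoder is not specified here.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Generic motor-imagery decoding sketch with synthetic features;
    # not the algorithm used in the iScience study.
    rng = np.random.default_rng(5)

    def synthetic_band_power(n_trials, shift):
        # 8 features standing in for mu/beta band power on motor channels
        return rng.normal(shift, 1.0, size=(n_trials, 8))

    X = np.vstack([synthetic_band_power(100, +0.8),    # "both hands" imagery
                   synthetic_band_power(100, -0.8)])   # "both feet" imagery
    y = np.array([0] * 100 + [1] * 100)                # 0 = left, 1 = right

    clf = LinearDiscriminantAnalysis().fit(X, y)

    new_trial = synthetic_band_power(1, +0.8)          # user imagines hands
    print("command:",
          "turn left" if clf.predict(new_trial)[0] == 0 else "turn right")
    ```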
    In the first training session, the three participants had similar levels of accuracy — the rate at which the device’s responses aligned with users’ thoughts — of around 43% to 55%. Over the course of training, the team saw significant improvement in participant 1, who reached an accuracy of over 95% by the end of his training. The team also observed participant 3’s accuracy rise to 98% halfway through his training, before his device was updated with a new algorithm.
    The improvement seen in participants 1 and 3 is correlated with improvement in feature discriminancy, which is the algorithm’s ability to distinguish the brain activity pattern encoding “go left” thoughts from the one encoding “go right.” The team found that better feature discriminancy is a result not only of machine learning in the device but also of learning in the participants’ brains. The EEG of participants 1 and 3 showed clear shifts in brainwave patterns as their accuracy in mind-controlling the device improved.
    “We see from the EEG results that the subject has consolidated a skill of modulating different parts of their brains to generate a pattern for ‘go left’ and a different pattern for ‘go right,'” Millán says. “We believe there is a cortical reorganization that happened as a result of the participants’ learning process.”
    Compared with participants 1 and 3, participant 2 had no significant changes in brain activity patterns throughout the training. His accuracy increased only slightly during the first few sessions and then remained stable for the rest of the training period. This suggests that machine learning alone is insufficient for successfully maneuvering such a mind-controlled device, Millán says.
    By the end of the training, all participants were asked to drive their wheelchairs across a cluttered hospital room. They had to go around obstacles such as a room divider and hospital beds, which were set up to simulate the real-world environment. Both participants 1 and 3 finished the task, while participant 2 failed to complete it.
    “It seems that for someone to acquire good brain-machine interface control that allows them to perform relatively complex daily activity like driving the wheelchair in a natural environment, it requires some neuroplastic reorganization in our cortex,” Millán says.
    The study also emphasized the role of long-term training. Although participant 1 performed exceptionally well by the end, he too struggled in the first few training sessions, Millán says. The longitudinal study is one of the first to evaluate the clinical translation of non-invasive brain-machine interface technology in tetraplegic people.
    Next, the team wants to figure out why participant 2 didn’t experience the learning effect. They hope to conduct a more detailed analysis of all participants’ brain signals to understand their differences and possible interventions for people struggling with the learning process in the future.
    Story Source:
    Materials provided by Cell Press. Note: Content may be edited for style and length.