More stories

  • Researchers identify features that could make someone a virus super-spreader

    New research from the University of Central Florida has identified physiological features that could make people super-spreaders of viruses such as COVID-19.
    In a study appearing this month in the journal Physics of Fluids, researchers in UCF’s Department of Mechanical and Aerospace Engineering used computer-generated models to numerically simulate sneezes in different types of people and determine associations between people’s physiological features and how far their sneeze droplets travel and linger in the air.
    They found that people’s features, like a stopped-up nose or a full set of teeth, could increase their potential to spread viruses by affecting how far droplets travel when they sneeze.
    According to the U.S. Centers for Disease Control and Prevention, the main way people become infected with the virus that causes COVID-19 is through exposure to respiratory droplets carrying infectious virus, such as those expelled in sneezes and coughs.
    Knowing more about the factors affecting how far these droplets travel can inform efforts to control their spread, says Michael Kinzel, an assistant professor in UCF’s Department of Mechanical and Aerospace Engineering and study co-author.
    “This is the first study that aims to understand the underlying ‘why’ of how far sneezes travel,” Kinzel says. “We show that the human body has influencers, such as a complex duct system associated with the nasal flow that actually disrupts the jet from your mouth and prevents it from dispersing droplets far distances.”
    For instance, when people have a clear nose, such as from blowing it into a tissue, the speed and distance sneeze droplets travel decrease, according to the study.

    This is because a clear nose provides a path in addition to the mouth for the sneeze to exit. But when people’s noses are congested, the area through which the sneeze can exit is restricted, causing the droplets expelled from the mouth to increase in velocity.
    Similarly, teeth also restrict the sneeze’s exit area and cause droplets to increase in velocity.
    “Teeth create a narrowing effect in the jet that makes it stronger and more turbulent,” Kinzel says. “They actually appear to drive transmission. So, if you see someone without teeth, you can actually expect a weaker jet from the sneeze from them.”
    To perform the study, the researchers used 3D modeling and numerical simulations to recreate four mouth and nose types: a person with teeth and a clear nose; a person with no teeth and a clear nose; a person with no teeth and a congested nose; and a person with teeth and a congested nose.
    When they simulated sneezes in the different models, they found that the spray distance of droplets expelled when a person has a congested nose and a full set of teeth is about 60 percent greater than when they do not.

    The results indicate that when someone keeps their nose clear, such as by blowing it into a tissue, they could be reducing the distance their germs travel.
    The researchers also simulated three types of saliva: thin, medium and thick.
    They found that thinner saliva resulted in sneezes composed of smaller droplets, which created a finer spray that stayed in the air longer than medium or thick saliva.
    For instance, three seconds after a sneeze, when thick saliva was reaching the ground and thus diminishing its threat, the thinner saliva was still floating in the air as a potential disease transmitter.
    The work ties back to the researchers’ project to create a COVID-19 cough drop that would give people thicker saliva to reduce the distance droplets from a sneeze or cough would travel, and thus decrease disease-transmission likelihood.
    The findings yield novel insight into variability of exposure distance and indicate how physiological factors affect transmissibility rates, says Kareem Ahmed, an associate professor in UCF’s Department of Mechanical and Aerospace Engineering and study co-author.
    “The results show exposure levels are highly dependent on the fluid dynamics that can vary depending on several human features,” Ahmed says. “Such features may be underlying factors driving superspreading events in the COVID-19 pandemic.”
    The researchers say they hope to move the work toward clinical studies next to compare their simulation findings with those from real people from varied backgrounds.
    Study co-authors were Douglas Fontes, a postdoctoral researcher with the Florida Space Institute and the study’s lead author, and Jonathan Reyes, a postdoctoral researcher in UCF’s Department of Mechanical and Aerospace Engineering.
    Fontes says to advance the findings of the study, the research team wants to investigate the interactions between gas flow, mucus film and tissue structures within the upper respiratory tract during respiratory events.
    “Numerical models and experimental techniques should work side by side to provide accurate predictions of the primary breakup inside the upper respiratory tract during those events,” he says.
    “This research potentially will provide information for more accurate safety measures and solutions to reduce pathogen transmission, giving better conditions to deal with the usual diseases or with pandemics in the future,” he says.
    The work was funded by the National Science Foundation.

  • A neural network learns when it should not be trusted

    Increasingly, artificial intelligence systems known as deep learning neural networks are used to inform decisions vital to human health and safety, such as in autonomous driving or medical diagnosis. These networks are good at recognizing patterns in large, complex datasets to aid in decision-making. But how do we know they’re correct? Alexander Amini and his colleagues at MIT and Harvard University wanted to find out.
    They’ve developed a quick way for a neural network to crunch data, and output not just a prediction but also the model’s confidence level based on the quality of the available data. The advance might save lives, as deep learning is already being deployed in the real world today. A network’s level of certainty can be the difference between an autonomous vehicle determining that “it’s all clear to proceed through the intersection” and “it’s probably clear, so stop just in case.”
    Current methods of uncertainty estimation for neural networks tend to be computationally expensive and relatively slow for split-second decisions. But Amini’s approach, dubbed “deep evidential regression,” accelerates the process and could lead to safer outcomes. “We need the ability to not only have high-performance models, but also to understand when we cannot trust those models,” says Amini, a PhD student in Professor Daniela Rus’ group at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).
    “This idea is important and applicable broadly. It can be used to assess products that rely on learned models. By estimating the uncertainty of a learned model, we also learn how much error to expect from the model, and what missing data could improve the model,” says Rus.
    Amini will present the research at next month’s NeurIPS conference, along with Rus, who is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, director of CSAIL, and deputy dean of research for the MIT Stephen A. Schwarzman College of Computing; and graduate students Wilko Schwarting of MIT and Ava Soleimany of MIT and Harvard.
    Efficient uncertainty
    After an up-and-down history, deep learning has demonstrated remarkable performance on a variety of tasks, in some cases even surpassing human accuracy. And nowadays, deep learning seems to go wherever computers go. It fuels search engine results, social media feeds, and facial recognition. “We’ve had huge successes using deep learning,” says Amini. “Neural networks are really good at knowing the right answer 99 percent of the time.” But 99 percent won’t cut it when lives are on the line.

    “One thing that has eluded researchers is the ability of these models to know and tell us when they might be wrong,” says Amini. “We really care about that 1 percent of the time, and how we can detect those situations reliably and efficiently.”
    Neural networks can be massive, sometimes brimming with billions of parameters. So it can be a heavy computational lift just to get an answer, let alone a confidence level. Uncertainty analysis in neural networks isn’t new. But previous approaches, stemming from Bayesian deep learning, have relied on running, or sampling, a neural network many times over to understand its confidence. That process takes time and memory, a luxury that might not exist in high-speed traffic.
    The researchers devised a way to estimate uncertainty from only a single run of the neural network. They designed the network with bulked up output, producing not only a decision but also a new probabilistic distribution capturing the evidence in support of that decision. These distributions, termed evidential distributions, directly capture the model’s confidence in its prediction. This includes any uncertainty present in the underlying input data, as well as in the model’s final decision. This distinction can signal whether uncertainty can be reduced by tweaking the neural network itself, or whether the input data are just noisy.
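    For readers who want a more concrete picture, here is a minimal, hypothetical sketch (PyTorch-style Python, not the MIT team’s code) of one common way such an evidential output head is written: the final layer emits the four parameters of a Normal-Inverse-Gamma distribution, from which both data (aleatoric) and model (epistemic) uncertainty can be read off in a single forward pass. The layer sizes and names are illustrative assumptions.

    ```python
    # Illustrative sketch of an evidential regression head (not the authors' code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EvidentialHead(nn.Module):
        """Maps features to the four Normal-Inverse-Gamma parameters:
        gamma (predicted mean) plus the evidence parameters nu, alpha, beta."""
        def __init__(self, in_features: int):
            super().__init__()
            self.linear = nn.Linear(in_features, 4)

        def forward(self, x):
            gamma, nu, alpha, beta = self.linear(x).chunk(4, dim=-1)
            nu = F.softplus(nu)               # nu > 0
            alpha = F.softplus(alpha) + 1.0   # alpha > 1
            beta = F.softplus(beta)           # beta > 0
            return gamma, nu, alpha, beta

    def uncertainties(gamma, nu, alpha, beta):
        """Both uncertainties come from one forward pass; no sampling needed."""
        aleatoric = beta / (alpha - 1.0)          # noise in the data itself
        epistemic = beta / (nu * (alpha - 1.0))   # uncertainty in the model
        return aleatoric, epistemic

    # Example: 10 feature vectors of dimension 32.
    head = EvidentialHead(32)
    print([t.shape for t in uncertainties(*head(torch.randn(10, 32)))])
    ```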
    Confidence check
    To put their approach to the test, the researchers started with a challenging computer vision task. They trained their neural network to analyze a monocular color image and estimate a depth value (i.e. distance from the camera lens) for each pixel. An autonomous vehicle might use similar calculations to estimate its proximity to a pedestrian or to another vehicle, which is no simple task.

    Their network’s performance was on par with previous state-of-the-art models, but it also gained the ability to estimate its own uncertainty. As the researchers had hoped, the network projected high uncertainty for pixels where it predicted the wrong depth. “It was very calibrated to the errors that the network makes, which we believe was one of the most important things in judging the quality of a new uncertainty estimator,” Amini says.
    To stress-test their calibration, the team also showed that the network projected higher uncertainty for “out-of-distribution” data — completely new types of images never encountered during training. After they trained the network on indoor home scenes, they fed it a batch of outdoor driving scenes. The network consistently warned that its responses to the novel outdoor scenes were uncertain. The test highlighted the network’s ability to flag when users should not place full trust in its decisions. In these cases, “if this is a health care application, maybe we don’t trust the diagnosis that the model is giving, and instead seek a second opinion,” says Amini.
    The network even knew when photos had been doctored, potentially hedging against data-manipulation attacks. In another trial, the researchers boosted adversarial noise levels in a batch of images they fed to the network. The effect was subtle — barely perceptible to the human eye — but the network sniffed out those images, tagging its output with high levels of uncertainty. This ability to sound the alarm on falsified data could help detect and deter adversarial attacks, a growing concern in the age of deepfakes.
    Deep evidential regression is “a simple and elegant approach that advances the field of uncertainty estimation, which is important for robotics and other real-world control systems,” says Raia Hadsell, an artificial intelligence researcher at DeepMind who was not involved with the work. “This is done in a novel way that avoids some of the messy aspects of other approaches — e.g. sampling or ensembles — which makes it not only elegant but also computationally more efficient — a winning combination.”
    Deep evidential regression could enhance safety in AI-assisted decision making. “We’re starting to see a lot more of these [neural network] models trickle out of the research lab and into the real world, into situations that are touching humans with potentially life-threatening consequences,” says Amini. “Any user of the method, whether it’s a doctor or a person in the passenger seat of a vehicle, needs to be aware of any risk or uncertainty associated with that decision.” He envisions the system not only quickly flagging uncertainty, but also using it to make decisions more conservatively in risky scenarios, such as an autonomous vehicle approaching an intersection.
    “Any field that is going to have deployable machine learning ultimately needs to have reliable uncertainty awareness,” he says.
    This work was supported, in part, by the National Science Foundation and Toyota Research Institute through the Toyota-CSAIL Joint Research Center.

  • Virtual reality helps measure vulnerability to stress

    We all react to stress in different ways. A sudden loud noise or flash of light can elicit different degrees of response from people, which indicates that some of us are more susceptible to the impact of stress than others.
    Any event that causes stress is called a “stressor.” Our bodies are equipped to handle acute exposure to stressors, but chronic exposure can result in mental disorders, such as anxiety and depression, and even physical changes, such as the cardiovascular alterations seen in hypertension or stroke.
    There has been a significant effort to find ways of identifying people who are vulnerable to developing stress-related disorders. The problem is that most of that research has relied on self-reporting and subjective clinical rankings, or on exposing subjects to non-naturalistic environments. Wearables and other sensing technologies have made some headway with the elderly and at-risk individuals, but given how different our lifestyles are, it has been hard to find objective markers of psychogenic disease.
    Approaching the problem with VR
    Now, behavioral scientists led by Carmen Sandi at EPFL’s School of Life Sciences have developed a virtual-reality (VR) method that measures a person’s susceptibility to psychogenic stressors. Building on previous animal studies, the new approach captures high-density locomotion information from a person while they explore two virtual environments, and uses it to predict how their heart-rate variability will change when they are exposed to threatening or highly stressful situations.
    Heart-rate variability is emerging in the field as a strong indicator of vulnerability to physiological stress and of the risk of developing psychopathologies and cardiovascular disorders.

    VR stress scenarios
    In the study, 135 participants were immersed in three different VR scenarios. In the first scenario they explored an empty virtual room, starting from a small red step and facing one of the walls. The virtual room had the same dimensions as the real one the participants were in, so that if they touched a virtual wall, they would actually feel it. After 90 seconds of exploration, the participants were told to return to the small red step they’d started from. The VR room would fade to black and then the second scenario would begin.
    In the second scenario, the participants found themselves on an elevated virtual alley several meters above the ground of a virtual city. They were asked to explore the alley for 90 seconds and then to return to the red step. Once they were on it, the step began to descend faster and faster until it reached ground level. Another fade, and then came the final scenario.
    In the third scenario, the participants were “placed” in a completely dark room. Armed with nothing but a virtual flashlight, they were told to explore a darkened maze corridor, in which four human-like figures were placed in corner areas, while three sudden bursts of white noise came through the participant’s headphones every twenty seconds.
    Developing a predictive model
    The researchers measured the heart rates of the participants as they went through each VR scenario, collecting a large body of heart-rate variation data under controlled experimental conditions. Joao Rodrigues, a postdoc at EPFL and the study’s first author, then analyzed the locomotor data from the first two scenarios using machine-learning methods, and developed a model that can predict a person’s stress response — changes in heart rate variability — in the third threatening scenario.
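    As a rough illustration of this kind of pipeline (not the EPFL code, and using made-up feature names and placeholder data), one could regress heart-rate-variability changes on locomotion features extracted from the two exploration scenarios:

    ```python
    # Hypothetical sketch: predict stress response (change in heart-rate variability)
    # from VR locomotion features. The data here are random placeholders.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # One row per participant; columns = locomotion features from the two
    # non-threatening scenarios (e.g. walking speed, pauses, area covered).
    X = rng.normal(size=(135, 12))
    # Target: change in heart-rate variability measured in the threatening scenario.
    y = rng.normal(size=135)

    model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print("cross-validated R^2:", round(scores.mean(), 3))
    ```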

    The team then tested the model and found that its predictions hold for different groups of participants. They also confirmed that the model can predict vulnerability to a different stressful challenge: a final VR test in which participants had to quickly perform arithmetic exercises and see their score compared with others’. The idea here was to add a timed and social aspect to the stress. In addition, when they gave wrong answers, parts of the virtual floor broke away while a distressing noise played.
    Finally, the researchers also confirmed that their model outperforms other stress-prediction tools, such as anxiety questionnaires. Carmen Sandi says: “The advantage of our study is that we have developed a model in which capturing behavioral parameters of how people explore two novel virtual environments is enough to predict how their heart rate variability would change if they were exposed to highly stressful situations; hence, eliminating the need of testing them in those highly stressful conditions.”
    Measuring stress vulnerability in the future
    The research offers a standardized tool for measuring vulnerability to stressors based on objective markers, and paves the way for the further development of such methods.
    “Our study shows the impressive power of behavioral data to reveal individuals’ physiological vulnerability. It is remarkable how high-density locomotor parameters during VR exploration can help identify persons at risk of developing a myriad of pathologies (cardiovascular, mental disorders, etc.) if exposed to high stress levels. We expect that our study will help the application of early interventions for those individuals at risk.”

  • Meeting a 100-year-old challenge could lead the way to digital aromas

    Fragrances — promising mystery, intrigue and forbidden thrills — are blended by master perfumers, their recipes kept secret. In a new study on the sense of smell, Weizmann Institute of Science researchers have managed to strip much of the mystery from even complex blends of odorants, not by uncovering their secret ingredients, but by recording and mapping how they are perceived. The scientists can now predict how any complex odorant will smell from its molecular structure alone. This study may not only revolutionize the closed world of perfumery, but eventually lead to the ability to digitize and reproduce smells on command. The proposed framework for odors, created by neurobiologists, computer scientists, and a master perfumer, and funded by a European initiative for Future Emerging Technologies (FET-OPEN), was published in Nature.
    “The challenge of plotting smells in an organized and logical manner was first proposed by Alexander Graham Bell over 100 years ago,” says Prof. Noam Sobel of the Institute’s Neurobiology Department. Bell threw down the gauntlet: “We have very many different kinds of smells, all the way from the odor of violets and roses up to asafoetida. But until you can measure their likenesses and differences you can have no science of odor.” This challenge had remained unresolved until now.
    This century-old challenge indeed highlighted the difficulty in fitting odors into a logical system: There are millions of odor receptors in our noses, consisting of hundreds of different subtypes, each shaped to detect particular molecular features. Our brains potentially perceive millions of smells in which these single molecules are mixed and blended at varying intensities. Thus, mapping this information has been a challenge. But Sobel and his colleagues, led by graduate student Aharon Ravia and Dr. Kobi Snitz, found there is an underlying order to odors. They reached this conclusion by adopting Bell’s concept — namely to describe not the smells themselves, but rather the relationships between smells as they are perceived.
    In a series of experiments, the team presented volunteer participants with pairs of smells and asked them to rate these smells on how similar the two seemed to one another, ranking the pairs on a similarity scale ranging from “identical” to “extremely different.” In the initial experiment, the team created 14 aromatic blends, each made of about 10 molecular components, and presented them two at a time to nearly 200 volunteers, so that by the end of the experiment each volunteer had evaluated 95 pairs.
    To translate the resulting database of thousands of reported perceptual similarity ratings into a useful layout, the team refined a physicochemical measure they had previously developed. In this calculation, each odorant is represented by a single vector that combines 21 physical measures (polarity, molecular weight, etc.). To compare two odorants, each represented by a vector, the angle between the vectors is taken to reflect the perceptual similarity between them. A pair of odorants with a low angle distance between them is predicted to smell similar; a pair with a high angle distance is predicted to smell different.
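    In code, the core of the measure is just the angle between two descriptor vectors. The snippet below is a minimal illustration of that idea; the 21 descriptor values are random placeholders, not the team’s actual physicochemical features.

    ```python
    # Minimal illustration of angle distance between odorant descriptor vectors.
    import numpy as np

    def angle_distance(a, b):
        """Angle in radians between two odorant vectors (0 = identical direction)."""
        cos_sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return float(np.arccos(np.clip(cos_sim, -1.0, 1.0)))

    rng = np.random.default_rng(0)
    odorant_a = rng.random(21)   # placeholder 21-dimensional descriptor vector
    odorant_b = rng.random(21)
    print(f"predicted perceptual distance: {angle_distance(odorant_a, odorant_b):.3f} rad")
    ```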
    To test this model, the team first applied it to data collected by others, primarily a large study of odor discrimination by Bushdid and colleagues from the lab of Prof. Leslie Vosshall at The Rockefeller University in New York. The Weizmann team found that their model and measurements accurately predicted the Bushdid results: Odorants with a low angle distance between them were hard to discriminate; odors with a high angle distance between them were easy to discriminate. Encouraged that the model accurately predicted data collected by others, the team went on to test it themselves.
    The team concocted new scents and invited a fresh group of volunteers to smell them, again using their method to predict how this set of participants would rate the pairs — at first 14 new blends and then, in the next experiment, 100 blends. The model performed exceptionally well. In fact, the results were in the same ballpark as those for color perception — sensory information that is grounded in well-defined parameters. This was especially surprising considering each individual likely has a unique complement of smell receptor subtypes, which can vary by as much as 30% across individuals.
    Because the “smell map,” or “metric,” predicts the similarity of any two odorants, it can also be used to predict how an odorant will ultimately smell. For example, any novel odorant within 0.05 radians of banana will smell exactly like banana. As the novel odorant gains distance from banana, it will smell banana-ish, and beyond a certain distance it will stop resembling banana.
    The team is now developing a web-based set of tools that not only predicts how a novel odorant will smell, but can also synthesize odorants by design. For example, one can take any perfume with a known set of ingredients and, using the map and metric, generate a new perfume with no components in common with the original, but with exactly the same smell. Such creations in color vision, namely non-overlapping spectral compositions that generate the same perceived color, are called color metamers, and here the team generated olfactory metamers.
    The study’s findings are a significant step toward realizing a vision of Prof. David Harel of the Computer and Applied Mathematics Department, who also serves as Vice President of the Israel Academy of Sciences and Humanities and who was a co-author of the study: Enabling computers to digitize and reproduce smells. In addition, of course, to being able to add realistic flower or sea aromas to your vacation pictures on social media, giving computers the ability to interpret odors in the way that humans do could have an impact on environmental monitoring and the biomedical and food industries, to name a few. Still, master perfumer Christophe Laudamiel, who is also a co-author of the study, remarks that he is not concerned for his profession just yet.
    Sobel concludes: “100 years ago, Alexander Graham Bell posed a challenge. We have now answered it: The distance between rose and violet is 0.202 radians (they are remotely similar), the distance between violet and asafoetida is 0.5 radians (they are very different), and the difference between rose and asafoetida is 0.565 radians (they are even more different). We have converted odor percepts into numbers, and this should indeed advance the science of odor.”

  • New understanding of mobility paves way for tomorrow's transport systems

    In recent years, big data sets from mobile phones have been used to provide increasingly accurate analyses of how we all move between home, work, leisure, holidays and everything else. The strength of basing analyses on mobile phone data is that they provide accurate information on when, how, and how far each individual moves, without any particular focus on whether they pass geographical boundaries along the way — we simply move from one coordinate to another in a system of longitude and latitude.
    “The problem with existing big data models however is that they do not capture what geographical structures such as neighbourhoods, towns, cities, regions, countries etc. mean for our mobility. This makes it difficult, for example, to generate good models for future mobility. And it is insights of this kind we need when new forms of transport crop up, or when urbanization takes hold,” explains Sune Lehmann, professor at DTU and at the University of Copenhagen.
    In fact, the big data approach to modelling location data has erased the usual dimensions that characterize geographical areas and their significance for our daily journeys and patterns of movement. In mobility research, these are known as scales.
    “Within mobility research, things are sometimes presented as if scale does not come into the equation. At the same time, however, common sense tells us that there have to be typical trips or patterns of movement, which are determined by geography. Intuitively it seems wrong that you cannot see, for example, that a neighbourhood or urban zone has a typical area. A neighbourhood is a place where you can go down and pick up a pizza or buy a bag of sweets. It doesn’t make sense to have a neighbourhood the size of a small country. Geography must play a role. It’s a bit of a paradox,” says Laura Alessandretti, Assistant Professor at DTU and the University of Copenhagen.
    Finds new, natural, and flexible geographical boundaries
    The authors of the article have therefore developed a new mathematical model that defines geographical scales from mobile tracking data, and in this way brings geography — the typical sizes and lengths — back into our understanding of mobility.

    The model uses anonymized mobile data from more than 700,000 individuals worldwide and identifies scales — neighbourhoods, towns, cities, regions, countries — for each person based on their movement data.
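    A toy sketch of how per-person scales might be extracted from location data is shown below. It uses plain hierarchical clustering with arbitrary distance cutoffs and is only an illustration of the idea, not the published model.

    ```python
    # Toy illustration: group one person's stop locations into nested "scales"
    # with hierarchical clustering. The cutoff distances are arbitrary assumptions.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(1)
    stops = np.vstack([
        rng.normal(loc=(0.0, 0.0), scale=0.5, size=(40, 2)),     # around home (km)
        rng.normal(loc=(8.0, 3.0), scale=0.5, size=(30, 2)),     # around work
        rng.normal(loc=(150.0, 60.0), scale=2.0, size=(10, 2)),  # occasional trips
    ])

    tree = linkage(pdist(stops), method="complete")  # pairwise distances in km
    for label, cutoff_km in [("neighbourhood", 2), ("city", 20), ("region", 300)]:
        clusters = fcluster(tree, t=cutoff_km, criterion="distance")
        print(f"{label:>13}: {len(set(clusters))} clusters at a {cutoff_km} km cutoff")
    ```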
    “And if you look at the results, it’s clear that distance plays a role in our patterns of movement, but that when it comes to travel there are typical distances and choices that correspond to geographical boundaries — only it’s not the same boundaries you can find on a map. And to make it all a bit more complex, ‘our geographical areas’ also change depending on who we are. If you live on the boundary between city districts, your neighbourhood is centred, for example, where you live and includes parts of both city districts. Our model also shows that who we are plays a role. The size of a neighbourhood varies depending on whether you are male, female, young, or old. Whether you live in the city or the countryside, or whether you live in Saudi Arabia or the UK,” explains Sune Lehmann.
    Important for the green transition and combating epidemics
    The new model provides a more nuanced and accurate picture of how we move around in different situations and, not least, it makes it possible to predict mobility in relation to geographical developments in general. This has implications for some of society’s most important decisions:
    “Better models of mobility are important. For example, in traffic planning, in the transport sector, and in the fight against epidemics. We can save millions of tonnes of CO2, billions of dollars and many lives by using the most precise models when planning the society of the future,” says Ulf Aslak Jensen, a postdoc at DTU and the University of Copenhagen.
    Fact box: The borders move depending on who you are
    In the article, the researchers use the model, among other things, to study mobility differences between population groups in 53 countries. They find that:
    In 21 of the 53 countries surveyed, women switch between more geographical levels per day than men.
    Women move over shorter distances than men.
    The local areas of the rural population are larger than those of the urban population.

  • Smartphone screen time linked to preference for quicker but smaller rewards

    In a new study, people who spent more time on their phones — particularly on gaming or social media apps — were more likely to reject larger, delayed rewards in favor of smaller, immediate rewards. Tim van Endert and Peter Mohr of Freie Universität in Berlin, Germany, present these findings in the open-access journal PLOS ONE on November 18, 2020.
    Previous research has suggested behavioral similarities between excessive smartphone use and maladaptive behaviors such as alcohol abuse, compulsive gambling, or drug abuse. However, most investigations of excessive smartphone use and personality factors linked to longer screen time have relied on self-reported measurements of smartphone engagement.
    To gain further clarity, van Endert and Mohr recruited volunteers who agreed to let the researchers collect actual data on the amount of time they spent on each app on their iPhones for the previous seven to ten days. Usage data was collected from 101 participants, who also completed several tasks and questionnaires that assessed their self-control and their behaviors regarding rewards.
    The analysis found that participants with greater total screen time were more likely to prefer smaller, immediate rewards to larger, delayed rewards. A preference for smaller, immediate rewards was linked to heavier use of two specific types of apps: gaming and social media.
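    For context, a preference for smaller, immediate rewards is typically quantified with a delay-discounting task. The sketch below is purely illustrative (it is not the study’s analysis, and the choice data are invented): it fits a hyperbolic discount rate k, where a larger k means steeper discounting of delayed rewards.

    ```python
    # Illustrative only: estimate a hyperbolic discount rate k from hypothetical
    # choices between a smaller-sooner and a larger-later reward.
    # Discounted value: V = amount / (1 + k * delay_in_days)
    import numpy as np
    from scipy.optimize import minimize_scalar

    # (immediate amount, delayed amount, delay in days, 1 if immediate was chosen)
    trials = [(5, 10, 7, 1), (8, 10, 7, 1), (2, 10, 30, 0), (6, 10, 30, 1)]

    def neg_log_likelihood(k):
        nll = 0.0
        for imm, delayed, delay, chose_imm in trials:
            v_delayed = delayed / (1.0 + k * delay)
            p_imm = 1.0 / (1.0 + np.exp(-(imm - v_delayed)))  # logistic choice rule
            nll -= np.log(p_imm if chose_imm else 1.0 - p_imm)
        return nll

    fit = minimize_scalar(neg_log_likelihood, bounds=(1e-4, 1.0), method="bounded")
    print(f"estimated discount rate k = {fit.x:.3f} (larger k = steeper discounting)")
    ```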
    Participants who demonstrated greater self-control spent less time on their phones, but a participant’s level of consideration of future consequences showed no correlation with their screen time. Neither self-control nor consideration of future consequences appeared to impact the relationship between screen time and preference for smaller, immediate rewards.
    These findings add to growing evidence for a link between smartphone use and impulsive decision-making, and they support the similarity between smartphone use and other behaviors thought to be maladaptive. The authors suggest that further research on smartphone engagement could help inform policies to guide prudent use.
    The authors add: “Our findings provide further evidence that smartphone use and impulsive decision-making go hand in hand and that engagement with this device needs to be critically examined by researchers to guide prudent behavior.”

    Story Source:
    Materials provided by PLOS.

  • Novel magnetic spray transforms objects into millirobots for biomedical applications

    An easy way to make millirobots by coating objects with a glue-like magnetic spray was developed in a joint research led by a scientist from City University of Hong Kong (CityU). Driven by the magnetic field, the coated objects can crawl, walk, or roll on different surfaces. As the magnetic coating is biocompatible and can be disintegrated into powders when needed, this technology demonstrates the potential for biomedical applications, including catheter navigation and drug delivery.
    The research team is led by Dr Shen Yajing, Associate Professor of the Department of Biomedical Engineering (BME) at CityU in collaboration with the Shenzhen Institutes of Advanced Technology (SIAT), Chinese Academy of Sciences (CAS). The research findings have been published in the scientific journal Science Robotics, titled “An agglutinate magnetic spray transforms inanimate objects into millirobots for biomedical applications.”
    Transforming objects into millirobots with a “magnetic coat”
    Scientists have been developing millirobots or insect-scale robots that can adapt to different environments for exploration and biomedical applications.
    Dr Shen’s research team came up with a simple approach to construct millirobots by coating objects with a composited glue-like magnetic spray, called M-spray. “Our idea is that by putting on this ‘magnetic coat’, we can turn any objects into a robot and control their locomotion. The M-spray we developed can stick on the targeted object and ‘activate’ the object when driven by a magnetic field,” explained Dr Shen.
    Composed of polyvinyl alcohol (PVA), gluten and iron particles, M-spray can adhere instantly, stably and firmly to the rough or smooth surfaces of one- (1D), two- (2D) or three-dimensional (3D) objects. The film it forms on the surface is only about 0.1 to 0.25 mm thick, thin enough to preserve the original size, form and structure of the objects.

    After coating an object with M-spray, the researchers magnetised it with single or multiple magnetisation directions, which determine how the object moves under a magnetic field. They then heated the object until the coating solidified.
    In this way, when driven by a magnetic field, the objects can be transformed into millirobots with different locomotion modes, such as crawling, flipping, walking and rolling, on surfaces ranging from glass, skin and wood to sand. The team demonstrated this by converting a cotton thread (1D), origami (2D flat plane), a polydimethylsiloxane (PDMS) film (2D curved/soft surface) and a plastic pipe (3D round object) into a soft reptile robot, a multi-foot robot, a walking robot and a rolling robot, respectively.
    On-demand reprogramming to change locomotion mode
    What makes this approach special is that the team can reprogramme the millirobot’s locomotion mode on demand.
    Mr Yang Xiong, the co-first author of the paper, explained that a robot’s structure is conventionally fixed once it is constructed, which constrains its versatility of motion. However, by fully wetting the solidified M-spray coating to make it adhesive like glue and then applying a strong magnetic field, the distribution and alignment direction of the magnetic particles (the easy magnetisation axis) of the M-spray coating can be changed.

    Their experiments showed that the same millirobot could switch between different locomotion modes, such as from a faster 3D caterpillar movement in a spacious environment to a slower 2D concertina movement for passing through a narrow gap.
    Navigating ability and disintegrable property
    This reprogrammable actuation feature is also helpful for navigating towards targets. To explore the potential in biomedical applications, the team carried out experiments with a catheter, a device widely inserted into the body to treat disease or perform surgical procedures. They demonstrated that the M-spray-coated catheter could perform sharp or smooth turns, and that the impact of blood/liquid flow on the motion and stability of the coated catheter was limited.
    By reprogramming the M-spray coating of different sections of a cotton thread according to the delivery task and environment, they further showed that it could steer quickly and pass smoothly through an irregular, narrow structure. Dr Shen pointed out that, from the point of view of clinical application, this can prevent a catheter from unexpectedly plunging into the throat wall during insertion. “Task-based reprogramming offers promising potential for catheter manipulation in the complex esophagus, vessel and urethra where navigation is always required,” he said.
    Another important feature of this technology is that the M-spray coating can be disintegrated into powders on demand by manipulating the magnetic field. “All the raw materials of M-spray, namely PVA, gluten and iron particles, are biocompatible. The disintegrated coating could be absorbed or excreted by the human body,” said Dr Shen, stressing that the side effects of M-spray disintegration are negligible.
    Successful drug delivery in rabbit stomach
    To further verify the feasibility and effectiveness of the M-spray-enabled millirobot for drug delivery, the team conducted an in vivo test with rabbits and a capsule coated with M-spray. During the delivery process, the rabbits were anaesthetised, and the position of the capsule in the stomach was tracked by radiology imaging. When the capsule reached the targeted region, the researchers disintegrated the coating by applying an oscillating magnetic field. “The controllable disintegration property of M-spray enables the drug to be released in a targeted location rather than scattering in the organ,” Dr Shen added.
    Although the M-spray coating starts to disintegrate after about eight minutes in a strongly acidic environment (pH 1), the team showed that an additional PVA layer on top of the coating could prolong this to about 15 minutes. And if the iron particles are replaced with nickel particles, the coating can remain stable in a strongly acidic environment even after 30 minutes.
    “Our experimental results indicated that different millirobots can be constructed with M-spray to adapt to various environments, surface conditions and obstacles. We hope this construction strategy can contribute to the development and application of millirobots in different fields, such as active transportation, movable sensors and devices, particularly for tasks in limited space,” said Dr Shen.
    The research was supported by the National Science Foundation of China and the Research Grants Council of Hong Kong.

  • Curved origami provides new range of stiffness-to-flexibility in robots

    New research that employs curved origami structures has dramatic implications in the development of robotics going forward, providing tunable flexibility — the ability to adjust stiffness based on function — that historically has been difficult to achieve using simple design.
    “The incorporation of curved origami structures into robotic design provides a remarkable possibility in tunable flexibility, or stiffness, as its complementary concept,” explained Hanqing Jiang, a mechanical engineering professor at Arizona State University. “High flexibility, or low stiffness, is comparable to the soft landing navigated by a cat. Low flexibility, or high stiffness, is similar to executing a hard jump in a pair of stiff boots,” he said.
    Jiang is the lead author of a paper, “In Situ Stiffness Manipulation Using Elegant Curved Origami,” published this week in Science Advances. “Curved Origami can add both strength and cat-like flexibility to robotic actions,” he said.
    Jiang also compared employing curved origami to the operational differences between sporty cars sought by drivers who want to feel the rigidity of the road and vehicles desired by those who seek a comfortable ride that alleviates jarring movements. “Similar to switching between a sporty car mode to a comfortable ride mode, these curved origami structures will simultaneously offer a capability to on-demand switch between soft and hard modes depending on how the robots interact with the environment,” he said.
    Robotics requires a variety of stiffness modes: high rigidity is necessary for lifting weights; high flexibility is needed for impact absorption; and negative stiffness, or the ability to quickly release stored energy like a spring, is needed for sprinting.
    Traditionally, the mechanisms used to accommodate variations in rigidity are bulky and offer only a nominal range, whereas curved origami can compactly support an expanded stiffness range with on-demand flexibility. The structures covered in the team’s research combine the folding energy at the origami creases with the bending of the panels, tuned by switching among multiple curved creases between two points.
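    As a very rough analogy (a toy model, not the paper’s mechanics), a crease and the panel it joins can be thought of as two compliant elements in series, so switching to a crease with a different folding stiffness changes the effective stiffness felt between the two end points:

    ```python
    # Toy illustration: crease folding and panel bending as two springs in series.
    # All numbers are made up; the real curved-origami mechanics are more involved.

    def effective_stiffness(k_crease, k_panel):
        """Series combination: 1/k_eff = 1/k_crease + 1/k_panel."""
        return 1.0 / (1.0 / k_crease + 1.0 / k_panel)

    k_panel = 2.0  # fixed panel bending stiffness (arbitrary units)
    creases = {"gentle curve": 0.2, "moderate curve": 1.0, "sharp curve": 5.0}

    for name, k_crease in creases.items():
        print(f"{name:>14}: effective stiffness = {effective_stiffness(k_crease, k_panel):.2f}")
    ```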
    Curved origami enables a single robot to accomplish a variety of movements. A pneumatic, swimming robot developed by the team can accomplish a range of nine different movements, including fast, medium, slow, linear and rotational movements, by simply adjusting which creases are used.
    In addition to applications for robotics, the curved origami research principles are also relevant for the design of mechanical metamaterials in the fields of electromagnetics, automobile and aerospace components, and biomedical devices. “The beauty of this work is that the design of curved origami is very similar, just by changing the straight creases to curved creases, and each curved crease corresponds to a particular flexibility,” Jiang said.
    The research was funded by the Mechanics of Materials and Structures program of the National Science Foundation. Authors contributing to the paper are Hanqing Jiang, Zirui Zhai and Lingling Wu from the School for Engineering of Matter, Transport and Energy at Arizona State University, and Yong Wang and Ken Lin from the Department of Engineering Mechanics at Zhejiang University, China.

    Story Source:
    Materials provided by Arizona State University.