More stories

  • Virtual reality helps measure vulnerability to stress

    We all react to stress in different ways. A sudden loud noise or flash of light can elicit different degrees of response from people, which indicates that some of us are more susceptible to the impact of stress than others.
    Any event that causes stress is called a “stressor.” Our bodies are equipped to handle acute exposure to stressors, but chronic exposure can result in mental disorders, such as anxiety and depression, and even in physical changes, such as the cardiovascular alterations seen in hypertension or stroke.
    There has been significant effort to find ways of identifying people who are vulnerable to developing stress-related disorders. The problem is that most of that research has relied on self-reporting and subjective clinical rankings, or on exposing subjects to non-naturalistic environments. Wearables and other sensing technologies have made some headway with the elderly and at-risk individuals, but given how different our lifestyles are, it has been hard to find objective markers of psychogenic disease.
    Approaching the problem with VR
    Now, behavioral scientists led by Carmen Sandi at EPFL’s School of Life Sciences have developed a virtual-reality (VR) method that measures a person’s susceptibility to psychogenic stressors. Building on previous animal studies, the new approach captures high-density locomotion information from a person while they explore two virtual environments in order to predict how their heart-rate variability will change when they are exposed to threatening or highly stressful situations.
    Heart-rate variability is emerging in the field as a strong indicator of vulnerability to physiological stress and of the risk of developing psychopathologies and cardiovascular disorders.

    VR stress scenarios
    In the study, 135 participants were immersed in three different VR scenarios. In the first scenario they explored an empty virtual room, starting from a small red step and facing one of the walls. The virtual room had the same dimensions as the real one the participants were in, so that if they touched a virtual wall, they would actually feel it. After 90 seconds of exploration, the participants were told to return to the small red step they’d started from. The VR room would then fade to black and the second scenario would begin.
    In the second scenario, the participants found themselves on an elevated virtual alley several meters above the ground of a virtual city. They were asked to explore the alley for 90 seconds and then to return to the red step. Once they were on it, the step descended faster and faster until it reached ground level. Another fade, and then came the final scenario.
    In the third scenario, the participants were “placed” in a completely dark room. Armed with nothing but a virtual flashlight, they were told to explore a darkened maze corridor in which four human-like figures were placed in corner areas, while three sudden bursts of white noise came through the participants’ headphones every twenty seconds.
    Developing a predictive model
    The researchers measured the heart rates of the participants as they went through each VR scenario, collecting a large body of heart-rate variation data under controlled experimental conditions. Joao Rodrigues, a postdoc at EPFL and the study’s first author, then analyzed the locomotor data from the first two scenarios using machine-learning methods, and developed a model that can predict a person’s stress response — changes in heart rate variability — in the third threatening scenario.
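    As a rough illustration of that pipeline, here is a minimal sketch that fits a regression model to locomotion features and scores it by cross-validation. It is not the authors’ code: the choice of a random-forest regressor is an assumption, and the feature matrix and heart-rate-variability target below are random placeholders rather than the study’s actual locomotion parameters.
    ```python
    # Minimal sketch (not the authors' code): predicting a stress response
    # from exploration features with a regression model. All data are
    # random placeholders; the study's real features and model may differ.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Hypothetical high-density locomotion features from the two exploration
    # scenarios (e.g., speed, pauses, wall proximity), one row per participant.
    X = rng.normal(size=(135, 12))
    # Hypothetical target: change in heart-rate variability under threat.
    y = rng.normal(size=135)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"cross-validated R^2: {scores.mean():.2f}")
    ```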

    The team then tested the model and found that its predictions hold across different groups of participants. They also confirmed that the model can predict vulnerability to a different stressful challenge: a final VR test in which participants had to perform arithmetic exercises quickly and see their scores compared with others’. The idea was to add timed and social dimensions to the stress. In addition, when participants gave wrong answers, parts of the virtual floor broke away while a distressing noise played.
    Finally, the researchers also confirmed that their model outperforms other stress-prediction tools, such as anxiety questionnaires. Carmen Sandi says: “The advantage of our study is that we have developed a model in which capturing behavioral parameters of how people explore two novel virtual environments is enough to predict how their heart rate variability would change if they were exposed to highly stressful situations; hence, eliminating the need of testing them in those highly stressful conditions.”
    Measuring stress vulnerability in the future
    The research offers a standardized tool for measuring vulnerability to stressors based on objective markers, and paves the way for the further development of such methods.
    “Our study shows the impressive power of behavioral data to reveal individuals’ physiological vulnerability. It is remarkable how high-density locomotor parameters during VR exploration can help identify persons at risk of developing a myriad of pathologies (cardiovascular, mental disorders, etc.) if exposed to high stress levels. We expect that our study will help the application of early interventions for those individuals at risk.”

  • Meeting a 100-year-old challenge could lead the way to digital aromas

    Fragrances — promising mystery, intrigue and forbidden thrills — are blended by master perfumers, their recipes kept secret. In a new study on the sense of smell, Weizmann Institute of Science researchers have managed to strip much of the mystery from even complex blends of odorants, not by uncovering their secret ingredients, but by recording and mapping how they are perceived. The scientists can now predict how any complex odorant will smell from its molecular structure alone. This study may not only revolutionize the closed world of perfumery, but eventually lead to the ability to digitize and reproduce smells on command. The proposed framework for odors, created by neurobiologists, computer scientists, and a master perfumer, and funded by a European initiative for Future Emerging Technologies (FET-OPEN), was published in Nature.
    “The challenge of plotting smells in an organized and logical manner was first proposed by Alexander Graham Bell over 100 years ago,” says Prof. Noam Sobel of the Institute’s Neurobiology Department. Bell threw down the gauntlet: “We have very many different kinds of smells, all the way from the odor of violets and roses up to asafoetida. But until you can measure their likenesses and differences you can have no science of odor.” This challenge had remained unresolved until now.
    This century-old challenge indeed highlighted the difficulty of fitting odors into a logical system: There are millions of odor receptors in our noses, comprising hundreds of different subtypes, each shaped to detect particular molecular features. Our brains potentially perceive millions of smells in which these single molecules are mixed and blended at varying intensities. Thus, mapping this information has been a challenge. But Sobel and his colleagues, led by graduate student Aharon Ravia and Dr. Kobi Snitz, found that there is an underlying order to odors. They reached this conclusion by adopting Bell’s concept — namely, to describe not the smells themselves, but rather the relationships between smells as they are perceived.
    In a series of experiments, the team presented volunteer participants with pairs of smells and asked them to rate these smells on how similar the two seemed to one another, ranking the pairs on a similarity scale ranging from “identical” to “extremely different.” In the initial experiment, the team created 14 aromatic blends, each made of about 10 molecular components, and presented them two at a time to nearly 200 volunteers, so that by the end of the experiment each volunteer had evaluated 95 pairs.
    To translate the resulting database of thousands of reported perceptual similarity ratings into a useful layout, the team refined a physicochemical measure they had previously developed. In this calculation, each odorant is represented by a single vector that combines 21 physical measures (polarity, molecular weight, etc.). To compare two odorants, the angle between their vectors is taken to reflect the perceptual similarity between them: a pair of odorants with a low angle distance between them is predicted to smell similar, while a pair with a high angle distance is predicted to smell different.
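    As a minimal sketch of that comparison, the snippet below computes the angle between two 21-dimensional descriptor vectors; the descriptor values are random placeholders, not the study’s actual physicochemical measures.
    ```python
    # Minimal sketch of the angle-distance idea: each odorant blend is a
    # vector of physicochemical descriptors (21 in the study; values here
    # are made up), and perceptual similarity is the angle between vectors.
    import numpy as np

    def angle_distance(u, v):
        """Angle (radians) between two odorant descriptor vectors."""
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    odorant_a = np.random.rand(21)  # placeholder descriptor values
    odorant_b = np.random.rand(21)
    print(f"predicted dissimilarity: {angle_distance(odorant_a, odorant_b):.3f} rad")
    ```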
    To test this model, the team first applied it to data collected by others, primarily a large study on odor discrimination by Bushdid and colleagues from the lab of Prof. Leslie Vosshall at The Rockefeller University in New York. The Weizmann team found that their model and measurements accurately predicted the Bushdid results: Odorants with a low angle distance between them were hard to discriminate; odors with a high angle distance between them were easy to discriminate. Encouraged by the model’s accurate predictions on data collected by others, the team went on to run tests of its own.
    The team concocted new scents and invited a fresh group of volunteers to smell them, again using their method to predict how this set of participants would rate the pairs — at first 14 new blends and then, in the next experiment, 100 blends. The model performed exceptionally well. In fact, the results were in the same ballpark as those for color perception — sensory information that is grounded in well-defined parameters. This was especially surprising considering each individual likely has a unique complement of smell receptor subtypes, which can vary by as much as 30% across individuals.
    Because the “smell map,” or “metric,” predicts the similarity of any two odorants, it can also be used to predict how an odorant will ultimately smell. For example, any novel odorant within 0.05 radians of banana will smell exactly like banana. As the novel odorant gains distance from banana, it will smell banana-ish, and beyond a certain distance, it will stop resembling banana.
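    Continuing the sketch above, the banana rule of thumb might be applied as follows. Only the 0.05-radian threshold comes from the text; the second cutoff and the descriptor vectors are invented for illustration.
    ```python
    # Continuing the sketch above: classify a novel odorant relative to
    # banana. The 0.05 rad threshold is quoted in the text; the 0.2 rad
    # cutoff and all descriptor vectors are invented.
    import numpy as np

    def angle_distance(u, v):
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    banana = np.random.rand(21)                 # placeholder "banana" vector
    novel = banana + 0.02 * np.random.rand(21)  # a nearby hypothetical blend

    d = angle_distance(banana, novel)
    if d <= 0.05:
        label = "smells exactly like banana"
    elif d <= 0.2:  # illustrative cutoff, not from the study
        label = "banana-ish"
    else:
        label = "no longer resembles banana"
    print(f"{d:.3f} rad: {label}")
    ```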
    The team is now developing a set of web-based tools that not only predict how a novel odorant will smell, but can also synthesize odorants by design. For example, one can take any perfume with a known set of ingredients and, using the map and metric, generate a new perfume with no components in common with the original, but with exactly the same smell. Such creations in color vision, namely non-overlapping spectral compositions that generate the same perceived color, are called color metamers, and here the team generated olfactory metamers.
    The study’s findings are a significant step toward realizing a vision of Prof. David Harel of the Computer and Applied Mathematics Department, who also serves as Vice President of the Israel Academy of Sciences and Humanities and who was a co-author of the study: enabling computers to digitize and reproduce smells. Beyond adding realistic flower or sea aromas to your vacation pictures on social media, giving computers the ability to interpret odors the way humans do could have an impact on environmental monitoring and on the biomedical and food industries, to name a few. Still, master perfumer Christophe Laudamiel, who is also a co-author of the study, remarks that he is not concerned for his profession just yet.
    Sobel concludes: “100 years ago, Alexander Graham Bell posed a challenge. We have now answered it: The distance between rose and violet is 0.202 radians (they are remotely similar), the distance between violet and asafoetida is 0.5 radians (they are very different), and the difference between rose and asafoetida is 0.565 radians (they are even more different). We have converted odor percepts into numbers, and this should indeed advance the science of odor.”

  • New understanding of mobility paves way for tomorrow's transport systems

    In recent years, big data sets from mobile phones have been used to provide increasingly accurate analyses of how we all move between home, work, leisure, holidays, and everything else. The strength of basing analyses on mobile phone data is that they provide accurate data on when, how, and how far each individual moves, without any particular focus on whether they pass geographical boundaries along the way — we simply move from one coordinate to another in a system of longitude and latitude.
    “The problem with existing big data models however is that they do not capture what geographical structures such as neighbourhoods, towns, cities, regions, countries etc. mean for our mobility. This makes it difficult, for example, to generate good models for future mobility. And it is insights of this kind we need when new forms of transport crop up, or when urbanization takes hold,” explains Sune Lehmann, professor at DTU and at the University of Copenhagen.
    In fact, the big data approach to modelling location data has erased the usual dimensions that characterize geographical areas and their significance for our daily journeys and patterns of movement. In mobility research, these are known as scales.
    “Within mobility research, things are sometimes presented as if scale does not come into the equation. At the same time, however, common sense tells us that there have to be typical trips or patterns of movement, which are determined by geography. Intuitively it seems wrong that you cannot see, for example, that a neighbourhood or urban zone has a typical area. A neighbourhood is a place where you can go down and pick up a pizza or buy a bag of sweets. It doesn’t make sense to have a neighbourhood the size of a small country. Geography must play a role. It’s a bit of a paradox,” says Laura Alessandretti, Assistant Professor at DTU and the University of Copenhagen.
    Finding new, natural, and flexible geographical boundaries
    The authors of the article have therefore developed a new mathematical model that derives geographical scales from mobile tracking data, and which in this way brings geography — the usual sizes and lengths — back into our understanding of mobility.

    The model uses anonymized mobile data from more than 700,000 individuals worldwide and identifies scales — neighbourhoods, towns, cities, regions, countries — for each person based on their movement data.
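    One way to make the idea of person-specific scales concrete is to cluster an individual’s stop locations hierarchically and cut the tree at several distances, as in the sketch below. This is an illustration of the concept only, not the paper’s model; the thresholds and data are invented.
    ```python
    # Illustrative sketch (not the paper's model): infer nested spatial
    # "scales" from one person's stop locations by hierarchical clustering,
    # cutting the tree at several distances to get neighbourhood/city/
    # region-like containers. Thresholds and data are invented.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    rng = np.random.default_rng(1)
    # Hypothetical stop locations (km coordinates) for one person.
    stops = np.vstack([
        rng.normal((0, 0), 0.5, size=(40, 2)),    # around home
        rng.normal((8, 3), 0.7, size=(25, 2)),    # around work
        rng.normal((60, 40), 2.0, size=(10, 2)),  # occasional trips
    ])

    tree = linkage(stops, method="ward")
    for name, cut_km in [("neighbourhood", 2), ("city", 15), ("region", 80)]:
        labels = fcluster(tree, t=cut_km, criterion="distance")
        print(f"{name}: {labels.max()} container(s)")
    ```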
    “And if you look at the results, it’s clear that distance plays a role in our patterns of movement, but when it comes to travel there are typical distances and choices that correspond to geographical boundaries — only they’re not the same boundaries you can find on a map. And to make it all a bit more complex, ‘our geographical areas’ also change depending on who we are. If you live on the boundary between city districts, your neighbourhood is instead centred on where you live and includes parts of both city districts. Our model also shows that who we are plays a role: the size of a neighbourhood varies depending on whether you are male or female, young or old, whether you live in the city or the countryside, and whether you live in Saudi Arabia or the UK,” explains Sune Lehmann.
    Important for the green transition and combating epidemics
    The new model provides a more nuanced and accurate picture of how we move around in different situations and, not least, it makes it possible to predict mobility in relation to geographical developments in general. This has implications for some of society’s most important decisions:
    “Better models of mobility are important. For example, in traffic planning, in the transport sector, and in the fight against epidemics. We can save millions of tonnes of CO2, billions of dollars and many lives by using the most precise models when planning the society of the future,” says Ulf Aslak Jensen, postdoc at DTU and the University of Copenhagen.
    Fact box: The borders move depending on who you are
    In the article, the researchers use the model to study, among other things, mobility differences between population groups in 53 countries. They find that:
    In 21 of the 53 countries surveyed, women switch between more geographical levels each day than men do
    Women move within smaller distances than men
    The local areas of the rural population are larger than those of the urban population.

  • Smartphone screen time linked to preference for quicker but smaller rewards

    In a new study, people who spent more time on their phones — particularly on gaming or social media apps — were more likely to reject larger, delayed rewards in favor of smaller, immediate rewards. Tim van Endert and Peter Mohr of Freie Universität Berlin, Germany, present these findings in the open-access journal PLOS ONE on November 18, 2020.
    Previous research has suggested behavioral similarities between excessive smartphone use and maladaptive behaviors such as alcohol abuse, compulsive gambling, or drug abuse. However, most investigations of excessive smartphone use and personality factors linked to longer screen time have relied on self-reported measurements of smartphone engagement.
    To gain further clarity, van Endert and Mohr recruited volunteers who agreed to let the researchers collect actual data on the amount of time they spent on each app on their iPhones for the previous seven to ten days. Usage data was collected from 101 participants, who also completed several tasks and questionnaires that assessed their self-control and their behaviors regarding rewards.
    The analysis found that participants with greater total screen time were more likely to prefer smaller, immediate rewards to larger, delayed rewards. A preference for smaller, immediate rewards was linked to heavier use of two specific types of apps: gaming and social media.
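    A standard way to quantify such preferences in intertemporal-choice tasks is a hyperbolic discounting model, sketched below. The article does not state which model the authors fitted, and the amounts and k values here are purely illustrative.
    ```python
    # Minimal sketch of a common intertemporal-choice model (an assumption;
    # the article does not specify the authors' analysis): under hyperbolic
    # discounting, a delayed amount A at delay D is valued V = A / (1 + k*D).
    # A larger k means a stronger preference for smaller, immediate rewards.
    def discounted_value(amount, delay_days, k):
        return amount / (1.0 + k * delay_days)

    # Example choice: 20 euros now vs. 50 euros in 30 days.
    for k in (0.01, 0.1, 1.0):
        v_delayed = discounted_value(50, 30, k)
        choice = "take 50 later" if v_delayed > 20 else "take 20 now"
        print(f"k={k}: delayed reward worth {v_delayed:.1f} -> {choice}")
    ```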
    Participants who demonstrated greater self-control spent less time on their phones, but a participant’s level of consideration of future consequences showed no correlation with their screen time. Neither self-control nor consideration of future consequences appeared to impact the relationship between screen time and preference for smaller, immediate rewards.
    These findings add to growing evidence for a link between smartphone use and impulsive decision-making, and they support the similarity between smartphone use and other behaviors thought to be maladaptive. The authors suggest that further research on smartphone engagement could help inform policies to guide prudent use.
    The authors add: “Our findings provide further evidence that smartphone use and impulsive decision-making go hand in hand and that engagement with this device needs to be critically examined by researchers to guide prudent behavior.”

    Story Source:
    Materials provided by PLOS.

  • Novel magnetic spray transforms objects into millirobots for biomedical applications

    An easy way to make millirobots by coating objects with a glue-like magnetic spray has been developed in joint research led by a scientist from City University of Hong Kong (CityU). Driven by a magnetic field, the coated objects can crawl, walk, or roll on different surfaces. As the magnetic coating is biocompatible and can be disintegrated into powders when needed, the technology shows potential for biomedical applications, including catheter navigation and drug delivery.
    The research team is led by Dr Shen Yajing, Associate Professor of the Department of Biomedical Engineering (BME) at CityU in collaboration with the Shenzhen Institutes of Advanced Technology (SIAT), Chinese Academy of Sciences (CAS). The research findings have been published in the scientific journal Science Robotics, titled “An agglutinate magnetic spray transforms inanimate objects into millirobots for biomedical applications.”
    Transforming objects into millirobots with a “magnetic coat”
    Scientists have been developing millirobots or insect-scale robots that can adapt to different environments for exploration and biomedical applications.
    Dr Shen’s research team came up with a simple approach to construct millirobots by coating objects with a composited glue-like magnetic spray, called M-spray. “Our idea is that by putting on this ‘magnetic coat’, we can turn any objects into a robot and control their locomotion. The M-spray we developed can stick on the targeted object and ‘activate’ the object when driven by a magnetic field,” explained Dr Shen.
    Composed of polyvinyl alcohol (PVA), gluten and iron particles, M-spray can adhere instantly, stably and firmly to the rough or smooth surfaces of one- (1D), two- (2D) or three-dimensional (3D) objects. The film it forms on the surface is just 0.1 to 0.25 mm thick, thin enough to preserve the original size, form and structure of the objects.

    After coating an object with M-spray, the researchers magnetised it with single or multiple magnetisation directions, which determine how the object moves under a magnetic field. They then heated the object until the coating solidified.
    In this way, when driven by a magnetic field, the objects can be transformed into millirobots with different locomotion modes, such as crawling, flipping, walking, and rolling, on surfaces ranging from glass, skin, and wood to sand. The team demonstrated this feature by converting cotton thread (1D), origami (2D flat plane), polydimethylsiloxane (PDMS) film (2D curved/soft surface) and plastic pipe (3D round object) into a soft reptile robot, a multi-foot robot, a walking robot and a rolling robot, respectively.
    On-demand reprogramming to change locomotion mode
    What makes this approach special is that the team can reprogramme the millirobot’s locomotion mode on demand.
    Mr Yang Xiong, the co-first author of the paper, explained that a robot’s initial structure is conventionally fixed once it is constructed, which constrains its versatility of motion. However, by fully wetting the solidified M-spray coating to make it adhesive like glue and then applying a strong magnetic field, the distribution and alignment direction of the coating’s magnetic particles (its easy magnetisation axis) can be changed.

    Their experiments showed that the same millirobot could switch between different locomotion modes, such as from a faster 3D caterpillar movement in a spacious environment to a slower 2D concertina movement for passing through a narrow gap.
    Navigation ability and disintegrability
    This reprogrammable actuation feature is also helpful for navigating towards targets. To explore the potential in biomedical applications, the team carried out experiments with a catheter, a device widely inserted into the body to treat disease or perform surgical procedures. They demonstrated that the M-spray-coated catheter could perform sharp or smooth turns, and that the impact of blood/liquid flow on the coated catheter’s motion and stability was limited.
    By reprogramming the M-spray coating of different sections of a cotton thread according to the delivery task and environment, they further showed that the thread could steer quickly and pass smoothly through an irregular, narrow structure. Dr Shen pointed out that, from the view of clinical application, this can prevent a catheter from unexpectedly plunging into the throat wall during insertion. “Task-based reprogramming offers promising potential for catheter manipulation in the complex esophagus, vessel and urethra where navigation is always required,” he said.
    Another important feature of this technology is that the M-spray coating can be disintegrated into powders on demand by manipulating the magnetic field. “All the raw materials of M-spray, namely PVA, gluten and iron particles, are biocompatible. The disintegrated coating could be absorbed or excreted by the human body,” said Dr Shen, stressing that the side effects of M-spray’s disintegration are negligible.
    Successful drug delivery in rabbit stomach
    To further verify the feasibility and effectiveness of the M-spray-enabled millirobot for drug delivery, the team conducted an in vivo test with rabbits and a capsule coated with M-spray. During the delivery process, the rabbits were anaesthetised, and the position of the capsule in the stomach was tracked by radiological imaging. When the capsule reached the targeted region, the researchers disintegrated the coating by applying an oscillating magnetic field. “The controllable disintegration property of M-spray enables the drug to be released in a targeted location rather than scattering in the organ,” Dr Shen added.
    Though the M-spray coating starts to disintegrate in about eight minutes in a strongly acidic environment (pH 1), the team showed that an additional PVA layer on the surface of the coating could prolong this to about 15 minutes. And if the iron particles are replaced with nickel particles, the coating remains stable in a strongly acidic environment even after 30 minutes.
    “Our experimental results indicated that different millirobots can be constructed with M-spray to suit various environments, surface conditions and obstacles. We hope this construction strategy can contribute to the development and application of millirobots in different fields, such as active transportation, movable sensors and devices, particularly for tasks in confined spaces,” said Dr Shen.
    The research was supported by the National Science Foundation of China and the Research Grants Council of Hong Kong.

  • Curved origami provides new range of stiffness-to-flexibility in robots

    New research that employs curved origami structures has dramatic implications for the development of robotics, providing tunable flexibility — the ability to adjust stiffness based on function — that has historically been difficult to achieve with simple designs.
    “The incorporation of curved origami structures into robotic design provides a remarkable possibility in tunable flexibility, or stiffness, as its complementary concept,” explained Hanqing Jiang, a mechanical engineering professor at Arizona State University. “High flexibility, or low stiffness, is comparable to the soft landing navigated by a cat. Low flexibility, or high stiffness, is similar to executing a hard jump in a pair of stiff boots,” he said.
    Jiang is the lead author of a paper, “In Situ Stiffness Manipulation Using Elegant Curved Origami,” published this week in Science Advances. “Curved origami can add both strength and cat-like flexibility to robotic actions,” he said.
    Jiang also compared employing curved origami to the operational differences between sporty cars sought by drivers who want to feel the rigidity of the road and vehicles desired by those who seek a comfortable ride that alleviates jarring movements. “Similar to switching between a sporty car mode to a comfortable ride mode, these curved origami structures will simultaneously offer a capability to on-demand switch between soft and hard modes depending on how the robots interact with the environment,” he said.
    Robotics requires a variety of stiffness modes: high rigidity is necessary for lifting weights; high flexibility is needed for impact absorption; and negative stiffness, or the ability to quickly release stored energy like a spring, is needed for sprinting.
    Traditionally, the mechanisms for accommodating rigidity variances can be bulky and offer only a nominal range, whereas curved origami can compactly support an expanded stiffness scale with on-demand flexibility. The structures covered in the team’s research combine the folding energy at the origami creases with the bending of the panel, tuned by switching among multiple curved creases between two points.
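    As a toy illustration of that energy balance (not the paper’s mechanics), the sketch below treats total elastic energy as a crease-folding term plus a panel-bending term and reads off the effective stiffness at the rest angle; switching to a more sharply curved crease couples in more bending and stiffens the structure. All constants are invented.
    ```python
    # Toy illustration, not the paper's model: elastic energy is a crease
    # folding term plus a panel bending term; the chosen crease sets the
    # bending contribution and hence the effective stiffness. Constants
    # are arbitrary.
    import numpy as np

    K_CREASE = 1.0  # folding stiffness at the crease (arbitrary units)

    def total_energy(theta, k_bend, theta0=np.pi / 2):
        fold = 0.5 * K_CREASE * (theta - theta0) ** 2  # crease folding energy
        bend = 0.5 * k_bend * (theta - theta0) ** 2    # panel bending energy
        return fold + bend

    def effective_stiffness(k_bend, theta0=np.pi / 2, h=1e-4):
        """Second derivative of energy at the rest angle (finite difference)."""
        e = lambda t: total_energy(t, k_bend, theta0)
        return (e(theta0 + h) - 2 * e(theta0) + e(theta0 - h)) / h**2

    # A more sharply curved crease couples in more panel bending.
    for crease, k_bend in [("straight", 0.0), ("gently curved", 2.0), ("sharply curved", 8.0)]:
        print(f"{crease} crease: effective stiffness = {effective_stiffness(k_bend):.2f}")
    ```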
    Curved origami enables a single robot to accomplish a variety of movements. A pneumatic, swimming robot developed by the team can accomplish a range of nine different movements, including fast, medium, slow, linear and rotational movements, by simply adjusting which creases are used.
    In addition to applications for robotics, the curved origami research principles are also relevant for the design of mechanical metamaterials in the fields of electromagnetics, automobile and aerospace components, and biomedical devices. “The beauty of this work is that the design of curved origami is very similar, just by changing the straight creases to curved creases, and each curved crease corresponds to a particular flexibility,” Jiang said.
    The research was funded by the Mechanics of Materials and Structures program of the National Science Foundation. Authors contributing to the paper are Hanqing Jiang, Zirui Zhai and Lingling Wu from the School for Engineering, Matter, Transport and Energy, Arizona State University, and Yong Wang, Ken Lin from the Department of Engineering Mechanics at Zhejiang University, China.

    Story Source:
    Materials provided by Arizona State University.

  • Deep learning helps robots grasp and move objects with ease

    In the past year, lockdowns and other COVID-19 safety measures have made online shopping more popular than ever, but the skyrocketing demand is leaving many retailers struggling to fulfill orders while ensuring the safety of their warehouse employees.
    Researchers at the University of California, Berkeley, have created new artificial intelligence software that gives robots the speed and skill to grasp and smoothly move objects, making it feasible for them to soon assist humans in warehouse environments. The technology is described in a paper published online today (Wednesday, Nov. 18) in the journal Science Robotics.
    Automating warehouse tasks can be challenging because many actions that come naturally to humans — like deciding where and how to pick up different types of objects and then coordinating the shoulder, arm and wrist movements needed to move each object from one location to another — are actually quite difficult for robots. Robotic motion also tends to be jerky, which can increase the risk of damaging both the products and the robots.
    “Warehouses are still operated primarily by humans, because it’s still very hard for robots to reliably grasp many different objects,” said Ken Goldberg, William S. Floyd Jr. Distinguished Chair in Engineering at UC Berkeley and senior author of the study. “In an automobile assembly line, the same motion is repeated over and over again, so that it can be automated. But in a warehouse, every order is different.”
    In earlier work, Goldberg and UC Berkeley postdoctoral researcher Jeffrey Ichnowski created a Grasp-Optimized Motion Planner that could compute both how a robot should pick up an object and how it should move to transfer the object from one location to another.
    However, the motions generated by this planner were jerky. While the parameters of the software could be tweaked to generate smoother motions, these calculations took an average of about half a minute to compute.

    In the new study, Goldberg and Ichnowski, in collaboration with UC Berkeley graduate student Yahav Avigal and undergraduate student Vishal Satish, dramatically sped up the computing time of the motion planner by integrating a deep learning neural network.
    Neural networks allow a robot to learn from examples. Later, the robot can often generalize to similar objects and motions.
    However, these approximations aren’t always accurate enough. Goldberg and Ichnowski found that the approximation generated by the neural network could then be optimized using the motion planner.
    “The neural network takes only a few milliseconds to compute an approximate motion. It’s very fast, but it’s inaccurate,” Ichnowski said. “However, if we then feed that approximation into the motion planner, the motion planner only needs a few iterations to compute the final motion.”
    By combining the neural network with the motion planner, the team cut average computation time from 29 seconds to 80 milliseconds, or less than one-tenth of a second.
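    The warm-start pattern behind that speedup can be sketched as follows. This is not Berkeley’s planner: the “neural network” below is a stand-in stub, and the cost simply penalizes jerky motion between fixed endpoints, but it shows why a good approximation leaves the optimizer only a few iterations of work.
    ```python
    # Minimal sketch of the warm-start pattern (not the actual planner):
    # a fast approximator proposes a trajectory, and an optimizer refines
    # it. The "network" is a stub; the cost penalizes jerky motion.
    import numpy as np
    from scipy.optimize import minimize

    N, START, GOAL = 20, 0.0, 1.0

    def cost(waypoints):
        path = np.concatenate(([START], waypoints, [GOAL]))
        return np.sum(np.diff(path, n=2) ** 2)  # penalize acceleration (jerkiness)

    def fake_network_guess():
        # Stand-in for a learned approximation: near-linear, slightly noisy.
        noise = 0.05 * np.random.default_rng(0).normal(size=N)
        return np.linspace(START, GOAL, N + 2)[1:-1] + noise

    warm = minimize(cost, fake_network_guess())  # starts close to the optimum
    cold = minimize(cost, np.zeros(N))           # starts from scratch
    print(f"warm start: {warm.nit} iterations | cold start: {cold.nit} iterations")
    ```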
    Goldberg predicts that, with this and other advances in robotic technology, robots could be assisting in warehouse environments in the next few years.
    “Shopping for groceries, pharmaceuticals, clothing and many other things has changed as a result of COVID-19, and people are probably going to continue shopping this way even after the pandemic is over,” Goldberg said. “This is an exciting new opportunity for robots to support human workers.”

    Story Source:
    Materials provided by University of California – Berkeley. Original written by Kara Manke.

  • Versatile building blocks make structures with surprising mechanical properties

    Researchers at MIT’s Center for Bits and Atoms have created tiny building blocks that exhibit a variety of unique mechanical properties, such as the ability to produce a twisting motion when squeezed. These subunits could potentially be assembled by tiny robots into a nearly limitless variety of objects with built-in functionality, including vehicles, large industrial parts, or specialized robots that can be repeatedly reassembled in different forms.
    The researchers created four different types of these subunits, called voxels (a 3D variation on the pixels of a 2D image). Each voxel type exhibits special properties not found in typical natural materials, and in combination they can be used to make devices that respond to environmental stimuli in predictable ways. Examples might include airplane wings or turbine blades that respond to changes in air pressure or wind speed by changing their overall shape.
    The findings, which detail the creation of a family of discrete “mechanical metamaterials,” are described in a paper published today in the journal Science Advances, authored by recent MIT doctoral graduate Benjamin Jenett PhD ’20, Professor Neil Gershenfeld, and four others.
    Metamaterials get their name because their large-scale properties are different from the microlevel properties of their component materials. They are used in electromagnetics and as “architected” materials, which are designed at the level of their microstructure. “But there hasn’t been much done on creating macroscopic mechanical properties as a metamaterial,” Gershenfeld says.
    With this approach, engineers should be able to build structures incorporating a wide range of material properties — and produce them all using the same shared production and assembly processes, Gershenfeld says.
    The voxels are assembled from flat frame pieces of injection-molded polymers, then combined into three-dimensional shapes that can be joined into larger functional structures. They are mostly open space and thus provide an extremely lightweight but rigid framework when assembled. Besides the basic rigid unit, which provides an exceptional combination of strength and light weight, there are three other variations of these voxels, each with a different unusual property.

    The “auxetic” voxels have a strange property: a cube of the material, when compressed, bulges inward instead of bulging out at the sides. This is the first demonstration of such a material produced through conventional and inexpensive manufacturing methods.
    There are also “compliant” voxels, with a Poisson’s ratio of zero. This is somewhat similar to the auxetic property, but in this case, when the material is compressed, the sides do not change shape at all. Few known materials exhibit this property, which can now be produced through this new approach.
    Finally, “chiral” voxels respond to axial compression or stretching with a twisting motion. Again, this is an uncommon property; research that produced one such material through complex fabrication techniques was hailed last year as a significant finding. This work makes this property easily accessible at macroscopic scales.
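    The property that separates the first three voxel types is the Poisson’s ratio, nu = -(transverse strain)/(axial strain). The quick illustration below uses invented strain values purely to show the sign conventions: positive nu for ordinary bulging, negative for auxetic, zero for compliant.
    ```python
    # Quick illustration of the Poisson's ratio distinguishing the voxel
    # types: nu = -(transverse strain) / (axial strain). Strain values are
    # invented purely to show the sign conventions.
    def poisson_ratio(axial_strain, transverse_strain):
        return -transverse_strain / axial_strain

    cases = {
        "ordinary material": (-0.010, 0.003),   # compress -> sides bulge out
        "auxetic voxel":     (-0.010, -0.003),  # compress -> sides pull inward
        "compliant voxel":   (-0.010, 0.000),   # compress -> sides unchanged
    }
    for name, (axial, transverse) in cases.items():
        print(f"{name}: nu = {poisson_ratio(axial, transverse):+.1f}")
    ```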
    “Each type of material property we’re showing has previously been its own field,” Gershenfeld says. “People would write papers on just that one property. This is the first thing that shows all of them in one single system.”
    To demonstrate the real-world potential of large objects constructed in a LEGO-like manner out of these mass-produced voxels, the team, working in collaboration with engineers at Toyota, produced a functional super-mileage race car, which they demonstrated in the streets during an international robotics conference earlier this year.

    They were able to assemble the lightweight, high-performance structure in just a month, Jenett says, whereas building a comparable structure using conventional fiberglass construction methods had previously taken a year.
    During the demonstration, the streets were slick from rain, and the race car ended up crashing into a barrier. To the surprise of everyone involved, the car’s lattice-like internal structure deformed and then bounced back, absorbing the shock with little damage. A conventionally built car, Jenett says, would likely have been severely dented if it was made of metal, or shattered if it was composite.
    The car provided a vivid demonstration of the fact that these tiny parts can indeed be used to make functional devices at human-sized scales. And, Gershenfeld points out, in the structure of the car, “these aren’t parts connected to something else. The whole thing is made out of nothing but these parts,” except for the motors and power supply.
    Because the voxels are uniform in size and composition, they can be combined in any way needed to provide different functions for the resulting device. “We can span a wide range of material properties that before now have been considered very specialized,” Gershenfeld says. “The point is that you don’t have to pick one property. You can make, for example, robots that bend in one direction and are stiff in another direction and move only in certain ways. And so, the big change over our earlier work is this ability to span multiple mechanical material properties, that before now have been considered in isolation.”
    Jenett, who carried out much of this work as the basis for his doctoral thesis, says “these parts are low-cost, easily produced, and very fast to assemble, and you get this range of properties all in one system. They’re all compatible with each other, so there’s all these different types of exotic properties, but they all play well with each other in the same scalable, inexpensive system.”
    “Think about all the rigid parts and moving parts in cars and robots and boats and planes,” Gershenfeld says. “And we can span all of that with this one system.”
    A key factor is that a structure made up of one type of these voxels will behave exactly the same way as the subunit itself, Jenett says. “We were able to demonstrate that the joints effectively disappear when you assemble the parts together. It behaves as a continuum, monolithic material.”
    Whereas robotics research has tended to be divided between hard and soft robots, “this is very much neither,” Gershenfeld says, because of its potential to mix and match these properties within a single device.
    One possible early application of this technology, Jenett says, could be building the blades of wind turbines. As these structures become ever larger, transporting the blades to their operating site becomes a serious logistical issue, whereas if they were assembled from thousands of tiny subunits, that job could be done on site, eliminating the transportation issue. Similarly, the disposal of used turbine blades is already becoming a serious problem because of their large size and lack of recyclability. But blades made up of tiny voxels could be disassembled on site, and the voxels then reused to make something else.
    And in addition, the blades themselves could be more efficient, because they could have a mix of mechanical properties designed into the structure that would allow them to respond dynamically, passively, to changes in wind strength, he says.
    Overall, Jenett says, “Now we have this low-cost, scalable system, so we can design whatever we want to. We can do quadrupeds, we can do swimming robots, we can do flying robots. That flexibility is one of the key benefits of the system.”
    The research team included Filippos Tourlomousis, Alfonso Parra Rubio, and Megan Ochalek at MIT, and Christopher Cameron at the U.S. Army Research Laboratory. The work was supported by NASA, the U.S. Army Research Laboratory and the Center for Bits and Atoms Consortia.