More stories

  • Artificial intelligence paves way for new medicines

    Researchers have developed an AI model that can predict where a drug molecule can be chemically altered.
    A team of researchers from LMU, ETH Zurich, and Roche Pharma Research and Early Development (pRED) Basel has used artificial intelligence (AI) to develop an innovative approach that predicts the optimal way to synthesize drug molecules. “This method has the potential to significantly reduce the number of required lab experiments, thereby increasing both the efficiency and sustainability of chemical synthesis,” says David Nippa, lead author of the corresponding paper, which has been published in the journal Nature Chemistry. Nippa is a doctoral student in Dr. David Konrad’s research group at the Faculty of Chemistry and Pharmacy at LMU and at Roche.
    Active pharmaceutical ingredients typically consist of a framework to which functional groups are attached. These groups enable a specific biological function. To achieve new or improved medical effects, functional groups are altered and added to new positions in the framework. However, this process is particularly challenging in chemistry, as the frameworks, which mainly consist of carbon and hydrogen atoms, are hardly reactive themselves. One method of activating the framework is the so-called borylation reaction. In this process, a chemical group containing the element boron is attached to a carbon atom of the framework. This boron group can then be replaced by a variety of medically effective groups. Although borylation has great potential, it is difficult to control in the lab.
    Together with Kenneth Atz, a doctoral student at ETH Zurich, David Nippa developed an AI model that was trained on data from trustworthy scientific works and experiments from an automated lab at Roche. It can successfully predict the position of borylation for any molecule and provides the optimal conditions for the chemical transformation. “Interestingly, the predictions improved when the three-dimensional information of the starting materials was taken into account, not just their two-dimensional chemical formulas,” says Atz.
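    As a rough illustration of the 2D-versus-3D distinction in that quote (not the authors’ model or descriptors), the hypothetical sketch below uses the open-source RDKit toolkit to contrast a purely graph-based fingerprint with conformer coordinates for one example molecule; the molecule and feature choices are illustrative only.

      # Hypothetical sketch: contrasting a 2D graph-based fingerprint with 3D
      # conformer coordinates for one molecule, using RDKit. The molecule (caffeine)
      # and the descriptor choices are illustrative, not those used in the study.
      from rdkit import Chem
      from rdkit.Chem import AllChem

      mol = Chem.MolFromSmiles("CN1C=NC2=C1C(=O)N(C(=O)N2C)C")  # caffeine

      # 2D view: a Morgan fingerprint derived purely from the molecular graph.
      fp_2d = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

      # 3D view: embed a conformer and read out atomic coordinates, which also
      # capture steric accessibility that the flat graph does not.
      mol_3d = Chem.AddHs(mol)
      AllChem.EmbedMolecule(mol_3d, randomSeed=42)
      coords = mol_3d.GetConformer().GetPositions()  # (n_atoms, 3) array

      print("2D fingerprint bits set:", fp_2d.GetNumOnBits())
      print("3D coordinate array shape:", coords.shape)
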
    The method has already been successfully used to identify positions in existing active ingredients where additional active groups can be introduced. This helps researchers develop new and more effective variants of known drug active ingredients more quickly. More

  • What was thought of as noise points to new type of ultrafast magnetic switching

    Noise on the radio when reception is poor is a typical example of how fluctuations mask a physical signal. In fact, such interference or noise occurs in every physical measurement in addition to the actual signal. “Even in the loneliest place in the universe, where there should be nothing at all, there are still fluctuations of the electromagnetic field,” says physicist Ulrich Nowak. In the Collaborative Research Centre (CRC) 1432 “Fluctuations and Nonlinearities in Classical and Quantum Matter beyond Equilibrium” at the University of Konstanz, researchers do not see this omnipresent noise as a disturbing factor that needs to be eliminated as far as possible, but as a source of information that tells us something about the signal.
    No magnetic effect, but fluctuations
    This approach has now proved successful when investigating antiferromagnets. Antiferromagnets are magnetic materials in which the magnetizations of several sub-lattices cancel each other out. Nevertheless, antiferromagnetic insulators are considered promising for energy-efficient components in the field of information technology. As they have hardly any magnetic fields on the outside, they are very difficult to characterize physically. Yet, antiferromagnets are surrounded by magnetic fluctuations, which can tell us a lot about these weakly magnetic materials.
    In this spirit, the groups of the two materials scientists Ulrich Nowak and Sebastian Gönnenwein analyzed the fluctuations of antiferromagnetic materials within the CRC. The decisive factor in their combined theoretical and experimental study, recently published in the journal Nature Communications, was the specific frequency range. “We measure very fast fluctuations and have developed a method with which fluctuations can still be detected on the ultrashort time scale of femtoseconds,” says experimental physicist Sebastian Gönnenwein. A femtosecond is one millionth of a billionth of a second.
    New experimental approach for ultrafast time scales
    On slower time scales, one could use electronics that are fast enough to measure these fluctuations. On ultrafast time scales, this no longer works, which is why a new experimental approach had to be developed. It is based on an idea from the research group of Alfred Leitenstorfer, who is also a member of the Collaborative Research Centre. Employing laser technology, the researchers use pulse sequences or pulse pairs in order to obtain information about fluctuations. Initially, this measurement approach was developed to investigate quantum fluctuations, and has now been extended to fluctuations in magnetic systems. Takayuki Kurihara from the University of Tokyo played a key role in this development as the third cooperation partner. He was a member of the Leitenstorfer research group and the Zukunftskolleg at the University of Konstanz from 2018 to 2020.
    Detection of fluctuations using ultrashort light pulses
    In the experiment, two ultrashort light pulses are transmitted through the magnet with a time delay, each pulse probing the magnet’s properties during its transit. The light pulses are then checked for similarity using sophisticated electronics. The first pulse serves as a reference; the second contains information about how much the antiferromagnet has changed in the time between the two pulses. Differing measurement results at the two points in time reveal the fluctuations. Ulrich Nowak’s research group also modelled the experiment in elaborate computer simulations in order to better understand its results.
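    As a rough illustration of this pulse-pair idea (not the actual Konstanz setup), the toy simulation below generates a signal that randomly hops between two states and correlates its values at two delayed sampling times; the switching rate, trace length and delays are arbitrary assumptions.

      # Toy sketch (not the actual experiment): a signal that randomly hops between
      # two states is sampled at two times separated by a delay, mimicking the
      # pulse-pair scheme; the correlation of the two readings decays on the scale
      # of the switching time. All parameters are arbitrary assumptions.
      import numpy as np

      rng = np.random.default_rng(0)
      rate = 0.02                      # assumed switching probability per time step
      n_traces, n_steps = 500, 2000

      flips = rng.random((n_traces, n_steps)) < rate
      states = 1 - 2 * (np.cumsum(flips, axis=1) % 2)   # two-state signal: +1 / -1

      t0 = 500                          # time of the first "pulse"
      for delay in (1, 10, 50, 200, 1000):
          first, second = states[:, t0], states[:, t0 + delay]
          print(f"delay={delay:5d}  correlation={np.mean(first * second):+.2f}")
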
    One unexpected result was the discovery of what is known as telegraph noise on ultrashort time scales. This means that there is not only unstructured noise, but also fluctuations in which the system switches back and forth between two well-defined states. Such fast, purely random switching has never been observed before and could be interesting for applications such as random number generators. In any case, the new methodological possibilities for analyzing fluctuations on ultrashort time scales offer great potential for further discoveries in the field of functional materials. More

  • A single Bitcoin transaction could cost as much water as a backyard swimming pool

    Cryptocurrency mining uses a significant amount of water amid the global water crisis, and its water demand may grow further. In a commentary published November 29 in the journal Cell Reports Sustainability, financial economist Alex de Vries provides the first comprehensive estimate of Bitcoin’s water use. He warns that its sheer scale could impact drinking water if it continues to operate without constraints, especially in countries that are already battling water scarcity, including the U.S.
    “Many parts of the world are experiencing droughts, and fresh water is becoming an increasingly scarce resource,” says de Vries, a PhD student at Vrije Universiteit Amsterdam. “If we continue to use this valuable resource for making useless computations, I think that reality is really painful.”
    Previous research on crypto’s resource use has primarily focused on electricity consumption. When mining Bitcoins, the most popular cryptocurrency, miners around the world are essentially racing to solve mathematical equations on the internet, and the winners get a share of Bitcoin’s value. In the Bitcoin network, miners make about 350 quintillion — that is, 350 followed by 18 zeros — guesses every second of the day, an activity that consumes a tremendous amount of computing power.
    “The right answer emerges every 10 minutes, and the rest of the data, quintillions of them, are computations that serve no further purpose and are therefore immediately discarded,” de Vries says.
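    The guessing race described here is a hash-based proof of work. The snippet below is a minimal, purely illustrative sketch of that idea (not Bitcoin’s actual mining code, which double-hashes a specific block-header format against a far harder difficulty target); the block data and target are made up so the loop finishes quickly.

      # Minimal proof-of-work sketch: hash the block data with successive nonces
      # until the digest falls below a target. Real Bitcoin mining double-hashes a
      # specific block-header format against a far harder difficulty target; the
      # data and target here are made up so the loop finishes quickly.
      import hashlib

      block_data = b"example block header"   # placeholder content
      target = 2 ** 240                      # deliberately easy target (~1 in 65,536 guesses)

      nonce = 0
      while True:
          digest = hashlib.sha256(block_data + nonce.to_bytes(8, "little")).digest()
          if int.from_bytes(digest, "big") < target:
              break                          # this guess "wins"; every other guess is discarded
          nonce += 1

      print(f"found a valid nonce ({nonce}) after {nonce + 1} guesses")
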
    During the same process, a large amount of water is used to cool the computers at large data centers. Based on data from previous research, de Vries calculates that Bitcoin mining consumes about 8.6 to 35.1 gigaliters (GL) of water per year in the U.S. In addition to cooling computers, coal- and gas-fired power plants that provide electricity to run the computers also use water to lower the temperature. This cooling water is evaporated and not available to be reused. Water evaporated from hydropower plants also adds to the water footprint of Bitcoin’s power demand.
    In total, de Vries estimates that in 2021, Bitcoin mining consumed over 1,600 GL of water worldwide. Each transaction on the Bitcoin blockchain uses 16,000 liters of water on average, about 6.2 million times more than a credit card swipe, or enough to fill a backyard swimming pool. Bitcoin’s water consumption is expected to increase to 2,300 GL in 2023, de Vries says.
    In the U.S., Bitcoin mining consumes about 93 GL to 120 GL of water every year, equivalent to the average water consumption of 300,000 U.S. households or a city like Washington, D.C.
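    A quick back-of-the-envelope check shows how these figures fit together; the implied transaction count and per-swipe volume are derived here for illustration and are not stated in the article.

      # Back-of-the-envelope check of the figures quoted above (all volumes in litres).
      worldwide_2021_l = 1_600e9      # ~1,600 GL worldwide in 2021
      per_transaction_l = 16_000      # average per Bitcoin transaction

      implied_transactions = worldwide_2021_l / per_transaction_l
      print(f"implied Bitcoin transactions in 2021: {implied_transactions:,.0f}")  # ~100 million

      # "6.2 million times more than a credit card swipe" implies roughly:
      per_card_swipe_ml = per_transaction_l / 6.2e6 * 1000
      print(f"implied water per card swipe: {per_card_swipe_ml:.1f} mL")           # ~2.6 mL
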

    “The price of Bitcoin just increased recently and reached its highest point of the year, despite the recent collapse of several cryptocurrency platforms. This will have serious consequences, because the higher the price, the higher the environmental impact,” de Vries says. “The most painful thing about cryptocurrency mining is that it uses so much computational power and so much resources, but these resources are not going into creating some kind of model, like artificial intelligence, that you can then use for something else. It’s just making useless computations.”
    At a value of more than $37,000 per coin, Bitcoin continues to expand across the world. In countries in Central Asia, where the dry climate is already putting pressure on freshwater supply, increased Bitcoin mining activities will worsen the problem. In Kazakhstan, a global cryptocurrency mining hub, Bitcoin transactions consumed 997.9 GL of water in 2021. The Central Asian country is already grappling with a water crisis, and Bitcoin mining’s growing water footprint could exacerbate the shortage.
    De Vries suggests that approaches such as modifying Bitcoin mining’s software could cut down on the power and water needed for this process. Incorporating renewable energy sources that don’t involve water, including wind and solar, can also reduce water consumption.
    “But do you really want to spend wind and solar power for crypto? In many countries including the U.S., the amount of renewable energy is limited. Sure you can move some of these renewable energy sources to crypto, but that means something else will be powered with fossil fuels. I’m not sure how much you gain,” he says. More

  • Quantum tool opens door to uncharted phenomena

    Entanglement is a quantum phenomenon in which the properties of two or more particles become interconnected in such a way that one can no longer assign a definite state to each individual particle. Rather, all the particles that share the state have to be considered at once. The entanglement of the particles ultimately determines the properties of a material.
    “Entanglement of many particles is the feature that makes the difference,” emphasizes Christian Kokail, one of the first authors of the paper now published in Nature. “At the same time, however, it is very difficult to determine.” The researchers led by Peter Zoller at the University of Innsbruck and the Institute of Quantum Optics and Quantum Information (IQOQI) of the Austrian Academy of Sciences (ÖAW) now provide a new approach that can significantly improve the study and understanding of entanglement in quantum materials. In order to describe large quantum systems and extract information from them about the existing entanglement, one would naively need to perform an impossibly large number of measurements. “We have developed a more efficient description that allows us to extract entanglement information from the system with drastically fewer measurements,” explains theoretical physicist Rick van Bijnen.
    In an ion trap quantum simulator with 51 particles, the scientists have imitated a real material by recreating it particle by particle and studying it in a controlled laboratory environment. Very few research groups worldwide can control so many particles as precisely as the Innsbruck experimental physicists led by Christian Roos and Rainer Blatt. “The main technical challenge we face here is how to maintain low error rates while controlling 51 ions trapped in our trap and ensuring the feasibility of individual qubit control and readout,” explains experimentalist Manoj Joshi. In the process, the scientists witnessed for the first time effects in the experiment that had previously only been described theoretically. “Here we have combined knowledge and methods that we have painstakingly worked out together over the past years. It’s impressive to see that you can do these things with the resources available today,” says an excited Christian Kokail, who recently joined the Institute for Theoretical Atomic Molecular and Optical Physics at Harvard.
    Shortcut via temperature profiles
    In a quantum material, particles can be more or less strongly entangled. Measurements on a strongly entangled particle yield only random results. If the results of the measurements fluctuate very much — i.e., if they are purely random — then scientists refer to this as “hot.” If the probability of a certain result increases, it is a “cold” quantum object. Only the measurement of all entangled objects reveals the exact state. In systems consisting of very many particles, the effort for the measurement increases enormously. Quantum field theory has predicted that subregions of a system of many entangled particles can be assigned a temperature profile. These profiles can be used to derive the degree of entanglement of the particles.
    In the Innsbruck quantum simulator, these temperature profiles are determined via a feedback loop between a computer and the quantum system, with the computer constantly generating new profiles and comparing them with the actual measurements in the experiment. The temperature profiles obtained by the researchers show that particles that interact strongly with the environment are “hot” and those that interact little are “cold.” “This is exactly in line with expectations that entanglement is particularly large where the interaction between particles is strong,” says Christian Kokail.
    Opening doors to new areas of physics
    “The methods we have developed provide a powerful tool for studying large-scale entanglement in correlated quantum matter. This opens the door to the study of a new class of physical phenomena with quantum simulators that already are available today,” says quantum mastermind Peter Zoller. “With classical computers, such simulations can no longer be computed with reasonable effort.” The methods developed in Innsbruck will also be used to test new theories on such platforms.
    The results have been published in Nature. Financial support for the research was provided by the Austrian Science Fund FWF, the Austrian Research Promotion Agency FFG, the European Union, the Federation of Austrian Industries Tyrol and others. More

  • Nearly 400,000 new compounds added to open-access materials database

    New technology often calls for new materials — and with supercomputers and simulations, researchers don’t have to wade through inefficient guesswork to invent them from scratch.
    The Materials Project, an open-access database founded at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) in 2011, computes the properties of both known and predicted materials. Researchers can focus on promising materials for future technologies — think lighter alloys that improve fuel economy in cars, more efficient solar cells to boost renewable energy, or faster transistors for the next generation of computers.
    Now, Google DeepMind — Google’s artificial intelligence lab — is contributing nearly 400,000 new compounds to the Materials Project, expanding the amount of information researchers can draw upon. The dataset includes how the atoms of a material are arranged (the crystal structure) and how stable it is (formation energy).
    “We have to create new materials if we are going to address the global environmental and climate challenges,” said Kristin Persson, the founder and director of the Materials Project at Berkeley Lab and a professor at UC Berkeley. “With innovation in materials, we can potentially develop recyclable plastics, harness waste energy, make better batteries, and build cheaper solar panels that last longer, among many other things.”
    To generate the new data, Google DeepMind developed a deep learning tool called Graph Networks for Materials Exploration, or GNoME. Researchers trained GNoME using workflows and data that were developed over a decade by the Materials Project, and improved the GNoME algorithm through active learning. GNoME researchers ultimately produced 2.2 million crystal structures, including 380,000 that they are adding to the Materials Project and predict are stable, making them potentially useful in future technologies. The new results from Google DeepMind are published today in the journal Nature.
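    The stability prediction mentioned above is usually framed in terms of a structure’s energy relative to the convex hull of competing phases. The sketch below illustrates only that filtering step; the records, field names and 25 meV/atom tolerance are assumptions, not the actual GNoME or Materials Project schema.

      # Toy illustration of the stability screen only: keep candidate crystals whose
      # predicted energy above the convex hull is (near) zero. The records, field
      # names and the 25 meV/atom tolerance are assumptions for illustration, not
      # the actual GNoME or Materials Project schema.
      candidates = [
          {"formula": "Li3PS4",  "e_above_hull_ev_per_atom": 0.000},
          {"formula": "Na2MnO3", "e_above_hull_ev_per_atom": 0.012},
          {"formula": "K4SiO5",  "e_above_hull_ev_per_atom": 0.180},
      ]

      TOLERANCE = 0.025  # eV/atom; a common rule of thumb for "potentially (meta)stable"

      stable = [c["formula"] for c in candidates
                if c["e_above_hull_ev_per_atom"] <= TOLERANCE]
      print("predicted stable or nearly stable:", stable)
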
    Some of the computations from GNoME were used alongside data from the Materials Project to test A-Lab, a facility at Berkeley Lab where artificial intelligence guides robots in making new materials. A-Lab’s first paper, also published today in Nature, showed that the autonomous lab can quickly discover novel materials with minimal human input.
    Over 17 days of independent operation, A-Lab successfully produced 41 new compounds out of an attempted 58 — a rate of more than two new materials per day. For comparison, it can take a human researcher months of guesswork and experimentation to create one new material, if they ever reach the desired material at all.

    To make the novel compounds predicted by the Materials Project, A-Lab’s AI created new recipes by combing through scientific papers and using active learning to make adjustments. Data from the Materials Project and GNoME were used to evaluate the materials’ predicted stability.
    “We had this staggering 71% success rate, and we already have a few ways to improve it,” said Gerd Ceder, the principal investigator for A-Lab and a scientist at Berkeley Lab and UC Berkeley. “We’ve shown that combining the theory and data side with automation has incredible results. We can make and test materials faster than ever before, and adding more data points to the Materials Project means we can make even smarter choices.”
    The Materials Project is the most widely used open-access repository of information on inorganic materials in the world. The database holds millions of properties on hundreds of thousands of structures and molecules, information primarily processed at Berkeley Lab’s National Energy Research Scientific Computing Center. More than 400,000 people are registered as users of the site and, on average, more than four papers citing the Materials Project are published every day. The contribution from Google DeepMind is the biggest addition of structure-stability data from a group since the Materials Project began.
    “We hope that the GNoME project will drive forward research into inorganic crystals,” said Ekin Dogus Cubuk, lead of Google DeepMind’s Materials Discovery team. “External researchers have already verified more than 736 of GNoME’s new materials through concurrent, independent physical experiments, demonstrating that our model’s discoveries can be realized in laboratories.”
    The Materials Project is now processing the compounds from Google DeepMind and adding them into the online database. The new data will be freely available to researchers, and also feed into projects such as A-Lab that partner with the Materials Project.
    “I’m really excited that people are using the work we’ve done to produce an unprecedented amount of materials information,” said Persson, who is also the director of Berkeley Lab’s Molecular Foundry. “This is what I set out to do with the Materials Project: To not only make the data that I produced free and available to accelerate materials design for the world, but also to teach the world what computations can do for you. They can scan large spaces for new compounds and properties more efficiently and rapidly than experiments alone can.”
    By following promising leads from data in the Materials Project over the past decade, researchers have experimentally confirmed useful properties in new materials across several areas. Some show potential for use:
      • in carbon capture (pulling carbon dioxide from the atmosphere)
      • as photocatalysts (materials that speed up chemical reactions in response to light and could be used to break down pollutants or generate hydrogen)
      • as thermoelectrics (materials that could help harness waste heat and turn it into electrical power)
      • as transparent conductors (which might be useful in solar cells, touch screens, or LEDs)
    Of course, finding these prospective materials is only one of many steps to solving some of humanity’s big technology challenges.
    “Making a material is not for the faint of heart,” Persson said. “It takes a long time to take a material from computation to commercialization. It has to have the right properties, work within devices, be able to scale, and have the right cost efficiency and performance. The goal with the Materials Project and facilities like A-Lab is to harness data, enable data-driven exploration, and ultimately give companies more viable shots on goal.” More

  • Network of robots can successfully monitor pipes using acoustic wave sensors

    An inspection design method and procedure by which mobile robots can inspect large pipe structures has been demonstrated: using guided acoustic wave sensors, robots successfully inspected multiple defects on a three-meter-long steel pipe.
    The approach, developed by a University of Bristol team led by Professor Bruce Drinkwater and Professor Anthony Croxford, was used to examine a long steel pipe with multiple defects, including circular holes of different sizes, a crack-like defect and pits, along a designed inspection path that achieves 100% detection coverage for a defined reference defect.
    In the study, published today in NDT and E International, they show how they were able to effectively examine large plate-like structures using a network of independent robots, each carrying sensors capable of both sending and receiving guided acoustic waves, working in pulse-echo mode.
    This approach has the major advantage of minimizing communication between the robots and requiring no synchronization, and it raises the possibility of on-board processing to lower data transfer costs and hence reduce overall inspection expenses. The inspection was divided into a defect detection stage and a defect localization stage.
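    As a rough illustration of the pulse-echo principle (not the published algorithm), the sketch below converts each robot’s echo delay into a range ring around its position and takes the defect location as the least-squares intersection of the rings; the wave speed, robot positions and delays are made-up values.

      # Illustrative pulse-echo localization sketch (not the published method): each
      # robot's transducer measures a round-trip echo delay, which defines a range
      # ring around it; the defect is taken as the least-squares intersection of the
      # rings. Wave speed, positions and delays are made-up values.
      import numpy as np
      from scipy.optimize import least_squares

      wave_speed = 3000.0                                       # m/s, assumed group velocity
      robots = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.5]])   # sensor positions (m)
      echo_delays = np.array([1.000e-3, 0.803e-3, 0.422e-3])    # round-trip times (s)

      ranges = wave_speed * echo_delays / 2.0                   # out-and-back, so halve

      def residuals(p):
          return np.linalg.norm(robots - p, axis=1) - ranges

      fit = least_squares(residuals, x0=robots.mean(axis=0))
      print("estimated defect position (m):", fit.x.round(2))   # ~(1.2, 0.9) here
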
    Lead author Dr Jie Zhang explained: “There are many robotic systems with integrated ultrasound sensors used for automated inspection of pipelines from their inside to allow the pipeline operator to perform required inspections without stopping the flow of product in the pipeline. However, available systems struggle to cope with varying pipe cross-sections or network complexity, inevitably leading to pipeline disruption during inspection. This makes them suitable for specific inspections of high value assets, such as oil and gas pipelines, but not generally applicable.
    “As the cost of mobile robots has reduced over recent years, it is increasingly possible to deploy multiple robots for a large area inspection. We take the existence of small inspection robots as our starting point, and explore how they can be used for generic monitoring of a structure. This requires inspection strategies, methodologies and assessment procedures that can be integrated with the mobile robots for accurate defect detection and localization that is low cost and efficient.
    “We investigate this problem by considering a network of robots, each with a single omnidirectional guided acoustic wave transducer. This configuration is considered as it is arguably the simplest, with good potential for integration in a low cost platform.”
    The methods employed are generally applicable to other related scenarios and allow the impact of any detection or localization design decisions to be quickly quantified. They could be applied across other materials, pipe geometries, noise levels or guided wave modes, allowing the full range of sensor performance parameters, defect sizes and types, and operating modalities to be explored. The techniques can also be used to assess detection and localization performance for specified inspection parameters, for example, to predict the minimum detectable defect under a specified probability of detection and probability of false alarm.
    The team will now investigate collaboration opportunities with industries to advance current prototypes for actual pipe inspections. This work is funded by the UK’s Engineering and Physical Sciences Research Council (EPSRC) as a part of the Pipebots project. More

  • How do you make a robot smarter? Program it to know what it doesn’t know

    Modern robots know how to sense their environment and respond to language, but what they don’t know is often more important than what they do know. Teaching robots to ask for help is key to making them safer and more efficient.
    Engineers at Princeton University and Google have come up with a new way to teach robots to know when they don’t know. The technique involves quantifying the fuzziness of human language and using that measurement to tell robots when to ask for further directions. Telling a robot to pick up a bowl from a table with only one bowl is fairly clear. But telling a robot to pick up a bowl when there are five bowls on the table generates a much higher degree of uncertainty — and triggers the robot to ask for clarification.
    Because tasks are typically more complex than a simple “pick up a bowl” command, the engineers use large language models (LLMs) — the technology behind tools such as ChatGPT — to gauge uncertainty in complex environments. LLMs give robots powerful capabilities for following human language, but LLM outputs are still frequently unreliable, said Anirudha Majumdar, an assistant professor of mechanical and aerospace engineering at Princeton and the senior author of a study outlining the new method.
    “Blindly following plans generated by an LLM could cause robots to act in an unsafe or untrustworthy manner, and so we need our LLM-based robots to know when they don’t know,” said Majumdar.
    The system also allows a robot’s user to set a target degree of success, which is tied to a particular uncertainty threshold that will lead a robot to ask for help. For example, a user would set a surgical robot to have a much lower error tolerance than a robot that’s cleaning up a living room.
    “We want the robot to ask for enough help such that we reach the level of success that the user wants. But meanwhile, we want to minimize the overall amount of help that the robot needs,” said Allen Ren, a graduate student in mechanical and aerospace engineering at Princeton and the study’s lead author. Ren received a best student paper award for his Nov. 8 presentation at the Conference on Robot Learning in Atlanta. The new method produces high accuracy while reducing the amount of help required by a robot compared to other methods of tackling this issue.
    The researchers tested their method on a simulated robotic arm and on two types of robots at Google facilities in New York City and Mountain View, California, where Ren was working as a student research intern. One set of hardware experiments used a tabletop robotic arm tasked with sorting a set of toy food items into two different categories; a setup with a left and right arm added an additional layer of ambiguity.

    The most complex experiments involved a robotic arm mounted on a wheeled platform and placed in an office kitchen with a microwave and a set of recycling, compost and trash bins. In one example, a human asks the robot to “place the bowl in the microwave,” but there are two bowls on the counter — a metal one and a plastic one.
    The robot’s LLM-based planner generates four possible actions to carry out based on this instruction, like multiple-choice answers, and each option is assigned a probability. Using a statistical approach called conformal prediction and a user-specified guaranteed success rate, the researchers designed their algorithm to trigger a request for human help when the options meet a certain probability threshold. In this case, the top two options — place the plastic bowl in the microwave or place the metal bowl in the microwave — meet this threshold, and the robot asks the human which bowl to place in the microwave.
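    A minimal sketch of that trigger logic (not the authors’ released implementation) is shown below: a score threshold is calibrated from held-out examples so that the prediction set contains the correct action at a user-chosen success rate, and the robot asks for help whenever more than one action survives the threshold; the calibration scores and option probabilities are made-up numbers.

      # Minimal sketch of the conformal "ask for help" trigger described above (not
      # the authors' released code). Calibration data and test-time option
      # probabilities are made-up illustrative numbers.
      import numpy as np

      # Calibration: probability the planner assigned to the correct option on
      # held-out tasks; the nonconformity score is 1 - p(correct).
      p_correct = np.array([0.95, 0.90, 0.88, 0.85, 0.80, 0.74, 0.66, 0.40, 0.30])
      scores = 1.0 - p_correct

      alpha = 0.2                                    # user target: at least 80% task success
      n = len(scores)
      rank = int(np.ceil((n + 1) * (1 - alpha)))     # finite-sample conformal rank
      q_hat = np.sort(scores)[min(rank, n) - 1]      # calibrated score threshold

      # Test time: the LLM planner scores its candidate actions for "place the bowl
      # in the microwave" when two bowls are on the counter.
      options = {
          "place the plastic bowl in the microwave": 0.48,
          "place the metal bowl in the microwave": 0.42,
          "place a bowl in the compost bin": 0.07,
          "do nothing": 0.03,
      }
      prediction_set = [o for o, p in options.items() if 1.0 - p <= q_hat]

      if len(prediction_set) != 1:
          print("Ambiguous -- ask the human to choose among:", prediction_set)
      else:
          print("Confident -- execute:", prediction_set[0])
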
    In another example, a person tells the robot, “There is an apple and a dirty sponge … It is rotten. Can you dispose of it?” This does not trigger a question from the robot, since the action “put the apple in the compost” has a sufficiently higher probability of being correct than any other option.
    “Using the technique of conformal prediction, which quantifies the language model’s uncertainty in a more rigorous way than prior methods, allows us to get to a higher level of success” while minimizing the frequency of triggering help, said Majumdar.
    Robots’ physical limitations often give designers insights not readily available from abstract systems. Large language models “might talk their way out of a conversation, but they can’t skip gravity,” said coauthor Andy Zeng, a research scientist at Google DeepMind. “I’m always keen on seeing what we can do on robots first, because it often sheds light on the core challenges behind building generally intelligent machines.”
    Ren and Majumdar began collaborating with Zeng after he gave a talk as part of the Princeton Robotics Seminar series, said Majumdar. Zeng, who earned a computer science Ph.D. from Princeton in 2019, outlined Google’s efforts in using LLMs for robotics, and brought up some open challenges. Ren’s enthusiasm for the problem of calibrating the level of help a robot should ask for led to his internship and the creation of the new method.
    “We enjoyed being able to leverage the scale that Google has” in terms of access to large language models and different hardware platforms, said Majumdar.
    Ren is now extending this work to problems of active perception for robots: For instance, a robot may need to use predictions to determine the location of a television, table or chair within a house, when the robot itself is in a different part of the house. This requires a planner based on a model that combines vision and language information, bringing up a new set of challenges in estimating uncertainty and determining when to trigger help, said Ren. More

  • Researchers engineer a material that can perform different tasks depending on temperature

    Researchers report that they have developed a new composite material designed to change behaviors depending on temperature in order to perform specific tasks. These materials are poised to be part of the next generation of autonomous robotics that will interact with the environment.
    The new study conducted by University of Illinois Urbana-Champaign civil and environmental engineering professor Shelly Zhang and graduate student Weichen Li, in collaboration with professor Tian Chen and graduate student Yue Wang from the University of Houston, uses computer algorithms, two distinct polymers and 3D printing to reverse engineer a material that expands and contracts in response to temperature change with or without human intervention.
    The study findings are reported in the journal Science Advances.
    “Creating a material or device that will respond in specific ways depending on its environment is very challenging to conceptualize using human intuition alone — there are just so many design possibilities out there,” Zhang said. “So, instead, we decided to work with a computer algorithm to help us determine the best combination of materials and geometry.”
    The team first used computer modeling to conceptualize a two-polymer composite that can behave differently under various temperatures based on user input or autonomous sensing.
    “For this study, we developed a material that can behave like soft rubber at low temperatures and like a stiff plastic at high temperatures,” Zhang said.
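    The summary does not give the study’s underlying material model; as a purely illustrative caricature of the behavior Zhang describes, the sketch below switches an effective stiffness smoothly from a soft, rubber-like modulus at low temperature to a stiff, plastic-like modulus at high temperature, using generic assumed values.

      # Purely illustrative caricature (not the paper's model): an effective
      # stiffness that switches smoothly from a soft, rubber-like modulus at low
      # temperature to a stiff, plastic-like modulus at high temperature, matching
      # the behavior described in the quote. All values are generic assumptions.
      import numpy as np

      E_SOFT = 2.0e6       # Pa, rubber-like modulus (assumed)
      E_STIFF = 2.0e9      # Pa, plastic-like modulus (assumed)
      T_SWITCH = 50.0      # degC, assumed transition temperature
      WIDTH = 5.0          # degC, sharpness of the transition

      def effective_modulus(temp_c):
          s = 1.0 / (1.0 + np.exp(-(temp_c - T_SWITCH) / WIDTH))  # 0 -> 1 across the switch
          return E_SOFT * (E_STIFF / E_SOFT) ** s                  # log-interpolated stiffness

      for t in (20, 40, 60, 80):
          print(f"{t:3d} degC -> effective modulus ~ {effective_modulus(t):.1e} Pa")
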
    Once fabricated into a tangible device, the team tested the new composite material’s ability to respond to temperature changes to perform a simple task — switch on LED lights.

    “Our study demonstrates that it is possible to engineer a material with intelligent temperature sensing capabilities, and we envision this being very useful in robotics,” Zhang said. “For example, if a robot’s carrying capacity needs to change when the temperature changes, the material will ‘know’ to adapt its physical behavior to stop or perform a different task.”
    Zhang said that one of the hallmarks of the study is the optimization process that helps the researchers interpolate the distribution and geometries of the two different polymer materials needed.
    “Our next goal is to use this technique to add another level of complexity to a material’s programmed or autonomous behavior, such as the ability to sense the velocity of some sort of impact from another object,” she said. “This will be critical for robotics materials to know how to respond to various hazards in the field.”
    The National Science Foundation supported this research. More