More stories

  • Large language models can’t effectively recognize users’ motivation, but can support behavior change for those ready to act

    Large language model-based chatbots have the potential to promote healthy changes in behavior. But researchers from the ACTION Lab at the University of Illinois Urbana-Champaign have found that the artificial intelligence tools don’t effectively recognize certain motivational states of users and therefore don’t provide them with appropriate information.
    Michelle Bak, a doctoral student in information sciences, and information sciences professor Jessie Chin reported their research in the Journal of the American Medical Informatics Association.
    Large language model-based chatbots — also known as generative conversational agents — have been used increasingly in healthcare for patient education, assessment and management. Bak and Chin wanted to know if they also could be useful for promoting behavior change.
    Chin said previous studies showed that existing algorithms did not accurately identify various stages of users’ motivation. She and Bak designed a study to test how well large language models, which underpin such chatbots, identify motivational states and provide appropriate information to support behavior change.
    They evaluated the large language models behind ChatGPT, Google Bard and Llama 2 on a series of 25 scenarios they designed, each targeting a health need such as low physical activity, diet and nutrition concerns, mental health challenges, cancer screening and diagnosis, sexually transmitted disease or substance dependency.
    In the scenarios, the researchers used each of the five motivational stages of behavior change: resistance to change and lacking awareness of problem behavior; increased awareness of problem behavior but ambivalent about making changes; intention to take action with small steps toward change; initiation of behavior change with a commitment to maintain it; and successfully sustaining the behavior change for six months with a commitment to maintain it.
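    To make the setup concrete, a sketch like the following (purely illustrative, not the authors’ protocol) shows how an evaluation might ask a chatbot to place a user’s message into one of those five stages and then tailor its reply. Here call_llm() is a hypothetical placeholder for whichever model is being tested, and the stage wording mirrors the descriptions above.

```python
# Hypothetical evaluation probe: classify a user's stage of change, then ask
# the model for stage-appropriate advice. call_llm() is a placeholder, not a
# real API; the stage descriptions mirror the five stages described above.

STAGES = [
    "resistant to change and lacking awareness of the problem behavior",
    "aware of the problem behavior but ambivalent about making changes",
    "intending to take action, with small steps toward change",
    "initiating the behavior change with a commitment to maintain it",
    "sustaining the behavior change for six months with a commitment to maintain it",
]

def call_llm(prompt: str) -> str:
    """Placeholder: wire this up to whichever chat model is being evaluated."""
    raise NotImplementedError

def classify_stage(user_message: str) -> str:
    prompt = (
        "Which stage of behavior change best describes this user? "
        "Answer with exactly one of:\n"
        + "\n".join(f"- {s}" for s in STAGES)
        + f"\n\nUser message: {user_message}"
    )
    return call_llm(prompt)

def respond(user_message: str) -> str:
    stage = classify_stage(user_message)
    return call_llm(
        f"The user is at this stage of behavior change: {stage}. "
        f"Give information appropriate to that stage.\nUser message: {user_message}"
    )
```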
    The study found that large language models can identify motivational states and provide relevant information when a user has established goals and a commitment to take action. However, in the initial stages, when users are hesitant or ambivalent about behavior change, the chatbots were unable to recognize those motivational states or provide appropriate information to guide users to the next stage of change.

    Chin said that language models don’t detect motivation well because they are trained to represent the relevance of a user’s language, but they don’t understand the difference between a user who is thinking about a change but is still hesitant and a user who has the intention to take action. Additionally, she said, the way users generate queries is not semantically different for the different stages of motivation, so it’s not obvious from the language what their motivational states are.
    “Once a person knows they want to start changing their behavior, large language models can provide the right information. But if they say, ‘I’m thinking about a change. I have intentions but I’m not ready to start action,’ that is the state where large language models can’t understand the difference,” Chin said.
    The study also found that when people were resistant to habit change, the large language models failed to provide information that would help them evaluate their problem behavior, its causes and consequences, and how their environment influenced it. For example, if someone is resistant to increasing their level of physical activity, information that helps them weigh the negative consequences of a sedentary lifestyle is more likely to motivate them through emotional engagement than information about joining a gym. Without information that engaged with users’ motivations, the language models failed to generate a sense of readiness and the emotional impetus to progress with behavior change, Bak and Chin reported.
    Once a user decided to take action, the large language models provided adequate information to help them move toward their goals. Those who had already taken steps to change their behaviors received information about replacing problem behaviors with desired health behaviors and seeking support from others, the study found.
    However, the large language models didn’t provide information to those users who were already working to change their behaviors about using a reward system to maintain motivation or about reducing the stimuli in their environment that might increase the risk of a relapse of the problem behavior, the researchers found.
    “The large language model-based chatbots provide resources on getting external help, such as social support. They’re lacking information on how to control the environment to eliminate a stimulus that reinforces problem behavior,” Bak said.
    Large language models “are not ready to recognize the motivation states from natural language conversations, but have the potential to provide support on behavior change when people have strong motivations and readiness to take actions,” the researchers wrote.
    Chin said future studies will consider how to fine-tune large language models to use linguistic cues, information search patterns and social determinants of health to better understand users’ motivational states, as well as providing the models with more specific knowledge for helping people change their behaviors.

  • Scientists use generative AI to answer complex questions in physics

    When water freezes, it transitions from a liquid phase to a solid phase, resulting in a drastic change in properties like density and volume. Phase transitions in water are so common most of us probably don’t even think about them, but phase transitions in novel materials or complex physical systems are an important area of study.
    To fully understand these systems, scientists must be able to recognize phases and detect the transitions between them. But how to quantify phase changes in an unknown system is often unclear, especially when data are scarce.
    Researchers from MIT and the University of Basel in Switzerland applied generative artificial intelligence models to this problem, developing a new machine-learning framework that can automatically map out phase diagrams for novel physical systems.
    Their physics-informed machine-learning approach is more efficient than laborious, manual techniques that rely on theoretical expertise. Importantly, because it leverages generative models, the approach does not require the huge labeled training datasets used in other machine-learning techniques.
    Such a framework could help scientists investigate the thermodynamic properties of novel materials or detect entanglement in quantum systems, for instance. Ultimately, this technique could make it possible for scientists to discover unknown phases of matter autonomously.
    “If you have a new system with fully unknown properties, how would you choose which observable quantity to study? The hope, at least with data-driven tools, is that you could scan large new systems in an automated way, and it will point you to important changes in the system. This might be a tool in the pipeline of automated scientific discovery of new, exotic properties of phases,” says Frank Schäfer, a postdoc in the Julia Lab in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-author of a paper on this approach.
    Joining Schäfer on the paper are first author Julian Arnold, a graduate student at the University of Basel; Alan Edelman, applied mathematics professor in the Department of Mathematics and leader of the Julia Lab; and senior author Christoph Bruder, professor in the Department of Physics at the University of Basel. The research is published today in Physical Review Letters.
    Detecting phase transitions using AI

    While water transitioning to ice might be among the most obvious examples of a phase change, more exotic phase changes, like when a material transitions from being a normal conductor to a superconductor, are of keen interest to scientists.
    These transitions can be detected by identifying an “order parameter,” a quantity that is important and expected to change. For instance, water freezes and transitions to a solid phase (ice) when its temperature drops below 0 degrees Celsius. In this case, an appropriate order parameter could be defined in terms of the proportion of water molecules that are part of the crystalline lattice versus those that remain in a disordered state.
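    In symbols (an illustration consistent with that description, not notation taken from the paper), such an order parameter is just the crystalline fraction,

$$\phi = \frac{N_{\mathrm{crystal}}}{N_{\mathrm{total}}}, \qquad \phi \approx 0 \ \text{(liquid water)}, \qquad \phi \approx 1 \ \text{(ice)},$$

    and the freezing transition shows up as a sharp change in $\phi$ as the temperature crosses 0 degrees Celsius.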
    In the past, researchers have relied on physics expertise to build phase diagrams manually, drawing on theoretical understanding to know which order parameters are important. Not only is this tedious for complex systems, and perhaps impossible for unknown systems with new behaviors, but it also introduces human bias into the solution.
    More recently, researchers have begun using machine learning to build discriminative classifiers that can solve this task by learning to classify a measurement statistic as coming from a particular phase of the physical system, the same way such models classify an image as a cat or dog.
    The MIT researchers demonstrated how generative models can be used to solve this classification task much more efficiently, and in a physics-informed manner.
    The Julia Programming Language, a popular language for scientific computing that is also used in MIT’s introductory linear algebra classes, offers many tools that make it invaluable for constructing such generative models, Schäfer adds.

    Generative models, like those that underlie ChatGPT and Dall-E, typically work by estimating the probability distribution of some data, which they use to generate new data points that fit the distribution (such as new cat images that are similar to existing cat images).
    However, when simulations of a physical system using tried-and-true scientific techniques are available, researchers get a model of its probability distribution for free. This distribution describes the measurement statistics of the physical system.
    A more knowledgeable model
    The MIT team’s insight is that this probability distribution also defines a generative model upon which a classifier can be constructed. They plug the generative model into standard statistical formulas to directly construct a classifier instead of learning it from samples, as was done with discriminative approaches.
    “This is a really nice way of incorporating something you know about your physical system deep inside your machine-learning scheme. It goes far beyond just performing feature engineering on your data samples or simple inductive biases,” Schäfer says.
    This generative classifier can determine what phase the system is in given some parameter, like temperature or pressure. And because the researchers directly approximate the probability distributions underlying measurements from the physical system, the classifier has system knowledge.
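    As a rough illustration of that idea (a minimal sketch, not the authors’ Julia implementation): if a simulation supplies the measurement distribution p(x | γ) for each candidate parameter value γ, Bayes’ rule turns those likelihoods directly into a classifier, with no training step. The Gaussian likelihood and the parameter values below are stand-ins for what a real simulation would provide.

```python
import numpy as np

# Minimal sketch of a "generative classifier" built from known likelihoods.
# likelihood() is a stand-in for the measurement distribution p(x | gamma)
# that a physics simulation would provide; the Gaussian form and the numbers
# below are purely illustrative.

def likelihood(x, gamma):
    """Stand-in for p(x | gamma) supplied by a simulation of the system."""
    mean, std = gamma
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

def posterior(x, gamma_a, gamma_b, prior_a=0.5):
    """Bayes' rule: probability that measurement x came from parameter gamma_a."""
    pa = likelihood(x, gamma_a) * prior_a
    pb = likelihood(x, gamma_b) * (1.0 - prior_a)
    return pa / (pa + pb)

# "Was this sample generated at low temperature or high temperature?"
gamma_low, gamma_high = (0.0, 1.0), (3.0, 1.0)   # hypothetical parameter values
print(posterior(x=2.4, gamma_a=gamma_low, gamma_b=gamma_high))
```

    In a scheme like this, a phase boundary would show up where the classifier’s answer flips sharply as the parameter is varied.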
    This enables their method to perform better than other machine-learning techniques. And because it can work automatically without the need for extensive training, their approach significantly enhances the computational efficiency of identifying phase transitions.
    At the end of the day, similar to how one might ask ChatGPT to solve a math problem, the researchers can ask the generative classifier questions like “does this sample belong to phase I or phase II?” or “was this sample generated at high temperature or low temperature?”
    Scientists could also use this approach to solve different binary classification tasks in physical systems, possibly to detect entanglement in quantum systems (Is the state entangled or not?) or determine whether theory A or B is best suited to solve a particular problem. They could also use this approach to better understand and improve large language models like ChatGPT by identifying how certain parameters should be tuned so the chatbot gives the best outputs.
    In the future, the researchers also want to study theoretical guarantees regarding how many measurements they would need to effectively detect phase transitions, and to estimate how much computation that would require.
    This work was funded, in part, by the Swiss National Science Foundation, the MIT-Switzerland Lockheed Martin Seed Fund, and MIT International Science and Technology Initiatives.

  • Researchers wrestle with accuracy of AI technology used to create new drug candidates

    Artificial intelligence (AI) has numerous applications in healthcare, from analyzing medical imaging to optimizing the execution of clinical trials, and even facilitating drug discovery.
    AlphaFold2, an artificial intelligence system that predicts protein structures, has made it possible for scientists to identify and conjure an almost infinite number of drug candidates for the treatment of neuropsychiatric disorders. However, recent studies have cast doubt on the accuracy of AlphaFold2 in modeling ligand binding sites, the areas on proteins where drugs attach and begin signaling inside cells to cause a therapeutic effect, as well as possible side effects.
    In a new paper, Bryan Roth, MD, PhD, the Michael Hooker Distinguished Professor of Pharmacology and director of the NIMH Psychoactive Drug Screening Program at the University of North Carolina School of Medicine, and colleagues at UCSF, Stanford and Harvard determined that AlphaFold2 can yield accurate results for ligand binding structures, even when the technology has nothing to go on. Their results were published in Science.
    “Our results suggest that AF2 structures can be useful for drug discovery,” said Roth, senior author who holds a joint appointment at the UNC Eshelman School of Pharmacy. “With a nearly infinite number of possibilities to create drugs that hit their intended target to treat a disease, this sort of AI tool can be invaluable.”
    AlphaFold2 and Prospective Modeling
    Much like weather forecasting or stock market prediction, AlphaFold2 works by pulling from a massive database of known proteins to create models of protein structures. Then, it can simulate how different molecular compounds (like drug candidates) fit into the protein’s binding sites and produce wanted effects. Researchers can use the resulting combinations to better understand protein interactions and create new drug candidates.
    To determine the accuracy of AlphaFold2, the researchers had to compare the results of a retrospective study against those of a prospective study. In a retrospective study, researchers feed the prediction software compounds they already know bind to the receptor, whereas a prospective study uses the technology as a fresh slate, feeding the AI platform compounds that may or may not interact with the receptor.

    Researchers used two proteins, sigma-2 and 5-HT2A, for the study. These proteins, which belong to two different protein families, are important in cell communication and have been implicated in neuropsychiatric conditions such as Alzheimer’s disease and schizophrenia. The 5-HT2A serotonin receptor is also the main target for psychedelic drugs, which show promise for treating a large number of neuropsychiatric disorders.
    Roth and colleagues selected these proteins because AlphaFold2 had no prior information about sigma-2 and 5-HT2A or the compounds that might bind to them. In other words, the technology was given two proteins it had not been trained on, handing the researchers a “blank slate.”
    First, the researchers fed the AlphaFold system the protein structures for sigma-2 and 5-HT2A, creating a prediction model. They then accessed physical models of the two proteins that had been produced using complex microscopy and X-ray crystallography techniques. With the press of a button, as many as 1.6 billion potential drugs were screened against the experimental models and the AlphaFold2 models. Interestingly, each model yielded different drug candidates.
    Successful Hit Rates
    Despite the models having differing results, they show great promise for drug discovery. The researchers determined that the proportion of compounds that actually altered protein activity was around 50% for the sigma-2 receptor and 20% for the 5-HT2A receptor. A result greater than 5% is exceptional.
    Out of the hundreds of millions of potential combinations, 54% of the compounds that the sigma-2 AlphaFold2 models predicted would bind went on to activate the protein. The experimental model for sigma-2 produced similar results, with a success rate of 51%.
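    For clarity, the hit-rate arithmetic behind those figures is simply the fraction of experimentally tested, model-predicted binders that actually altered the protein’s activity; the counts below are hypothetical and serve only to show the calculation against the 5% benchmark.

```python
def hit_rate(active: int, tested: int) -> float:
    """Fraction of tested candidate compounds that actually altered protein activity."""
    return active / tested

# Hypothetical counts: 54 of 100 tested candidates work out, versus the ~5%
# rate that would already be considered exceptional for a virtual screen.
print(f"{hit_rate(active=54, tested=100):.0%} vs. 5% benchmark")
```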
    “This work would be impossible without collaborations among several leading experts at UCSF, Stanford, Harvard, and UNC-Chapel Hill,” Roth said. “Going forward we will test whether these results might be applicable to other therapeutic targets and target classes.”

  • Building a better sarcasm detector

    Oscar Wilde once said that sarcasm was the lowest form of wit, but the highest form of intelligence. Perhaps that is due to how difficult it is to use and understand. Sarcasm is notoriously tricky to convey through text — even in person, it can be easily misinterpreted. The subtle changes in tone that convey sarcasm often confuse computer algorithms as well, limiting virtual assistants and content analysis tools.
    Xiyuan Gao, Shekhar Nayak, and Matt Coler of the Speech Technology Lab at the University of Groningen, Campus Fryslân, developed a multimodal algorithm for improved sarcasm detection that examines multiple aspects of audio recordings for increased accuracy. Gao will present their work Thursday, May 16, as part of a joint meeting of the Acoustical Society of America and the Canadian Acoustical Association, running May 13-17 at the Shaw Centre located in downtown Ottawa, Ontario, Canada.
    Traditional sarcasm detection algorithms often rely on a single parameter to produce their results, which is the main reason they often fall short. Gao, Nayak, and Coler instead used two complementary approaches — sentiment analysis using text and emotion recognition using audio — for a more complete picture.
    “We extracted acoustic parameters such as pitch, speaking rate, and energy from speech, then used Automatic Speech Recognition to transcribe the speech into text for sentiment analysis,” said Gao. “Next, we assigned emoticons to each speech segment, reflecting its emotional content. By integrating these multimodal cues into a machine learning algorithm, our approach leverages the combined strengths of auditory and textual information along with emoticons for a comprehensive analysis.”
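    A rough sketch of that kind of pipeline is shown below (an illustration under stated assumptions, not the team’s code): acoustic cues are pulled from the audio, the transcript’s sentiment is added as an extra feature, and a standard classifier is trained on the combined vector. Here transcribe() and sentiment_score() are hypothetical stand-ins for an ASR system and a text sentiment model.

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def transcribe(path: str) -> str:
    """Hypothetical stand-in for an automatic speech recognition system."""
    raise NotImplementedError

def sentiment_score(text: str) -> float:
    """Hypothetical stand-in for a text sentiment model (e.g., -1 to 1)."""
    raise NotImplementedError

def acoustic_features(path: str) -> np.ndarray:
    """Pitch, energy, and a rough speaking-rate proxy for one utterance."""
    y, sr = librosa.load(path, sr=None)
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)      # pitch contour
    energy = librosa.feature.rms(y=y)[0]               # frame-level energy
    onsets = librosa.onset.onset_detect(y=y, sr=sr)    # crude speaking-rate proxy
    rate = len(onsets) / (len(y) / sr)
    return np.array([f0.mean(), f0.std(), energy.mean(), energy.std(), rate])

def utterance_vector(path: str) -> np.ndarray:
    """Combine auditory and textual cues into one feature vector."""
    return np.append(acoustic_features(path), sentiment_score(transcribe(path)))

# With labeled utterances (1 = sarcastic, 0 = sincere), fit any standard model:
# X = np.stack([utterance_vector(p) for p in audio_paths])
# clf = LogisticRegression(max_iter=1000).fit(X, labels)
```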
    The team is optimistic about the performance of their algorithm, but they are already looking for ways to improve it further.
    “There are a range of expressions and gestures people use to highlight sarcastic elements in speech,” said Gao. “These need to be better integrated into our project. In addition, we would like to include more languages and adopt developing sarcasm recognition techniques.”
    This approach can be used for more than identifying a dry wit. The researchers highlight that this technique can be widely applied in many fields.
    “The development of sarcasm recognition technology can benefit other research domains using sentiment analysis and emotion recognition,” said Gao. “Traditionally, sentiment analysis mainly focuses on text and is developed for applications such as online hate speech detection and customer opinion mining. Emotion recognition based on speech can be applied to AI-assisted health care. Sarcasm recognition technology that applies a multimodal approach is insightful to these research domains.”

  • To optimize guide-dog robots, first listen to the visually impaired

    What features does a robotic guide dog need? Ask the blind, say the authors of an award-winning paper. Led by researchers at the University of Massachusetts Amherst, a study identifying how to develop robot guide dogs with insights from guide dog users and trainers won a Best Paper Award at CHI 2024: Conference on Human Factors in Computing Systems (CHI).
    Guide dogs enable remarkable autonomy and mobility for their handlers. However, only a fraction of people with visual impairments have one of these companions. The barriers include the scarcity of trained dogs, cost (which is $40,000 for training alone), allergies and other physical limitations that preclude caring for a dog.
    Robots have the potential to step in where canines can’t and address a truly gaping need — if designers can get the features right.
    “We’re not the first ones to develop guide-dog robots,” says Donghyun Kim, assistant professor in the UMass Amherst Manning College of Information and Computer Science (CICS) and one of the corresponding authors of the award-winning paper. “There are 40 years of study there, and none of these robots are actually used by end users. We tried to tackle that problem first so that, before we develop the technology, we understand how they use the animal guide dog and what technology they are waiting for.”
    The research team conducted semistructured interviews and observation sessions with 23 visually impaired dog-guide handlers and five trainers. Through thematic analysis, they distilled the current limitations of canine guide dogs, the traits handlers are looking for in an effective guide and considerations to make for future robotic guide dogs.
    One of the more nuanced themes that came from these interviews was the delicate balance between robot autonomy and human control. “Originally, we thought that we were developing an autonomous driving car,” says Kim. They envisioned that the user would tell the robot where they want to go and the robot would navigate autonomously to that location with the user in tow.
    This is not the case.

    The interviews revealed that handlers do not use their dog as a global navigation system. Instead, the handler controls the overall route while the dog is responsible for local obstacle avoidance. However, even this isn’t a hard-and-fast rule. Dogs can also learn routes by habit and may eventually navigate a person to regular destinations without directional commands from the handler.
    “When the handler trusts the dog and gives more autonomy to the dog, it’s a bit delicate,” says Kim. “We cannot just make a robot that is fully passive, just following the handler, or just fully autonomous, because then [the handler] feels unsafe.”
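    That division of labor suggests a shared-control loop along these lines (a hypothetical sketch prompted by the description above, not a design from the paper): the handler supplies the global heading while the robot adds a bounded local correction to steer around nearby obstacles.

```python
import numpy as np

# Hypothetical shared-control sketch: the handler sets the global heading,
# while the robot applies a local correction to avoid nearby obstacles.
# Parameter names and the blending rule are illustrative, not from the paper.

def avoidance_correction(obstacles, max_range=2.0):
    """Steer away from the nearest obstacle within range (angles in radians)."""
    if not obstacles:
        return 0.0
    bearing, dist = min(obstacles, key=lambda o: o[1])
    if dist > max_range:
        return 0.0
    # Turn away from the obstacle, more sharply the closer it is.
    return -np.sign(bearing) * (max_range - dist) / max_range

def commanded_heading(handler_heading, obstacles, autonomy=0.5):
    """Blend the handler's heading with the robot's local correction."""
    return handler_heading + autonomy * avoidance_correction(obstacles)

# Example: handler commands straight ahead (0 rad); an obstacle 1 m away,
# slightly to the left (positive bearing), nudges the robot to the right.
print(commanded_heading(0.0, [(+0.3, 1.0)]))
```

    The "autonomy" weight is the knob the interviews point to: too low and the robot is merely passive, too high and the handler loses the sense of control they rely on.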
    The researchers hope this paper will serve as a guide, not only in Kim’s lab, but for other robot developers as well. “In this paper, we also give directions on how we should develop these robots to make them actually deployable in the real world,” says Hochul Hwang, first author on the paper and a doctoral candidate in Kim’s robotics lab.
    For instance, he says that a two-hour battery life is an important feature for commuting, which can be an hour on its own. “About 90% of the people mentioned the battery life,” he says. “This is a critical part when designing hardware because the current quadruped robots don’t last for two hours.”
    These are just a few of the findings in the paper. Others include: adding more camera orientations to help address overhead obstacles; adding audio sensors for hazards approaching from occluded regions; understanding that on a sidewalk the cue “go straight” means follow the street, not travel in a perfectly straight line; and helping users get on the right bus (and then find a seat as well).
    The researchers say this paper is a great starting point, adding that there is even more information to unpack from their 2,000 minutes of audio and 240 minutes of video data.

    Winning the Best Paper Award was a distinction that put the work in the top 1% of all papers submitted to the conference.
    “The most exciting aspect of winning this award is that the research community recognizes and values our direction,” says Kim. “Since we don’t believe that guide dog robots will be available to individuals with visual impairments within a year, nor that we’ll solve every problem, we hope this paper inspires a broad range of robotics and human-robot interaction researchers, helping our vision come to fruition sooner.”
    Other researchers who contributed to the paper include:
    Ivan Lee, associate professor in CICS and a co-corresponding author of the article along with Kim, an expert in adaptive technologies and human-centered design; Joydeep Biswas, associate professor at the University of Texas at Austin, who brought his experience in creating artificial intelligence (AI) algorithms that allow robots to navigate through unstructured environments; Hee Tae Jung, assistant professor at Indiana University, who brought his expertise in human factors and qualitative research to participatory studies with people with chronic conditions; and Nicholas Giudice, a professor at the University of Maine who is blind and provided valuable insight and interpretation of the interviews.
    Ultimately, Kim understands that robotics can do the most good when scientists remember the human element. “My Ph.D. and postdoctoral research is all about how to make these robots work better,” Kim adds. “We tried to find [an application that is] practical and something meaningful for humanity.”

  • Automated news video production is better with a human touch

    AI-generated short news videos are only as well received as manually created ones if they are edited by humans.
    News organizations — including Bloomberg, Reuters, and The Economist — have been using AI-powered video services to meet growing audience demand for audio-visual material. A study recently published in the journal Journalism now shows that the automated production of news videos is better with human supervision.
    Technology providers like Wochit and Moovly allow publishers to produce videos at scale. But what do audiences think of the results? Researchers led by LMU communication scientist Professor Neil Thurman have found that only automated videos that had been post-edited by humans were as well liked as fully human-made videos.
    “Our research shows that, on average, news consumers liked short-form, automated news videos as much as manually made ones, as long as the automation process involved human supervision,” says Neil Thurman, from LMU’s Department of Media and Communication.
    Together with Dr. Sally Stares (London School of Economics) and Dr. Michael Koliska (Georgetown University), Thurman evaluated the reactions of 4,200 UK news consumers to human-made, highly-automated, and partly-automated videos that covered a variety of topics including Cristiano Ronaldo, Donald Trump, and the Wimbledon tennis championships. The partly-automated videos were post-edited by humans after the initial automation process.
    The results show that there were no significant differences in how much news audiences liked the human-made and partly-automated videos overall. By contrast, highly-automated videos were liked significantly less. In other words, the results show that news video automation is better with human supervision.
    According to Thurman, “one key takeaway of the study is that video automation output may be best when it comes in a hybrid form, meaning a human-machine collaboration. Such hybridity involves more human supervision, ensuring that automated video production maintains quality standards while taking advantage of computers’ strengths, such as speed and scale.”

  • Jet-propelled sea creatures could improve ocean robotics

    Scientists at the University of Oregon have discovered that colonies of gelatinous sea animals swim through the ocean in giant corkscrew shapes using coordinated jet propulsion, an unusual kind of locomotion that could inspire new designs for efficient underwater vehicles.
    The research involves salps, small jellyfish-like creatures that make a nightly journey from the depths of the ocean to the surface. Observing that migration with special cameras helped UO researchers and their colleagues capture the macroplankton’s graceful, coordinated swimming behavior.
    “The largest migration on the planet happens every single night: the vertical migration of planktonic organisms from the deep sea to the surface,” said Kelly Sutherland, an associate professor in biology at the UO’s Oregon Institute of Marine Biology, who led the research. “They’re running a marathon every day using novel fluid mechanics. These organisms can be platforms for inspiration on how to build robots that efficiently traverse the deep sea.”
    The researchers’ findings were published May 15 in the journal Science Advances. The study included collaborations from Louisiana Universities Marine Consortium, University of South Florida, Roger Williams University, Marine Biological Laboratory and Providence College.
    Despite looking similar to jellyfish, salps are barrel-shaped, watery macroplankton that are more closely related to vertebrates like fish and humans, said Alejandro Damian-Serrano, an adjunct professor in biology at the UO. They live far from shore and can live either as solitary individuals or operate in colonies, he said. Colonies consist of hundreds of individuals linked in chains that can be up to several meters long.
    “Salps are really weird animals,” Damian-Serrano said. “While their common ancestor with us probably looked like a little boneless fish, their lineage lost a lot of those features and magnified others. The solitary individuals behave like this mothership that asexually breeds a chain of individual clones, conjoined together to produce a colony.”
    But the most distinctive thing about these ocean creatures was discovered during the researchers’ ocean expeditions: their swimming techniques.

    Exploring off the coast of Kailua-Kona, Hawaii, Sutherland and her team developed specialized 3D camera systems to bring their lab underwater. They conducted daytime scuba dives, “immersed in infinite blue,” as Damian-Serrano described, for high visibility investigations.
    They also performed nighttime dives, when the black backdrop allowed for high-contrast imaging of the transparent critters. They encountered an immense flurry of different salps that were doing their nightly migration to the surface — and many photobombing sharks, squids and crustaceans, Sutherland noted.
    Through imaging and recordings, the researchers noticed two modes of swimming. Where shorter colonies spun around an axis, like a spiraling football, longer chains would buckle and coil like a corkscrew. That’s called helical swimming.
    Helical swimming is nothing new in biology, Sutherland said. Many microorganisms also spin and corkscrew through water, but the mechanisms behind the salps’ motion are different. Microbes beat water with hair-like projections or whip-like tails, but salps swim via jet propulsion, Sutherland said. They have contracting muscle bands, like those in the human throat, that suck water in at one end of the body and squirt it out the other to create thrust, Damian-Serrano said.
    The researchers also noticed that individual jets contracted at different times, causing the whole colony to steadily travel without pause. The jets were also angled, contributing to the spinning and coil swimming, Sutherland said.
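    A toy calculation illustrates why staggered jets matter (pulse shape and numbers are illustrative, not measurements from the study): summing many jets that pulse out of phase yields a nearly steady total thrust, whereas a single jet, or jets firing in unison, produces strong pauses between pulses.

```python
import numpy as np

# Hypothetical illustration: when many jets pulse out of phase, the colony's
# total thrust stays nearly steady; jets pulsing in unison leave gaps.

def jet_thrust(t, period=1.0, phase=0.0):
    """Thrust of one jet: only the contraction half of each cycle produces thrust."""
    cycle = (t / period + phase) % 1.0
    return np.where(cycle < 0.5, np.sin(2 * np.pi * cycle), 0.0)

t = np.linspace(0, 3, 1000)
n_jets = 10
staggered = sum(jet_thrust(t, phase=i / n_jets) for i in range(n_jets))
in_unison = n_jets * jet_thrust(t)

# The staggered colony's thrust varies far less around its mean.
print("staggered min/max:", round(float(staggered.min()), 2), round(float(staggered.max()), 2))
print("in unison min/max:", round(float(in_unison.min()), 2), round(float(in_unison.max()), 2))
```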
    “My initial reaction was really one of wonder and awe,” she said. “I would describe their motion as snake-like and graceful. They have multiple units pulsing at different times, creating a whole chain that moves very smoothly. It’s a really beautiful way of moving.”
    Microrobots inspired by microbial swimmers already exist, Sutherland said, but this discovery paves the way for engineers to construct larger underwater vehicles. It may be possible to create robots that are silent and less turbulent when modeled after these efficient swimmers, Damian-Serrano said. A multijet design also may be energetically advantageous for saving fuel, he said.
    Beyond microbes, larger organisms like plankton have yet to be described in this way, Sutherland said. With Sutherland’s new and innovative methods of studying sea creatures, scientists might come to realize that helical swimming is more pervasive than previously thought.
    “It’s a study that opens up more questions than provides answers,” Sutherland said. “There’s this new way of swimming that hadn’t been described before, and when we started the study we sought to explain how it works. But we found that there are a lot more open questions, like what are the advantages of swimming this way? How many different organisms spin or corkscrew?”

  • Robotic ‘SuperLimbs’ could help moonwalkers recover from falls

    Need a moment of levity? Try watching videos of astronauts falling on the moon. NASA’s outtakes of Apollo astronauts tripping and stumbling as they bounce in slow motion are delightfully relatable.
    For MIT engineers, the lunar bloopers also highlight an opportunity to innovate.
    “Astronauts are physically very capable, but they can struggle on the moon, where gravity is one-sixth that of Earth’s but their inertia is still the same. Furthermore, wearing a spacesuit is a significant burden and can constrict their movements,” says Harry Asada, professor of mechanical engineering at MIT. “We want to provide a safe way for astronauts to get back on their feet if they fall.”
    Asada and his colleagues are designing a pair of wearable robotic limbs that can physically support an astronaut and lift them back on their feet after a fall. The system, which the researchers have dubbed Supernumerary Robotic Limbs, or “SuperLimbs,” is designed to extend from a backpack, which would also carry the astronaut’s life support system, along with the controller and motors to power the limbs.
    The researchers have built a physical prototype, as well as a control system to direct the limbs, based on feedback from the astronaut using it. The team tested a preliminary version on healthy subjects who also volunteered to wear a constrictive garment similar to an astronaut’s spacesuit. When the volunteers attempted to get up from a sitting or lying position, they did so with less effort when assisted by SuperLimbs, compared to when they had to recover on their own.
    The MIT team envisions that SuperLimbs can physically assist astronauts after a fall and, in the process, help them conserve their energy for other essential tasks. The design could prove especially useful in the coming years, with the launch of NASA’s Artemis mission, which plans to send astronauts back to the moon for the first time in over 50 years. Unlike the largely exploratory mission of Apollo, Artemis astronauts will endeavor to build the first permanent moon base — a physically demanding task that will require multiple extended extravehicular activities (EVAs).
    “During the Apollo era, when astronauts would fall, 80 percent of the time it was when they were doing excavation or some sort of job with a tool,” says team member and MIT doctoral student Erik Ballesteros. “The Artemis missions will really focus on construction and excavation, so the risk of falling is much higher. We think that SuperLimbs can help them recover so they can be more productive, and extend their EVAs.”
    Asada, Ballesteros, and their colleagues will present their design and study this week at the IEEE International Conference on Robotics and Automation (ICRA). Their co-authors include MIT postdoc Sang-Yoep Lee and Kalind Carpenter of the Jet Propulsion Laboratory.

    Taking a stand
    The team’s design is the latest application of SuperLimbs, which Asada first developed about a decade ago and has since adapted for a range of applications, including assisting workers in aircraft manufacturing, construction, and shipbuilding.
    Most recently, Asada and Ballesteros wondered whether SuperLimbs might assist astronauts, particularly as NASA plans to send astronauts back to the surface of the moon.
    “In communications with NASA, we learned that this issue of falling on the moon is a serious risk,” Asada says. “We realized that we could make some modifications to our design to help astronauts recover from falls and carry on with their work.”
    The team first took a step back, to study the ways in which humans naturally recover from a fall. In their new study, they asked several healthy volunteers to attempt to stand upright after lying on their side, front, and back.
    The researchers then looked at how the volunteers’ attempts to stand changed when their movements were constricted, similar to the way astronauts’ movements are limited by the bulk of their spacesuits. The team built a suit to mimic the stiffness of traditional spacesuits, and had volunteers don the suit before again attempting to stand up from various fallen positions. The volunteers’ sequence of movements was similar, though it required much more effort than their unencumbered attempts.

    The team mapped the movements of each volunteer as they stood up, and found that they each carried out a common sequence of motions, moving from one pose, or “waypoint,” to the next, in a predictable order.
    “Those ergonomic experiments helped us to model, in a straightforward way, how a human stands up,” Ballesteros says. “We could postulate that about 80 percent of humans stand up in a similar way. Then we designed a controller around that trajectory.”
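    In the spirit of that description, a waypoint-following assist controller might look something like the sketch below (the poses, gains, and tolerances are hypothetical; this is not the team’s controller): the stand-up motion is stored as a sequence of poses, and the limbs pull the wearer toward the next pose, advancing once the current one is reached.

```python
import numpy as np

# Hypothetical waypoint-based assist controller: a fixed sequence of poses
# ("waypoints") encodes the stand-up motion; the arm nudges the wearer toward
# the next waypoint and advances once it is reached. All numbers illustrative.

WAYPOINTS = np.array([   # hypothetical poses: [torso height, torso pitch]
    [0.2, 1.4],          # lying
    [0.5, 0.9],          # propped on arms
    [0.8, 0.4],          # crouched
    [1.1, 0.0],          # standing
])

def assist_force(pose, target, gain=40.0):
    """Simple proportional pull toward the current target waypoint."""
    return gain * (target - pose)

def step(pose, waypoint_idx, dt=0.01, tol=0.05):
    """Advance one control step; move to the next waypoint once close enough."""
    target = WAYPOINTS[waypoint_idx]
    pose = pose + dt * assist_force(pose, target)
    if np.linalg.norm(target - pose) < tol and waypoint_idx < len(WAYPOINTS) - 1:
        waypoint_idx += 1
    return pose, waypoint_idx

pose, idx = WAYPOINTS[0].copy(), 1
for _ in range(2000):
    pose, idx = step(pose, idx)
print(pose, idx)   # converges toward the "standing" waypoint
```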
    Helping hand
    The team developed software to generate a trajectory for a robot, following a sequence that would help support a human and lift them back on their feet. They applied the controller to a heavy, fixed robotic arm, which they attached to a large backpack. The researchers then attached the backpack to the bulky suit and helped volunteers back into the suit. They asked the volunteers to again lie on their back, front, or side, and then had them attempt to stand as the robot sensed the person’s movements and adapted to help them to their feet.
    Overall, the volunteers were able to stand stably with much less effort when assisted by the robot, compared to when they tried to stand alone while wearing the bulky suit.
    “It feels kind of like an extra force moving with you,” says Ballesteros, who also tried out the suit and arm assist. “Imagine wearing a backpack and someone grabs the top and sort of pulls you up. Over time, it becomes sort of natural.”
    The experiments confirmed that the control system can successfully direct a robot to help a person stand back up after a fall. The researchers plan to pair the control system with their latest version of SuperLimbs, which comprises two multijointed robotic arms that can extend out from a backpack. The backpack would also contain the robot’s battery and motors, along with an astronaut’s ventilation system.
    “We designed these robotic arms based on an AI search and design optimization, to look for designs of classic robot manipulators with certain engineering constraints,” Ballesteros says. “We filtered through many designs and looked for the design that consumes the least amount of energy to lift a person up. This version of SuperLimbs is the product of that process.”
    Over the summer, Ballesteros will build out the full SuperLimbs system at NASA’s Jet Propulsion Laboratory, where he plans to streamline the design and minimize the weight of its parts and motors using advanced, lightweight materials. Then, he hopes to pair the limbs with astronaut suits, and test them in low-gravity simulators, with the goal of someday assisting astronauts on future missions to the moon and Mars.
    “Wearing a spacesuit can be a physical burden,” Asada notes. “Robotic systems can help ease that burden, and help astronauts be more productive during their missions.”
    This research was supported, in part, by NASA.