More stories

  • Allowing robots to explore on their own

    A research group in Carnegie Mellon University’s Robotics Institute is creating the next generation of explorers — robots.
    The Autonomous Exploration Research Team has developed a suite of robotic systems and planners enabling robots to explore more quickly, probe the darkest corners of unknown environments, and create more accurate and detailed maps. The systems allow robots to do all this autonomously, finding their way and creating a map without human intervention.
    “You can set it in any environment, like a department store or a residential building after a disaster, and off it goes,” said Ji Zhang, a systems scientist in the Robotics Institute. “It builds the map in real-time, and while it explores, it figures out where it wants to go next. You can see everything on the map. You don’t even have to step into the space. Just let the robots explore and map the environment.”
    The team has worked on exploration systems for more than three years. They’ve explored and mapped several underground mines, a parking garage, the Cohon University Center, and several other indoor and outdoor locations on the CMU campus. The system’s computers and sensors can be attached to nearly any robotic platform, transforming it into a modern-day explorer. The group uses a modified motorized wheelchair and drones for much of its testing.
    Robots can explore in three modes using the group’s systems. In one mode, a person can control the robot’s movements and direction while autonomous systems keep it from crashing into walls, ceilings or other objects. In another mode, a person can select a point on a map and the robot will navigate to that point. The third mode is pure exploration. The robot sets off on its own, investigates the entire space and creates a map.
    “This is a very flexible system to use in many applications, from delivery to search-and-rescue,” said Howie Choset, a professor in the Robotics Institute.
    The group combined a 3D scanning lidar sensor, forward-looking camera and inertial measurement unit sensors with an exploration algorithm to enable the robot to know where it is, where it has been and where it should go next. The resulting systems are substantially more efficient than previous approaches, creating more complete maps while reducing the algorithm run time by half.
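    The article does not detail the planner itself, but frontier-based exploration is a common baseline for this kind of "where to go next" decision. The sketch below is a minimal, hypothetical illustration (the grid encoding and helper names are assumptions, not the team's code): it marks free cells that border unknown space as frontiers and steers the robot toward the nearest one.

    ```python
    import numpy as np

    UNKNOWN, FREE, OCCUPIED = -1, 0, 1  # assumed occupancy-grid encoding

    def find_frontiers(grid):
        """Free cells that touch unknown space are candidate exploration targets."""
        frontiers = []
        rows, cols = grid.shape
        for r in range(rows):
            for c in range(cols):
                if grid[r, c] != FREE:
                    continue
                window = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
                if (window == UNKNOWN).any():
                    frontiers.append((r, c))
        return frontiers

    def next_goal(grid, robot_pos):
        """Pick the nearest frontier; None means the map is complete."""
        frontiers = find_frontiers(grid)
        if not frontiers:
            return None
        return min(frontiers,
                   key=lambda f: np.hypot(f[0] - robot_pos[0], f[1] - robot_pos[1]))
    ```

    A real system would also plan a collision-free path to the chosen goal and weigh travel cost against expected information gain, but the loop of "update map, pick frontier, move" captures the autonomy described here.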
    The new systems work in low-light, treacherous conditions where communication is spotty, like caves, tunnels and abandoned structures. A version of the group’s exploration system powered Team Explorer, an entry from CMU and Oregon State University in DARPA’s Subterranean Challenge. Team Explorer placed fourth in the final competition but won the Most Sectors Explored Award for mapping more of the route than any other team.
    “All of our work is open-sourced. We are not holding anything back. We want to strengthen society with the capabilities of building autonomous exploration robots,” said Chao Cao, a Ph.D. student in robotics and the lead operator for Team Explorer. “It’s a fundamental capability. Once you have it, you can do a lot more.”
    Video: https://youtu.be/pNtC3Twx_2w

  • Learning from superheroes and AI: Researchers study how a chatbot can teach kids supportive self-talk

    At first, some parents were wary: An audio chatbot was supposed to teach their kids to speak positively to themselves through lessons about a superhero named Zip. In a world of Siri and Alexa, many people are skeptical that the makers of such technologies are putting children’s welfare first.
    Researchers at the University of Washington created a new web app aimed at helping children develop skills like self-awareness and emotional management. In Self-Talk with Superhero Zip, a chatbot guided pairs of siblings through lessons. The UW team found that, after speaking with the app for a week, most children could explain the concept of supportive self-talk (the things people say to themselves, either aloud or mentally) and apply it in their daily lives. And kids who had engaged in negative self-talk before the study were able to turn that habit positive.
    The UW team published its findings in June at the 2023 Interaction Design and Children conference. The app is still a prototype and is not yet publicly available.
    The UW team saw a few reasons to develop an educational chatbot. Positive self-talk has been shown to benefit kids in a range of ways, from improved sports performance to increased self-esteem and a lower risk of depression. And previous studies have shown children can learn various tasks and abilities from chatbots. Yet little research explores how chatbots can help kids effectively acquire socioemotional skills.
    “There is room to design child-centric experiences with a chatbot that provide fun and educational practice opportunities without invasive data harvesting that compromises children’s privacy,” said senior author Alexis Hiniker, an associate professor in the UW Information School. “Over the last few decades, television programs like ‘Sesame Street,’ ‘Mister Rogers,’ and ‘Daniel Tiger’s Neighborhood’ have shown that it is possible for TV to help kids cultivate socioemotional skills. We asked: Can we make a space where kids can practice these skills in an interactive app? We wanted to create something useful and fun — a ‘Sesame Street’ experience for a smart speaker.”
    The UW researchers began with two prototype ideas intended to teach socioemotional skills broadly. After testing, they narrowed the scope, focusing on a superhero named Zip and on teaching supportive self-talk. They decided to test the app on siblings, since research shows that children are more engaged when they use technology with another person.
    Ten pairs of Seattle-area siblings participated in the study. For a week, they opened the app and met an interactive narrator who told them stories about Zip and asked them to reflect on Zip’s encounters with other characters, including a supervillain. During and after the study, kids described applying positive self-talk; several mentioned using it when they were upset or angry.

  • AI-guided brain stimulation aids memory in traumatic brain injury

    Traumatic brain injury (TBI) has disabled 1 to 2% of the population, and one of the most common resulting disabilities is impaired short-term memory. Electrical stimulation has emerged as a viable tool to improve brain function in people with other neurological disorders.
    Now, a new study in the journal Brain Stimulation shows that targeted electrical stimulation in patients with traumatic brain injury led to an average 19% boost in recalling words.
    Led by University of Pennsylvania psychology professor Michael Jacob Kahana, a team of neuroscientists studied TBI patients with implanted electrodes, analyzed neural data as patients studied words, and used a machine learning algorithm to predict momentary memory lapses. Other lead authors included Wesleyan University psychology professor Youssef Ezzyat and Penn research scientist Paul Wanda.
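    The study's actual pipeline is more involved, but the closed-loop idea can be sketched in a few lines: train a classifier on neural features recorded while words are studied, then stimulate only when the predicted chance of recall drops. Everything below (the feature shapes, the 0.4 threshold, the random stand-in data) is an illustrative assumption, not the study's code.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Stand-in training data: one feature row per studied word (e.g., spectral
    # power per electrode), labeled 1 if the word was later recalled.
    features = rng.random((500, 32))
    recalled = rng.integers(0, 2, 500)

    classifier = LogisticRegression(max_iter=1000).fit(features, recalled)

    def should_stimulate(current_features, threshold=0.4):
        """Closed loop: trigger stimulation only when a memory lapse is predicted."""
        p_recall = classifier.predict_proba(current_features.reshape(1, -1))[0, 1]
        return p_recall < threshold
    ```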
    “The last decade has seen tremendous advances in the use of brain stimulation as a therapy for several neurological and psychiatric disorders including epilepsy, Parkinson’s disease, and depression,” Kahana says. “Memory loss, however, represents a huge burden on society. We lack effective therapies for the 27 million Americans suffering.”
    Study co-author Ramon Diaz-Arrastia, director of the Traumatic Brain Injury Clinical Research Center at Penn Medicine, says the technology Kahana and his team developed delivers “the right stimulation at the right time, informed by the wiring of the individual’s brain and that individual’s successful memory retrieval.”
    He says the top causes of TBI are motor vehicle accidents, which are decreasing, and falls, which are rising because of the aging population. The next most common causes are assaults and head injuries from participation in contact sports.
    This new study builds on the previous work of Ezzyat, Kahana, and their collaborators. In findings published in 2017, they showed that stimulation delivered when memory is expected to fail can improve memory, whereas stimulation administered during periods of good functioning worsens it. The stimulation in that study was open-loop, meaning it was applied by a computer without regard to the state of the brain.

  • A faster way to teach a robot

    Imagine purchasing a robot to perform household tasks. This robot was built and trained in a factory on a certain set of tasks and has never seen the items in your home. When you ask it to pick up a mug from your kitchen table, it might not recognize your mug (perhaps because this mug is painted with an unusual image, say, of MIT’s mascot, Tim the Beaver). So, the robot fails.
    “Right now, the way we train these robots, when they fail, we don’t really know why. So you would just throw up your hands and say, ‘OK, I guess we have to start over.’ A critical component that is missing from this system is enabling the robot to demonstrate why it is failing so the user can give it feedback,” says Andi Peng, an electrical engineering and computer science (EECS) graduate student at MIT.
    Peng and her collaborators at MIT, New York University, and the University of California at Berkeley created a framework that enables humans to quickly teach a robot what they want it to do, with a minimal amount of effort.
    When a robot fails, the system uses an algorithm to generate counterfactual explanations that describe what needed to change for the robot to succeed. For instance, maybe the robot would have been able to pick up the mug if the mug were a certain color. It shows these counterfactuals to the human and asks for feedback on why the robot failed. Then the system utilizes this feedback and the counterfactual explanations to generate new data it uses to fine-tune the robot.
    Fine-tuning involves tweaking a machine-learning model that has already been trained to perform one task, so it can perform a second, similar task.
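    The researchers' framework is its own system, but the fine-tuning step it relies on looks roughly like the following PyTorch sketch, in which the tiny network, the choice of frozen layers, and the hyperparameters are all hypothetical: layers learned at the factory stay frozen, and only the task head is updated on the small dataset built from counterfactuals and user feedback.

    ```python
    import torch
    import torch.nn as nn

    # Hypothetical stand-in for a policy pretrained at the factory.
    policy = nn.Sequential(
        nn.Linear(64, 128), nn.ReLU(),  # perception layers: keep frozen
        nn.Linear(128, 10),             # task head: adapt to the user's home
    )
    for param in policy[0].parameters():
        param.requires_grad = False

    optimizer = torch.optim.Adam(
        (p for p in policy.parameters() if p.requires_grad), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    def fine_tune(inputs, labels, epochs=5):
        """Update only the unfrozen head on counterfactual-augmented data."""
        for _ in range(epochs):
            optimizer.zero_grad()
            loss = loss_fn(policy(inputs), labels)
            loss.backward()
            optimizer.step()

    # e.g. fine_tune(torch.randn(32, 64), torch.randint(0, 10, (32,)))
    ```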
    The researchers tested this technique in simulations and found that it could teach a robot more efficiently than other methods. The robots trained with this framework performed better, while the training process consumed less of a human’s time.
    This framework could help robots learn faster in new environments without requiring a user to have technical knowledge. In the long run, this could be a step toward enabling general-purpose robots to efficiently perform daily tasks for the elderly or individuals with disabilities in a variety of settings.

  • Efficient discovery of improved energy materials by a new AI-guided workflow

    Scientists at the NOMAD Laboratory at the Fritz Haber Institute of the Max Planck Society recently proposed a workflow that can dramatically accelerate the search for novel materials with improved properties. They demonstrated the power of the approach by identifying more than 50 strongly thermally insulating materials. These could help alleviate the ongoing energy crisis by enabling more efficient thermoelectric elements, i.e., devices able to convert otherwise wasted heat into useful electrical voltage.
    Discovering new and reliable thermoelectric materials is paramount for making use of the more than 40% of energy given off globally as waste heat and for helping mitigate the growing challenges of climate change. One way to increase the thermoelectric efficiency of a material is to reduce its thermal conductivity, κ, thereby maintaining the temperature gradient needed to generate electricity. However, the cost of studying these properties has limited computational and experimental investigations of κ to only a minute subset of all possible materials. A team at the NOMAD Laboratory recently set out to reduce these costs by creating an AI-guided workflow that hierarchically screens out materials to efficiently find new and better thermal insulators.
    The work, recently published in npj Computational Materials, proposes a new way of using artificial intelligence (AI) to guide the high-throughput search for new materials. Instead of using physical/chemical intuition to screen out materials based on general, known, or suspected trends, the new procedure learns the conditions that lead to the desired outcome with advanced AI methods. The approach has the potential to put the search for new energy materials on a quantitative footing and to increase its efficiency.
    The first step in designing these workflows is to use advanced statistical and AI methods to approximate the target property of interest, in this case κ. To this end, the sure-independence screening and sparsifying operator (SISSO) approach is used. SISSO is a machine-learning method that reveals the fundamental dependencies between different materials properties from a set of billions of possible expressions. Compared to other “black-box” AI models, this approach is similarly accurate but additionally yields analytic relationships between different material properties, which makes it possible to apply modern feature-importance metrics to shed light on which material properties matter most. In the case of κ, these are the molar volume, Vm; the high-temperature limit of the Debye temperature, θD,∞; and the anharmonicity metric, σA.
    Furthermore, the statistical analysis makes it possible to distill rules of thumb for the individual features that allow a material’s potential to be a thermal insulator to be estimated a priori. Working with the three most important primary features hence allowed the team to create AI-guided computational workflows for discovering new thermal insulators. These workflows use state-of-the-art electronic-structure programs to calculate each of the selected features. At each step, materials that are unlikely to be good insulators, judging by their values of Vm, θD,∞, and σA, are screened out. With this, it is possible to reduce the number of calculations needed to find thermally insulating materials by over two orders of magnitude. In this work, this is demonstrated by identifying 96 thermal insulators (κ < 10 W m⁻¹ K⁻¹) in an initial set of 732 materials. The reliability of the approach was further verified by calculating κ with the highest possible accuracy for four of these predictions. Besides facilitating the active search for new thermoelectric materials, the formalisms proposed by the NOMAD team can also be applied to other urgent materials-science problems.
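    As a rough illustration of such a hierarchical funnel (the cutoff values below are invented for the example; the published workflow derives its thresholds from the SISSO models), each filter is cheaper to evaluate than the one after it, so the expensive full κ calculations are reserved for the few survivors:

    ```python
    # Illustrative cutoffs only; the actual thresholds come from the SISSO analysis.
    V_M_MIN = 20.0       # molar volume V_m, cm^3/mol (hypothetical)
    THETA_D_MAX = 300.0  # high-temperature Debye temperature, K (hypothetical)
    SIGMA_A_MIN = 0.2    # anharmonicity metric sigma_A (hypothetical)

    def screen(materials):
        """Keep only candidates likely to be strong thermal insulators."""
        survivors = []
        for m in materials:
            if m["V_m"] < V_M_MIN:          # compact cells tend to conduct heat well
                continue
            if m["theta_D"] > THETA_D_MAX:  # stiff lattices carry heat efficiently
                continue
            if m["sigma_A"] < SIGMA_A_MIN:  # nearly harmonic crystals conduct well
                continue
            survivors.append(m)             # only these get a full kappa calculation
        return survivors
    ```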

  • Bot inspired by baby turtles can swim under the sand

    This robot can swim under the sand and dig itself out too, thanks to two front limbs that mimic the oversized flippers of turtle hatchlings.
    It’s the only robot that is able to travel in sand at a depth of 5 inches. It can travel at a speed of 1.2 millimeters per second, roughly 4 meters, or 13 feet, per hour. This may seem slow, but it is comparable to the pace of other subterranean animals like worms and clams. The robot is equipped with force sensors at the end of its limbs that allow it to detect obstacles while in motion. It can operate untethered and be controlled via WiFi.
    Robots that can move through sand face significant challenges, such as dealing with far higher forces than robots that move through air or water. They also get damaged more easily. However, the potential benefits of solving locomotion in sand include inspection of grain silos, measurement of soil contaminants, seafloor digging, extraterrestrial exploration, and search and rescue.
    The robot is the result of several experiments conducted by a team of roboticists at the University of California San Diego to better understand sand and how robots could travel through it. Sand is particularly challenging because of the friction between sand grains that leads to large forces; difficulty sensing obstacles; and the fact that it switches between behaving like a liquid and a solid depending on the context.
    The team believed that observing animals would be key to developing a bot that can swim in sand and dig itself out of sand as well. After considering worms, they landed on sea turtle hatchlings, which have enlarged front fins that allow them to surface after hatching. Turtle-like flippers can generate large propulsive forces; allow the robot to steer; and have the potential to detect obstacles.
    Scientists still do not fully understand how robots with flipper-like appendages move within sand. The research team at UC San Diego conducted extensive simulations and testing, and finally landed on a tapered body design and a shovel-shaped nose.
    “We needed to build a robot that is both strong and streamlined,” said Shivam Chopra, lead author of the paper describing the robot in the journal Advanced Intelligent Systems and a Ph.D. student in the research group of professor Nick Gravish at the Jacobs School of Engineering at UC San Diego.

  • Researchers develop AI model to better predict which drugs may cause birth defects

    Data scientists at the Icahn School of Medicine at Mount Sinai in New York and colleagues have created an artificial intelligence model that may more accurately predict which existing medicines, not currently classified as harmful, may in fact lead to congenital disabilities.
    The model, or “knowledge graph,” described in the July 17 issue of the Nature journal Communications Medicine, also has the potential to flag pre-clinical compounds that may harm the developing fetus. The study is the first known of its kind to use knowledge graphs to integrate various data types to investigate the causes of congenital disabilities.
    Birth defects are abnormalities that affect about 1 in 33 births in the United States. They can be functional or structural and are believed to result from various factors, including genetics. However, the causes of most of these disabilities remain unknown. Certain substances found in medicines, cosmetics, food, and environmental pollutants can potentially lead to birth defects if the fetus is exposed to them during pregnancy.
    “We wanted to improve our understanding of reproductive health and fetal development, and importantly, warn about the potential of new drugs to cause birth defects before these drugs are widely marketed and distributed,” says Avi Ma’ayan, PhD, Professor, Pharmacological Sciences, and Director of the Mount Sinai Center for Bioinformatics at Icahn Mount Sinai, and senior author of the paper. “Although identifying the underlying causes is a complicated task, we offer hope that through complex data analysis like this that integrates evidence from multiple sources, we will be able, in some cases, to better predict, regulate, and protect against the significant harm that congenital disabilities could cause.”
    The researchers gathered knowledge across several datasets on birth-defect associations noted in published work, including those produced by NIH Common Fund programs, to demonstrate how integrating data from these resources can lead to synergistic discoveries. In particular, the combined data cover the known genetics of reproductive health, the classification of medicines based on their risk during pregnancy, and how drugs and pre-clinical compounds affect the biological mechanisms inside human cells.
    Specifically, the data included studies on genetic associations, drug- and preclinical-compound-induced gene expression changes in cell lines, known drug targets, genetic burden scores for human genes, and placental crossing scores for small molecule drugs.
    Importantly, using the knowledge graph, named ReproTox-KG, with semi-supervised learning (SSL), the research team prioritized 30,000 preclinical small-molecule drugs for their potential to cross the placenta and induce birth defects. SSL is a branch of machine learning that uses a small amount of labeled data to guide predictions for a much larger unlabeled set. In addition, by analyzing the topology of ReproTox-KG, more than 500 birth-defect/gene/drug cliques were identified that could explain the molecular mechanisms underlying drug-induced birth defects. In graph-theory terms, cliques are subsets of a graph in which every node is directly connected to every other node.
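    For readers unfamiliar with the construction, a toy version of such a clique search might look like the following sketch, where the graph, node names, and edges are fabricated for illustration and networkx’s find_cliques does the enumeration:

    ```python
    import networkx as nx

    # Toy knowledge graph with one node of each type (all edges hypothetical).
    G = nx.Graph()
    G.add_edge("neural_tube_defect", "GENE_A")  # defect associated with gene
    G.add_edge("GENE_A", "drug_X")              # drug perturbs the gene
    G.add_edge("neural_tube_defect", "drug_X")  # drug linked to the defect

    # A clique containing all three node types suggests a candidate mechanism:
    # the drug may induce the defect by acting on the shared gene.
    for clique in nx.find_cliques(G):
        if len(clique) >= 3:
            print("candidate mechanism:", clique)
    ```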
    The investigators caution that the study’s findings are preliminary and that further experiments are needed for validation.
    Next, the investigators plan to use a similar graph-based approach for other projects focusing on the relationship between genes, drugs, and diseases. They also aim to use the processed dataset as training materials for courses and workshops on bioinformatics analysis. In addition, they plan to extend the study to consider more complex data, such as gene expression from specific tissues and cell types collected at multiple stages of development.
    “We hope that our collaborative work will lead to a new global framework to assess potential toxicity for new drugs and explain the biological mechanisms by which some drugs, known to cause birth defects, may operate. It’s possible that at some point in the future, regulatory agencies such as the U.S. Food and Drug Administration and the U.S. Environmental Protection Agency may use this approach to evaluate the risk of new drugs or other chemical applications,” says Dr. Ma’ayan.

  • Robotics: New skin-like sensors fit almost everywhere

    Researchers from the Munich Institute of Robotics and Machine Intelligence (MIRMI) at the Technical University of Munich (TUM) have developed an automatic process for making soft sensors. These universal measurement cells can be attached to almost any kind of object. Applications are envisioned especially in robotics and prosthetics.
    “Detecting and sensing our environment is essential for understanding how to interact with it effectively,” says Sonja Groß. An important factor for interactions with objects is their shape. “This determines how we can perform certain tasks,” says the researcher from the Munich Institute of Robotics and Machine Intelligence (MIRMI) at TUM. In addition, physical properties of objects, such as their hardness and flexibility, influence how we can grasp and manipulate them, for example.
    Artificial hand: interaction with the robotic system
    The holy grail in robotics and prosthetics is a realistic emulation of the sensorimotor skills of a person, such as those of the human hand. In robotics, force and torque sensors are fully integrated into most devices. These sensors provide valuable feedback on the interactions of the robotic system, such as an artificial hand, with its surroundings. However, traditional sensors offer limited options for customization and cannot be attached to arbitrary objects. In short: until now, no process existed for producing sensors for rigid objects of arbitrary shapes and sizes.
    New framework for soft sensors presented for the first time
    This was the starting point for the research of Sonja Groß and Diego Hidalgo, which they have now presented at the ICRA robotics conference in London. The difference: a soft, skin-like material that wraps around objects. The research group has also developed a framework that largely automates the production process for this skin. It works as follows: “We use software to build the structure for the sensory systems,” says Hidalgo. “We then send this information to a 3D printer where our soft sensors are made.” The printer injects a conductive black paste into liquid silicone. The silicone hardens, but the paste enclosed within it remains liquid. When the sensors are squeezed or stretched, their electrical resistance changes. “That tells us how much compression or stretching force is applied to a surface. We use this principle to gain a general understanding of interactions with objects and, specifically, to learn how to control an artificial hand interacting with these objects,” explains Hidalgo. What sets their work apart: the sensors embedded in silicone adjust to the surface in question (such as fingers or hands) but still provide precise data that can be used for interaction with the environment.
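    The readout principle described here, resistance rising and falling with deformation, can be sketched with a simple linear calibration. The constants below are hypothetical placeholders; a real sensor would need a measured, likely nonlinear, calibration curve.

    ```python
    R_REST = 1000.0  # resistance with no load, ohms (hypothetical)
    SLOPE = 50.0     # resistance change per newton, from calibration (hypothetical)

    def estimate_force(resistance_now):
        """Map a resistance reading to an estimated compressive force."""
        delta_r = resistance_now - R_REST
        return delta_r / SLOPE  # newtons, assuming a linear response

    print(estimate_force(1100.0))  # a 100-ohm rise reads as 2.0 N
    ```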
    New perspectives for robotics and especially prosthetics
    “The integration of these soft, skin-like sensors in 3D objects opens up new paths for advanced haptic sensing in artificial intelligence,” says MIRMI Executive Director Prof. Sami Haddadin. The sensors provide valuable data on compressive forces and deformations in real time — thus providing immediate feedback. This expands the range of perception of an object or a robotic hand — facilitating a more sophisticated and sensitive interaction. Haddadin: “This work has the potential to bring about a general revolution in industries such as robotics, prosthetics and human-machine interaction by making it possible to create wireless and customizable sensor technology for arbitrary objects and machines.”
    Video showing the entire process: https://youtu.be/i43wgx9bT-E