More stories

  • Robots predict human intention for faster builds

    Humans have a way of understanding others’ goals, desires and beliefs, a crucial skill that allows us to anticipate people’s actions. Taking bread out of the toaster? You’ll need a plate. Sweeping up leaves? I’ll grab the green trash can.
    This skill, often referred to as “theory of mind,” comes easily to us as humans, but it is still challenging for robots. If robots are to become truly collaborative helpers in manufacturing and in everyday life, however, they need to learn the same abilities.
    In a new paper, a best paper award finalist at the ACM/IEEE International Conference on Human-Robot Interaction (HRI), USC Viterbi computer science researchers aim to teach robots how to predict human preferences in assembly tasks, so they can one day help out on everything from building a satellite to setting a table.
    “When working with people, a robot needs to constantly guess what the person will do next,” said lead author Heramb Nemlekar, a USC computer science PhD student working under the supervision of Stefanos Nikolaidis, an assistant professor of computer science. “For example, if the robot thinks the person will need a screwdriver to assemble the next part, it can get the screwdriver ahead of time so that the person does not have to wait. This way the robot can help people finish the assembly much faster.”
    But, as anyone who has co-built furniture with a partner can attest, predicting what a person will do next is difficult: different people prefer to build the same product in different ways. While some people want to start with the most difficult parts to get them over with, others may want to start with the easiest parts to save energy.
    Making predictions
    Most of the current techniques require people to show the robot how they would like to perform the assembly, but this takes time and effort and can defeat the purpose, said Nemlekar. “Imagine having to assemble an entire airplane just to teach the robot your preferences,” he said.

    In this new study, however, the researchers found similarities in how an individual will assemble different products. For instance, if you start with the hardest part when building an Ikea sofa, you are likely to use the same tactic when putting together a baby’s crib.
    So, instead of having people “show” the robot their preferences in a complex task, the researchers created a small assembly task (called a “canonical” task) that people can easily and quickly perform: in this case, putting together parts of a simple model airplane, such as the wings, tail and propeller.
    The robot “watched” the human complete the task using a camera placed directly above the assembly area, looking down. To detect the parts operated by the human, the system used AprilTags, similar to QR codes, attached to the parts.
    Then, the system used machine learning to learn a person’s preference based on their sequence of actions in the canonical task.
    “Based on how a person performs the small assembly, the robot predicts what that person will do in the larger assembly,” said Nemlekar. “For example, if the robot sees that a person likes to start the small assembly with the easiest part, it will predict that they will start with the easiest part in the large assembly as well.”
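    As an illustration of the general idea (a hypothetical sketch, not the authors’ actual model), a preference inferred from the order of actions in a short canonical task can be transferred to rank the parts of a larger assembly. All part names, features and the update rule below are invented:

    ```python
    # Hypothetical sketch: infer a user's assembly preference from a short
    # canonical task, then predict their first action in a larger task.
    # Features and the update rule are illustrative, not the paper's model.
    import numpy as np

    # Each part is described by simple features: [difficulty, size].
    canonical_parts = {
        "wing": np.array([0.9, 0.6]),
        "tail": np.array([0.4, 0.3]),
        "propeller": np.array([0.2, 0.2]),
    }
    # Observed order in the canonical task: this user starts easy.
    observed_sequence = ["propeller", "tail", "wing"]

    def fit_preference(parts, sequence, epochs=100, lr=0.1):
        """Fit a linear utility u(x) = w.x so earlier-chosen parts score
        higher (a crude perceptron-style update over pairwise choices)."""
        w = np.zeros(2)
        for _ in range(epochs):
            for i, earlier in enumerate(sequence):
                for later in sequence[i + 1:]:
                    diff = parts[earlier] - parts[later]
                    if w @ diff <= 0:  # earlier part should score higher
                        w += lr * diff
        return w

    w = fit_preference(canonical_parts, observed_sequence)

    # Transfer to a larger assembly: rank its (invented) parts by utility.
    large_parts = {
        "fuselage": np.array([0.8, 0.9]),
        "landing_gear": np.array([0.5, 0.4]),
        "antenna": np.array([0.1, 0.1]),
    }
    print(max(large_parts, key=lambda p: w @ large_parts[p]))  # -> "antenna"
    ```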
    Building trust

    In the researchers’ user study, their system was able to predict the actions that humans would take with around 82% accuracy.
    “We hope that our research can make it easier for people to show robots what they prefer,” said Nemlekar. “By helping each person in their preferred way, robots can reduce their work, save time and even build trust with them.”
    For instance, imagine you’re assembling a piece of furniture at home, but you’re not particularly handy and struggle with the task. A robot that has been trained to predict your preferences could provide you with the necessary tools and parts ahead of time, making the assembly process easier.
    This technology could also be useful in industrial settings where workers are tasked with assembling products on a mass scale, saving time and reducing the risk of injury or accidents. Additionally, it could help persons with disabilities or limited mobility to more easily assemble products and maintain independence.
    Quickly learning preferences
    The goal is not to replace humans on the factory floor, say the researchers. Instead, they hope this research will lead to significant improvements in the safety and productivity of assembly workers in human-robot hybrid factories. “Robots can perform the non-value-added or ergonomically challenging tasks that are currently being performed by workers.”
    As for the next steps, the researchers plan to develop a method to automatically design canonical tasks for different types of assembly tasks. They also aim to evaluate the benefit of learning human preferences from short tasks and predicting actions in complex tasks in other contexts, for instance, personal assistance in homes.
    “While we observed that human preferences transfer from canonical to actual tasks in assembly manufacturing, I expect similar findings in other applications as well,” said Nikolaidis. “A robot that can quickly learn our preferences can help us prepare a meal, rearrange furniture or do house repairs, having a significant impact on our daily lives.”

  • DMI allows magnon-magnon coupling in hybrid perovskites

    An international group of researchers has created a mixed magnon state in an organic hybrid perovskite material by utilizing the Dzyaloshinskii-Moriya interaction (DMI). The resulting material has potential for processing and storing quantum computing information. The work also expands the number of potential materials that can be used to create hybrid magnonic systems.
    In magnetic materials, quasi-particles called magnons are collective excitations of the electron spins. There are two types of magnon modes, optical and acoustic, which differ in the phase of their spin precession.
    “Both optical and acoustic magnons propagate spin waves in antiferromagnets,” says Dali Sun, associate professor of physics and member of the Organic and Carbon Electronics Lab (ORaCEL) at North Carolina State University. “But in order to use spin waves to process quantum information, you need a mixed spin wave state.”
    “Normally two magnon modes cannot generate a mixed spin state due to their different symmetries,” Sun says. “But by harnessing the DMI we discovered a hybrid perovskite with a mixed magnon state.” Sun is also a corresponding author of the research.
    The researchers accomplished this by adding an organic cation to the material, which created a particular interaction called the DMI. In short, the DMI breaks the symmetry of the material, allowing the spins to mix.
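    For reference, the DMI adds an antisymmetric exchange term to the spin Hamiltonian; in its standard textbook form (not specific to this paper) it reads

    ```latex
    H_{\mathrm{DMI}} = \sum_{\langle i,j \rangle} \mathbf{D}_{ij} \cdot \left( \mathbf{S}_i \times \mathbf{S}_j \right),
    ```

    where the vector \mathbf{D}_{ij} vanishes whenever the crystal is inversion-symmetric about the midpoint of the bond between spins i and j. Because the cross product favors canted rather than collinear spins, switching this term on by breaking that symmetry is what allows the two magnon modes to mix.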
    The team utilized a copper-based magnetic hybrid organic-inorganic perovskite, which has a unique octahedral structure. These octahedra can tilt and deform in different ways. Adding an organic cation to the material breaks the symmetry, creating angles within the material that allow the different magnon modes to couple and the spins to mix.
    “Beyond the quantum implications, this is the first time we’ve observed broken symmetry in a hybrid organic-inorganic perovskite,” says Andrew Comstock, NC State graduate research assistant and first author of the research.
    “We found that the DMI allows magnon coupling in copper-based hybrid perovskite materials with the correct symmetry requirements,” Comstock says. “Adding different cations creates different effects. This work really opens up ways to create magnon coupling from a lot of different materials — and studying the dynamic effects of this material can teach us new physics as well.”
    The work appears in Nature Communications and was primarily supported by the U.S. Department of Energy’s Center for Hybrid Organic Inorganic Semiconductors for Energy (CHOISE). Chung-Tao Chou of the Massachusetts Institute of Technology is co-first author of the work. Luqiao Liu of MIT, and Matthew Beard and Haipeng Lu of the National Renewable Energy Laboratory are co-corresponding authors of the research.

  • Students use machine learning in lesson designed to reveal issues, promise of A.I.

    In a new study, North Carolina State University researchers had 28 high school students create their own machine-learning artificial intelligence (AI) models for analyzing data. The goals of the project were to help students explore the challenges, limitations and promise of AI, and to ensure a future workforce is prepared to make use of AI tools.
    The study was conducted in conjunction with a high school journalism class in the Northeast. Since then, researchers have expanded the program to high school classrooms in multiple states, including North Carolina. NC State researchers are looking to partner with additional schools to collaborate in bringing the curriculum into classrooms.
    “We want students, from a very young age, to open up that black box so they aren’t afraid of AI,” said the study’s lead author Shiyan Jiang, assistant professor of learning design and technology at NC State. “We want students to know the potential and challenges of AI, so they think about how they, the next generation, can respond to the evolving role of AI in society. We want to prepare students for the future workforce.”
    For the study, researchers developed a computer program called StoryQ that allows students to build their own machine-learning models. Then, researchers hosted a teacher workshop about the machine-learning curriculum and technology in one-and-a-half-hour sessions each week for a month. For teachers who signed up to participate further, researchers recapped the curriculum and worked out logistics.
    “We created the StoryQ technology to allow students in high school or undergraduate classrooms to build what we call ‘text classification’ models,” Jiang said. “We wanted to lower the barriers so students can really know what’s going on in machine learning, instead of struggling with the coding. So we created StoryQ, a tool that allows students to understand the nuances of building machine-learning and text classification models.”
    A teacher who decided to participate led a journalism class through a 15-day lesson where they used StoryQ to evaluate a series of Yelp reviews about ice cream stores. Students developed models to predict if reviews were “positive” or “negative” based on the language.

    “The teacher saw the relevance of the program to journalism,” Jiang said. “This was a very diverse class with many students who are under-represented in STEM and in computing. Overall, we found students enjoyed the lessons a lot, and had great discussions about the use and mechanism of machine-learning.”
    Researchers saw that students made hypotheses about specific words in the Yelp reviews, which they thought would predict whether a review would be positive or negative. For example, they expected reviews containing the word “like” to be positive. Then, the teacher guided the students to analyze whether their models correctly classified reviews. For example, a student who used the word “like” to predict reviews found that more than half of the reviews containing the word were actually negative. Researchers said students then used trial and error to try to improve the accuracy of their models.
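    StoryQ deliberately hides the code, but the kind of model the students built can be sketched in a few lines. The reviews below are invented stand-ins for the Yelp data, and the tooling (scikit-learn rather than StoryQ) is our own substitution:

    ```python
    # Minimal text-classification sketch of the kind of model built in
    # StoryQ; the reviews are invented stand-ins, not the study's data.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    reviews = [
        "I like the chocolate here, great scoops",           # positive
        "Friendly staff and amazing waffle cones",           # positive
        "I'd like to love it, but the ice cream was bland",  # negative
        "Long wait and melted, soupy ice cream",             # negative
    ]
    labels = ["positive", "positive", "negative", "negative"]

    # Bag-of-words features + logistic regression = text classifier.
    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(reviews, labels)
    print(model.predict(["the cones were great, no wait at all"]))
    ```

    Note that the word “like” appears in both a positive and a negative review above, which is exactly the kind of surprise the students ran into.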
    “Students learned how these models make decisions, and the role that humans can play in creating these technologies, and the kind of perspectives that can be brought in when they create AI technology,” Jiang said.
    From their discussions, researchers found that students had mixed reactions to AI technologies. Students were deeply concerned, for example, about the potential to use AI to automate processes for selecting students or candidates for opportunities like scholarships or programs.
    For future classes, researchers created a shorter, five-hour program. They’ve launched the program in two high schools in North Carolina, as well as schools in Georgia, Maryland and Massachusetts. In the next phase of their research, they are looking to study how teachers across disciplines collaborate to launch an AI-focused program and create a community of AI learning.
    “We want to expand the implementation in North Carolina,” Jiang said. “If there are any schools interested, we are always ready to bring this program to a school. Since we know teachers are super busy, we’re offering a shorter professional development course, and we also provide a stipend for teachers. We will go into the classroom to teach if needed, or demonstrate how we would teach the curriculum so teachers can replicate, adapt, and revise it. We will support teachers in all the ways we can.”
    The study, “High school students’ data modeling practices and processes: From modeling unstructured data to evaluating automated decisions,” was published online March 13 in the journal Learning, Media and Technology. Co-authors included Hengtao Tang, Cansu Tatar, Carolyn P. Rosé and Jie Chao. The work was supported by the National Science Foundation under grant number 1949110.

  • New classification of chess openings

    Using real data from an online chess platform, scientists at the Complexity Science Hub and the Centro Ricerche Enrico Fermi (CREF) studied the similarities of different chess openings. Based on these similarities, they developed a new classification method that can complement the standard classification.
    “To find out how similar chess openings actually are to each other — meaning in real game behavior — we drew on the wisdom of the crowd,” explains Giordano De Marzo of the Complexity Science Hub and the Centro Ricerche Enrico Fermi (CREF). The researchers analyzed 3,746,135 chess games, 18,253 players and 988 different openings from the chess platform Lichess and observed who plays which openings. If several players choose the same two openings over and over again, it stands to reason that those openings are similar. Openings so popular that they occur together with most others were excluded. “We also only included players in our analyses who had a rating above 2,000 on the platform Lichess. Total novices could randomly play any opening, which would skew our analyses,” explains Vito D.P. Servedio of the Complexity Science Hub.
    Ten Clusters Clearly Delineated
    In this way, the researchers found that certain openings group together. Ten different clusters clearly stood out according to actual similarities in playing behavior. “And these clusters don’t necessarily coincide with the common classification of chess openings,” says De Marzo. For example, certain openings from different classes were played repeatedly by the same players; although these strategies are classified in different classes, they must share some similarity, so they all end up in the same cluster. Each cluster thus represents a certain style of play — for example, rather defensive or very offensive. Moreover, the classification method the researchers developed can be applied not only to chess, but to similar games such as Go or Stratego.
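    A toy sketch of this crowd-based similarity idea, under the assumption that one starts from a binary player-by-opening matrix (the paper’s actual network method may differ in its details):

    ```python
    # Toy sketch: group openings by who plays them. Build a binary
    # player-by-opening matrix, compute cosine similarity between
    # opening columns, and cluster. Data and names are invented.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    openings = ["Sicilian", "French", "King's Gambit", "Italian"]
    # Rows = players, columns = openings (1 = player uses that opening).
    M = np.array([
        [1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 1, 1],
        [0, 0, 1, 1],
        [1, 0, 0, 1],
    ])

    # Cosine similarity between opening columns.
    norms = np.linalg.norm(M, axis=0)
    S = (M.T @ M) / np.outer(norms, norms)

    # Hierarchical clustering on distance = 1 - similarity
    # (condensed upper-triangle form, as SciPy expects).
    Z = linkage(1 - S[np.triu_indices(len(openings), k=1)], method="average")
    for name, c in zip(openings, fcluster(Z, t=2, criterion="maxclust")):
        print(name, "-> cluster", c)
    ```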
    Complement the Standard Classification
    The opening phase in chess usually lasts fewer than 20 moves. Depending on which pieces are moved first, one speaks of an open, half-open, closed or irregular opening. The standard classification, the so-called ECO Code (Encyclopaedia of Chess Openings), divides them into five main groups: A, B, C, D and E. “Since this has evolved historically, it contains very useful information. Our clustering represents a new order that is close to the existing one and can complement it by showing players how similar openings actually are to each other,” Servedio explains. After all, something that has grown historically cannot be reordered from scratch. “You can’t say A20 now becomes B3. That would be like trying to exchange words in a language,” adds De Marzo.
    Rate Players and Opening Games
    In addition, their method also allowed the researchers to determine how good a player is and how difficult a particular opening is. The basic assumption: if a particular opening is played by many people, it is likely to be rather easy. So they examined which openings were played the most, and who played them. This gave the researchers a measure of how difficult an opening is (its complexity) and a measure of how good a player is (their fitness). Matching these with the players’ ratings on the chess platform itself showed a significant correlation. “On the one hand, this underlines the significance of our two newly introduced measures, but also the accuracy of our analysis,” explains Servedio. To ensure the relevance and validity of these results from a chess-theory perspective, the researchers sought the expertise of a chess grandmaster who wishes to remain anonymous.
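    That description closely resembles the fitness-complexity scheme from economic complexity; assuming the analogy holds (the paper’s exact equations may differ), with M_{po} = 1 when player p plays opening o, player fitness F_p and opening complexity Q_o are computed by iterating

    ```latex
    \tilde{F}_p^{(n)} = \sum_o M_{po}\, Q_o^{(n-1)}, \qquad
    \tilde{Q}_o^{(n)} = \frac{1}{\sum_p M_{po}\, \frac{1}{F_p^{(n-1)}}},
    ```

    with both quantities normalized to unit mean after each step. Intuitively, a player gains fitness by playing complex openings, while an opening’s complexity is dragged down whenever low-fitness players manage to play it.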

  • Absolute zero in the quantum computer

    The absolute lowest temperature possible is -273.15 degrees Celsius. It is never possible to cool any object exactly to this temperature — one can only approach absolute zero. This is the third law of thermodynamics.
    A research team at TU Wien (Vienna) has now investigated the question: How can this law be reconciled with the rules of quantum physics? They succeeded in developing a “quantum version” of the third law of thermodynamics: Theoretically, absolute zero is attainable. But for any conceivable recipe for it, you need three ingredients: Energy, time and complexity. And only if you have an infinite amount of one of these ingredients can you reach absolute zero.
    Information and thermodynamics: an apparent contradiction
    When quantum particles reach absolute zero, their state is precisely known: They are guaranteed to be in the state with the lowest energy. The particles then no longer contain any information about what state they were in before. Everything that may have happened to the particle before is perfectly erased. From a quantum physics point of view, cooling and deleting information are thus closely related.
    At this point, two important physical theories meet: Information theory and thermodynamics. But the two seem to contradict each other: “From information theory, we know the so-called Landauer principle. It says that a very specific minimum amount of energy is required to delete one bit of information,” explains Prof. Marcus Huber from the Atomic Institute of TU Wien. Thermodynamics, however, says that you need an infinite amount of energy to cool anything down exactly to absolute zero. But if deleting information and cooling to absolute zero are the same thing — how does that fit together?
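    Landauer’s principle puts a number on that minimum: erasing one bit in an environment at temperature T dissipates at least

    ```latex
    E_{\min} = k_B T \ln 2,
    ```

    where k_B is Boltzmann’s constant; at room temperature this comes to only about 3 × 10⁻²¹ joules. The bound is tiny but strictly positive for any T > 0, which is what makes its collision with the third law worth resolving.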
    Energy, time and complexity
    The roots of the problem lie in the fact that thermodynamics was formulated in the 19th century for classical objects — for steam engines, refrigerators or glowing pieces of coal. At that time, people had no idea about quantum theory. If we want to understand the thermodynamics of individual particles, we first have to analyse how thermodynamics and quantum physics interact — and that is exactly what Marcus Huber and his team did.
    “We quickly realised that you don’t necessarily have to use infinite energy to reach absolute zero,” says Marcus Huber. “It is also possible with finite energy — but then you need an infinitely long time to do it.” Up to this point, the considerations are still compatible with classical thermodynamics as we know it from textbooks. But then the team came across an additional detail of crucial importance:
    “We found that quantum systems can be defined that allow the absolute ground state to be reached even at finite energy and in finite time — none of us had expected that,” says Marcus Huber. “But these special quantum systems have another important property: they are infinitely complex.” So you would need infinitely precise control over infinitely many details of the quantum system — then you could cool a quantum object to absolute zero in finite time with finite energy. In practice, of course, this is just as unattainable as infinitely high energy or infinitely long time.
    Erasing data in the quantum computer
    “So if you want to perfectly erase quantum information in a quantum computer, and in the process transfer a qubit to a perfectly pure ground state, then theoretically you would need an infinitely complex quantum computer that can perfectly control an infinite number of particles,” says Marcus Huber. In practice, however, perfection is not necessary — no machine is ever perfect. It is enough for a quantum computer to do its job fairly well. So the new results are not an obstacle in principle to the development of quantum computers.
    In practical applications of quantum technologies, temperature plays a key role today: the higher the temperature, the more easily quantum states break down and become unusable for any technical purpose. “This is precisely why it is so important to better understand the connection between quantum theory and thermodynamics,” says Marcus Huber. “There is a lot of interesting progress in this area at the moment. It is slowly becoming possible to see how these two important parts of physics intertwine.”

  • New cyber software can verify how much knowledge AI really knows

    With a growing interest in generative artificial intelligence (AI) systems worldwide, researchers at the University of Surrey have created software that can verify how much information an AI has gathered from an organisation’s digital database.
    Surrey’s verification software can be used as part of a company’s online security protocol, helping an organisation understand whether an AI has learned too much or even accessed sensitive data.
    The software can also identify whether an AI has found, and is capable of exploiting, flaws in software code. For example, in an online gaming context, it could identify whether an AI has learned to always win at online poker by exploiting a coding fault.
    Dr Solofomampionona Fortunat Rajaona is Research Fellow in formal verification of privacy at the University of Surrey and the lead author of the paper. He said:
    “In many applications, AI systems interact with each other or with humans, such as self-driving cars on a highway or hospital robots. Working out what an intelligent AI data system knows is an ongoing problem that has taken us years to find a working solution for.
    “Our verification software can deduce how much an AI can learn from its interactions, whether it has enough knowledge to enable successful cooperation, and whether it has too much knowledge, which would break privacy. Through the ability to verify what an AI has learned, we can give organisations the confidence to safely unleash the power of AI in secure settings.”
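    As a toy illustration of epistemic verification (not Surrey’s actual software), the textbook possible-worlds definition says an agent knows a fact exactly when the fact holds in every world consistent with what the agent has observed:

    ```python
    # Toy possible-worlds knowledge check (illustrative only; not the
    # Surrey tool). An agent "knows" a proposition iff it holds in
    # every world consistent with the agent's observation.

    # Worlds: (user_is_admin, record_is_sensitive)
    worlds = [(a, s) for a in (True, False) for s in (True, False)]

    def knows(observation, proposition):
        """True iff `proposition` holds in all worlds matching `observation`."""
        consistent = [w for w in worlds if observation(w)]
        return all(proposition(w) for w in consistent)

    # The AI observed only that access was granted, and (in this toy
    # model) access is granted exactly to admins.
    granted = lambda w: w[0]
    print(knows(granted, lambda w: w[0]))  # True: it knows the user is admin
    print(knows(granted, lambda w: w[1]))  # False: sensitivity still unknown
    ```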
    The study about Surrey’s software won the best paper award at the 25th International Symposium on Formal Methods.
    Professor Adrian Hilton, Director of the Institute for People-Centred AI at the University of Surrey, said:
    “Over the past few months there has been a huge surge of public and industry interest in generative AI models fuelled by advances in large language models such as ChatGPT. Creation of tools that can verify the performance of generative AI is essential to underpin their safe and responsible deployment. This research is an important step towards maintaining the privacy and integrity of datasets used in training.”
    Further information: https://openresearch.surrey.ac.uk/esploro/outputs/99723165702346

  • Origami-inspired robots can sense, analyze and act in challenging environments

    Roboticists have been using a technique similar to the ancient art of paper folding to develop autonomous machines out of thin, flexible sheets. These lightweight robots are simpler and cheaper to make and more compact for easier storage and transport.
    However, the rigid computer chips traditionally needed to enable advanced robot capabilities — sensing, analyzing and responding to the environment — add extra weight to the thin sheet materials and make them harder to fold. The semiconductor-based components therefore have to be added after a robot has taken its final shape.
    Now, a multidisciplinary team led by researchers at the UCLA Samueli School of Engineering has created a new fabrication technique for fully foldable robots that can perform a variety of complex tasks without relying on semiconductors. A study detailing the research findings was published in Nature Communications.
    By embedding flexible and electrically conductive materials into a pre-cut, thin polyester film sheet, the researchers created a system of information-processing units, or transistors, which can be integrated with sensors and actuators. They then programmed the sheet with simple analog computing functions that emulate those of semiconductors. Once cut, folded and assembled, the sheet transformed into an autonomous robot that can sense, analyze and act in response to its environment with precision. The researchers named their robots “OrigaMechs,” short for Origami MechanoBots.
    “This work leads to a new class of origami robots with expanded capabilities and levels of autonomy while maintaining the favorable attributes associated with origami folding-based fabrication,” said study lead author Wenzhong Yan, a UCLA mechanical engineering doctoral student.
    OrigaMechs derived their computing capabilities from a combination of mechanical origami multiplexed switches created by the folds and programmed Boolean logic commands, such as “AND,” “OR” and “NOT.” The switches enabled a mechanism that selectively output electrical signals based on the variable pressure and heat input into the system.
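    As a software analogy of what those folded switches compute (the actual robots evaluate this mechanically, with no code involved; the thresholds here are invented):

    ```python
    # Software analogy of OrigaMechs' mechanical Boolean logic; the real
    # robots compute this with folds and switches, not code.

    def switch(pressure, threshold=0.5):
        """A fold acting as a switch: closes (True) above a pressure threshold."""
        return pressure > threshold

    def flytrap_should_close(jaw1, jaw2):
        # Venus flytrap robot: close only when BOTH jaw sensors fire (AND).
        return switch(jaw1) and switch(jaw2)

    def walker_should_reverse(left_antenna, right_antenna):
        # Walking robot: reverse when EITHER antenna senses an obstacle (OR).
        return switch(left_antenna) or switch(right_antenna)

    print(flytrap_should_close(0.8, 0.9))   # True: both jaws pressed
    print(walker_should_reverse(0.7, 0.1))  # True: one antenna triggered
    ```

    The demonstration robots described below behave according to exactly these AND/OR rules.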

    Using the new approach, the team built three robots to demonstrate the system’s potential:

      • an insect-like walking robot that reverses direction when either of its antennae senses an obstacle;
      • a Venus flytrap-like robot that envelops a “prey” when both of its jaw sensors detect an object;
      • a reprogrammable two-wheeled robot that can move along pre-designed paths of different geometric patterns.

    While the robots were tethered to a power source for the demonstration, the researchers said the long-term goal would be to outfit the autonomous origami robots with an embedded energy storage system powered by thin-film lithium batteries.
    The chip-free design may lead to robots capable of working in extreme environments — strong radiative or magnetic fields, and places with intense radio frequency signals or high electrostatic discharges — where traditional semiconductor-based electronics might fail to function.
    “These types of dangerous or unpredictable scenarios, such as during a natural or human-made disaster, could be where origami robots proved to be especially useful,” said study principal investigator Ankur Mehta, an assistant professor of electrical and computer engineering and director of UCLA’s Laboratory for Embedded Machines and Ubiquitous Robots.
    “The robots could be designed for specialty functions and manufactured on demand very quickly,” Mehta added. “Also, while it’s a very long way away, there could be environments on other planets where explorer robots that are impervious to those scenarios would be very desirable.”
    Pre-assembled robots built by this flexible cut-and-fold technique could be transported in flat packaging for massive space savings. This is important in scenarios such as space missions, where every cubic centimeter counts. The low-cost, lightweight and simple-to-fabricate robots could also lead to innovative educational tools or new types of toys and games.
    Other authors on the study are UCLA undergraduate student Mauricio Deguchi and graduate student Zhaoliang Zheng, as well as roboticists Shuguang Li and Daniela Rus from the Massachusetts Institute of Technology.
    The research was supported by the National Science Foundation. Yan and Mehta are applying for a patent through the UCLA Technology Development Group.

  • Robotic hand can identify objects with just one grasp

    Inspired by the human finger, MIT researchers have developed a robotic hand that uses high-resolution touch sensing to accurately identify an object after grasping it just one time.
    Many robotic hands pack all their powerful sensors into the fingertips, so an object must be in full contact with those fingertips to be identified, which can take multiple grasps. Other designs use lower-resolution sensors spread along the entire finger, but these don’t capture as much detail, so multiple regrasps are often required.
    Instead, the MIT team built a robotic finger with a rigid skeleton encased in a soft outer layer that has multiple high-resolution sensors incorporated under its transparent “skin.” The sensors, which use a camera and LEDs to gather visual information about an object’s shape, provide continuous sensing along the finger’s entire length. Each finger captures rich data on many parts of an object simultaneously.
    Using this design, the researchers built a three-fingered robotic hand that could identify objects after only one grasp, with about 85 percent accuracy. The rigid skeleton makes the fingers strong enough to pick up a heavy item, such as a drill, while the soft skin enables them to securely grasp a pliable item, like an empty plastic water bottle, without crushing it.
    These soft-rigid fingers could be especially useful in an at-home-care robot designed to interact with an elderly individual. The robot could lift a heavy item off a shelf with the same hand it uses to help the individual take a bath.
    “Having both soft and rigid elements is very important in any hand, but so is being able to perform great sensing over a really large area, especially if we want to consider doing very complicated manipulation tasks like what our own hands can do. Our goal with this work was to combine all the things that make our human hands so good into a robotic finger that can do tasks other robotic fingers can’t currently do,” says mechanical engineering graduate student Sandra Liu, co-lead author of a research paper on the robotic finger.

    Liu wrote the paper with co-lead author and mechanical engineering undergraduate student Leonardo Zamora Yañez and her advisor, Edward Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the RoboSoft Conference.
    A human-inspired finger
    The robotic finger is composed of a rigid, 3D-printed endoskeleton that is placed in a mold and encased in a transparent silicone “skin.” Making the finger in a mold removes the need for fasteners or adhesives to hold the silicone in place.
    The researchers designed the mold with a curved shape so the robotic fingers are slightly curved when at rest, just like human fingers.
    “Silicone will wrinkle when it bends, so we thought that if we have the finger molded in this curved position, when you curve it more to grasp an object, you won’t induce as many wrinkles. Wrinkles are good in some ways — they can help the finger slide along surfaces very smoothly and easily — but we didn’t want wrinkles that we couldn’t control,” Liu says.

    The endoskeleton of each finger contains a pair of detailed touch sensors, known as GelSight sensors, embedded into the top and middle sections, underneath the transparent skin. The sensors are placed so the range of the cameras overlaps slightly, giving the finger continuous sensing along its entire length.
    The GelSight sensor, based on technology pioneered in the Adelson group, is composed of a camera and three colored LEDs. When the finger grasps an object, the camera captures images as the colored LEDs illuminate the skin from the inside.
    Using the illuminated contours that appear in the soft skin, an algorithm performs backward calculations to map the contours on the grasped object’s surface. The researchers trained a machine-learning model to identify objects using raw camera image data.
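    The underlying idea is photometric stereo: with the LED directions known, the per-pixel intensities recorded under the three lights can be inverted for a surface normal. A minimal sketch with invented light directions and readings (not the actual GelSight calibration):

    ```python
    # Minimal photometric-stereo sketch of the GelSight principle;
    # light directions and the intensity readings are invented.
    import numpy as np

    # Known (approximately unit) directions of the three colored LEDs.
    L = np.array([
        [ 0.70,  0.00, 0.71],
        [-0.35,  0.61, 0.71],
        [-0.35, -0.61, 0.71],
    ])

    # Intensities at one pixel under each LED. Lambertian model:
    # I = L @ n (up to a constant albedo factor).
    I = np.array([0.9, 0.4, 0.3])

    # Invert the 3x3 system for the normal, then normalize it.
    n = np.linalg.solve(L, I)
    n /= np.linalg.norm(n)
    print("estimated surface normal:", n)
    ```

    Repeating this at every pixel recovers the surface contours; in the paper’s pipeline, the recognition model instead works directly on the raw camera images.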
    As they fine-tuned the finger fabrication process, the researchers ran into several obstacles.
    First, silicone has a tendency to peel off surfaces over time. Liu and her collaborators found they could limit this peeling by adding small curves along the hinges between the joints in the endoskeleton.
    When the finger bends, the bending of the silicone is distributed along the tiny curves, which reduces stress and prevents peeling. They also added creases to the joints so the silicone is not squashed as much when the finger bends.
    While troubleshooting their design, the researchers realized wrinkles in the silicone prevent the skin from ripping.
    “The usefulness of the wrinkles was an accidental discovery on our part. When we synthesized them on the surface, we found that they actually made the finger more durable than we expected,” she says.
    Getting a good grasp
    Once they had perfected the design, the researchers built a robotic hand using two fingers arranged in a Y pattern with a third finger as an opposing thumb. The hand captures six images when it grasps an object (two from each finger) and sends those images to a machine-learning algorithm which uses them as inputs to identify the object.
    Because the hand has tactile sensing covering all of its fingers, it can gather rich tactile data from a single grasp.
    “Although we have a lot of sensing in the fingers, maybe adding a palm with sensing would help it make tactile distinctions even better,” Liu says.
    In the future, the researchers also want to improve the hardware to reduce the amount of wear and tear in the silicone over time and add more actuation to the thumb so it can perform a wider variety of tasks.
    This work was supported, in part, by the Toyota Research Institute, the Office of Naval Research, and the SINTEF BIFROST project.