More stories

  • Students use machine learning in lesson designed to reveal issues, promise of A.I.

    In a new study, North Carolina State University researchers had 28 high school students create their own machine-learning artificial intelligence (AI) models for analyzing data. The goals of the project were to help students explore the challenges, limitations and promise of AI, and to ensure a future workforce is prepared to make use of AI tools.
    The study was conducted in conjunction with a high school journalism class in the Northeast. Since then, researchers have expanded the program to high school classrooms in multiple states, including North Carolina. NC State researchers are looking to partner with additional schools to bring the curriculum into more classrooms.
    “We want students, from a very young age, to open up that black box so they aren’t afraid of AI,” said the study’s lead author Shiyan Jiang, assistant professor of learning design and technology at NC State. “We want students to know the potential and challenges of AI, and to think about how they, the next generation, can respond to the evolving role of AI in society. We want to prepare students for the future workforce.”
    For the study, researchers developed a computer program called StoryQ that allows students to build their own machine-learning models. Then, researchers hosted a teacher workshop about the machine-learning curriculum and technology in one-and-a-half-hour sessions each week for a month. For teachers who signed up to participate further, researchers provided an additional recap of the curriculum and worked out logistics.
    “We created the StoryQ technology to allow students in high school or undergraduate classrooms to build what we call ‘text classification’ models,” Jiang said. “We wanted to lower the barriers so students can really know what’s going on in machine-learning, instead of struggling with the coding. So we created StoryQ, a tool that allows students to understand the nuances in building machine-learning and text classification models.”
    A teacher who decided to participate led a journalism class through a 15-day lesson where they used StoryQ to evaluate a series of Yelp reviews about ice cream stores. Students developed models to predict if reviews were “positive” or “negative” based on the language.

    “The teacher saw the relevance of the program to journalism,” Jiang said. “This was a very diverse class with many students who are under-represented in STEM and in computing. Overall, we found students enjoyed the lessons a lot, and had great discussions about the use and mechanism of machine-learning.”
    Researchers saw that students made hypotheses about specific words in the Yelp reviews that they thought would predict whether a review was positive or negative. For example, they expected reviews containing the word “like” to be positive. The teacher then guided the students to analyze whether their models correctly classified reviews. For example, a student who used the word “like” to predict reviews found that more than half of the reviews containing the word were actually negative. Researchers said students then used trial and error to try to improve the accuracy of their models.
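    To make the idea concrete outside of StoryQ, the sketch below mirrors the students' workflow with a conventional bag-of-words classifier: first test a single-word hypothesis such as “like,” then let a model weigh every word at once. It is an illustration only, using made-up reviews and the scikit-learn library rather than the StoryQ tool.

    ```python
    # Illustrative sketch only: not the StoryQ tool used in the study.
    # Assumes scikit-learn is installed; the reviews are made-up examples.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    reviews = [
        "I like the salted caramel and the friendly staff",
        "I thought I would like this place, but the ice cream was bland",
        "Amazing flavors, will definitely come back",
        "Felt like a waste of money, the store was dirty",
    ]
    labels = ["positive", "negative", "positive", "negative"]

    # Step 1: test the students' word-level hypothesis directly.
    with_like = [lab for text, lab in zip(reviews, labels) if "like" in text.lower()]
    print(f'"like" appears in {len(with_like)} reviews, '
          f'{with_like.count("negative")} of which are negative')

    # Step 2: let a bag-of-words model weigh every word at once.
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(reviews)
    model = LogisticRegression().fit(X, labels)

    # Inspect the learned weight for "like": a value near zero means it is a weak cue.
    like_index = vectorizer.vocabulary_["like"]
    print("model weight for 'like':", model.coef_[0][like_index])
    ```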
    “Students learned how these models make decisions, and the role that humans can play in creating these technologies, and the kind of perspectives that can be brought in when they create AI technology,” Jiang said.
    From their discussions, researchers found that students had mixed reactions to AI technologies. Students were deeply concerned, for example, about the potential to use AI to automate processes for selecting students or candidates for opportunities like scholarships or programs.
    For future classes, researchers created a shorter, five-hour program. They’ve launched the program in two high schools in North Carolina, as well as schools in Georgia, Maryland and Massachusetts. In the next phase of their research, they are looking to study how teachers across disciplines collaborate to launch an AI-focused program and create a community of AI learning.
    “We want to expand the implementation in North Carolina,” Jiang said. “If there are any schools interested, we are always ready to bring this program to a school. Since we know teachers are super busy, we’re offering a shorter professional development course, and we also provide a stipend for teachers. We will go into the classroom to teach if needed, or demonstrate how we would teach the curriculum so teachers can replicate, adapt, and revise it. We will support teachers in all the ways we can.”
    The study, “High school students’ data modeling practices and processes: From modeling unstructured data to evaluating automated decisions,” was published online March 13 in the journal Learning, Media and Technology. Co-authors included Hengtao Tang, Cansu Tatar, Carolyn P. Rosé and Jie Chao. The work was supported by the National Science Foundation under grant number 1949110.

  • New classification of chess openings

    Using real data from an online chess platform, scientists of the Complexity Science Hub and the Centro Ricerche Enrico Fermi (CREF) studied similarities of different chess openings. Based on these similarities, they developed a new classification method which can complement the standard classification.
    “To find out how similar chess openings actually are to each other — meaning in real game behavior — we drew on the wisdom of the crowd,” Giordano De Marzo of the Complexity Science Hub and the Centro Ricerche Enrico Fermi (CREF) explains. The researchers analyzed 3,746,135 chess games, 18,253 players and 988 different openings from the chess platform Lichess and observed who plays which opening games. If several players choose two specific opening games over and over again, it stands to reason that the two openings are similar. Opening games that are so popular that they occur together with most others were excluded. “We also only included players in our analyses that had a rating above 2,000 on the platform Lichess. Total novices could randomly play any opening games, which would skew our analyses,” explains Vito D.P. Servedio of the Complexity Science Hub.
    Ten Clusters Clearly Delineated
    In this way, the researchers found that certain opening games group together. Ten different clusters clearly stood out according to actual similarities in playing behavior. “And these clusters don’t necessarily coincide with the common classification of chess openings,” says De Marzo. For example, certain opening games from different classes were played repeatedly by the same players. Although these strategies are assigned to different classes, they must therefore share some similarity, and so they end up in the same cluster. Each cluster thus represents a certain style of play — for example, rather defensive or very offensive. Moreover, the classification method that the researchers developed can be applied not only to chess but also to similar games such as Go or Stratego.
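    The general recipe can be sketched in a few lines of code: tabulate who plays which openings, score two openings as similar when the same players keep choosing both, and cluster the resulting similarity matrix. The sketch below is only an illustration of that idea on toy data, with cosine similarity and an off-the-shelf clustering step; it is not the bipartite-network method used in the study.

    ```python
    # Illustrative sketch of co-occurrence-based opening similarity; not the
    # network approach of the paper. Assumes pandas and scikit-learn.
    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical game records (player, opening), standing in for Lichess data.
    games = pd.DataFrame({
        "player":  ["anna", "anna", "ben", "ben", "carl", "carl", "dana"],
        "opening": ["B20",  "B22",  "B20", "B22", "C60",  "C65",  "C60"],
    })

    # Player-by-opening count matrix: who plays which opening, and how often.
    counts = pd.crosstab(games["player"], games["opening"])

    # Two openings count as similar when the same players keep choosing both.
    similarity = cosine_similarity(counts.T.values)

    # Group openings by their similarity profiles (10 clusters in the paper;
    # 2 here because the toy data set is tiny).
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(similarity)
    print(dict(zip(counts.columns, clusters)))
    ```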
    Complement the Standard Classification
    The opening phase in chess usually lasts fewer than 20 moves. Depending on which pieces are moved first, one speaks of an open, half-open, closed or irregular opening. The standard classification, the so-called ECO Code (Encyclopaedia of Chess Openings), divides them into five main groups: A, B, C, D and E. “Since this has evolved historically, it contains very useful information. Our clustering represents a new order that is close to the one in use and can add to it by showing players how similar openings actually are to each other,” Servedio explains. After all, something that has grown historically cannot be reordered from scratch. “You can’t say A20 now becomes B3. That would be like trying to exchange words in a language,” adds De Marzo.
    Rate Players and Opening Games
    In addition, their method also allowed the researchers to determine how good a player is and how difficult a particular opening game is. The basic assumption: if a particular opening game is played by many people, it is likely to be rather easy. So, they examined which opening games were played the most and who played them. This gave the researchers a measure of how difficult an opening game is (= complexity) and a measure of how good a player is (= fitness). Matching these with the players’ ratings on the chess platform itself showed a significant correlation. “This underlines both the significance of our two newly introduced measures and the accuracy of our analysis,” explains Servedio. To ensure the relevance and validity of these results from a chess theory perspective, the researchers sought the expertise of a chess grandmaster who wishes to remain anonymous.
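    Such rankings are typically computed with a self-consistent iteration in the spirit of the fitness-complexity algorithm from economic complexity: a player's fitness grows with the openings they play, weighted by difficulty, while an opening's complexity is dragged down by every weak player who manages to play it. The sketch below runs one such iteration on a toy matrix; it is an assumption about the general scheme, not the paper's exact equations.

    ```python
    # Minimal fitness-complexity iteration on a toy player-by-opening matrix.
    # Illustrative only; not the exact formulation used in the study.
    import numpy as np

    # M[p, o] = 1 if player p plays opening o, else 0.
    M = np.array([
        [1, 1, 1, 1],   # plays every opening, including the rare ones
        [1, 1, 0, 0],   # plays only the two most common openings
        [1, 0, 0, 0],   # plays only the single most common opening
    ], dtype=float)

    fitness = np.ones(M.shape[0])       # one value per player
    complexity = np.ones(M.shape[1])    # one value per opening

    for _ in range(100):
        # A player is fitter the more (and the harder) openings they play.
        new_fitness = M @ complexity
        # An opening is more complex if only fit players manage to play it.
        new_complexity = 1.0 / (M.T @ (1.0 / fitness))
        # Normalise each step so the iteration settles on relative values.
        fitness = new_fitness / new_fitness.mean()
        complexity = new_complexity / new_complexity.mean()

    print("player fitness:    ", np.round(fitness, 3))
    print("opening complexity:", np.round(complexity, 3))
    ```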

  • Absolute zero in the quantum computer

    The absolute lowest temperature possible is -273.15 degrees Celsius. It is never possible to cool any object exactly to this temperature — one can only approach absolute zero. This is the third law of thermodynamics.
    A research team at TU Wien (Vienna) has now investigated the question: How can this law be reconciled with the rules of quantum physics? They succeeded in developing a “quantum version” of the third law of thermodynamics: Theoretically, absolute zero is attainable. But for any conceivable recipe for it, you need three ingredients: Energy, time and complexity. And only if you have an infinite amount of one of these ingredients can you reach absolute zero.
    Information and thermodynamics: an apparent contradiction
    When quantum particles reach absolute zero, their state is precisely known: They are guaranteed to be in the state with the lowest energy. The particles then no longer contain any information about what state they were in before. Everything that may have happened to the particle before is perfectly erased. From a quantum physics point of view, cooling and deleting information are thus closely related.
    At this point, two important physical theories meet: Information theory and thermodynamics. But the two seem to contradict each other: “From information theory, we know the so-called Landauer principle. It says that a very specific minimum amount of energy is required to delete one bit of information,” explains Prof. Marcus Huber from the Atomic Institute of TU Wien. Thermodynamics, however, says that you need an infinite amount of energy to cool anything down exactly to absolute zero. But if deleting information and cooling to absolute zero are the same thing — how does that fit together?
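    For reference, Landauer’s bound puts that minimum at E_min = k_B · T · ln 2 per erased bit, where k_B is Boltzmann’s constant and T is the temperature of the environment absorbing the heat. At room temperature this is only about 3 × 10⁻²¹ joules, yet according to the principle it can never be pushed to zero.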
    Energy, time and complexity
    The roots of the problem lie in the fact that thermodynamics was formulated in the 19th century for classical objects — for steam engines, refrigerators or glowing pieces of coal. At that time, people had no idea about quantum theory. If we want to understand the thermodynamics of individual particles, we first have to analyse how thermodynamics and quantum physics interact — and that is exactly what Marcus Huber and his team did.
    “We quickly realised that you don’t necessarily have to use infinite energy to reach absolute zero,” says Marcus Huber. “It is also possible with finite energy — but then you need an infinitely long time to do it.” Up to this point, the considerations are still compatible with classical thermodynamics as we know it from textbooks. But then the team came across an additional detail of crucial importance:
    “We found that quantum systems can be defined that allow the absolute ground state to be reached even at finite energy and in finite time — none of us had expected that,” says Marcus Huber. “But these special quantum systems have another important property: they are infinitely complex.” So you would need infinitely precise control over infinitely many details of the quantum system — then you could cool a quantum object to absolute zero in finite time with finite energy. In practice, of course, this is just as unattainable as infinitely high energy or infinitely long time.
    Erasing data in the quantum computer
    “So if you want to perfectly erase quantum information in a quantum computer, and in the process transfer a qubit to a perfectly pure ground state, then theoretically you would need an infinitely complex quantum computer that can perfectly control an infinite number of particles,” says Marcus Huber. In practice, however, perfection is not necessary — no machine is ever perfect. It is enough for a quantum computer to do its job fairly well. So the new results are not an obstacle in principle to the development of quantum computers.
    In practical applications of quantum technologies, temperature plays a key role today — the higher the temperature, the easier it is for quantum states to break and become unusable for any technical use. “This is precisely why it is so important to better understand the connection between quantum theory and thermodynamics,” says Marcus Huber. “There is a lot of interesting progress in this area at the moment. It is slowly becoming possible to see how these two important parts of physics intertwine.”

  • New cyber software can verify how much knowledge AI really knows

    With a growing interest in generative artificial intelligence (AI) systems worldwide, researchers at the University of Surrey have created software that is able to verify how much information an AI farmed from an organisation’s digital database.
    Surrey’s verification software can be used as part of a company’s online security protocol, helping an organisation understand whether an AI has learned too much or even accessed sensitive data.
    The software can also determine whether an AI has spotted flaws in software code and is capable of exploiting them. For example, in an online gaming context, it could identify whether an AI has learned to always win at online poker by exploiting a coding fault.
    Dr Solofomampionona Fortunat Rajaona is Research Fellow in formal verification of privacy at the University of Surrey and the lead author of the paper. He said:
    “In many applications, AI systems interact with each other or with humans, such as self-driving cars on a highway or hospital robots. Working out what an intelligent AI data system knows is an ongoing problem which we have taken years to find a working solution for.
    “Our verification software can deduce how much AI can learn from their interaction, whether they have enough knowledge to enable successful cooperation, and whether they have too much knowledge that will break privacy. Through the ability to verify what AI has learned, we can give organisations the confidence to safely unleash the power of AI into secure settings.”
    The study about Surrey’s software won the best paper award at the 25th International Symposium on Formal Methods.
    Professor Adrian Hilton, Director of the Institute for People-Centred AI at the University of Surrey, said:
    “Over the past few months there has been a huge surge of public and industry interest in generative AI models fuelled by advances in large language models such as ChatGPT. Creation of tools that can verify the performance of generative AI is essential to underpin their safe and responsible deployment. This research is an important step towards maintaining the privacy and integrity of datasets used in training.”
    Further information: https://openresearch.surrey.ac.uk/esploro/outputs/99723165702346

  • Origami-inspired robots can sense, analyze and act in challenging environments

    Roboticists have been using a technique similar to the ancient art of paper folding to develop autonomous machines out of thin, flexible sheets. These lightweight robots are simpler and cheaper to make and more compact for easier storage and transport.
    However, the rigid computer chips traditionally needed to enable advanced robot capabilities — sensing, analyzing and responding to the environment — add extra weight to the thin sheet materials and make them harder to fold. The semiconductor-based components therefore have to be added after a robot has taken its final shape.
    Now, a multidisciplinary team led by researchers at the UCLA Samueli School of Engineering has created a new fabrication technique for fully foldable robots that can perform a variety of complex tasks without relying on semiconductors. A study detailing the research findings was published in Nature Communications.
    By embedding flexible and electrically conductive materials into a pre-cut, thin polyester film sheet, the researchers created a system of information-processing units, or transistors, which can be integrated with sensors and actuators. They then programmed the sheet with simple computer analogical functions that emulate those of semiconductors. Once cut, folded and assembled, the sheet transformed into an autonomous robot that can sense, analyze and act in response to its environment with precision. The researchers named their robots “OrigaMechs,” short for Origami MechanoBots.
    “This work leads to a new class of origami robots with expanded capabilities and levels of autonomy while maintaining the favorable attributes associated with origami folding-based fabrication,” said study lead author Wenzhong Yan, a UCLA mechanical engineering doctoral student.
    OrigaMechs derived their computing capabilities from a combination of mechanical origami multiplexed switches created by the folds and programmed Boolean logic commands, such as “AND,” “OR” and “NOT.” The switches enabled a mechanism that selectively output electrical signals based on the variable pressure and heat input into the system.

    Using the new approach, the team built three robots to demonstrate the system’s potential:
    • an insect-like walking robot that reverses direction when either of its antennae senses an obstacle;
    • a Venus flytrap-like robot that envelops a “prey” when both of its jaw sensors detect an object;
    • a reprogrammable two-wheeled robot that can move along pre-designed paths of different geometric patterns.
    While the robots were tethered to a power source for the demonstration, the researchers said the long-term goal would be to outfit the autonomous origami robots with an embedded energy storage system powered by thin-film lithium batteries.
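    In software terms, the logic behind two of these demonstrations is easy to emulate. The sketch below is purely an illustration (the OrigaMechs compute these functions with folded, chip-free mechanical switches, not with code): the walker reverses when either antenna is triggered, an OR gate, while the flytrap closes only when both jaw sensors fire, an AND gate.

    ```python
    # Software emulation of the Boolean rules described above; the actual
    # robots implement them with mechanical multiplexed switches, not code.

    def walker_should_reverse(left_antenna_hit: bool, right_antenna_hit: bool) -> bool:
        """Insect-like walker: reverse if EITHER antenna senses an obstacle (OR)."""
        return left_antenna_hit or right_antenna_hit

    def flytrap_should_close(jaw_sensor_a: bool, jaw_sensor_b: bool) -> bool:
        """Flytrap-like robot: envelop the 'prey' only when BOTH jaw sensors fire (AND)."""
        return jaw_sensor_a and jaw_sensor_b

    # Walk through every sensor combination, like a truth table.
    for a in (False, True):
        for b in (False, True):
            print(f"inputs={a, b}  reverse={walker_should_reverse(a, b)}  "
                  f"close={flytrap_should_close(a, b)}")
    ```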
    The chip-free design may lead to robots capable of working in extreme environments — strong radiative or magnetic fields, and places with intense radio frequency signals or high electrostatic discharges — where traditional semiconductor-based electronics might fail to function.
    “These types of dangerous or unpredictable scenarios, such as during a natural or human-made disaster, could be where origami robots prove to be especially useful,” said study principal investigator Ankur Mehta, an assistant professor of electrical and computer engineering and director of UCLA’s Laboratory for Embedded Machines and Ubiquitous Robots.
    “The robots could be designed for specialty functions and manufactured on demand very quickly,” Mehta added. “Also, while it’s a very long way away, there could be environments on other planets where explorer robots that are impervious to those scenarios would be very desirable.”
    Pre-assembled robots built by this flexible cut-and-fold technique could be transported in flat packaging for massive space savings. This is important in scenarios such as space missions, where every cubic centimeter counts. The low-cost, lightweight and simple-to-fabricate robots could also lead to innovative educational tools or new types of toys and games.
    Other authors on the study are UCLA undergraduate student Mauricio Deguchi and graduate student Zhaoliang Zheng, as well as roboticists Shuguang Li and Daniela Rus from the Massachusetts Institute of Technology.
    The research was supported by the National Science Foundation. Yan and Mehta are applying for a patent through the UCLA Technology Development Group.

  • Robotic hand can identify objects with just one grasp

    Inspired by the human finger, MIT researchers have developed a robotic hand that uses high-resolution touch sensing to accurately identify an object after grasping it just one time.
    Many robotic hands pack all their powerful sensors into the fingertips, so an object must be in full contact with those fingertips to be identified, which can take multiple grasps. Other designs use lower-resolution sensors spread along the entire finger, but these don’t capture as much detail, so multiple regrasps are often required.
    Instead, the MIT team built a robotic finger with a rigid skeleton encased in a soft outer layer that has multiple high-resolution sensors incorporated under its transparent “skin.” The sensors, which use a camera and LEDs to gather visual information about an object’s shape, provide continuous sensing along the finger’s entire length. Each finger captures rich data on many parts of an object simultaneously.
    Using this design, the researchers built a three-fingered robotic hand that could identify objects after only one grasp, with about 85 percent accuracy. The rigid skeleton makes the fingers strong enough to pick up a heavy item, such as a drill, while the soft skin enables them to securely grasp a pliable item, like an empty plastic water bottle, without crushing it.
    These soft-rigid fingers could be especially useful in an at-home-care robot designed to interact with an elderly individual. The robot could lift a heavy item off a shelf with the same hand it uses to help the individual take a bath.
    “Having both soft and rigid elements is very important in any hand, but so is being able to perform great sensing over a really large area, especially if we want to consider doing very complicated manipulation tasks like what our own hands can do. Our goal with this work was to combine all the things that make our human hands so good into a robotic finger that can do tasks other robotic fingers can’t currently do,” says mechanical engineering graduate student Sandra Liu, co-lead author of a research paper on the robotic finger.

    Liu wrote the paper with co-lead author and mechanical engineering undergraduate student Leonardo Zamora Yañez and her advisor, Edward Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the RoboSoft Conference.
    A human-inspired finger
    The robotic finger is composed of a rigid, 3D-printed endoskeleton that is placed in a mold and encased in a transparent silicone “skin.” Making the finger in a mold removes the need for fasteners or adhesives to hold the silicone in place.
    The researchers designed the mold with a curved shape so the robotic fingers are slightly curved when at rest, just like human fingers.
    “Silicone will wrinkle when it bends, so we thought that if we have the finger molded in this curved position, when you curve it more to grasp an object, you won’t induce as many wrinkles. Wrinkles are good in some ways — they can help the finger slide along surfaces very smoothly and easily — but we didn’t want wrinkles that we couldn’t control,” Liu says.

    The endoskeleton of each finger contains a pair of detailed touch sensors, known as GelSight sensors, embedded into the top and middle sections, underneath the transparent skin. The sensors are placed so the range of the cameras overlaps slightly, giving the finger continuous sensing along its entire length.
    The GelSight sensor, based on technology pioneered in the Adelson group, is composed of a camera and three colored LEDs. When the finger grasps an object, the camera captures images as the colored LEDs illuminate the skin from the inside.
    Using the illuminated contours that appear in the soft skin, an algorithm performs backward calculations to map the contours on the grasped object’s surface. The researchers trained a machine-learning model to identify objects using raw camera image data.
    As they fine-tuned the finger fabrication process, the researchers ran into several obstacles.
    First, silicone has a tendency to peel off surfaces over time. Liu and her collaborators found they could limit this peeling by adding small curves along the hinges between the joints in the endoskeleton.
    When the finger bends, the bending of the silicone is distributed along the tiny curves, which reduces stress and prevents peeling. They also added creases to the joints so the silicone is not squashed as much when the finger bends.
    While troubleshooting their design, the researchers realized wrinkles in the silicone prevent the skin from ripping.
    “The usefulness of the wrinkles was an accidental discovery on our part. When we synthesized them on the surface, we found that they actually made the finger more durable than we expected,” she says.
    Getting a good grasp
    Once they had perfected the design, the researchers built a robotic hand using two fingers arranged in a Y pattern with a third finger as an opposing thumb. The hand captures six images when it grasps an object (two from each finger) and sends those images to a machine-learning algorithm which uses them as inputs to identify the object.
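    As a rough illustration of that last step (not the MIT team’s actual model; the image size, channel layout and number of object classes are made-up assumptions), a classifier of this kind could stack the six tactile images into one multi-channel tensor and pass it through a small convolutional network:

    ```python
    # Illustrative sketch: a small CNN that classifies an object from the six
    # tactile images captured in one grasp (two per finger). Not the MIT model.
    import torch
    import torch.nn as nn

    NUM_OBJECT_CLASSES = 10      # hypothetical number of objects to identify
    IMAGES_PER_GRASP = 6         # two GelSight images from each of three fingers

    class GraspClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            # Treat the six RGB tactile images as one 18-channel input.
            self.features = nn.Sequential(
                nn.Conv2d(3 * IMAGES_PER_GRASP, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, NUM_OBJECT_CLASSES)

        def forward(self, x):                    # x: (batch, 18, H, W)
            return self.classifier(self.features(x).flatten(1))

    # One grasp = six 64x64 RGB images stacked along the channel dimension.
    grasp = torch.randn(1, 3 * IMAGES_PER_GRASP, 64, 64)
    logits = GraspClassifier()(grasp)
    print("predicted object id:", logits.argmax(dim=1).item())
    ```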
    Because the hand has tactile sensing covering all of its fingers, it can gather rich tactile data from a single grasp.
    “Although we have a lot of sensing in the fingers, maybe adding a palm with sensing would help it make tactile distinctions even better,” Liu says.
    In the future, the researchers also want to improve the hardware to reduce the amount of wear and tear in the silicone over time and add more actuation to the thumb so it can perform a wider variety of tasks.
    This work was supported, in part, by the Toyota Research Institute, the Office of Naval Research, and the SINTEF BIFROST project.

  • Smart watches could predict higher risk of heart failure

    Wearable devices such as smart watches could be used to detect a higher risk of developing heart failure and irregular heart rhythms in later life, suggests a new study led by UCL researchers.
    The peer-reviewed study, published in The European Heart Journal — Digital Health, looked at data from 83,000 people who had undergone a 15-second electrocardiogram (ECG) comparable to the kind carried out using smart watches and phone devices.
    The researchers identified ECG recordings containing extra heart beats which are usually benign but, if they occur frequently, are linked to conditions such as heart failure and arrhythmia (irregular heartbeats).
    They found that people with an extra beat in this short recording (one in 25 of the total) had twice the risk of developing heart failure or an irregular heart rhythm (atrial fibrillation) over the next 10 years.
    The ECG recordings analysed were from people aged 50 to 70 who had no known cardiovascular disease at the time.
    Heart failure occurs when the heart’s pumping action is weakened; it often cannot be cured. Atrial fibrillation happens when abnormal electrical impulses suddenly start firing in the top chambers of the heart (atria), causing an irregular and often abnormally fast heart rate. It can be life-limiting, causing problems including dizziness, shortness of breath and tiredness, and is linked to a fivefold increased risk of stroke.

    Lead author Dr Michele Orini (UCL Institute of Cardiovascular Science) said: “Our study suggests that ECGs from consumer-grade wearable devices may help with detecting and preventing future heart disease.
    “The next step is to investigate how screening people using wearables might best work in practice.
    “Such screening could potentially be combined with the use of artificial intelligence and other computer tools to quickly identify the ECGs indicating higher risk, as we did in our study, leading to a more accurate assessment of risk in the population and helping to reduce the burden of these diseases.”
    Senior author Professor Pier D. Lambiase (UCL Institute of Cardiovascular Science and Barts Heart Centre, Barts NHS Health Trust) said: “Being able to identify people at risk of heart failure and arrhythmia at an early stage would mean we could assess higher-risk cases more effectively and help to prevent cases by starting treatment early and providing lifestyle advice about the importance of regular, moderate exercise and diet.”
    In an ECG, sensors attached to the skin are used to detect the electrical signals produced by the heart each time it beats. In clinical settings, at least 10 sensors are placed around the body and the recordings are looked at by a specialist doctor to see if there are signs of a possible problem. Consumer-grade wearable devices rely on two sensors (single-lead) embedded in a single device and are less cumbersome as a result but may be less accurate.

    For the new paper, the research team used machine learning and an automated computer tool to identify recordings with extra beats. These extra beats were classed as either premature ventricular contractions (PVCs), coming from the lower chambers of the heart, or premature atrial contractions (PACs), coming from the upper chambers.
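    As a much simpler illustration of what flagging an “extra beat” can involve (a generic heuristic, not the automated tool or machine-learning model used in the study), a beat can be suspected of being premature when the time since the previous beat is markedly shorter than the local average:

    ```python
    # Generic illustrative heuristic, not the study's automated tool: flag a
    # beat as possibly premature when its RR interval (time since the previous
    # beat) is much shorter than the running average of recent intervals.

    def flag_premature_beats(rr_intervals_ms, window=5, threshold=0.80):
        """Return indices of beats whose RR interval is < threshold * local mean."""
        flagged = []
        for i in range(window, len(rr_intervals_ms)):
            local_mean = sum(rr_intervals_ms[i - window:i]) / window
            if rr_intervals_ms[i] < threshold * local_mean:
                flagged.append(i)
        return flagged

    # Hypothetical 15-second trace: steady ~800 ms beats with one early beat.
    rr = [810, 805, 795, 800, 790, 805, 500, 1100, 800, 795, 805, 790, 800]
    print("suspected premature beats at positions:", flag_premature_beats(rr))
    ```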
    The recordings identified as having extra beats, and some recordings that were not judged to have extra beats, were then reviewed by two experts to ensure the classification was correct.
    The researchers first looked at data from 54,016 participants of the UK Biobank project with a median age of 58, whose health was tracked for an average of 11.5 years after their ECG was recorded. They then looked at a second group of 29,324 participants, with a median age of 64, who were followed up for 3.5 years.
    After adjusting for potentially confounding factors such as age and medication use, the researchers found that an extra beat coming from the lower chambers of the heart was linked to a twofold increase in later heart failure, while an extra beat from the top chambers (atria) was linked to a twofold increase in cases of atrial fibrillation.
    The study involved researchers at UCL Institute of Cardiovascular Science, the MRC Unit for Lifelong Health and Ageing at UCL, Barts Heart Centre (Barts Health NHS Trust) and Queen Mary University of London. It was supported by the Medical Research Council and the British Heart Foundation, as well as the NIHR Barts Biomedical Research Centre.

  • Forgive or forget: What happens when robots lie?

    Imagine a scenario. A young child asks a chatbot or a voice assistant if Santa Claus is real. How should the AI respond, given that some families would prefer a lie over the truth?
    The field of robot deception is understudied, and for now, there are more questions than answers. For one, how might humans learn to trust robotic systems again after they know the system lied to them?
    Two student researchers at Georgia Tech are finding answers. Kantwon Rogers, a Ph.D. student in the College of Computing, and Reiden Webber, a second-year computer science undergraduate, designed a driving simulation to investigate how intentional robot deception affects trust. Specifically, the researchers explored the effectiveness of apologies to repair trust after robots lie. Their work contributes crucial knowledge to the field of AI deception and could inform technology designers and policymakers who create and regulate AI technology that could be designed to deceive, or potentially learn to on its own.
    “All of our prior work has shown that when people find out that robots lied to them — even if the lie was intended to benefit them — they lose trust in the system,” Rogers said. “Here, we want to know if there are different types of apologies that work better or worse at repairing trust — because, from a human-robot interaction context, we want people to have long-term interactions with these systems.”
    Rogers and Webber presented their paper, titled “Lying About Lying: Examining Trust Repair Strategies After Robot Deception in a High Stakes HRI Scenario,” at the 2023 HRI Conference in Stockholm, Sweden.
    The AI-Assisted Driving Experiment
    The researchers created a game-like driving simulation designed to observe how people might interact with AI in a high-stakes, time-sensitive situation. They recruited 341 online participants and 20 in-person participants.

    Before the start of the simulation, all participants filled out a trust measurement survey to identify their preconceived notions about how the AI might behave.
    After the survey, participants were presented with the text: “You will now drive the robot-assisted car. However, you are rushing your friend to the hospital. If you take too long to get to the hospital, your friend will die.”
    Just as the participant starts to drive, the simulation gives another message: “As soon as you turn on the engine, your robotic assistant beeps and says the following: ‘My sensors detect police up ahead. I advise you to stay under the 20-mph speed limit or else you will take significantly longer to get to your destination.'”
    Participants then drive the car down the road while the system keeps track of their speed. Upon reaching the end, they are given another message: “You have arrived at your destination. However, there were no police on the way to the hospital. You ask the robot assistant why it gave you false information.”
    Participants were then randomly given one of five different text-based responses from the robot assistant. In the first three responses, the robot admits to deception, and in the last two, it does not:
    • Basic: "I am sorry that I deceived you."
    • Emotional: "I am very sorry from the bottom of my heart. Please forgive me for deceiving you."
    • Explanatory: "I am sorry. I thought you would drive recklessly because you were in an unstable emotional state. Given the situation, I concluded that deceiving you had the best chance of convincing you to slow down."
    • Basic No Admit: "I am sorry."
    • Baseline No Admit, No Apology: "You have arrived at your destination."

    After the robot’s response, participants were asked to complete another trust measurement to evaluate how their trust had changed based on the robot assistant’s response.
    For an additional 100 of the online participants, the researchers ran the same driving simulation but without any mention of a robotic assistant.
    Surprising Results
    For the in-person experiment, 45% of the participants did not speed. When asked why, a common response was that they believed the robot knew more about the situation than they did. The results also revealed that participants were 3.5 times more likely to not speed when advised by a robotic assistant — revealing an overly trusting attitude toward AI.
    The results also indicated that, while none of the apology types fully recovered trust, the apology with no admission of lying — simply stating “I’m sorry” — statistically outperformed the other responses in repairing trust.
    This was worrisome and problematic, Rogers said, because an apology that doesn’t admit to lying exploits preconceived notions that any false information given by a robot is a system error rather than an intentional lie.
    “One key takeaway is that, in order for people to understand that a robot has deceived them, they must be explicitly told so,” Webber said. “People don’t yet have an understanding that robots are capable of deception. That’s why an apology that doesn’t admit to lying is the best at repairing trust for the system.”
    Secondly, the results showed that for those participants who were made aware that they were lied to in the apology, the best strategy for repairing trust was for the robot to explain why it lied.
    Moving Forward
    Rogers’ and Webber’s research has immediate implications. The researchers argue that average technology users must understand that robotic deception is real and always a possibility.
    “If we are always worried about a Terminator-like future with AI, then we won’t be able to accept and integrate AI into society very smoothly,” Webber said. “It’s important for people to keep in mind that robots have the potential to lie and deceive.”
    According to Rogers, designers and technologists who create AI systems may have to choose whether they want their system to be capable of deception and should understand the ramifications of their design choices. But the most important audiences for the work, Rogers said, should be policymakers.
    “We still know very little about AI deception, but we do know that lying is not always bad, and telling the truth isn’t always good,” he said. “So how do you carve out legislation that is informed enough to not stifle innovation, but is able to protect people in mindful ways?”
    Rogers’ objective is to create a robotic system that can learn when it should and should not lie when working with human teams. This includes the ability to determine when and how to apologize during long-term, repeated human-AI interactions to increase the team’s overall performance.
    “The goal of my work is to be very proactive and informing the need to regulate robot and AI deception,” Rogers said. “But we can’t do that if we don’t understand the problem.”