More stories

  • Cats purrfectly demonstrate what it takes to trust robots

    Would you trust a robot to look after your cat? New research suggests that caring for a cat takes more than a carefully designed robot: the environment in which the robot operates is also vital, as is human interaction.
    Cat Royale is a unique collaboration between computer scientists from the University of Nottingham and artists at Blast Theory, who worked together to create a multispecies world centred around a bespoke enclosure in which three cats and a robot arm coexist for six hours a day during a twelve-day installation, part of an artist-led project. The installation was launched in 2023 at the World Science Festival in Brisbane, Australia, and has been touring since; it has just won a Webby Award for its creative experience.
    The research paper, “Designing Multispecies Worlds for Robots, Cats, and Humans,” has just been presented at the annual ACM CHI Conference on Human Factors in Computing Systems (CHI ’24), where it won a Best Paper award. It outlines how designing the technology and its interactions is not sufficient: it is equally important to consider the design of the ‘world’ in which the technology operates. The research also highlights the necessity of human involvement in areas such as breakdown recovery and animal welfare, and in the role of audience.
    Cat Royale centred around a robot arm offering activities to make the cats happier: dragging a ‘mouse’ toy along the floor, raising a feather ‘bird’ into the air, and even offering them treats to eat. The team then trained an AI to learn which games the cats liked best so that it could personalise their experiences.
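    The paper does not detail the learning method behind this personalisation; purely as an illustrative sketch, one simple way to frame it is as a multi-armed bandit, with each activity as an arm and an observed engagement score as the reward (all names and numbers below are hypothetical):

    ```python
    import random

    # Hypothetical activities the robot arm can offer (names are illustrative, not from the paper).
    ACTIVITIES = ["drag_mouse_toy", "raise_feather_bird", "offer_treat"]

    class EpsilonGreedyBandit:
        """Minimal epsilon-greedy bandit: one running value estimate per activity."""

        def __init__(self, arms, epsilon=0.1):
            self.epsilon = epsilon
            self.counts = {arm: 0 for arm in arms}
            self.values = {arm: 0.0 for arm in arms}

        def choose(self):
            # Explore occasionally; otherwise pick the activity rated best so far.
            if random.random() < self.epsilon:
                return random.choice(list(self.counts))
            return max(self.values, key=self.values.get)

        def update(self, arm, reward):
            # Incremental mean of the engagement scores observed for this activity.
            self.counts[arm] += 1
            self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

    # Usage sketch: one bandit per cat, updated with an engagement score (0 to 1)
    # after each play session.
    bandit = EpsilonGreedyBandit(ACTIVITIES)
    for _ in range(100):
        activity = bandit.choose()
        engagement = random.random()  # stand-in for a real engagement measurement
        bandit.update(activity, engagement)
    print("preferred activity so far:", max(bandit.values, key=bandit.values.get))
    ```

    In practice the reward signal would come from the installation’s own measures of cat engagement rather than the random stand-in used here.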
    “At first glance, the project is about designing a robot to enrich the lives of a family of cats by playing with them,” commented Professor Steve Benford from the University of Nottingham, who led the research. “Under the surface, however, it explores the question of what it takes to trust a robot to look after our loved ones and potentially ourselves.”
    Working with Blast Theory to develop and then study Cat Royale, the research team gained important insights into the design of robots and their interactions with cats. They had to design the robot to pick up toys and deploy them in ways that excited the cats, while it learned which games each cat liked. They also designed the entire world in which the cats and the robot lived, providing safe spaces from which the cats could observe the robot and sneak up on it, and decorating it so that the robot had the best chance of spotting the approaching cats.
    The implication is that designing robots involves interior design as well as engineering and AI. If you want to introduce robots into your home to look after your loved ones, then you will likely need to redesign your home.
    Research workshops for Cat Royale were held at the University of Nottingham’s unique Cobot Maker Space, where stakeholders were brought together to think about the design of the robot and the welfare of the cats. Eike Schneiders, Transitional Assistant Professor in the Mixed Reality Lab at the University of Nottingham, worked on the design. He said: “As we learned through Cat Royale, creating a multispecies system — where cats, robots, and humans are all accounted for — takes more than just designing the robot. We had to ensure animal wellbeing at all times, while simultaneously ensuring that the interactive installation engaged the (human) audiences around the world. This involved consideration of many elements, including the design of the enclosure, the robot and its underlying systems, the various roles of the humans-in-the-loop, and, of course, the selection of the cats.”

  • New work extends the thermodynamic theory of computation

    Every computing system, biological or synthetic, from cells to brains to laptops, has a cost. This isn’t the price, which is easy to discern, but an energy cost connected to the work required to run a program and the heat dissipated in the process.
    Researchers at SFI and elsewhere have spent decades developing a thermodynamic theory of computation, but previous work on the energy cost has focused on basic symbolic computations — like the erasure of a single bit — that aren’t readily transferable to less predictable, real-world computing scenarios.
    In a paper published in Physical Review X on May 13, a quartet of physicists and computer scientists expand the modern theory of the thermodynamics of computation. By combining approaches from statistical physics and computer science, the researchers introduce mathematical equations that reveal the minimum and maximum predicted energy cost of computational processes that depend on randomness, which is a powerful tool in modern computers.
    In particular, the framework offers insights into how to compute energy-cost bounds on computational processes with an unpredictable finish. For example: A coin-flipping simulator may be instructed to stop flipping once it achieves 10 heads. In biology, a cell may stop producing a protein once it elicits a certain reaction from another cell. The “stopping times” of these processes, or the time required to achieve the goal for the first time, can vary from trial to trial. The new framework offers a straightforward way to calculate the lower bounds on the energy cost of those situations.
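    To make the idea of a trial-to-trial stopping time concrete, here is a tiny simulation (illustrative only, not taken from the paper) of the coin-flipping example, estimating how many flips it takes to reach 10 heads:

    ```python
    import random
    import statistics

    def flips_until_n_heads(n_heads, p=0.5):
        """Flip a coin with heads probability p until n_heads heads appear; return the stopping time."""
        heads = flips = 0
        while heads < n_heads:
            flips += 1
            if random.random() < p:
                heads += 1
        return flips

    # Estimate the distribution of stopping times over many trials.
    trials = [flips_until_n_heads(10) for _ in range(10_000)]
    print("mean stopping time:", statistics.mean(trials))   # about 20 flips for a fair coin
    print("std dev:", statistics.pstdev(trials))            # trial-to-trial variability
    ```

    The spread of stopping times across trials is exactly the kind of variability the new bounds are designed to handle.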
    The research was conducted by SFI Professor David Wolpert, Gonzalo Manzano (Institute for Cross-Disciplinary Physics and Complex Systems, Spain), Édgar Roldán (International Centre for Theoretical Physics, Italy), and SFI graduate fellow Gülce Kardes (CU Boulder). The study uncovers a way to lower-bound the energetic costs of arbitrary computational processes. For example, an algorithm that searches for a person’s first or last name in a database might stop running if it finds either, but we don’t know which one it found. “Many computational machines, when viewed as dynamical systems, have this property where if you jump from one state to another you really can’t go back to the original state in just one step,” says Kardes.
    Wolpert began investigating ways to apply ideas from nonequilibrium statistical physics to the theory of computation about a decade ago. Computers, he says, are systems out of equilibrium, and stochastic thermodynamics gives physicists a way to study nonequilibrium systems. “If you put those two together, it seemed like all kinds of fireworks would come out, in an SFI kind of spirit,” he says.
    In recent studies that laid the groundwork for this new paper, Wolpert and colleagues introduced the idea of a “mismatch cost,” or a measure of how much the cost of a computation exceeds Landauer’s bound. Proposed in 1961 by physicist Rolf Landauer, this limit defines the minimum amount of heat required to change information in a computer. Knowing the mismatch cost, Wolpert says, could inform strategies for reducing the overall energy cost of a system.
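    For context, Landauer’s limit itself has a simple closed form: erasing one bit at temperature T dissipates at least k_B T ln 2 of heat, which at room temperature works out to roughly three zeptojoules (standard textbook values, not results from the new paper):

    ```latex
    % Landauer's bound: minimum heat dissipated when erasing one bit at temperature T
    Q_{\min} = k_B T \ln 2
             \approx (1.38 \times 10^{-23}\,\mathrm{J/K}) \times (300\,\mathrm{K}) \times 0.693
             \approx 2.9 \times 10^{-21}\,\mathrm{J} \quad \text{at room temperature.}
    ```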

    Across the Atlantic, co-authors Manzano and Roldán have been developing a tool from the mathematics of finance — martingale theory — to address the thermodynamic behavior of small fluctuating systems at stopping times. Roldán et al.’s “Martingales for Physicists” helped pave the way to successful applications of such a martingale approach in thermodynamics.
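    In rough outline (this is the standard martingale machinery from that earlier line of work, not the specific bounds derived in the new PRX paper), the usefulness of martingales here is that Doob’s optional stopping theorem fixes their average value even at a random stopping time, which yields a second law that holds at stopping times:

    ```latex
    % Doob's optional stopping theorem: for a martingale M_t and a (bounded) stopping time tau,
    \langle M_\tau \rangle = \langle M_0 \rangle .
    % In a nonequilibrium steady state, e^{-S_{\mathrm{tot}}(t)} is a martingale, so
    \langle e^{-S_{\mathrm{tot}}(\tau)} \rangle = 1
    \qquad\Longrightarrow\qquad
    \langle S_{\mathrm{tot}}(\tau) \rangle \ge 0 \quad \text{(by Jensen's inequality)},
    % i.e., the second law continues to hold on average even when the process is stopped
    % at a trial-dependent time.
    ```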
    In their PRX paper, Wolpert, Kardes, Roldán, and Manzano extend these tools from stochastic thermodynamics to calculate the mismatch cost of common computational problems.
    Taken together, their research points to a new avenue for finding the lowest energy needed for computation in any system, no matter how it’s implemented. “It’s exposing a vast new set of issues,” Wolpert says.
    It may also have a very practical application, in pointing to new ways to make computing more energy efficient. The National Science Foundation estimates that computers use between 5% and 9% of globally generated power, and at current growth rates that could reach 20% by 2030. Previous work by SFI researchers suggests modern computers are also grossly inefficient: biological systems, by contrast, are about 100,000 times more energy-efficient than human-built computers. Wolpert says that one of the primary motivations for a general thermodynamic theory of computation is to find new ways to reduce the energy consumption of real-world machines.
    For instance, a better understanding of how algorithms and devices use energy to do certain tasks could point to more efficient computer chip architectures. Right now, says Wolpert, there’s no clear way to make physical chips that can carry out computational tasks using less energy.
    “These kinds of techniques might provide a flashlight through the darkness,” he says.

  • Potential power and pitfalls of harnessing artificial intelligence for sleep medicine

    In a new research commentary, the Artificial Intelligence in Sleep Medicine Committee of the American Academy of Sleep Medicine highlights how artificial intelligence stands on the threshold of making monumental contributions to the field of sleep medicine. Through a strategic analysis, the committee examined advancements in AI within sleep medicine and spotlighted its potential in revolutionizing care in three critical areas: clinical applications, lifestyle management, and population health. The committee also reviewed barriers and challenges associated with using AI-enabled technologies.
    “AI is disrupting all areas of medicine, and the future of sleep medicine is poised at a transformational crossroad,” said lead author Dr. Anuja Bandyopadhyay, chair of the Artificial Intelligence in Sleep Medicine Committee. “This commentary outlines the powerful potential and challenges for sleep medicine physicians to be aware of as they begin leveraging AI to deliver precise, personalized patient care and enhance preventive health strategies on a larger scale while ensuring its ethical deployment.”
    According to the authors, AI has potential uses in the sleep field in three key areas:
      • Clinical Applications: In the clinical realm, AI-driven technologies offer comprehensive data analysis, nuanced pattern recognition and automation in diagnosis, all while addressing chronic problems like sleep-related breathing disorders. Despite understated beginnings, the utilization of AI can offer improvements in efficiency and patient access, which can contribute to a reduction in burnout among health care professionals.
      • Lifestyle Management: Incorporating AI also offers clear benefits for lifestyle management through the use of consumer sleep technology. These devices come in various forms like fitness wristbands, smartphone apps, and smart rings, and they contribute to better sleep health through tracking, assessment and enhancement. Wearable sleep technology and data-driven lifestyle recommendations can empower patients to take an active role in managing their health, as shown in a recent AASM survey, which reported that 68% of adults who have used a sleep tracker said they have changed their behavior based on what they have learned. But, as these AI-driven applications grow ever more intuitive, the importance of ongoing dialogue between patients and clinicians about the potential and limitations of these innovations remains vital.
      • Population Health: Beyond individual care, AI technology reveals a new approach to public health regarding sleep. “AI has the exciting potential to synthesize environmental, behavioral and physiological data, contributing to informed population-level interventions and bridging existing health care gaps,” noted Bandyopadhyay.
    The paper also offers warnings about the integration of AI into sleep medicine. Issues of data privacy, security, accuracy, and the potential for reinforcing existing biases present new challenges for health care professionals. Additionally, reliance on AI without sufficient clinical judgment could lead to complexities in patient treatment.
    “While AI can significantly strengthen the evaluation and management of sleep disorders, it is intended to complement, not replace, the expertise of a sleep medicine professional,” Bandyopadhyay stated.
    Navigating this emerging landscape requires comprehensive validation and standardization protocols to responsibly and ethically implement AI technologies in health care. It’s critical that AI tools are validated against varied datasets to ensure their reliability and accuracy in all patient populations.
    “Our commentary provides not just a vision, but a roadmap for leveraging the technology to promote better sleep health outcomes,” Bandyopadhyay said. “It lays the foundation for future discussions on the ethical deployment of AI, the importance of clinician education, and the harmonization of this new technology with existing practices to optimize patient care.”

  • When consumers would prefer a chatbot over a person

    Actually, sometimes consumers don’t want to talk to a real person when they’re shopping online, a new study suggests.
    In fact, what they really want is a chatbot that makes it clear that it is not human at all.
    In a new study, researchers at The Ohio State University found that people preferred interacting with chatbots when they felt embarrassed about what they were buying online — items like antidiarrheal medicine or, for some people, skin care products.
    “In general, research shows people would rather interact with a human customer service agent than a chatbot,” said Jianna Jin, who led the study as a doctoral student at Ohio State’s Fisher College of Business.
    “But we found that when people are worried about others judging them, that tendency reverses and they would rather interact with a chatbot because they feel less embarrassed dealing with a chatbot than a human.”
    The study was published recently in the Journal of Consumer Psychology with study co-authors Jesse Walker, assistant professor, and Rebecca Walker Reczek, professor, both in marketing at Ohio State’s Fisher College.
    “Chatbots are becoming more and more common as customer service agents, and companies are not required in most states to disclose if they use them,” Reczek said. “But it may be important for companies to let consumers know if they’re dealing with a chatbot.”
    The new research explored what happened when consumers had what psychologists call self-presentation concerns — this is when people worry about how their behavior and actions may affect how others perceive them. Buying some products may trigger these concerns.

    In one of the five studies that was part of the Journal of Consumer Psychology paper, the researchers asked 386 undergraduate students to imagine buying either antidiarrheal or hay fever medication. They were given the choice between two online drug stores, one that used chatbots and one that used human customer service agents.
    When participants were told they were buying hay fever medication, which doesn’t cause most people to feel embarrassed, 91% said they would use the store that had human service agents. But when they were buying antidiarrheal medicine, 81% chose the store with the chatbots.
    But that’s just the beginning of the story. The researchers found in other studies that it was important how human the chatbots appeared and acted onscreen.
    In another study, participants were asked to imagine buying an antidiarrheal medicine from an online drugstore. They were then shown one of three live chat icons: one was a chatbot with an icon that was just a speech bubble, with no human characteristics; a second was a chatbot with a cartoon of a human; and the third featured a profile picture of a real, clearly human woman.
    Both chatbots clearly identified themselves to participants as chatbots — but the one with the human cartoon avatar used more emotional language during the exchange, such as “I am so excited to see you!”
    Results showed that participants were more willing to receive information about the embarrassing product from the two chatbots than from the human. But the effect was weaker for the chatbot with the human cartoon avatar and the more emotional language.

    The fact that this chatbot had a cartoon human avatar and used emotional language may have left those in the study feeling uneasy and less willing to interact — even though they were told it was a chatbot, Walker said.
    “It was as if the participants were proactively protecting themselves against embarrassment by assuming the chatbot could be human,” Walker said.
    In another study, Jin actually designed a chatbot and had participants engage in a real back-and-forth interaction. Participants in this study were chosen because they all strongly agreed that they wanted to make a good impression on others with their skin.
    In other words, they had self-presentation concerns related to their skin and may have been interested in buying skincare products because they were embarrassed about their skin. Because of this, the researchers believed that they would respond more positively to clearly identified chatbots.
    Participants in the study were told they were interacting with an agent for a skincare brand and whether they were talking to a chatbot or a customer service representative. Participants answered a series of questions, including one in which they were asked if they would like to provide their email address to get a free sample of the brand.
    As the researchers hypothesized, participants were more likely to provide their email address if they thought they were interacting with a chatbot (62%) than a human (38%).
    In this study, as well as others, the researchers asked questions designed to get at why participants prefer chatbots when they had self-presentation concerns.
    Walker said the results of the study suggest chatbots decrease embarrassment because consumers perceive chatbots as less able to feel emotions and make appraisals about people.
    “Consumers feel less embarrassed because chatbots don’t have the level of consciousness and ability to judge them,” he said.
    Jin, who is now an assistant professor at the University of Notre Dame, said the results suggest companies need to pay attention to the role of chatbots in their business.
    “Managers may not realize the importance of using chatbots when consumers have self-presentation concerns,” she said.
    And as conversational AI continues to get better, it may become more difficult for consumers to tell the difference between chatbots and human service agents, Reczek said. That could be a problem for companies whose customers may prefer to interact with chatbots because of their self-presentation concerns and fears of embarrassment.
    “It is going to be even more important for firms to clearly disclose that they use chatbots if they want consumers to realize they are interacting with a bot,” Reczek said.

  • The universe may have a complex geometry — like a doughnut

    The cosmos may have something in common with a doughnut.

    In addition to their fried, sugary goodness, doughnuts are known for their shape, or in mathematical terms, their topology. In a universe with an analogous, complex topology, you could travel across the cosmos and end up back where you started. Such a cosmos hasn’t yet been ruled out, physicists report in the April 26 Physical Review Letters. 

    On a shape with boring, or trivial, topology, any closed path you draw can be shrunk down to a point. For example, consider traveling around Earth. If you were to go all the way around the equator, that’s a closed loop, but you could squish that loop down to a point by shifting your trip up to the North Pole. But the surface of a doughnut has complex, or nontrivial, topology (SN: 10/4/16). A loop that encircles the doughnut’s hole, for example, can’t be shrunk down, because the hole limits how far you can squish it.
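    As a concrete illustration of a nontrivial topology (a standard construction, stated here for context rather than drawn from the new analysis), the simplest such universe is a flat 3-torus: ordinary flat space in which points separated by fixed lengths along each axis are identified, so a straight path eventually returns to its starting point:

    ```latex
    % Flat 3-torus: Euclidean space with periodic identifications of size L_x, L_y, L_z
    (x, y, z) \;\sim\; (x + L_x,\; y,\; z), \qquad
    (x, y, z) \;\sim\; (x,\; y + L_y,\; z), \qquad
    (x, y, z) \;\sim\; (x,\; y,\; z + L_z).
    % A traveller moving a distance L_x along the x-direction returns to the starting point,
    % and a loop that wraps around one of these directions cannot be shrunk to a point.
    ```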

  • AI systems are already skilled at deceiving and manipulating humans

    Many artificial intelligence (AI) systems have already learned how to deceive humans, even systems that have been trained to be helpful and honest. In a review article publishing in the journal Patterns on May 10, researchers describe the risks of deception by AI systems and call for governments to develop strong regulations to address this issue as soon as possible.
    “AI developers do not have a confident understanding of what causes undesirable AI behaviors like deception,” says first author Peter S. Park, an AI existential safety postdoctoral fellow at MIT. “But generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI’s training task. Deception helps them achieve their goals.”
    Park and colleagues analyzed literature focusing on ways in which AI systems spread false information — through learned deception, in which they systematically learn to manipulate others.
    The most striking example of AI deception the researchers uncovered in their analysis was Meta’s CICERO, an AI system designed to play the game Diplomacy, which is a world-conquest game that involves building alliances. Even though Meta claims it trained CICERO to be “largely honest and helpful” and to “never intentionally backstab” its human allies while playing the game, the data the company published along with its Science paper revealed that CICERO didn’t play fair.
    “We found that Meta’s AI had learned to be a master of deception,” says Park. “While Meta succeeded in training its AI to win in the game of Diplomacy — CICERO placed in the top 10% of human players who had played more than one game — Meta failed to train its AI to win honestly.”
    Other AI systems demonstrated the ability to bluff in a game of Texas hold ’em poker against professional human players, to fake attacks during the strategy game Starcraft II in order to defeat opponents, and to misrepresent their preferences in order to gain the upper hand in economic negotiations.
    While it may seem harmless if AI systems cheat at games, it can lead to “breakthroughs in deceptive AI capabilities” that can spiral into more advanced forms of AI deception in the future, Park added.

    Some AI systems have even learned to cheat tests designed to evaluate their safety, the researchers found. In one study, AI organisms in a digital simulator “played dead” in order to trick a test built to eliminate AI systems that rapidly replicate.
    “By systematically cheating the safety tests imposed on it by human developers and regulators, a deceptive AI can lead us humans into a false sense of security,” says Park.
    The major near-term risks of deceptive AI include making it easier for hostile actors to commit fraud and tamper with elections, warns Park. Eventually, if these systems can refine this unsettling skill set, humans could lose control of them, he says.
    “We as a society need as much time as we can get to prepare for the more advanced deception of future AI products and open-source models,” says Park. “As the deceptive capabilities of AI systems become more advanced, the dangers they pose to society will become increasingly serious.”
    While Park and his colleagues do not think society has the right measures in place yet to address AI deception, they are encouraged that policymakers have begun taking the issue seriously through measures such as the EU AI Act and President Biden’s AI Executive Order. But it remains to be seen, Park says, whether policies designed to mitigate AI deception can be strictly enforced given that AI developers do not yet have the techniques to keep these systems in check.
    “If banning AI deception is politically infeasible at the current moment, we recommend that deceptive AI systems be classified as high risk,” says Park.
    This work was supported by the MIT Department of Physics and the Beneficial AI Foundation.

  • AI intervention mitigates tension among conflicting ethnic groups

    Prejudice and fear have always been at the core of intergroup hostilities.
    While intergroup interaction is a prerequisite for initiating peace and stability at the junction of clashing interests, values, and cultures, the risk that direct interaction itself further escalates tensions cannot be ruled out. In particular, a shortage of impartial, nonpartisan personnel to properly manage an electronic contact — or E-contact — session may cause the process to backfire and become destabilized.
    Now, a research team including Kyoto University has shown that interactive AI programs may help reduce prejudice and anxiety among historically divided ethnic groups in Afghanistan during online interactions.
    “Compared to the control group, participants in the AI intervention group showed more engagement in our study and significantly less prejudice and anxiety toward other ethnic groups,” says Sofia Sahab of KyotoU’s Graduate School of Informatics.
    In collaboration with Nagoya University, Nagoya Institute of Technology, and Hokkaido University, Sahab’s team has tested the effectiveness of using a CAI — or conversational AI — on the discussion platform D-Agree to facilitate unbiased and constructive conversations. The program offers participants a safe, private space to talk freely, a setting that is commonly taken for granted in war-free countries.
    “Our over-decade-long work on AI agent-based consensus-building support has empirically demonstrated AI agents’ applicability in de-escalating confrontational situations,” remarks co-author Takayuki Ito, also of the informatics school.
    Sahab’s team ran a randomized controlled experiment to determine the causal effect of conversational AI facilitation on reducing prejudice and anxiety in online discussions.

    Participants from three ethnic backgrounds were divided into two groups — an AI group and a non-AI control group — to gauge the effects. As expected, the former expressed more empathy toward outside groups than participants in the control group.
    “The neutral AI agents aim to reduce risks by coordinating guided conversations as naturally as possible. By providing fair and cost-effective strategies to encourage positive interactions, we can promote lasting harmony among diverse ethnic groups,” adds Sahab.
    In the long term, the researchers are considering the potential for AI intervention beyond border conflicts to promote positive social change.
    “AI may have come at a pivotal time to aid humanity in enhancing social sustainability with CAI-mediated human interactions,” reflects Sahab.

  • Blockchain could offer a solution to the UK’s transport ticketing systems

    A new approach to transport ticketing offers a step towards an integrated, transparent system that works efficiently for both ticket providers and passengers across all modes of transport.
    Traditional ticketing systems are built on solutions that suffer from issues including a lack of transferability across multi-modal transport networks and an inability to adapt to policy changes and new technologies.
    Experts at the University of Birmingham have outlined a system that offers a new foundation for all ticketing providers. In a new paper, published in IET Blockchain, STUB (System for Ticketing Ubiquity within Blockchains) brings together the capabilities of two versatile technologies — blockchain and ontology.
    A blockchain is a distributed ledger that records transactions in a way that ensures security, transparency, and immutability. An ontology is a formal representation of the concepts within a domain and the relationships between them, used to model and manage complex information systems.
    The researchers showed how both technologies could be combined to create a robust, transparent, and interconnected data framework that ensures consistent and reliable shared knowledge.
    Utilising these data structures, ticket providers can sell and validate tokenised tickets on the blockchain, ensuring universal accessibility across all providers. The integration of ontology allows providers to capture and share contextual information about the transport network, enabling providers to offer comprehensive data about routes, schedules, and availability, thereby streamlining the ticketing process.
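    The paper’s own data model is not reproduced here; purely as an illustration of the tokenised-ticket idea, a ticket could be recorded as a hashed entry on a shared ledger that any participating provider can validate (all class names, fields, and values below are hypothetical):

    ```python
    from dataclasses import dataclass, asdict
    import hashlib
    import json

    @dataclass(frozen=True)
    class TicketToken:
        """Hypothetical tokenised ticket shared across providers via a ledger."""
        ticket_id: str
        passenger_id: str
        origin: str
        destination: str
        modes: tuple          # e.g. ("rail", "bus") for a multi-modal journey
        valid_from: str       # ISO 8601 timestamps
        valid_until: str
        issuer: str

        def digest(self) -> str:
            # Content hash recorded on the ledger; any provider can recompute it
            # to check that a presented ticket matches the ledger entry.
            return hashlib.sha256(
                json.dumps(asdict(self), sort_keys=True).encode()
            ).hexdigest()

    # Usage sketch: the issuing provider writes the digest to the ledger...
    ledger = set()
    ticket = TicketToken("T-0001", "P-42", "Birmingham", "Nottingham",
                         ("rail", "bus"), "2024-06-01T08:00", "2024-06-01T23:59", "ProviderA")
    ledger.add(ticket.digest())

    # ...and any other provider validates the ticket at the gate by recomputing it.
    def validate(presented: TicketToken) -> bool:
        return presented.digest() in ledger

    print(validate(ticket))  # True
    ```

    A real deployment would of course use the blockchain’s own transaction and identity mechanisms rather than an in-memory set, and the ontology would supply the shared vocabulary for fields such as modes and routes.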
    Lead author, Dr Joe Preece, said: “Transport systems around the world are becoming increasingly interconnected. Ticketing systems are key to this and there is a growing interest in the use of smarter transport ticketing that harnesses emerging technologies to overcome the limitations of traditional systems.

    “The system we have devised enables ticket providers to operate in a more transparent, flexible environment, that will ultimately offer passengers a more user-friendly experience.
    “STUB’s approach is not to be a single central data platform with transport policy baked in, but instead to be a policy-agnostic approach that empowers existing ticket providers and technologies to share core ticketing data and to build new solutions on top of it.
    “In essence, this may provide a modernised approach to the Rail Settlement Plan, one that enables multi-modal ticketing, automated revenue and refund allocation, and dynamic fare pricing, whilst retaining the technologies in the sector that already work well.”
    The next step for the team will be to set up a pilot scheme for the technology in a regional transport network, to demonstrate its efficacy, and to get feedback from ticket operators and passengers.
    “A big challenge to implementation will be the integration with existing ticketing infrastructure to work alongside the current standardised approaches whilst we scale up the technology. Setting up a successful pilot will be key to breaking down these barriers.”