More stories

  • AI helps show how the brain’s fluids flow

    A new artificial intelligence-based technique for measuring fluid flow around the brain’s blood vessels could have big implications for developing treatments for diseases such as Alzheimer’s.
    The perivascular spaces that surround cerebral blood vessels transport water-like fluids around the brain and help sweep away waste. Alterations in the fluid flow are linked to neurological conditions, including Alzheimer’s, small vessel disease, strokes, and traumatic brain injuries, but are difficult to measure in vivo.
    A multidisciplinary team of mechanical engineers, neuroscientists, and computer scientists led by University of Rochester Associate Professor Douglas Kelley developed novel AI velocimetry measurements to accurately calculate brain fluid flow. The results are outlined in a study published in Proceedings of the National Academy of Sciences.
    “In this study, we combined some measurements from inside the animal models with a novel AI technique that allowed us to effectively measure things that nobody’s ever been able to measure before,” says Kelley, a faculty member in Rochester’s Department of Mechanical Engineering.
    The work builds upon years of experiments led by study coauthor Maiken Nedergaard, the codirector of Rochester’s Center for Translational Neuromedicine. The group has previously been able to conduct two-dimensional studies on the fluid flow in perivascular spaces by injecting tiny particles into the fluid and measuring their position and velocity over time. But scientists needed more complex measurements to understand the full intricacy of the system — and exploring such a vital, fluid system is a challenge.
    To address that challenge, the team collaborated with George Karniadakis from Brown University to leverage artificial intelligence. They integrated the existing 2D data with physics-informed neural networks to produce unprecedented, high-resolution views of the system.
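    The study’s code is not reproduced here, but the general idea of a physics-informed neural network can be sketched as follows. In this minimal, hypothetical example, a small network is fit to sparse 2D velocity measurements while being penalized for violating a simple fluid constraint (incompressibility); the architecture, the constraint, and the synthetic data are illustrative assumptions, not the team’s actual setup.

    ```python
    # Minimal sketch of a physics-informed neural network (PINN) in PyTorch.
    # Illustrative only: the network size, the incompressibility penalty, and the
    # synthetic "particle" data are assumptions, not the study's actual setup.
    import torch
    import torch.nn as nn

    class FlowNet(nn.Module):
        """Maps a space-time point (x, y, t) to a 2D velocity (u, v) and pressure p."""
        def __init__(self, width=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3, width), nn.Tanh(),
                nn.Linear(width, width), nn.Tanh(),
                nn.Linear(width, 3),  # outputs (u, v, p)
            )

        def forward(self, xyt):
            return self.net(xyt)

    def physics_residual(model, xyt):
        """Penalize violation of incompressibility: du/dx + dv/dy = 0."""
        xyt = xyt.clone().requires_grad_(True)
        out = model(xyt)
        u, v = out[:, 0], out[:, 1]
        grads_u = torch.autograd.grad(u.sum(), xyt, create_graph=True)[0]
        grads_v = torch.autograd.grad(v.sum(), xyt, create_graph=True)[0]
        div = grads_u[:, 0] + grads_v[:, 1]
        return (div ** 2).mean()

    # Synthetic stand-in for sparse 2D particle-tracking measurements.
    xyt_data = torch.rand(256, 3)
    uv_data = torch.zeros(256, 2)  # placeholder velocities

    model = FlowNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(200):
        opt.zero_grad()
        pred = model(xyt_data)
        data_loss = ((pred[:, :2] - uv_data) ** 2).mean()
        loss = data_loss + 0.1 * physics_residual(model, xyt_data)
        loss.backward()
        opt.step()
    ```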
    “This is a way to reveal pressures, forces, and the three-dimensional flow rate with much more accuracy than we can otherwise do,” says Kelley. “The pressure is important because nobody knows for sure quite what pumping mechanism drives all these flows around the brain yet. This is a new field.”
    The scientists conducted the research with support from the Collaborative Research in Computational Neuroscience program, the National Institutes of Health Brain Initiative, and the Army Research Office’s Multidisciplinary University Research Initiatives program.

  • Metamaterials with built-in frustration have mechanical memory

    Researchers from the UvA Institute of Physics and ENS de Lyon have discovered how to design materials that necessarily contain a point or line that does not deform under stress, and that even remember how they have been poked or squeezed in the past. These results could be applied in robotics and mechanical computers, while similar design principles could prove useful in quantum computers.
    The outcome is a breakthrough in the field of metamaterials: designer materials whose responses are determined by their structure rather than their chemical composition. To construct a metamaterial with mechanical memory, physicists Xiaofei Guo, Marcelo Guzmán, David Carpentier, Denis Bartolo and Corentin Coulais realised that its design needs to be ‘frustrated’, and that this frustration corresponds to a new type of order, which they call non-orientable order.
    Physics with a twist
    A simple example of a non-orientable object is a Möbius strip, made by taking a strip of material, adding half a twist to it and then gluing its ends together. You can try this at home with a strip of paper. Following the surface of a Möbius strip with your finger, you’ll find that when you get back to your starting point, your finger will be on the other side of the paper.
    A Möbius strip is non-orientable because there is no way to label the two sides of the strip in a consistent manner; the twist makes the entire surface one and the same. This is in contrast to a simple cylinder (a strip without any twists whose ends are glued together), which has a distinct inner and outer surface.
    Guo and her colleagues realised that this non-orientability strongly affects how an object or metamaterial responds to being pushed or squeezed. If you place a simple cylinder and a Möbius strip on a flat surface and press down on them from above, you’ll find that the sides of the cylinder will all bulge out (or in), while the sides of the Möbius strip cannot do the same. Instead, the non-orientability of the latter ensures that there is always a point along the strip where it does not deform under pressure.

    Frustration is not always a bad thing
    Excitingly, this behaviour extends far beyond Möbius strips. ‘We discovered that the behaviour of non-orientable objects such as Möbius strips allows us to describe any material that is globally frustrated. These materials naturally want to be ordered, but something in their structure forbids the order to span the whole system and forces the ordered pattern to vanish at one point or line in space. There is no way to get rid of that vanishing point without cutting the structure, so it has to be there no matter what,’ explains Coulais, who leads the Machine Materials Laboratory at the University of Amsterdam.
    The research team designed and 3D-printed their own mechanical metamaterial structures which exhibit the same frustrated and non-orientable behaviour as Möbius strips. Their designs are based on rings of squares connected by hinges at their corners. When these rings are squeezed, neighbouring squares will rotate in opposite directions so that their edges move closer together. The opposite rotation of neighbours makes the system’s response analogous to the anti-ferromagnetic ordering that occurs in certain magnetic materials.
    Rings composed of an odd number of squares are frustrated, because there is no way for all neighbouring squares to rotate in opposite directions. Squeezed odd-numbered rings therefore exhibit non-orientable order, in which the rotation angle at one point along the ring must go to zero.
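    A toy calculation makes the frustration concrete. The sketch below is an illustration, not the researchers’ model: it simply propagates the “rotate opposite to your neighbour” rule around a ring of N squares. For even N the pattern closes on itself, while for odd N it only closes if the rotation angle vanishes somewhere.

    ```python
    # Illustrative sketch (not the study's model): propagate the "rotate opposite
    # to your neighbour" rule around a ring of N hinged squares. For even N the
    # pattern closes consistently; for odd N it forces the angle to vanish.
    def propagate(n_squares, theta0=1.0):
        angles = [theta0]
        for _ in range(n_squares - 1):
            angles.append(-angles[-1])        # neighbours counter-rotate
        closure_mismatch = angles[0] + angles[-1]  # rule applied across the seam
        return angles, closure_mismatch

    for n in (6, 7):
        _, mismatch = propagate(n)
        status = "consistent" if abs(mismatch) < 1e-12 else "frustrated"
        print(f"N={n}: seam mismatch = {mismatch:+.1f} -> {status}")
    # For N=7 the only way to remove the mismatch is theta0 = 0: somewhere on the
    # ring the rotation must vanish, which is the non-orientable order in the text.
    ```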
    Being a feature of the overall shape of the material makes this a robust topological property. By connecting multiple metarings together, it is even possible to emulate the mechanics of higher-dimensional topological structures such as the Klein bottle.

    Mechanical memory
    Having an enforced point or line of zero deformation is key to endowing materials with mechanical memory. Instead of squeezing a metamaterial ring from all sides, you can press it at distinct points; the order in which you press those points then determines where the point or line of zero deformation ends up.
    This is a form of storing information. It can even be used to execute certain types of logic gates, the basis of any computer algorithm. A simple metamaterial ring can thus function as a mechanical computer.
    Beyond mechanics, the results of the study suggest that non-orientability could be a robust design principle for metamaterials that can effectively store information across scales, in fields as diverse as colloidal science, photonics, magnetism, and atomic physics. It could even be useful for new types of quantum computers.
    Coulais concludes: ‘Next, we want to exploit the robustness of the vanishing deformations for robotics. We believe the vanishing deformations could be used to create robotic arms and wheels with predictable bending and locomotion mechanisms.’

  • New technique in error-prone quantum computing makes classical computers sweat

    Despite steady improvements in quantum computers, they’re still noisy and error prone, which leads to questionable or wrong answers. Scientists predict that they won’t truly outcompete today’s “classical” supercomputers for at least five or 10 years, until researchers can adequately correct the errors that bedevil entangled quantum bits, or qubits.
    But a new study shows that, even lacking good error correction, there are ways to mitigate errors that could make quantum computers useful today.
    Researchers at IBM Quantum in New York and their collaborators at the University of California, Berkeley, and Lawrence Berkeley National Laboratory report today (June 14) in the journal Nature that they pitted a 127-qubit quantum computer against a state-of-the-art supercomputer and, for at least one type of calculation, bested the supercomputer.
    The calculation wasn’t chosen because it was difficult for classical computers, the researchers say, but because it’s similar to ones that physicists make all the time. Crucially, the calculation could be made increasingly complex in order to test whether today’s noisy, error-prone quantum computers can produce accurate results for certain types of common calculations.
    The fact that the quantum computer produced the verifiably correct solution as the calculation became more complex, while the supercomputer algorithm produced an incorrect answer, provides hope that quantum computing algorithms with error mitigation, instead of the more difficult error correction, could tackle cutting-edge physics problems, such as understanding the quantum properties of superconductors and novel electronic materials.
    “We’re entering the regime where the quantum computer might be able to do things that current algorithms on classical computers cannot do,” said UC Berkeley graduate student and study co-author Sajant Anand.

    “We can start to think of quantum computers as a tool for studying problems that we wouldn’t be able to study otherwise,” added Sarah Sheldon, senior manager for Quantum Theory and Capabilities at IBM Quantum.
    Conversely, the quantum computer’s trouncing of the classical computer could also spark new ideas for improving the quantum-simulation algorithms now run on classical computers, according to co-author Michael Zaletel, UC Berkeley associate professor of physics and holder of the Thomas and Alison Schneider Chair in Physics.
    “Going into it, I was pretty sure that the classical method would do better than the quantum one,” he said. “So, I had mixed emotions when IBM’s zero-noise extrapolated version did better than the classical method. But thinking about how the quantum system is working might actually help us figure out the right classical way to approach the problem. While the quantum computer did something that the standard classical algorithm couldn’t, we think it’s an inspiration for making the classical algorithm better so that the classical computer performs just as well as the quantum computer in the future.”
    Boost the noise to suppress the noise
    One key to the seeming advantage of IBM’s quantum computer is quantum error mitigation, a novel technique for dealing with the noise that accompanies a quantum computation. Paradoxically, IBM researchers controllably increased the noise in their quantum circuit to get even noisier, less accurate answers and then extrapolated backward to estimate the answer the computer would have gotten if there were no noise. This relies on having a good understanding of the noise that affects quantum circuits and predicting how it affects the output.
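    The extrapolation step can be pictured with a toy example. The sketch below is not IBM’s implementation; the exponential noise model, the amplification factors and the fit are assumptions chosen only to illustrate how deliberately noisier readings can be extrapolated back to an estimated zero-noise value.

    ```python
    # Minimal sketch of zero-noise extrapolation (ZNE). Hypothetical numbers and a
    # toy exponential-decay noise model are assumed; this is not IBM's procedure.
    import numpy as np

    # Suppose the ideal (noise-free) expectation value is 0.8, and noise at
    # amplification factor g damps it as 0.8 * exp(-0.3 * g).
    gains = np.array([1.0, 1.5, 2.0, 3.0])          # deliberately amplified noise levels
    measured = 0.8 * np.exp(-0.3 * gains)           # what the noisy device would report
    measured += np.random.default_rng(0).normal(0, 0.005, gains.size)  # shot noise

    # Fit log(measurement) linearly in the gain and extrapolate back to zero noise.
    slope, intercept = np.polyfit(gains, np.log(measured), 1)
    zero_noise_estimate = np.exp(intercept)

    print(f"noisiest reading  : {measured[-1]:.3f}")
    print(f"ZNE estimate (g=0): {zero_noise_estimate:.3f}   (ideal value: 0.800)")
    ```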

    The problem of noise comes about because IBM’s qubits are sensitive superconducting circuits that represent the zeros and ones of a binary computation. When the qubits are entangled for a calculation, unavoidable annoyances, such as heat and vibration, can alter the entanglement, introducing errors. The greater the entanglement, the worse the effects of noise.
    In addition, computations that act on one set of qubits can introduce random errors in other, uninvolved qubits. Additional computations then compound these errors. Scientists hope to use extra qubits to monitor such errors so they can be corrected — so-called fault-tolerant error correction. But achieving scalable fault-tolerance is a huge engineering challenge, and whether it will work in practice for ever greater numbers of qubits remains to be shown, Zaletel said.
    Instead, IBM engineers came up with a strategy of error mitigation they called zero noise extrapolation (ZNE), which uses probabilistic methods to controllably increase the noise on the quantum device. Based on a recommendation from a former intern, IBM researchers approached Anand, postdoctoral researcher Yantao Wu and Zaletel to ask for their help in assessing the accuracy of the results obtained using this error mitigation strategy. Zaletel develops supercomputer algorithms to solve difficult calculations involving quantum systems, such as the electronic interactions in new materials. These algorithms, which employ tensor network simulations, can be directly applied to simulate interacting qubits in a quantum computer.
    Over a period of several weeks, Youngseok Kim and Andrew Eddins at IBM Quantum ran increasingly complex quantum calculations on the advanced IBM Quantum Eagle processor, and then Anand attempted the same calculations using state-of-the-art classical methods on the Cori supercomputer and Lawrencium cluster at Berkeley Lab and the Anvil supercomputer at Purdue University. When Quantum Eagle was rolled out in 2021, it had the highest number of high-quality qubits of any quantum computer, seemingly beyond the ability of classical computers to simulate.
    In fact, exactly simulating all 127 entangled qubits on a classical computer would require an astronomical amount of memory. The quantum state would need to be represented by 2 to the power of 127 separate numbers. That’s roughly a 1 followed by 38 zeros; typical computers can store around 100 billion numbers, 27 orders of magnitude too small. To simplify the problem, Anand, Wu and Zaletel used approximation techniques that allowed them to solve the problem on a classical computer in a reasonable amount of time, and at a reasonable cost. These methods are somewhat like JPEG image compression, in that they get rid of less important information and keep only what’s required to achieve accurate answers within the limits of the memory available.
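    The arithmetic behind that claim can be checked in a few lines (the 16 bytes per complex amplitude assumed below is an illustrative choice, not a figure from the study):

    ```python
    # Back-of-the-envelope check of the memory claim for an exact state vector.
    import math

    n_qubits = 127
    amplitudes = 2 ** n_qubits                # numbers needed for an exact state vector
    print(f"amplitudes needed        : {amplitudes:.2e}")   # ~1.7e38, i.e. ~10^38

    typical_capacity = 100e9                  # "around 100 billion numbers"
    gap = math.log10(amplitudes / typical_capacity)
    print(f"orders of magnitude short: {gap:.0f}")           # ~27

    bytes_needed = amplitudes * 16            # assume 16 bytes per complex amplitude
    print(f"memory for exact storage : {bytes_needed / 1e12:.2e} TB")
    ```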
    Anand confirmed the accuracy of the quantum computer’s results for the less complex calculations, but as the depth of the calculations grew, the results of the quantum computer diverged from those of the classical computer. For certain specific parameters, Anand was able to simplify the problem and calculate exact solutions that verified the quantum calculations over the classical computer calculations. At the largest depths considered, exact solutions were not available, yet the quantum and classical results disagreed.
    The researchers caution that, while they can’t prove that the quantum computer’s final answers for the hardest calculations were correct, Eagle’s successes on the previous runs gave them confidence that they were.
    “The success of the quantum computer wasn’t like a fine-tuned accident. It actually worked for a whole family of circuits it was being applied to,” Zaletel said.
    Friendly competition
    While Zaletel is cautious about predicting whether this error mitigation technique will work for more qubits or calculations of greater depth, the results were nonetheless inspiring, he said.
    “It sort of spurred a feeling of friendly competition,” he said. “I have a sense that we should be able to simulate on a classical computer what they’re doing. But we need to think about it in a clever and better way — the quantum device is in a regime where it suggests we need a different approach.”
    One approach is to simulate the ZNE technique developed by IBM.
    “Now, we’re asking if we can take the same error mitigation concept and apply it to classical tensor network simulations to see if we can get better classical results,” Anand said. “This work gives us the ability to maybe use a quantum computer as a verification tool for the classical computer, which is flipping the script on what’s usually done.”
    Anand and Zaletel’s work was supported by the U.S. Department of Energy under an Early Career Award (DE-SC0022716). Wu’s work was supported by a RIKEN iTHEMS fellowship. Cori is part of the National Energy Research Scientific Computing Center (NERSC), the primary scientific computing facility for the Office of Science in the U.S. Department of Energy.

  • Hybrid AI-powered computer vision combines physics and big data

    Researchers from UCLA and the United States Army Research Laboratory have laid out a new approach to enhance artificial intelligence-powered computer vision technologies by adding physics-based awareness to data-driven techniques.
    Published in Nature Machine Intelligence, the study offered an overview of a hybrid methodology designed to improve how AI-based machinery senses, interacts with and responds to its environment in real time — as in how autonomous vehicles move and maneuver, or how robots use the improved technology to carry out precision actions.
    Computer vision allows AIs to see and make sense of their surroundings by decoding data and inferring properties of the physical world from images. While such images are formed through the physics of light and mechanics, traditional computer vision techniques have predominantly focused on data-based machine learning to drive performance. Physics-based research has, on a separate track, been developed to explore the various physical principles behind many computer vision challenges.
    It has been a challenge to incorporate an understanding of physics — the laws that govern mass, motion and more — into the development of neural networks, in which AIs modeled after the human brain use billions of nodes to crunch massive image data sets until they gain an understanding of what they “see.” But there are now a few promising lines of research that seek to add elements of physics-awareness into already robust data-driven networks.
    The UCLA study aims to harness the power of both the deep knowledge from data and the real-world know-how of physics to create a hybrid AI with enhanced capabilities.
    “Visual machines — cars, robots, or health instruments that use images to perceive the world — are ultimately doing tasks in our physical world,” said the study’s corresponding author Achuta Kadambi, an assistant professor of electrical and computer engineering at the UCLA Samueli School of Engineering. “Physics-aware forms of inference can enable cars to drive more safely or surgical robots to be more precise.”
    The research team outlined three ways in which physics and data are starting to be combined into computer vision artificial intelligence:
    • Incorporating physics into AI data sets: tag objects with additional information, such as how fast they can move or how much they weigh, similar to characters in video games.
    • Incorporating physics into network architectures: run data through a network filter that codes physical properties into what cameras pick up.
    • Incorporating physics into network loss functions: leverage knowledge built on physics to help AI interpret training data on what it observes (a sketch of this idea follows below).
    These three lines of investigation have already yielded encouraging results in improved computer vision. For example, the hybrid approach allows AI to track and predict an object’s motion more precisely and can produce accurate, high-resolution images from scenes obscured by inclement weather.
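    To make the third approach concrete, the hypothetical sketch below trains a tiny network to predict an object’s next height from its recent heights and adds a loss term that penalizes departures from free-fall kinematics. The task, the network and the physics term are assumptions chosen for illustration; they are not the paper’s method.

    ```python
    # Minimal sketch of "physics in the loss function" (the third approach above).
    # A network predicts the next height of a falling object, and a penalty term
    # nudges its predictions toward free-fall kinematics (acceleration ~ -g).
    import torch
    import torch.nn as nn

    G = 9.81   # m/s^2
    DT = 0.1   # seconds between frames

    model = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    def physics_loss(past, pred):
        # Finite-difference acceleration over (h_{t-1}, h_t, h_pred) should be ~ -g.
        accel = (pred.squeeze(-1) - 2 * past[:, 2] + past[:, 1]) / DT**2
        return ((accel + G) ** 2).mean()

    # Synthetic training data: noisy free-fall trajectories of length 4.
    t = torch.arange(4) * DT
    h0, v0 = torch.rand(512, 1) * 10 + 20, -torch.rand(512, 1) * 2
    traj = h0 + v0 * t - 0.5 * G * t**2 + 0.01 * torch.randn(512, 4)
    past, target = traj[:, :3], traj[:, 3:]

    for step in range(300):
        opt.zero_grad()
        pred = model(past)
        loss = ((pred - target) ** 2).mean() + 0.1 * physics_loss(past, pred)
        loss.backward()
        opt.step()
    ```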
    With continued progress in this dual modality approach, deep learning-based AIs may even begin to learn the laws of physics on their own, according to the researchers.
    The other authors on the paper are Army Research Laboratory computer scientist Celso de Melo and UCLA faculty Stefano Soatto, a professor of computer science; Cho-Jui Hsieh, an associate professor of computer science; and Mani Srivastava, a professor of electrical and computer engineering and of computer science.
    The research was supported in part by a grant from the Army Research Laboratory. Kadambi is supported by grants from the National Science Foundation, the Army Young Investigator Program and the Defense Advanced Research Projects Agency. A co-founder of Vayu Robotics, Kadambi also receives funding from Intrinsic, an Alphabet company. Hsieh, Srivastava and Soatto receive support from Amazon.

  • A step toward safe and reliable autopilots for flying

    In the film “Top Gun: Maverick,” Maverick, played by Tom Cruise, is charged with training young pilots to complete a seemingly impossible mission — to fly their jets deep into a rocky canyon, staying so low to the ground they cannot be detected by radar, then rapidly climb out of the canyon at an extreme angle, avoiding the rock walls. Spoiler alert: With Maverick’s help, these human pilots accomplish their mission.
    A machine, on the other hand, would struggle to complete the same pulse-pounding task. To an autonomous aircraft, for instance, the most straightforward path toward the target conflicts with what the machine needs to do to avoid colliding with the canyon walls or to stay undetected. Many existing AI methods aren’t able to overcome this conflict, known as the stabilize-avoid problem, and would be unable to reach their goal safely.
    MIT researchers have developed a new technique that can solve complex stabilize-avoid problems better than other methods. Their machine-learning approach matches or exceeds the safety of existing methods while providing a tenfold increase in stability, meaning the agent reaches and remains stable within its goal region.
    In an experiment that would make Maverick proud, their technique effectively piloted a simulated jet aircraft through a narrow corridor without crashing into the ground.
    “This has been a longstanding, challenging problem. A lot of people have looked at it but didn’t know how to handle such high-dimensional and complex dynamics,” says Chuchu Fan, the Wilson Assistant Professor of Aeronautics and Astronautics, a member of the Laboratory for Information and Decision Systems (LIDS), and senior author of a new paper on this technique.
    Fan is joined by lead author Oswin So, a graduate student. The paper will be presented at the Robotics: Science and Systems conference.

    The stabilize-avoid challenge
    Many approaches tackle complex stabilize-avoid problems by simplifying the system so they can solve it with straightforward math, but the simplified results often don’t hold up to real-world dynamics.
    More effective techniques use reinforcement learning, a machine-learning method where an agent learns by trial-and-error with a reward for behavior that gets it closer to a goal. But there are really two goals here — remain stable and avoid obstacles — and finding the right balance is tedious.
    The MIT researchers broke the problem down into two steps. First, they reframe the stabilize-avoid problem as a constrained optimization problem. In this setup, solving the optimization enables the agent to reach and stabilize to its goal, meaning it stays within a certain region. By applying constraints, they ensure the agent avoids obstacles, So explains.
    Then for the second step, they reformulate that constrained optimization problem into a mathematical representation known as the epigraph form and solve it using a deep reinforcement learning algorithm. The epigraph form lets them bypass the difficulties other methods face when using reinforcement learning.
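    In generic terms, the epigraph trick rewrites a constrained problem so that the objective becomes an auxiliary variable bounded from below by the original one. The notation below is illustrative and not taken from the paper:

    ```latex
    % Original constrained problem (notation illustrative, not the paper's):
    %   minimize_x  f(x)   subject to   g(x) <= 0
    % Epigraph form: introduce a scalar z that upper-bounds the objective.
    \min_{x,\, z} \; z
    \qquad \text{subject to} \qquad
    f(x) \le z, \qquad g(x) \le 0
    % At the optimum z = f(x*), so both problems share the same solution, but the
    % new objective is linear in z, which sidesteps the difficulties that arise
    % when the original form is handed to a reinforcement-learning algorithm.
    ```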

    “But deep reinforcement learning isn’t designed to solve the epigraph form of an optimization problem, so we couldn’t just plug it into our problem. We had to derive the mathematical expressions that work for our system. Once we had those new derivations, we combined them with some existing engineering tricks used by other methods,” So says.
    No points for second place
    To test their approach, they designed a number of control experiments with different initial conditions. For instance, in some simulations, the autonomous agent needs to reach and stay inside a goal region while making drastic maneuvers to avoid obstacles that are on a collision course with it.
    When compared with several baselines, their approach was the only one that could stabilize all trajectories while maintaining safety. To push their method even further, they used it to fly a simulated jet aircraft in a scenario one might see in a “Top Gun” movie. The jet had to stabilize to a target near the ground while maintaining a very low altitude and staying within a narrow flight corridor.
    This simulated jet model was open-sourced in 2018 and had been designed by flight control experts as a testing challenge: could researchers create a scenario that their controller could not fly? But the model was so complicated that it was difficult to work with, and it still couldn’t handle complex scenarios, Fan says.
    The MIT researchers’ controller was able to prevent the jet from crashing or stalling while stabilizing to the goal far better than any of the baselines.
    In the future, this technique could be a starting point for designing controllers for highly dynamic robots that must meet safety and stability requirements, like autonomous delivery drones. Or it could be implemented as part of a larger system. Perhaps the algorithm is only activated when a car skids on a snowy road to help the driver safely navigate back to a stable trajectory.
    Navigating extreme scenarios that a human wouldn’t be able to handle is where their approach really shines, So adds.
    “We believe that a goal we should strive for as a field is to give reinforcement learning the safety and stability guarantees that we will need to provide us with assurance when we deploy these controllers on mission-critical systems. We think this is a promising first step toward achieving that goal,” he says.
    Moving forward, the researchers want to enhance their technique so it is better able to take uncertainty into account when solving the optimization. They also want to investigate how well the algorithm works when deployed on hardware, since there will be mismatches between the dynamics of the model and those in the real world.
    The work is funded, in part, by MIT Lincoln Laboratory under the Safety in Aerobatic Flight Regimes program.

  • Four-legged robot traverses tricky terrains thanks to improved 3D vision

    Researchers led by the University of California San Diego have developed a new model that trains four-legged robots to see more clearly in 3D. The advance enabled a robot to autonomously cross challenging terrain with ease — including stairs, rocky ground and gap-filled paths — while clearing obstacles in its way.
    The researchers will present their work at the 2023 Conference on Computer Vision and Pattern Recognition (CVPR), which will take place from June 18 to 22 in Vancouver, Canada.
    “By providing the robot with a better understanding of its surroundings in 3D, it can be deployed in more complex environments in the real world,” said study senior author Xiaolong Wang, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering.
    The robot is equipped with a forward-facing depth camera on its head. The camera is tilted downwards at an angle that gives it a good view of both the scene in front of it and the terrain beneath it.
    To improve the robot’s 3D perception, the researchers developed a model that first takes 2D images from the camera and translates them into 3D space. It does this by looking at a short video sequence that consists of the current frame and a few previous frames, then extracting pieces of 3D information from each 2D frame. That includes information about the robot’s leg movements such as joint angle, joint velocity and distance from the ground. The model compares the information from the previous frames with information from the current frame to estimate the 3D transformation between the past and the present.
    The model fuses all that information together so that it can use the current frame to synthesize the previous frames. As the robot moves, the model checks the synthesized frames against the frames that the camera has already captured. If they are a good match, then the model knows that it has learned the correct representation of the 3D scene. Otherwise, it makes corrections until it gets it right.
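    The synthesize-and-compare loop can be illustrated with a deliberately simplified stand-in. In the sketch below, a 1D scene and integer camera shifts replace real images and 3D transforms, and a wider window around the current view plays the role of the learned memory; all of this is an illustrative assumption, not the paper’s neural volumetric memory.

    ```python
    # Highly simplified sketch of "synthesize past frames and compare".
    # A 1D scene and integer camera shifts stand in for real images and 3D transforms.
    import numpy as np

    rng = np.random.default_rng(0)
    scene = rng.random(300)                       # a 1D "world" the camera slides along
    WIDTH = 50

    def capture(position):
        return scene[position:position + WIDTH]   # what the camera sees at `position`

    positions = [110, 114, 119]                   # two past frames, then the current one
    frames = [capture(p) for p in positions]

    # Stand-in for the learned scene representation: a wider window anchored at the
    # current position (in the paper this is built and fused by the network).
    memory = scene[positions[-1] - 20 : positions[-1] + WIDTH]

    def synthesize(shift):
        """Render what the camera should have seen `shift` steps ago, from memory."""
        start = 20 - shift
        return memory[start:start + WIDTH]

    # For each past frame, test candidate transforms and keep the one whose
    # synthesized view best matches what was actually captured; the residual error
    # is the self-supervision signal that tells the model to correct itself.
    for past_frame, true_pos in zip(frames[:-1], positions[:-1]):
        errors = {s: np.mean((synthesize(s) - past_frame) ** 2) for s in range(1, 20)}
        best = min(errors, key=errors.get)
        print(f"true shift {positions[-1] - true_pos}, estimated {best}, error {errors[best]:.3f}")
    ```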

    The 3D representation is used to control the robot’s movement. By synthesizing visual information from the past, the robot is able to remember what it has seen, as well as the actions its legs have taken before, and use that memory to inform its next moves.
    “Our approach allows the robot to build a short-term memory of its 3D surroundings so that it can act better,” said Wang.
    The new study builds on the team’s previous work, where researchers developed algorithms that combine computer vision with proprioception — which involves the sense of movement, direction, speed, location and touch — to enable a four-legged robot to walk and run on uneven ground while avoiding obstacles. The advance here is that by improving the robot’s 3D perception (and combining it with proprioception), the researchers show that the robot can traverse more challenging terrain than before.
    “What’s exciting is that we have developed a single model that can handle different kinds of challenging environments,” said Wang. “That’s because we have created a better understanding of the 3D surroundings that makes the robot more versatile across different scenarios.”
    The approach has its limitations, however. Wang notes that their current model does not guide the robot to a specific goal or destination. When deployed, the robot simply takes a straight path and if it sees an obstacle, it avoids it by walking away via another straight path. “The robot does not control exactly where it goes,” he said. “In future work, we would like to include more planning techniques and complete the navigation pipeline.”
    Video: https://youtu.be/vJdt610GSGk
    Paper title: “Neural Volumetric Memory for Visual Locomotion Control.” Co-authors include Ruihan Yang, UC San Diego, and Ge Yang, Massachusetts Institute of Technology.
    This work was supported in part by the National Science Foundation (CCF-2112665, IIS-2240014, 1730158 and ACI-1541349), an Amazon Research Award and gifts from Qualcomm.

  • The chatbot will see you now

    The informed consent process in biomedical research is biased towards people who can meet with clinical study staff during the working day, and even for those who can make time for a consent conversation, the time burden can be off-putting. Professor Eric Vilain, from the Department of Paediatrics, University of California, Irvine, USA, will tell the European Society of Human Genetics annual conference today (Tuesday 13 June) how results from his team’s study of a chatbot (GIA, the ‘Genetics Information Assistant’ developed by Invitae Corporation) used in the consent process show that it encourages inclusivity and leads to faster completion and high levels of understanding. Since such consent is the cornerstone of all research studies, finding ways of cutting the time it takes while ensuring that participants’ understanding is not lessened is something clinicians have sought for some time.
    Working with their institutional review board (IRB), Prof Vilain’s team from across University of California Irvine, Children’s National Hospital, and Invitae Corporation designed a script for the GIA chatbot to transform the trial consent form and protocol into a logic flow and script. Unlike conventional methods of obtaining consent, the bot was able to quiz participants to assess the knowledge they had attained. It could also be accessed at any time, allowing individuals with less free time to use it outside normal business hours. “We saw that more than half of our participants interacted with the bot at these times, and this shows its utility in decreasing the barriers to entry to research. Currently, most people who participate in biomedical research have time to do so as well as the knowledge that studies exist,” says Prof Vilain.
    The researchers involved 72 families in the consent process during a six-month time period as part of the US national GREGoR consortium, a National Institutes of Health initiative to advance rare disease research. A total of 37 families completed consent using the traditional process, while 35 used the chatbot. The researchers found that the median length of the consent conversation was shorter for those using the bot, at 44 rather than 76 minutes, and the time from referral to the study to consent completion was also faster, at five as opposed to 16 days. The level of understanding of those who had used the bot was assessed with a 10-question quiz that 96% of participants passed, and a request for feedback showed that 86% thought that they had had a positive experience.
    “I was surprised and pleased that a significant number of people would feel comfortable communicating with a chatbot,” says Prof Vilain. “But we worked hard with our IRB to ensure that it didn’t ‘hallucinate’ (make mistakes) and to ensure that knowledge was conveyed correctly. When the bot was unable to answer a question, it encouraged the participant to speak with a member of the study team.”
    While it is not possible to give an accurate account of the cost savings, the savings in staff time were substantial, the researchers say. Because people can pause and resume the chatbot consent process at any time, it can be completed much more quickly — four participants, for example, finished within 24 hours. Of the consent conversations that took less than an hour, 83% were with the chatbot, whereas 66% of the longer conversations (between one and two hours) were with a study staff member.
    “But it’s far from being just about speed,” says Prof Vilain. “The traditional method of consenting does not have a mechanism to verify understanding objectively. It is based on the conviction of the study staff member hosting the conversation that the consent has been informed properly and the individual understands what they are consenting to. The chat-based method can test comprehension more objectively. It does not allow users who do not show understanding to give consent, and puts them in touch with a genetic counsellor to figure out why knowledge transmission did not occur.
    “We believe that our work has made an important contribution to the obtention of properly-informed consent, and would now like to see it used in different languages to reach global populations,” he concludes.
    Professor Alexandre Reymond, chair of the conference, said: “The keystone to informed consent should be that it is by definition ‘informed’, and we should explore all possibilities to ensure this in the future.”

  • Loneliness, insomnia linked to work with AI systems

    Employees who frequently interact with artificial intelligence systems are more likely to experience loneliness that can lead to insomnia and increased after-work drinking, according to research published by the American Psychological Association.
    Researchers conducted four experiments in the U.S., Taiwan, Indonesia and Malaysia. Findings were consistent across cultures. The research was published online in the Journal of Applied Psychology.
    In a prior career, lead researcher Pok Man Tang, PhD, worked in an investment bank where he used AI systems, which led to his interest in researching the timely issue.
    “The rapid advancement in AI systems is sparking a new industrial revolution that is reshaping the workplace with many benefits but also some uncharted dangers, including potentially damaging mental and physical impacts for employees,” said Tang, an assistant professor of management at the University of Georgia. “Humans are social animals, and isolating work with AI systems may have damaging spillover effects into employees’ personal lives.”
    At the same time, working with AI systems may have some benefits. The researchers found that employees who frequently used AI systems were more likely to offer help to fellow employees, but that response may have been triggered by their loneliness and need for social contact.
    Furthermore, the studies found that participants with higher levels of attachment anxiety — the tendency to feel insecure and worried about social connections — responded more strongly to working on AI systems with both positive reactions, such as helping others, and negative ones, such as loneliness and insomnia.

    In one experiment, 166 engineers at a Taiwanese biomedical company who worked with AI systems were surveyed over three weeks about their feelings of loneliness, attachment anxiety and sense of belonging. Coworkers rated individual participants on their helpful behaviors, and family members reported on participants’ insomnia and after-work alcohol consumption. Employees who interacted more frequently with AI systems were more likely to experience loneliness, insomnia and increased after-work alcohol consumption, but also showed some helping behaviors toward fellow employees.
    In another experiment with 126 real estate consultants in an Indonesian property management company, half were instructed not to use AI systems for three consecutive days while the other half were told to work with AI systems as much as possible. The findings for the latter group were similar to the previous experiment, except there was no association between the frequency of AI use and after-work alcohol consumption.
    There were similar findings from an online experiment with 214 full-time working adults in the U.S. and another with 294 employees at a Malaysian tech company.
    The research findings are correlational and don’t prove that work with AI systems causes loneliness or the other responses, just that there is an association among them.
    Tang said that moving forward, developers of AI technology should consider equipping AI systems with social features, such as a human voice, to emulate human-like interactions. Employers also could limit the frequency of work with AI systems and offer opportunities for employees to socialize.
    Team decision-making and other tasks where social connections are important could be done by people, while AI systems could focus more on tedious and repetitive tasks, Tang added.
    “Mindfulness programs and other positive interventions also might help relieve loneliness,” Tang said. “AI will keep expanding so we need to act now to lessen the potentially damaging effects for people who work with these systems.”