More stories

  • Guardrails, education urged to protect adolescent AI users

    The effects of artificial intelligence on adolescents are nuanced and complex, according to a report from the American Psychological Association that calls on developers to prioritize features that protect young people from exploitation, manipulation and the erosion of real-world relationships.
    “AI offers new efficiencies and opportunities, yet its deeper integration into daily life requires careful consideration to ensure that AI tools are safe, especially for adolescents,” according to the report, entitled “Artificial Intelligence and Adolescent Well-being: An APA Health Advisory.” “We urge all stakeholders to ensure youth safety is considered relatively early in the evolution of AI. It is critical that we do not repeat the same harmful mistakes made with social media.”
    The report was written by an expert advisory panel and follows two earlier APA reports, one on social media use in adolescence and one on healthy video content recommendations.
    The AI report notes that adolescence — which it defines as ages 10-25 — is a long developmental period and that age is “not a foolproof marker for maturity or psychological competence.” It is also a time of critical brain development, which argues for special safeguards aimed at younger users.
    “Like social media, AI is neither inherently good nor bad,” said APA Chief of Psychology Mitch Prinstein, PhD, who spearheaded the report’s development. “But we have already seen instances where adolescents developed unhealthy and even dangerous ‘relationships’ with chatbots, for example. Some adolescents may not even know they are interacting with AI, which is why it is crucial that developers put guardrails in place now.”
    The report makes a number of recommendations to ensure that adolescents can use AI safely. These include:
    Ensuring there are healthy boundaries with simulated human relationships. Adolescents are less likely than adults to question the accuracy and intent of information when it comes from a bot rather than from a human.
    Creating age-appropriate defaults in privacy settings, interaction limits and content. This will involve transparency, human oversight and support, and rigorous testing, according to the report.
    Encouraging uses of AI that can promote healthy development. AI can assist in brainstorming, creating, summarizing and synthesizing information — all of which can make it easier for students to understand and retain key concepts, the report notes. But it is critical for students to be aware of AI’s limitations.
    Limiting access to and engagement with harmful and inaccurate content. AI developers should build in protections to prevent adolescents’ exposure to harmful content.
    Protecting adolescents’ data privacy and likenesses. This includes limiting the use of adolescents’ data for targeted advertising and the sale of their data to third parties.
    The report also calls for comprehensive AI literacy education, integrating it into core curricula and developing national and state guidelines for literacy education.
    “Many of these changes can be made immediately, by parents, educators and adolescents themselves,” Prinstein said. “Others will require more substantial changes by developers, policymakers and other technology professionals.”
    Report: https://www.apa.org/topics/artificial-intelligence-machine-learning/health-advisory-ai-adolescent-well-being
    In addition to the report, APA.org offers further resources and guidance for parents on AI and keeping teens safe, and for teens on AI literacy.

  • Attachment theory: A new lens for understanding human-AI relationships

    Human-AI interactions are typically understood in terms of trust and companionship, but the role of attachment-related functions and experiences in such relationships is not entirely clear. In a new study, researchers from Waseda University have devised a novel self-report scale that captures attachment anxiety and avoidance toward AI. Their work is expected to serve as a guideline for further exploring human-AI relationships and for incorporating ethical considerations into AI design.
    Artificial intelligence (AI) is now ubiquitous. As a result, human-AI interactions are becoming more frequent and complex, and this trend is expected to accelerate. Scientists have therefore made considerable efforts to better understand human-AI relationships in terms of trust and companionship. However, these interactions may also be understood in terms of attachment-related functions and experiences, which have traditionally been used to explain human interpersonal bonds.
    In an innovative work incorporating two pilot studies and one formal study, a group of researchers from Waseda University, Japan, including Research Associate Fan Yang and Professor Atsushi Oshio from the Faculty of Letters, Arts and Sciences, has used attachment theory to examine human-AI relationships. Their findings were published online in the journal Current Psychology on May 9, 2025.
    Mr. Yang explains the motivation behind their research. “As researchers in attachment and social psychology, we have long been interested in how people form emotional bonds. In recent years, generative AI such as ChatGPT has become increasingly stronger and wiser, offering not only informational support but also a sense of security. These characteristics resemble what attachment theory describes as the basis for forming secure relationships. As people begin to interact with AI not just for problem-solving or learning, but also for emotional support and companionship, their emotional connection or security experience with AI demands attention. This research is our attempt to explore that possibility.”
    Notably, the team developed a new self-report scale called the Experiences in Human-AI Relationships Scale, or EHARS, to measure attachment-related tendencies toward AI. They found that some individuals seek emotional support and guidance from AI, similar to how they interact with people. Nearly 75% of participants turned to AI for advice, while about 39% perceived AI as a constant, dependable presence.
    This study differentiated two dimensions of human attachment to AI: anxiety and avoidance. An individual with high attachment anxiety toward AI needs emotional reassurance and harbors a fear of receiving inadequate responses from AI. In contrast, high attachment avoidance toward AI is characterized by discomfort with closeness and a consequent preference for emotional distance from AI.
    However, these findings do not mean that humans are currently forming genuine emotional attachments to AI. Rather, the study demonstrates that psychological frameworks used for human relationships may also apply to human-AI interactions. The present results can inform the ethical design of AI companions and mental health support tools. For instance, AI chatbots used in loneliness interventions or therapy apps could be tailored to different users’ emotional needs, providing more empathetic responses for users with high attachment anxiety or maintaining respectful distance for users with avoidant tendencies. The results also suggest a need for transparency in AI systems that simulate emotional relationships, such as romantic AI apps or caregiver robots, to prevent emotional overdependence or manipulation.
    Furthermore, the proposed EHARS could be used by developers or psychologists to assess how people relate to AI emotionally and adjust AI interaction strategies accordingly.
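    As a minimal illustration of what such an assessment could look like in practice (the item labels, item count, and 1-7 response format below are hypothetical placeholders, not the published EHARS items), a respondent's answers might be averaged into the two dimension scores roughly like this:
```python
# Illustrative sketch only: the item labels, item count, and 1-7 response
# format are hypothetical stand-ins, not the published EHARS instrument.
from statistics import mean

# Hypothetical mapping of items to the two EHARS dimensions.
ANXIETY_ITEMS = ["anx_1", "anx_2", "anx_3"]    # e.g., "I worry the AI's replies won't reassure me"
AVOIDANCE_ITEMS = ["avd_1", "avd_2", "avd_3"]  # e.g., "I prefer not to rely on AI for emotional support"

def score_ehars(responses):
    """Average 1-7 Likert responses into anxiety and avoidance subscale scores."""
    return {
        "attachment_anxiety": mean(responses[i] for i in ANXIETY_ITEMS),
        "attachment_avoidance": mean(responses[i] for i in AVOIDANCE_ITEMS),
    }

# Example: a respondent high in attachment anxiety, low in avoidance.
print(score_ehars({
    "anx_1": 6, "anx_2": 7, "anx_3": 6,
    "avd_1": 2, "avd_2": 1, "avd_3": 2,
}))
```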
    “As AI becomes increasingly integrated into everyday life, people may begin to seek not only information but also emotional support from AI systems. Our research highlights the psychological dynamics behind these interactions and offers tools to assess emotional tendencies toward AI. Lastly, it promotes a better understanding of how humans connect with technology on a societal level, helping to guide policy and design practices that prioritize psychological well-being,” concludes Mr. Yang.

  • Self-powered artificial synapse mimics human color vision

    As artificial intelligence and smart devices continue to evolve, machine vision is taking an increasingly pivotal role as a key enabler of modern technologies. Unfortunately, despite much progress, machine vision systems still face a major problem: processing the enormous amounts of visual data generated every second requires substantial power, storage, and computational resources. This limitation makes it difficult to deploy visual recognition capabilities in edge devices — such as smartphones, drones, or autonomous vehicles.
    Interestingly, the human visual system offers a compelling alternative model. Unlike conventional machine vision systems that have to capture and process every detail, our eyes and brain selectively filter information, allowing for higher efficiency in visual processing while consuming minimal power. Neuromorphic computing, which mimics the structure and function of biological neural systems, has thus emerged as a promising approach to overcome existing hurdles in computer vision. However, two major challenges have persisted. The first is achieving color recognition comparable to human vision, whereas the second is eliminating the need for external power sources to minimize energy consumption.
    Against this backdrop, a research team led by Associate Professor Takashi Ikuno from the School of Advanced Engineering, Department of Electronic Systems Engineering, Tokyo University of Science (TUS), Japan, has developed a groundbreaking solution. Their paper, published in Volume 15 of the journal Scientific Reports on May 12, 2025, introduces a self-powered artificial synapse capable of distinguishing colors with remarkable precision. The study was co-authored by Mr. Hiroaki Komatsu and Ms. Norika Hosoda, also from TUS.
    The researchers created their device by integrating two different dye-sensitized solar cells, which respond differently to various wavelengths of light. Unlike conventional optoelectronic artificial synapses that require external power sources, the proposed synapse generates its electricity via solar energy conversion. This self-powering capability makes it particularly suitable for edge computing applications, where energy efficiency is crucial.
    As evidenced through extensive experiments, the resulting system can distinguish between colors with a resolution of 10 nanometers across the visible spectrum — a level of discrimination approaching that of the human eye. Moreover, the device also exhibited bipolar responses, producing positive voltage under blue light and negative voltage under red light. This makes it possible to perform complex logic operations that would typically require multiple conventional devices. “The results show great potential for the application of this next-generation optoelectronic device, which enables high-resolution color discrimination and logical operations simultaneously, to low-power artificial intelligence (AI) systems with visual recognition,” notes Dr. Ikuno.
    To demonstrate a real-world application, the team used their device in a physical reservoir computing framework to recognize different human movements recorded in red, green, and blue. The system achieved an impressive 82% accuracy when classifying 18 different combinations of colors and movements using just a single device, rather than the multiple photodiodes needed in conventional systems.
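    In physical reservoir computing, the device itself performs the nonlinear transformation of the input, and only a simple linear readout is trained on the recorded responses. The sketch below illustrates that readout stage under stated assumptions: the voltage traces are random placeholder data and the classifier is a generic linear model, so it mirrors the idea rather than the paper's actual protocol.
```python
# Minimal sketch of the readout stage in a physical reservoir computing setup.
# The "device responses" here are placeholder random data; the paper's actual
# feature extraction, sample counts, and training protocol may differ.
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each sample stands in for the synapse's voltage trace (the reservoir state)
# recorded while one of 18 color/movement combinations is presented.
n_samples, trace_len, n_classes = 540, 50, 18
X = rng.normal(size=(n_samples, trace_len))      # placeholder device voltage traces
y = rng.integers(0, n_classes, size=n_samples)   # placeholder color/movement labels

# Only this linear readout is trained; the "computation" happens in the device.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
readout = RidgeClassifier().fit(X_tr, y_tr)
print(f"readout accuracy on placeholder data: {readout.score(X_te, y_te):.2f}")
```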
    The implications of this research extend across multiple industries. In autonomous vehicles, these devices could enable more efficient recognition of traffic lights, road signs, and obstacles. In healthcare, they could power wearable devices that monitor vital signs like blood oxygen levels with minimal battery drain. For consumer electronics, this technology could lead to smartphones and augmented/virtual reality headsets with dramatically improved battery life while maintaining sophisticated visual recognition capabilities. “We believe this technology will contribute to the realization of low-power machine vision systems with color discrimination capabilities close to those of the human eye, with applications in optical sensors for self-driving cars, low-power biometric sensors for medical use, and portable recognition devices,” remarks Dr. Ikuno.
    Overall, this work represents a significant step toward bringing the wonders of computer vision to edge devices, enabling our everyday devices to see the world more like we do.

  • Engineers develop self-healing muscle for robots

    A University of Nebraska-Lincoln engineering team is another step closer to developing soft robotics and wearable systems that mimic the ability of human and plant skin to detect and self-heal injuries.
    Engineer Eric Markvicka, along with graduate students Ethan Krings and Patrick McManigal, recently presented a paper at the IEEE International Conference on Robotics and Automation in Atlanta, Georgia, that sets forth a systems-level approach for a soft robotics technology that can identify damage from a puncture or extreme pressure, pinpoint its location and autonomously initiate self-repair.
    The paper was one of 39 finalists selected from 1,606 submissions for the ICRA 2025 Best Paper Award. It was also a finalist for the Best Student Paper Award and in the mechanism and design category.
    The team’s strategy may help overcome a longstanding problem in developing soft robotics systems that import nature-inspired design principles.
    “In our community, there is a huge push toward replicating traditional rigid systems using soft materials, and a huge movement toward biomimicry,” said Markvicka, Robert F. and Myrna L. Krohn Assistant Professor of Biomedical Engineering. “While we’ve been able to create stretchable electronics and actuators that are soft and conformal, they often don’t mimic biology in their ability to respond to damage and then initiate self-repair.”
    To fill that gap, his team developed an intelligent, self-healing artificial muscle featuring a multi-layer architecture that enables the system to identify and locate damage, then initiate a self-repair mechanism — all without external intervention.
    “The human body and animals are amazing. We can get cut and bruised and get some pretty serious injuries. And in most cases, with very limited external applications of bandages and medications, we’re able to self-heal a lot of things,” Markvicka said. “If we could replicate that within synthetic systems, that would really transform the field and how we think about electronics and machines.”
    The team’s “muscle” — or actuator, the part of a robot that converts energy into physical movement — has three layers. The bottom one — the damage detection layer — is a soft electronic skin composed of liquid metal microdroplets embedded in a silicone elastomer. That skin is adhered to the middle layer, the self-healing component, which is a stiff thermoplastic elastomer. On top is the actuation layer, which kick-starts the muscle’s motion when pressurized with water.

    To begin the process, the team induces five monitoring currents across the bottom “skin” of the muscle, which is connected to a microcontroller and sensing circuit. Puncture or pressure damage to that layer triggers formation of an electrical network between the traces. The system recognizes this electrical footprint as evidence of damage and subsequently increases the current running through the newly formed electrical network.
    This enables that network to function as a local Joule heater, converting the energy of the electric current into heat around the areas of damage. After a few minutes, this heat melts and reprocesses the middle thermoplastic layer, which seals the damage — effectively self-healing the wound.
    The last step is resetting the system back to its original state by erasing the bottom layer’s electrical footprint of damage. To do this, Markvicka’s team is exploiting the effects of electromigration, a process in which an electrical current causes metal atoms to migrate. The phenomenon is traditionally viewed as a hindrance in metallic circuits because moving atoms deform and cause gaps in a circuit’s materials, leading to device failure and breakage.
    In a major innovation, the researchers are using electromigration to solve a problem that has long plagued their efforts to create an autonomous, self-healing system: the seeming permanency of the damage-induced electrical networks in the bottom layer. Without the ability to reset the baseline monitoring traces, the system cannot complete more than one cycle of damage and repair.
    It struck the researchers that electromigration — with its ability to physically separate metal ions and trigger open-circuit failure — might be the key to erasing the newly formed traces. The strategy worked: By further ramping up the current, the team can induce electromigration and thermal failure mechanisms that reset the damage detection network.
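    Read as a control sequence on the microcontroller, the cycle is: monitor the traces, detect a newly formed conductive bridge, raise the current to Joule-heat and reflow the thermoplastic, then raise it further so electromigration erases the bridge and restores the baseline. The following is a rough sketch of that logic; the hardware interface (hw, wait) and every numeric value are assumptions for illustration, not the team's reported parameters.
```python
# Hedged sketch of the detect -> heal -> reset cycle described above. The
# hardware interface and all current/threshold/timing values are hypothetical
# placeholders, not the values reported by the Nebraska team.
MONITOR_CURRENT_A = 0.01   # small sensing current across the liquid-metal traces (assumed)
HEAL_CURRENT_A = 0.5       # larger current: the damage-formed bridge acts as a Joule heater (assumed)
RESET_CURRENT_A = 2.0      # still larger current drives electromigration to erase the bridge (assumed)
BRIDGE_THRESHOLD_OHM = 50  # resistance below this implies traces have been shorted together (assumed)

def damage_detected(hw) -> bool:
    # A puncture links neighboring traces, so inter-trace resistance drops sharply.
    return min(hw.read_inter_trace_resistances()) < BRIDGE_THRESHOLD_OHM

def run_one_repair_cycle(hw, wait):
    """hw and wait are hypothetical stand-ins for the sensing circuit and a delay helper."""
    hw.set_current(MONITOR_CURRENT_A)          # normal monitoring state
    if damage_detected(hw):
        hw.set_current(HEAL_CURRENT_A)         # heat the damage site through the new bridge
        wait(minutes=3)                        # let the thermoplastic layer reflow and seal
        hw.set_current(RESET_CURRENT_A)        # electromigration opens (erases) the bridge
        wait(minutes=1)
        hw.set_current(MONITOR_CURRENT_A)      # baseline restored; ready for the next damage event
```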
    “Electromigration is generally seen as a huge negative,” Markvicka said. “It’s one of the bottlenecks that has prevented the miniaturization of electronics. We use it in a unique and really positive way here. Instead of trying to prevent it from happening, we are, for the first time, harnessing it to erase traces that we used to think were permanent.”
    Autonomously self-healing technology has potential to revolutionize many industries. In agricultural states like Nebraska, it could be a boon for robotics systems that frequently encounter sharp objects like twigs, thorns, plastic and glass. It could also revolutionize wearable health monitoring devices that must withstand daily wear and tear.

    The technology would also benefit society more broadly. Most consumer-based electronics have lifespans of only one or two years, contributing to billions of pounds of electronic waste each year. This waste contains toxins like lead and mercury, which threaten human and environmental health. Self-healing technology could help stem the tide.
    “If we can begin to create materials that are able to passively and autonomously detect when damage has happened, and then initiate these self-repair mechanisms, it would really be transformative,” Markvicka said.

  • New quantum visualization technique to identify materials for next-generation quantum computing

    Scientists at University College Cork (UCC) in Ireland have developed a powerful new tool for finding the next generation of materials needed for large-scale, fault-tolerant quantum computing.
    The significant breakthrough means that, for the first time, researchers have found a way to determine once and for all whether a material can effectively be used in certain quantum computing microchips.
    The major findings have been published today in the academic journal Science and are the result of a large international collaboration, which includes leading theoretical work from Prof. Dung-Hai Lee at the University of California, Berkeley, and material synthesis from professors Sheng Ran and Johnpierre Paglione at Washington University in St. Louis and the University of Maryland, respectively.
    Using equipment found in only three labs around the world, researchers at the Davis Group, based in UCC, were able to definitively determine whether uranium ditelluride (UTe2), a known superconductor, had the characteristics required to be an intrinsic topological superconductor.
    A topological superconductor is a unique material that, at its surface, hosts new quantum particles named Majorana fermions. In theory they can be used to stably store quantum information without being disturbed by the noise and disorder which plague present quantum computers. Physicists have been on the hunt for an intrinsic topological superconductor for decades, but no material ever discovered has ticked all the boxes.
    UTe2 had been considered a strong candidate material for intrinsic topological superconductivity since its discovery in 2019; however, no research had definitively evaluated its suitability — until now.
    Using a scanning tunneling microscope (STM) operating in a new mode invented by Séamus Davis, Professor of Quantum Physics at UCC, a team led by Joe Carroll, a PhD researcher at the Davis Group, and Kuanysh Zhussupbekov, a Marie Curie postdoctoral fellow, was able to conclude once and for all whether UTe2 is the right sort of topological superconductor.

    The experiments carried out using the “Andreev” STM — found only in Prof. Davis’ labs in Cork, Oxford University in the UK, and Cornell University in New York — discovered that UTe2 is indeed an intrinsic topological superconductor, but not exactly the kind for which physicists have been searching.
    However, the first-of-its-kind experiment is a breakthrough in itself.
    When asked about the experiment, Mr. Carroll described it as follows: “Traditionally researchers have searched for topological superconductors by taking measurements using metallic probes. They do this because metals are simple materials, so they play essentially no role in the experiment. What’s new about our technique is that we use another superconductor to probe the surface of UTe2. By doing so we exclude the normal surface electrons from our measurement leaving behind only the Majorana fermions.”
    Carroll further highlighted that this technique would allow scientists to determine directly whether other materials are suitable for topological quantum computing.
    Quantum computers have the capacity to answer in seconds the kind of complex mathematical problems that would take current generation computers years to solve. Right now, governments and companies around the world are racing to develop quantum processors with more and more quantum bits but the fickle nature of these quantum calculations is holding back significant progress.
    Earlier this year Microsoft announced the Majorana 1, which the company has said is “the world’s first Quantum Processing Unit (QPU) powered by a Topological Core.”
    Microsoft explained that to achieve this advance, synthetic topological superconductors based on elaborately engineered stacks of conventional materials were required.
    However, the Davis Group’s new work means that scientists can now find single materials to replace these complicated circuits, potentially leading to greater efficiencies in quantum processors, allowing many more qubits on a single chip and moving us closer to the next generation of quantum computing.

  • The future of AI regulation: Why leashes are better than guardrails

    Many policy discussions on AI safety regulation have focused on the need to establish regulatory “guardrails” to protect the public from the risks of AI technology. In a new paper published in the journal Risk Analysis, two experts argue that, instead of imposing guardrails, policymakers should demand “leashes.”
    Cary Coglianese, director of the Penn Program on Regulation and a professor at the University of Pennsylvania Carey Law School, and Colton R. Crum, a computer science doctoral candidate at the University of Notre Dame, explain that management-based regulation (a flexible “leash” strategy) will work better than a prescriptive guardrail approach because AI is too heterogeneous and dynamic to operate within fixed lanes. Leashes “are flexible and adaptable — just as physical leashes used when walking a dog through a neighborhood allow for a range of movement and exploration,” the authors write. Leashes “permit AI tools to explore new domains without regulatory barriers getting in the way.”
    The various applications of AI include social media, chatbots, autonomous vehicles, precision medicine, fintech investment advisors, and many more. While AI offers benefits for society, such as the ability to find evidence of cancerous tumors that well-trained radiologists can miss, it can also pose risks.
    In their paper, Coglianese and Crum offer three examples of AI risks: autonomous vehicle (AV) collisions, suicide associated with social media, and bias and discrimination brought about by AI through a variety of applications and digital formats, such as AI-generated text, images, and videos.
    With flexible management-based regulation, firms using AI tools that pose risks in each of these settings — and others — would be expected to put their AI tools on a leash by creating internal systems to anticipate and reduce the range of possible harms from the use of their tools.
    Management-based regulation can flexibly respond to “AI’s novel uses and problems and better allows for technological exploration, discovery, and change,” write Coglianese and Crum. At the same time, it provides “a tethered structure that, like a leash, can help prevent AI from ‘running away.’”

  • Electronic tattoo gauges mental strain

    Researchers gave participants face tattoos that can track when their brain is working too hard. Published May 29 in the Cell Press journal Device, the study introduces a non-permanent wireless forehead e-tattoo that decodes brainwaves to measure mental strain without bulky headgear. This technology may help track the mental workload of workers like air traffic controllers and truck drivers, whose lapses in focus can have serious consequences.
    “Technology is developing faster than human evolution. Our brain capacity cannot keep up and can easily get overloaded,” says Nanshu Lu, the study’s author, from the University of Texas at Austin (UT Austin). “There is an optimal mental workload for optimal performance, which differs from person to person.”
    Humans perform best in a cognitive Goldilocks zone, neither overwhelmed nor bored. Finding that balance is key to optimal performance. Current mental workload assessment relies on the NASA Task Load Index, a lengthy and subjective survey participants complete after performing tasks.
    The e-tattoo offers an objective alternative by analyzing electrical activity from the brain and eye movement, in processes known as electroencephalography (EEG) and electrooculography (EOG). Unlike EEG caps that are bulky with dangling wires and lathered with squishy gel, the wireless e-tattoo consists of a lightweight battery pack and paper-thin, sticker-like sensors. These sensors feature wavy loops and coils, a design that allows them to stretch and conform seamlessly to the skin for comfort and clear signals.
    “What’s surprising is those caps, while having more sensors for different regions of the brain, never get a perfect signal because everyone’s head shape is different,” says Lu. “We measure participants’ facial features to manufacture personalized e-tattoos to ensure that the sensors are always in the right location and receiving signals.”
    The researchers tested the e-tattoo on six participants who completed a memory challenge that increased in difficulty. As mental load rose, participants showed higher activity in theta and delta brainwaves, signaling increased cognitive demand, while alpha and beta activity decreased, indicating mental fatigue. The results suggest that the device can detect when the brain is struggling.
    The device didn’t stop at detection. It could also predict mental strain. The researchers trained a computer model to estimate mental workload based on signals from the e-tattoo, successfully distinguishing between different levels of mental workload. The results show that the device can potentially predict mental fatigue.
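    The reported brainwave changes correspond to standard band-power features: theta and delta power rising while alpha and beta fall. The sketch below shows how such features might be computed from a single channel; the sampling rate, band limits, and the simple theta/alpha ratio are generic EEG conventions assumed for illustration, not the authors' actual model.
```python
# Generic band-power sketch, not the paper's pipeline: the sampling rate, band
# limits, and the theta/alpha ratio below are common EEG conventions, used only
# to illustrate the kind of features that can feed a workload estimate.
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate in Hz
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs=FS):
    """Integrate the Welch power spectrum of one EEG channel over each band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    df = freqs[1] - freqs[0]
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum() * df)
            for name, (lo, hi) in BANDS.items()}

def workload_index(eeg):
    """Crude load proxy: theta power rising while alpha falls suggests higher load."""
    p = band_powers(eeg)
    return p["theta"] / (p["alpha"] + 1e-12)

# Example on 10 seconds of synthetic noise standing in for one forehead channel.
signal = np.random.default_rng(0).normal(size=10 * FS)
print(f"workload index: {workload_index(signal):.2f}")
```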
    Cost is another advantage. Traditional EEG equipment can exceed $15,000, while the e-tattoo’s chips and battery pack cost $200 and its disposable sensors are about $20 each. “Being low cost makes the device accessible,” says author Luis Sentis from UT Austin. “One of my wishes is to turn the e-tattoo into a product we can wear at home.”
    While the e-tattoo only works on hairless skin, the researchers are working to combine it with ink-based sensors that work on hair. This will allow for full head coverage and more comprehensive brain monitoring. As robots and new technology increasingly enter workplaces and homes, the team hopes this technology will enhance understanding of human-machine interaction.
    “We’ve long monitored workers’ physical health, tracking injuries and muscle strain,” says Sentis. “Now we have the ability to monitor mental strain, which hasn’t been tracked. This could fundamentally change how organizations ensure the overall well-being of their workforce.”

  • Traditional diagnostic decision support systems outperform generative AI for diagnosing disease

    Medical professionals have been using artificial intelligence (AI) to streamline diagnoses for decades through what are called diagnostic decision support systems (DDSSs). Computer scientists at Massachusetts General Hospital (MGH), a founding member of the Mass General Brigham healthcare system, first developed MGH’s own DDSS, called DXplain, in 1984; it relies on thousands of disease profiles, clinical findings, and data points to generate and rank potential diagnoses for use by clinicians. With the popularization and increased accessibility of generative AI and large language models (LLMs) in medicine, investigators at MGH’s Laboratory of Computer Science (LCS) sought to compare the diagnostic capabilities of DXplain, which has evolved over the past four decades, with those of popular LLMs.
    Their new research compares ChatGPT, Gemini, and DXplain at diagnosing patient cases, revealing that DXplain performed somewhat better, but the LLMs also performed well. The investigators envision pairing DXplain with an LLM as the optimal way forward, as it would improve both systems and enhance their clinical efficacy. The results are published in JAMA Network Open.
    “Amid all the interest in large language models, it’s easy to forget that the first AI systems used successfully in medicine were expert systems like DXplain,” said co-author Edward Hoffer, MD, of the LCS at MGH.
    “These systems can enhance and expand clinicians’ diagnoses, recalling information that physicians may forget in the heat of the moment and isn’t biased by common flaws in human reasoning. And now, we think combining the powerful explanatory capabilities of existing diagnostic systems with the linguistic capabilities of large language models will enable better automated diagnostic decision support and patient outcomes,” said corresponding author Mitchell Feldman, MD, also of MGH’s LCS.
    The investigators tested the diagnostic capabilities of DXplain, ChatGPT, and Gemini using 36 patient cases spanning racial, ethnic, age, and gender categories. For each case, the systems suggested potential diagnoses both with and without lab data. With lab data, all three systems listed the correct diagnosis most of the time: 72% for DXplain, 64% for ChatGPT, and 58% for Gemini. Without lab data, DXplain listed the correct diagnosis 56% of the time, outperforming ChatGPT (42%) and Gemini (39%), though the differences were not statistically significant.
    The researchers observed that the DDSS and the LLMs each caught certain diseases the others missed, suggesting there may be promise in combining the approaches. Preliminary work building on these findings reveals that LLMs could be used to pull clinical findings from narrative text, which could then be plugged into DDSSs — in turn synergistically improving both systems and their diagnostic conclusions.
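    That pairing can be pictured as a two-stage pipeline: an LLM turns a free-text note into discrete findings, and a DDSS ranks a differential from those findings. The sketch below is purely illustrative; the prompt, the llm_complete helper, and the query_ddss stub are hypothetical stand-ins, not DXplain's actual interface.
```python
# Hypothetical pipeline sketch: the prompt, the llm_complete() helper, and the
# query_ddss() stub stand in for real services; DXplain's actual API may differ.
import json

EXTRACTION_PROMPT = (
    "List the clinical findings in this note as a JSON array of short, "
    "standardized terms (e.g., \"fever\", \"elevated WBC\"):\n\n{note}"
)

def extract_findings(note, llm_complete):
    """Use an LLM to turn a narrative note into a list of discrete findings."""
    raw = llm_complete(EXTRACTION_PROMPT.format(note=note))
    return json.loads(raw)

def rank_diagnoses(note, llm_complete, query_ddss):
    """The LLM handles the free text; the DDSS ranks the candidate diagnoses."""
    findings = extract_findings(note, llm_complete)
    return query_ddss(findings)   # e.g., a DXplain-style ranked differential

# Example wiring with stub callables standing in for the LLM and the DDSS.
fake_llm = lambda prompt: '["fever", "productive cough", "elevated WBC"]'
fake_ddss = lambda findings: ["community-acquired pneumonia", "acute bronchitis"]
print(rank_diagnoses("62-year-old with fever and productive cough", fake_llm, fake_ddss))
```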