More stories

  • Meet CARMEN, a robot that helps people with mild cognitive impairment

    Meet CARMEN, short for Cognitively Assistive Robot for Motivation and Neurorehabilitation: a small, tabletop robot designed to help people with mild cognitive impairment (MCI) learn skills to improve memory, attention, and executive functioning at home.
    Unlike other robots in this space, CARMEN was developed by the research team at the University of California San Diego in collaboration with clinicians, people with MCI, and their care partners. To the best of the researchers’ knowledge, CARMEN is also the only robot that teaches compensatory cognitive strategies to help improve memory and executive function.
    “We wanted to make sure we were providing meaningful and practical interventions,” said Laurel Riek, a professor of computer science and emergency medicine at UC San Diego and the work’s senior author.
    MCI is a stage between typical aging and dementia. It affects various areas of cognitive functioning, including memory, attention, and executive functioning. About 20% of individuals over 65 have the condition, and up to 15% of them transition to dementia each year. Existing pharmacological treatments have not been able to slow or prevent this progression, but behavioral treatments can help.
    Researchers programmed CARMEN to deliver a series of simple cognitive training exercises. For example, the robot can teach participants to create routine places to leave important objects, such as keys, or to use note-taking strategies to remember important things. CARMEN does this through interactive games and activities.
    The research team designed CARMEN with a clear set of criteria in mind. People must be able to use the robot independently, without clinician or researcher supervision, so CARMEN had to be plug and play, with few moving parts that require maintenance. The robot also has to function with limited access to the internet, as many people do not have reliable connectivity, and it needs to keep working over a long period of time. Finally, CARMEN has to communicate clearly with users, express compassion and empathy for a person’s situation, and provide breaks after challenging tasks to help sustain engagement.
    Researchers deployed CARMEN for a week in the homes of several people with MCI, who then engaged in multiple tasks with the robot, such as identifying routine places to leave household items so they don’t get lost, and placing tasks on a calendar so they won’t be forgotten. Researchers also deployed the robot in the homes of several clinicians with experience working with people with MCI. Both groups of participants completed questionnaires and interviews before and after the week-long deployments.

    After the week with CARMEN, participants with MCI reported trying strategies and behaviors that they previously had written off as impossible. All participants reported that using the robot was easy. Two out of the three participants found the activities easy to understand, but one of the users struggled. All said they wanted more interaction with the robot.
    “We found that CARMEN gave participants confidence to use cognitive strategies in their everyday life, and participants saw opportunities for CARMEN to exhibit greater levels of autonomy or be used for other applications,” the researchers write.
    The research team presented their findings at the ACM/IEEE Human Robot Interaction (HRI) conference in March 2024, where they received a best paper award nomination.
    Next steps
    Next steps include deploying the robot in a larger number of homes.
    Researchers also plan to give CARMEN the ability to have conversations with users, with an emphasis on preserving privacy when these conversations happen. This is both an accessibility issue (as some users might not have the fine motor skills necessary to interact with CARMEN’s touch screen), as well as because most people expect to be able to have conversations with systems in their homes. At the same time, researchers want to limit how much information CARMEN can give users. “We want to be mindful that the user still needs to do the bulk of the work, so the robot can only assist and not give too many hints,” Riek said.
    Researchers are also exploring how CARMEN could assist users with other conditions, such as ADHD.
    The UC San Diego team built CARMEN based on the FLEXI robot from the University of Washington. But they made substantial changes to its hardware, and wrote all its software from scratch on top of the Robot Operating System (ROS) framework.
    Many elements of the project are available at https://github.com/UCSD-RHC-Lab/CARMEN

  • Novel application of optical tweezers: Colorfully showing molecular energy transfer

    A novel technique with potential applications for fields such as droplet chemistry and photochemistry has been demonstrated by an Osaka Metropolitan University-led research group.
    Professor Yasuyuki Tsuboi of the Graduate School of Science and the team investigated Förster resonance energy transfer (FRET), a phenomenon seen in photosynthesis and other natural processes where a donor molecule in an excited state transfers energy to an acceptor molecule.
    Using dyes to mark the donor and acceptor molecules, the team set out to see if FRET could be controlled by the intensity of an optical force, in this case a laser beam. By focusing a laser beam on an isolated polymer droplet, the team showed that increased intensity accelerated the energy transfer, made visible by the polymer changing color due to the dyes mixing.
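    The distance dependence at the heart of FRET follows the standard Förster relation, in which transfer efficiency falls off with the sixth power of the donor-acceptor separation. As a quick illustration (this is the textbook formula, not code from the study):

```python
# Förster resonance energy transfer (FRET) efficiency as a function of
# donor-acceptor distance r, given the Förster radius R0 (the distance
# at which transfer efficiency is exactly 50%).

def fret_efficiency(r_nm: float, r0_nm: float) -> float:
    """Return FRET efficiency E = 1 / (1 + (r/R0)^6), r and R0 in the same units."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# The sixth-power law makes FRET a sensitive "molecular ruler" around R0:
for r in (2.0, 5.0, 8.0):
    print(f"r = {r} nm -> E = {fret_efficiency(r, r0_nm=5.0):.3f}")
```

    Because efficiency switches so sharply near R0, small changes in how closely the dye-marked donor and acceptor molecules mix translate into large, visible changes in fluorescence.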
    Fluorescence could also be controlled just by adjusting the laser intensity without touching the sample, offering a novel non-contact approach.
    “Although this research is still at a basic stage, it may provide new options for a variety of future FRET research applications,” Professor Tsuboi explained. “We believe that extending this to quantum dots as well as new polymer systems and fluorescent molecules is the next challenge.”

  • International collaboration lays the foundation for future AI for materials

    Artificial intelligence (AI) is accelerating the development of new materials. A prerequisite for AI in materials research is large-scale use and exchange of data on materials, which is facilitated by a broad international standard. A major international collaboration now presents an extended version of the OPTIMADE standard.
    New technologies in areas such as energy and sustainability, from batteries and solar cells to LED lighting and biodegradable materials, require new materials. Many researchers around the world are working to create materials that have not existed before. But there are major challenges in creating materials with the exact properties required, such as not containing environmentally hazardous substances while at the same time being durable enough not to break down.
    “We’re now seeing an explosive development where researchers in materials science are adopting AI methods from other fields and also developing their own models to use in materials research. Using AI to predict properties of different materials opens up completely new possibilities,” says Rickard Armiento, associate professor at the Department of Physics, Chemistry and Biology (IFM) at Linköping University in Sweden.
    Today, supercomputers run many demanding simulations that describe how electrons move in materials, the behavior that gives rise to different material properties. These advanced calculations yield large amounts of data that can be used to train machine learning models.
    These AI models can then immediately predict the responses to new calculations that have not yet been made, and by extension predict the properties of new materials. But huge amounts of data are required to train the models.
    “We’re moving into an era where we want to train models on all data that exist,” says Rickard Armiento.
    Data from large-scale simulations, and general data about materials, are collected in large databases. Over time, many such databases have emerged from different research groups and projects, like isolated islands in the sea. They work differently and often use properties that are defined in different ways.
    “Researchers at universities or in industry who want to map materials on a large scale or want to train an AI model must retrieve information from these databases. Therefore, a standard is needed so that users can communicate with all these data libraries and understand the information they receive,” says Gian-Marco Rignanese, professor at the Institute of Condensed Matter and Nanosciences at UCLouvain in Belgium.
    The OPTIMADE (Open databases integration for materials design) standard has been developed over the past eight years. Behind this standard is a large international network with over 30 institutions worldwide and large materials databases in Europe and the USA. The aim is to give users easier access to both leading and lesser-known materials databases. A new version of the standard, v1.2, is now being released, and is described in an article published in the journal Digital Discovery. One of the biggest changes in the new version is a greatly enhanced possibility to accurately describe different material properties and other data using common, well-founded definitions.
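    In practice, OPTIMADE defines a common REST API and filter language that every compliant database understands. As a rough sketch of what a client-side query looks like (the provider address below is a placeholder; the filter grammar follows the published specification):

```python
# Build an OPTIMADE /v1/structures query URL. The base URL is a placeholder
# provider; real providers are listed in the OPTIMADE providers registry.
from urllib.parse import urlencode

def optimade_structures_url(base_url, filter_expr, response_fields=None):
    """Assemble a query URL from an OPTIMADE filter expression."""
    params = {"filter": filter_expr}
    if response_fields:
        params["response_fields"] = ",".join(response_fields)
    return f"{base_url.rstrip('/')}/v1/structures?{urlencode(params)}"

# All structures containing both silicon and oxygen, requesting only the
# reduced chemical formula in the response:
url = optimade_structures_url(
    "https://example-provider.org/optimade",   # placeholder provider
    'elements HAS ALL "Si","O"',
    response_fields=["chemical_formula_reduced"],
)
print(url)
```

    Because every database accepts the same filter string, the identical query can be sent to any of the 30-plus participating providers and the JSON responses compared directly.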
    The international collaboration spans the EU, the UK, the US, Mexico, Japan and China together with institutions such as École Polytechnique Fédérale de Lausanne (EPFL), University of California Berkeley, University of Cambridge, Northwestern University, Duke University, Paul Scherrer Institut, and Johns Hopkins University. Much of the collaboration takes place in meetings with annual workshops funded by CECAM (Centre Européen de Calcul Atomique et Moléculaire) in Switzerland, with the first one funded by the Lorentz Center in the Netherlands. Other activities have been supported by the organisation Psi-k, the competence centre NCCR MARVEL in Switzerland, and the e-Science Research Centre (SeRC) in Sweden. The researchers in the collaboration receive support from many different financiers.

  • Researchers engineer AI path to prevent power outages

    University of Texas at Dallas researchers have developed an artificial intelligence (AI) model that could help electrical grids prevent power outages by automatically rerouting electricity in milliseconds.
    The UT Dallas researchers, who collaborated with engineers at the University at Buffalo in New York, demonstrated the automated system in a study published online June 4 in Nature Communications.
    The approach is an early example of “self-healing grid” technology, which uses AI to detect problems such as storm-damaged power lines and repair them autonomously, without human intervention.
    The North American grid is an extensive, complex network of transmission and distribution lines, generation facilities and transformers that distributes electricity from power sources to consumers.
    Using various scenarios in a test network, the researchers demonstrated that their solution can automatically identify alternative routes to transfer electricity to users before an outage occurs. AI has the advantage of speed: The system can automatically reroute electrical flow in microseconds, while current human-controlled processes to determine alternate paths could take from minutes to hours.
    “Our goal is to find the optimal path to send power to the majority of users as quickly as possible,” said Dr. Jie Zhang, associate professor of mechanical engineering in the Erik Jonsson School of Engineering and Computer Science. “But more research is needed before this system can be implemented.”
    Zhang, who is co-corresponding author of the study, and his colleagues used technology that applies machine learning to graphs in order to map the complex relationships between entities that make up a power distribution network. Graph machine learning involves describing a network’s topology: the way the various components are arranged in relation to each other and how electricity moves through the system.
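    The core idea of topology-aware rerouting can be illustrated with a toy graph search. Everything below is invented for illustration (the study uses graph machine learning and reinforcement learning, not a plain shortest-path search), but it shows why representing the grid as a graph makes finding an alternate route a well-posed problem:

```python
# Toy distribution network as an adjacency-list graph: substation S feeds
# buses A-D through a meshed set of lines. When a line fails, a graph
# search finds an alternative route from the source to the affected bus.
from collections import deque

def shortest_path(adj, source, target, failed_edges=frozenset()):
    """Breadth-first search that ignores failed lines (edges)."""
    prev = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nbr in adj[node]:
            if nbr not in prev and frozenset((node, nbr)) not in failed_edges:
                prev[nbr] = node
                queue.append(nbr)
    return None  # no route left: the consumer would see an outage

adj = {"S": ["A", "B"], "A": ["S", "C"], "B": ["S", "D"],
       "C": ["A", "D"], "D": ["B", "C"]}

print(shortest_path(adj, "S", "D"))                                        # normal route
print(shortest_path(adj, "S", "D", failed_edges={frozenset(("B", "D"))}))  # line B-D down
```

    A learned model replaces this exhaustive search with a near-instant prediction, which is where the speed advantage over human-operated rerouting comes from.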

    Network topology also may play a critical role in applying AI to solve problems in other complex systems, such as critical infrastructure and ecosystems, said study co-author Dr. Yulia Gel, professor of mathematical sciences in the School of Natural Sciences and Mathematics.
    “In this interdisciplinary project, by leveraging our team expertise in power systems, mathematics and machine learning, we explored how we can systematically describe various interdependencies in the distribution systems using graph abstractions,” Gel said. “We then investigated how the underlying network topology, integrated into the reinforcement learning framework, can be used for more efficient outage management in the power distribution system.”
    The researchers’ approach relies on reinforcement learning that makes the best decisions to achieve optimal results. Led by co-corresponding author Dr. Souma Chowdhury, associate professor of mechanical and aerospace engineering, University at Buffalo researchers focused on the reinforcement learning aspect of the project.
    If electricity is blocked due to line faults, the system is able to reconfigure using switches and draw power from available sources in close proximity, such as from large-scale solar panels or batteries on a university campus or business, said Roshni Anna Jacob, a UTD electrical engineering doctoral student and the paper’s co-first author.
    “You can leverage those power generators to supply electricity in a specific area,” Jacob said.
    After focusing on preventing outages, the researchers will aim to develop similar technology to repair and restore the grid after a power disruption.

  • Novel blood-powered chip offers real-time health monitoring

    Metabolic disorders, like diabetes and osteoporosis, are burgeoning throughout the world, especially in developing countries.
    These disorders are typically diagnosed with a blood test, but because the existing healthcare infrastructure in remote areas cannot support such tests, most individuals go undiagnosed and untreated. Conventional methods also involve labor-intensive and invasive processes that tend to be time-consuming and make real-time monitoring unfeasible, particularly in real-life settings and rural populations.
    Researchers at the University of Pittsburgh and University of Pittsburgh Medical Center are proposing a new device that uses blood to generate electricity and measure its conductivity, opening doors to medical care in any location.
    “As the fields of nanotechnology and microfluidics continue to advance, there is a growing opportunity to develop lab-on-a-chip devices capable of overcoming the constraints of modern medical care,” said Amir Alavi, assistant professor of civil and environmental engineering at Pitt’s Swanson School of Engineering. “These technologies could potentially transform healthcare by offering quick and convenient diagnostics, ultimately improving patient outcomes and the effectiveness of medical services.”
    Now, We Got Good Blood
    Blood electrical conductivity is a valuable metric for assessing various health parameters and detecting medical conditions.
    This conductivity is predominantly governed by the concentration of essential electrolytes, notably sodium and chloride ions. These electrolytes are integral to a multitude of physiological processes, helping physicians pinpoint a diagnosis.

    “Blood is basically a water-based environment that has various molecules that conduct or impede electric currents,” explained Dr. Alan Wells, the medical director of UPMC Clinical Laboratories, Executive Vice Chairman, Section of Laboratory Medicine at University of Pittsburgh and UPMC, and Thomas Gill III Professor of Pathology, Pitt School of Medicine, Department of Pathology. “Glucose, for example, is an electrical conductor. We are able to see how it affects conductivity through these measurements. Thus, allowing us to make a diagnosis on the spot.”
    Despite its diagnostic value, knowledge of human blood conductivity is limited because of measurement challenges such as electrode polarization, limited access to human blood samples, and the complexity of maintaining blood temperature. Measuring conductivity at frequencies below 100 Hz is particularly important for gaining a deeper understanding of blood’s electrical properties and fundamental biological processes, but is even more difficult.
    A Pocket-Sized Lab
    The research team is proposing an innovative, portable millifluidic nanogenerator lab-on-a-chip device capable of measuring blood at low frequencies. The device utilizes blood as a conductive substance within its integrated triboelectric nanogenerator, or TENG. The proposed blood-based TENG system can convert mechanical energy into electricity via triboelectrification.
    This process involves the exchange of electrons between contacting materials, resulting in a charge transfer. In a TENG system, the electron transfer and charge separation generate a voltage difference that drives electric current when the materials experience relative motion like compression or sliding. The team analyzes the voltage generated by the device under predefined loading conditions to determine the electrical conductivity of the blood. The self-powering mechanism enables miniaturization of the proposed blood-based nanogenerator. The team also used AI models to directly estimate blood electrical conductivity using the voltage patterns generated by the device.
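    The final estimation step can be pictured as a calibration problem: map the voltage the TENG produces under a known loading cycle to a conductivity value. The sketch below uses an invented linear relationship and made-up numbers purely to illustrate the idea; the actual device relies on AI models fitted to full voltage patterns:

```python
# Hypothetical calibration sketch: estimate blood conductivity from the
# peak voltage a triboelectric nanogenerator (TENG) produces under a fixed
# loading cycle. All numbers and the linear form are invented.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Synthetic calibration set: peak voltage (V) vs. known conductivity (S/m).
peak_voltage = [0.8, 1.0, 1.2, 1.4, 1.6]
conductivity = [0.40, 0.50, 0.60, 0.70, 0.80]

a, b = fit_line(peak_voltage, conductivity)
estimate = a * 1.3 + b   # conductivity estimate for a new 1.3 V reading
print(f"estimated conductivity: {estimate:.2f} S/m")
```

    A single fitted line is of course far cruder than the AI models the team describes, but it shows why no external power or separate sensing electronics are needed: the measurement signal and the energy source are the same voltage.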
    To test the device’s accuracy, the team compared its results against a conventional blood test, and the measurements agreed. This opens the door to taking the testing to where people live. In addition, blood-powered nanogenerators are capable of functioning in the body wherever blood is present, enabling self-powered diagnostics using the local blood chemistry.

  • Prying open the AI black box

    Artificial intelligence continues to squirm its way into many aspects of our lives. But what about biology, the study of life itself? AI can sift through hundreds of thousands of genome data points to identify potential new therapeutic targets. While these genomic insights may appear helpful, scientists aren’t sure how today’s AI models come to their conclusions in the first place. Now, a new system named SQUID arrives on the scene armed to pry open AI’s black box of murky internal logic.
    SQUID, short for Surrogate Quantitative Interpretability for Deepnets, is a computational tool created by Cold Spring Harbor Laboratory (CSHL) scientists. It’s designed to help interpret how AI models analyze the genome. Compared with other analysis tools, SQUID is more consistent, reduces background noise, and can lead to more accurate predictions about the effects of genetic mutations.
    How does it work so much better? The key, CSHL Assistant Professor Peter Koo says, lies in SQUID’s specialized training.
    “The tools that people use to try to understand these models have been largely coming from other fields like computer vision or natural language processing. While they can be useful, they’re not optimal for genomics. What we did with SQUID was leverage decades of quantitative genetics knowledge to help us understand what these deep neural networks are learning,” explains Koo.
    SQUID works by first generating a library of over 100,000 variant DNA sequences. It then analyzes the library of mutations and their effects using a program called MAVE-NN (Multiplex Assays of Variant Effects Neural Network). This tool allows scientists to perform thousands of virtual experiments simultaneously. In effect, they can “fish out” the algorithms behind a given AI’s most accurate predictions. Their computational “catch” could set the stage for experiments that are more grounded in reality.
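    The surrogate-modeling idea behind this workflow can be sketched in a few lines: probe a black-box sequence model with a library of random DNA variants, then fit a simple additive model whose parameters are easy to inspect. The "deep model" below is a toy stand-in, not SQUID or MAVE-NN themselves:

```python
# Toy surrogate-interpretation sketch: score an in-silico variant library
# with a black-box model, then fit an inspectable additive surrogate.
import random

random.seed(0)
BASES = "ACGT"
L = 5

# Hidden "deep model": an additive position-weight matrix plus noise.
true_effect = {(i, b): random.gauss(0, 1) for i in range(L) for b in BASES}

def black_box(seq):
    return sum(true_effect[(i, b)] for i, b in enumerate(seq)) \
           + random.gauss(0, 0.05)

# 1) Generate a library of random variant sequences and score them all.
library = ["".join(random.choice(BASES) for _ in range(L)) for _ in range(4000)]
scores = [black_box(s) for s in library]

# 2) Fit the additive surrogate: mean score given base b at position i,
#    relative to the overall mean score.
mean = sum(scores) / len(scores)
surrogate = {}
for i in range(L):
    for b in BASES:
        subset = [s for s, seq in zip(scores, library) if seq[i] == b]
        surrogate[(i, b)] = sum(subset) / len(subset) - mean
```

    The fitted surrogate exposes, per position and per base, what the black box has learned, which is the sense in which the approach "fishes out" the algorithm behind the predictions; SQUID does this with far richer surrogates from quantitative genetics.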
    “In silico [virtual] experiments are no replacement for actual laboratory experiments. Nevertheless, they can be very informative. They can help scientists form hypotheses for how a particular region of the genome works or how a mutation might have a clinically relevant effect,” explains CSHL Associate Professor Justin Kinney, a co-author of the study.
    There are tons of AI models in the sea. More enter the waters each day. Koo, Kinney, and colleagues hope that SQUID will help scientists grab hold of those that best meet their specialized needs.
    Though mapped, the human genome remains an incredibly challenging terrain. SQUID could help biologists navigate the field more effectively, bringing them closer to their findings’ true medical implications.

  • Unifying behavioral analysis through animal foundation models

    Behavioral analysis can provide a lot of information about the health status or motivations of a living being. A new technology developed at EPFL makes it possible for a single deep learning model to detect animal motion across many species and environments. This “foundation model,” called SuperAnimal, can be used for animal conservation, biomedicine, and neuroscience research.
    Although there is the saying, “straight from the horse’s mouth,” it’s impossible to get a horse to tell you if it’s in pain or experiencing joy. Yet its body will express the answer in its movements. To a trained eye, pain will manifest as a change in gait, or in the case of joy, the facial expressions of the animal could change. But what if we could automate this with AI? And what about AI models for cows, dogs, cats, or even mice? Automating animal behavior analysis not only removes observer bias but also helps humans get to the right answer more efficiently.
    Today marks the beginning of a new chapter in posture analysis for behavioral phenotyping. Mackenzie Mathis’ laboratory at EPFL publishes a Nature Communications article describing a particularly effective new open-source tool that requires no human annotations to get the model to track animals. Named “SuperAnimal,” it can automatically recognize, without human supervision, the location of “keypoints” (typically joints) in a whole range of animals — over 45 animal species — and even in mythical ones!
    “The current pipeline allows users to tailor deep learning models, but this then relies on human effort to identify keypoints on each animal to create a training set,” explains Mackenzie Mathis. “This leads to duplicated labeling efforts across researchers and can lead to different semantic labels for the same keypoints, making merging data to train large foundation models very challenging. Our new method provides a new approach to standardize this process and train large-scale datasets. It also makes labeling 10 to 100 times more effective than current tools.”
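    The merging problem Mathis describes (the same joint carrying different names in different labs) is solvable once everyone maps onto one shared vocabulary. A minimal sketch, with invented label names standing in for real lab conventions:

```python
# Minimal sketch of keypoint-label harmonization: map each lab-specific
# keypoint name onto one shared vocabulary so datasets can be merged into
# a single training set. All label names here are invented examples.

CANONICAL = {
    "nose_tip": "nose",
    "snout": "nose",
    "l_ear": "left_ear",
    "ear_left": "left_ear",
    "tailbase": "tail_base",
    "tail_start": "tail_base",
}

def harmonize(annotations):
    """Rename keypoints in one dataset's {label: (x, y)} annotation dict."""
    return {CANONICAL.get(label, label): xy for label, xy in annotations.items()}

lab_a = {"snout": (10, 12), "l_ear": (8, 9), "tail_start": (40, 30)}
lab_b = {"nose_tip": (11, 13), "ear_left": (7, 10), "tailbase": (39, 31)}

print(harmonize(lab_a))
print(harmonize(lab_b))
```

    SuperAnimal learns such a harmonized language automatically during pre-training rather than from a hand-written table, which is what lets it pool annotations across databases at foundation-model scale.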
    The “SuperAnimal method” is an evolution of a pose estimation technique that Mackenzie Mathis’ laboratory had already distributed under the name “DeepLabCut™.” You can read more about this game-changing tool and its origin in this new Nature technology feature.
    “Here, we have developed an algorithm capable of compiling a large set of annotations across databases and train the model to learn a harmonized language — we call this pre-training the foundation model,” explains Shaokai Ye, a PhD student researcher and first author of the study. “Then users can simply deploy our base model or fine-tune it on their own data, allowing for further customization if needed.”
    These advances will make motion analysis much more accessible. “Veterinarians could be particularly interested, as well as those in biomedical research — especially when it comes to observing the behavior of laboratory mice. But it can go further,” says Mackenzie Mathis, mentioning neuroscience and… athletes (canine or otherwise)! Other species — birds, fish, and insects — are also within the scope of the model’s next evolution. “We also will leverage these models in natural language interfaces to build even more accessible and next-generation tools. For example, Shaokai and I, along with our co-authors at EPFL, recently developed AmadeusGPT, published at NeurIPS, that allows for querying video data with written or spoken text. Expanding this for complex behavioral analysis will be very exciting.” SuperAnimal is now available to researchers worldwide through its open-source distribution.

  • Controlling electronics with light: The magnetite breakthrough

    “Some time ago, we showed that it is possible to induce an inverse phase transition in magnetite,” says physicist Fabrizio Carbone at EPFL. “It’s as if you took water and you could turn it into ice by putting energy into it with a laser. This is counterintuitive as normally to freeze water you cool it down, i.e. remove energy from it.”
    Now, Carbone has led a research project to elucidate and control the microscopic structural properties of magnetite during such light-induced phase transitions. The study discovered that photoexcitation at specific light wavelengths can drive magnetite into distinct non-equilibrium metastable states (“metastable” means that the state can change under certain conditions) called “hidden phases,” revealing a new protocol for manipulating material properties at ultrafast timescales. The findings, which could impact the future of electronics, are published in PNAS.
    What are “non-equilibrium states”? An “equilibrium state” is basically a stable state where a material’s properties do not change over time because the forces within it are balanced. When this is disrupted, the material (the “system,” to be accurate in terms of physics) is said to enter a non-equilibrium state, exhibiting properties that can border on the exotic and unpredictable.
    The “hidden phases” of magnetite
    A phase transition is a change in a material’s state, due to changes in temperature, pressure, or other external conditions. An everyday example is water going from solid ice to liquid or from liquid to gas when it boils.
    Phase transitions in materials usually follow predictable pathways under equilibrium conditions. But when materials are driven out of equilibrium, they can start showing so called “hidden phases” — intermediate states that are not normally accessible. Observing hidden phases requires advanced techniques that can capture rapid and minute changes in the material’s structure.
    Magnetite (Fe3O4) is a well-studied material known for its intriguing metal-to-insulator transition at low temperatures, going from conducting electricity to blocking it. This is known as the Verwey transition, and it changes magnetite’s electronic and structural properties significantly.

    With its complex interplay of crystal structure, charge, and orbital orders, magnetite can undergo this metal-insulator transition at around 125 K.
    Ultrafast lasers induce hidden transitions in magnetite
    “To understand this phenomenon better, we did this experiment where we directly looked at the atomic motions happening during such a transformation,” says Carbone. “We found out that laser excitation takes the solid into some different phases that don’t exist in equilibrium conditions.”
    The experiments used two different wavelengths of light: near-infrared (800 nm) and visible (400 nm). When excited with 800 nm light pulses, the magnetite’s structure was disrupted, creating a mix of metallic and insulating regions. In contrast, 400 nm light pulses made the magnetite a more stable insulator.
    To monitor the structural changes in magnetite induced by laser pulses, the researchers used ultrafast electron diffraction, a technique that can “see” the movements of atoms in materials on sub-picosecond timescales (a picosecond is a trillionth of a second).
    The technique allowed the scientists to observe how the different wavelengths of laser light actually affect the structure of the magnetite on an atomic scale.

    Magnetite’s crystal structure is what is referred to as a “monoclinic lattice,” where the unit cell is shaped like a skewed box, with three unequal edges, and two of its angles are 90 degrees while the third is different.
    When the 800 nm light shone on the magnetite, it induced a rapid compression of the magnetite’s monoclinic lattice, transforming it towards a cubic structure. This takes place in three stages over 50 picoseconds and suggests that complex dynamic interactions are happening within the material. Conversely, the 400 nm visible light caused the lattice to expand, reinforcing the monoclinic structure and creating a more ordered phase: a stable insulator.
    Fundamental implications and technological applications
    The study reveals that the electronic properties of magnetite can be controlled by selectively using different light wavelengths. Understanding these light-induced transitions provides valuable insights into the fundamental physics of strongly correlated systems.
    “Our study breaks ground for a novel approach to control matter at ultrafast timescale using tailored photon pulses,” write the researchers. Being able to induce and control hidden phases in magnetite could have significant implications for the development of advanced materials and devices. For instance, materials that can switch between different electronic states quickly and efficiently could be used in next-generation computing and memory devices.