More stories

  • Robots face the future

    Researchers have found a way to bind engineered skin tissue to the complex forms of humanoid robots. This brings with it potential benefits to robotic platforms such as increased mobility, self-healing abilities, embedded sensing capabilities and an increasingly lifelike appearance. Taking inspiration from human skin ligaments, the team, led by Professor Shoji Takeuchi of the University of Tokyo, included special perforations in a robot face, which helped a layer of skin take hold. Their research could be useful in the cosmetics industry and to help train plastic surgeons.
    Takeuchi is a pioneer in the field of biohybrid robotics, where biology and mechanical engineering meet. So far, his lab, the Biohybrid Systems Laboratory, has created mini robots that walk using biological muscle tissue, 3D printed lab-grown meat, engineered skin that can heal, and more. It was during research on the last of these items that Takeuchi felt the need to take the idea of robotic skin further to improve its properties and capabilities.
    “During previous research on a finger-shaped robot covered in engineered skin tissue we grew in our lab, I felt the need for better adhesion between the robotic features and the subcutaneous structure of the skin,” said Takeuchi. “By mimicking human skin-ligament structures and by using specially made V-shaped perforations in solid materials, we found a way to bind skin to complex structures. The natural flexibility of the skin and the strong method of adhesion mean the skin can move with the mechanical components of the robot without tearing or peeling away.”
    Previous methods to attach skin tissue to solid surfaces involved things like mini anchors or hooks, but these limited the kinds of surfaces that could receive skin coatings and could cause damage during motion. By carefully engineering small perforations instead, essentially any shape of surface can have skin applied to it. The trick the team employed was to use a special collagen gel for adhesion, which is naturally viscous and therefore difficult to feed into the minuscule perforations. But using a common technique for plastic adhesion called plasma treatment, they managed to coax the collagen into the fine structures of the perforations while also holding the skin close to the surface in question.
    “Manipulating soft, wet biological tissues during the development process is much harder than people outside the field might think. For instance, if sterility is not maintained, bacteria can enter and the tissue will die,” said Takeuchi. “However, now that we can do this, living skin can bring a range of new abilities to robots. Self-healing is a big deal — some chemical-based materials can be made to heal themselves, but they require triggers such as heat, pressure or other signals, and they also do not proliferate like cells. Biological skin repairs minor lacerations as ours does, and nerves and other skin organs can be added for use in sensing and so on.”
    This research was not just made to prove a point, though. Takeuchi and his lab have a goal in mind for this application that could help in several areas of medical research. The idea of an organ-on-a-chip is not especially new, and finds use in things like drug development, but something like a face-on-a-chip could be useful in research into skin aging, cosmetics, surgical procedures, plastic surgery and more. Also, if sensors can be embedded, robots may be endowed with better environmental awareness and improved interactive capabilities.
    “In this study, we managed to replicate human appearance to some extent by creating a face with the same surface material and structure as humans,” said Takeuchi. “Additionally, through this research, we identified new challenges, such as the necessity for surface wrinkles and a thicker epidermis to achieve a more humanlike appearance. We believe that creating a thicker and more realistic skin can be achieved by incorporating sweat glands, sebaceous glands, pores, blood vessels, fat and nerves. Of course, movement is also a crucial factor, not just the material, so another important challenge is creating humanlike expressions by integrating sophisticated actuators, or muscles, inside the robot. Creating robots that can heal themselves, sense their environment more accurately and perform tasks with humanlike dexterity is incredibly motivating.”

  • 3D-printed chip sensor detects foodborne pathogens for safer products

    Every so often, a food product is recalled because of some sort of contamination. For consumers of such products, a recall can trigger doubt in the safety and reliability of what they eat and drink. In many cases, a recall will come too late to keep some people from getting ill.
    In spite of the food industry’s efforts to fight pathogens, products are still contaminated and people still get sick. Much of the problem stems from the tools available to screen for harmful pathogens, which are often not effective enough at protecting the public.
    In AIP Advances, published by AIP Publishing, researchers from Guangdong University of Technology and Pudong New District People’s Hospital describe a new method for detecting foodborne pathogens that is faster, cheaper, and more effective than existing methods. The researchers hope their technique can improve screening processes and keep contaminated food out of the hands of consumers.
    Even with the best detection method, finding contaminating pathogens is not an easy task.
    “Detecting these pathogens is challenging, due to their diverse nature and the various environments in which they can thrive,” said author Silu Feng. “Additionally, low concentrations of pathogens in large food samples, the presence of similar non-pathogenic organisms, and the complex nature of different food types make accurate and rapid detection difficult.”
    Detection methods do exist, such as cell culture and DNA sequencing, but they are challenging to employ at large scales. Not every batch of food can be thoroughly tested, so some contaminants inevitably slip through.
    “Overall, these methods face limitations such as lengthy result times, the need for specialized equipment and trained personnel, and challenges in detecting multiple pathogens simultaneously, highlighting the need for improved detection techniques,” said Feng.

    The study’s authors decided to take a different approach, designing a microfluidic chip that uses light to detect multiple types of pathogens simultaneously. Their chip is created using 3D printing, making it easy to fabricate in large amounts and modify to target specific pathogens.
    The chip is split into four sections, each tailored to detect a specific pathogen. If that pathogen is present in the sample, it binds to a detection surface and changes the surface’s optical properties. This arrangement let the researchers quickly detect several common bacteria, including E. coli, Salmonella, Listeria, and S. aureus, at very low concentrations.
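Conceptually, a multi-section readout like this reduces to comparing each section’s optical shift against a detection threshold. A minimal, purely illustrative sketch (the threshold, signal values, and channel names are invented for the example, not taken from the study):

```python
def detect_pathogens(channel_shifts, threshold=0.05):
    """Flag pathogens whose chip section shows an optical shift at or
    above a detection threshold (threshold value is illustrative)."""
    return [name for name, shift in channel_shifts.items() if shift >= threshold]

# Hypothetical readings from the four sections of one chip.
reading = {"E. coli": 0.12, "Salmonella": 0.01,
           "Listeria": 0.08, "S. aureus": 0.02}
print(detect_pathogens(reading))  # ['E. coli', 'Listeria']
```

Because each section reports independently, multiple pathogens can be flagged from a single pass over the sample, which is the efficiency gain the authors highlight.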
    “This method can quickly and effectively detect multiple different pathogens, and the detection results are easy to interpret, significantly improving detection efficiency,” said Feng.
    The team plans to continue developing their device to make it even more applicable for food screening.

  • Meet CARMEN, a robot that helps people with mild cognitive impairment

    Meet CARMEN, short for Cognitively Assistive Robot for Motivation and Neurorehabilitation: a small, tabletop robot designed to help people with mild cognitive impairment (MCI) learn skills to improve memory, attention, and executive functioning at home.
    Unlike other robots in this space, CARMEN was developed by the research team at the University of California San Diego in collaboration with clinicians, people with MCI, and their care partners. To the best of the researchers’ knowledge, CARMEN is also the only robot that teaches compensatory cognitive strategies to help improve memory and executive function.
    “We wanted to make sure we were providing meaningful and practical interventions,” said Laurel Riek, a professor of computer science and emergency medicine at UC San Diego and the work’s senior author.
    MCI is an intermediate stage between typical aging and dementia. It affects various areas of cognitive functioning, including memory, attention, and executive functioning. About 20% of individuals over 65 have the condition, and up to 15% of them transition to dementia each year. Existing pharmacological treatments have not been able to slow or prevent this progression, but behavioral treatments can help.
    Researchers programmed CARMEN to deliver a series of simple cognitive training exercises. For example, the robot can teach participants to create routine places to leave important objects, such as keys; or learn note taking strategies to remember important things. CARMEN does this through interactive games and activities.
    The research team designed CARMEN with a clear set of criteria in mind. People need to be able to use the robot independently, without clinician or researcher supervision, so CARMEN had to be plug-and-play, without many moving parts that require maintenance. The robot also has to function with limited access to the internet, as many people do not have reliable connectivity, and it needs to keep working over long periods of time. Finally, CARMEN has to communicate clearly with users; express compassion and empathy for a person’s situation; and provide breaks after challenging tasks to help sustain engagement.
    Researchers deployed CARMEN for a week in the homes of several people with MCI, who then engaged in multiple tasks with the robot, such as identifying routine places to leave household items so they don’t get lost, and placing tasks on a calendar so they won’t be forgotten. Researchers also deployed the robot in the homes of several clinicians with experience working with people with MCI. Both groups of participants completed questionnaires and interviews before and after the week-long deployments.

    After the week with CARMEN, participants with MCI reported trying strategies and behaviors that they previously had written off as impossible. All participants reported that using the robot was easy. Two out of the three participants found the activities easy to understand, but one of the users struggled. All said they wanted more interaction with the robot.
    “We found that CARMEN gave participants confidence to use cognitive strategies in their everyday life, and participants saw opportunities for CARMEN to exhibit greater levels of autonomy or be used for other applications,” the researchers write.
    The research team presented their findings at the ACM/IEEE Human Robot Interaction (HRI) conference in March 2024, where they received a best paper award nomination.
    Next steps
    Next steps include deploying the robot in a larger number of homes.
    Researchers also plan to give CARMEN the ability to have conversations with users, with an emphasis on preserving privacy when these conversations happen. This is partly an accessibility issue, as some users might not have the fine motor skills needed to interact with CARMEN’s touch screen, and partly because most people expect to be able to have conversations with systems in their homes. At the same time, researchers want to limit how much information CARMEN can give users. “We want to be mindful that the user still needs to do the bulk of the work, so the robot can only assist and not give too many hints,” Riek said.
    Researchers are also exploring how CARMEN could assist users with other conditions, such as ADHD.
    The UC San Diego team built CARMEN based on the FLEXI robot from the University of Washington, but they made substantial changes to its hardware and wrote all its software from scratch, using ROS (the Robot Operating System) as the underlying framework.
    Many elements of the project are available at https://github.com/UCSD-RHC-Lab/CARMEN

  • Novel application of optical tweezers: Colorfully showing molecular energy transfer

    A novel technique with potential applications for fields such as droplet chemistry and photochemistry has been demonstrated by an Osaka Metropolitan University-led research group.
    Professor Yasuyuki Tsuboi of the Graduate School of Science and the team investigated Förster resonance energy transfer (FRET), a phenomenon seen in photosynthesis and other natural processes where a donor molecule in an excited state transfers energy to an acceptor molecule.
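The efficiency of this transfer falls off steeply with the donor-acceptor separation, following the textbook Förster relation (general FRET physics, not an equation from this particular study):

```python
def fret_efficiency(r, r0):
    """Förster transfer efficiency for donor-acceptor separation r,
    given the Förster radius r0 (the distance at which efficiency is 50%):
    E = 1 / (1 + (r/r0)**6)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

# At r = r0 the efficiency is exactly 0.5; halving the separation pushes
# it close to 1, while doubling it drops it close to 0 — which is why
# FRET is so sensitive to how closely donor and acceptor molecules mix.
```

That sixth-power sensitivity is what makes the color change described below a sharp, readable signal of the dyes being driven together.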
    Using dyes to mark the donor and acceptor molecules, the team set out to see if FRET could be controlled by the intensity of an optical force, in this case one exerted by a laser beam. By focusing the laser on an isolated polymer droplet, the team showed that increased intensity accelerated the energy transfer, made visible by the polymer changing color as the dyes mixed.
    Fluorescence could also be controlled just by adjusting the laser intensity without touching the sample, offering a novel non-contact approach.
    “Although this research is still at a basic stage, it may provide new options for a variety of future FRET research applications,” Professor Tsuboi explained. “We believe that extending this to quantum dots as well as new polymer systems and fluorescent molecules is the next challenge.”

  • International collaboration lays the foundation for future AI for materials

    Artificial intelligence (AI) is accelerating the development of new materials. A prerequisite for AI in materials research is large-scale use and exchange of data on materials, which is facilitated by a broad international standard. A major international collaboration now presents an extended version of the OPTIMADE standard.
    New technologies in areas such as energy and sustainability, involving for example batteries, solar cells, LED lighting and biodegradable materials, require new materials. Many researchers around the world are working to create materials that have never existed before. But it is a major challenge to create materials with exactly the required properties, such as being free of environmentally hazardous substances while remaining durable enough not to break down.
    “We’re now seeing an explosive development where researchers in materials science are adopting AI methods from other fields and also developing their own models to use in materials research. Using AI to predict properties of different materials opens up completely new possibilities,” says Rickard Armiento, associate professor at the Department of Physics, Chemistry and Biology (IFM) at Linköping University in Sweden.
    Today, supercomputers run many demanding simulations that describe how electrons move in materials, behavior that gives rise to different material properties. These advanced calculations yield large amounts of data that can be used to train machine learning models.
    These AI models can then immediately predict the responses to new calculations that have not yet been made, and by extension predict the properties of new materials. But huge amounts of data are required to train the models.
    “We’re moving into an era where we want to train models on all data that exist,” says Rickard Armiento.
    Data from large-scale simulations, and general data about materials, are collected in large databases. Over time, many such databases have emerged from different research groups and projects, like isolated islands in the sea. They work differently and often use properties that are defined in different ways.
    “Researchers at universities or in industry who want to map materials on a large scale or want to train an AI model must retrieve information from these databases. Therefore, a standard is needed so that users can communicate with all these data libraries and understand the information they receive,” says Gian-Marco Rignanese, professor at the Institute of Condensed Matter and Nanosciences at UCLouvain in Belgium.
    The OPTIMADE (Open databases integration for materials design) standard has been developed over the past eight years. Behind this standard is a large international network with over 30 institutions worldwide and large materials databases in Europe and the USA. The aim is to give users easier access to both leading and lesser-known materials databases. A new version of the standard, v1.2, is now being released, and is described in an article published in the journal Digital Discovery. One of the biggest changes in the new version is a greatly enhanced possibility to accurately describe different material properties and other data using common, well-founded definitions.
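In practice, the standard means the same filter expression can be sent to any compliant database. A minimal sketch of assembling such a query, assuming a placeholder provider URL (the filter grammar and the `/v1/structures` endpoint come from the OPTIMADE specification):

```python
from urllib.parse import urlencode

def optimade_query(base_url, filter_expr, response_fields=None):
    """Build an OPTIMADE structures-search URL from a filter expression."""
    params = {"filter": filter_expr}
    if response_fields:
        params["response_fields"] = ",".join(response_fields)
    return f"{base_url}/v1/structures?{urlencode(params)}"

# Find binary silicon oxides; the endpoint below is a placeholder,
# not a real provider.
url = optimade_query(
    "https://example-provider.org/optimade",
    'elements HAS ALL "Si","O" AND nelements=2',
    response_fields=["chemical_formula_reduced"],
)
```

Because every provider accepts the same grammar, a script like this can loop over many databases with no per-provider code, which is exactly the interoperability the standard is meant to deliver.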
    The international collaboration spans the EU, the UK, the US, Mexico, Japan and China together with institutions such as École Polytechnique Fédérale de Lausanne (EPFL), University of California Berkeley, University of Cambridge, Northwestern University, Duke University, Paul Scherrer Institut, and Johns Hopkins University. Much of the collaboration takes place in meetings with annual workshops funded by CECAM (Centre Européen de Calcul Atomique et Moléculaire) in Switzerland, with the first one funded by the Lorentz Center in the Netherlands. Other activities have been supported by the organisation Psi-k, the competence centre NCCR MARVEL in Switzerland, and the e-Science Research Centre (SeRC) in Sweden. The researchers in the collaboration receive support from many different financiers.

  • Researchers engineer AI path to prevent power outages

    University of Texas at Dallas researchers have developed an artificial intelligence (AI) model that could help electrical grids prevent power outages by automatically rerouting electricity in milliseconds.
    The UT Dallas researchers, who collaborated with engineers at the University at Buffalo in New York, demonstrated the automated system in a study published online June 4 in Nature Communications.
    The approach is an early example of “self-healing grid” technology, which uses AI to autonomously detect and repair problems such as storm-damaged power lines, without human intervention.
    The North American grid is an extensive, complex network of transmission and distribution lines, generation facilities and transformers that distributes electricity from power sources to consumers.
    Using various scenarios in a test network, the researchers demonstrated that their solution can automatically identify alternative routes to transfer electricity to users before an outage occurs. AI has the advantage of speed: The system can automatically reroute electrical flow in microseconds, while current human-controlled processes to determine alternate paths could take from minutes to hours.
    “Our goal is to find the optimal path to send power to the majority of users as quickly as possible,” said Dr. Jie Zhang, associate professor of mechanical engineering in the Erik Jonsson School of Engineering and Computer Science. “But more research is needed before this system can be implemented.”
    Zhang, who is co-corresponding author of the study, and his colleagues used technology that applies machine learning to graphs in order to map the complex relationships between entities that make up a power distribution network. Graph machine learning involves describing a network’s topology, the way the various components are arranged in relation to each other and how electricity moves through the system.
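As a purely illustrative sketch of topology-aware rerouting (not the authors’ reinforcement-learning model, and with hypothetical bus names), a distribution feeder can be represented as a graph and searched for a backup path once a faulted line is removed:

```python
from collections import deque

def alternate_path(lines, source, load, failed):
    """Breadth-first search for a backup route from a power source to a
    load after removing a failed line. `lines` is a list of undirected
    edges (pairs of buses); `failed` is the edge taken out of service."""
    graph = {}
    for a, b in lines:
        if {a, b} == set(failed):
            continue  # skip the faulted line
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    queue, seen = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        if path[-1] == load:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # the load cannot be re-energized

# Tiny test feeder: the substation feeds "home" directly or via a solar tie.
feeder = [("sub", "home"), ("sub", "solar"), ("solar", "home")]
print(alternate_path(feeder, "sub", "home", ("sub", "home")))
# ['sub', 'solar', 'home']
```

A real grid adds capacity limits, switching costs and timing constraints on top of this connectivity question, which is where the learned policy described in the article comes in.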

    Network topology also may play a critical role in applying AI to solve problems in other complex systems, such as critical infrastructure and ecosystems, said study co-author Dr. Yulia Gel, professor of mathematical sciences in the School of Natural Sciences and Mathematics.
    “In this interdisciplinary project, by leveraging our team expertise in power systems, mathematics and machine learning, we explored how we can systematically describe various interdependencies in the distribution systems using graph abstractions,” Gel said. “We then investigated how the underlying network topology, integrated into the reinforcement learning framework, can be used for more efficient outage management in the power distribution system.”
    The researchers’ approach relies on reinforcement learning, in which a system learns through trial and error which decisions lead to the best outcomes. Led by co-corresponding author Dr. Souma Chowdhury, associate professor of mechanical and aerospace engineering, University at Buffalo researchers focused on the reinforcement learning aspect of the project.
    If electricity is blocked due to line faults, the system is able to reconfigure using switches and draw power from available sources in close proximity, such as from large-scale solar panels or batteries on a university campus or business, said Roshni Anna Jacob, a UTD electrical engineering doctoral student and the paper’s co-first author.
    “You can leverage those power generators to supply electricity in a specific area,” Jacob said.
    After focusing on preventing outages, the researchers will aim to develop similar technology to repair and restore the grid after a power disruption.

  • Novel blood-powered chip offers real-time health monitoring

    Metabolic disorders, like diabetes and osteoporosis, are increasingly common throughout the world, especially in developing countries.
    The diagnosis for these disorders is typically a blood test, but because the existing healthcare infrastructure in remote areas is unable to support these tests, most individuals go undiagnosed and without treatment. Conventional methods also involve labor-intensive and invasive processes which tend to be time-consuming and make real-time monitoring unfeasible, particularly in real-life settings and rural populations.
    Researchers at the University of Pittsburgh and University of Pittsburgh Medical Center are proposing a new device that uses blood to generate electricity and measure its conductivity, opening doors to medical care in any location.
    “As the fields of nanotechnology and microfluidics continue to advance, there is a growing opportunity to develop lab-on-a-chip devices capable of overcoming the constraints of modern medical care,” said Amir Alavi, assistant professor of civil and environmental engineering at Pitt’s Swanson School of Engineering. “These technologies could potentially transform healthcare by offering quick and convenient diagnostics, ultimately improving patient outcomes and the effectiveness of medical services.”
    Now, We Got Good Blood
    Blood electrical conductivity is a valuable metric for assessing various health parameters and detecting medical conditions.
    This conductivity is predominantly governed by the concentration of essential electrolytes, notably sodium and chloride ions. These electrolytes are integral to a multitude of physiological processes, helping physicians pinpoint a diagnosis.

    “Blood is basically a water-based environment that has various molecules that conduct or impede electric currents,” explained Dr. Alan Wells, the medical director of UPMC Clinical Laboratories, Executive Vice Chairman, Section of Laboratory Medicine at University of Pittsburgh and UPMC, and Thomas Gill III Professor of Pathology, Pitt School of Medicine, Department of Pathology. “Glucose, for example, is an electrical conductor. We are able to see how it affects conductivity through these measurements. Thus, allowing us to make a diagnosis on the spot.”
    Despite its importance, knowledge of human blood conductivity is limited because of measurement challenges such as electrode polarization, limited access to human blood samples, and the difficulty of keeping blood at the right temperature. Measuring conductivity at frequencies below 100 Hz is particularly important for gaining a deeper understanding of the blood’s electrical properties and fundamental biological processes, but is even more difficult.
    A Pocket-Sized Lab
    The research team is proposing an innovative, portable millifluidic nanogenerator lab-on-a-chip device capable of measuring blood conductivity at low frequencies. The device uses blood as the conductive substance in its integrated triboelectric nanogenerator, or TENG. The proposed blood-based TENG system converts mechanical energy into electricity via triboelectrification.
    This process involves the exchange of electrons between contacting materials, resulting in a charge transfer. In a TENG system, the electron transfer and charge separation generate a voltage difference that drives electric current when the materials experience relative motion like compression or sliding. The team analyzes the voltage generated by the device under predefined loading conditions to determine the electrical conductivity of the blood. The self-powering mechanism enables miniaturization of the proposed blood-based nanogenerator. The team also used AI models to directly estimate blood electrical conductivity using the voltage patterns generated by the device.
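As a purely illustrative stand-in for that estimation step (the actual device uses AI models trained on real measurements; every number below is hypothetical), a calibration curve mapping generated voltage to conductivity could be fitted from samples of known conductivity:

```python
def fit_linear(voltages, conductivities):
    """Ordinary least-squares line through calibration points.
    Illustrative stand-in for the paper's AI-based estimator."""
    n = len(voltages)
    mx = sum(voltages) / n
    my = sum(conductivities) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(voltages, conductivities)) \
        / sum((x - mx) ** 2 for x in voltages)
    return slope, my - slope * mx

# Hypothetical calibration pairs: (peak TENG voltage, known conductivity in S/m)
slope, intercept = fit_linear([0.2, 0.4, 0.6], [0.5, 0.9, 1.3])
# Estimate conductivity for a new, unseen 0.5 V reading.
estimate = slope * 0.5 + intercept
```

The real voltage-conductivity relationship is unlikely to be this simple, which is presumably why the team turned to learned models rather than a fixed formula.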
    To test its accuracy, the team compared the device’s results with those of a traditional test, a comparison that proved successful. This opens the door to taking testing to where people live. In addition, blood-powered nanogenerators are capable of functioning in the body wherever blood is present, enabling self-powered diagnostics using the local blood chemistry.

  • Prying open the AI black box

    Artificial intelligence continues to squirm its way into many aspects of our lives. But what about biology, the study of life itself? AI can sift through hundreds of thousands of genome data points to identify potential new therapeutic targets. While these genomic insights may appear helpful, scientists aren’t sure how today’s AI models come to their conclusions in the first place. Now, a new system named SQUID arrives on the scene armed to pry open AI’s black box of murky internal logic.
    SQUID, short for Surrogate Quantitative Interpretability for Deepnets, is a computational tool created by Cold Spring Harbor Laboratory (CSHL) scientists. It’s designed to help interpret how AI models analyze the genome. Compared with other analysis tools, SQUID is more consistent, reduces background noise, and can lead to more accurate predictions about the effects of genetic mutations.
    How does it work so much better? The key, CSHL Assistant Professor Peter Koo says, lies in SQUID’s specialized training.
    “The tools that people use to try to understand these models have been largely coming from other fields like computer vision or natural language processing. While they can be useful, they’re not optimal for genomics. What we did with SQUID was leverage decades of quantitative genetics knowledge to help us understand what these deep neural networks are learning,” explains Koo.
    SQUID works by first generating a library of over 100,000 variant DNA sequences. It then analyzes the library of mutations and their effects using a program called MAVE-NN (Multiplex Assays of Variant Effects Neural Network). This tool allows scientists to perform thousands of virtual experiments simultaneously. In effect, they can “fish out” the algorithms behind a given AI’s most accurate predictions. Their computational “catch” could set the stage for experiments that are more grounded in reality.
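As an illustrative sketch of what generating such a variant library involves (this is not CSHL’s actual code; the starting sequence and mutation rate are invented for the example), each position of a reference sequence can be randomly substituted:

```python
import random

def variant_library(sequence, n_variants, mut_rate=0.1, seed=0):
    """Generate random mutants of a DNA sequence: each position is
    substituted with probability `mut_rate` (rate is illustrative).
    A fixed seed keeps the library reproducible."""
    rng = random.Random(seed)
    bases = "ACGT"
    library = []
    for _ in range(n_variants):
        variant = [
            rng.choice([b for b in bases if b != base])
            if rng.random() < mut_rate else base
            for base in sequence
        ]
        library.append("".join(variant))
    return library

# A toy library of 5 mutants of a 10-base reference sequence.
lib = variant_library("ACGTACGTAC", n_variants=5)
```

Scoring every variant with the AI model under study, then fitting an interpretable surrogate to those scores, is the step MAVE-NN handles in the real pipeline.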
    “In silico [virtual] experiments are no replacement for actual laboratory experiments. Nevertheless, they can be very informative. They can help scientists form hypotheses for how a particular region of the genome works or how a mutation might have a clinically relevant effect,” explains CSHL Associate Professor Justin Kinney, a co-author of the study.
    There are tons of AI models in the sea. More enter the waters each day. Koo, Kinney, and colleagues hope that SQUID will help scientists grab hold of those that best meet their specialized needs.
    Though mapped, the human genome remains an incredibly challenging terrain. SQUID could help biologists navigate the field more effectively, bringing them closer to their findings’ true medical implications.