More stories

  • Next platform for brain-inspired computing

    Computers have come so far in terms of their power and potential, rivaling and even eclipsing human brains in their ability to store and crunch data, make predictions and communicate. But there is one domain where human brains continue to dominate: energy efficiency.
    “The most efficient computers are still approximately four orders of magnitude — that’s 10,000 times — higher in energy requirements compared to the human brain for specific tasks such as image processing and recognition, although they outperform the brain in tasks like mathematical calculations,” said UC Santa Barbara electrical and computer engineering Professor Kaustav Banerjee, a world expert in the realm of nanoelectronics. “Making computers more energy efficient is crucial because the worldwide energy consumption by on-chip electronics stands at #4 in the global rankings of nation-wise energy consumption, and it is increasing exponentially each year, fueled by applications such as artificial intelligence.” Additionally, he said, the problem of energy inefficient computing is particularly pressing in the context of global warming, “highlighting the urgent need to develop more energy-efficient computing technologies.”
    Neuromorphic (NM) computing has emerged as a promising way to bridge the energy efficiency gap. By mimicking the structure and operations of the human brain, where processing occurs in parallel across an array of low power-consuming neurons, it may be possible to approach brain-like energy efficiency. In a paper published in the journal Nature Communications, Banerjee and co-workers Arnab Pal, Zichun Chai, Junkai Jiang and Wei Cao, in collaboration with researchers Vivek De and Mike Davies from Intel Labs, propose such an ultra-energy-efficient platform, using 2D transition metal dichalcogenide (TMD)-based tunnel-field-effect transistors (TFETs). Their platform, the researchers say, can bring the energy requirements to within two orders of magnitude (about 100 times) of those of the human brain.
    Leakage currents and subthreshold swing
    The concept of neuromorphic computing has been around for decades, though the research around it has intensified only relatively recently. Advances in circuitry that enable smaller, denser arrays of transistors, and therefore more processing and functionality for less power consumption, are just scratching the surface of what can be done to enable brain-inspired computing. Add to that the appetite generated by its many potential applications, such as AI and the Internet of Things, and it’s clear that expanding the options for a neuromorphic computing hardware platform must be addressed in order to move forward.
    Enter the team’s 2D tunnel transistors. Emerging out of Banerjee’s longstanding research efforts to develop high-performance, low-power transistors to meet the growing hunger for processing without a matching increase in power requirements, these atomically thin, nanoscale transistors are responsive at low voltages and, as the foundation of the researchers’ NM platform, can mimic the highly energy-efficient operation of the human brain. In addition to lower off-state currents, the 2D TFETs also have a low subthreshold swing (SS), a parameter that describes how effectively a transistor can switch from off to on. According to Banerjee, a lower SS means a lower operating voltage, and faster and more efficient switching.
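    For readers unfamiliar with the parameter, subthreshold swing is conventionally defined as the gate-voltage change needed to alter the drain current by a factor of ten; the relation below is standard device-physics textbook material, not something specific to this paper:

    $$ SS = \left( \frac{\partial \log_{10} I_D}{\partial V_{GS}} \right)^{-1} $$

    In conventional MOSFETs, thermionic emission pins SS to roughly 60 mV per decade at room temperature (the Boltzmann limit, (kT/q) ln 10), whereas tunnel FETs can drop below that limit because carriers enter the channel by band-to-band tunneling rather than by passing over a thermal barrier.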
    “Neuromorphic computing architectures are designed to operate with very sparse firing circuits,” said lead author Arnab Pal, “meaning they mimic how neurons in the brain fire only when necessary.” This event-driven approach contrasts with the conventional von Neumann architecture of today’s computers, in which data is processed sequentially, memory and processing components are separated, and power is drawn continuously throughout the entire operation. An NM computer, by contrast, fires up only when there is input to process, and memory and processing are distributed across an array of transistors. Companies like Intel and IBM have developed brain-inspired platforms, deploying billions of interconnected transistors and generating significant energy savings.
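    The sparse, event-driven firing Pal describes can be caricatured in a few lines of code. The sketch below is a generic leaky integrate-and-fire neuron offered purely for illustration; it is not the paper’s TFET circuit, and the numbers are arbitrary:

```python
# Minimal leaky integrate-and-fire neuron: illustrative only, not the paper's TFET circuit.
def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron fires for a stream of input currents."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        potential = leak * potential + current   # membrane potential decays, then integrates the input
        if potential >= threshold:               # event-driven: work happens only when a spike occurs
            spikes.append(t)
            potential = 0.0                      # reset after firing, mimicking the neuron's reset
    return spikes

# Sparse input leads to sparse firing, and hence little switching activity between events.
print(simulate_lif([0.0, 0.2, 0.0, 0.9, 0.3, 0.0, 0.0, 1.2]))  # -> [3, 7]
```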

    However, there’s still room for energy efficiency improvement, according to the researchers.
    “In these systems, most of the energy is lost through leakage currents when the transistors are off, rather than during their active state,” Banerjee explained. A ubiquitous phenomenon in the world of electronics, leakage currents are small amounts of electricity that flow through a circuit even when it is in the off state (but still connected to power). According to the paper, current NM chips use traditional metal-oxide-semiconductor field-effect transistors (MOSFETs) which have a high on-state current, but also high off-state leakage. “Since the power efficiency of these chips is constrained by the off-state leakage, our approach — using tunneling transistors with much lower off-state current — can greatly improve power efficiency,” Banerjee said.
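    As a rough back-of-the-envelope relation, offered here for context rather than taken from the paper, the static power burned by an idle transistor scales with its off-state current and the supply voltage,

    $$ P_{\text{static}} \approx I_{\text{off}} \cdot V_{DD}, $$

    so cutting the off-state current by orders of magnitude, as TFETs aim to do, directly shrinks the energy wasted while a sparsely firing circuit sits idle.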
    When integrated into a neuromorphic circuit, which emulates the firing and reset of neurons, the TFETs proved themselves more energy efficient than state-of-the-art MOSFETs, particularly FinFETs (a MOSFET design that incorporates vertical “fins” as a way to provide better control of switching and leakage). TFETs are still in the experimental stage; however, the performance and energy efficiency of neuromorphic circuits based on them make them a promising candidate for the next generation of brain-inspired computing.
    According to co-authors Vivek De (Intel Fellow) and Mike Davies (Director of Intel’s Neuromorphic Computing Lab), “Once realized, this platform can bring the energy consumption in chips to within two orders of magnitude with respect to the human brain — not accounting for the interface circuitry and memory storage elements. This represents a significant improvement from what is achievable today.”
    Eventually, one can realize three-dimensional versions of these 2D-TFET-based neuromorphic circuits to provide even closer emulation of the human brain, added Banerjee, widely recognized as one of the key visionaries behind 3D integrated circuits, which are now seeing wide-scale commercial proliferation.

  • Robots face the future

    Researchers have found a way to bind engineered skin tissue to the complex forms of humanoid robots. This brings with it potential benefits to robotic platforms such as increased mobility, self-healing abilities, embedded sensing capabilities and an increasingly lifelike appearance. Taking inspiration from human skin ligaments, the team, led by Professor Shoji Takeuchi of the University of Tokyo, included special perforations in a robot face, which helped a layer of skin take hold. Their research could be useful in the cosmetics industry and to help train plastic surgeons.
    Takeuchi is a pioneer in the field of biohybrid robotics, where biology and mechanical engineering meet. So far, his lab, the Biohybrid Systems Laboratory, has created mini robots that walk using biological muscle tissue, 3D printed lab-grown meat, engineered skin that can heal, and more. It was during research on the last of these items that Takeuchi felt the need to take the idea of robotic skin further to improve its properties and capabilities.
    “During previous research on a finger-shaped robot covered in engineered skin tissue we grew in our lab, I felt the need for better adhesion between the robotic features and the subcutaneous structure of the skin,” said Takeuchi. “By mimicking human skin-ligament structures and by using specially made V-shaped perforations in solid materials, we found a way to bind skin to complex structures. The natural flexibility of the skin and the strong method of adhesion mean the skin can move with the mechanical components of the robot without tearing or peeling away.”
    Previous methods to attach skin tissue to solid surfaces involved things like mini anchors or hooks, but these limited the kinds of surfaces that could receive skin coatings and could cause damage during motion. By carefully engineering small perforations instead, essentially any shape of surface can have skin applied to it. The trick the team employed was to use a special collagen gel for adhesion; the gel is naturally viscous and therefore difficult to feed into the minuscule perforations. But by using a common plastic-adhesion technique called plasma treatment, they managed to coax the collagen into the fine structures of the perforations while also holding the skin close to the surface in question.
    “Manipulating soft, wet biological tissues during the development process is much harder than people outside the field might think. For instance, if sterility is not maintained, bacteria can enter and the tissue will die,” said Takeuchi. “However, now that we can do this, living skin can bring a range of new abilities to robots. Self-healing is a big deal — some chemical-based materials can be made to heal themselves, but they require triggers such as heat, pressure or other signals, and they also do not proliferate like cells. Biological skin repairs minor lacerations as ours does, and nerves and other skin organs can be added for use in sensing and so on.”
    This research was not just made to prove a point, though. Takeuchi and his lab have a goal in mind for this application that could help in several areas of medical research. The idea of an organ-on-a-chip is not especially new, and finds use in things like drug development, but something like a face-on-a-chip could be useful in research into skin aging, cosmetics, surgical procedures, plastic surgery and more. Also, if sensors can be embedded, robots may be endowed with better environmental awareness and improved interactive capabilities.
    “In this study, we managed to replicate human appearance to some extent by creating a face with the same surface material and structure as humans,” said Takeuchi. “Additionally, through this research, we identified new challenges, such as the necessity for surface wrinkles and a thicker epidermis to achieve a more humanlike appearance. We believe that creating a thicker and more realistic skin can be achieved by incorporating sweat glands, sebaceous glands, pores, blood vessels, fat and nerves. Of course, movement is also a crucial factor, not just the material, so another important challenge is creating humanlike expressions by integrating sophisticated actuators, or muscles, inside the robot. Creating robots that can heal themselves, sense their environment more accurately and perform tasks with humanlike dexterity is incredibly motivating.”

  • 3D-printed chip sensor detects foodborne pathogens for safer products

    Every so often, a food product is recalled because of some sort of contamination. For consumers of such products, a recall can trigger doubt in the safety and reliability of what they eat and drink. In many cases, a recall will come too late to keep some people from getting ill.
    In spite of the food industry’s efforts to fight pathogens, products are still contaminated and people still get sick. Much of the problem stems from the tools available to screen for harmful pathogens, which are often not effective enough at protecting the public.
    In AIP Advances, by AIP Publishing, researchers from Guangdong University of Technology and Pudong New District People’s Hospital developed a new method for detecting foodborne pathogens that is faster, cheaper, and more effective than existing methods. The researchers hope their technique can improve screening processes and keep contaminated food out of the hands of consumers.
    Even with the best detection method, finding contaminating pathogens is not an easy task.
    “Detecting these pathogens is challenging, due to their diverse nature and the various environments in which they can thrive,” said author Silu Feng. “Additionally, low concentrations of pathogens in large food samples, the presence of similar non-pathogenic organisms, and the complex nature of different food types make accurate and rapid detection difficult.”
    Detection methods do exist, such as cell culture and DNA sequencing, but they are challenging to employ at large scales. Not every batch of food can be thoroughly tested, so some contaminants inevitably slip through.
    “Overall, these methods face limitations such as lengthy result times, the need for specialized equipment and trained personnel, and challenges in detecting multiple pathogens simultaneously, highlighting the need for improved detection techniques,” said Feng.

    The study’s authors decided to take a different approach, designing a microfluidic chip that uses light to detect multiple types of pathogens simultaneously. Their chip is created using 3D printing, making it easy to fabricate in large amounts and modify to target specific pathogens.
    The chip is split into four sections, each of which is tailored to detect a specific pathogen. If that pathogen is present in the sample, it will bind to a detection surface and change its optical properties. This arrangement let the researchers detect several common bacteria, such as E. coli, salmonella, listeria, and S. aureus, quickly and at very low concentrations.
    “This method can quickly and effectively detect multiple different pathogens, and the detection results are easy to interpret, significantly improving detection efficiency,” said Feng.
    The team plans to continue developing their device to make it even more applicable for food screening.

  • Meet CARMEN, a robot that helps people with mild cognitive impairment

    Meet CARMEN, short for Cognitively Assistive Robot for Motivation and Neurorehabilitation: a small tabletop robot designed to help people with mild cognitive impairment (MCI) learn skills to improve memory, attention, and executive functioning at home.
    Unlike other robots in this space, CARMEN was developed by the research team at the University of California San Diego in collaboration with clinicians, people with MCI, and their care partners. To the best of the researchers’ knowledge, CARMEN is also the only robot that teaches compensatory cognitive strategies to help improve memory and executive function.
    “We wanted to make sure we were providing meaningful and practical interventions,” said Laurel Riek, a professor of computer science and emergency medicine at UC San Diego and the work’s senior author.
    MCI is an intermediate stage between typical aging and dementia. It affects various areas of cognitive functioning, including memory, attention, and executive functioning. About 20% of individuals over 65 have the condition, with up to 15% of those transitioning to dementia each year. Existing pharmacological treatments have not been able to slow or prevent this progression, but behavioral treatments can help.
    Researchers programmed CARMEN to deliver a series of simple cognitive training exercises. For example, the robot can teach participants to create routine places to leave important objects, such as keys; or learn note taking strategies to remember important things. CARMEN does this through interactive games and activities.
    The research team designed CARMEN with a clear set of criteria in mind. People need to be able to use the robot independently, without clinician or researcher supervision, so CARMEN had to be plug and play, without many moving parts that require maintenance. The robot also has to function with limited access to the internet, as many people do not have reliable connectivity, and it has to keep working over a long period of time. Finally, the robot has to communicate clearly with users, express compassion and empathy for a person’s situation, and provide breaks after challenging tasks to help sustain engagement.
    Researchers deployed CARMEN for a week in the homes of several people with MCI, who then engaged in multiple tasks with the robot, such as identifying routine places to leave household items so they don’t get lost, and placing tasks on a calendar so they won’t be forgotten. Researchers also deployed the robot in the homes of several clinicians with experience working with people with MCI. Both groups of participants completed questionnaires and interviews before and after the week-long deployments.

    After the week with CARMEN, participants with MCI reported trying strategies and behaviors that they previously had written off as impossible. All participants reported that using the robot was easy. Two out of the three participants found the activities easy to understand, but one of the users struggled. All said they wanted more interaction with the robot.
    “We found that CARMEN gave participants confidence to use cognitive strategies in their everyday life, and participants saw opportunities for CARMEN to exhibit greater levels of autonomy or be used for other applications,” the researchers write.
    The research team presented their findings at the ACM/IEEE Human Robot Interaction (HRI) conference in March 2024, where they received a best paper award nomination.
    Next steps
    Next steps include deploying the robot in a larger number of homes.
    Researchers also plan to give CARMEN the ability to have conversations with users, with an emphasis on preserving privacy when these conversations happen. This is partly an accessibility issue, as some users might not have the fine motor skills necessary to interact with CARMEN’s touch screen, and partly because most people expect to be able to have conversations with systems in their homes. At the same time, researchers want to limit how much information CARMEN can give users. “We want to be mindful that the user still needs to do the bulk of the work, so the robot can only assist and not give too many hints,” Riek said.
    Researchers are also exploring how CARMEN could assist users with other conditions, such as ADHD.
    The UC San Diego team built CARMEN based on the FLEXI robot from the University of Washington, but they made substantial changes to its hardware and wrote all of its software from scratch. The researchers used ROS, the Robot Operating System middleware, as the robot’s software framework.
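    As a rough illustration of what building on ROS looks like in practice, the snippet below is a generic ROS 2 Python node, not code from the CARMEN repository, and the topic name and prompt text are made up:

```python
# Illustrative ROS 2 (rclpy) node only -- not code from the CARMEN repository.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class PromptPublisher(Node):
    """Publishes a hypothetical activity prompt once per second."""

    def __init__(self):
        super().__init__('prompt_publisher')
        # The topic name is invented for this sketch.
        self.pub = self.create_publisher(String, 'carmen/prompt', 10)
        self.timer = self.create_timer(1.0, self.tick)

    def tick(self):
        msg = String()
        msg.data = 'Where do you usually leave your keys?'
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(PromptPublisher())


if __name__ == '__main__':
    main()
```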
    Many elements of the project are available at https://github.com/UCSD-RHC-Lab/CARMEN

  • Novel application of optical tweezers: Colorfully showing molecular energy transfer

    A novel technique with potential applications for fields such as droplet chemistry and photochemistry has been demonstrated by an Osaka Metropolitan University-led research group.
    Professor Yasuyuki Tsuboi of the Graduate School of Science and the team investigated Förster resonance energy transfer (FRET), a phenomenon seen in photosynthesis and other natural processes where a donor molecule in an excited state transfers energy to an acceptor molecule.
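    A standard way to quantify the phenomenon, included here for orientation rather than taken from the study, is the FRET efficiency, which falls off steeply as the donor-acceptor distance r grows relative to the Förster radius R0 (typically a few nanometres):

    $$ E = \frac{1}{1 + (r/R_0)^6} $$

    This sixth-power dependence is why FRET is so sensitive to how closely the donor and acceptor molecules are packed together.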
    Using dyes to mark the donor and acceptor molecules, the team set out to see if FRET could be controlled by the intensity of an optical force, in this case a laser beam. By focusing a laser beam on an isolated polymer droplet, the team showed that increased intensity accelerated the energy transfer, made visible by the polymer changing color due to the dyes mixing.
    Fluorescence could also be controlled just by adjusting the laser intensity without touching the sample, offering a novel non-contact approach.
    “Although this research is still at a basic stage, it may provide new options for a variety of future FRET research applications,” Professor Tsuboi explained. “We believe that extending this to quantum dots as well as new polymer systems and fluorescent molecules is the next challenge.”

  • International collaboration lays the foundation for future AI for materials

    Artificial intelligence (AI) is accelerating the development of new materials. A prerequisite for AI in materials research is large-scale use and exchange of data on materials, which is facilitated by a broad international standard. A major international collaboration now presents an extended version of the OPTIMADE standard.
    New technologies in areas such as energy and sustainability, involving for example batteries, solar cells, LED lighting and biodegradable materials, require new materials. Many researchers around the world are working to create materials that have not existed before. But there are major challenges in creating materials with exactly the required properties, such as being free of environmentally hazardous substances while at the same time being durable enough not to break down.
    “We’re now seeing an explosive development where researchers in materials science are adopting AI methods from other fields and also developing their own models to use in materials research. Using AI to predict properties of different materials opens up completely new possibilities,” says Rickard Armiento, associate professor at the Department of Physics, Chemistry and Biology (IFM) at Linköping University in Sweden.
    Today, many demanding simulations performed on supercomputers describe how electrons move in materials, which gives rise to different material properties. These advanced calculations yield large amounts of data that can be used to train machine learning models.
    These AI models can then immediately predict the responses to new calculations that have not yet been made, and by extension predict the properties of new materials. But huge amounts of data are required to train the models.
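    The workflow described here, run expensive simulations, collect the results, train a model, then predict without further simulation, can be boiled down to a few lines. The descriptors, target property and random-forest model below are placeholders on synthetic numbers, not the actual models or data used by materials databases:

```python
# Toy surrogate-model workflow: expensive simulations -> training data -> instant predictions.
# All numbers are synthetic placeholders, not real materials data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 3))                                    # pretend composition/structure descriptors
y = 2.0 * X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=500)   # pretend simulated property (e.g. a band gap)

model = RandomForestRegressor(n_estimators=100).fit(X, y)  # train on the "simulation" results
print(model.predict([[0.4, 0.2, 0.7]]))                    # instant prediction for a never-simulated material
```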
    “We’re moving into an era where we want to train models on all data that exist,” says Rickard Armiento.
    Data from large-scale simulations, and general data about materials, are collected in large databases. Over time, many such databases have emerged from different research groups and projects, like isolated islands in the sea. They work differently and often use properties that are defined in different ways.
    “Researchers at universities or in industry who want to map materials on a large scale or want to train an AI model must retrieve information from these databases. Therefore, a standard is needed so that users can communicate with all these data libraries and understand the information they receive,” says Gian-Marco Rignanese, professor at the Institute of Condensed Matter and Nanosciences at UCLouvain in Belgium.
    The OPTIMADE (Open databases integration for materials design) standard has been developed over the past eight years. Behind this standard is a large international network with over 30 institutions worldwide and large materials databases in Europe and the USA. The aim is to give users easier access to both leading and lesser-known materials databases. A new version of the standard, v1.2, is now being released, and is described in an article published in the journal Digital Discovery. One of the biggest changes in the new version is a greatly enhanced possibility to accurately describe different material properties and other data using common, well-founded definitions.
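    In practice, OPTIMADE specifies a common REST API and filter grammar, so the same query can be sent to any compliant database. A minimal sketch is shown below; the provider URL is a placeholder to be replaced by any real OPTIMADE provider, while the endpoint and filter syntax follow the published specification:

```python
# Query an OPTIMADE-compliant database for binary Si-O structures.
# The base URL is a placeholder -- substitute any provider's OPTIMADE base URL.
import requests

base_url = "https://example-optimade-provider.org/optimade"
params = {"filter": 'elements HAS ALL "Si","O" AND nelements=2', "page_limit": 5}

response = requests.get(f"{base_url}/v1/structures", params=params, timeout=30)
for entry in response.json()["data"]:
    print(entry["id"], entry["attributes"].get("chemical_formula_reduced"))
```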
    The international collaboration spans the EU, the UK, the US, Mexico, Japan and China, together with institutions such as École Polytechnique Fédérale de Lausanne (EPFL), University of California Berkeley, University of Cambridge, Northwestern University, Duke University, Paul Scherrer Institut, and Johns Hopkins University. Much of the collaboration takes place in annual workshops funded by CECAM (Centre Européen de Calcul Atomique et Moléculaire) in Switzerland, with the first one funded by the Lorentz Center in the Netherlands. Other activities have been supported by the organisation Psi-k, the competence centre NCCR MARVEL in Switzerland, and the e-Science Research Centre (SeRC) in Sweden. The researchers in the collaboration receive support from many different funders.

  • Researchers engineer AI path to prevent power outages

    University of Texas at Dallas researchers have developed an artificial intelligence (AI) model that could help electrical grids prevent power outages by automatically rerouting electricity in milliseconds.
    The UT Dallas researchers, who collaborated with engineers at the University at Buffalo in New York, demonstrated the automated system in a study published online June 4 in Nature Communications.
    The approach is an early example of “self-healing grid” technology, which uses AI to detect and repair problems such as storm-damaged power lines autonomously, without human intervention.
    The North American grid is an extensive, complex network of transmission and distribution lines, generation facilities and transformers that distributes electricity from power sources to consumers.
    Using various scenarios in a test network, the researchers demonstrated that their solution can automatically identify alternative routes to transfer electricity to users before an outage occurs. AI has the advantage of speed: The system can automatically reroute electrical flow in microseconds, while current human-controlled processes to determine alternate paths could take from minutes to hours.
    “Our goal is to find the optimal path to send power to the majority of users as quickly as possible,” said Dr. Jie Zhang, associate professor of mechanical engineering in the Erik Jonsson School of Engineering and Computer Science. “But more research is needed before this system can be implemented.”
    Zhang, who is co-corresponding author of the study, and his colleagues used technology that applies machine learning to graphs in order to map the complex relationships between entities that make up a power distribution network. Graph machine learning involves describing a network’s topology, the way the various components are arranged in relation to each other and how electricity moves through the system.
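    The graph view of rerouting can be illustrated with a toy example. The sketch below is a drastic simplification, not the team’s model, which also has to respect power-flow physics and switch constraints; it only shows how an alternative topological route is found once a line segment fails:

```python
# Toy illustration of rerouting on a graph model of a distribution network.
# Real reconfiguration must also respect power-flow limits; this only shows the topology idea.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("substation", "bus1"), ("bus1", "bus2"), ("bus2", "load"),  # normal feed path
    ("substation", "bus3"), ("bus3", "load"),                    # normally open tie path
])

g.remove_edge("bus1", "bus2")                       # simulate a fault taking a line segment out of service
print(nx.shortest_path(g, "substation", "load"))    # alternative route: ['substation', 'bus3', 'load']
```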

    Network topology also may play a critical role in applying AI to solve problems in other complex systems, such as critical infrastructure and ecosystems, said study co-author Dr. Yulia Gel, professor of mathematical sciences in the School of Natural Sciences and Mathematics.
    “In this interdisciplinary project, by leveraging our team expertise in power systems, mathematics and machine learning, we explored how we can systematically describe various interdependencies in the distribution systems using graph abstractions,” Gel said. “We then investigated how the underlying network topology, integrated into the reinforcement learning framework, can be used for more efficient outage management in the power distribution system.”
    The researchers’ approach relies on reinforcement learning, in which a software agent learns through trial and error which actions lead to the best long-term outcome. Led by co-corresponding author Dr. Souma Chowdhury, associate professor of mechanical and aerospace engineering, University at Buffalo researchers focused on the reinforcement learning aspect of the project.
    If electricity is blocked due to line faults, the system is able to reconfigure using switches and draw power from available sources in close proximity, such as from large-scale solar panels or batteries on a university campus or business, said Roshni Anna Jacob, a UTD electrical engineering doctoral student and the paper’s co-first author.
    “You can leverage those power generators to supply electricity in a specific area,” Jacob said.
    After focusing on preventing outages, the researchers will aim to develop similar technology to repair and restore the grid after a power disruption.

  • Novel blood-powered chip offers real-time health monitoring

    Metabolic disorders, like diabetes and osteoporosis, are burgeoning throughout the world, especially in developing countries.
    These disorders are typically diagnosed with a blood test, but because the existing healthcare infrastructure in remote areas cannot support such tests, most individuals go undiagnosed and without treatment. Conventional methods also involve labor-intensive, invasive processes that tend to be time-consuming and make real-time monitoring unfeasible, particularly in real-life settings and in rural populations.
    Researchers at the University of Pittsburgh and University of Pittsburgh Medical Center are proposing a new device that uses blood to generate electricity and measure its conductivity, opening doors to medical care in any location.
    “As the fields of nanotechnology and microfluidics continue to advance, there is a growing opportunity to develop lab-on-a-chip devices capable of surmounting the constraints of modern medical care,” said Amir Alavi, assistant professor of civil and environmental engineering at Pitt’s Swanson School of Engineering. “These technologies could potentially transform healthcare by offering quick and convenient diagnostics, ultimately improving patient outcomes and the effectiveness of medical services.”
    Now, We Got Good Blood
    Blood electrical conductivity is a valuable metric for assessing various health parameters and detecting medical conditions.
    This conductivity is predominantly governed by the concentration of essential electrolytes, notably sodium and chloride ions. These electrolytes are integral to a multitude of physiological processes, helping physicians pinpoint a diagnosis.
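    To first order, in a dilute-solution idealization offered here for context rather than as a model used in the paper, the conductivity of an electrolyte solution is the sum of each ion’s contribution weighted by its concentration,

    $$ \sigma \approx \sum_i \lambda_i \, c_i, $$

    where c_i is the molar concentration of ion species i and λ_i its molar ionic conductivity; shifts in sodium, chloride or other solute levels therefore show up directly as shifts in the measured conductivity.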

    “Blood is basically a water-based environment that has various molecules that conduct or impede electric currents,” explained Dr. Alan Wells, the medical director of UPMC Clinical Laboratories, Executive Vice Chairman, Section of Laboratory Medicine at University of Pittsburgh and UPMC, and Thomas Gill III Professor of Pathology, Pitt School of Medicine, Department of Pathology. “Glucose, for example, is an electrical conductor. We are able to see how it affects conductivity through these measurements. Thus, allowing us to make a diagnosis on the spot.”
    Despite its importance, knowledge of human blood conductivity is limited because of measurement challenges such as electrode polarization, limited access to human blood samples, and the complexities associated with maintaining blood temperature. Measuring conductivity at frequencies below 100 Hz is particularly important for gaining a deeper understanding of blood’s electrical properties and fundamental biological processes, but it is even more difficult.
    A Pocket-Sized Lab
    The research team is proposing an innovative, portable millifluidic nanogenerator lab-on-a-chip device capable of measuring blood at low frequencies. The device utilizes blood as a conductive substance within its integrated triboelectric nanogenerator, or TENG. The proposed blood-based TENG system can convert mechanical energy into electricity via triboelectrification.
    This process involves the exchange of electrons between contacting materials, resulting in a charge transfer. In a TENG system, the electron transfer and charge separation generate a voltage difference that drives electric current when the materials experience relative motion like compression or sliding. The team analyzes the voltage generated by the device under predefined loading conditions to determine the electrical conductivity of the blood. The self-powering mechanism enables miniaturization of the proposed blood-based nanogenerator. The team also used AI models to directly estimate blood electrical conductivity using the voltage patterns generated by the device.
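    The last step, mapping the measured voltage pattern to a conductivity estimate, is essentially a regression problem. The sketch below runs on synthetic numbers; the waveform features, the made-up relation and the ridge model are placeholders, not the authors’ device data or AI models:

```python
# Toy regression from nanogenerator voltage-waveform features to conductivity.
# Synthetic data only -- the real device and AI models are described in the paper.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
peak_voltage = rng.uniform(0.5, 3.0, size=200)    # pretend peak output per compression cycle
decay_time = rng.uniform(0.01, 0.1, size=200)     # pretend discharge time constant
conductivity = 1.2 * peak_voltage - 4.0 * decay_time + 0.05 * rng.normal(size=200)  # made-up relation

X = np.column_stack([peak_voltage, decay_time])
model = Ridge().fit(X, conductivity)              # learn the voltage-to-conductivity mapping
print(model.predict([[2.1, 0.05]]))               # estimate conductivity for a new waveform
```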
    To test its accuracy, the team compared the device’s results with those of a traditional test, a comparison that proved successful. This opens the door to taking testing to where people live. In addition, blood-powered nanogenerators are capable of functioning in the body wherever blood is present, enabling self-powered diagnostics using the local blood chemistry.