More stories

  • How do neural networks learn? A mathematical formula explains how they detect relevant patterns

    Neural networks have been powering breakthroughs in artificial intelligence, including the large language models that are now being used in a wide range of applications, from finance and human resources to healthcare. But these networks remain a black box whose inner workings engineers and scientists struggle to understand. Now, a team led by data and computer scientists at the University of California San Diego has given neural networks the equivalent of an X-ray to uncover how they actually learn.
    The researchers found that a formula used in statistical analysis provides a streamlined mathematical description of how neural networks, such as GPT-2, a precursor to ChatGPT, learn relevant patterns in data, known as features. This formula also explains how neural networks use these relevant patterns to make predictions.
    “We are trying to understand neural networks from first principles,” said Daniel Beaglehole, a Ph.D. student in the UC San Diego Department of Computer Science and Engineering and co-first author of the study. “With our formula, one can simply interpret which features the network is using to make predictions.”
    The team presented their findings in the March 7 issue of the journal Science.
    Why does this matter? AI-powered tools are now pervasive in everyday life. Banks use them to approve loans. Hospitals use them to analyze medical data, such as X-rays and MRIs. Companies use them to screen job applicants. But it’s currently difficult to understand the mechanism neural networks use to make decisions and the biases in the training data that might impact this.
    “If you don’t understand how neural networks learn, it’s very hard to establish whether neural networks produce reliable, accurate, and appropriate responses,” said Mikhail Belkin, the paper’s corresponding author and a professor at the UC San Diego Halicioglu Data Science Institute. “This is particularly significant given the rapid recent growth of machine learning and neural net technology.”
    The study is part of a larger effort in Belkin’s research group to develop a mathematical theory that explains how neural networks work. “Technology has outpaced theory by a huge amount,” he said. “We need to catch up.”
    The team also showed that the statistical formula they used to understand how neural networks learn, known as Average Gradient Outer Product (AGOP), could be applied to improve performance and efficiency in other types of machine learning architectures that do not include neural networks.
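    To make the idea concrete, here is a minimal sketch (not the authors' code) of what an AGOP computation can look like for a generic trained model: average the outer product of the model's input gradients over a dataset, then inspect the top eigenvectors of the resulting matrix, which point along the input directions (features) the model relies on most. The toy predict function and the finite-difference gradients below are stand-ins for a real network and automatic differentiation.

        import numpy as np

        def predict(x):
            # Stand-in for any trained scalar-output model f(x); this toy
            # function depends only on the first two input coordinates.
            return np.tanh(2.0 * x[0] - x[1])

        def agop(predict_fn, X, eps=1e-5):
            # Average Gradient Outer Product: the mean over the data of
            # grad f(x) grad f(x)^T, with gradients estimated here by
            # central finite differences for simplicity.
            n, d = X.shape
            M = np.zeros((d, d))
            for x in X:
                grad = np.zeros(d)
                for j in range(d):
                    e = np.zeros(d)
                    e[j] = eps
                    grad[j] = (predict_fn(x + e) - predict_fn(x - e)) / (2 * eps)
                M += np.outer(grad, grad)
            return M / n

        X = np.random.default_rng(0).normal(size=(200, 5))
        M = agop(predict, X)
        eigvals, eigvecs = np.linalg.eigh(M)
        print("top feature direction:", eigvecs[:, -1].round(2))  # concentrates on x0, x1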

    “If we understand the underlying mechanisms that drive neural networks, we should be able to build machine learning models that are simpler, more efficient and more interpretable,” Belkin said. “We hope this will help democratize AI.”
    The machine learning systems that Belkin envisions would need less computational power, and therefore less power from the grid, to function. These systems also would be less complex and so easier to understand.
    Illustrating the new findings with an example
    (Artificial) neural networks are computational tools for learning relationships between data characteristics (for example, identifying specific objects or faces in an image). One example of a task is determining whether a person in a new image is wearing glasses or not. Machine learning approaches this problem by providing the neural network with many example (training) images labeled as images of “a person wearing glasses” or “a person not wearing glasses.” The neural network learns the relationship between images and their labels, and extracts data patterns, or features, that it needs to focus on to make a determination. One of the reasons AI systems are considered a black box is that it is often difficult to describe mathematically what criteria the systems are actually using to make their predictions, including potential biases. The new work provides a simple mathematical explanation for how the systems are learning these features.
    Features are relevant patterns in the data. In the example above, there is a wide range of features that the neural network learns, and then uses, to determine whether a person in a photograph is in fact wearing glasses or not. One feature it would need to pay attention to for this task is the upper part of the face. Other features could be the eye area or the nose area, where glasses often rest. The network selectively pays attention to the features that it learns are relevant and then discards the other parts of the image, such as the lower part of the face, the hair and so on.
    Feature learning is the ability to recognize relevant patterns in data and then use those patterns to make predictions. In the glasses example, the network learns to pay attention to the upper part of the face. In the new Science paper, the researchers identified a statistical formula that describes how the neural networks are learning features.

    Alternative neural network architectures: The researchers went on to show that inserting this formula into computing systems that do not rely on neural networks allowed these systems to learn faster and more efficiently.
    “How do I ignore what’s not necessary? Humans are good at this,” said Belkin. “Machines are doing the same thing. Large Language Models, for example, are implementing this ‘selective paying attention’ and we haven’t known how they do it. In our Science paper, we present a mechanism explaining at least some of how the neural nets are ‘selectively paying attention.’”
    Study funders included the National Science Foundation and the Simons Foundation for the Collaboration on the Theoretical Foundations of Deep Learning. Belkin is part of the NSF-funded, UC San Diego-led Institute for Learning-enabled Optimization at Scale (TILOS).

  • Mathematicians use AI to identify emerging COVID-19 variants

    Scientists at the Universities of Manchester and Oxford have developed an AI framework that can identify and track new and concerning COVID-19 variants and could help with other infections in the future.
    The framework combines dimension reduction techniques and a new explainable clustering algorithm called CLASSIX, developed by mathematicians at The University of Manchester. This enables the quick identification of groups of viral genomes that might present a risk in the future from huge volumes of data.
    The study, presented this week in the journal PNAS, could support traditional methods of tracking viral evolution, such as phylogenetic analysis, which currently require extensive manual curation.
    Roberto Cahuantzi, a researcher at The University of Manchester and first and corresponding author of the paper, said: “Since the emergence of COVID-19, we have seen multiple waves of new variants, heightened transmissibility, evasion of immune responses, and increased severity of illness.
    “Scientists are now intensifying efforts to pinpoint these worrying new variants, such as alpha, delta and omicron, at the earliest stages of their emergence. If we can find a way to do this quickly and efficiently, it will enable us to be more proactive in our response, such as tailored vaccine development and may even enable us to eliminate the variants before they become established.”
    Like many other RNA viruses, SARS-CoV-2, the virus that causes COVID-19, has a high mutation rate and a short time between generations, meaning it evolves extremely rapidly. This means identifying new strains that are likely to be problematic in the future requires considerable effort.
    Currently, there are almost 16 million SARS-CoV-2 sequences available on the GISAID database (the Global Initiative on Sharing All Influenza Data), which provides access to genomic data of influenza viruses and the coronavirus that causes COVID-19.

    Mapping the evolution and history of all COVID-19 genomes from this data is currently done using extremely large amounts of computer and human time.
    The described method allows such tasks to be automated. The researchers processed 5.7 million high-coverage sequences in only one to two days on a standard modern laptop, something that would not be possible with existing methods. By reducing the resources needed, the approach puts identification of concerning pathogen strains in the hands of more researchers.
    Thomas House, Professor of Mathematical Sciences at The University of Manchester, said: “The unprecedented amount of genetic data generated during the pandemic demands improvements to our methods to analyse it thoroughly. The data is continuing to grow rapidly but without showing a benefit to curating this data, there is a risk that it will be removed or deleted.
    “We know that human expert time is limited, so our approach should not replace the work of humans altogether but work alongside them to enable the job to be done much quicker and free our experts for other vital developments.”
    The proposed method works by breaking down the genetic sequences of the COVID-19 virus into smaller “words” (called 3-mers) and representing each sequence numerically by counting how often each word occurs. It then groups similar sequences together based on their word patterns using machine learning techniques.
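    As a rough illustration of that idea (not the authors' pipeline), the sketch below turns toy sequences into normalised 3-mer count vectors, reduces the dimension with PCA and then clusters them. A generic scikit-learn clusterer stands in for CLASSIX here so the example is self-contained; the study's own algorithm additionally provides textual and visual explanations of its clusters.

        from itertools import product
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        KMERS = ["".join(p) for p in product("ACGT", repeat=3)]   # all 64 possible 3-mers
        INDEX = {k: i for i, k in enumerate(KMERS)}

        def kmer_counts(seq, k=3):
            # Count overlapping k-mer "words" in one genome and normalise,
            # so sequence length does not dominate the comparison.
            v = np.zeros(len(KMERS))
            for i in range(len(seq) - k + 1):
                idx = INDEX.get(seq[i:i + k])
                if idx is not None:          # skip words containing N or gaps
                    v[idx] += 1
            return v / max(v.sum(), 1.0)

        # Toy "genomes": two families with slightly different base composition.
        rng = np.random.default_rng(1)
        seqs = ["".join(rng.choice(list("AACGT"), 500)) for _ in range(20)] + \
               ["".join(rng.choice(list("ACGTT"), 500)) for _ in range(20)]

        X = np.array([kmer_counts(s) for s in seqs])
        X2 = PCA(n_components=2).fit_transform(X)             # dimension reduction step
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(X2)
        print(labels)                                          # two groups of 20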
    Stefan Güttel, Professor of Applied Mathematics at the University of Manchester, said: “The clustering algorithm CLASSIX we developed is much less computationally demanding than traditional methods and is fully explainable, meaning that it provides textual and visual explanations of the computed clusters.”
    Roberto Cahuantzi added: “Our analysis serves as a proof of concept, demonstrating the potential use of machine learning methods as an alert tool for the early discovery of emerging major variants without relying on the need to generate phylogenies.
    “Whilst phylogenetics remains the ‘gold standard’ for understanding the viral ancestry, these machine learning methods can accommodate several orders of magnitude more sequences than the current phylogenetic methods and at a low computational cost.”

  • Cicadas’ unique urination unlocks new understanding of fluid dynamics

    Cicadas are the soundtrack of summer, but their pee is more special than their music. Rather than sprinkling droplets, they emit jets of urine from their small frames. For years, Georgia Tech researchers have wanted to understand the cicada’s unique urination.
    Saad Bhamla, an assistant professor in the School of Chemical and Biomolecular Engineering, and his research group hoped for an opportunity to study a cicada’s fluid excretion. However, while cicadas are easily heard, they hide in trees, making them hard to observe. As such, seeing a cicada pee is an event. Bhamla’s team had only watched the process on YouTube.
    Then, while doing field work in Peru, the team got lucky: They saw numerous cicadas in a tree, peeing.
    This moment of observation was enough to disprove two main insect pee paradigms. First, cicadas feed on xylem sap, and most xylem feeders pee only in droplets because droplet formation uses less energy. Cicadas, however, are such voracious eaters that individually flicking away each drop of pee would be too taxing and would not allow them to extract enough nutrients from the sap.
    “The assumption was that if an insect transitions from droplet formation into a jet, it will require more energy because the insect would have to inject more speed,” said Elio Challita, a former Ph.D. student in Bhamla’s lab and current postdoctoral researcher at Harvard University.
    Second, smaller animals are expected to pee in droplets because their orifice is too tiny to emit anything thicker. Because of cicadas’ larger size — with wingspans that can rival a small hummingbird’s — they use less energy to expel pee in jets.
    “Previously, it was understood that if a small animal wants to eject jets of water, then this becomes a bit challenging, because the animal expends more energy to force the fluid’s exit at a higher speed. This is due to surface tension and viscous forces. But a larger animal can rely on gravity and inertial forces to pee,” Challita said.
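    A back-of-the-envelope calculation makes that scaling argument concrete. The numbers below are assumed, order-of-magnitude values (not measurements from the paper): the Bond number compares gravity with surface tension and the Weber number compares inertia with surface tension, and jetting is easy only when these are large.

        rho, sigma, g = 1000.0, 0.072, 9.81   # water-like fluid: density, surface tension, gravity

        def bond(L):                # gravity vs. surface tension at length scale L (m)
            return rho * g * L**2 / sigma

        def weber(v, L):            # inertia vs. surface tension at speed v (m/s)
            return rho * v**2 * L / sigma

        # Hypothetical scales: a cicada-sized orifice (~0.3 mm) driving a ~3 m/s jet,
        # versus an elephant-sized orifice (~35 mm) with a modest ~1 m/s flow.
        for name, L, v in [("cicada", 0.3e-3, 3.0), ("elephant", 35e-3, 1.0)]:
            print(f"{name:8s} Bond = {bond(L):7.2f}   Weber = {weber(v, L):7.1f}")

        # For the cicada the Bond number is far below 1: gravity alone cannot beat
        # surface tension, so the insect must actively drive the fluid to high speed
        # before a jet forms. At elephant scale both numbers are large, so gravity
        # and inertia do the work essentially for free.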

    The cicadas’ ability to jet water offered the researchers a new understanding of how fluid dynamics impacts these tiny insects — and even large mammals. The researchers published this challenge to the paradigm as a brief, “Unifying Fluidic Excretion Across Life from Cicadas to Elephants,” in Proceedings of the National Academy of Sciences the week of March 11.
    For years, the research group has been studying fluid ejection across species, culminating in a recent arXiv preprint that characterizes this phenomenon from microscopic fungi to colossal whales. Their framework reveals diverse functions — such as excretion, venom spraying, prey hunting, spore dispersal, and plant guttation — highlighting potential applications in soft robotics, additive manufacturing, and drug delivery.
    Cicadas are the smallest animals known to create such high-speed jets, so they could inform the design of fluid jets in tiny robots and nozzles. And because their populations reach into the trillions, the ecosystem impact of their fluid ejection is substantial but unknown. Beyond bio-inspired engineering, Bhamla believes the critters could also inform bio-monitoring applications.
    “Our research has mapped the excretory patterns of animals, spanning eight orders of scale from tiny cicadas to massive elephants,” he said. “We’ve identified the fundamental constraints and forces that dictate these processes, offering a new lens through which to understand the principles of excretion, a critical function of all living systems. This work not only deepens our comprehension of biological functions but also paves the way for unifying the underlying principles that govern life’s essential processes.”

  • Robotic interface masters a soft touch

    The perception of softness can be taken for granted, but it plays a crucial role in many actions and interactions — from judging the ripeness of an avocado to conducting a medical exam, or holding the hand of a loved one. But understanding and reproducing softness perception is challenging, because it involves so many sensory and cognitive processes.
    Robotics researchers have tried to address this challenge with haptic devices, but previous attempts have not distinguished between two primary elements of softness perception: cutaneous cues (sensory feedback from the skin of the fingertip), and kinesthetic cues (feedback about the amount of force on the finger joint).
    “If you press on a marshmallow with your fingertip, it’s easy to tell that it’s soft. But if you place a hard biscuit on top of that marshmallow and press again, you can still tell that the soft marshmallow is underneath, even though your fingertip is touching a hard surface,” explains Mustafa Mete, a PhD student in the Reconfigurable Robotics Lab in the School of Engineering. “We wanted to see if we could create a robotic platform that can do the same.”
    With SORI (Softness Rendering Interface), the RRL, led by Jamie Paik, has achieved just that. By decoupling cutaneous and kinesthetic cues, SORI faithfully recreates the softness of a range of real materials, filling a gap in the robotics field and enabling many applications where softness sensation is critical — from deep-sea exploration to robot-assisted surgery.
    The research appears in the Proceedings of the National Academy of Sciences (PNAS).
    We all feel softness differently
    Mete explains that neuroscientific and psychological studies show that cutaneous cues are largely based on how much skin is in contact with a surface, which is often related in part to the deformation of the object. In other words, a surface that envelopes a greater area of your fingertip will be perceived as softer. But because human fingertips vary widely in size and firmness, one finger may make greater contact with a given surface than another.

    “We realized that the softness I feel may not be the same as the softness you feel, because of our different finger shapes. So, for our study, we first had to develop parameters for the geometries of a fingertip and its contact surface in order to estimate the softness cues for that fingertip,” Mete explains. Then, the researchers extracted the softness parameters from a range of different materials, and mapped both sets of parameters onto the SORI device.
    Building on the RRL’s trademark origami robot research, which has fueled spinoffs for reconfigurable environments and a haptic joystick, SORI is equipped with motor-driven origami joints that can be modulated to become stiffer or more supple. Perched atop the joints is a dimpled silicone membrane. A flow of air inflates the membrane to varying degrees, to envelop a fingertip placed at its center.
    With this novel decoupling of kinesthetic and cutaneous functionality, SORI succeeded in recreating the softness of a range of materials — including beef, salmon, and marshmallow — over the course of several experiments with two human volunteers. It also mimicked materials with both soft and firm attributes (such as a biscuit on top of a marshmallow, or a leather-bound book). In one virtual experiment, SORI even reproduced the sensation of a beating heart, to demonstrate its efficacy at rendering soft materials in motion.
    Medicine is therefore a primary area of potential application for this technology; for example, to train medical students to detect cancerous tumors, or to provide crucial sensory feedback to surgeons using robots to perform operations.
    Other applications include robot-assisted exploration of space or the deep ocean, where the device could enable scientists to feel the softness of a discovered object from a remote location. SORI is also a potential answer to one of the biggest challenges in robot-assisted agriculture: harvesting tender fruits and vegetables without crushing them.
    “This is not intended to act as a softness sensor for robots, but to transfer the feeling of ‘touch’ digitally, just like sending photos or music,” Mete summarizes.

  • Accessibility toolkit for game engine Unity

    The growing popularity of video games is putting an increased focus on their accessibility for people with disabilities. While large productions are increasingly taking this into account by adding accessibility features, this aspect is usually completely absent from indie productions due to a lack of resources. To facilitate the implementation of accessibility features, Klemens Strasser developed a freely accessible toolkit for the Unity game engine as part of his master’s thesis at the Institute of Interactive Systems and Data Science at Graz University of Technology (TU Graz). It is available for free on GitHub and makes it easy to integrate support tools for people with visual impairments into a game project. Together with his master’s thesis supervisor Johanna Pirker, Klemens Strasser has now published the toolkit and an action guide for more accessibility in games in a paper.
    Help with orientation
    When creating the “toolbox,” Klemens Strasser focused on four points: (1) support in operating menus, (2) perception of the game environment, as well as (3) control on a fixed grid and (4) free navigation if the character can move in all directions. The first three points could be solved with a screen reader, but for free navigation a so-called navigation agent had to be implemented. This agent calculates the route to a destination the players have specified and then guides them there via an audio signal.
    For the screen reader solution to facilitate menu operation, environmental perception and control on a grid, it was first necessary to capture all visible and usable objects and characters on the screen. A tool known as an accessibility signifier was used to recognise the elements and assign them a label, traits, value and description. The game transfers this information to the screen reader used by the players, which reads it out to them.
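    Schematically, the flow looks something like the sketch below (written in Python for brevity; the actual toolkit is C# code inside Unity, and all names here are hypothetical). Each visible element is wrapped in a signifier carrying a label, traits, a value and a description, and the game hands that record to the platform screen reader when the element gains focus.

        from dataclasses import dataclass, field

        @dataclass
        class AccessibilitySignifier:
            # Hypothetical mirror of the information the toolkit attaches to
            # an on-screen element for the screen reader.
            label: str                                     # e.g. "Start game"
            traits: list = field(default_factory=list)     # e.g. ["button"]
            value: str = ""                                # e.g. "3 of 5 stars"
            description: str = ""                          # longer hint, read on demand

            def announce(self):
                parts = [self.label, *self.traits, self.value, self.description]
                return ", ".join(p for p in parts if p)

        def on_focus(signifier, screen_reader):
            # The game forwards the element's information to the player's
            # screen reader, which speaks it aloud.
            screen_reader(signifier.announce())

        start_button = AccessibilitySignifier(label="Start game", traits=["button"],
                                              description="Begins a new match-3 round")
        on_focus(start_button, screen_reader=print)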
    Developers with positive feedback
    The toolkit was evaluated in a test with nine game developers, all of whom have a university background in software engineering. Their task was to implement it in a simple match-3 game in which the aim is to arrange three identical symbols or elements next to each other by moving them. The feedback from the developers was consistently positive. The implementation was described as simple, the task was easy to understand and they comfortably found their way around the toolkit. Before the test, only three of the developers had worked with accessibility features, but afterwards most of them wanted to use them for their next project.
    “Games should be open to as many people as possible, which is why it is so important to make them more accessible for people with disabilities,” says Klemens Strasser. “With the Accessibility Toolkit for Unity, we want to make it as easy as possible for indie developers to implement these options. Since, according to the WHO, 253 million people worldwide live with a visual impairment, this would include a very large group. Nevertheless, there is still a lot to be done here, as there are numerous other impairments for which easy-to-implement solutions should be provided.” The Game Lab at TU Graz is constantly carrying out research on such solutions and other topics relating to accessibility in computer games.
    Years of success as an independent game developer
    Klemens Strasser himself has been working on the topic of accessibility for games for several years. Even during his studies and after completing his Master’s degree in Computer Science at Graz University of Technology (TU Graz), he independently developed games that take accessibility into account. In 2015, he won the Apple Design Award in the Student category with his game Elementary Minute, and was nominated for the award in the Inclusivity category in 2022 with Letter Rooms and 2023 with the Ancient Board Game Collection. His games published for iOS have been downloaded over 200,000 times to date.
    Link to the toolkit on GitHub: https://github.com/KlemensStrasser/KAP

  • Design rules and synthesis of quantum memory candidates

    In the quest to develop quantum computers and networks, many of the required components are fundamentally different from those used in today’s computers. And like the components of a modern computer, each of them comes with its own constraints. However, it is currently unclear which materials can be used to construct those components for the transmission and storage of quantum information.
    In new research published in the Journal of the American Chemical Society, University of Illinois Urbana-Champaign materials science & engineering professor Daniel Shoemaker and graduate student Zachary Riedel used density functional theory (DFT) calculations to identify possible europium (Eu) compounds to serve as a new quantum memory platform. They also synthesized one of the predicted compounds, a brand-new, air-stable material that is a strong candidate for use in quantum memory, a system for storing quantum states of photons or other entangled particles without destroying the information held by that particle.
    “The problem that we are trying to tackle here is finding a material that can store that quantum information for a long time. One way to do this is to use ions of rare earth metals,” says Shoemaker.
    Found at the very bottom of the periodic table, rare earth elements, such as europium, have shown promise for use in quantum information devices due to their unique atomic structures. Specifically, rare earth ions have many electrons densely clustered close to the nucleus of the atom. The excitation of these electrons, from the resting state, can “live” for a long time — seconds or possibly even hours, an eternity in the world of computing. Such long-lived states are crucial to avoid the loss of quantum information and position rare earth ions as strong candidates for qubits, the fundamental units of quantum information.
    “Normally in materials engineering, you can go to a database and find what known material should work for a particular application,” Shoemaker explains. “For example, people have worked for over 200 years to find proper lightweight, high strength materials for different vehicles. But in quantum information, we have only been working at this for a decade or two, so the population of materials is actually very small, and you quickly find yourself in unknown chemical territory.”
    Shoemaker and Riedel imposed a few rules in their search of possible new materials. First, they wanted to use the ionic configuration Eu3+ (as opposed to the other possible configuration, Eu2+) because it operates at the right optical wavelength. To be “written” optically, the materials should be transparent. Second, they wanted a material made of other elements that have only one stable isotope. Elements with more than one isotope yield a mixture of different nuclear masses that vibrate at slightly different frequencies, scrambling the information being stored. Third, they wanted a large separation between individual europium ions to limit unintended interactions. Without separation, the large clouds of europium electrons would act like a canopy of leaves in a forest, rather than well-spaced-out trees in a suburban neighborhood, where the rustling of leaves from one tree would gently interact with leaves from another.
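    As a toy illustration of how such rules can act as a cheap pre-filter before any expensive DFT calculation (this is not the authors' workflow: the isotope list is partial, and the example formulas and the europium-dilution cutoff are assumptions):

        # Elements with a single stable isotope (partial list).
        MONOISOTOPIC = {"Be", "F", "Na", "Al", "P", "Sc", "Mn", "Co", "As",
                        "Y", "Nb", "Rh", "I", "Cs", "Pr", "Au", "Bi"}
        # Nominal ionic charges used for a simple charge-balance check with Eu3+.
        CHARGES = {"Cs": +1, "Rb": +1, "K": +1, "Na": +1, "Eu": +3,
                   "F": -1, "Cl": -1, "Br": -1}

        def passes_rules(formula):
            # formula: element -> count, e.g. {"Cs": 2, "Na": 1, "Eu": 1, "F": 6}
            others = [el for el in formula if el != "Eu"]
            one_isotope = all(el in MONOISOTOPIC for el in others)                    # rule 2
            charge_balanced = sum(CHARGES[el] * n for el, n in formula.items()) == 0  # Eu3+ fits
            eu_dilute = formula["Eu"] / sum(formula.values()) <= 0.15                 # crude spacing proxy
            return one_isotope and charge_balanced and eu_dilute

        candidates = {
            "Cs2NaEuF6": {"Cs": 2, "Na": 1, "Eu": 1, "F": 6},
            "Rb2KEuCl6": {"Rb": 2, "K": 1, "Eu": 1, "Cl": 6},  # Rb, K and Cl each have more than one isotope
        }
        for name, comp in candidates.items():
            print(name, "passes" if passes_rules(comp) else "rejected")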
    With those rules in place, Riedel devised a DFT computational screening to predict which materials could form. Following this screening, Riedel was able to identify new Eu compound candidates, and, further, he was able to synthesize the top suggestion from the list, the double perovskite halide Cs2NaEuF6. This new compound is air stable, which means it can be integrated with other components, a critical property in scalable quantum computing. DFT calculations also predicted several other possible compounds that have yet to be synthesized.

    “We have shown that there are a lot of unknown materials left to be made that are good candidates for quantum information storage,” Shoemaker says. “And we have shown that we can make them efficiently and predict which ones are going to be stable.”
    Daniel Shoemaker is also an affiliate of the Materials Research Laboratory (MRL) and the Illinois Quantum Information Science and Technology Center (IQUIST) at UIUC.
    Zachary Riedel is currently a postdoctoral researcher at Los Alamos National Laboratory.
    This research was supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Center Q-NEXT. The National Science Foundation through the University of Illinois Materials Research Science and Engineering Center supported the use of facilities and instrumentation.

  • Going top shelf with AI to better track hockey data

    Researchers from the University of Waterloo got a valuable assist from artificial intelligence (AI) tools to help capture and analyze data from professional hockey games faster and more accurately than ever before, with big implications for the business of sports.
    The growing field of hockey analytics currently relies on the manual analysis of video footage from games. Professional hockey teams across the sport, notably in the National Hockey League (NHL), make important decisions regarding players’ careers based on that information.
    “The goal of our research is to interpret a hockey game through video more effectively and efficiently than a human,” said Dr. David Clausi, a professor in Waterloo’s Department of Systems Design Engineering. “One person cannot possibly document everything happening in a game.”
    Hockey players move fast in a non-linear fashion, dynamically skating across the ice in short shifts. Apart from numbers and last names on jerseys that are not always visible to the camera, uniforms aren’t a robust tool for identifying players — particularly at the fast pace hockey is known for. This makes manually tracking and analyzing each player during a game very difficult and prone to human error.
    The AI tool developed by Clausi, Dr. John Zelek, a professor in Waterloo’s Department of Systems Design Engineering, research assistant professor Yuhao Chen, and a team of graduate students uses deep learning techniques to automate and improve player tracking analysis.
    The research was undertaken in partnership with Stathletes, an Ontario-based professional hockey performance data and analytics company. Working through NHL broadcast video clips frame-by-frame, the research team manually annotated the teams, the players and the players’ movements across the ice. They ran this data through a deep learning neural network to teach the system how to watch a game, compile information and produce accurate analyses and predictions.
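    For a flavour of what one small building block in such a pipeline can look like, here is a simplified sketch (illustrative only; the Waterloo system relies on trained deep networks for detection, team classification and jersey-number recognition) that links per-frame player detections into tracks by greedy overlap matching:

        def iou(a, b):
            # Intersection-over-union of two boxes (x1, y1, x2, y2).
            x1, y1 = max(a[0], b[0]), max(a[1], b[1])
            x2, y2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0, x2 - x1) * max(0, y2 - y1)
            area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
            return inter / (area(a) + area(b) - inter + 1e-9)

        def update_tracks(tracks, detections, threshold=0.3):
            # tracks: dict track_id -> last box; detections: boxes from a
            # detector run on the current broadcast frame.
            next_id = max(tracks, default=-1) + 1
            unmatched = list(detections)
            for tid, box in list(tracks.items()):
                if not unmatched:
                    break
                best = max(unmatched, key=lambda d: iou(box, d))
                if iou(box, best) >= threshold:
                    tracks[tid] = best          # same player, update position
                    unmatched.remove(best)
            for det in unmatched:               # new players entering the frame
                tracks[next_id] = det
                next_id += 1
            return tracks

        tracks = {}
        frame1 = [(10, 10, 50, 90), (200, 40, 240, 120)]
        frame2 = [(14, 12, 54, 92), (205, 42, 245, 122), (400, 60, 440, 140)]
        for dets in (frame1, frame2):
            tracks = update_tracks(tracks, dets)
        print(tracks)   # three tracks: two carried over, one new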
    When tested, the system’s algorithms delivered high rates of accuracy. It scored 94.5 per cent for tracking players correctly, 97 per cent for identifying teams and 83 per cent for identifying individual players.
    The research team is working to refine their prototype, but Stathletes is already using the system to annotate video footage of hockey games. The potential for commercialization goes beyond hockey. By retraining the system’s components, it can be applied to other team sports such as soccer or field hockey.
    “Our system can generate data for multiple purposes,” Zelek said. “Coaches can use it to craft winning game strategies, team scouts can hunt for players, and statisticians can identify ways to give teams an extra edge on the rink or field. It really has the potential to transform the business of sport.”

  • Flexible artificial intelligence optoelectronic sensors towards health monitoring

    From creating images, generating text, and enabling self-driving cars, the potential uses of artificial intelligence (AI) are vast and transformative. However, all this capability comes at a very high energy cost. For instance, estimates indicate that training OpenAI’s popular GPT-3 model consumed about 1,287 MWh, enough to supply an average U.S. household for 120 years. This energy cost poses a substantial roadblock, particularly for using AI in large-scale applications like health monitoring, where large amounts of critical health information are sent to centralized data centers for processing. This not only consumes a lot of energy but also raises concerns about sustainability, bandwidth overload, and communication delays.
    Achieving AI-based health monitoring and biological diagnosis requires a standalone sensor that operates independently without the need for constant connection to a central server. At the same time, the sensor must have a low power consumption for prolonged use, should be capable of handling the rapidly changing biological signals for real-time monitoring, be flexible enough to attach comfortably to the human body, and be easy to make and dispose of due to the need for frequent replacements for hygiene reasons.
    Considering these criteria, researchers from Tokyo University of Science (TUS) led by Associate Professor Takashi Ikuno have developed a flexible paper-based sensor that operates like the human brain. Their findings were published online in the journal Advanced Electronic Materials on 22 February 2024.
    “A paper-based optoelectronic synaptic device composed of nanocellulose and ZnO was developed for realizing physical reservoir computing. This device exhibits synaptic behavior and cognitive tasks at a suitable timescale for health monitoring,” says Dr. Ikuno.
    In the human brain, information travels between networks of neurons through synapses. Each neuron can process information on its own, enabling the brain to handle multiple tasks at the same time. This ability for parallel processing makes the brain much more efficient compared to traditional computing systems. To mimic this capability, the researchers fabricated a photo-electronic artificial synapse device composed of gold electrodes on top of a 10 µm transparent film consisting of zinc oxide (ZnO) nanoparticles and cellulose nanofibers (CNFs).
    The transparent film serves three main purposes. Firstly, it allows light to pass through, enabling it to handle optical input signals representing various biological information. Secondly, the cellulose nanofibers impart flexibility and can be easily disposed of by incineration. Thirdly, the ZnO nanoparticles are photoresponsive and generate a photocurrent when exposed to pulsed UV light and a constant voltage. This photocurrent mimics the responses transmitted by synapses in the human brain, enabling the device to interpret and process biological information received from optical sensors.
    Notably, the film was able to distinguish 4-bit input optical pulses and generate distinct currents in response to time-series optical input, with a rapid, subsecond response time. This quick response is crucial for detecting sudden changes or abnormalities in health-related signals. Furthermore, when exposed to two successive light pulses, the electrical current response was stronger for the second pulse. This behavior, termed post-potentiation facilitation, contributes to short-term memory processes in the brain and enhances the ability of synapses to detect and respond to familiar patterns.
    To test this, the researchers converted MNIST images, a dataset of handwritten digits, into 4-bit optical pulses. They then irradiated the film with these pulses and measured the current response. Using this data as input, a neural network was able to recognize handwritten numbers with an accuracy of 88%.
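    The sketch below mimics that readout stage in software (it simulates a device and is not the paper's setup: the scikit-learn digits set stands in for MNIST, and the leaky nonlinear response is an assumed stand-in for the film's photocurrent). Pixels are quantised to 4-bit levels, passed through the toy "device", and only a lightweight linear classifier is trained on the responses, which is the essence of physical reservoir computing.

        import numpy as np
        from sklearn.datasets import load_digits
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        digits = load_digits()                                     # 8x8 handwritten digits
        X = (digits.data / digits.data.max() * 15).astype(int)     # quantize pixels to 4 bits (0-15)

        def device_response(row, leak=0.6):
            # Toy stand-in for the film's current: each pixel's 4-bit value is
            # sent as an optical pulse, and the "photocurrent" decays between pixels.
            state, trace = 0.0, []
            for level in row:
                state = leak * state + np.tanh(level / 15.0)       # facilitation-like buildup
                trace.append(state)
            return np.array(trace)

        features = np.array([device_response(x) for x in X])
        Xtr, Xte, ytr, yte = train_test_split(features, digits.target, random_state=0)
        readout = LogisticRegression(max_iter=2000).fit(Xtr, ytr)  # only the readout is trained
        print("readout accuracy:", round(readout.score(Xte, yte), 3))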
    Remarkably, this handwritten-digit recognition capability remained unaffected even when the device was repeatedly bent and stretched up to 1,000 times, demonstrating its ruggedness and feasibility for repeated use. “This study highlights the potential of embedding semiconductor nanoparticles in flexible CNF films for use as flexible synaptic devices for PRC,” concludes Dr. Ikuno.
    Let us hope that these advancements pave the way for wearable sensors in health monitoring applications!