More stories

  • Dams now run smarter with AI

    In August 2020, following a period of prolonged drought and intense rainfall, a dam situated near the Seomjin River in Korea experienced overflow during a water release, resulting in damages exceeding 100 billion won (USD 76 million). The flooding was attributed to maintaining the dam’s water level 6 meters higher than the norm. Could this incident have been averted through predictive dam management?
    A research team led by Professor Jonghun Kam and Eunmi Lee, a PhD candidate, from the Division of Environmental Science & Engineering at Pohang University of Science and Technology (POSTECH), recently employed deep learning techniques to scrutinize dam operation patterns and assess their effectiveness. Their findings were published in the Journal of Hydrology.
    Korea faces a precipitation peak during the summer, relying on dams and associated infrastructure for water management. However, the escalating global climate crisis has led to the emergence of unforeseen typhoons and droughts, complicating dam operations. In response, a new study has emerged, aiming to surpass conventional physical models by harnessing the potential of an artificial intelligence (AI) model trained on extensive big data.
    The team focused on crafting an AI model that not only predicts the operational patterns of dams within the Seomjin River basin (specifically the Seomjin River Dam, Juam Dam, and Juam Control Dam) but also reveals the decision-making processes of the trained models. Their objective was to formulate a scenario outlining the methodology for forecasting dam water levels. The team trained a Gated Recurrent Unit (GRU) model, a deep learning algorithm, on data spanning 2002 to 2021 from dams along the Seomjin River, with precipitation, inflow, and outflow serving as inputs and hourly dam water levels as outputs. The analysis demonstrated remarkable accuracy, with an efficiency index exceeding 0.9.
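    At the level of a single cell, a GRU updates its hidden state through update and reset gates. The weights and toy input sequence below are illustrative stand-ins, not the trained POSTECH model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, w):
    """One step of a single-unit GRU cell (illustrative weights only)."""
    z = sigmoid(w["wz"] * x + w["uz"] * h + w["bz"])       # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h + w["br"])       # reset gate
    h_cand = math.tanh(w["wh"] * x + w["uh"] * (r * h) + w["bh"])
    return (1.0 - z) * h + z * h_cand                      # blended state

# Toy weights, plus an hourly input sequence standing in for the
# precipitation/inflow/outflow features collapsed to one scalar per hour.
weights = {"wz": 0.5, "uz": 0.4, "bz": 0.0,
           "wr": 0.3, "ur": 0.2, "br": 0.0,
           "wh": 0.8, "uh": 0.6, "bh": 0.0}

h = 0.0
for x in [0.1, 0.4, 0.9, 0.3]:
    h = gru_step(x, h, weights)
print(h)
```

    In the real model the hidden state feeds a readout layer that predicts the hourly dam water level; the gates let the cell decide how much past state to keep at each hour.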
    Subsequently, the team devised explainable scenarios, scaling each input variable by -40%, -20%, +20%, and +40% to examine how the trained GRU model responded to these alterations. While changes in precipitation had a negligible impact on dam water levels, variations in inflow significantly influenced the dam’s water level. Notably, the identical change in outflow yielded different water levels at distinct dams, confirming that the GRU model had effectively learned the unique operational nuances of each dam.
    Professor Jonghun Kam remarked, “Our examination delved beyond predicting the patterns of dam operations to scrutinizing their effectiveness using AI models. We introduced a methodology aimed at indirectly understanding the decision-making process of AI-based black-box models determining dam water levels.” He further stated, “Our aspiration is that this insight will contribute to a deeper understanding of dam operations and enhance their efficiency in the future.”
    The research was sponsored by the Mid-career Researcher Program of the National Research Foundation of Korea.

  • The mind’s eye of a neural network system

    In the background of image recognition software that can ID our friends on social media and wildflowers in our yard are neural networks, a type of artificial intelligence inspired by how our own brains process data. While neural networks sprint through data, their architecture makes it difficult to trace the origin of errors that are obvious to humans — like confusing a Converse high-top with an ankle boot — limiting their use in more vital work like health care image analysis or research. A new tool developed at Purdue University makes finding those errors as simple as spotting mountaintops from an airplane.
    “In a sense, if a neural network were able to speak, we’re showing you what it would be trying to say,” said David Gleich, a Purdue professor of computer science in the College of Science who developed the tool, which is featured in a paper published in Nature Machine Intelligence. “The tool we’ve developed helps you find places where the network is saying, ‘Hey, I need more information to do what you’ve asked.’ I would advise people to use this tool in any high-stakes neural network decision scenario or image prediction task.”
    Code for the tool is available on GitHub, as are use case demonstrations. Gleich collaborated on the research with Tamal K. Dey, also a Purdue professor of computer science, and Meng Liu, a former Purdue graduate student who earned a doctorate in computer science.
    In testing their approach, Gleich’s team caught neural networks mistaking the identity of images in databases of everything from chest X-rays and gene sequences to apparel. In one example, a neural network repeatedly mislabeled images of cars from the Imagenette database as cassette players. The reason? The pictures were drawn from online sales listings and included tags for the cars’ stereo equipment.
    Neural network image recognition systems are essentially algorithms that process data in a way that mimics the weighted firing pattern of neurons as an image is analyzed and identified. A system is trained to its task — such as identifying an animal, a garment or a tumor — with a “training set” of images that includes data on each pixel, tagging and other information, and the identity of the image as classified within a particular category. Using the training set, the network learns, or “extracts,” the information it needs in order to match the input values with the category. This information, a string of numbers called an embedded vector, is used to calculate the probability that the image belongs to each of the possible categories. Generally speaking, the correct identity of the image is within the category with the highest probability.
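    The last step in that pipeline, converting the embedded vector's class scores into probabilities, is conventionally a softmax. A minimal sketch with invented scores (not from the paper):

```python
import math

def softmax(scores):
    """Convert raw class scores into probabilities that sum to 1."""
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three categories: sneaker, ankle boot, high-top
scores = [1.2, 0.9, 2.5]
probs = softmax(scores)
predicted = max(range(len(probs)), key=probs.__getitem__)
print(probs, predicted)
```

    The predicted category is the index with the highest probability, which is what "the correct identity is within the category with the highest probability" refers to.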
    But the embedded vectors and probabilities don’t correlate to a decision-making process that humans would recognize. Feed in 100,000 numbers representing the known data, and the network produces an embedded vector of 128 numbers that don’t correspond to physical features, although they do make it possible for the network to classify the image. In other words, you can’t open the hood on the algorithms of a trained system and follow along. Between the input values and the predicted identity of the image is a proverbial “black box” of unrecognizable numbers across multiple layers.
    “The problem with neural networks is that we can’t see inside the machine to understand how it’s making decisions, so how can we know if a neural network is making a characteristic mistake?” Gleich said.

    Rather than trying to trace the decision-making path of any single image through the network, Gleich’s approach makes it possible to visualize the relationship that the computer sees among all the images in an entire database. Think of it like a bird’s-eye view of all the images as the neural network has organized them.
    The relationship among the images (such as the network’s prediction of the identity classification of each image in the database) is based on the embedded vectors and probabilities the network generates. To boost the resolution of the view and find places where the network can’t distinguish between two different classifications, Gleich’s team first developed a method of splitting and overlapping the classifications to identify where images have a high probability of belonging to more than one classification.
    The team then maps the relationships onto a Reeb graph, a tool taken from the field of topological data analysis. On the graph, each group of images the network thinks are related is represented by a single dot. Dots are color coded by classification. The closer the dots, the more similar the network considers groups to be, and most areas of the graph show clusters of dots in a single color. But groups of images with a high probability of belonging to more than one classification will be represented by two differently colored overlapping dots. With a single glance, areas where the network cannot distinguish between two classifications appear as a cluster of dots in one color, accompanied by a smattering of overlapping dots in a second color. Zooming in on the overlapping dots will show an area of confusion, like the picture of the car that’s been labeled both car and cassette player.
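    The overlap idea can be caricatured in a few lines: flag items whose probability mass is split across two classes. This toy filter is an illustration only, not the paper's Reeb-graph construction:

```python
def ambiguous(prob_rows, threshold=0.35):
    """Return indices of items whose top two class probabilities
    both exceed the threshold, i.e. plausible members of two classes."""
    flagged = []
    for i, probs in enumerate(prob_rows):
        top_two = sorted(probs, reverse=True)[:2]
        if all(p >= threshold for p in top_two):
            flagged.append(i)
    return flagged

rows = [
    [0.90, 0.05, 0.05],   # clearly one class
    [0.48, 0.45, 0.07],   # e.g. "car" vs "cassette player"
    [0.10, 0.10, 0.80],
]
print(ambiguous(rows))
```

    On the Reeb graph, items like the second row are the overlapping, differently colored dots that mark an area of confusion.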
    “What we’re doing is taking these complicated sets of information coming out of the network and giving people an ‘in’ into how the network sees the data at a macroscopic level,” Gleich said. “The Reeb map represents the important things, the big groups and how they relate to each other, and that makes it possible to see the errors.”
    “Topological Structure of Complex Predictions” was produced with the support of the National Science Foundation and the U.S. Department of Energy.

  • Wearables capture body sounds to continuously monitor health

    During even the most routine visits, physicians listen to sounds inside their patients’ bodies — air moving in and out of the lungs, heart beats and even digested food progressing through the long gastrointestinal tract. These sounds provide valuable information about a person’s health. And when these sounds subtly change or downright stop, it can signal a serious problem that warrants time-sensitive intervention.
    Now, Northwestern University researchers are introducing new soft, miniaturized wearable devices that go well beyond episodic measurements obtained during occasional doctor exams. Softly adhered to the skin, the devices continuously track these subtle sounds simultaneously and wirelessly at multiple locations across nearly any region of the body.
    The new study will be published on Thursday (Nov. 16) in the journal Nature Medicine.
    In pilot studies, researchers tested the devices on 15 premature babies with respiratory and intestinal motility disorders and 55 adults, including 20 with chronic lung diseases. Not only did the devices perform with clinical-grade accuracy, they also offered new functionalities that had not previously been developed or introduced into research or clinical care.
    “Currently, there are no existing methods for continuously monitoring and spatially mapping body sounds at home or in hospital settings,” said Northwestern’s John A. Rogers, a bioelectronics pioneer who led the device development. “Physicians have to put a conventional, or a digital, stethoscope on different parts of the chest and back to listen to the lungs in a point-by-point fashion. In close collaborations with our clinical teams, we set out to develop a new strategy for monitoring patients in real-time on a continuous basis and without encumbrances associated with rigid, wired, bulky technology.”
    “The idea behind these devices is to provide highly accurate, continuous evaluation of patient health and then make clinical decisions in the clinics or when patients are admitted to the hospital or attached to ventilators,” said Dr. Ankit Bharat, a thoracic surgeon at Northwestern Medicine, who led the clinical research in the adult subjects. “A key advantage of this device is to be able to simultaneously listen and compare different regions of the lungs. Simply put, it’s like up to 13 highly trained doctors listening to different regions of the lungs simultaneously with their stethoscopes, and their minds are synced to create a continuous and a dynamic assessment of the lung health that is translated into a movie on a real-life computer screen.”
    Rogers is the Louis Simpson and Kimberly Querrey Professor of Materials Science and Engineering, Biomedical Engineering and Neurological Surgery at Northwestern’s McCormick School of Engineering and Northwestern University Feinberg School of Medicine. He also directs the Querrey Simpson Institute for Bioelectronics. Bharat is the chief of thoracic surgery and the Harold L. and Margaret N. Method Professor of Surgery at Feinberg. As the director of the Northwestern Medicine Canning Thoracic Institute, Bharat performed the first double-lung transplants on COVID-19 patients in the U.S. and started a first-of-its-kind lung transplant program for certain patients with stage 4 lung cancers.

    Comprehensive, non-invasive sensing network
    Containing pairs of high-performance, digital microphones and accelerometers, the small, lightweight devices gently adhere to the skin to create a comprehensive non-invasive sensing network. By simultaneously capturing sounds and correlating those sounds to body processes, the devices spatially map how air flows into, through and out of the lungs as well as how cardiac rhythm changes in varied resting and active states, and how food, gas and fluids move through the intestines.
    Encapsulated in soft silicone, each device measures 40 millimeters long, 20 millimeters wide and 8 millimeters thick. Within that small footprint, the device contains a flash memory drive, tiny battery, electronic components, Bluetooth capabilities and two tiny microphones — one facing inward toward the body and another facing outward toward the exterior. By capturing sounds in both directions, an algorithm can separate external (ambient or neighboring organ) sounds and internal body sounds.
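    The two-microphone separation can be sketched as a simple time-domain subtraction. The release does not specify the algorithm, so the `leakage` constant and the subtraction itself are assumptions for illustration:

```python
def separate_internal(body_mic, ambient_mic, leakage=0.6):
    """Subtract a scaled copy of the outward-facing microphone's signal
    from the inward-facing one. `leakage` (how much ambient sound reaches
    the body-facing mic) is an assumed constant; a real pipeline would
    estimate it adaptively and likely work in the frequency domain."""
    return [b - leakage * a for b, a in zip(body_mic, ambient_mic)]

body = [0.2, 0.5, 0.1, 0.4]       # internal sounds plus ambient leakage
ambient = [0.1, 0.3, 0.0, 0.2]    # outward-facing mic
print(separate_internal(body, ambient))
```

    Whatever the actual implementation, the principle is the same: the outward-facing microphone supplies a reference for the ambient sound so it can be removed from the body-facing channel.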
    “Lungs don’t produce enough sound for a normal person to hear,” Bharat said. “They just aren’t loud enough, and hospitals can be noisy places. When there are people talking nearby or machines beeping, it can be incredibly difficult. An important aspect of our technology is that it can correct for those ambient sounds.”
    Not only does capturing ambient noise enable noise canceling, it also provides contextual information about the patients’ surrounding environments, which is particularly important when treating premature babies.
    “Irrespective of device location, the continuous recording of the sound environment provides objective data on the noise levels to which babies are exposed,” said Dr. Wissam Shalish, a neonatologist at the Montreal Children’s Hospital and co-first author of the paper. “It also offers immediate opportunities to address any sources of stressful or potentially compromising auditory stimuli.”
    Non-obtrusively monitoring babies’ breathing

    When developing the new devices, the researchers had two vulnerable communities in mind: premature babies in the neonatal intensive care unit (NICU) and post-surgery adults. In the third trimester of pregnancy, babies’ respiratory systems mature so that babies can breathe outside the womb. Babies born before or in the earliest stages of the third trimester, therefore, are more likely to develop lung issues and disordered breathing complications.
    Particularly common in premature babies, apneas are a leading cause of prolonged hospitalization and potentially death. When apneas occur, infants either do not take a breath (due to immature breathing centers in the brain) or have an obstruction in their airway that restricts airflow. Some babies might even have a combination of the two. Yet, there are no current methods to continuously monitor airflow at the bedside and to accurately distinguish apnea subtypes, especially in these most vulnerable infants in the clinical NICU.
    “Many of these babies are smaller than a stethoscope, so they are already technically challenging to monitor,” said Dr. Debra E. Weese-Mayer, a study co-author, chief of autonomic medicine at Ann & Robert H. Lurie Children’s Hospital of Chicago and the Beatrice Cummings Mayer Professor of Autonomic Medicine at Feinberg. “The beauty of these new acoustic devices is they can non-invasively monitor a baby continuously — during wakefulness and sleep — without disturbing them. These acoustic wearables provide the opportunity to safely and non-obtrusively determine each infant’s ‘signature’ pertinent to their air movement (in and out of airway and lungs), heart sounds and intestinal motility day and night, with attention to circadian rhythmicity. And these wearables simultaneously monitor ambient noise that might affect the internal acoustic ‘signature’ and/or introduce other stimuli that might affect healthy growth and development.”
    In collaborative studies conducted at the Montreal Children’s Hospital in Canada, health care workers placed the acoustic devices on babies just below the suprasternal notch at the base of the throat. Devices successfully detected the presence of airflow and chest movements and could estimate the degree of airflow obstruction with high reliability, therefore allowing identification and classification of all apnea subtypes.
    “When placed on the suprasternal notch, the enhanced ability to detect and classify apneas could lead to more targeted and personalized care, improved outcomes and reduced length of hospitalization and costs,” Shalish said. “When placed on the right and left chest of critically ill babies, the real-time feedback transmitted whenever the air entry is diminished on one side relative to the other could promptly alert clinicians of a possible pathology necessitating immediate intervention.”
    Tracking infant digestion
    In children and infants, cardiorespiratory and gastrointestinal problems are major causes of death during the first five years of life. Gastrointestinal issues, in particular, are accompanied by reduced bowel sounds, which could be used as an early warning sign of digestion issues, intestinal dysmotility and potential obstructions. So, as part of the pilot study in the NICU, the researchers used the devices to monitor these sounds.
    In the study, premature babies wore sensors at four locations across their abdomen. Early results aligned with measurements of adult intestinal motility using wire-based systems, which is the current standard of care.
    “When placed on the abdomen, the automatic detection of reduced bowel sounds could alert the clinician of an impending (sometimes life-threatening) gastrointestinal complication,” Shalish said. “While improved bowel sounds could indicate signs of bowel recovery, especially after a gastrointestinal surgery.”
    “Intestinal motility has its own acoustic patterns and tonal qualities,” Weese-Mayer said. “Once an individual patient’s acoustic ‘signature’ is characterized, deviations from that personalized signature have potential to alert the individual and health care team to impending ill health, while there is still time for intervention to restore health.”
    In addition to offering continuous monitoring, the devices also untethered NICU babies from the variety of sensors, wires and cables connected to bedside monitors.
    Mapping a single breath
    Accompanying the NICU study, researchers tested the devices on adult patients, which included 35 adults with chronic lung diseases and 20 healthy controls. In all subjects, the devices captured the distribution of lung sounds and body motions at various locations simultaneously, enabling researchers to analyze a single breath across a range of regions throughout the lungs.
    “As physicians, we often don’t understand how a specific region of the lungs is functioning,” Bharat said. “With these wireless sensors, we can capture different regions of the lungs and assess their specific performance and each region’s performance relative to one another.”
    In 2020, cardiovascular and respiratory diseases claimed nearly 800,000 lives in the U.S., making them the first and third leading causes of death in adults, according to the Centers for Disease Control and Prevention. With the goal of helping guide clinical decisions and improve outcomes, the researchers hope their new devices can slash these numbers to save lives.
    “Lungs can make all sorts of sounds, including crackling, wheezing, rippling and howling,” Bharat said. “It’s a fascinating microenvironment. By continuously monitoring these sounds in real time, we can determine if lung health is getting better or worse and evaluate how well a patient is responding to a particular medication or treatment. Then we can personalize treatments to individual patients.”
    The study, “Wireless broadband acousto-mechanical sensors as body area networks for continuous physiological monitoring,” was supported by the Querrey-Simpson Institute for Bioelectronics at Northwestern University. The paper’s co-first authors are Jae-Young Yoo of Northwestern, Seyong Oh of Hanyang University in Korea and Wissam Shalish of the McGill University Health Centre.

  • AI model can help predict survival outcomes for patients with cancer

    Investigators from the UCLA Health Jonsson Comprehensive Cancer Center have developed an artificial intelligence (AI) model based on epigenetic factors that is able to predict patient outcomes successfully across multiple cancer types.
    The researchers found that by examining the gene expression patterns of epigenetic factors — factors that influence how genes are turned on or off — in tumors, they could categorize them into distinct groups to predict patient outcomes across various cancer types better than traditional measures like cancer grade and stage.
    These findings, described in Communications Biology, also lay the groundwork for developing targeted therapies aimed at regulating epigenetic factors in cancer therapy, such as histone acetyltransferases and SWI/SNF chromatin remodelers.
    “Traditionally, cancer has been viewed as primarily a result of genetic mutations within oncogenes or tumor suppressors,” said co-senior author Hilary Coller, professor of molecular, cell, and developmental biology and a member of the UCLA Health Jonsson Comprehensive Cancer Center and the Eli and Edythe Broad Center of Regenerative Medicine and Stem Cell Research at UCLA. “However, the emergence of advanced next-generation sequencing technologies has made more people realize that the state of the chromatin and the levels of epigenetic factors that maintain this state are important for cancer and cancer progression. There are different aspects of the state of the chromatin — like whether the histone proteins are modified, or whether the nucleic acid bases of the DNA contain extra methyl groups — that can affect cancer outcomes. Understanding these differences between tumors could help us learn more about why some patients respond differently to treatments and why their outcomes vary.”
    While previous studies have shown that mutations in the genes that encode epigenetic factors can affect an individual’s cancer susceptibility, little is known about how the levels of these factors impact cancer progression. This knowledge gap is crucial in fully understanding how epigenetics affects patient outcomes, noted Coller.
    To see if there was a relationship between epigenetic patterns and clinical outcomes, the researchers analyzed the expression patterns of 720 epigenetic factors to classify tumors from 24 different cancer types into distinct clusters.
    Out of the 24 adult cancer types, the team found that for 10 of the cancers, the clusters were associated with significant differences in patient outcomes, including progression-free survival, disease-specific survival, and overall survival.

    This was especially true for adrenocortical carcinoma, kidney renal clear cell carcinoma, brain lower grade glioma, liver hepatocellular carcinoma and lung adenocarcinoma, where the differences were significant for all the survival measurements.
    The clusters with poor outcomes tended to have higher cancer stage, larger tumor size, or more severe spread indicators.
    “We saw that the prognostic efficacy of an epigenetic factor was dependent on the tissue-of-origin of the cancer type,” said Mithun Mitra, co-senior author of the study and an associate project scientist in the Coller laboratory. “We even saw this link in the few pediatric cancer types we analyzed. This may be helpful in deciding the cancer-specific relevance of therapeutically targeting these factors.”
    The team then used epigenetic factor gene expression levels to train and test an AI model to predict patient outcomes. This model was specifically designed to predict what might happen for the five cancer types that had significant differences in survival measurements.
    The scientists found the model could successfully divide patients with these five cancer types into two groups: one with a significantly higher chance of better outcomes and another with a higher chance of poorer outcomes.
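    One way to picture dividing patients into two outcome groups from expression levels is a nearest-centroid assignment. This is a toy stand-in with invented profiles, not the authors' AI model:

```python
def centroid(rows):
    """Mean expression profile across a set of patients."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def dist2(a, b):
    """Squared Euclidean distance between two profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def assign_groups(expression_rows, good_centroid, poor_centroid):
    """Assign each patient's epigenetic-factor expression profile to the
    nearer of two outcome centroids (toy stand-in for the trained model)."""
    return ["better" if dist2(r, good_centroid) < dist2(r, poor_centroid)
            else "poorer" for r in expression_rows]

good = centroid([[1.0, 0.2], [0.9, 0.3]])   # made-up training profiles
poor = centroid([[0.2, 1.0], [0.3, 0.8]])
print(assign_groups([[0.95, 0.25], [0.25, 0.9]], good, poor))
```

    The real model learns a far richer decision boundary over 720 factors, but the output has the same shape: each patient lands in a better-outcome or poorer-outcome group.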
    They also saw that the genes that were most crucial for the AI model had a significant overlap with the cluster-defining signature genes.
    “The pan-cancer AI model is trained and tested on the adult patients from the TCGA cohort and it would be good to test this on other independent datasets to explore its broad applicability,” said Mitra. “Similar epigenetic factor-based models could be generated for pediatric cancers to see what factors influence the decision-making process compared to the models built on adult cancers.”
    “Our research helps provide a roadmap for similar AI models that can be generated through publicly-available lists of prognostic epigenetic factors,” said the study’s first author, Michael Cheng, a graduate student in the Bioinformatics Interdepartmental Program at UCLA. “The roadmap demonstrates how to identify certain influential factors in different types of cancer and contains exciting potential for predicting specific targets for cancer treatment.”
    The study was funded in part by grants from the National Cancer Institute, Cancer Research Institute, Melanoma Research Alliance, Melanoma Research Foundation, National Institutes of Health and the UCLA SPORE in Prostate Cancer.

  • Nuclear expansion failure shows simulations require change

    The widespread adoption of nuclear power was predicted by computer simulations more than four decades ago but the continued reliance on fossil fuels for energy shows these simulations need improvement, a new study has shown.
    In order to assess the efficacy of energy policies implemented today, a team of researchers looked back at the influential 1980s model that predicted nuclear power would expand dramatically. Energy policies shape how we produce and use energy, impacting jobs, costs, climate, and security. These policies are informed by simulations (also known as mathematical models) which forecast things like electricity demand and technology costs. But forecasts may miss the mark altogether.
    Results published today (Wednesday, 15 November) in the journal Risk Analysis show that the simulations informing energy policy had unreliable assumptions built into them and need more transparency about their limitations. To remedy this, the team recommends new ways to test simulations and to be upfront about their uncertainties, including methods like ‘sensitivity auditing’, which evaluates model assumptions. The goal is to improve modelling and open up decision-making.
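    Sensitivity auditing in miniature: rerun a forecast under a range of assumption values and report the spread rather than a single number. The compound-growth model and figures below are purely illustrative:

```python
def forecast_capacity(initial_gw, annual_growth, years):
    """Toy compound-growth forecast of installed capacity."""
    return initial_gw * (1.0 + annual_growth) ** years

# Audit the growth-rate assumption instead of committing to one value.
growth_assumptions = [0.02, 0.05, 0.08]
projections = [forecast_capacity(100.0, g, 20) for g in growth_assumptions]
low, high = min(projections), max(projections)
print(f"20-year projection spans {low:.0f}-{high:.0f} GW")
```

    Reporting the span makes the model's dependence on an unverifiable assumption visible, which is exactly the transparency the study calls for.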
    Lead researcher Dr Samuele Lo Piano, of the University of Reading, said: “Energy policy affects everybody, so it’s worrying when decisions rely on just a few models without questioning their limits. By questioning assumptions and exploring what we don’t know, we can get better decision making. We have to acknowledge that no model can perfectly predict the future. But by being upfront about model limitations, democratic debate on energy policy will improve.”
    Modelling politics
    A chapter of a new book, The politics of modelling (to be published on November 20), written by lead author Dr Lo Piano, highlights why the research matters for all the fields where mathematical models are used to inform decision- and policy-making. The chapter considers the inherent complexities and uncertainties posed by human-caused socio-economic and environmental changes.
    Entitled ‘Sensitivity auditing — A practical checklist for auditing decision-relevant models’, the chapter presents four real-world applications of sensitivity auditing in public health, education, human-water systems, and food provision systems.

  • Realistic talking faces created from only an audio clip and a person’s photo

    A team of researchers from Nanyang Technological University, Singapore (NTU Singapore) has developed a computer program that creates realistic videos that reflect the facial expressions and head movements of the person speaking, only requiring an audio clip and a face photo.
    DIverse yet Realistic Facial Animations, or DIRFA, is an artificial intelligence-based program that takes an audio clip and a face photo and produces a 3D video of the person showing realistic and consistent facial animations synchronised with the spoken audio.
    The NTU-developed program improves on existing approaches, which struggle with pose variations and emotional control.
    To accomplish this, the team trained DIRFA on over one million audiovisual clips from over 6,000 people derived from an open-source database called The VoxCeleb2 Dataset to predict cues from speech and associate them with facial expressions and head movements.
    The researchers said DIRFA could lead to new applications across various industries and domains, including healthcare, as it could enable more sophisticated and realistic virtual assistants and chatbots, improving user experiences. It could also serve as a powerful tool for individuals with speech or facial disabilities, helping them to convey their thoughts and emotions through expressive avatars or digital representations, enhancing their ability to communicate.
    Corresponding author Associate Professor Lu Shijian, from the School of Computer Science and Engineering (SCSE) at NTU Singapore, who led the study, said: “The impact of our study could be profound and far-reaching, as it revolutionises the realm of multimedia communication by enabling the creation of highly realistic videos of individuals speaking, combining techniques such as AI and machine learning. Our program also builds on previous studies and represents an advancement in the technology, as videos created with our program are complete with accurate lip movements, vivid facial expressions and natural head poses, using only their audio recordings and static images.”
    First author Dr Wu Rongliang, a PhD graduate from NTU’s SCSE, said: “Speech exhibits a multitude of variations. Individuals pronounce the same words differently in diverse contexts, encompassing variations in duration, amplitude, tone, and more. Furthermore, beyond its linguistic content, speech conveys rich information about the speaker’s emotional state and identity factors such as gender, age, ethnicity, and even personality traits. Our approach represents a pioneering effort in enhancing performance from the perspective of audio representation learning in AI and machine learning.” Dr Wu is a Research Scientist at the Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Singapore.

    The findings were published in the scientific journal Pattern Recognition in August.
    Speaking volumes: Turning audio into action with animated accuracy
    The researchers say that creating lifelike facial expressions driven by audio poses a complex challenge. For a given audio signal, there can be numerous possible facial expressions that would make sense, and these possibilities can multiply when dealing with a sequence of audio signals over time.
    Since audio typically has strong associations with lip movements but weaker connections with facial expressions and head positions, the team aimed to create talking faces that exhibit precise lip synchronisation, rich facial expressions, and natural head movements corresponding to the provided audio.
    To address this, the team first designed their AI model, DIRFA, to capture the intricate relationships between audio signals and facial animations. The team trained their model on more than one million audio and video clips of over 6,000 people, derived from a publicly available database.
    Assoc Prof Lu added: “Specifically, DIRFA modelled the likelihood of a facial animation, such as a raised eyebrow or wrinkled nose, based on the input audio. This modelling enabled the program to transform the audio input into diverse yet highly lifelike sequences of facial animations to guide the generation of talking faces.”
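The paper details DIRFA's actual architecture; purely as an illustration of the idea described above — modelling a distribution over facial-animation parameters conditioned on audio, then sampling to get diverse yet plausible sequences — here is a minimal sketch. All dimensions, weights, and names are hypothetical placeholders, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 26-dim audio features per frame,
# 10 facial-animation parameters (e.g. brow raise, jaw open).
AUDIO_DIM, ANIM_DIM = 26, 10

# Stand-in "trained" weights: in DIRFA these would be learned from
# ~1M audio/video clips; here they are random placeholders.
W_mean = rng.normal(scale=0.1, size=(AUDIO_DIM, ANIM_DIM))
W_logvar = rng.normal(scale=0.01, size=(AUDIO_DIM, ANIM_DIM))

def sample_animation(audio_frames, rng):
    """For each audio frame, predict a distribution over animation
    parameters and draw a sample from it, so the same audio can
    yield different but equally plausible animation sequences."""
    mean = audio_frames @ W_mean                      # (T, ANIM_DIM)
    std = np.exp(0.5 * (audio_frames @ W_logvar))     # per-parameter spread
    return mean + std * rng.normal(size=mean.shape)

audio = rng.normal(size=(100, AUDIO_DIM))  # 100 frames of audio features
seq_a = sample_animation(audio, np.random.default_rng(1))
seq_b = sample_animation(audio, np.random.default_rng(2))
print(seq_a.shape, np.allclose(seq_a, seq_b))  # → (100, 10) False
```

The key point the sketch captures is that the mapping is one-to-many: identical audio input produces distinct animation sequences on each sampling, which is how a probabilistic model can "guide the generation of talking faces" with variety rather than a single fixed output.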
    Dr Wu added: “Extensive experiments show that DIRFA can generate talking faces with accurate lip movements, vivid facial expressions and natural head poses. However, we are working to improve the program’s interface, allowing certain outputs to be controlled. For example, DIRFA does not allow users to adjust a certain expression, such as changing a frown to a smile.”
    Besides adding more options and improvements to DIRFA’s interface, the NTU researchers will be fine-tuning its facial expressions with a wider range of datasets that include more varied facial expressions and voice audio clips.


    Use it or lose it: New robotic system assesses mobility after stroke

    Stroke is a leading cause of long-term disability worldwide. Each year, more than 15 million people have strokes, and three-quarters of survivors will experience impairment, weakness and paralysis in their arms and hands.
    Many stroke survivors rely on their stronger arm to complete daily tasks, from carrying groceries to combing their hair, even when the weaker arm has the potential to improve. Breaking this habit, known as “arm nonuse” or “learned nonuse,” can improve strength and prevent injury.
    But, determining how much a patient is using their weaker arm outside of the clinic is challenging. In a classic case of observer’s paradox, the measurement has to be covert for the patient to behave spontaneously.
    Now, USC researchers have developed a novel robotic system for collecting precise data on how people recovering from stroke use their arms spontaneously. The first-of-its-kind method is outlined in a paper published in the November 15 issue of Science Robotics.
    Using a robotic arm to track 3D spatial information, and machine learning techniques to process the data, the method generates an “arm nonuse” metric, which could help clinicians accurately assess a patient’s rehabilitation progress. A socially assistive robot (SAR) provides instructions and encouragement throughout the challenge.
    “Ultimately, we are trying to assess how much someone’s performance in physical therapy transfers into real life,” said Nathan Dennler, the paper’s lead author and a computer science doctoral student.
    The research involved combined efforts from researchers in USC’s Thomas Lord Department of Computer Science and the Division of Biokinesiology and Physical Therapy. “This work brings together quantitative user-performance data collected using a robot arm, while also motivating the user to provide a representative performance thanks to a socially assistive robot,” said Maja Matarić, study co-author and Chan Soon-Shiong Chair and Distinguished Professor of Computer Science, Neuroscience, and Pediatrics. “This novel combination can serve as a more accurate and more motivating process for stroke patient assessment.”
    Additional authors are Stefanos Nikolaidis, an assistant professor of computer science; Amelia Cain, an assistant professor of clinical physical therapy; Carolee J. Winstein, a professor emeritus and an adjunct professor in the Neuroscience Graduate Program; and computer science students Erica De Guzmann and Claudia Chiu.

    Mirroring everyday use
    For the study, the research team recruited 14 participants who were right-hand dominant before their stroke. Each participant placed their hands on the device’s home position — a 3D-printed box with touch sensors.
    A socially assistive robot (SAR) described the system’s mechanics and provided positive feedback, while the robot arm moved a button to different target locations in front of the participant (100 locations in total). Each “reaching trial” began when the button lit up and the SAR cued the participant to move.
    In the first phase, the participants were directed to reach for the button using whichever hand came naturally, mirroring everyday use. In the second phase, they were instructed to use the stroke-affected arm only, mirroring performance in physiotherapy or other clinical settings.
    Using machine learning, the team analyzed three measurements to determine a metric for arm nonuse: arm use probability, time to reach, and successful reach. A noticeable difference in performance between the phases would suggest nonuse of the affected arm.
    “The participants have a time limit to reach the button, so even though they know they’re being tested, they still have to react quickly,” said Dennler. “This way, we’re measuring gut reaction to the light turning on — which hand will you use on the spot?”
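The paper defines the authors' precise metric; the sketch below only illustrates the general idea described above — comparing spontaneous arm choice in the first phase against the capability demonstrated in the second. The function names, data layout, and weighting are assumptions for illustration, not the study's formula.

```python
def arm_use_probability(trials):
    """Fraction of free-choice trials (phase one) in which the
    stroke-affected arm was used spontaneously."""
    return sum(t["used_affected"] for t in trials) / len(trials)

def mean_reach_time(trials):
    """Mean time-to-reach over successful trials, in seconds."""
    times = [t["time"] for t in trials if t["success"]]
    return sum(times) / len(times)

def nonuse_score(phase1, phase2):
    """Toy nonuse score: an arm the patient *can* use well
    (successful forced reaches in phase two) but rarely chooses
    spontaneously (low use probability in phase one) scores high."""
    capability = sum(t["success"] for t in phase2) / len(phase2)
    return capability * (1.0 - arm_use_probability(phase1))

# Hypothetical data: the affected arm succeeds when its use is
# required, but is rarely chosen when either hand is allowed.
phase1 = [{"used_affected": False, "time": 1.2, "success": True}] * 8 \
       + [{"used_affected": True, "time": 1.5, "success": True}] * 2
phase2 = [{"used_affected": True, "time": 1.4, "success": True}] * 9 \
       + [{"used_affected": True, "time": 2.0, "success": False}]

print(round(nonuse_score(phase1, phase2), 2))  # → 0.72
print(round(mean_reach_time(phase2), 2))       # → 1.4
```

A large gap between demonstrated capability and spontaneous choice, as in this toy example, is exactly the "noticeable difference in performance between the phases" that the article says would suggest learned nonuse.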
    Safe and easy to use

    In chronic stroke survivors, the researchers observed high variability in hand choice and in the time to reach targets in the workspace. The method was reliable across repeated sessions, and all participants found the interaction safe and simple to use, giving it above-average user experience scores.
    Crucially, the researchers found differences in arm use between participants, which could be used by healthcare professionals to more accurately track a patient’s stroke recovery.
    “For example, one participant whose right side was more affected by their stroke exhibited lower use of their right arm specifically in areas higher on their right side, but maintained a high probability of using their right arm for lower areas on the same side,” said Dennler.
    “Another participant exhibited more symmetric use but also compensated with their less-affected side slightly more often for higher-up points that were close to the mid-line.”
    Participants felt that the system could be improved through personalization, which the team hopes to explore in future studies, in addition to incorporating other behavioral data such as facial expressions and different types of tasks.
    As a physiotherapist, Cain said the technology addresses many issues encountered with traditional methods of assessment, which “require the patient not to know they’re being tested, and are based on the tester’s observation which can leave more room for error.”
    “This type of technology could provide rich, objective information about a stroke survivor’s arm use to their rehabilitation therapist,” said Cain. “The therapist could then integrate this information into their clinical decision-making process and better tailor their interventions to address the patient’s areas of weakness and build upon areas of strength.”


    Printed robots with bones, ligaments, and tendons

    3D printing is advancing rapidly, and the range of materials that can be used has expanded considerably. While the technology was previously limited to fast-curing plastics, it has now been made suitable for slow-curing plastics as well. These offer decisive advantages: they are more elastic, and more durable and robust.
    The use of such polymers is made possible by a new technology developed by researchers at ETH Zurich and a US start-up. As a result, researchers can now 3D print complex, more durable robots from a variety of high-quality materials in one go. This new technology also makes it easy to combine soft, elastic, and rigid materials. The researchers can also use it to create delicate structures and parts with cavities as desired.
    Materials that return to their original state
    Using the new technology, researchers at ETH Zurich have succeeded for the first time in printing a robotic hand with bones, ligaments and tendons made of different polymers in one go. “We wouldn’t have been able to make this hand with the fast-curing polyacrylates we’ve been using in 3D printing so far,” explains Thomas Buchner, a doctoral student in the group of ETH Zurich robotics professor Robert Katzschmann and first author of the study. “We’re now using slow-curing thiolene polymers. These have very good elastic properties and return to their original state much faster after bending than polyacrylates.” This makes thiolene polymers ideal for producing the elastic ligaments of the robotic hand.
    In addition, the stiffness of thiolenes can be fine-tuned very well to meet the requirements of soft robots. “Robots made of soft materials, such as the hand we developed, have advantages over conventional robots made of metal. Because they’re soft, there is less risk of injury when they work with humans, and they are better suited to handling fragile goods,” Katzschmann explains.
    Scanning instead of scraping
    3D printers typically produce objects layer by layer: nozzles deposit a given material in viscous form at each point; a UV lamp then cures each layer immediately. Previous methods involved a device that scraped off surface irregularities after each curing step. This works only with fast-curing polyacrylates. Slow-curing polymers such as thiolenes and epoxies would gum up the scraper.
    To accommodate the use of slow-curing polymers, the researchers developed 3D printing further by adding a 3D laser scanner that immediately checks each printed layer for any surface irregularities. “A feedback mechanism compensates for these irregularities when printing the next layer by calculating any necessary adjustments to the amount of material to be printed in real time and with pinpoint accuracy,” explains Wojciech Matusik, a professor at the Massachusetts Institute of Technology (MIT) in the US and co-author of the study. This means that instead of smoothing out uneven layers, the new technology simply takes the unevenness into account when printing the next layer.
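Inkbit's actual controller is certainly more sophisticated, but the core feedback idea the paragraph describes — command the next layer's material as the nominal thickness plus the scanner-measured deficit — can be sketched in a few lines. All numbers here are illustrative, not from the paper.

```python
# Toy closed-loop deposition: each layer should add a nominal
# thickness, but real deposition of a slow-curing polymer is
# imperfect. A scan before each layer measures the actual surface
# height, and the deficit is folded into the next command.
import random

random.seed(0)
LAYER = 0.05       # nominal layer height, mm
N_LAYERS = 200     # target part height: 200 * 0.05 = 10 mm

height = 0.0
for i in range(N_LAYERS):
    target = (i + 1) * LAYER
    scanned = height                  # laser scan of current surface
    command = target - scanned        # nominal thickness + measured deficit
    height += command * random.uniform(0.9, 1.1)  # ±10% deposition error

print(round(height, 3))  # close to 10.0 despite per-layer noise
```

Because each command absorbs the previous layer's error, deviations never accumulate: the final height stays within a fraction of one layer of the 10 mm target, which is the point of compensating for unevenness rather than scraping it off.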
    Inkbit, an MIT spin-off, was responsible for developing the new printing technology. The ETH Zurich researchers developed several robotic applications and helped optimise the printing technology for use with slow-curing polymers. The researchers from Switzerland and the US have now jointly published the technology and their sample applications in the journal Nature.
    At ETH Zurich, Katzschmann’s group will use the technology to explore further possibilities and to design even more sophisticated structures and develop additional applications. Inkbit is planning to use the new technology to offer a 3D printing service to its customers and to sell the new printers.