More stories

  • Nanoprinting electrodes for customized treatments of disease

    Carnegie Mellon University researchers have pioneered the CMU Array, a new type of microelectrode array for brain-computer interface platforms that holds the potential to transform how doctors treat neurological disorders.
    3D printed at the nanoscale, the ultra-high-density microelectrode array (MEA) is fully customizable. This means that one day, patients suffering from epilepsy or limb function loss due to stroke could have personalized medical treatment optimized for their individual needs.
    The collaboration combines the expertise of Rahul Panat, associate professor of mechanical engineering, and Eric Yttri, assistant professor of biological sciences. The team applied a state-of-the-art microfabrication technique, Aerosol Jet 3D printing, to produce arrays that solved the major design barriers of other brain-computer interface (BCI) arrays. The findings were published in Science Advances.
    “Aerosol Jet 3D printing offered three major advantages,” Panat explained. “Users are able to customize their MEAs to fit particular needs; the MEAs can work in three dimensions in the brain; and the density of the MEA is increased and therefore more robust.”
    MEA-based BCIs connect neurons in the brain with external electronics to monitor or stimulate brain activity. They are often used in applications like neuroprosthetic devices, artificial limbs, and visual implants to transport information from the brain to extremities that have lost functionality. BCIs also have potential applications in treating neurological diseases such as epilepsy, depression, and obsessive-compulsive disorder. However, existing devices have limitations.
    There are two popular types of BCI device. The oldest MEA is the Utah array, developed at the University of Utah and patented in 1993. This silicon-based array uses a field of tiny pins, or shanks, that can be inserted directly into the brain to detect electrical discharge from neurons at the tip of each pin.

  • Voice screening app delivers rapid results for Parkinson's and severe COVID

    A new screening test app could help advance the early detection of Parkinson’s disease and severe COVID-19, improving the management of these illnesses.
    Developed by a research team of engineers and neurologists led by RMIT University in Melbourne, the test can produce accurate results using just people’s voice recordings.
    Millions of people worldwide have Parkinson’s, a degenerative brain condition that can be challenging to diagnose because symptoms vary from person to person. Common symptoms include slow movement, tremor, rigidity and imbalance.
    Currently, Parkinson’s is diagnosed through an evaluation by a neurologist that can take up to 90 minutes.
    Powered by artificial intelligence, the smartphone app records a person’s voice and takes just 10 seconds to reveal whether they may have Parkinson’s disease and should be referred to a neurologist.
    Lead researcher Professor Dinesh Kumar, from RMIT’s School of Engineering, said the screening test’s ease of use made it ideal for a national screening program.
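    For readers curious how such a voice-based screening model is typically assembled, the following is a minimal generic sketch, not the RMIT team’s method; the MFCC features, the classifier, the file names and the labels are all illustrative assumptions.
```python
# Illustrative sketch only, NOT the RMIT team's method. It shows the
# general shape of a voice-screening pipeline: summarize a short
# recording as acoustic features, then classify.
import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def voice_features(wav_path):
    """Reduce a ~10-second recording to a fixed-length feature vector."""
    audio, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    # Mean and variance over time capture voice quality, not speech content.
    return np.concatenate([mfcc.mean(axis=1), mfcc.var(axis=1)])

# Hypothetical labeled recordings: 1 = Parkinson's, 0 = healthy control.
X = np.vstack([voice_features(p) for p in ["pd_01.wav", "hc_01.wav"]])
y = np.array([1, 0])
screener = RandomForestClassifier(n_estimators=100).fit(X, y)

# Screening a new 10-second recording takes a single prediction call.
print(screener.predict(voice_features("new_recording.wav").reshape(1, -1)))
```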

  • AI models can now continually learn from new data on intelligent edge devices like smartphones and sensors

    Microcontrollers, miniature computers that can run simple commands, are the basis for billions of connected devices, from internet-of-things (IoT) devices to sensors in automobiles. But cheap, low-power microcontrollers have extremely limited memory and no operating system, making it challenging to train artificial intelligence models on “edge devices” that work independently from central computing resources.
    Training a machine-learning model on an intelligent edge device allows it to adapt to new data and make better predictions. For instance, training a model on a smart keyboard could enable the keyboard to continually learn from the user’s writing. However, the training process requires so much memory that it is typically done using powerful computers at a data center, before the model is deployed on a device. This is more costly and raises privacy issues since user data must be sent to a central server.
    To address this problem, researchers at MIT and the MIT-IBM Watson AI Lab developed a new technique that enables on-device training using less than a quarter of a megabyte of memory. Other training solutions designed for connected devices can use more than 500 megabytes of memory, greatly exceeding the 256-kilobyte capacity of most microcontrollers (there are 1,024 kilobytes in one megabyte).
    The intelligent algorithms and framework the researchers developed reduce the amount of computation required to train a model, which makes the process faster and more memory efficient. Their technique can be used to train a machine-learning model on a microcontroller in a matter of minutes.
    This technique also preserves privacy by keeping data on the device, which could be especially beneficial when data are sensitive, such as in medical applications. It also could enable customization of a model based on the needs of users. Moreover, the framework preserves or improves the accuracy of the model when compared to other training approaches.
    “Our study enables IoT devices to not only perform inference but also continuously update the AI models to newly collected data, paving the way for lifelong on-device learning. The low resource utilization makes deep learning more accessible and can have a broader reach, especially for low-power edge devices,” says Song Han, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the MIT-IBM Watson AI Lab, and senior author of the paper describing this innovation.
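    As a rough illustration of how training memory can be cut on a tiny device, the PyTorch sketch below updates only a sparse subset of parameters (biases plus the final layer), so far fewer gradients and saved activations are needed. This demonstrates the general idea, not the authors’ exact system; the model and layer choices are assumptions.
```python
# Minimal sketch of sparse on-device fine-tuning: freeze most weights and
# update only biases and the classifier head to shrink training memory.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),            # index 6: the classifier head
)

# Freeze everything, then re-enable only biases and the head.
for name, param in model.named_parameters():
    param.requires_grad = ("bias" in name) or name.startswith("6.")

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.01)

x = torch.randn(8, 3, 32, 32)     # stand-in for newly collected sensor data
labels = torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), labels)
loss.backward()                   # gradients flow only to trainable params
optimizer.step()
```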

  • New algorithms help four-legged robots run in the wild

    A team led by the University of California San Diego has developed a new system of algorithms that enables four-legged robots to walk and run on challenging terrain while avoiding both static and moving obstacles.
    In tests, the system guided a robot to maneuver autonomously and swiftly across sandy surfaces, gravel, grass, and bumpy dirt hills covered with branches and fallen leaves without bumping into poles, trees, shrubs, boulders, benches or people. The robot also navigated a busy office space without bumping into boxes, desks or chairs.
    The work brings researchers a step closer to building robots that can perform search and rescue missions or collect information in places that are too dangerous or difficult for humans.
    The team will present its work at the 2022 International Conference on Intelligent Robots and Systems (IROS), which will take place from Oct. 23 to 27 in Kyoto, Japan.
    The system provides a legged robot more versatility because of the way it combines the robot’s sense of sight with another sensing modality called proprioception, which involves the robot’s sense of movement, direction, speed, location and touch — in this case, the feel of the ground beneath its feet.
    Currently, most approaches to train legged robots to walk and navigate rely either on proprioception or vision, but not both at the same time, said study senior author Xiaolong Wang, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering.
    “In one case, it’s like training a blind robot to walk by just touching and feeling the ground. And in the other, the robot plans its leg movements based on sight alone. It is not learning two things at the same time,” said Wang. “In our work, we combine proprioception with computer vision to enable a legged robot to move around efficiently and smoothly — while avoiding obstacles — in a variety of challenging environments, not just well-defined ones.”
    The system that Wang and his team developed uses a special set of algorithms to fuse data from real-time images taken by a depth camera on the robot’s head with data from sensors on the robot’s legs. This was not a simple task. “The problem is that during real-world operation, there is sometimes a slight delay in receiving images from the camera,” explained Wang, “so the data from the two different sensing modalities do not always arrive at the same time.”
    The team’s solution was to simulate this mismatch by randomizing the two sets of inputs — a technique the researchers call multi-modal delay randomization. The fused and randomized inputs were then used to train a reinforcement learning policy in an end-to-end manner. This approach helped the robot to make decisions quickly during navigation and anticipate changes in its environment ahead of time, so it could move and dodge obstacles faster on different types of terrains without the help of a human operator.
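    A minimal sketch may help make the delay-randomization idea concrete. The buffer size, delay range and observation shapes below are illustrative assumptions, not values from the paper.
```python
# Hedged sketch of multi-modal delay randomization: during simulated
# training the policy receives a randomly stale depth frame alongside
# up-to-date proprioception, so it learns to tolerate real camera lag.
import random
from collections import deque

import numpy as np

class DelayRandomizedObs:
    def __init__(self, max_delay_steps=3):
        self.frames = deque(maxlen=max_delay_steps + 1)  # recent depth frames

    def observe(self, depth_frame, proprio):
        self.frames.append(depth_frame)
        # Sample a random staleness for the visual channel only.
        delay = random.randint(0, len(self.frames) - 1)
        stale_frame = self.frames[-1 - delay]
        return {"vision": stale_frame, "proprioception": proprio}

# Each simulation step fuses current leg sensing with a possibly old frame;
# the fused observation is what the reinforcement learning policy trains on.
obs_maker = DelayRandomizedObs()
obs = obs_maker.observe(np.zeros((64, 64)), np.zeros(30))
```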
    Moving forward, Wang and his team are working on making legged robots more versatile so that they can conquer even more challenging terrains. “Right now, we can train a robot to do simple motions like walking, running and avoiding obstacles. Our next goals are to enable a robot to walk up and down stairs, walk on stones, change directions and jump over obstacles.”
    Video: https://youtu.be/GKbTklHrq60
    The team has released their code online at: https://github.com/Mehooz/vision4leg.
    Story Source:
    Materials provided by University of California San Diego. Original written by Liezel Labios.

  • Microscopic octopuses from a 3D printer

    Although just cute little creatures at first glance, the microscopic geckos and octopuses fabricated by 3D laser printing in the molecular engineering labs at Heidelberg University could open up new opportunities in fields such as microrobotics or biomedicine. The printed microstructures are made from novel materials — known as smart polymers — whose size and mechanical properties can be tuned on demand and with high precision. These “life-like” 3D microstructures were developed in the framework of the “3D Matter Made to Order” (3DMM2O) Cluster of Excellence, a collaboration between Ruperto Carola and the Karlsruhe Institute of Technology (KIT).
    “Manufacturing programmable materials whose mechanical properties can be adapted on demand is highly desired for many applications,” states Junior Professor Dr Eva Blasco, group leader at the Institute of Organic Chemistry and the Institute for Molecular Systems Engineering and Advanced Materials of Heidelberg University. This concept is known as 4D printing, and the additional fourth dimension refers to the ability of three-dimensionally printed objects to alter their properties over time. Prominent examples of materials for 4D printing are shape memory polymers: smart materials that can return to their original shape from a deformed state in response to an external stimulus such as temperature.
    The team led by Prof. Blasco recently introduced one of the first examples of 3D printed shape memory polymers at the microscale. In cooperation with the working group of biophysicist Prof. Dr Joachim Spatz, a scientist at Ruperto Carola and Director at the Max Planck Institute for Medical Research, the researchers developed a new shape memory material that can be 3D printed with high resolution both at the macro and at the microscale. The structures produced include box-shaped microarchitectures whose lids close in response to heat and can then be reopened. “These tiny structures show unusual shape memory properties at low activation temperatures, which is extremely interesting for bioapplications,” explains Christoph Spiegel, a doctoral researcher in the working group of Eva Blasco.
    In a follow-up study using adaptive materials, the researchers succeeded in producing much more complex 3D microstructures with “life-like” properties, such as geckos, octopuses, and even sunflowers. These materials are based on dynamic chemical bonds; the Heidelberg researchers report that alkoxyamines are particularly suitable for this purpose. After the printing process, the dynamic bonds allow the complex micrometer-scale structures to grow eight-fold in just a few hours and then harden while maintaining their shape. “Conventional inks do not offer such features,” emphasises Prof. Blasco. “Adaptive materials containing dynamic bonds have a bright future in the field of 3D printing,” adds the chemist.
    Materials scientists at the Karlsruhe Institute of Technology (KIT) also participated in the research on adaptable materials with “life-like” properties. The German Research Foundation and the Carl Zeiss Foundation funded the work, which was carried out within the framework of the 3DMM2O Cluster of Excellence. The results were published in two papers in the journal Advanced Functional Materials.
    Story Source:
    Materials provided by Heidelberg University.

  • 'Game-changing' study offers a powerful computer-modeling approach to cell simulations

    A milestone report from the University of Kansas appearing this week in the Proceedings of the National Academy of Sciences proposes a new technique for modeling molecular life with computers.
    According to lead author Ilya Vakser, director of the Computational Biology Program and Center for Computational Biology and professor of molecular biosciences at KU, the investigation into computer modeling of life processes is a major step toward creating a working simulation of a living cell at atomic resolution. The advance promises new insights into the fundamental biology of a cell, as well as faster and more precise treatment of human disease.
    “It is about tens or hundreds of thousands of times faster than the existing atomic resolution techniques,” Vakser said. “This provides unprecedented opportunities to characterize physiological mechanisms that now are far beyond the reach of computational modeling, to get insights into cellular mechanisms and to use this knowledge to improve our ability to treat diseases.”
    Until now, a major hurdle to modeling cells via computer has been how to model the proteins, and the interactions between them, that lie at the heart of cellular processes. To date, established techniques for modeling protein interactions have depended on either “protein docking” or “molecular simulation.”
    According to the investigators, both approaches have advantages and drawbacks. While protein docking algorithms are great for sampling spatial coordinates, they do not account for the “time coordinate,” or dynamics, of protein interactions. By contrast, molecular simulations model dynamics well, but these simulations are either prohibitively slow or run at too low a resolution.
    “Our proof-of-concept study bridges the two modeling methodologies, developing an approach that can reach unprecedented simulation timescales at all-atom resolution,” the authors wrote.

  • Machine learning model predicts health conditions of people with MS during stay-at-home periods

    Researchers led by Carnegie Mellon University have developed a model that can accurately predict how stay-at-home orders like those put in place during the COVID-19 pandemic affect the mental health of people with chronic neurological disorders such as multiple sclerosis.
    Researchers from CMU, the University of Pittsburgh and the University of Washington gathered data from the smartphones and fitness trackers of people with MS both before and during the early wave of the pandemic. Specifically, they used the passively collected sensor data to build machine learning models to predict depression, fatigue, poor sleep quality and worsening MS symptoms during the unprecedented stay-at-home period.
    Before the pandemic began, the original research question was whether digital data from the smartphones and fitness trackers of people with MS could predict clinical outcomes. By March 2020, as study participants were required to stay at home, their daily behavior patterns were significantly altered. The research team realized the data being collected could shed light on the effect of the stay-at-home orders on people with MS.
    “It presented us with an exciting opportunity,” said Mayank Goel, head of the Smart Sensing for Humans (SMASH) Lab at CMU. “If we look at the data points before and during the stay-at-home period, can we identify factors that signal changes in the health of people with MS?”
    The team gathered data passively over three to six months, collecting information such as the number of calls on the participants’ smartphones and the duration of those calls; the number of missed calls; and the participants’ location and screen activity data. The team also collected heart rate, sleep information and step count data from their fitness trackers.
    The research, “Predicting Multiple Sclerosis Outcomes During the COVID-19 Stay-at-Home Period: Observational Study Using Passively Sensed Behaviors and Digital Phenotyping,” was recently published in the Journal of Medical Internet Research Mental Health. Goel, an associate professor in the School of Computer Science’s Software and Societal Systems Department (S3D) and Human-Computer Interaction Institute (HCII), collaborated with Prerna Chikersal, a Ph.D. student in the HCII; Dr. Zongqi Xia, an associate professor of Neurology and director of the Translational and Computational Neuroimmunology Research Program at the University of Pittsburgh; and Anind Dey, a professor and dean of the University of Washington’s Information School.
    The work was based on previous studies from Goel’s and Dey’s research groups. In 2020, a CMU team published research that presented a machine learning model that could identify depression in college students at the end of the semester using smartphone and fitness tracker data. Participants in the earlier study, specifically 138 first-year CMU students, were relatively similar to each other when compared to the larger population beyond the university. The researchers set out to test whether their modeling approach could accurately predict clinically relevant health outcomes in a real-world patient population with greater demographic and clinical diversity, leading them to collaborate with Xia’s MS research program.
    People with MS can experience several chronic comorbidities, which gave the team a chance to test if their model could predict adverse health outcomes such as severe fatigue, poor sleep quality and worsening of MS symptoms in addition to depression. Building on this study, the team hopes to advance precision medicine for people with MS by improving early detection of disease progression and implementing targeted interventions based on digital phenotyping.
    The work could also help inform policymakers tasked with issuing future stay-at-home orders or other similar responses during pandemics or natural disasters. When the original COVID-19 stay-at-home orders were issued, there were early concerns about their economic impacts but only a belated appreciation for the toll on people’s mental and physical health, particularly among vulnerable populations such as those with chronic neurological conditions.
    “We were able to capture the change in people’s behaviors and accurately predict clinical outcomes when they are forced to stay at home for prolonged periods,” Goel said. “Now that we have a working model, we could evaluate who is at risk for worsening mental health or physical health, inform clinical triage decisions, or shape future public health policies.”
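    To make the approach concrete, here is a minimal sketch of how passively sensed features like those described above could feed a predictive model. The column names, values and model choice are illustrative assumptions, not the study’s actual pipeline.
```python
# Minimal sketch, not the study's model: weekly summaries of passively
# sensed behaviors feed a classifier that predicts a clinical outcome
# such as depression.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# One row per participant-week of passively collected behavior.
features = pd.DataFrame({
    "call_count":   [21, 4, 15, 9],
    "missed_calls": [1, 6, 2, 4],
    "screen_hours": [3.2, 7.8, 4.1, 6.5],
    "sleep_hours":  [7.1, 5.0, 6.4, 5.5],
    "step_count":   [52000, 11000, 34000, 16000],
})
depressed = [0, 1, 0, 1]  # labels would come from validated questionnaires

model = LogisticRegression(max_iter=1000).fit(features, depressed)
print(model.predict(features.iloc[[1]]))  # flag a participant at risk
```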
    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Aaron Aupperlee.

  • Video games offer the potential of 'experiential medicine'

    After a decade of work, scientists at UC San Francisco’s Neuroscape Center have developed a suite of video game interventions that improve key aspects of cognition in aging adults.
    The games, which co-creator Adam Gazzaley, MD, PhD, says can be adapted to clinical populations as a new form of “experiential medicine,” showed benefits on an array of important cognitive processes, including short-term memory, attention and long-term memory.
    Each employs adaptive closed-loop algorithms that Gazzaley’s lab pioneered in the widely cited 2013 Neuroracer study published in Nature, which first demonstrated it was possible to restore diminished mental faculties in older people with just four weeks of training on a specially designed video game.
    These algorithms achieve better results than commercial games by automatically increasing or decreasing in difficulty, depending on how well someone is playing the game. That keeps less skilled players from becoming overwhelmed, while still challenging those with greater ability. The games using these algorithms recreate common activities, such as driving, exercising and playing a drum, and use the skills each can engender to retrain cognitive processes that become deficient with age.
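    A minimal sketch of such a closed-loop difficulty controller follows, under illustrative assumptions: the 80% target success rate and the step sizes are stand-ins, not Neuroscape’s values.
```python
# Hedged sketch of an adaptive closed-loop difficulty controller:
# difficulty steps up after successes and down after failures, holding
# players near a target success rate.
class AdaptiveDifficulty:
    def __init__(self, level=1.0, target_rate=0.8):
        self.level = level
        self.target = target_rate

    def update(self, success):
        # Weighted up/down staircase: the ratio of the step sizes makes
        # the long-run success rate converge to the target.
        if success:
            self.level += 0.05 * (1 - self.target)
        else:
            self.level = max(0.1, self.level - 0.05 * self.target)
        return self.level

# Feed each trial's outcome back into the loop; difficulty tracks skill.
game = AdaptiveDifficulty()
for outcome in [True, True, False, True]:
    print(round(game.update(outcome), 3))
```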
    “All of these are taking experiences and delivering them in a very personalized, fun manner, and our brains respond through a process called plasticity,” said Gazzaley, who is professor of neurology in the UCSF Weill Institute for Neurosciences and the founder and executive director of Neuroscape. “Experiences are a powerful way of changing our brain, and this form of experience allows us to deliver it in a manner that’s very accessible.”
    The lab’s most recent invention is a musical rhythm game, developed in consultation with drummer Mickey Hart, that not only taught the 60- to 79-year-old participants how to drum, but also improved their ability to remember faces. The study appears Oct. 3, 2022, in PNAS.