More stories

  •

    Researchers demonstrate that quantum entanglement and topology are inextricably linked

    For the first time, researchers from the Structured Light Laboratory (School of Physics) at the University of the Witwatersrand in South Africa, led by Professor Andrew Forbes, in collaboration with string theorist Robert de Mello Koch from Huzhou University in China (previously from Wits University), have demonstrated the remarkable ability to perturb pairs of spatially separated yet interconnected quantum entangled particles without altering their shared properties.
    “We achieved this experimental milestone by entangling two identical photons and customising their shared wave-function in such a way that their topology or structure becomes apparent only when the photons are treated as a unified entity,” explains lead author Pedro Ornelas, an MSc student in the Structured Light Laboratory.
    This connection between the photons was established through quantum entanglement, often referred to as ‘spooky action at a distance’, enabling particles to influence each other’s measurement outcomes even when separated by significant distances. The research was published in Nature Photonics on 8 January 2024.
    The role of topology and its ability to preserve properties, in this work, can be likened to how a coffee mug can be reshaped into the form of a doughnut; despite the changes in appearance and shape during the transformation, a singular hole — a topological characteristic — remains constant and unaltered. In this way, the two objects are topologically equivalent. “The entanglement between our photons is malleable, like clay in a potter’s hands, but during the moulding process, some features are retained,” explains Forbes.
    The nature of the topology investigated here, termed Skyrmion topology, was initially explored by Tony Skyrme in the 1960s as field configurations displaying particle-like characteristics. In this context, topology refers to a global property of the fields, akin to a piece of fabric (the wave-function) whose texture (the topology) remains unchanged regardless of the direction in which it is pushed.
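    For readers who want the formalism behind the fabric analogy: the "texture" of such a field is conventionally quantified by the skyrmion number, the standard topological invariant of a unit vector field s(x, y). This is the textbook definition, not a formula taken from the article:

```latex
N \;=\; \frac{1}{4\pi} \iint \vec{s} \cdot
        \left( \frac{\partial \vec{s}}{\partial x} \times \frac{\partial \vec{s}}{\partial y} \right)
        \, dx \, dy
```

    Here N is an integer counting how many times the field wraps around the sphere, and it is unchanged under any smooth deformation of the field, which is why such structure can survive while the state itself is "moulded".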
    These concepts have since been realised in modern magnetic materials, liquid crystals, and even as optical analogues using classical laser beams. In the realm of condensed matter physics, skyrmions are highly regarded for their stability and noise resistance, leading to groundbreaking advancements in high-density data storage devices. “We aspire to see a similar transformative impact with our quantum-entangled skyrmions,” says Forbes.
    Previous research depicted these skyrmions as localised at a single location. “Our work presents a paradigm shift: the topology that has traditionally been thought to exist in a single and local configuration is now nonlocal or shared between spatially separated entities,” says Ornelas.

    Expanding on this concept, the researchers use topology as a framework to classify and distinguish entangled states. “This fresh perspective can serve as a labelling system for entangled states, akin to an alphabet!” says Dr Isaac Nape, a co-investigator.
    “Similar to how spheres, doughnuts, and handcuffs are distinguished by the number of holes they contain, our quantum skyrmions can be differentiated by their topological aspects,” says Nape. The team hopes that this might become a powerful tool that paves the way for new quantum communication protocols that use topology as an alphabet for quantum information processing across entanglement-based channels.
    The findings reported in the article are crucial because researchers have grappled for decades with developing techniques to preserve entangled states. The fact that topology remains intact even as entanglement decays suggests a potentially new encoding mechanism that utilises entanglement, even in scenarios with minimal entanglement where traditional encoding protocols would fail.
    “We will focus our research efforts on defining these new protocols and expanding the landscape of topological nonlocal quantum states,” says Forbes.

  •

    Severe MS predicted using machine learning

    A combination of only 11 proteins can predict long-term disability outcomes in multiple sclerosis (MS) for individual patients. The identified proteins could be used to tailor treatments to the individual based on the expected severity of the disease. The study, led by researchers at Linköping University in Sweden, has been published in the journal Nature Communications.
    “A combination of 11 proteins predicted both short and long-term disease activity and disability outcomes. We also concluded that it’s important to measure these proteins in cerebrospinal fluid, which better reflects what’s going on in the central nervous system, compared with measuring in the blood,” says Julia Åkesson, doctoral student at Linköping University and the University of Skövde.
    In multiple sclerosis, the immune system attacks the person’s own body, damaging nerves in the brain and the spinal cord. Its primary target is a fatty compound called myelin, which surrounds and insulates the nerve axons so that signals can be transmitted. When myelin is damaged, transmission becomes less efficient.
    Disease progression in multiple sclerosis varies considerably from person to person. For those predicted to develop more severe disease, it is important not to lose valuable time at the onset of the disease but to get the right treatment quickly. The researchers behind the current study, a collaboration between Linköping University, the Karolinska Institute and the University of Skövde, wanted to find out whether it is possible to detect at an early stage of disease which patients will require more powerful treatment. Being able to do so would be valuable both to physicians and to those living with MS.
    “I think we’ve come one step closer to an analysis tool for selecting which patients would need more effective treatment at an early stage of the disease. But such a treatment may have side effects and be relatively expensive, and some patients don’t need it,” says Mika Gustafsson, professor of bioinformatics at the Department of Physics, Chemistry and Biology at Linköping University, who led the study.
    Finding markers linked to disease severity many years in advance is a complicated challenge. In their study, the researchers analysed nearly 1,500 proteins in samples from 92 people with suspected or recently diagnosed MS. Data from the protein analyses were combined with extensive information from the patients’ medical records, such as disability scores, results from MRI scans of the nervous system, and treatments received. Using machine learning, the researchers found a number of proteins that could predict disease progression.
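    The study's actual pipeline combined proteomics with clinical data, but the core idea of narrowing 1,500 candidate proteins to a small predictive panel can be sketched with a minimal example. The data below are synthetic (92 "patients", a planted signal in the first 11 proteins) and the univariate correlation ranking is a crude stand-in for the paper's machine-learning feature selection, not a reproduction of it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 92 patients x 1,500 proteins, mirroring the
# discovery cohort size, with a binary severe-outcome label. The first
# 11 proteins are given a real group difference; the rest are pure noise.
n_patients, n_proteins, panel_size = 92, 1500, 11
y = rng.integers(0, 2, size=n_patients)
X = rng.normal(size=(n_patients, n_proteins))
X[:, :panel_size] += 1.5 * y[:, None]   # planted informative proteins

# Rank every protein by absolute correlation with the outcome and keep
# the top-scoring panel.
corr = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_proteins)])
panel = np.argsort(-np.abs(corr))[:panel_size]

print("selected protein indices:", sorted(panel.tolist()))
```

    On this toy data the selected panel recovers most of the planted proteins; in the real study, predictive value was then validated in an independent cohort.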
    “Having a panel consisting of only 11 proteins makes it easy should anyone want to develop an analysis for this. It won’t be as costly as measuring 1,500 proteins, so we’ve really narrowed it down to make it useful for others wanting to take this further,” says Sara Hojjati, doctoral student at the Department of Biomedical and Clinical Sciences at Linköping University.

    The research team also found that a specific protein, leaking from damaged nerve axons, is a reliable biomarker for disease activity in the short term. This protein is called neurofilament light chain, NfL. These findings confirm earlier research on the use of NfL to identify nerve damage and also suggest that the protein indicates how active the disease is.
    One of the main strengths of the study is that the protein combination identified in the patient group sampled at Linköping University Hospital was later confirmed in a separate group of 51 MS patients sampled at Karolinska University Hospital in Stockholm.
    This study is the first to measure such a large number of proteins with a highly sensitive method, proximity extension assay combined with next-generation sequencing (PEA-NGS). This technology allows high-accuracy measurement of even very small amounts, which is important as these proteins are often present at very low levels.
    The study was funded by the Swedish Foundation for Strategic Research, the Swedish Brain Foundation, Knut and Alice Wallenberg Foundation, Margaretha af Ugglas Foundation, the Swedish Research Council, NEURO Sweden and the Swedish Foundation for MS research, and others.

  •

    New study uses machine learning to bridge the reality gap in quantum devices

    A study led by the University of Oxford has used the power of machine learning to overcome a key challenge affecting quantum devices. For the first time, the findings reveal a way to close the ‘reality gap’: the difference between predicted and observed behaviour from quantum devices. The results have been published in Physical Review X.
    Quantum computing could supercharge a wealth of applications, from climate modelling and financial forecasting to drug discovery and artificial intelligence. But this will require effective ways to scale and combine individual quantum devices (also called qubits). A major barrier is inherent variability: even apparently identical units exhibit different behaviours.
    Functional variability is presumed to be caused by nanoscale imperfections in the materials that quantum devices are made from. Since there is no way to measure these directly, this internal disorder cannot be captured in simulations, leading to the gap between predicted and observed outcomes.
    To address this, the research group used a “physics-informed” machine learning approach to infer these disorder characteristics indirectly. This was based on how the internal disorder affected the flow of electrons through the device.
    Lead researcher Associate Professor Natalia Ares (Department of Engineering Science, University of Oxford) said: ‘As an analogy, when we play “crazy golf” the ball may enter a tunnel and exit with a speed or direction that doesn’t match our predictions. But with a few more shots, a crazy golf simulator, and some machine learning, we might get better at predicting the ball’s movements and narrow the reality gap.’
    The researchers measured the output current for different voltage settings across an individual quantum dot device. The data were fed into a simulation that calculated the difference between the measured current and the theoretical current expected if no internal disorder were present. By measuring the current at many different voltage settings, the simulation was constrained to find an arrangement of internal disorder that could explain the measurements at all of them. The approach combined mathematical and statistical methods with deep learning.
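    The constrained-fit idea can be illustrated with a deliberately simplified example: a hidden "disorder" parameter is inferred by choosing the value whose simulated current-voltage curve best matches noisy measurements at every voltage. The transport model and all numbers below are invented for illustration (a grid-search least-squares fit standing in for the paper's physics-informed deep learning), not the Oxford group's model:

```python
import numpy as np

# Toy transport model: current through a device whose conductance is
# suppressed by an unobservable disorder strength `d` (illustrative only).
def simulated_current(voltage, disorder):
    return voltage / (1.0 + disorder * voltage**2)

rng = np.random.default_rng(1)
voltages = np.linspace(0.1, 2.0, 40)

# "Measured" data generated from a hidden true disorder value plus noise.
true_disorder = 0.7
measured = simulated_current(voltages, true_disorder)
measured += rng.normal(scale=0.005, size=voltages.size)

# Constrain the model with all voltage settings at once: pick the disorder
# value whose simulated I-V curve best matches every measurement.
candidates = np.linspace(0.0, 2.0, 2001)
residuals = [np.sum((measured - simulated_current(voltages, d))**2)
             for d in candidates]
inferred = candidates[int(np.argmin(residuals))]

print(f"inferred disorder = {inferred:.3f} (true = {true_disorder})")
```

    The key point mirrored from the study is that many voltage settings jointly pin down an internal property that no single measurement reveals.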
    Associate Professor Ares added: ‘In the crazy golf analogy, it would be equivalent to placing a series of sensors along the tunnel, so that we could take measurements of the ball’s speed at different points. Although we still can’t see inside the tunnel, we can use the data to inform better predictions of how the ball will behave when we take the shot.’
    Not only did the new model find suitable internal disorder profiles to describe the measured current values, it was also able to accurately predict voltage settings required for specific device operating regimes.
    Crucially, the model provides a new method to quantify the variability between quantum devices. This could enable more accurate predictions of how devices will perform, and also help to engineer optimum materials for quantum devices. It could inform compensation approaches to mitigate the unwanted effects of material imperfections in quantum devices.
    Co-author David Craig, a PhD student at the Department of Materials, University of Oxford, added, ‘Similar to how we cannot observe black holes directly but we infer their presence from their effect on surrounding matter, we have used simple measurements as a proxy for the internal variability of nanoscale quantum devices. Although the real device still has greater complexity than the model can capture, our study has demonstrated the utility of using physics-aware machine learning to narrow the reality gap.’

  •

    Towards more accurate 3D object detection for robots and self-driving cars

    Robotics and autonomous vehicles are among the most rapidly growing domains in the technological landscape, with the potential to make work and transportation safer and more efficient. Since both robots and self-driving cars need to accurately perceive their surroundings, 3D object detection methods are an active area of study. Most 3D object detection methods employ LiDAR sensors to create 3D point clouds of their environment. Simply put, LiDAR sensors use laser beams to rapidly scan and measure the distances of objects and surfaces around the source. However, using LiDAR data alone can lead to errors due to LiDAR’s high sensitivity to noise, especially in adverse weather conditions such as rainfall.
    To tackle this issue, scientists have developed multi-modal 3D object detection methods that combine 3D LiDAR data with 2D RGB images taken by standard cameras. While the fusion of 2D images and 3D LiDAR data leads to more accurate 3D detection results, it still faces its own set of challenges, with accurate detection of small objects remaining difficult. The problem mainly lies in properly aligning the semantic information extracted independently from the 2D and 3D datasets, which is hard due to issues such as imprecise calibration or occlusion.
    Against this backdrop, a research team led by Professor Hiroyuki Tomiyama from Ritsumeikan University, Japan, has developed an innovative approach to make multi-modal 3D object detection more accurate and robust. The proposed scheme, called “Dynamic Point-Pixel Feature Alignment Network” (DPPFA-Net), is described in their paper published in IEEE Internet of Things Journal on 3 November 2023.
    The model comprises an arrangement of multiple instances of three novel modules: the Memory-based Point-Pixel Fusion (MPPF) module, the Deformable Point-Pixel Fusion (DPPF) module, and the Semantic Alignment Evaluator (SAE) module. The MPPF module is tasked with performing explicit interactions between intra-modal features (2D with 2D and 3D with 3D) and cross-modal features (2D with 3D). The use of the 2D image as a memory bank reduces the difficulty in network learning and makes the system more robust against noise in 3D point clouds. Moreover, it promotes the use of more comprehensive and discriminative features.
    In contrast, the DPPF module performs interactions only at pixels in key positions, which are determined via a smart sampling strategy. This allows for feature fusion in high resolutions at a low computational complexity. Finally, the SAE module helps ensure semantic alignment between both data representations during the fusion process, which mitigates the issue of feature ambiguity.
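    The basic cross-modal step underlying all three modules, pairing each 3D point with the image feature at its projected pixel and fusing the two, can be sketched in a few lines. Everything here (shapes, the pinhole projection, concatenation as the fusion step) is an illustrative toy, not DPPFA-Net's actual layers:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy inputs: a 2D feature map extracted from an RGB image, and a handful
# of LiDAR points with per-point features.
H, W, C_img, C_pt = 8, 8, 16, 32
image_features = rng.normal(size=(H, W, C_img))
points = rng.uniform(low=[-1, -1, 2], high=[1, 1, 6], size=(50, 3))  # x, y, z
point_features = rng.normal(size=(50, C_pt))

# 1) Project each 3D point into pixel coordinates (simple pinhole camera).
focal = 4.0
u = np.clip((points[:, 0] / points[:, 2]) * focal + W / 2, 0, W - 1).astype(int)
v = np.clip((points[:, 1] / points[:, 2]) * focal + H / 2, 0, H - 1).astype(int)

# 2) Gather the pixel feature under each projected point (the cross-modal
#    correspondence) and fuse it with the point feature by concatenation.
sampled = image_features[v, u]                              # (50, C_img)
fused = np.concatenate([point_features, sampled], axis=1)   # (50, C_pt + C_img)

print("fused feature shape:", fused.shape)
```

    Misalignment problems of the kind the SAE module addresses arise exactly at step 1: if calibration is imprecise or an object is occluded, the gathered pixel feature belongs to the wrong object.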
    The researchers tested DPPFA-Net by comparing it to the top performers for the widely used KITTI Vision Benchmark. Notably, the proposed network achieved average precision improvements as high as 7.18% under different noise conditions. To further test the capabilities of their model, the team created a new noisy dataset by introducing artificial multi-modal noise in the form of rainfall to the KITTI dataset. The results show that the proposed network performed better than existing models not only in the face of severe occlusions but also under various levels of adverse weather conditions. “Our extensive experiments on the KITTI dataset and challenging multi-modal noisy cases reveal that DPPFA-Net reaches a new state-of-the-art,” remarks Prof. Tomiyama.
    Notably, there are various ways in which accurate 3D object detection methods could improve our lives. Self-driving cars, which rely on such techniques, have the potential to reduce accidents and improve traffic flow and safety. Furthermore, the implications in the field of robotics should not be understated. “Our study could facilitate a better understanding and adaptation of robots to their working environments, allowing a more precise perception of small targets,” explains Prof. Tomiyama. “Such advancements will help improve the capabilities of robots in various applications.” Another use for 3D object detection networks is the pre-labeling of raw data for deep-learning perception systems. This would greatly reduce the cost of manual annotation, accelerating developments in the field.
    Overall, this study is a step in the right direction towards making autonomous systems more perceptive and better able to assist us in everyday activities.

  •

    New soft robots roll like tires, spin like tops and orbit like moons

    Researchers have developed a new soft robot design that engages in three simultaneous behaviors: rolling forward, spinning like a record, and following a path that orbits around a central point. The device, which operates without human or computer control, holds promise for developing soft robotic devices that can be used to navigate and map unknown environments.
    The new soft robots are called twisted ringbots. They are made of ribbon-like liquid crystal elastomers that are twisted — like a rotini noodle — and then joined together at the end to form a loop that resembles a bracelet. When the robots are placed on a surface that is at least 55 degrees Celsius (131 degrees Fahrenheit), which is hotter than the ambient air, the portion of the ribbon touching the surface contracts, while the portion of the ribbon exposed to the air does not. This induces a rolling motion; the warmer the surface, the faster the robot rolls.
    “The ribbon rolls on its horizontal axis, giving the ring forward momentum,” says Jie Yin, corresponding author of a paper on the work and an associate professor of mechanical and aerospace engineering at North Carolina State University.
    The twisted ringbot also spins along its central axis, like a record on a turntable. And as the twisted ringbot moves forward it travels in an orbital path around a central point, essentially moving in a large circle. However, if the twisted ringbot encounters a boundary — like the wall of a box — it will travel along the boundary.
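    The three superimposed motions can be pictured with a toy kinematic sketch. All numbers below (orbit radius, ring radius, rates) are made-up illustrative values rather than measurements from the paper; the only physics encoded is that rolling without slipping ties the roll rate to the speed of the ring's centre along its orbit:

```python
import numpy as np

# Illustrative geometry and rates (not values from the paper).
orbit_radius = 5.0                 # cm, radius of the orbital path
ring_radius = 1.0                  # cm, radius of the ring itself
orbit_rate = 0.2                   # rad/s, orbiting a central point
spin_rate = 1.0                    # rad/s, spinning like a record

# Rolling without slipping: the roll rate matches the centre's orbital speed.
roll_rate = orbit_radius * orbit_rate / ring_radius   # rad/s

t = np.linspace(0.0, 60.0, 601)
centre_x = orbit_radius * np.cos(orbit_rate * t)      # orbital motion
centre_y = orbit_radius * np.sin(orbit_rate * t)
spin_angle = spin_rate * t                            # spin about central axis

# A marked point on the rim combines orbit and spin, tracing a loopy path.
marker_x = centre_x + ring_radius * np.cos(spin_angle)
marker_y = centre_y + ring_radius * np.sin(spin_angle)

# The centre stays exactly on the orbit circle; the marker wobbles around it.
centre_dist = np.hypot(centre_x, centre_y)
marker_dist = np.hypot(marker_x, marker_y)
print("centre radius:", float(centre_dist.max()),
      "marker range:", float(marker_dist.min()), float(marker_dist.max()))
```

    In the real device these rates are not set in software at all; they emerge from the ribbon's geometry and the surface temperature, which is the point of physical intelligence discussed below.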
    “This behavior could be particularly useful for mapping unknown environments,” Yin says.
    The twisted ringbots are examples of devices whose behavior is governed by physical intelligence, meaning their actions are determined by their structural design and the materials they are made of, rather than being directed by a computer or human intervention.
    The researchers are able to fine-tune the behavior of the twisted ringbot by engineering the geometry of the device. For example, they can control the direction that the twisted ringbot spins by twisting the ribbon one way or the other. Speed can be influenced by varying the width of the ribbon, the number of twists in the ribbon, and so on.

    In proof-of-concept testing, the researchers showed that the twisted ringbot was able to follow the contours of various confined spaces.
    “Regardless of where the twisted ringbot is introduced to these spaces, it is able to make its way to a boundary and follow the boundary lines to map the space’s contours — whether it’s a square, a triangle and so on,” says Fangjie Qi, first author of the paper and a Ph.D. student at NC State. “It also identifies gaps or damage in the boundary.
    “We were also able to map the boundaries of more complex spaces by introducing two twisted ringbots into the space, with each robot rotating in a different direction,” Qi says. “This causes them to take different paths along the boundary. And by comparing the paths of both twisted ringbots, we’re able to capture the contours of the more complex space.”
    “In principle, no matter how complex a space is, you would be able to map it if you introduced enough of the twisted ringbots to map the whole picture, each one giving part of it,” says Yin. “And, given that these are relatively inexpensive to produce, that’s viable.
    “Soft robotics is still a relatively new field,” Yin says. “Finding new ways to control the movement of soft robots in a repeatable, engineered way moves the field forward. And advancing our understanding of what is possible is exciting.”
    The paper, “Defected Twisted Ring Topology For Autonomous Periodic Flip-Spin-Orbit Soft Robot,” will be published the week of January 8 in Proceedings of the National Academy of Sciences. The paper was co-authored by Yanbin Li and Yao Zhao, postdoctoral researchers at NC State; Yaoye Hong, a recent Ph.D. graduate of NC State; and Haitao Qing, a Ph.D. student at NC State.
    The work was done with support from the National Science Foundation under grants 2005374 and 2126072.

  •

    New AI tool accurately detects COVID-19 from chest X-rays

    Researchers have developed a groundbreaking Artificial Intelligence (AI) system that can rapidly detect COVID-19 from chest X-rays with more than 98% accuracy. The study results have just been published in Scientific Reports.
    Corresponding author Professor Amir H Gandomi, from the University of Technology Sydney (UTS) Data Science Institute, said there was a pressing need for effective automated tools to detect COVID-19, given the significant impact on public health and the global economy.
    “The most widely used COVID-19 test, real-time polymerase chain reaction (PCR), can be slow and costly, and produce false negatives. To confirm a diagnosis, radiologists need to manually examine CT scans or X-rays, which can be time-consuming and prone to error,” said Professor Gandomi.
    “The new AI system could be particularly beneficial in countries experiencing high levels of COVID-19 where there is a shortage of radiologists. Chest X-rays are portable, widely available and provide lower exposure to ionizing radiation than CT scans,” he said.
    Common symptoms of COVID-19 include fever, cough, difficulty breathing and a sore throat; however, it can be difficult to distinguish COVID-19 from influenza and other types of pneumonia.
    The new AI system uses a deep learning-based algorithm called a Custom Convolutional Neural Network (Custom-CNN) that is able to quickly and accurately distinguish between COVID-19 cases, normal cases, and pneumonia in X-ray images.
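    To make the architecture concrete, here is a minimal CNN forward pass with the same three-way output (COVID-19 / normal / pneumonia). This is a generic toy with untrained random weights, not the Custom-CNN from the study; real diagnostic accuracy comes entirely from the architecture details and training on labelled X-ray datasets:

```python
import numpy as np

rng = np.random.default_rng(3)

def conv2d(x, kernels):
    """Valid-mode 2D convolution: x (H, W), kernels (n, kh, kw)."""
    n, kh, kw = kernels.shape
    H, W = x.shape
    out = np.zeros((n, H - kh + 1, W - kw + 1))
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            patch = x[i:i + kh, j:j + kw]
            out[:, i, j] = (kernels * patch).sum(axis=(1, 2))
    return out

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Toy 32x32 "chest X-ray" and random weights: forward pass only
# (conv -> ReLU -> global average pool -> dense -> softmax).
image = rng.normal(size=(32, 32))
kernels = rng.normal(scale=0.1, size=(8, 3, 3))
dense = rng.normal(scale=0.1, size=(3, 8))          # 3 output classes

feat = np.maximum(conv2d(image, kernels), 0.0)      # ReLU feature maps
pooled = feat.mean(axis=(1, 2))                     # global average pooling
probs = softmax(dense @ pooled)

classes = ["COVID-19", "normal", "pneumonia"]
print(dict(zip(classes, probs.round(3))))
```

    The end-to-end property Gandomi describes below corresponds to the fact that the convolution kernels, rather than hand-picked biomarkers, are what training adjusts.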
    “Deep learning offers an end-to-end solution, eliminating the need to manually search for biomarkers. The Custom-CNN model streamlines the detection process, providing a faster and more accurate diagnosis of COVID-19,” said Professor Gandomi.

    “If a PCR test or rapid antigen test shows a negative or inconclusive result, due to low sensitivity, patients may require further examination via radiological imaging to confirm or rule out the virus’s presence. In this situation the new AI system could prove beneficial.
    “While radiologists play a crucial role in medical diagnosis, AI technology can assist them in making accurate and efficient diagnoses,” said Professor Gandomi.
    The performance of the Custom-CNN model was evaluated via a comprehensive comparative analysis, with accuracy as the performance criterion. The results showed that the new model outperforms the other AI diagnostic models.
    Fast and accurate diagnosis of COVID-19 can ensure patients get the correct treatment, including COVID-19 antivirals, which work best if taken within five days of the onset of symptoms. It could also help them isolate and protect others from getting infected, reducing pandemic outbreaks.
    This breakthrough represents a significant step in combatting the ongoing challenges posed by the pandemic, potentially transforming the landscape of COVID-19 diagnosis and management.

  •

    Researchers develop algorithm to determine how cellular ‘neighborhoods’ function in tissues

    Researchers from Children’s Hospital of Philadelphia (CHOP) have developed a new AI-powered algorithm to help understand how different cells organize themselves into particular tissues and communicate with one another. The new tool was tested on two types of cancer tissue to reveal how these “neighborhoods” of cells interact with one another to evade therapy, and further studies could shed more light on the function of these cells in the tumor microenvironment.
    The findings were published online today by the journal Nature Methods.
    To understand how different cells organize themselves to support the functions of a tissue, researchers proposed the concept of tissue cellular neighborhoods (TCNs) to describe functional units in which different, recurrent cell types work together to support specific tissue functions. Across individuals, the functions of these TCNs would remain the same. However, translating the huge amount of information in spatial omics data into models and hypotheses that can be interpreted and tested by researchers requires advanced AI algorithms.
    “It is very difficult to study the tissue microenvironment, how certain cells organize, behave and communicate with one another,” said senior study author Kai Tan, PhD, an investigator in the Center for Childhood Cancer Research at CHOP and a professor in the Department of Pediatrics and the Perelman School of Medicine at the University of Pennsylvania. “Until recent advances in so-called spatial omics technology, it was impossible to spatially characterize more than 100 proteins or hundreds or even thousands of genes across a piece of tissue, which might be home to hundreds of thousands of cells and their respective genes.”
    In this study, the researchers developed the deep-learning-based CytoCommunity algorithm to identify TCNs based on the cell identities in a tissue sample, their spatial distributions, and patient clinical data, which can help researchers better understand how these neighborhoods of cells are organized and how they are associated with certain clinical outcomes. Tissue samples from breast and colorectal tumors were used because of the high volume of available data, enough to train the algorithm to identify TCNs associated with high-risk disease subtypes.
    By using CytoCommunity for breast and colorectal cancer data, the algorithm revealed new fibroblast-enriched TCNs and granulocyte-enriched TCNs specific to high-risk breast cancer and colorectal cancer, respectively.
    “Since we were able to prove the effectiveness of CytoCommunity, the next step is to apply this algorithm to both healthy and diseased tissue data generated by research consortia such as HuBMAP (Human BioMolecular Atlas Program) and HTAN (Human Tumor Atlas Network),” Tan said. “For instance, using data from childhood cancers such as leukemia, neuroblastoma and high-grade gliomas, we hope to find tissue cellular neighborhoods that might be associated with responses to certain therapies and combine our findings with genetic data to help determine which genetic pathways may be involved at the cellular and molecular levels.”
    This study was supported by National Institutes of Health grants CA233285, HL165442 and HL156090, a grant from the Chan Zuckerberg Initiative (AWD-2021-237920), a grant from the Leona M. and Harry B. Helmsley Charitable Trust (no. 2008-04062), a National Natural Science Foundation of China grant no. 62002277, a grant from the Young Talent Fund of University Association for Science and Technology in Shaanxi (no. 20210101), a grant from the Fundamental Research Funds for the Central Universities (no. QTZX23051), and National Natural Science Foundation of China grant nos. 62132015 and U22A2037.

  •

    Soft robotic, wearable device improves walking for individual with Parkinson’s disease

    Freezing is one of the most common and debilitating symptoms of Parkinson’s disease, a neurodegenerative disorder that affects more than 9 million people worldwide. When individuals with Parkinson’s disease freeze, they suddenly lose the ability to move their feet, often mid-stride, resulting in a series of staccato stutter steps that get shorter until the person stops altogether. These episodes are one of the biggest contributors to falls among people living with Parkinson’s disease.
    Today, freezing is treated with a range of pharmacological, surgical or behavioral therapies, none of which are particularly effective.
    What if there was a way to stop freezing altogether?
    Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Boston University Sargent College of Health & Rehabilitation Sciences have used a soft, wearable robot to help a person living with Parkinson’s walk without freezing. The robotic garment, worn around the hips and thighs, gives a gentle push to the hips as the leg swings, helping the patient achieve a longer stride.
    The device completely eliminated the participant’s freezing while walking indoors, allowing them to walk faster and further than they could without the garment’s help.
    “We found that just a small amount of mechanical assistance from our soft robotic apparel delivered instantaneous effects and consistently improved walking across a range of conditions for the individual in our study,” said Conor Walsh, the Paul A. Maeder Professor of Engineering and Applied Sciences at SEAS and co-corresponding author of the study.
    The research demonstrates the potential of soft robotics to treat this frustrating and potentially dangerous symptom of Parkinson’s disease and could allow people living with the disease to regain not only their mobility but their independence.

    The research is published in Nature Medicine.
    For over a decade, Walsh’s Biodesign Lab at SEAS has been developing assistive and rehabilitative robotic technologies to improve mobility for individuals post-stroke and those living with ALS or other diseases that impact mobility. Some of that technology, specifically an exosuit for post-stroke gait retraining, received support from the Wyss Institute for Biologically Inspired Engineering, and was licensed and commercialized by ReWalk Robotics.
    In 2022, SEAS and Sargent College received a grant from the Massachusetts Technology Collaborative to support the development and translation of next-generation robotics and wearable technologies. The research is centered at the Move Lab, whose mission is to support advances in human performance enhancement with the collaborative space, funding, R&D infrastructure, and experience necessary to turn promising research into mature technologies that can be translated through collaboration with industry partners.
    This research emerged from that partnership.
    “Leveraging soft wearable robots to prevent freezing of gait in patients with Parkinson’s required a collaboration between engineers, rehabilitation scientists, physical therapists, biomechanists and apparel designers,” said Walsh, whose team collaborated closely with that of Terry Ellis, Professor and Physical Therapy Department Chair and Director of the Center for Neurorehabilitation at Boston University.
    The team spent six months working with a 73-year-old man with Parkinson’s disease, who — despite using both surgical and pharmacologic treatments — endured substantial and incapacitating freezing episodes more than 10 times a day, causing him to fall frequently. These episodes prevented him from walking around his community and forced him to rely on a scooter to get around outside.

    In previous research, Walsh and his team leveraged human-in-the-loop optimization to demonstrate that a soft, wearable device could be used to augment hip flexion and assist in swinging the leg forward to provide an efficient approach to reduce energy expenditure during walking in healthy individuals.
    Here, the researchers used the same approach but to address freezing. The wearable device uses cable-driven actuators and sensors worn around the waist and thighs. Using motion data collected by the sensors, algorithms estimate the phase of the gait and generate assistive forces in tandem with muscle movement.
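    The estimate-phase-then-assist loop can be sketched numerically. The hip-angle signal below is a sinusoid standing in for real sensor data, and the phase observer, the 0.6-0.9 assistance window, and the 5 N·m peak torque are all invented illustrative choices, not the controller from the paper:

```python
import numpy as np

# Simulated hip flexion angle over a few gait cycles (sinusoid stand-in
# for real motion-sensor data; amplitude and period are made-up values).
dt = 0.01                      # s, sample interval
t = np.arange(0.0, 5.0, dt)
gait_period = 1.2              # s per stride
hip_angle = 25.0 * np.sin(2 * np.pi * t / gait_period)   # degrees

# 1) Estimate gait phase in [0, 1) from the angle and its derivative
#    (atan2 of the signal and its normalised velocity: a simple phase
#    observer that increases monotonically through each cycle).
velocity = np.gradient(hip_angle, dt)
omega = 2 * np.pi / gait_period
phase = (np.arctan2(-velocity / omega, hip_angle) / (2 * np.pi)) % 1.0

# 2) Schedule assistance: a smooth hip-flexion torque applied only during
#    a swing-phase window of the cycle.
in_window = (phase >= 0.6) & (phase < 0.9)
torque = np.where(in_window,
                  np.sin((phase - 0.6) / 0.3 * np.pi) * 5.0,  # peak 5 N*m
                  0.0)

print("fraction of cycle assisted:", float(in_window.mean()))
```

    The design point mirrored from the study is timing: assistance is generated in tandem with the estimated gait phase rather than on a fixed clock, so it stays synchronised with the wearer's own movement.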
    The effect was instantaneous. Without any special training, the patient was able to walk without any freezing indoors and with only occasional episodes outdoors. He was also able to walk and talk without freezing, a rarity without the device.
    “Our team was really excited to see the impact of the technology on the participant’s walking,” said Jinsoo Kim, former PhD student at SEAS and co-lead author on the study.
    During the study visits, the participant told researchers: “The suit helps me take longer steps and when it is not active, I notice I drag my feet much more. It has really helped me, and I feel it is a positive step forward. It could help me to walk longer and maintain the quality of my life.”
    “Our study participants who volunteer their time are real partners,” said Walsh. “Because mobility is difficult, it was a real challenge for this individual to even come into the lab, but we benefited so much from his perspective and feedback.”
    The device could also be used to better understand the mechanisms of gait freezing, which is poorly understood.
    “Because we don’t really understand freezing, we don’t really know why this approach works so well,” said Ellis. “But this work suggests the potential benefits of a ‘bottom-up’ rather than ‘top-down’ solution to treating gait freezing. We see that restoring almost-normal biomechanics alters the peripheral dynamics of gait and may influence the central processing of gait control.”
    The research was co-authored by Jinsoo Kim, Franchino Porciuncula, Hee Doo Yang, Nicholas Wendel, Teresa Baker and Andrew Chin. Asa Eckert-Erdheim, Dorothy Orzel and Ada Huang also contributed to the design of the technology, and Sarah Sullivan managed the clinical research. It was supported by the National Science Foundation under grant CMMI-1925085; the National Institutes of Health under grant NIH U01 TR002775; and the Massachusetts Technology Collaborative, Collaborative Research and Development Matching Grant.