More stories

  • Self-assembling and disassembling swarm molecular robots via DNA molecular controller

    Researchers from Tohoku University and Kyoto University have successfully developed a DNA-based molecular controller that autonomously directs the assembly and disassembly of molecular robots. This pioneering technology marks a significant step towards advanced autonomous molecular systems with potential applications in medicine and nanotechnology.
    “Our newly developed molecular controller, composed of artificially designed DNA molecules and enzymes, coexists with molecular robots and controls them by outputting specific DNA molecules,” points out Shin-ichiro M. Nomura, an associate professor at Tohoku University’s Graduate School of Engineering and co-author of the study. “This allows the molecular robots to self-assemble and disassemble automatically, without the need for external manipulation.”
    Such autonomous operation is a crucial advancement, as it enables the molecular robots to perform tasks in environments where external signals cannot reach.
    In addition to Nomura, the research team included Ibuki Kawamata (an associate professor at Kyoto University’s Graduate School of Science), Kohei Nishiyama (a graduate student at Johannes Gutenberg University Mainz), and Akira Kakugo (a professor at Kyoto University’s Graduate School of Science).
    Research on molecular robots, which are designed to aid in disease treatment and diagnosis by functioning both inside and outside the body, is gaining significant attention. Previous research by Kakugo and colleagues had developed swarm-type molecular robots that move individually. These robots could be assembled and disassembled as a group through external manipulation. With the newly constructed molecular controller, however, the robots can now self-assemble and disassemble according to a programmed sequence.
    The molecular controller initiates the process by outputting a specific DNA signal equivalent to the “assemble” command. The microtubules in the same solution, modified with DNA and propelled by kinesin molecular motors, receive the DNA signal, align their movement direction, and automatically assemble into a bundled structure. Subsequently, the controller outputs a “disassemble” signal, causing the microtubule bundles to disassemble automatically. This dynamic change was achieved through precise control by the molecular circuit, which functions like a highly sophisticated signal processor. Moreover, the molecular controller coexists with molecular robots, eliminating the need for external manipulation.
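    The programmed assemble/disassemble sequence can be pictured as a tiny state machine: the controller emits signals in a fixed order, and the swarm changes state in response. The sketch below is purely illustrative of that logic, not the underlying DNA chemistry; all names are hypothetical.

```python
# Purely illustrative state machine for the programmed sequence described
# above: the controller outputs DNA "signals" in order, and the swarm reacts.
# All names are hypothetical; this models the control logic, not the chemistry.

CONTROLLER_PROGRAM = ["assemble", "disassemble"]   # programmed signal sequence

def swarm_step(state: str, signal: str) -> str:
    """Advance the swarm's state given one controller signal."""
    transitions = {
        ("dispersed", "assemble"): "bundled",      # microtubules align and bundle
        ("bundled", "disassemble"): "dispersed",   # bundles break apart
    }
    return transitions.get((state, signal), state)  # unrecognized signals ignored

state = "dispersed"
history = [state]
for signal in CONTROLLER_PROGRAM:                  # no external manipulation needed
    state = swarm_step(state, signal)
    history.append(state)

print(history)   # ['dispersed', 'bundled', 'dispersed']
```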
    Advancing this technology is expected to contribute to the development of more complex and advanced autonomous molecular systems. As a result, molecular robots might perform tasks that no single robot could accomplish alone, by assembling on command and then dispersing to explore targets. Additionally, this research expanded the operating conditions of molecular robots by integrating different molecular groups, such as the DNA circuit system and the motor protein operating system.
    “By developing the molecular controller and combining it with increasingly sophisticated and precise DNA circuits, molecular information amplification devices, and biomolecular design technologies, we expect swarm molecular robots to process a more diverse range of biomolecular information automatically,” adds Nomura. “This advancement may lead to the realization of innovative technologies in nanotechnology and the medical field, such as nanomachines for in-situ molecular recognition and diagnosis or smart drug delivery systems.”

  • AI can help doctors make better decisions and save lives

    Deploying and evaluating a machine learning intervention to improve clinical care and patient outcomes is a key step in moving clinical deterioration models from byte to bedside, according to a June 13 editorial in Critical Care Medicine that comments on a Mount Sinai study published in the same issue. The main study found that hospitalized patients were 43 percent more likely to have their care escalated and significantly less likely to die if their care team received AI-generated alerts signaling adverse changes in their health.
    “We wanted to see if quick alerts made by AI and machine learning, trained on many different types of patient data, could help reduce both how often patients need intensive care and their chances of dying in the hospital,” says lead study author Matthew A. Levin, MD, Professor of Anesthesiology, Perioperative and Pain Medicine, and Genetics and Genomic Sciences, at Icahn Mount Sinai, and Director of Clinical Data Science at The Mount Sinai Hospital. “Traditionally, we have relied on older manual methods such as the Modified Early Warning Score (MEWS) to predict clinical deterioration. However, our study shows automated machine learning algorithm scores that trigger evaluation by the provider can outperform these earlier methods in accurately predicting this decline. Importantly, it allows for earlier intervention, which could save more lives.”
    The non-randomized, prospective study looked at 2,740 adult patients who were admitted to four medical-surgical units at The Mount Sinai Hospital in New York. The patients were split into two groups: one that received real-time alerts based on the predicted likelihood of deterioration, sent directly to their nurses and physicians or a “rapid response team” of intensive care physicians, and another group where alerts were created but not sent. In the units where the alerts were suppressed, patients who met standard deterioration criteria received urgent interventions from the rapid response team.
    Additional findings in the intervention group demonstrated that patients were more likely to receive medications to support the heart and circulation, indicating that doctors were taking early action, and were less likely to die within 30 days.
    “Our research shows that real-time alerts using machine learning can substantially improve patient outcomes,” says senior study author David L. Reich, MD, President of The Mount Sinai Hospital and Mount Sinai Queens, the Horace W. Goldsmith Professor of Anesthesiology, and Professor of Artificial Intelligence and Human Health at Icahn Mount Sinai. “These models are accurate and timely aids to clinical decision-making that help us bring the right team to the right patient at the right time. We think of these as ‘augmented intelligence’ tools that speed in-person clinical evaluations by our physicians and nurses and prompt the treatments that keep our patients safer. These are key steps toward the goal of becoming a learning health system.”
    The study was terminated early due to the COVID-19 pandemic. The algorithm has been deployed on all stepdown units within The Mount Sinai Hospital, using a simplified workflow. A stepdown unit is a specialized area in the hospital where patients who are stable but still require close monitoring and care are placed. It’s a step between the intensive care unit (ICU) and a general hospital area, ensuring that patients receive the right level of attention as they recover.
    A team of intensive care physicians visits the 15 patients with the highest prediction scores every day and makes treatment recommendations to the doctors and nurses caring for the patient. As the algorithm is continually retrained on larger numbers of patients over time, the assessments by the intensive care physicians serve as the gold standard of correctness, and the algorithm becomes more accurate through reinforcement learning.
    In addition to this clinical deterioration algorithm, the researchers have developed and deployed 15 additional AI-based clinical decision support tools throughout the Mount Sinai Health System.
    The Mount Sinai paper is titled “Real-Time Machine Learning Alerts to Prevent Escalation of Care: A Nonrandomized Clustered Pragmatic Clinical Trial.” The remaining authors of the paper, all with Icahn Mount Sinai except where indicated, are Arash Kia, MD, MSc; Prem Timsina, PhD; Fu-yuan Cheng, MS; Kim-Anh-Nhi Nguyen, MS; Roopa Kohli-Seth, MD; Hung-Mo Lin, ScD (Yale University); Yuxia Ouyang, PhD; and Robert Freeman, RN, MSN, NE-BC.

  • Making ferromagnets ready for ultra-fast communication and computation technology

    An international team led by researchers at the University of California, Riverside, has made a significant breakthrough in how to enable and exploit ultra-fast spin behavior in ferromagnets. The research, published in Physical Review Letters and highlighted as an editors’ suggestion, paves the way for ultra-high frequency applications.
    Today’s smartphones and computers operate at gigahertz frequencies, a measure of how fast they operate, with scientists working to make them even faster. The new research has found a way to achieve terahertz frequencies using conventional ferromagnets, which could lead to next-generation communication and computation technologies that operate a thousand times faster.
    Ferromagnets are materials where electron spins align in the same direction, but these spins also oscillate around this direction, creating “spin waves.” These spin waves are crucial for emerging computer technologies, playing a key role in processing information and signals.
    “When spins oscillate, they experience friction due to interactions with electrons and the crystal lattice of the ferromagnet,” said Igor Barsukov, an associate professor of physics and astronomy, who led the study. “Interestingly, these interactions also cause spins to acquire inertia, leading to an additional type of spin oscillation called nutation.”
    Barsukov explained that nutation occurs at ultra-high frequencies, making it highly desirable for future computer and communication technologies. The recent experimental confirmation of nutational oscillations has excited the magnetism research community, he said.
    “Modern spintronic applications manipulate spins using spin currents injected into the magnet,” said Rodolfo Rodriguez, the first author of the paper, a former graduate student in the Barsukov Group, and now a scientist at HRL Labs, LLC.
    Barsukov and his team discovered that injecting a spin current with the “wrong” sign can excite nutational auto-oscillations.

    “These self-sustained oscillations hold great promise for next-generation computation and communication technologies,” said coauthor Allison Tossounian, until recently an undergraduate student in the Barsukov Group.
    According to Barsukov, spin inertia introduces a second time-derivative in the equation of motion, making some phenomena counterintuitive.
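    The second time-derivative Barsukov refers to enters the magnetization dynamics through an inertial term. As a sketch following the inertial Landau-Lifshitz-Gilbert literature (symbols here are assumptions: m is the unit magnetization, γ the gyromagnetic ratio, α the Gilbert damping, and η the inertial relaxation time), the equation of motion takes the form:

```latex
\frac{\partial \mathbf{m}}{\partial t}
  = -\gamma\, \mathbf{m} \times \mathbf{H}_{\mathrm{eff}}
  + \alpha\, \mathbf{m} \times \frac{\partial \mathbf{m}}{\partial t}
  + \eta\, \mathbf{m} \times \frac{\partial^{2} \mathbf{m}}{\partial t^{2}}
```

    The first two terms give the familiar precession and damping; the last, inertial term is what permits the additional high-frequency nutation oscillation.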
    “We managed to harmonize spin-current-driven dynamics and spin inertia,” he said. “We also found an isomorphism, a parallel, between the spin dynamics in ferromagnets and ferrimagnets, which could accelerate technological innovation by exploiting synergies between these fields.”
    In ferrimagnets, two antiparallel spin lattices usually carry unequal amounts of spin. Materials with antiparallel spin lattices have recently attracted increased interest as candidates for ultrafast applications, Barsukov said.
    “But many technological challenges remain,” he said. “Our understanding of spin currents and materials engineering for ferromagnets has significantly advanced over the past few decades. Coupled with the recent confirmation of nutation, we saw an opportunity for ferromagnets to become excellent candidates for ultra-high frequency applications. Our study prepares the stage for concerted efforts to explore optimal materials and design efficient architectures to enable terahertz devices.”
    The title of the paper is “Spin inertia and auto-oscillations in ferromagnets.”
    The study was supported by the National Science Foundation.

  • Scientists preserve DNA in an amber-like polymer

    In the movie “Jurassic Park,” scientists extracted DNA that had been preserved in amber for millions of years, and used it to create a population of long-extinct dinosaurs.
    Inspired partly by that film, MIT researchers have developed a glassy, amber-like polymer that can be used for long-term storage of DNA, whether entire human genomes or digital files such as photos.
    Most current methods for storing DNA require freezing temperatures, so they consume a great deal of energy and are not feasible in many parts of the world. In contrast, the new amber-like polymer can store DNA at room temperature while protecting the molecules from damage caused by heat or water.
    The researchers showed that they could use this polymer to store DNA sequences encoding the theme music from Jurassic Park, as well as an entire human genome. They also demonstrated that the DNA can be easily removed from the polymer without damaging it.
    “Freezing DNA is the number one way to preserve it, but it’s very expensive, and it’s not scalable,” says James Banal, a former MIT postdoc. “I think our new preservation method is going to be a technology that may drive the future of storing digital information on DNA.”
    Banal and Jeremiah Johnson, the A. Thomas Geurtin Professor of Chemistry at MIT, are the senior authors of the study, published in the Journal of the American Chemical Society. Former MIT postdoc Elizabeth Prince and MIT postdoc Ho Fung Cheng are the lead authors of the paper.
    Capturing DNA
    DNA, a very stable molecule, is well-suited for storing massive amounts of information, including digital data. Digital storage systems encode text, photos, and other kinds of information as a series of 0s and 1s. This same information can be encoded in DNA using the four nucleotides that make up the genetic code: A, T, G, and C. For example, G and C could be used to represent 0 while A and T represent 1.
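    The mapping just described can be made concrete in a few lines. In this sketch, 0 becomes G or C and 1 becomes A or T; the alternation rule (G/A at even positions, C/T at odd) is an arbitrary choice for illustration, not the scheme used in the study.

```python
# Toy illustration of the bit-to-nucleotide mapping described above:
# 0 -> G or C, 1 -> A or T. The alternation rule (G/A at even bit positions,
# C/T at odd) is an arbitrary choice for this sketch.

def encode_bits(bits: str) -> str:
    """Map a bit string to a DNA sequence (0 -> G/C, 1 -> A/T)."""
    out = []
    for i, b in enumerate(bits):
        if b == "0":
            out.append("G" if i % 2 == 0 else "C")
        else:
            out.append("A" if i % 2 == 0 else "T")
    return "".join(out)

def decode_dna(seq: str) -> str:
    """Recover the bit string: G/C read as 0, A/T read as 1."""
    return "".join("0" if base in "GC" else "1" for base in seq)

message = "01101000"              # one byte of digital data
dna = encode_bits(message)
assert decode_dna(dna) == message  # round-trips without loss
print(dna)
```

    Because two nucleotides stand in for each bit value, the decoder only needs to know which pair a base belongs to, which is what makes the round trip lossless.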

    DNA offers a way to store this digital information at very high density: In theory, a coffee mug full of DNA could store all of the world’s data. DNA is also very stable and relatively easy to synthesize and sequence.
    In 2021, Banal and his postdoc advisor, Mark Bathe, an MIT professor of biological engineering, developed a way to store DNA in particles of silica, which could be labeled with tags that revealed the particles’ contents. That work led to a spinout called Cache DNA.
    One downside to that storage system is that it takes several days to embed DNA into the silica particles. Furthermore, removing the DNA from the particles requires hydrofluoric acid, which can be hazardous to workers handling the DNA.
    To come up with alternative storage materials, Banal began working with Johnson and members of his lab. Their idea was to use a type of polymer known as a degradable thermoset, which consists of polymers that form a solid when heated. The material also includes cleavable links that can be easily broken, allowing the polymer to be degraded in a controlled way.
    “With these deconstructable thermosets, depending on what cleavable bonds we put into them, we can choose how we want to degrade them,” Johnson says.
    For this project, the researchers decided to make their thermoset polymer from styrene and a cross-linker, which together form an amber-like thermoset called cross-linked polystyrene. This thermoset is also very hydrophobic, so it can prevent moisture from getting in and damaging the DNA. To make the thermoset degradable, the styrene monomers and cross-linkers are copolymerized with monomers called thionolactones. These links can be broken by treating them with a molecule called cysteamine.

    Because styrene is so hydrophobic, the researchers had to come up with a way to entice DNA — a hydrophilic, negatively charged molecule — into the styrene.
    To do that, they identified a combination of three monomers that they could turn into polymers that dissolve DNA by helping it interact with styrene. Each of the monomers has different features that cooperate to get the DNA out of water and into the styrene. There, the DNA forms spherical complexes, with charged DNA in the center and hydrophobic groups forming an outer layer that interacts with styrene. When heated, this solution becomes a solid glass-like block, embedded with DNA complexes.
    The researchers dubbed their method T-REX (Thermoset-REinforced Xeropreservation). The process of embedding DNA into the polymer network takes a few hours, but that could become shorter with further optimization, the researchers say.
    To release the DNA, the researchers first add cysteamine, which cleaves the bonds holding the polystyrene thermoset together, breaking it into smaller pieces. Then, a detergent called SDS can be added to remove the DNA from polystyrene without damaging it.
    Storing information
    Using these polymers, the researchers showed that they could encapsulate DNA of varying length, from tens of nucleotides up to an entire human genome (more than 50,000 base pairs). They were able to store DNA encoding the Emancipation Proclamation and the MIT logo, in addition to the theme music from “Jurassic Park.”
    After storing the DNA and then removing it, the researchers sequenced it and found that no errors had been introduced, which is a critical feature of any digital data storage system.
    The researchers also showed that the thermoset polymer can protect DNA from temperatures up to 75 degrees Celsius (167 degrees Fahrenheit). They are now working on ways to streamline the process of making the polymers and forming them into capsules for long-term storage.
    Cache DNA, a company started by Banal and Bathe, with Johnson as a member of the scientific advisory board, is now working on further developing DNA storage technology. The earliest application they envision is storing genomes for personalized medicine, and they also anticipate that these stored genomes could undergo further analysis as better technology is developed in the future.
    “The idea is, why don’t we preserve the master record of life forever?” Banal says. “Ten years or 20 years from now, when technology has advanced way more than we could ever imagine today, we could learn more and more things. We’re still in the very infancy of understanding the genome and how it relates to disease.”
    The research was funded by the National Science Foundation.

  • Clinical decision support software can prevent 95% of medication errors in the operating room, study shows

    A new study by investigators from Massachusetts General Hospital, a founding member of the Mass General Brigham healthcare system, reveals that computer software that helps inform clinicians’ decisions about a patient’s care can prevent 95% of medication errors in the operating room. The findings are reported in Anesthesia & Analgesia, published by Wolters Kluwer.
    “Medication errors in the operating room have high potential for patient harm,” said senior author Karen C. Nanji, MD, MPH, a physician investigator in the Department of Anesthesia, Critical Care, and Pain Medicine at Massachusetts General Hospital and an associate professor in the Department of Anesthesia at Harvard Medical School. “Clinical decision support involves comprehensive software algorithms that provide evidence-based information to clinicians at the point-of-care to enhance decision-making and prevent errors.”
    “While clinical decision support improves both efficiency and quality of care in operating rooms, it is still in the early stages of adoption,” added first author Lynda Amici, DNP, CRNA, of Cooper University Hospital (who was at Massachusetts General Hospital at the time of this study).
    For the study, Nanji, Amici, and their colleagues obtained all safety reports involving medication errors documented by anesthesia clinicians for surgical procedures from August 2020 to August 2022 at Massachusetts General Hospital. Two independent reviewers classified each error by its timing and type, whether it was associated with patient harm and the severity of that harm, and whether it was preventable by clinical decision support algorithms.
    The reviewers assessed 127 safety reports involving 80 medication errors, and they found that 76 (95%) of the errors would have been prevented by clinical decision support. Certain error types, such as wrong medication and wrong dose, were more likely to be preventable by clinical decision support algorithms than other error types.
    “Our results support emerging guidelines from the Institute for Safe Medication Practices and the Anesthesia Patient Safety Foundation that recommend the use of clinical decision support to prevent medication errors in the operating room,” said Nanji. “Massachusetts General Hospital researchers have designed and built a comprehensive intraoperative clinical decision support software platform, called GuidedOR, that improves both quality of care and workflow efficiency. GuidedOR is currently implemented at our hospital and is being adopted at additional Mass General Brigham sites to make surgery and anesthesia safer for patients.”
    Nanji noted that future research should include large multi-center randomized controlled trials to more precisely measure the effect of clinical decision support on medication errors in the operating room.
    Authorship: Lynda D. Amici, DNP, CRNA; Maria van Pelt, PhD, CRNA, FAAN; Laura Mylott, RN, PhD, NEA-BC; Marin Langlieb, BA; and Karen C. Nanji, MD, MPH. Funding: Research support was provided by institutional and/or departmental sources from Massachusetts General Hospital’s Department of Anesthesia, Critical Care, and Pain Medicine. Dr. Nanji is additionally supported by a grant from the Doris Duke Foundation.

  • New technique improves AI ability to map 3D space with 2D cameras

    Researchers have developed a technique that allows artificial intelligence (AI) programs to better map three-dimensional spaces using two-dimensional images captured by multiple cameras. Because the technique works effectively with limited computational resources, it holds promise for improving the navigation of autonomous vehicles.
    “Most autonomous vehicles use powerful AI programs called vision transformers to take 2D images from multiple cameras and create a representation of the 3D space around the vehicle,” says Tianfu Wu, corresponding author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University. “However, while each of these AI programs takes a different approach, there is still substantial room for improvement.
    “Our technique, called Multi-View Attentive Contextualization (MvACon), is a plug-and-play supplement that can be used in conjunction with these existing vision transformer AIs to improve their ability to map 3D spaces,” Wu says. “The vision transformers aren’t getting any additional data from their cameras, they’re just able to make better use of the data.”
    MvACon effectively works by modifying an approach called Patch-to-Cluster attention (PaCa), which Wu and his collaborators released last year. PaCa allows transformer AIs to more efficiently and effectively identify objects in an image.
    “The key advance here is applying what we demonstrated with PaCa to the challenge of mapping 3D space using multiple cameras,” Wu says.
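    The core idea behind PaCa, as described in the paper series, is to let each patch token attend to a small set of learned cluster tokens instead of to every other patch, cutting attention cost from O(N²) to O(N·M) with M clusters, M much smaller than N. The numpy sketch below illustrates that shape of computation only; the random weights and names are placeholders, not the published architecture.

```python
import numpy as np

# Hedged numpy sketch of the patch-to-cluster attention idea behind PaCa:
# instead of each of N patch tokens attending to all N patches (O(N^2)),
# patches are softly assigned to M << N cluster tokens and attend to those
# (O(N*M)). Weights are random placeholders; a real model learns them.

rng = np.random.default_rng(0)
N, M, d = 64, 8, 16                   # patches, clusters, feature dimension

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

patches = rng.normal(size=(N, d))     # patch tokens from an image backbone

# 1) Soft cluster assignment: each patch distributes its mass over M clusters.
assign = softmax(patches @ rng.normal(size=(d, M)), axis=1)    # (N, M)
clusters = assign.T @ patches                                   # (M, d)

# 2) Attention from patches (queries) to clusters (keys/values).
q = patches @ rng.normal(size=(d, d))
k = clusters @ rng.normal(size=(d, d))
v = clusters @ rng.normal(size=(d, d))
attn = softmax(q @ k.T / np.sqrt(d), axis=1)                    # (N, M), not (N, N)
out = attn @ v                                                  # contextualized patches

print(attn.shape, out.shape)          # attention is N x M instead of N x N
```

    The N x M attention map is where the savings come from; MvACon's contribution, per the article, is carrying this kind of attentive contextualization into the multi-camera 3D setting.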
    To test the performance of MvACon, the researchers used it in conjunction with three leading vision transformers — BEVFormer, the BEVFormer DFA3D variant, and PETR. In each case, the vision transformers were collecting 2D images from six different cameras. In all three instances, MvACon significantly improved the performance of each vision transformer.
    “Performance was particularly improved when it came to locating objects, as well as the speed and orientation of those objects,” says Wu. “And the increase in computational demand of adding MvACon to the vision transformers was almost negligible.
    “Our next steps include testing MvACon against additional benchmark datasets, as well as testing it against actual video input from autonomous vehicles. If MvACon continues to outperform the existing vision transformers, we’re optimistic that it will be adopted for widespread use.”
    The paper, “Multi-View Attentive Contextualization for Multi-View 3D Object Detection,” will be presented June 20 at the IEEE/CVF Conference on Computer Vision and Pattern Recognition, being held in Seattle, Wash. First author of the paper is Xianpeng Liu, a recent Ph.D. graduate of NC State. The paper was co-authored by Ce Zheng and Chen Chen of the University of Central Florida; Ming Qian and Nan Xue of the Ant Group; and Zhebin Zhang and Chen Li of the OPPO U.S. Research Center.
    The work was done with support from the National Science Foundation, under grants 1909644, 2024688 and 2013451; the U.S. Army Research Office, under grants W911NF1810295 and W911NF2210010; and a research gift fund from Innopeak Technology, Inc.

  • Quantum data assimilation: A quantum leap in weather prediction

    Data assimilation is a mathematical discipline that integrates observed data and numerical models to improve the interpretation and prediction of dynamical systems. It is a crucial component of earth sciences, particularly in numerical weather prediction (NWP). Data assimilation techniques have been widely investigated in NWP in the last two decades to refine the initial conditions of weather models by combining model forecasts and observational data. Most NWP centers around the world employ variational and ensemble-variational data assimilation methods, which iteratively reduce cost functions via gradient-based optimization. However, these methods require significant computational resources.
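    The variational methods described above minimize a cost function that balances the background forecast against observations, iteratively, via its gradient. A minimal scalar sketch of that idea (toy numbers, scalar error variances B and R standing in for the huge matrices of a real NWP system):

```python
# Minimal sketch of the variational cost minimization described above:
# find the analysis x that balances a background forecast xb against an
# observation y by gradient descent on J(x). B and R are scalar background-
# and observation-error variances; real systems use enormous matrix versions.

def cost(x, xb, y, B, R):
    return 0.5 * (x - xb) ** 2 / B + 0.5 * (y - x) ** 2 / R

def grad(x, xb, y, B, R):
    return (x - xb) / B - (y - x) / R

xb, y, B, R = 10.0, 12.0, 1.0, 1.0   # toy numbers
x = xb
for _ in range(200):                  # iterative gradient-based optimization
    x -= 0.1 * grad(x, xb, y, B, R)

# With equal error variances the analysis lands midway between xb and y.
print(round(x, 3))   # 11.0
```

    Each of those gradient iterations is cheap here but enormously expensive at operational scale, which is the computational bottleneck the quantum approach below targets.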
    Recently, quantum computing has emerged as a new avenue of computational technology, offering a promising solution for overcoming the computational challenges of classical computers. Quantum computers can take advantage of quantum effects such as tunneling, superposition, and entanglement to significantly reduce computational demands. Quantum annealing machines, in particular, are powerful for solving optimization problems.
    In a recent study, Professor Shunji Kotsuki from the Institute for Advanced Academic Research/Center for Environmental Remote Sensing/Research Institute of Disaster Medicine, Chiba University, along with his colleagues Fumitoshi Kawasaki from the Graduate School of Science and Engineering and Masanao Ohashi from the Center for Environmental Remote Sensing, developed a novel data assimilation technique designed for quantum annealing machines. “Our study introduces a novel quantum annealing approach to accelerate data assimilation, which is the main computational bottleneck for numerical weather predictions. With this algorithm, we successfully solved data assimilation on quantum annealers for the first time,” explains Prof. Kotsuki. Their study has been published in the journal Nonlinear Processes in Geophysics on June 07, 2024.
    In the study, the researchers focused on the four-dimensional variational data assimilation (4DVAR) method, one of the most widely used data assimilation methods in NWP systems. However, since 4DVAR is designed for classical computers, it cannot be directly used on quantum hardware. Prof. Kotsuki clarifies, “Unlike the conventional 4DVAR, which requires a cost function and its gradient, quantum annealers require only the cost function. However, the cost function must be represented by binary variables (0 or 1). Therefore, we reformulated the 4DVAR cost function, a quadratic unconstrained optimization (QUO) problem, into a quadratic unconstrained binary optimization (QUBO) problem, which quantum annealers can solve.”
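    The QUO-to-QUBO reformulation Prof. Kotsuki describes can be illustrated on a toy quadratic cost: expand each continuous variable in a fixed-point binary series, and the cost becomes quadratic in the bits. The encoding below (one variable, four bits, brute-force search standing in for the annealer) is an arbitrary choice for the sketch, not the paper's formulation.

```python
import itertools

# Toy illustration of the QUO -> QUBO reformulation described above.
# Continuous variable x in [0, 2) is encoded with 4 bits b_k as
#   x = sum_k b_k * 2**(-k)        (fixed-point expansion, chosen for the sketch)
# so the QUO cost J(x) = (x - 1.3)**2 becomes quadratic in the bits: a QUBO.
# A quantum annealer samples low-energy bit strings; here we brute-force all 16.

weights = [2.0 ** -k for k in range(4)]            # fixed-point bit weights
target = 1.3

def cost(bits):
    x = sum(w * b for w, b in zip(weights, bits))
    return (x - target) ** 2                        # QUO cost, quadratic in bits

best = min(itertools.product([0, 1], repeat=4), key=cost)
x_best = sum(w * b for w, b in zip(weights, best))
print(best, x_best)                                 # best 4-bit approximation of 1.3

# Explicit QUBO matrix: since b_i**2 == b_i, J(b) = b^T Q b + target**2 with
# Q[i][j] = w_i * w_j for i != j and Q[i][i] = w_i**2 - 2 * target * w_i.
Q = [[weights[i] * weights[j] if i != j
      else weights[i] ** 2 - 2 * target * weights[i]
      for j in range(4)] for i in range(4)]

def qubo_energy(bits):
    return sum(Q[i][j] * bits[i] * bits[j] for i in range(4) for j in range(4))

# Same minimizer as the direct cost (the two differ only by a constant).
assert min(itertools.product([0, 1], repeat=4), key=qubo_energy) == best
```

    In the study's setting, the bit strings encode the increments of the 4DVAR control variables rather than a single scalar, but the structure is the same: once the cost is a QUBO, the annealer needs only the matrix Q, not a gradient.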
    The researchers applied this QUBO approach to a series of 4DVAR experiments using a 40-variable Lorenz-96 model, a dynamical system commonly used to test data assimilation methods. They conducted the experiments using the D-Wave Advantage physical quantum annealer (Phy-QA) and Fixstars Amplify’s simulated quantum annealer (Sim-QA). They also tested conventional quasi-Newton iterative approaches, based on the Broyden–Fletcher–Goldfarb–Shanno formula, on linear and nonlinear QUO problems and compared their performance with that of the quantum annealers.
    The results revealed that the quantum annealers produced analyses with accuracy comparable to the conventional quasi-Newton-based approaches, but in a fraction of the time. The D-Wave Phy-QA required less than 0.05 seconds for computation, much faster than the conventional approaches. However, it also exhibited slightly larger root mean square errors, which the researchers attributed to inherent stochastic quantum effects. To address this, they found that reading out multiple solutions from the quantum annealer improved stability and accuracy. They also noted that the scaling factor for quantum data assimilation, which is important for regulating the analysis accuracy, differed between the D-Wave Phy-QA and the Sim-QA, owing to the stochastic quantum effects associated with the former annealer.
    These findings signify the role of quantum computers in reducing the computational cost of data assimilation. “Our approach could revolutionize future NWP systems, enabling a deeper understanding and improved predictions with much less computational time. In addition, it has the potential to advance the practical applications of quantum annealers in solving complex optimization problems in earth science,” remarks Prof. Kotsuki.
    Overall, the proposed innovative method holds great promise for inspiring future applications of quantum computers in advancing data assimilation, potentially leading to more accurate weather predictions.

  • Swimming microrobots deliver cancer-fighting drugs to metastatic lung tumors in mice

    Engineers at the University of California San Diego have developed microscopic robots, known as microrobots, capable of swimming through the lungs to deliver cancer-fighting medication directly to metastatic tumors. This approach has shown promise in mice, where it inhibited the growth and spread of tumors that had metastasized to the lungs, thereby boosting survival rates compared to control treatments.
    The findings are detailed in a paper published on June 12 in Science Advances.
    The microrobots are an ingenious combination of biology and nanotechnology. They are a joint effort between the labs of Joseph Wang and Liangfang Zhang, both professors in the Aiiso Yufeng Li Family Department of Chemical and Nano Engineering at the UC San Diego Jacobs School of Engineering.
    To create the microrobots, researchers chemically attached drug-filled nanoparticles to the surface of green algae cells. The algae, which provide the microrobots with their movement, enable the nanoparticles to efficiently swim around in the lungs and deliver their therapeutic payload to tumors.
    The nanoparticles are made of tiny biodegradable polymer spheres, which are loaded with the chemotherapeutic drug doxorubicin and coated with red blood cell membranes. This coating serves a critical function: it protects the nanoparticles from the immune system, allowing them to stay in the lungs long enough to exert their anti-tumor effects. “It acts as a camouflage,” said study co-first author Zhengxing Li, who is a nanoengineering Ph.D. student in both Wang and Zhang’s research groups. “This coating makes the nanoparticle look like a red blood cell from the body, so it will not trigger an immune response.”
    This formulation of nanoparticle-carrying algae is safe, the researchers noted. The materials used to make the nanoparticles are biocompatible while the green algae employed, Chlamydomonas reinhardtii, are recognized as safe for use by the U.S. Food and Drug Administration.
    This study builds on prior work by Wang and Zhang’s teams using similar microrobots to treat deadly pneumonia in mice. “Those were the first microrobots to be safely tested in the lungs of live animals,” said Wang.

    In previous work, the microrobots fought the spread of pneumonia-causing bacteria using a different drug and cell membrane combination for the nanoparticles. By tweaking these components, the team has now tailored the microrobots to fight the spread of cancer cells in the lungs. “We demonstrate that this is a platform technology that can actively and efficiently deliver therapeutics throughout the entire lung tissue to combat different types of deadly diseases in the lungs,” said Zhang.
    In the current study, mice with melanoma that had metastasized to the lungs were treated with the microrobots, which were administered to the lungs through a small tube inserted into the windpipe. Treated mice experienced a median survival time of 37 days, an improvement over the 27-day median survival time observed in untreated mice, as well as mice that received either the drug alone or drug-filled nanoparticles without algae.
    “The active swimming motion of the microrobots significantly improved distribution of the drug to the deep lung tissue, while prolonging retention time,” said Li. “This enhanced distribution and prolonged retention time allowed us to reduce the required drug dosage, potentially reducing side effects while maintaining high survival efficacy.”
    Moving forward, the team is working on advancing this microrobot treatment to trials in larger animals, with the ultimate goal of human clinical trials.
    Paper: “Biohybrid microrobots locally and actively deliver drug-loaded nanoparticles to inhibit the progression of lung metastasis.” Co-authors of the study include Fangyu Zhang*, Zhongyuan Guo*, Zhengxing Li*, Hao Luan, Yiyan Yu, Audrey T. Zhu, Shichao Ding, Weiwei Gao and Ronnie H. Fang.
    *These authors contributed equally to this work.
    This work was supported by the Defense Threat Reduction Agency Joint Science and Technology Office for Chemical and Biological Defense (HDTRA1-21-1-0010) and the National Institutes of Health (R21AI175904).