More stories

  • AI tool predicts responses to cancer therapy using information from each cell of the tumor

    With more than 200 types of cancer and every cancer individually unique, ongoing efforts to develop precision oncology treatments remain daunting. Most of the focus has been on developing genetic sequencing assays or analyses to identify mutations in cancer driver genes, then trying to match treatments that may work against those mutations.
    But many, if not most, cancer patients do not benefit from these early targeted therapies. In a new study published on April 18, 2024, in the journal Nature Cancer, first author Sanju Sinha, Ph.D., assistant professor in the Cancer Molecular Therapeutics Program at Sanford Burnham Prebys, together with senior authors Eytan Ruppin, M.D., Ph.D., and Alejandro Schäffer, Ph.D., of the National Cancer Institute (NCI), part of the National Institutes of Health (NIH), and colleagues describe a first-of-its-kind computational pipeline that systematically predicts patient response to cancer drugs at single-cell resolution.
    Dubbed PERsonalized Single-Cell Expression-Based Planning for Treatments in Oncology, or PERCEPTION, the new artificial intelligence-based approach dives deeper into the utility of transcriptomics — the study of the transcriptome, the complete set of messenger RNA molecules expressed by a cell’s genes, which carry and convert DNA information into action.
    “A tumor is a complex and evolving beast. Using single-cell resolution can allow us to tackle both of these challenges,” says Sinha. “PERCEPTION allows for the use of rich information within single-cell omics to understand the clonal architecture of the tumor and monitor the emergence of resistance.” (In biology, omics refers to the sum of constituents within a cell.)
    Sinha says, “The ability to monitor the emergence of resistance is the most exciting part for me. It has the potential to allow us to adapt to the evolution of cancer cells and even modify our treatment strategy.”
    Sinha and colleagues used transfer learning — a branch of AI — to build PERCEPTION.
    “Limited single-cell data from clinics was our biggest challenge,” says Sinha. “An AI model needs large amounts of data to understand a disease, not unlike how ChatGPT needs huge amounts of text data scraped from the internet.”
    PERCEPTION uses published bulk-gene expression data from tumors to pre-train its models. Single-cell data from cell lines and patients, though limited, is then used to fine-tune them.

    PERCEPTION was successfully validated by predicting the response to monotherapy and combination treatment in three independent, recently published clinical trials for multiple myeloma, breast and lung cancer.
    In each case, PERCEPTION correctly stratified patients into responder and non-responder categories. In lung cancer, it even captured the development of drug resistance as the disease progressed, a capability with clear clinical potential.
    Sinha says that PERCEPTION is not ready for clinics, but the approach shows that single-cell information can be used to guide treatment. He hopes to encourage the adoption of this technology in clinics to generate more data, which can be used to further develop and refine the technology for clinical use.
    “The quality of the prediction rises with the quality and quantity of the data serving as its foundation,” says Sinha. “Our goal is to create a clinical tool that can predict the treatment response of individual cancer patients in a systematic, data-driven manner. We hope these findings spur more data and more such studies, sooner rather than later.”
    Additional authors on the study include Rahulsimham Vegesna, Sumit Mukherjee, Ashwin V. Kammula, Saugato Rahman Dhruba, Nishanth Ulhas Nair, Peng Jiang, Alejandro Schäffer, Kenneth D. Aldape and Eytan Ruppin, National Cancer Institute (NCI); Wei Wu, Lucas Kerr, Collin M. Blakely and Trever G. Bivona, University of California, San Francisco; Matthew G. Jones and Nir Yosef, University of California, Berkeley; Oleg Stroganov and Ivan Grishagin, Rancho BioSciences; Craig J. Thomas, National Institutes of Health; and Cyril H. Benes, Harvard University.
    This research was supported in part by the Intramural Research Program of the NIH; NCI; and NIH grants R01CA231300, R01CA204302, R01CA211052, R01CA169338 and U54CA224081.

  • How data provided by fitness trackers and smartphones can help people with MS

    Multiple sclerosis (MS) is an insidious disease. Patients suffer because their immune system is attacking their own nerve fibres, which inhibits the transmission of nerve signals. People with MS experience mild to severe impairment of their motor function and sensory perception in a variety of ways. These impairments disrupt their daily activities and reduce their overall quality of life. As individual as the symptoms and progression of the disease are, so too is the way it is managed. To monitor the disease progression and be able to recommend effective treatments, physicians ask their patients on a regular basis to describe their symptoms, such as fatigue.
    Going off memory
    Patients are thus faced with the tricky task of having to provide information about their state of health and what they have been capable of over the past few weeks and even months from memory. The data gathered in this way can be inaccurate and incomplete, because patients might misremember details or tailor their responses to social expectations. And since these responses have a significant impact on how the progression of the disease is recorded, the disease could end up being mismanaged.
    “Physicians would benefit from having access to reliable, frequent and long-term measurements of patients’ health parameters that give an accurate and comprehensive view of their state of health,” explains Shkurta Gashi. She is lead author of a new study and postdoc in the groups led by ETH Professors Christian Holz and Gunnar Rätsch at the Department of Computer Science as well as a fellow of the ETH AI Center.
    Together with colleagues from ETH Zurich, the University Hospital Zurich, and the University of Zurich, Gashi has now shown that fitness trackers and smartphones can provide this kind of reliable long-term data with a high temporal resolution. Their study was published in the journal npj Digital Medicine.
    Digital markers for MS
    The researchers recruited a group of volunteers — 55 with MS and a further 24 serving as control subjects — and provided each person with a fitness tracking armband. Over the course of two weeks, the researchers collected data from these wearable devices as well as from participants’ smartphones. They then performed statistical tests and a machine learning analysis of this data to identify reliable and clinically useful information.

    What proved particularly meaningful was the data on physical activity and heart rate, which was collected from participants’ wearable devices. The higher the participants’ disease severity and fatigue levels, the lower their physical activity and heart rate variability proved to be. Compared to the controls, MS patients took fewer steps per day, engaged in an overall lower level of physical activity and registered more consistent intervals between heartbeats.
    How often people used their smartphone also delivered important information about their disease severity and fatigue levels: the less often a study participant used their phone, the greater their level of disability and the more severe their level of fatigue. The researchers gained insights into motor function through a game-like smartphone test. Developed at ETH a few years ago, this test requires the user to tap the screen as quickly as possible to make a virtual person move as fast as possible. Monitoring how fast a person taps and how their tapping frequency changes over time allows the researchers to draw conclusions about their motor skills and physical fatigue.
    “Altogether, the combination of data from the fitness tracker and smartphone lets us distinguish between healthy participants and those with MS with a high degree of accuracy,” Gashi says. “Combining information related to several aspects of the disease, including physiological, behavioural, motor performance and sleep information, is crucial for more effective and accurate monitoring of the disease.”
    Reliable approach
    This new approach gives MS sufferers a straightforward way of collecting reliable and clinically useful long-term data as they go about their day-to-day lives. The researchers expect that this type of data can lead to better treatments and more effective disease management techniques: more comprehensive, precise and reliable data helps experts make better decisions and possibly even propose effective treatments sooner than before. What’s more, evaluating this patient data lets the experts verify the effectiveness of different treatments.
    The researchers have now made their data set available to other scientists. They also point out the need for a larger study and more data to develop reliable and generalizable models for automatic evaluation. In the future, such models could enable MS patients to experience a significant improvement in their lives thanks to data from fitness trackers and smartphones.

  • An ink for 3D-printing flexible devices without mechanical joints

    EPFL researchers are targeting the next generation of soft actuators and robots with an elastomer-based ink for 3D printing objects with locally changing mechanical properties, eliminating the need for cumbersome mechanical joints.
    For engineers working on soft robotics or wearable devices, keeping things light is a constant challenge: heavier materials require more energy to move around, and — in the case of wearables or prostheses — cause discomfort. Elastomers are synthetic polymers that can be manufactured with a range of mechanical properties, from stiff to stretchy, making them a popular material for such applications. But manufacturing elastomers that can be shaped into complex 3D structures that go from rigid to rubbery has been unfeasible until now.
    “Elastomers are usually cast so that their composition cannot be changed in all three dimensions over short length scales. To overcome this problem, we developed DNGEs: 3D-printable double network granular elastomers that can vary their mechanical properties to an unprecedented degree,” says Esther Amstad, head of the Soft Materials Laboratory in EPFL’s School of Engineering.
    Eva Baur, a PhD student in Amstad’s lab, used DNGEs to print a prototype ‘finger’, complete with rigid ‘bones’ surrounded by flexible ‘flesh’. The finger was printed to deform in a pre-defined way, demonstrating the technology’s potential to manufacture devices that are sufficiently supple to bend and stretch, while remaining firm enough to manipulate objects.
    With these advantages, the researchers believe that DNGEs could facilitate the design of soft actuators, sensors, and wearables free of heavy, bulky mechanical joints. The research has been published in the journal Advanced Materials.
    Two elastomeric networks; twice as versatile
    The key to the DNGEs’ versatility lies in engineering two elastomeric networks. First, elastomer microparticles are produced from oil-in-water emulsion drops. These microparticles are placed in a precursor solution, where they absorb elastomer compounds and swell up. The swollen microparticles are then used to make a 3D printable ink, which is loaded into a bioprinter to create a desired structure. The precursor is polymerized within the 3D-printed structure, creating a second elastomeric network that rigidifies the entire object.

    While the composition of the first network determines the structure’s stiffness, the second determines its fracture toughness, meaning that the two networks can be fine-tuned independently to achieve a combination of stiffness, toughness, and fatigue resistance. The use of elastomers over hydrogels — the material used in state-of-the-art approaches — has the added advantage of creating structures that are water-free, making them more stable over time. To top it off, DNGEs can be printed using commercially available 3D printers.
    “The beauty of our approach is that anyone with a standard bioprinter can use it,” Amstad emphasizes.
    One exciting potential application of DNGEs is in devices for motion-guided rehabilitation, where the ability to support movement in one direction while restricting it in another could be highly useful. Further development of DNGE technology could result in prosthetics, or even motion guides to assist surgeons. Sensing remote movements, for example in robot-assisted crop harvesting or underwater exploration, is another area of application.
    Amstad says that the Soft Materials Lab is already working on the next steps toward developing such applications by integrating active elements — such as responsive materials and electrical connections — into DNGE structures.

  • Researchers create new AI pipeline for identifying molecular interactions

    Understanding how proteins interact with each other is crucial for developing new treatments and understanding diseases. Thanks to computational advances, a team of researchers led by Assistant Professor of Chemistry Alberto Perez has developed a groundbreaking algorithm to identify these molecular interactions.
    Perez’s research team included two graduate students from UF, Arup Mondal and Bhumika Singh, and a handful of researchers from Rutgers University and Rensselaer Polytechnic Institute. The team published their findings in Angewandte Chemie, a leading chemistry journal based in Germany.
    Named the AF-CBA Pipeline, this innovative tool offers unparalleled accuracy and speed in pinpointing the strongest peptide binders to a specific protein. It does this by using AI to simulate molecular interactions, sorting through thousands of candidate molecules to identify the molecule that interacts best with the protein of interest.
    The AI-driven approach allows the pipeline to perform these actions in a fraction of the time it would take humans or traditional physics-based approaches to accomplish the same task.
    “Think of it like a grocery store,” Perez explained. “When you want to buy the best possible fruit, you have to compare sizes and aspects. There are too many fruits to try them all, of course, so you compare a few before making a selection. This AI method, however, can not only try them all, but can also reliably pick out the best one.”
    Typically, the proteins of interest are the ones that cause the most damage to our bodies when they misbehave. By finding what molecules interact with these problematic proteins, the pipeline opens avenues for targeted therapies to combat ailments such as inflammation, immune dysregulation, and cancer.
    “Knowing the structure of the strongest peptide binder in turn helps us in the rational designing of new drug therapeutics,” Perez said.
    The groundbreaking nature of the pipeline is enhanced by its foundation on pre-existing technology: a program called AlphaFold. Developed by Google DeepMind, AlphaFold uses deep learning to predict protein structures. This reliance on familiar technology will be a boon for the pipeline’s accessibility to researchers and will help ensure its future adoption.
    Moving forward, Perez and his team aim to expand their pipeline to gain further biological insights and inhibit disease agents. They have two viruses in their sights: murine leukemia virus and Kaposi’s sarcoma virus. Both viruses can cause serious health issues, especially tumors, and interact with proteins that are as yet unknown.
    “We want to design novel libraries of peptides,” Perez said. “AF-CBA will allow us to identify those designed peptides that bind stronger than the viral peptides.”

  • How 3D printers can give robots a soft touch

    Soft skin coverings and touch sensors have emerged as a promising feature for robots that are both safer and more intuitive for human interaction, but they are expensive and difficult to make. A recent study demonstrates that soft skin pads made from thermoplastic urethane, which double as sensors, can be efficiently manufactured using 3D printers.
    “Robotic hardware can involve large forces and torques, so it needs to be made quite safe if it’s going to either directly interact with humans or be used in human environments,” said project lead Joohyung Kim, a professor of electrical & computer engineering at the University of Illinois Urbana-Champaign. “It’s expected that soft skin will play an important role in this regard since it can be used for both mechanical safety compliance and tactile sensing.”
    As reported in the journal IEEE Transactions on Robotics, the 3D-printed pads function as both soft skin for a robotic arm and pressure-based mechanical sensors. The pads have airtight seals and connect to pressure sensors. Like a squeezed balloon, the pad deforms when it touches something, and the displaced air activates the pressure sensor.
    Kim explained, “Tactile robotic sensors usually contain very complicated arrays of electronics and are quite expensive, but we have shown that functional, durable alternatives can be made very cheaply. Moreover, since it’s just a question of reprogramming a 3D printer, the same technique can be easily customized to different robotic systems.”
    The researchers demonstrated that this functionality can be naturally used for safety: if the pads detect anything near a dangerous area such as a joint, the arm automatically stops. They can also be used for operational functionality with the robot interpreting touches and taps as instructions.
    Since 3D-printed parts are comparatively simple and inexpensive to manufacture, they can be easily adapted to new robotic systems and replaced. Kim noted that this feature is desirable in applications where cleaning and maintaining parts is expensive or infeasible.
    “Imagine you want to use soft-skinned robots to assist in a hospital setting,” he said. “They would need to be regularly sanitized, or the skin would need to be regularly replaced. Either way, there’s a huge cost. However, 3D printing is a very scalable process, so interchangeable parts can be inexpensively made and easily snapped on and off the robot body.”
    Tactile inputs like the kind provided by the new pads are a relatively unexplored facet of robotic sensing and control. Kim hopes that the ease of this new manufacturing technique will inspire more interest.
    “Right now, computer vision and language models are the two major ways that humans can interact with robotic systems, but there is a need for more data on physical interactions, or ‘force-level’ data,” he said. “From the robot’s point of view, this information is the most direct interaction with its environment, but there are very few users — mostly researchers — who think about this. Collecting this force-level data is a target task for me and my group.”

  • Artificial Intelligence beats doctors in accurately assessing eye problems

    The clinical knowledge and reasoning skills of GPT-4 are approaching the level of specialist eye doctors, a study led by the University of Cambridge has found.
    GPT-4 — a ‘large language model’ — was tested against doctors at different stages in their careers, including unspecialised junior doctors, and trainee and expert eye doctors. Each was presented with a series of 87 patient scenarios involving a specific eye problem, and asked to give a diagnosis or advise on treatment by selecting from four options.
    GPT-4 scored significantly better in the test than unspecialised junior doctors, who are comparable to general practitioners in their level of specialist eye knowledge.
    GPT-4 gained similar scores to trainee and expert eye doctors — although the top-performing doctors scored higher.
    The researchers say that large language models aren’t likely to replace healthcare professionals, but have the potential to improve healthcare as part of the clinical workflow.
    They say state-of-the-art large language models like GPT-4 could be useful for providing eye-related advice, diagnosis, and management suggestions in well-controlled contexts, like triaging patients, or where access to specialist healthcare professionals is limited.
    “We could realistically deploy AI in triaging patients with eye issues to decide which cases are emergencies that need to be seen by a specialist immediately, which can be seen by a GP, and which don’t need treatment,” said Dr Arun Thirunavukarasu, lead author of the study, which he carried out while a student at the University of Cambridge’s School of Clinical Medicine.

    He added: “The models could follow clear algorithms already in use, and we’ve found that GPT-4 is as good as expert clinicians at processing eye symptoms and signs to answer more complicated questions.
    “With further development, large language models could also advise GPs who are struggling to get prompt advice from eye doctors. People in the UK are waiting longer than ever for eye care.”
    Large volumes of clinical text are needed to help fine-tune and develop these models, and work is ongoing around the world to facilitate this.
    The researchers say that their study is superior to similar, previous studies because they compared the abilities of AI to practicing doctors, rather than to sets of examination results.
    “Doctors aren’t revising for exams for their whole career. We wanted to see how AI fared when pitted against the on-the-spot knowledge and abilities of practicing doctors, to provide a fair comparison,” said Thirunavukarasu, who is now an Academic Foundation Doctor at Oxford University Hospitals NHS Foundation Trust.
    He added: “We also need to characterise the capabilities and limitations of commercially available models, as patients may already be using them — rather than the internet — for advice.”
    The test included questions about a huge range of eye problems, including extreme light sensitivity, decreased vision, lesions, itchy and painful eyes, taken from a textbook used to test trainee eye doctors. This textbook is not freely available on the internet, making it unlikely that its content was included in GPT-4’s training datasets.

    The results are published today in the journal PLOS Digital Health.
    “Even taking the future use of AI into account, I think doctors will continue to be in charge of patient care. The most important thing is to empower patients to decide whether they want computer systems to be involved or not. That will be an individual decision for each patient to make,” said Thirunavukarasu.
    GPT-4 and GPT-3.5 — or ‘Generative Pre-trained Transformers’ — are trained on datasets containing hundreds of billions of words from articles, books, and other internet sources. These are two examples of large language models; others in wide use include Pathways Language Model 2 (PaLM 2) and Large Language Model Meta AI 2 (LLaMA 2).
    The study also tested GPT-3.5, PaLM 2, and LLaMA 2 with the same set of questions. GPT-4 gave more accurate responses than all of them.
    GPT-4 powers the online chatbot ChatGPT to provide bespoke responses to human queries. In recent months, ChatGPT has attracted significant attention in medicine for attaining passing level performance in medical school examinations, and providing more accurate and empathetic messages than human doctors in response to patient queries.
    The field of artificially intelligent large language models is moving very rapidly. Since the study was conducted, more advanced models have been released — which may be even closer to the level of expert eye doctors.

  • AI speeds up drug design for Parkinson’s by ten-fold

    Researchers have used artificial intelligence techniques to massively accelerate the search for Parkinson’s disease treatments.
    The researchers, from the University of Cambridge, designed and used an AI-based strategy to identify compounds that block the clumping, or aggregation, of alpha-synuclein, the protein that characterises Parkinson’s.
    The team used machine learning techniques to quickly screen a chemical library containing millions of entries, and identified five highly potent compounds for further investigation.
    Parkinson’s affects more than six million people worldwide, with that number projected to triple by 2040. No disease-modifying treatments for the condition are currently available. The process of screening large chemical libraries for drug candidates — which needs to happen well before potential treatments can be tested on patients — is enormously time-consuming and expensive, and often unsuccessful.
    Using machine learning, the researchers were able to speed up the initial screening process by ten-fold, and reduce the cost by a thousand-fold, which could mean that potential treatments for Parkinson’s reach patients much faster. The results are reported in the journal Nature Chemical Biology.
    Parkinson’s is the fastest-growing neurological condition worldwide. In the UK, one in 37 people alive today will be diagnosed with Parkinson’s in their lifetime. In addition to motor symptoms, Parkinson’s can also affect the gastrointestinal system, nervous system, sleeping patterns, mood and cognition, and can contribute to a reduced quality of life and significant disability.
    Proteins are responsible for important cell processes, but in people with Parkinson’s, proteins such as alpha-synuclein go rogue and cause the death of nerve cells. When these proteins misfold, they can form abnormal clusters called Lewy bodies, which build up within brain cells and stop them from functioning properly.

    “One route to search for potential treatments for Parkinson’s requires the identification of small molecules that can inhibit the aggregation of alpha-synuclein, which is a protein closely associated with the disease,” said Professor Michele Vendruscolo from the Yusuf Hamied Department of Chemistry, who led the research. “But this is an extremely time-consuming process — just identifying a lead candidate for further testing can take months or even years.”
    While clinical trials for Parkinson’s are currently underway, no disease-modifying drug has been approved, reflecting the inability to directly target the molecular species that cause the disease.
    This has been a major obstacle in Parkinson’s research, because of the lack of methods to identify the correct molecular targets and engage with them. This technological gap has severely hampered the development of effective treatments.
    The Cambridge team developed a machine learning method in which chemical libraries containing millions of compounds are screened to identify small molecules that bind to the amyloid aggregates and block their proliferation.
    A small number of top-ranking compounds were then tested experimentally to select the most potent inhibitors of aggregation. The information gained from these experimental assays was fed back into the machine learning model in an iterative manner, so that after a few iterations, highly potent compounds were identified.
    “Instead of screening experimentally, we screen computationally,” said Vendruscolo, who is co-Director of the Centre for Misfolding Diseases. “By using the knowledge we gained from the initial screening with our machine learning model, we were able to train the model to identify the specific regions on these small molecules responsible for binding, then we can re-screen and find more potent molecules.”
    Using this method, the Cambridge team developed compounds to target pockets on the surfaces of the aggregates, which are responsible for the exponential proliferation of the aggregates themselves. These compounds are hundreds of times more potent, and far cheaper to develop, than previously reported ones.
    “Machine learning is having a real impact on the drug discovery process — it’s speeding up the whole process of identifying the most promising candidates,” said Vendruscolo. “For us this means we can start work on multiple drug discovery programmes — instead of just one. So much is possible due to the massive reduction in both time and cost — it’s an exciting time.”
    The research was conducted in the Chemistry of Health Laboratory in Cambridge, which was established with the support of the UK Research Partnership Investment Fund (UKRPIF) to promote the translation of academic research into clinical programmes.

  • Novel robotic training program reduces physician errors placing central lines

    More than five million central lines are placed in patients who need prolonged drug delivery, such as those undergoing cancer treatments, in the United States every year, yet the common procedure can lead to a bevy of complications in almost a million of those cases. To help decrease the rate of infections, blood clots and other complications associated with placing a central line catheter, Penn State researchers developed an online curriculum coupled with a hands-on simulation training to provide trainee physicians with more practice.
    Deployed in 2022 at the Penn State College of Medicine, the researchers recently assessed how the training impacted the prevalence of central line complications by comparing error rates from 2022-23, when the training had been fully implemented, to two prior years, 2016-17 and 2017-18, from before implementing the training. They found that all complication types — mechanical issues, infections and blood clots — were significantly lower after the training was launched.
    They published their results in the Journal of Surgical Education. The researchers hold patents on the technology used in this work. In addition to working to improve the central line placement training, the team is also applying the framework to other common procedures with high complication rates, such as colonoscopies and laparoscopic surgeries.
    “Our approach is focused on reducing preventable errors — this paper is the first significant clinical evidence that we are moving the needle on the gap in clinical education and clinical practice,” said Scarlett Miller, professor of industrial engineering and of mechanical engineering at Penn State and principal investigator on the project. “If we ensure physicians going through residency training are proficient in a skill, like placing central lines, we can minimize the risk on human life.”
    Traditional training for placing a central line and other routine surgical procedures starts with a resident watching a more senior doctor complete the process. Then, the resident is expected to do the procedure themselves, and, finally, they teach someone else to do the procedure.
    “The problem with that approach is that there are very few checks in the process, and the resident only improves by working with patients — who are at risk of complications,” Miller said. “The simulation approach allows someone to try the procedure hundreds, thousands of times without putting anyone at risk.”
    The new approach — the result of interdisciplinary work between engineers and clinicians, Miller said — uses online- and simulation-based training to perform standardized ultrasound-guided internal jugular central venous catheterization (US-IJCVC), which is a central line placed into the internal jugular vein via the neck.

    Residents first complete online training, which includes pre- and post-tests to evaluate knowledge gained. They then take that knowledge and apply it in a skills lab, where they practice placing the central line on a novel dynamic haptic robotic trainer that can simulate various conditions and reactions. Residents can use ultrasound to image the line placement, like they would on a real person, on the robotic trainer, which offers automated feedback.
    “We started with 25 surgical residents at the Penn State Health Milton S. Hershey Medical Center, then expanded to all of the residents at Hershey and partnered with Cedars-Sinai Medical Center in Los Angeles to bring the training to their residents,” Miller said. “In total, we have trained about 700 physicians to date, and we train about 200 a year with our current funding.”
    It seems practice may get physicians closer to perfect, without the risk to human life, according to Miller. In this study, Miller and her team compared error rates from 2022, the first year the simulation training was fully deployed, to error rates from 2016 and 2017, when the training was not yet established. They did not use data from 2018-21, as the training was only partially implemented during that period and subject to startup adjustments and COVID-related challenges that could not be controlled for in a direct comparison. The researchers found that the reported error rate for mechanical complications — such as puncturing an artery or misplacing the catheter — rose from 10.4% in 2016 to 12.4% in 2017 but dropped to 7.3% in 2022. Error rates for infections followed the same pattern, rising from 6.6% in 2016 to 7.6% in 2017 and dropping to 4.1% in 2022. For blood clots, the error rate decreased steadily, from 12.3% in 2016 to 11.4% in 2017 to 8.1% in 2022.
    “We’re very motivated by the results to improve the system and hopefully expand it to other hospitals,” Miller said. “We’re reducing the error rates in a significant way, but we want more. We want zero errors.”
    Miller is also affiliated with the School of Engineering Design in the Penn State College of Engineering, the College of Information Sciences and Technology and the Department of Surgery in the Penn State College of Medicine. Paper co-authors include Jessica M. Gonzalez-Vargas, postdoctoral scholar in industrial engineering at Penn State; Elizabeth Sinz, associate medical director of the West Virginia University Critical Care and Trauma Institute; and Jason Moore, professor of mechanical engineering at Penn State.
    The U.S. National Science Foundation and the National Institutes of Health’s National Heart, Lung, and Blood Institute supported this work.