More stories

  • New study offers a better way to make AI fairer for everyone

    In a new paper, researchers from Carnegie Mellon University and Stevens Institute of Technology present a new way of thinking about the fairness of AI decisions and their impact on the people they affect.
    They draw on a well-established tradition known as social welfare optimization, which aims to make decisions fairer by focusing on the overall benefits and harms to individuals. This method can be used to evaluate the industry-standard assessment tools for AI fairness, which look at approval rates across protected groups.
    “In assessing fairness, the AI community tries to ensure equitable treatment for groups that differ in economic level, race, ethnic background, gender, and other categories,” explained John Hooker, professor of operations research at the Tepper School of Business at Carnegie Mellon, who coauthored the study and presented the paper at the International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research (CPAIOR) on May 29 in Uppsala, Sweden. The paper received the Best Paper Award.
    Imagine a situation where an AI system decides who gets approved for a mortgage or who gets a job interview. Traditional fairness methods might only ensure that the same percentage of people from different groups get approved.
    But what if being denied a mortgage has a much bigger negative impact on someone from a disadvantaged group than on someone from an advantaged group? By employing a social welfare optimization method, AI systems can make decisions that lead to better outcomes for everyone, especially for those in disadvantaged groups.
    The study focuses on “alpha fairness,” a method of striking a balance between fairness and overall benefit. A single parameter lets alpha fairness be tuned to weight fairness more or less heavily against efficiency, depending on the situation.
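    For readers who want the tradeoff made concrete, the sketch below evaluates the standard alpha-fairness (isoelastic) social welfare function from the economics literature on two hypothetical outcome vectors. The function, the utility values, and the policies are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def alpha_fair_welfare(utilities, alpha):
    """Alpha-fairness social welfare of a vector of individual utilities.

    alpha = 0 reduces to the utilitarian sum (pure efficiency); larger alpha
    weights the worst-off individuals more heavily, approaching max-min
    fairness as alpha grows. alpha = 1 is the proportional-fairness limit.
    """
    u = np.asarray(utilities, dtype=float)
    if np.isclose(alpha, 1.0):
        return float(np.sum(np.log(u)))
    return float(np.sum(u ** (1.0 - alpha)) / (1.0 - alpha))

# Two hypothetical lending policies: utilities for an advantaged and a
# disadvantaged applicant. Policy B gives up a little total utility to
# improve the worst-off person's outcome.
policy_a = [9.0, 1.0]
policy_b = [7.0, 2.5]

for alpha in (0.0, 1.0, 2.0):
    wa = alpha_fair_welfare(policy_a, alpha)
    wb = alpha_fair_welfare(policy_b, alpha)
    print(f"alpha={alpha}: A={wa:.2f}  B={wb:.2f}  preferred={'A' if wa > wb else 'B'}")
```

    At alpha = 0 the criterion simply sums utilities and prefers policy A; as alpha increases, it increasingly favors policy B, which raises the worst-off person's outcome.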
    Hooker and his co-authors show how social welfare optimization can be used to compare the group-fairness assessments currently used in AI, clarifying which tools are best suited to which contexts. It also ties these assessment tools to the larger body of fairness-efficiency standards used in economics and engineering.

    Derek Leben, associate teaching professor of business ethics at the Tepper School, and Violet Chen, assistant professor at Stevens Institute of Technology, who received her Ph.D. from the Tepper School, coauthored the study.
    “Common group fairness criteria in AI typically compare statistical metrics of AI-supported decisions across different groups, ignoring the actual benefits or harms of being selected or rejected,” said Chen. “We propose a direct, welfare-centric approach to assess group fairness by optimizing decision social welfare. Our findings offer new perspectives on selecting and justifying group fairness criteria.”
    “Our findings suggest that social welfare optimization can shed light on the intensely discussed question of how to achieve group fairness in AI,” Leben said.
    The study is important for both AI system developers and policymakers. Developers can create more equitable and effective AI models by adopting a broader approach to fairness and understanding the limitations of fairness measures. It also highlights the importance of considering social justice in AI development, ensuring that technology promotes equity across diverse groups in society.
    The paper is published in CPAIOR 2024 Proceedings.

  • Quantum entanglement measures Earth's rotation

    A team of researchers led by Philip Walther at the University of Vienna carried out a pioneering experiment where they measured the effect of the rotation of Earth on quantum entangled photons. The work, just published in Science Advances, represents a significant achievement that pushes the boundaries of rotation sensitivity in entanglement-based sensors, potentially setting the stage for further exploration at the intersection between quantum mechanics and general relativity.
    Optical Sagnac interferometers are the most sensitive devices for measuring rotation. They have been pivotal to our understanding of fundamental physics since the early years of the last century, helping to establish Einstein’s special theory of relativity. Today, their unparalleled precision makes them the ultimate tool for measuring rotational speeds, limited only by the boundaries of classical physics.
    Interferometers employing quantum entanglement have the potential to break those bounds. If two or more particles are entangled, only the overall state is known, while the state of each individual particle remains undetermined until measurement. This can be used to obtain more information per measurement than would be possible without entanglement. However, the promised quantum leap in sensitivity has been hindered by the extremely delicate nature of entanglement. Here is where the Vienna experiment made the difference. The researchers built a giant optical-fiber Sagnac interferometer and kept the noise low and stable for several hours. This enabled the detection of enough high-quality entangled photon pairs to outperform the rotation precision of previous quantum optical Sagnac interferometers by a factor of a thousand.
    In a Sagnac interferometer, two particles travelling in opposite directions around a rotating closed path reach the starting point at different times. With two entangled particles, it becomes spooky: they behave like a single particle testing both directions simultaneously while accumulating twice the time delay compared to the scenario where no entanglement is present. This unique property is known as super-resolution. In the actual experiment, two entangled photons propagated inside a 2-kilometer-long optical fiber wound onto a huge coil, realizing an interferometer with an effective area of more than 700 square meters.
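    For a sense of scale, the textbook Sagnac phase formula, delta-phi = 8πAΩ/(λc), gives the back-of-envelope estimate sketched below. The wavelength and the orientation of the coil relative to Earth's rotation axis are assumptions of this sketch, not figures reported by the team.

```python
import math

# Back-of-envelope Sagnac phase estimate for the quoted effective area.
A = 700.0                 # effective enclosed area, m^2 (from the article)
omega_earth = 7.292e-5    # Earth's rotation rate, rad/s
wavelength = 1550e-9      # assumed telecom wavelength, m (not from the article)
c = 2.998e8               # speed of light, m/s
projection = 1.0          # assumed: coil's area normal parallel to Earth's rotation axis

delta_phi = 8 * math.pi * A * omega_earth * projection / (wavelength * c)
print(f"single-photon Sagnac phase: {delta_phi * 1e3:.2f} mrad")
print(f"entangled two-photon (super-resolved) phase: {2 * delta_phi * 1e3:.2f} mrad")
```

    Under these assumptions the entangled pair accumulates twice the single-photon phase, which is the super-resolution described above; the experimental challenge is keeping the entanglement intact long enough to read out a phase of a few milliradians.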
    A significant hurdle the researchers faced was isolating and extracting Earth’s steady rotation signal. “The core of the matter,” explains lead author Raffaele Silvestri, “lies in establishing a reference point for our measurement, where light remains unaffected by Earth’s rotational effect. Given our inability to halt Earth from spinning, we devised a workaround: splitting the optical fiber into two equal-length coils and connecting them via an optical switch.” By toggling the switch on and off, the researchers could effectively cancel the rotation signal at will, which also allowed them to extend the stability of their large apparatus. “We have basically tricked the light into thinking it’s in a non-rotating universe,” says Silvestri.
    The experiment, which was conducted as part of the research network TURIS hosted by the University of Vienna and the Austrian Academy of Sciences, has successfully observed the effect of the rotation of Earth on a maximally entangled two-photon state. This confirms the interaction between rotating reference systems and quantum entanglement, as described in Einstein’s special theory of relativity and quantum mechanics, with a thousand-fold precision improvement compared to previous experiments. “That represents a significant milestone since, a century after the first observation of Earth’s rotation with light, the entanglement of individual quanta of light has finally entered the same sensitivity regimes,” says Haocun Yu, who worked on this experiment as a Marie-Curie Postdoctoral Fellow. “I believe our result and methodology will lay the groundwork for further improvements in the rotation sensitivity of entanglement-based sensors. This could open the way for future experiments testing the behavior of quantum entanglement through curved spacetime,” adds Philip Walther.

  • Researchers use large language models to help robots navigate

    Someday, you may want your home robot to carry a load of dirty clothes downstairs and deposit them in the washing machine in the far-left corner of the basement. The robot will need to combine your instructions with its visual observations to determine the steps it should take to complete this task.
    For an AI agent, this is easier said than done. Current approaches often rely on multiple hand-crafted machine-learning models to tackle different parts of the task, which takes a great deal of human effort and expertise to build. Methods that use visual representations to directly make navigation decisions also demand massive amounts of visual data for training, which are often hard to come by.
    To overcome these challenges, researchers from MIT and the MIT-IBM Watson AI Lab devised a navigation method that converts visual representations into pieces of language, which are then fed into one large language model that achieves all parts of the multistep navigation task.
    Rather than encoding visual features from images of a robot’s surroundings as visual representations, which is computationally intensive, their method creates text captions that describe the robot’s point-of-view. A large language model uses the captions to predict the actions a robot should take to fulfill a user’s language-based instructions.
    Because their method utilizes purely language-based representations, they can use a large language model to efficiently generate a huge amount of synthetic training data.
    While this approach does not outperform techniques that use visual features, it performs well in situations that lack enough visual data for training. The researchers found that combining their language-based inputs with visual signals leads to better navigation performance.
    “By purely using language as the perceptual representation, ours is a more straightforward approach. Since all the inputs can be encoded as language, we can generate a human-understandable trajectory,” says Bowen Pan, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this approach.

    Pan’s co-authors include his advisor, Aude Oliva, director of strategic industry engagement at the MIT Schwarzman College of Computing, MIT director of the MIT-IBM Watson AI Lab, and a senior research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL); Philip Isola, an associate professor of EECS and a member of CSAIL; senior author Yoon Kim, an assistant professor of EECS and a member of CSAIL; and others at the MIT-IBM Watson AI Lab and Dartmouth College. The research will be presented at the Conference of the North American Chapter of the Association for Computational Linguistics.
    Solving a vision problem with language
    Since large language models are the most powerful machine-learning models available, the researchers sought to incorporate them into the complex task known as vision-and-language navigation, Pan says.
    But such models take text-based inputs and can’t process visual data from a robot’s camera. So, the team needed to find a way to use language instead.
    Their technique utilizes a simple captioning model to obtain text descriptions of a robot’s visual observations. These captions are combined with language-based instructions and fed into a large language model, which decides what navigation step the robot should take next.
    The large language model outputs a caption of the scene the robot should see after completing that step. This is used to update the trajectory history so the robot can keep track of where it has been.

    The model repeats these processes to generate a trajectory that guides the robot to its goal, one step at a time.
    To streamline the process, the researchers designed templates so observation information is presented to the model in a standard form — as a series of choices the robot can make based on its surroundings.
    For instance, a caption might say “to your 30-degree left is a door with a potted plant beside it, to your back is a small office with a desk and a computer,” etc. The model chooses whether the robot should move toward the door or the office.
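    A minimal sketch of this caption-and-prompt loop appears below. The captioner stub, the mock LLM reply, and the prompt template are placeholders invented for illustration, not the models or prompts used in the MIT system.

```python
# Illustrative sketch of the language-only navigation loop described above.

def caption_view(observation) -> str:
    # Placeholder: in the real system, an off-the-shelf captioning model
    # turns the robot's camera view into a text description.
    return observation["caption"]

def query_llm(prompt: str) -> str:
    # Placeholder: any instruction-following large language model.
    # This mock heads for the door when it sees the potted plant, then stops.
    return "move toward the door" if "potted plant" in prompt else "stop"

def navigate(instruction: str, observations, act, max_steps: int = 20):
    history = []                                 # language-only trajectory history
    for observation in observations[:max_steps]:
        caption = caption_view(observation)      # e.g. "to your 30-degree left is a door ..."
        prompt = (
            f"Instruction: {instruction}\n"
            f"Trajectory so far: {' | '.join(history) or 'none'}\n"
            f"Current view: {caption}\n"
            "Choose the robot's next action, or answer 'stop':"
        )
        action = query_llm(prompt)
        if action == "stop":
            break
        act(action)
        # The actual method also has the LLM predict a caption of the scene
        # expected after the step; here we simply record the step taken.
        history.append(f"{caption} -> {action}")
    return history

steps = navigate(
    "carry the laundry to the washing machine in the basement",
    observations=[
        {"caption": "to your 30-degree left is a door with a potted plant beside it"},
        {"caption": "to your back is a small office with a desk and a computer"},
    ],
    act=lambda a: print("executing:", a),
)
print(steps)
```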
    “One of the biggest challenges was figuring out how to encode this kind of information into language in a proper way to make the agent understand what the task is and how it should respond,” Pan says.
    Advantages of language
    When they tested this approach, the researchers found that while it could not outperform vision-based techniques, it offered several advantages.
    First, because text requires fewer computational resources to synthesize than complex image data, their method can be used to rapidly generate synthetic training data. In one test, they generated 10,000 synthetic trajectories based on 10 real-world, visual trajectories.
    The technique can also bridge the gap that can prevent an agent trained with a simulated environment from performing well in the real world. This gap often occurs because computer-generated images can appear quite different from real-world scenes due to elements like lighting or color. But language that describes a synthetic versus a real image would be much harder to tell apart, Pan says.
    Also, the representations their model uses are easier for a human to understand because they are written in natural language.
    “If the agent fails to reach its goal, we can more easily determine where it failed and why it failed. Maybe the history information is not clear enough or the observation ignores some important details,” Pan says.
    In addition, their method could be applied more easily to varied tasks and environments because it uses only one type of input. As long as data can be encoded as language, they can use the same model without making any modifications.
    But one disadvantage is that their method naturally loses some information that would be captured by vision-based models, such as depth information.
    However, the researchers were surprised to see that combining language-based representations with vision-based methods improves an agent’s ability to navigate.
    “Maybe this means that language can capture some higher-level information that cannot be captured with pure vision features,” he says.
    This is one area the researchers want to continue exploring. They also want to develop a navigation-oriented captioner that could boost the method’s performance. In addition, they want to probe the ability of large language models to exhibit spatial awareness and see how this could aid language-based navigation.
    This research is funded, in part, by the MIT-IBM Watson AI Lab.

  • Self-assembling and disassembling swarm molecular robots via DNA molecular controller

    Researchers from Tohoku University and Kyoto University have successfully developed a DNA-based molecular controller that autonomously directs the assembly and disassembly of molecular robots. This pioneering technology marks a significant step towards advanced autonomous molecular systems with potential applications in medicine and nanotechnology.
    “Our newly developed molecular controller, composed of artificially designed DNA molecules and enzymes, coexists with molecular robots and controls them by outputting specific DNA molecules,” points out Shin-ichiro M. Nomura, an associate professor at Tohoku University’s Graduate School of Engineering and co-author of the study. “This allows the molecular robots to self-assemble and disassemble automatically, without the need for external manipulation.”
    Such autonomous operation is a crucial advancement, as it enables the molecular robots to perform tasks in environments where external signals cannot reach.
    In addition to Nomura, the research team included Ibuki Kawamata (an associate professor at Kyoto University’s Graduate School of Science), Kohei Nishiyama (a graduate student at Johannes Gutenberg University Mainz), and Akira Kakugo (a professor at Kyoto University’s Graduate School of Science).
    Research on molecular robots, which are designed to aid in disease treatment and diagnosis by functioning both inside and outside the body, is gaining significant attention. Previous research by Kakugo and colleagues had developed swarm-type molecular robots that move individually. These robots could be assembled and disassembled as a group through external manipulation. Thanks to the newly constructed molecular controller, however, the robots can now self-assemble and disassemble according to a programmed sequence.
    The molecular controller initiates the process by outputting a specific DNA signal equivalent to the “assemble” command. The microtubules in the same solution, modified with DNA and propelled by kinesin molecular motors, receive the DNA signal, align their movement direction, and automatically assemble into a bundled structure. Subsequently, the controller outputs a “disassemble” signal, causing the microtubule bundles to disassemble automatically. This dynamic change was achieved through precise control by the molecular circuit, which functions like a highly sophisticated signal processor. Moreover, the molecular controller coexists with molecular robots, eliminating the need for external manipulation.
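    As a loose abstraction, the programmed assemble/disassemble sequence can be pictured as a tiny state machine. The sketch below is purely illustrative; the real controller is a DNA/enzyme reaction network acting on kinesin-driven, DNA-modified microtubules, not software.

```python
# Toy abstraction of the programmed signal sequence described above.

class MicrotubuleSwarm:
    def __init__(self):
        self.state = "dispersed"

    def receive(self, dna_signal: str):
        if dna_signal == "assemble" and self.state == "dispersed":
            self.state = "bundled"       # DNA-modified microtubules align and bundle
        elif dna_signal == "disassemble" and self.state == "bundled":
            self.state = "dispersed"     # the bundles break apart again

controller_program = ["assemble", "disassemble"]    # the programmed sequence
swarm = MicrotubuleSwarm()
for signal in controller_program:
    swarm.receive(signal)
    print(f"signal {signal!r} -> swarm is {swarm.state}")
```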
    Advancing this technology is expected to contribute to the development of more complex and advanced autonomous molecular systems. As a result, molecular robots might perform tasks that cannot be accomplished alone by assembling according to commands and then dispersing to explore targets. Additionally, this research expanded the activity conditions of molecular robots by integrating different molecular groups, such as the DNA circuit system and the motor protein operating system.
    “By developing the molecular controller and combining it with increasingly sophisticated and precise DNA circuits, molecular information amplification devices, and biomolecular design technologies, we expect swarm molecular robots to process a more diverse range of biomolecular information automatically,” adds Nomura. “This advancement may lead to the realization of innovative technologies in nanotechnology and the medical field, such as nanomachines for in-situ molecular recognition and diagnosis or smart drug delivery systems.”

  • AI can help doctors make better decisions and save lives

    Deploying and evaluating a machine learning intervention to improve clinical care and patient outcomes is a key step in moving clinical deterioration models from byte to bedside, according to a June 13 editorial in Critical Care Medicine that comments on a Mount Sinai study published in the same issue. The main study found that hospitalized patients were 43 percent more likely to have their care escalated and significantly less likely to die if their care team received AI-generated alerts signaling adverse changes in their health.
    “We wanted to see if quick alerts made by AI and machine learning, trained on many different types of patient data, could help reduce both how often patients need intensive care and their chances of dying in the hospital,” says lead study author Matthew A. Levin, MD, Professor of Anesthesiology, Perioperative and Pain Medicine, and Genetics and Genomic Sciences, at Icahn Mount Sinai, and Director of Clinical Data Science at The Mount Sinai Hospital. “Traditionally, we have relied on older manual methods such as the Modified Early Warning Score (MEWS) to predict clinical deterioration. However, our study shows automated machine learning algorithm scores that trigger evaluation by the provider can outperform these earlier methods in accurately predicting this decline. Importantly, it allows for earlier intervention, which could save more lives.”
    The non-randomized, prospective study looked at 2,740 adult patients who were admitted to four medical-surgical units at The Mount Sinai Hospital in New York. The patients were split into two groups: one that received real-time alerts based on the predicted likelihood of deterioration, sent directly to their nurses and physicians or a “rapid response team” of intensive care physicians, and another group where alerts were created but not sent. In the units where the alerts were suppressed, patients who met standard deterioration criteria received urgent interventions from the rapid response team.
    Additional findings in the intervention group demonstrated that patients were more likely to receive medications to support the heart and circulation, indicating that doctors were taking early action, and were less likely to die within 30 days.
    “Our research shows that real-time alerts using machine learning can substantially improve patient outcomes,” says senior study author David L. Reich, MD, President of The Mount Sinai Hospital and Mount Sinai Queens, the Horace W. Goldsmith Professor of Anesthesiology, and Professor of Artificial Intelligence and Human Health at Icahn Mount Sinai. “These models are accurate and timely aids to clinical decision-making that help us bring the right team to the right patient at the right time. We think of these as ‘augmented intelligence’ tools that speed in-person clinical evaluations by our physicians and nurses and prompt the treatments that keep our patients safer. These are key steps toward the goal of becoming a learning health system.”
    The study was terminated early due to the COVID-19 pandemic. The algorithm has been deployed on all stepdown units within The Mount Sinai Hospital, using a simplified workflow. A stepdown unit is a specialized area in the hospital where patients who are stable but still require close monitoring and care are placed. It’s a step between the intensive care unit (ICU) and a general hospital area, ensuring that patients receive the right level of attention as they recover.
    A team of intensive care physicians visits the 15 patients with the highest prediction scores every day and makes treatment recommendations to the doctors and nurses caring for the patient. As the algorithm is continually retrained on larger numbers of patients over time, the assessments by the intensive care physicians serve as the gold standard of correctness, and the algorithm becomes more accurate through reinforcement learning.
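    A minimal sketch of that simplified daily workflow, assuming a hypothetical census of patients with model scores, might look like the following; the field names and scores are invented for illustration.

```python
# Rank inpatients by predicted deterioration score and surface the top k
# for review by the intensive care team (illustrative data only).

def daily_review_list(patients, k=15):
    """patients: iterable of dicts like {"mrn": ..., "score": ...}."""
    return sorted(patients, key=lambda p: p["score"], reverse=True)[:k]

census = [
    {"mrn": "A101", "score": 0.82},
    {"mrn": "B202", "score": 0.35},
    {"mrn": "C303", "score": 0.91},
]
for patient in daily_review_list(census, k=2):
    print(patient["mrn"], patient["score"])
```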
    In addition to this clinical deterioration algorithm, the researchers have developed and deployed 15 additional AI-based clinical decision support tools throughout the Mount Sinai Health System.
    The Mount Sinai paper is titled “Real-Time Machine Learning Alerts to Prevent Escalation of Care: A Nonrandomized Clustered Pragmatic Clinical Trial.” The remaining authors of the paper, all with Icahn Mount Sinai except where indicated, are Arash Kia, MD, MSc; Prem Timsina, PhD; Fu-yuan Cheng, MS; Kim-Anh-Nhi Nguyen, MS; Roopa Kohli-Seth, MD; Hung-Mo Lin, ScD (Yale University); Yuxia Ouyang, PhD; and Robert Freeman, RN, MSN, NE-BC.

  • Making ferromagnets ready for ultra-fast communication and computation technology

    An international team led by researchers at the University of California, Riverside, has made a significant breakthrough in how to enable and exploit ultra-fast spin behavior in ferromagnets. The research, published in Physical Review Letters and highlighted as an editors’ suggestion, paves the way for ultra-high frequency applications.
    Today’s smartphones and computers operate at gigahertz frequencies, and scientists are working to make them even faster. The new research has found a way to achieve terahertz frequencies using conventional ferromagnets, which could lead to next-generation communication and computation technologies that operate a thousand times faster.
    Ferromagnets are materials where electron spins align in the same direction, but these spins also oscillate around this direction, creating “spin waves.” These spin waves are crucial for emerging computer technologies, playing a key role in processing information and signals.
    “When spins oscillate, they experience friction due to interactions with electrons and the crystal lattice of the ferromagnet,” said Igor Barsukov, an associate professor of physics and astronomy, who led the study. “Interestingly, these interactions also cause spins to acquire inertia, leading to an additional type of spin oscillation called nutation.”
    Barsukov explained that nutation occurs at ultra-high frequencies, making it highly desirable for future computer and communication technologies. Recently, physicists’ experimental confirmation of nutational oscillations excited the magnetism research community, he said.
    “Modern spintronic applications manipulate spins using spin currents injected into the magnet,” said Rodolfo Rodriguez, the first author of the paper, a former graduate student in the Barsukov Group, and now a scientist at HRL Labs, LLC.
    Barsukov and his team discovered that injecting a spin current with the “wrong” sign can excite nutational auto-oscillations.

    “These self-sustained oscillations hold great promise for next-generation computation and communication technologies,” said coauthor Allison Tossounian, until recently an undergraduate student in the Barsukov Group.
    According to Barsukov, spin inertia introduces a second time-derivative in the equation of motion, making some phenomena counterintuitive.
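    For context, nutation is usually modeled with an inertial extension of the Landau-Lifshitz-Gilbert equation, shown below in a commonly used form; the paper's exact model may differ.

```latex
% Inertial Landau-Lifshitz-Gilbert equation (a commonly used form).
% The last term, with the second time derivative of the magnetization
% direction m, is what produces nutation.
\[
  \dot{\mathbf{m}}
    = -\gamma\, \mathbf{m} \times \mathbf{H}_{\mathrm{eff}}
      + \alpha\, \mathbf{m} \times \dot{\mathbf{m}}
      + \eta\, \mathbf{m} \times \ddot{\mathbf{m}},
\]
% gamma: gyromagnetic ratio; alpha: Gilbert damping; eta: inertial
% relaxation time, which sets the (terahertz-scale) nutation frequency.
```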
    “We managed to harmonize spin-current-driven dynamics and spin inertia,” he said. “We also found an isomorphism, a parallel, between the spin dynamics in ferromagnets and ferrimagnets, which could accelerate technological innovation by exploiting synergies between these fields.”
    In ferrimagnets, two antiparallel spin lattices usually carry unequal amounts of spin. Materials with antiparallel spin lattices have recently received increased interest as candidates for ultrafast applications, Barsukov said.
    “But many technological challenges remain,” he said. “Our understanding of spin currents and materials engineering for ferromagnets has significantly advanced over the past few decades. Coupled with the recent confirmation of nutation, we saw an opportunity for ferromagnets to become excellent candidates for ultra-high frequency applications. Our study prepares the stage for concerted efforts to explore optimal materials and design efficient architectures to enable terahertz devices.”
    The title of the paper is “Spin inertia and auto-oscillations in ferromagnets.”
    The study was supported by the National Science Foundation.

  • Scientists preserve DNA in an amber-like polymer

    In the movie “Jurassic Park,” scientists extracted DNA that had been preserved in amber for millions of years, and used it to create a population of long-extinct dinosaurs.
    Inspired partly by that film, MIT researchers have developed a glassy, amber-like polymer that can be used for long-term storage of DNA, whether entire human genomes or digital files such as photos.
    Most current methods for storing DNA require freezing temperatures, so they consume a great deal of energy and are not feasible in many parts of the world. In contrast, the new amber-like polymer can store DNA at room temperature while protecting the molecules from damage caused by heat or water.
    The researchers showed that they could use this polymer to store DNA sequences encoding the theme music from Jurassic Park, as well as an entire human genome. They also demonstrated that the DNA can be easily removed from the polymer without damaging it.
    “Freezing DNA is the number one way to preserve it, but it’s very expensive, and it’s not scalable,” says James Banal, a former MIT postdoc. “I think our new preservation method is going to be a technology that may drive the future of storing digital information on DNA.”
    Banal and Jeremiah Johnson, the A. Thomas Geurtin Professor of Chemistry at MIT, are the senior authors of the study, published in the Journal of the American Chemical Society. Former MIT postdoc Elizabeth Prince and MIT postdoc Ho Fung Cheng are the lead authors of the paper.
    Capturing DNA
    DNA, a very stable molecule, is well-suited for storing massive amounts of information, including digital data. Digital storage systems encode text, photos, and other kinds of information as a series of 0s and 1s. This same information can be encoded in DNA using the four nucleotides that make up the genetic code: A, T, G, and C. For example, G and C could be used to represent 0 while A and T represent 1.
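    A toy version of that mapping, using the G/C-for-0 and A/T-for-1 rule mentioned above, is sketched below; real DNA data storage codecs add error correction and sequence constraints that this example ignores.

```python
import random

def bits_to_dna(bits: str) -> str:
    # 0 -> G or C, 1 -> A or T (the example rule quoted above)
    return "".join(random.choice("GC") if b == "0" else random.choice("AT") for b in bits)

def dna_to_bits(seq: str) -> str:
    return "".join("0" if base in "GC" else "1" for base in seq)

message = "01001000 01101001".replace(" ", "")   # the bits for the ASCII text "Hi"
strand = bits_to_dna(message)
assert dna_to_bits(strand) == message
print(strand)
```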

    DNA offers a way to store this digital information at very high density: In theory, a coffee mug full of DNA could store all of the world’s data. DNA is also very stable and relatively easy to synthesize and sequence.
    In 2021, Banal and his postdoc advisor, Mark Bathe, an MIT professor of biological engineering, developed a way to store DNA in particles of silica, which could be labeled with tags that revealed the particles’ contents. That work led to a spinout called Cache DNA.
    One downside to that storage system is that it takes several days to embed DNA into the silica particles. Furthermore, removing the DNA from the particles requires hydrofluoric acid, which can be hazardous to workers handling the DNA.
    To come up with alternative storage materials, Banal began working with Johnson and members of his lab. Their idea was to use a type of polymer known as a degradable thermoset, which consists of polymers that form a solid when heated. The material also includes cleavable links that can be easily broken, allowing the polymer to be degraded in a controlled way.
    “With these deconstructable thermosets, depending on what cleavable bonds we put into them, we can choose how we want to degrade them,” Johnson says.
    For this project, the researchers decided to make their thermoset polymer from styrene and a cross-linker, which together form an amber-like thermoset called cross-linked polystyrene. This thermoset is also very hydrophobic, so it can prevent moisture from getting in and damaging the DNA. To make the thermoset degradable, the styrene monomers and cross-linkers are copolymerized with monomers called thionolactones. These links can be broken by treating them with a molecule called cysteamine.

    Because styrene is so hydrophobic, the researchers had to come up with a way to entice DNA — a hydrophilic, negatively charged molecule — into the styrene.
    To do that, they identified a combination of three monomers that they could turn into polymers that dissolve DNA by helping it interact with styrene. Each of the monomers has different features that cooperate to get the DNA out of water and into the styrene. There, the DNA forms spherical complexes, with charged DNA in the center and hydrophobic groups forming an outer layer that interacts with styrene. When heated, this solution becomes a solid glass-like block, embedded with DNA complexes.
    The researchers dubbed their method T-REX (Thermoset-REinforced Xeropreservation). The process of embedding DNA into the polymer network takes a few hours, but that could become shorter with further optimization, the researchers say.
    To release the DNA, the researchers first add cysteamine, which cleaves the bonds holding the polystyrene thermoset together, breaking it into smaller pieces. Then, a detergent called SDS can be added to remove the DNA from polystyrene without damaging it.
    Storing information
    Using these polymers, the researchers showed that they could encapsulate DNA of varying length, from tens of nucleotides up to fragments of more than 50,000 base pairs, including an entire human genome. They were able to store DNA encoding the Emancipation Proclamation and the MIT logo, in addition to the theme music from “Jurassic Park.”
    After storing the DNA and then removing it, the researchers sequenced it and found that no errors had been introduced, which is a critical feature of any digital data storage system.
    The researchers also showed that the thermoset polymer can protect DNA from temperatures up to 75 degrees Celsius (167 degrees Fahrenheit). They are now working on ways to streamline the process of making the polymers and forming them into capsules for long-term storage.
    Cache DNA, a company started by Banal and Bathe, with Johnson as a member of the scientific advisory board, is now working on further developing DNA storage technology. The earliest application they envision is storing genomes for personalized medicine, and they also anticipate that these stored genomes could undergo further analysis as better technology is developed in the future.
    “The idea is, why don’t we preserve the master record of life forever?” Banal says. “Ten years or 20 years from now, when technology has advanced way more than we could ever imagine today, we could learn more and more things. We’re still in the very infancy of understanding the genome and how it relates to disease.”
    The research was funded by the National Science Foundation.

  • Clinical decision support software can prevent 95% of medication errors in the operating room, study shows

    A new study by investigators from Massachusetts General Hospital, a founding member of the Mass General Brigham healthcare system, reveals that computer software that helps inform clinicians’ decisions about a patient’s care can prevent 95% of medication errors in the operating room. The findings are reported in Anesthesia & Analgesia, published by Wolters Kluwer.
    “Medication errors in the operating room have high potential for patient harm,” said senior author Karen C. Nanji, MD, MPH, a physician investigator in the Department of Anesthesia, Critical Care, and Pain Medicine at Massachusetts General Hospital and an associate professor in the Department of Anesthesia at Harvard Medical School. “Clinical decision support involves comprehensive software algorithms that provide evidence-based information to clinicians at the point-of-care to enhance decision-making and prevent errors.”
    “While clinical decision support improves both efficiency and quality of care in operating rooms, it is still in the early stages of adoption,” added first author Lynda Amici, DNP, CRNA, of Cooper University Hospital (who was at Massachusetts General Hospital at the time of this study).
    For the study, Nanji, Amici, and their colleagues obtained all safety reports involving medication errors documented by anesthesia clinicians for surgical procedures from August 2020 to August 2022 at Massachusetts General Hospital. Two independent reviewers classified each error by its timing and type, whether it was associated with patient harm and the severity of that harm, and whether it was preventable by clinical decision support algorithms.
    The reviewers assessed 127 safety reports involving 80 medication errors, and they found that 76 (95%) of the errors would have been prevented by clinical decision support. Certain error types, such as wrong medication and wrong dose, were more likely to be preventable by clinical decision support algorithms than other error types.
    “Our results support emerging guidelines from the Institute for Safe Medication Practices and the Anesthesia Patient Safety Foundation that recommend the use of clinical decision support to prevent medication errors in the operating room,” said Nanji. “Massachusetts General Hospital researchers have designed and built a comprehensive intraoperative clinical decision support software platform, called GuidedOR, that improves both quality of care and workflow efficiency. GuidedOR is currently implemented at our hospital and is being adopted at additional Mass General Brigham sites to make surgery and anesthesia safer for patients.”
    Nanji noted that future research should include large multi-center randomized controlled trials to more precisely measure the effect of clinical decision support on medication errors in the operating room.
    Authorship: Lynda D. Amici DNP, CRNA; Maria van Pelt, PhD, CRNA, FAAN; Laura Mylott, RN, PhD, NEA-BC; Marin Langlieb, BA; and Karen C. Nanji, MD, MPH. Funding: Research support was provided from institutional and/or departmental sources from Massachusetts General Hospital’s Department of Anesthesia, Critical Care, and Pain Medicine. Dr. Nanji is additionally supported by a grant from the Doris Duke Foundation.