More stories

  • AI transformation of medicine: Why doctors are not prepared

    As artificial intelligence systems like ChatGPT find their way into everyday use, physicians will start to see these tools incorporated into their clinical practice to help them make important decisions about the diagnosis and treatment of common medical conditions. These tools, called clinical decision support (CDS) algorithms, can be enormously helpful in guiding health care providers as they determine, for example, which antibiotics to prescribe or whether to recommend a risky heart surgery.
    The success of these new technologies, however, depends largely on how physicians interpret and act upon a tool’s risk predictions, and that requires a unique set of skills that many currently lack, according to a new perspective article published in the New England Journal of Medicine by faculty at the University of Maryland School of Medicine (UMSOM).
    CDS algorithms, which make predictions under conditions of clinical uncertainty, can include everything from regression-derived risk calculators to sophisticated machine learning and artificial intelligence-based systems. They can be used to predict which patients are most likely to go into life-threatening sepsis from an uncontrolled infection or which therapy has the highest probability of preventing sudden death in an individual heart disease patient.
    “These new technologies have the potential to significantly impact patient care, but doctors need to first learn how machines think and work before they can incorporate algorithms into their medical practice,” said Daniel Morgan, MD, MS, Professor of Epidemiology & Public Health at UMSOM and co-author of the perspective.
    While some clinical decision support tools are already incorporated into electronic medical record systems, health care providers often find the current software cumbersome and difficult to use. “Doctors don’t need to be math or computer experts, but they do need to have a baseline understanding of what an algorithm does in terms of probability and risk adjustment, yet most have never been trained in those skills,” said Katherine Goodman, JD, PhD, Assistant Professor of Epidemiology & Public Health at UMSOM and co-author of the perspective.
    To address this gap, medical education and clinical training need to incorporate explicit coverage of probabilistic reasoning tailored specifically to CDS algorithms. Drs. Morgan, Goodman, and their co-author Adam Rodman, MD, MPH, at Beth Israel Deaconess Medical Center in Boston, proposed the following:
    • Improve Probabilistic Skills: Early in medical school, students should learn the fundamental aspects of probability and uncertainty and use visualization techniques to make thinking in terms of probability more intuitive. This training should include interpreting performance measures like sensitivity and specificity to better understand test and algorithm performance.
    • Incorporate Algorithmic Output into Decision Making: Physicians should be taught to critically evaluate and use CDS predictions in their clinical decision-making. This training involves understanding the context in which algorithms operate, recognizing limitations, and considering relevant patient factors that algorithms may have missed.
    • Practice Interpreting CDS Predictions in Applied Learning: Medical students and physicians can engage in practice-based learning by applying algorithms to individual patients and examining how different inputs affect predictions. They should also learn to communicate with patients about CDS-guided decision making.
    The University of Maryland, Baltimore (UMB), University of Maryland, College Park (UMCP) and University of Maryland Medical System (UMMS) recently launched plans for a new Institute for Health Computing (IHC). The UM-IHC will leverage recent advances in artificial intelligence, network medicine, and other computing methods to create a premier learning health care system that evaluates both de-identified and secure digitized medical health data to enhance disease diagnosis, prevention, and treatment. Dr. Goodman is beginning a position at the IHC, which will be dedicated to educating and training health care providers on the latest technologies. The Institute eventually plans to offer a certification in health data science, among other formal educational opportunities in the data sciences.
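    The first recommendation is, at bottom, about base rates: even an accurate algorithm produces mostly false alarms when the condition it flags is rare. A minimal sketch in Python, with illustrative numbers rather than figures from any real CDS tool, shows how sensitivity, specificity, and prevalence combine via Bayes’ rule:

        # How sensitivity, specificity, and prevalence combine (Bayes' rule).
        # The numbers below are illustrative, not from any specific CDS tool.

        def positive_predictive_value(sensitivity, specificity, prevalence):
            """P(disease | positive flag)."""
            true_pos = sensitivity * prevalence
            false_pos = (1 - specificity) * (1 - prevalence)
            return true_pos / (true_pos + false_pos)

        # A sepsis alert with 90% sensitivity and 90% specificity sounds strong,
        # but if only 2% of patients are septic, most alerts are false alarms.
        print(positive_predictive_value(0.90, 0.90, 0.02))  # ~0.16

    Reasoning of this kind is exactly the baseline understanding of probability and risk adjustment that the authors argue should be taught early.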
    “Probability and risk analysis is foundational to the practice of evidence-based medicine, so improving physicians’ probabilistic skills can provide advantages that extend beyond the use of CDS algorithms,” said UMSOM Dean Mark T. Gladwin, MD, Vice President for Medical Affairs, University of Maryland, Baltimore, and the John Z. and Akiko K. Bowers Distinguished Professor. “We’re entering a transformative era of medicine where new initiatives like our Institute for Health Computing will integrate vast troves of data into machine learning systems to personalize care for the individual patient.”

  • Extreme heat taxes the body in many ways. Here’s how

    Luis Melecio-Zambrano is the summer 2023 science writing intern at Science News. They are finishing their master’s degree in science communication from the University of California, Santa Cruz, where they have reported on issues of environmental justice and agriculture.

  • Modified virtual reality tech can measure brain activity

    Researchers have modified a commercial virtual reality headset, giving it the ability to measure brain activity and examine how we react to hints, stressors and other outside forces.
    The research team at The University of Texas at Austin created a noninvasive electroencephalogram (EEG) sensor and installed it in a Meta VR headset that can be worn comfortably for long periods. The EEG measures the brain’s electrical activity during immersive VR interactions.
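    As a rough illustration of what can be done with such recordings (this is generic EEG signal processing, not the team’s actual pipeline), analyses often reduce the raw voltage trace to power in standard frequency bands, such as the 8-12 Hz alpha band associated with relaxed wakefulness:

        # Generic EEG band-power sketch (illustrative; not the UT Austin
        # team's actual processing pipeline).
        import numpy as np

        fs = 256  # assumed sampling rate in Hz
        t = np.arange(0, 10, 1 / fs)
        # Synthetic one-electrode signal: a 10 Hz alpha rhythm plus noise.
        eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

        # Power spectrum via FFT, then mean power in the alpha band (8-12 Hz).
        freqs = np.fft.rfftfreq(eeg.size, 1 / fs)
        power = np.abs(np.fft.rfft(eeg)) ** 2 / eeg.size
        alpha = power[(freqs >= 8) & (freqs <= 12)].mean()
        print(f"mean alpha-band power: {alpha:.2f}")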
    The device could be used in many ways, from helping people with anxiety, to measuring the attention or mental stress of aviators using a flight simulator, to giving a human the chance to see through the eyes of a robot.
    “Virtual reality is so much more immersive than just doing something on a big screen,” said Nanshu Lu, a professor in the Cockrell School of Engineering’s Department of Aerospace Engineering and Engineering Mechanics who led the research. “It gives the user a more realistic experience, and our technology enables us to get better measurements of how the brain is reacting to that environment.”
    The research is published in Soft Science.
    The pairing of VR and EEG sensors has made its way into the commercial sphere already. However, the devices that exist today are costly, and the researchers say their electrodes are more comfortable for the user, extending the potential wearing time and opening up additional applications.
    The best EEG devices today consist of a cap covered in electrodes, but a cap does not fit well under a VR headset, and individual electrodes struggle to get a strong reading because hair blocks them from contacting the scalp. The most popular electrodes are rigid and comb-shaped, inserting through the hair to reach the skin, an uncomfortable experience for the user.

  • How good is that AI-penned radiology report?

    AI tools that quickly and accurately create detailed narrative reports of a patient’s CT scan or X-ray can greatly ease the workload of busy radiologists.
    Instead of merely identifying the presence or absence of abnormalities on an image, these AI reports convey complex diagnostic information, detailed descriptions, nuanced findings, and appropriate degrees of uncertainty. In short, they mirror how human radiologists describe what they see on a scan.
    Several AI models capable of generating detailed narrative reports have begun to appear on the scene. With them have come automated scoring systems that periodically assess these tools to help inform their development and augment their performance.
    So how well do the current systems gauge an AI model’s radiology performance?
    The answer is good but not great, according to a new study by researchers at Harvard Medical School published Aug. 3 in the journal Patterns.
    Ensuring that scoring systems are reliable is critical for AI tools to continue to improve and for clinicians to trust them, the researchers said, but the metrics tested in the study failed to reliably identify clinical errors in the AI reports, some of them significant. The finding, the researchers said, highlights an urgent need for improvement and the importance of designing high-fidelity scoring systems that faithfully and accurately monitor tool performance.
    The team tested a variety of automated scoring metrics on AI-generated narrative reports and also asked six human radiologists to read the same reports, providing an expert benchmark against which the automated scores could be judged.
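    The specific metrics evaluated in the study are not reproduced here, but many automated scorers for generated text rest on token overlap with a reference report. A minimal sketch in the spirit of such metrics shows how a clinically wrong report can still score highly:

        # Minimal token-overlap score, in the spirit of BLEU-style metrics
        # (not one of the specific metrics evaluated in the Patterns study).

        def overlap_score(candidate: str, reference: str) -> float:
            cand = candidate.lower().split()
            ref = reference.lower().split()
            return sum(1 for tok in cand if tok in ref) / len(cand)

        reference = "no evidence of pneumothorax in the left lung"
        candidate = "evidence of pneumothorax in the left lung"  # drops "no"

        # High overlap, opposite clinical meaning.
        print(overlap_score(candidate, reference))  # 1.0

    A single dropped negation reverses the diagnosis while barely moving the score, which is the kind of clinically significant miss the researchers found current metrics failing to catch.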

  • Deep learning for new protein design

    The key to understanding proteins — such as those that govern cancer, COVID-19, and other diseases — is quite simple. Identify their chemical structure and find which other proteins can bind to them. But there’s a catch.
    “The search space for proteins is enormous,” said Brian Coventry, a research scientist with the Institute for Protein Design, University of Washington and The Howard Hughes Medical Institute.
    A protein studied by his lab typically is made of 65 amino acids, and with 20 different amino acid choices at each position, there are 20 to the 65th power possible combinations, a number bigger than the estimated number of atoms in the universe.
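    The arithmetic is easy to check, assuming the common order-of-magnitude estimate of about 10^80 atoms in the observable universe:

        # 20 amino acid choices at each of 65 positions:
        sequences = 20 ** 65
        atoms_in_universe = 10 ** 80  # common order-of-magnitude estimate

        print(f"{sequences:.2e}")             # ~3.69e+84
        print(sequences > atoms_in_universe)  # True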
    Coventry is a co-author of a study published in May 2023 in the journal Nature Communications.
    In it, his team used deep learning methods to augment existing energy-based physical models in ‘de novo,’ or from-scratch, computational protein design, resulting in a 10-fold increase in lab-verified success rates for binding a designed protein to its target protein.
    “We showed that you can have a significantly improved pipeline by incorporating deep learning methods to evaluate the quality of the interfaces where hydrogen bonds form or where hydrophobic interactions occur,” said study co-author Nathaniel Bennett, a post-doctoral scholar at the Institute for Protein Design, University of Washington.
    “This is as opposed to trying to exactly enumerate all of these energies by themselves,” he added.
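    The published pipeline itself is not reproduced here, but the shape of the idea (score candidate interfaces with a learned model and carry forward only the designs it ranks highly, on top of the usual physics-based screens) can be sketched as follows, with every function name a hypothetical stand-in:

        # Sketch of a design pipeline that adds a learned interface-quality
        # filter on top of a physics-based screen. All function names are
        # hypothetical stand-ins, not the Institute for Protein Design's code.
        from typing import Callable, List

        def filter_designs(
            designs: List[str],
            physics_energy: Callable[[str], float],      # energy-based physical model
            dl_interface_score: Callable[[str], float],  # learned interface-quality model
            energy_cutoff: float = -20.0,
            score_cutoff: float = 0.9,
        ) -> List[str]:
            """Keep designs that pass both the physical and the learned screen."""
            passing = []
            for design in designs:
                if physics_energy(design) > energy_cutoff:
                    continue  # rejected by the traditional energy screen
                if dl_interface_score(design) < score_cutoff:
                    continue  # rejected by the deep learning screen
                passing.append(design)
            return passing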

  • Denial of service threats detected thanks to asymmetric behavior in network traffic

    Scientists have developed a better way to recognize a common internet attack, improving detection by 90 percent compared to current methods.
    The new technique developed by computer scientists at the Department of Energy’s Pacific Northwest National Laboratory works by keeping a watchful eye over ever-changing traffic patterns on the internet. The findings were presented on August 2 by PNNL scientist Omer Subasi at the IEEE International Conference on Cyber Security and Resilience, where the manuscript was recognized as the best research paper presented at the meeting.
    The scientists modified the playbook most commonly used to detect denial-of-service attacks, where perpetrators try to shut down a website by bombarding it with requests. Motives vary: Attackers might hold a website for ransom, or their aim might be to disrupt businesses or users.
    Many systems try to detect such attacks by relying on a raw number called a threshold. If the number of users trying to access a site rises above that number, an attack is considered likely, and defensive measures are triggered. But relying on a threshold can leave systems vulnerable.
    “A threshold just doesn’t offer much insight or information about what is really going on in your system,” said Subasi. “A simple threshold can easily miss actual attacks, with serious consequences, and the defender may not even be aware of what’s happening.”
    A threshold can also create false alarms that have serious consequences themselves. False positives can force defenders to take a site offline and bring legitimate traffic to a standstill — effectively doing what a real denial-of-service (DoS) attack aims to do.
    “It’s not enough to detect high-volume traffic. You need to understand that traffic, which is constantly evolving over time,” said Subasi. “Your network needs to be able to differentiate between an attack and a harmless event where traffic suddenly surges, like the Super Bowl. The behavior is almost identical.”
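    The paper’s exact method is not spelled out in this summary, so the sketch below contrasts a bare volume threshold with a hypothetical behavior-aware check on traffic structure; the entropy heuristic is an illustrative assumption, not necessarily PNNL’s algorithm:

        # Sketch: a bare volume threshold vs. a behavior-aware check.
        # The entropy heuristic is an illustrative assumption, not
        # necessarily the algorithm in the PNNL paper.
        from collections import Counter
        import math

        def entropy(addresses):
            """Shannon entropy (bits) of a list of IP addresses."""
            counts = Counter(addresses)
            total = len(addresses)
            return -sum(c / total * math.log2(c / total) for c in counts.values())

        def threshold_alarm(requests, limit=10_000):
            # Naive detector: volume alone. A Super Bowl surge and a DoS
            # flood look identical to this check.
            return len(requests) > limit

        def asymmetry_alarm(requests, baseline_ratio, tolerance=2.0):
            # Behavior-aware detector: flag when source addresses become far
            # more dispersed than destinations, relative to this link's
            # historical baseline, whatever the raw volume.
            sources = [src for src, dst in requests]
            dests = [dst for src, dst in requests]
            ratio = entropy(sources) / max(entropy(dests), 1e-9)
            return ratio > tolerance * baseline_ratio

    The point of the contrast is the one Subasi makes above: the volume check cannot tell a traffic surge from a flood, while a structural check at least sees how the traffic is distributed.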
    As principal investigator Kevin Barker said: “You don’t want to throttle the network yourself when there isn’t an attack underway.”

  • Current takes a surprising path in quantum material

    Cornell researchers used magnetic imaging to obtain the first direct visualization of how electrons flow in a special type of insulator. In doing so, they discovered that the transport current moves through the interior of the material rather than along its edges, as scientists had long assumed.
    The finding provides new insights into the electron behavior in so-called quantum anomalous Hall insulators and should help settle a decades-long debate about how current flows in more general quantum Hall insulators. These insights will inform the development of topological materials for next-generation quantum devices.
    The team’s paper, “Direct Visualization of Electronic Transport in a Quantum Anomalous Hall Insulator,” published Aug. 3 in Nature Materials. The lead author is Matt Ferguson, Ph.D. ’22, currently a postdoctoral researcher at the Max Planck Institute for Chemical Physics of Solids in Germany.
    The project, led by Katja Nowack, assistant professor of physics in the College of Arts and Sciences and the paper’s senior author, has its origins in what’s known as the quantum Hall effect. First discovered in 1980, this effect results when a magnetic field is applied to a specific material to trigger an unusual phenomenon: The interior of the bulk sample becomes an insulator while an electrical current moves in a single direction along the outer edge. The Hall resistance is quantized, or restricted, to a value defined by fundamental constants of nature, while the longitudinal resistance drops to zero.
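    Concretely, the quantized value is set by Planck’s constant and the electron charge. In standard notation, on the nth plateau:

        R_{xy} = \frac{h}{n e^{2}} \approx \frac{25{,}812.8\,\Omega}{n}, \qquad R_{xx} \to 0

    The combination h/e^2, about 25.8 kilohms, is the von Klitzing constant, which is why these plateaus are reproducible enough to serve as a resistance standard.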
    A quantum anomalous Hall insulator, first discovered in 2013, achieves the same effect by using a material that is magnetized. Quantization still occurs and longitudinal resistance vanishes, and the electrons speed along the edge without dissipating energy, somewhat like a superconductor.
    At least that is the popular conception.
    “The picture where the current flows along the edges can really nicely explain how you get that quantization. But it turns out, it’s not the only picture that can explain quantization,” Nowack said. “This edge picture has really been the dominant one since the spectacular rise of topological insulators starting in the early 2000s. The intricacies of the local voltages and local currents have largely been forgotten. In reality, these can be much more complicated than the edge picture suggests.”
    Only a handful of materials are known to be quantum anomalous Hall insulators. For their new work, Nowack’s group focused on chromium-doped bismuth antimony telluride — the same compound in which the quantum anomalous Hall effect was first observed a decade ago.

  • Social media algorithms exploit how humans learn from their peers

    In prehistoric societies, humans tended to learn from members of their ingroup or from more prestigious individuals, because such information was more likely to be reliable and to result in group success. With the advent of diverse and complex modern communities, however, and especially on social media, these biases become less effective. For example, a person we are connected to online is not necessarily trustworthy, and people can easily feign prestige on social media. In a review published in the journal Trends in Cognitive Sciences on August 3, a group of social scientists describe how the functions of social media algorithms are misaligned with human social instincts meant to foster cooperation, which can lead to large-scale polarization and misinformation.
    “Several user surveys, on both Twitter and Facebook, suggest most users are exhausted by the political content they see. A lot of users are unhappy, and there are a lot of reputational components that Twitter and Facebook must face when it comes to elections and the spread of misinformation,” says first author William Brady, a social psychologist in the Kellogg School of Management at Northwestern.
    “We wanted to put out a systematic review that’s trying to help understand how human psychology and algorithms interact in ways that can have these consequences,” says Brady. “One of the things that this review brings to the table is a social learning perspective. As social psychologists, we’re constantly studying how we can learn from others. This framework is fundamentally important if we want to understand how algorithms influence our social interactions.”
    Humans are biased to learn from others in a way that typically promotes cooperation and collective problem-solving, which is why they tend to learn more from individuals they perceive as a part of their ingroup and those they perceive to be prestigious. In addition, when learning biases were first evolving, morally and emotionally charged information was important to prioritize, as this information would be more likely to be relevant to enforcing group norms and ensuring collective survival.
    In contrast, algorithms usually select information that boosts user engagement in order to increase advertising revenue. This means algorithms amplify the very information humans are biased to learn from, and they can oversaturate social media feeds with what the researchers call Prestigious, Ingroup, Moral, and Emotional (PRIME) information, regardless of the content’s accuracy or its representativeness of a group’s opinions. As a result, extreme political content and controversial topics are more likely to be amplified, and if users are not exposed to outside opinions, they may come away with a false understanding of the majority opinion of different groups.
    “It’s not that the algorithm is designed to disrupt cooperation,” says Brady. “It’s just that its goals are different. And in practice, when you put those functions together, you end up with some of these potentially negative effects.”
    To address this problem, the research group first proposes that social media users need to be more aware of how algorithms work and why certain content shows up on their feed. Social media companies don’t typically disclose the full details of how their algorithms select for content, but one start might be offering explainers for why a user is being shown a particular post. For example, is it because the user’s friends are engaging with the content or because the content is generally popular? Outside of social media companies, the research team is developing their own interventions to teach people how to be more conscious consumers of social media.
    In addition, the researchers propose that social media companies could take steps to change their algorithms, so they are more effective at fostering community. Instead of solely favoring PRIME information, algorithms could set a limit on how much PRIME information they amplify and prioritize presenting users with a diverse set of content. These changes could continue to amplify engaging information while preventing more polarizing or politically extreme content from becoming overrepresented in feeds.
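    A minimal sketch of what such a cap could look like inside a ranking step follows; the prime_score function, the threshold, and the 30 percent budget are hypothetical illustrations, not any platform’s actual system:

        # Sketch of a ranking step that caps PRIME-heavy content.
        # `prime_score`, the threshold, and the cap are hypothetical
        # illustrations, not any platform's actual algorithm.
        from typing import Callable, List

        def rank_feed(
            posts: List[str],
            engagement: Callable[[str], float],
            prime_score: Callable[[str], float],
            prime_cap: float = 0.3,        # at most 30% of the feed is PRIME-heavy
            prime_threshold: float = 0.8,  # above this, a post counts as PRIME-heavy
        ) -> List[str]:
            ranked = sorted(posts, key=engagement, reverse=True)
            budget = int(prime_cap * len(ranked))
            feed = []
            for post in ranked:
                if prime_score(post) >= prime_threshold:
                    if budget == 0:
                        continue  # cap reached: skip further PRIME-heavy posts
                    budget -= 1
                feed.append(post)
            return feed

    Everything still ranks by engagement; the cap only limits how much of the feed the most morally and emotionally charged material can occupy.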
    “As researchers we understand the tension that companies face when it comes to making these changes and their bottom line. That’s why we actually think these changes could theoretically still maintain engagement while also disallowing this overrepresentation of PRIME information,” says Brady. “User experience might actually improve by doing some of this.”