More stories

  • Artificial intelligence tool to improve heart failure care

    UVA Health researchers have developed a powerful new risk assessment tool for predicting outcomes in heart failure patients. The researchers have made the tool publicly available for free to clinicians.
    The new tool improves on existing risk assessment tools for heart failure by harnessing the power of machine learning (ML) and artificial intelligence (AI) to determine patient-specific risks of developing unfavorable outcomes with heart failure.
    “Heart failure is a progressive condition that affects not only quality of life but quantity as well. All heart failure patients are not the same. Each patient is on a spectrum along the continuum of risk of suffering adverse outcomes,” said researcher Sula Mazimba, MD, a heart failure expert. “Identifying the degree of risk for each patient promises to help clinicians tailor therapies to improve outcomes.”
    About Heart Failure
    Heart failure occurs when the heart is unable to pump enough blood for the body’s needs. This can lead to fatigue, weakness, swollen legs and feet and, ultimately, death. Heart failure is a progressive condition, so it is extremely important for clinicians to be able to identify patients at risk of adverse outcomes.
    Further, heart failure is a growing problem. More than 6 million Americans already have heart failure, and that number is expected to increase to more than 8 million by 2030. The UVA researchers developed their new model, called CARNA, to improve care for these patients. (Finding new ways to improve care for patients across Virginia and beyond is a key component of UVA Health’s first-ever 10-year strategic plan.)
    The researchers developed their model using anonymized data drawn from thousands of patients enrolled in heart failure clinical trials previously funded by the National Institutes of Health’s National Heart, Lung, and Blood Institute. Putting the model to the test, they found it outperformed existing predictors for determining how a broad spectrum of patients would fare in areas such as the need for heart surgery or transplant, the risk of rehospitalization and the risk of death.

    The researchers attribute the model’s success to the use of ML/AI and the inclusion of “hemodynamic” clinical data, which describe how blood circulates through the heart, lungs and the rest of the body.
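    The article doesn't detail CARNA's internals, but the general recipe it describes, a machine-learning classifier trained on hemodynamic measurements that can cope with missing values, can be sketched in a few lines. The feature names, synthetic data and model choice below are illustrative assumptions, not the published pipeline:

    ```python
    # Illustrative sketch only; not the published CARNA pipeline.
    # Feature names and the outcome label are hypothetical stand-ins for
    # the kind of hemodynamic variables the article describes.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import HistGradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "cardiac_output": rng.normal(4.5, 1.0, n),        # L/min
        "pulm_wedge_pressure": rng.normal(18, 6, n),      # mmHg
        "right_atrial_pressure": rng.normal(8, 4, n),     # mmHg
    })
    # Synthetic adverse-outcome label loosely tied to the features.
    risk = 0.04 * df["pulm_wedge_pressure"] - 0.3 * df["cardiac_output"]
    df["adverse_outcome"] = (risk + rng.normal(0, 1, n) > 0).astype(int)
    # Simulate missing measurements; this estimator handles NaNs natively,
    # echoing the article's point about deciding amid missing factors.
    df.loc[rng.random(n) < 0.2, "right_atrial_pressure"] = np.nan

    X, y = df.drop(columns="adverse_outcome"), df["adverse_outcome"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = HistGradientBoostingClassifier().fit(X_tr, y_tr)
    print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
    ```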
    “This model presents a breakthrough because it ingests complex sets of data and can make decisions even among missing and conflicting factors,” said researcher Josephine Lamp of the University of Virginia School of Engineering’s Department of Computer Science. “It is really exciting because the model intelligently presents and summarizes risk factors, reducing decision burden so clinicians can quickly make treatment decisions.”
    By using the model, doctors will be better equipped to personalize care to individual patients, helping them live longer, healthier lives, the researchers hope.
    “The collaborative research environment at the University of Virginia made this work possible by bringing together experts in heart failure, computer science, data science and statistics,” said researcher Kenneth Bilchick, MD, a cardiologist at UVA Health. “Multidisciplinary biomedical research that integrates talented computer scientists like Josephine Lamp with experts in clinical medicine will be critical to helping our patients benefit from AI in the coming years and decades.”
    Findings Published
    The researchers have made their new tool available online for free at https://github.com/jozieLamp/CARNA.
    In addition, they have published the results of their evaluation of CARNA in the American Heart Journal. The research team consisted of Lamp, Yuxin Wu, Steven Lamp, Prince Afriyie, Nicholas Ashur, Bilchick, Khadijah Breathett, Younghoon Kwon, Song Li, Nishaki Mehta, Edward Rojas Pena, Lu Feng and Mazimba. The researchers have no financial interest in the work.
    The project was based on one of the winning submissions to the National Heart, Lung, and Blood Institute’s Big Data Analysis Challenge: Creating New Paradigms for Heart Failure Research. The work was supported by a National Science Foundation Graduate Research Fellowship, grant 842490, and NHLBI grants R56HL159216, K01HL142848 and L30HL148881.
    To keep up with the latest medical research news from UVA, subscribe to the Making of Medicine blog.

  • An easy pill to swallow — new 3D printing research paves way for personalised medication

    A new technique for 3D printing medication has enabled the printing of multiple drugs in a single tablet, paving the way for personalised pills that can deliver timed doses.
    Researchers from the University of Nottingham’s Centre for Additive Manufacturing, working alongside the School of Pharmacy, have fabricated personalised medicine using Multi-Material InkJet 3D Printing (MM-IJ3DP). The research has been published in Materials Today Advances.
    The team have developed a cutting-edge method that enables the fabrication of customised pharmaceutical tablets with tailored drug release profiles, ensuring more precise and effective treatment options for patients.
    Using MM-IJ3DP, tablets can be printed that release drugs at a controlled rate determined by the tablet’s design. This is made possible by a novel ink formulation based on molecules that are sensitive to ultraviolet light. When printed, these molecules form a water-soluble structure.
    The drug release rate is controlled by the unique interior structure of the tablet, allowing for timing the dosage release. This method can print multiple drugs in a single tablet, allowing for complex medication regimens to be simplified into a single dose.
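    As a toy illustration of timed release (not the paper's kinetics, which are set by the printed geometry and UV-cured chemistry), each drug compartment can be modelled as a first-order release process with its own lag; the rate constants and lag times below are invented:

    ```python
    # Toy first-order release model for a multi-compartment tablet.
    # Illustrative only: real release profiles in MM-IJ3DP are governed
    # by the printed interior structure, not these made-up constants.
    import numpy as np

    def release_fraction(t, k, lag):
        """Fraction of a compartment's drug released by time t (hours)."""
        t_eff = np.clip(t - lag, 0, None)
        return 1 - np.exp(-k * t_eff)

    t = np.linspace(0, 12, 7)                      # hours
    drug_a = release_fraction(t, k=0.8, lag=0.0)   # fast, immediate
    drug_b = release_fraction(t, k=0.5, lag=4.0)   # delayed by structure
    for ti, a, b in zip(t, drug_a, drug_b):
        print(f"t={ti:4.1f} h   drug A: {a:4.0%}   drug B: {b:4.0%}")
    ```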
    Dr Yinfeng He, Assistant Professor in the Faculty of Engineering’s Centre for Additive Manufacturing, led the research. He said: “This is an exciting step forwards in the development of personalised medication. This breakthrough not only highlights the potential of 3D printing in revolutionizing drug delivery but also opens up new avenues for the development of next-generation personalized medicines.”
    Professor Ricky Wildman added: “While promising, the technology faces challenges, including the need for more formulations that support a wider range of materials. The ongoing research aims to refine these aspects, enhancing the feasibility of MM-IJ3DP for widespread application.”
    This technology will be particularly beneficial in creating medication that needs to release drugs at specific times, making it ideal for treating diseases where timing and dosage accuracy are crucial. The ability to print 56 pills in a single batch demonstrates the scalability of this technology, providing strong potential for the production of personalised medicines.
    Professor Felicity Rose of the University of Nottingham’s School of Pharmacy, a co-author on the research, said: “The future of prescribed medication lies in a personalised approach, and we know that up to 50% of people in the UK alone don’t take their medicines correctly, and this has an impact on poorer health outcomes, with conditions not being controlled or properly treated. A single pill approach would simplify taking multiple medications at different times, and this research is an exciting step towards that.”

  • Century of statistical ecology reviewed

    Crunching numbers isn’t exactly how Neil Gilbert, a postdoctoral researcher at Michigan State University, envisioned a career in ecology.
    “I think it’s a little funny that I’m doing this statistical ecology work because I was always OK at math, but never particularly enjoyed it,” he explained. “As an undergrad, I thought, I’ll be an ecologist — that means that I can be outside, looking at birds, that sort of thing.”
    “As it turns out,” he chuckled, “ecology is a very quantitative discipline.”
    Now, working in the Zipkin Quantitative Ecology lab, Gilbert is the lead author on a new article in a special collection of the journal Ecology that reviews the past century of statistical ecology.
    Statistical ecology, or the study of ecological systems using mathematical equations, probability and empirical data, has grown over the last century. As increasingly large datasets and complex questions took center stage in ecological research, new tools and approaches were needed to properly address them.
    To better understand how statistical ecology changed over the last century, Gilbert and his fellow authors examined a selection of 36 highly cited papers on statistical ecology — all published in Ecology since its inception in 1920.
    The team’s paper examines work on statistical models across a range of ecological scales, from individuals to populations, communities, ecosystems and beyond. The team also reviewed publications providing practical guidance on applying models. Gilbert noted that because “many practicing ecologists lack extensive quantitative training,” such publications are key to shaping studies.

    Ecology is an advantageous place for such papers because it is one of “the first internationally important journals in the field,” said lab leader Elise Zipkin, a Red Cedar Distinguished Associate Professor in the Department of Integrative Biology. “It has played an outsized role in publishing important work. It has a reputation of publishing some of the most influential papers on the development and application of analytical techniques from the very beginning of modern ecological research.”
    The team found a persistent evolution of models and concepts in the field, especially over the past few decades, driven by refinements in techniques and exponential increases in computational power.
    “Statistical ecology has exploded in the last 20 to 30 years because of advances in both data availability and the continued improvement of high-performance computing clusters,” Gilbert explained.
    Included among the 36 reviewed papers were a landmark 1945 study by Lee R. Dice on predicting the co-occurrence of species in space — Ecology’s most highly cited paper of all time — and an influential 2002 paper led by Darryl MacKenzie on occupancy models. Ecologists use these models to identify the range and distribution of species in an environment.
    MacKenzie’s work on species detection and sampling “played an outsized role in the study of species distributions,” says Zipkin. MacKenzie’s paper, which has been cited more than 5,400 times, spawned various software packages that are now widely used by ecologists, she explained.
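    For readers unfamiliar with occupancy models, the core idea of the MacKenzie-style single-season model is to separate whether a species is present at a site (occupancy probability, psi) from whether it is seen on any given survey (detection probability, p). Here is a minimal sketch with simulated data and made-up parameter values, not the 2002 paper's code:

    ```python
    # Single-season occupancy model: psi = probability a site is occupied,
    # p = probability of detecting the species on one survey given presence.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    n_sites, n_surveys = 200, 4
    true_psi, true_p = 0.6, 0.4
    z = rng.random(n_sites) < true_psi          # latent occupancy state
    y = rng.binomial(n_surveys, true_p * z)     # detections per site

    def neg_log_lik(theta):
        psi, p = 1 / (1 + np.exp(-theta))       # logit scale -> (0, 1)
        lik = psi * p**y * (1 - p)**(n_surveys - y)
        lik = lik + (1 - psi) * (y == 0)        # never-detected sites
        return -np.sum(np.log(lik))

    fit = minimize(neg_log_lik, x0=[0.0, 0.0])
    psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
    print(f"psi_hat={psi_hat:.2f}  p_hat={p_hat:.2f}")  # near 0.60, 0.40
    ```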

  • Coming out to a chatbot?

    Today, there are dozens of large language model (LLM) chatbots aimed at mental health care — addressing everything from loneliness among seniors to anxiety and depression in teens.
    But the efficacy of these apps is unclear. Even more unclear is how well these apps work in supporting specific, marginalized groups like LGBTQ+ communities.
    A team of researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences, Emory University, Vanderbilt University and the University of California, Irvine found that while large language models can offer fast, on-demand support, they frequently fail to grasp the specific challenges that many members of the LGBTQ+ community face.
    That failure could lead the chatbot to give at best unhelpful and at worst dangerous advice.
    The paper is being presented this week at the ACM (Association for Computing Machinery) CHI conference on Human Factors in Computing Systems in Honolulu, Hawai’i.
    The researchers interviewed 31 participants — 18 identifying as LGBTQ+ and 13 as non-LGBTQ+ — about their usage of LLM-based chatbots for mental health support and how the chatbots supported their individual needs.
    On one hand, many participants reported that the chatbots offered a sense of solidarity and a safe space to explore and express their identities. Some used the chatbots for practice coming out to friends and family, others to practice asking someone out for the first time.

    But many of the participants also noted the programs’ shortfalls.
    One participant wrote, “I don’t think I remember any time that it gave me a solution. It will just be like empathetic. Or maybe, if I would tell it that I’m upset with someone being homophobic. It will suggest, maybe talking to that person. But most of the time it just be like, ‘I’m sorry that happened to you.'”
    “The boilerplate nature of the chatbots’ responses highlights their failure to recognize the complex and nuanced LGBTQ+ identities and experiences, making the chatbots’ suggestions feel emotionally disengaged,” said Zilin Ma, a PhD student at SEAS and co-first author of the paper.
    Because these chatbots tend to be sycophantic, said Ma, they’re actually very bad at simulating hostility, which makes them ill-suited to practice potentially fraught conversations like coming out.
    They also gave some participants staggeringly bad advice — telling one person to quit their job after experiencing workplace homophobia, without considering the financial or personal consequences.
    Ma, who is in the lab of Krzysztof Gajos, the Gordon McKay Professor of Computer Science, stressed that while there are ways to improve these programs, such improvements are not a panacea.

    “There are ways we could improve these limitations by fine-tuning the LLMs for contexts relevant to LGBTQ+ users, implementing context-sensitive guardrails, or regularly updating feedback loops, but we wonder if this tendency to implement technology in every aspect of a social problem is the right approach,” said Ma. “We can optimize all these LLMs all we want, but there are aspects of LGBTQ+ mental health that cannot be solved with LLM chatbots — such as discrimination, bullying, the stress of coming out or the lack of representation. For that, we need a holistic support system for LGBTQ+ people.”
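    To make the “context-sensitive guardrails” idea concrete, here is a hypothetical sketch, not code from the paper: generate_reply is a stand-in for any LLM call, and the trigger phrases are placeholders that a real system would develop with affected communities and clinicians.

    ```python
    # Hypothetical guardrail wrapper; the phrase lists and stub LLM are
    # illustrative assumptions, not the study's system.
    BOILERPLATE = ("i'm sorry that happened", "i'm sorry to hear")
    HIGH_STAKES = ("quit your job", "cut off contact")

    def generate_reply(prompt: str) -> str:
        """Stand-in for a real LLM API call; returns a canned reply here."""
        return "I'm sorry that happened to you."

    def guarded_reply(prompt: str) -> str:
        reply = generate_reply(prompt)
        low = reply.lower()
        if any(phrase in low for phrase in HIGH_STAKES):
            # Flag life-altering advice rather than letting it pass unchecked.
            reply += ("\n\nThis is a major decision; consider talking it "
                      "through with a counselor before acting.")
        if any(low.startswith(b) for b in BOILERPLATE):
            # Nudge past empty empathy toward something concrete.
            # (The stub returns the same text; a real LLM would revise.)
            reply = generate_reply(prompt + "\nOffer one concrete, safe next step.")
        return reply

    print(guarded_reply("A coworker was homophobic to me today."))
    ```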
    One area where LLM chatbots could be useful is in the training of human counselors or online community moderators.
    “Rather than having teens in crisis talk to the chatbot directly, you could use the chatbot to train counselors,” said Ma. “Then you have a real human to talk to, but it empowers the counselors with technology, which is a socio-technical solution which I think works well in this case.”
    “Research in public health suggests that interventions that directly target the affected individuals — like the chatbots for improving individual well-being — risk leaving the most vulnerable people behind,” said Gajos. “It is harder but potentially more impactful to change the communities themselves through training counselors or online community moderators.”
    The research was co-authored by Yiyang Mei, Yinru Long, Zhaoyuan “Nick” Su and Gajos.

  • Chatbots tell people what they want to hear

    Chatbots share limited information, reinforce ideologies, and, as a result, can lead to more polarized thinking when it comes to controversial issues, according to new Johns Hopkins University-led research.
    The study challenges perceptions that chatbots are impartial and provides insight into how using conversational search systems could widen the public divide on hot-button issues and leave people vulnerable to manipulation.
    “Because people are reading a summary paragraph generated by AI, they think they’re getting unbiased, fact-based answers,” said lead author Ziang Xiao, an assistant professor of computer science at Johns Hopkins who studies human-AI interactions. “Even if a chatbot isn’t designed to be biased, its answers reflect the biases or leanings of the person asking the questions. So really, people are getting the answers they want to hear.”
    Xiao and his team share their findings at the Association for Computing Machinery’s CHI conference on Human Factors in Computing Systems at 5 p.m. ET on Monday, May 13.
    To see how chatbots influence online searches, the team compared how people interacted with different search systems and how they felt about controversial issues before and after using them.
    The researchers asked 272 participants to write out their thoughts about a topic such as health care, student loans or sanctuary cities, and then to look up more information online about that topic using either a chatbot or a traditional search engine built for the study. After considering the search results, participants wrote a second essay and answered questions about the topic. The researchers also had participants read two opposing articles and questioned them about how much they trusted the information and whether they found the viewpoints extreme.
    Because chatbots offered a narrower range of information than traditional web searches and provided answers that reflected the participants’ preexisting attitudes, the participants who used them became more invested in their original ideas and had stronger reactions to information that challenged their views, the researchers found.

    “People tend to seek information that aligns with their viewpoints, a behavior that often traps them in an echo chamber of like-minded opinions,” Xiao said. “We found that this echo chamber effect is stronger with the chatbots than traditional web searches.”
    The echo chamber stems, in part, from the way participants interacted with chatbots, Xiao said. Rather than typing in keywords, as people do for traditional search engines, chatbot users tended to type full questions, such as “What are the benefits of universal health care?” or “What are the costs of universal health care?” A chatbot would answer with a summary that included only benefits or only costs.
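    A toy example makes the mechanism visible. If a retrieval step ranks documents by overlap with the question's own words, a one-sided question surfaces only one side; the documents and scoring here are invented for illustration and are not the study's search system:

    ```python
    # Toy retrieval: rank documents by word overlap with the question.
    DOCS = [
        "benefits of universal health care include coverage for everyone",
        "costs of universal health care include higher taxes",
        "universal health care benefits public health outcomes",
    ]
    STOP = {"what", "are", "the", "of", "include", "for"}

    def tokens(text):
        return {w.strip("?.,").lower() for w in text.split()} - STOP

    def top_hits(query, k=2):
        q = tokens(query)
        return sorted(DOCS, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

    print(top_hits("What are the benefits of universal health care?"))
    # Only the 'benefits' documents surface; the 'costs' side never appears.
    ```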
    “With chatbots, people tend to be more expressive and formulate questions in a more conversational way. It’s a function of how we speak,” Xiao said. “But our language can be used against us.”
    AI developers can train chatbots to extract clues from questions and identify people’s biases, Xiao said. Once a chatbot knows what a person likes or doesn’t like, it can tailor its responses to match.
    In fact, when the researchers created a chatbot with a hidden agenda, designed to agree with people, the echo chamber effect was even stronger.
    To try to counteract the echo chamber effect, researchers trained a chatbot to provide answers that disagreed with participants. People’s opinions didn’t change, Xiao said. The researchers also programmed a chatbot to link to source information to encourage people to fact-check, but only a few participants did.
    “Given AI-based systems are becoming easier to build, there are going to be opportunities for malicious actors to leverage AIs to make a more polarized society,” Xiao said. “Creating agents that always present opinions from the other side is the most obvious intervention, but we found they don’t work.”

  • Just believing that an AI is helping boosts your performance

    Sometimes it seems like an AI is helping, but the benefit is actually a placebo effect — people performing better simply because they expect to be doing so — according to new research from Aalto University in Finland. The study also shows how difficult it is to shake people’s trust in the capabilities of AI systems.
    In this study, participants were tasked with a simple letter recognition exercise. They performed the task once on their own and once supposedly aided by an AI system. Half of the participants were told the system was reliable and would enhance their performance; the other half were told it was unreliable and would worsen their performance.
    ‘In fact, neither AI system ever existed. Participants were led to believe an AI system was assisting them, when in reality, what the sham-AI was doing was completely random,’ explains doctoral researcher Agnes Kloft.
    The participants had to pair letters that popped up on screen at varying speeds. Surprisingly, both groups performed the exercise more efficiently — more quickly and attentively — when they believed an AI was involved.
    ‘What we discovered is that people have extremely high expectations of these systems, and we can’t make them AI doomers simply by telling them a program doesn’t work,’ says Assistant Professor Robin Welsch.
    Following the initial experiments, the researchers conducted an online replication study that produced similar results. They also introduced a qualitative component, inviting participants to describe their expectations of performing with an AI. Most had a positive outlook toward AI and, surprisingly, even skeptical people still had positive expectations about its performance.
    The findings pose a problem for the methods generally used to evaluate emerging AI systems. ‘This is the big realization coming from our study — that it’s hard to evaluate programmes that promise to help you because of this placebo effect’, Welsch says.

    While powerful technologies like large language models undoubtedly streamline certain tasks, subtle differences between versions may be amplified or masked by the placebo effect — and this is effectively harnessed through marketing.
    The results also pose a significant challenge for research on human-computer interaction, since expectations would influence the outcome unless placebo control studies were used.
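    The warning can be restated as a simulation. Below, an entirely sham "AI" condition gains an apparent, statistically significant benefit purely from an expectation effect; the task times and effect size are invented for illustration, not taken from the study:

    ```python
    # Simulated placebo effect: the "AI" adds nothing real, yet without a
    # placebo control the comparison makes it look helpful.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    baseline = rng.normal(700, 80, 40)           # solo task times, ms
    placebo_boost = rng.normal(-80, 15, 40)      # expectation effect only
    sham_ai = rng.normal(700, 80, 40) + placebo_boost

    t, p = stats.ttest_ind(baseline, sham_ai)
    print(f"t={t:.2f}, p={p:.4f}")  # sham AI looks 'helpful' at this effect size
    ```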
    ‘These results suggest that many studies in the field may have been skewed in favour of AI systems,’ concludes Welsch.
    The researchers will present the study at the CHI conference on May 14.

  • Cats purrfectly demonstrate what it takes to trust robots

    Would you trust a robot to look after your cat? New research suggests that it takes more than a carefully designed robot to care for your cat: the environment in which the robot operates is also vital, as is human interaction.
    Cat Royale is a unique collaboration between computer scientists from the University of Nottingham and artists at Blast Theory, who worked together to create a multispecies world centred around a bespoke enclosure in which three cats and a robot arm coexisted for six hours a day during a twelve-day installation as part of an artist-led project. The installation was launched in 2023 at the World Science Festival in Brisbane, Australia, and has been touring since; it has just won a Webby Award for its creative experience.
    The research paper, “Designing Multispecies Worlds for Robots, Cats, and Humans,” has just been presented at the annual ACM CHI Conference on Human Factors in Computing Systems (CHI ’24), where it won best paper. It outlines how designing the technology and its interactions is not sufficient on its own: it is equally important to consider the design of the ‘world’ in which the technology operates. The research also highlights the necessity of human involvement in areas such as breakdown recovery and animal welfare, and humans’ role as an audience.
    Cat Royale centred around a robot arm offering activities to make the cats happier. These included dragging a ‘mouse’ toy along the floor, raising a feather ‘bird’ into the air and even offering the cats treats to eat. The team then trained an AI to learn which games the cats liked best so that it could personalise their experiences.
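    The article doesn't say which learning method the team used, but a simple epsilon-greedy bandit captures the flavour of learning a cat's favourite game from noisy engagement signals. The games are from the article; the engagement scores and preferences are invented:

    ```python
    # Illustrative epsilon-greedy bandit for picking games; not the
    # Cat Royale system. Engagement values here are simulated.
    import random

    GAMES = ["drag mouse toy", "raise feather bird", "offer treat"]
    TRUE_PREF = {"drag mouse toy": 0.8, "raise feather bird": 0.4,
                 "offer treat": 0.6}

    def observed_engagement(game):
        """Stand-in for a measured signal, e.g. how long the cat plays."""
        return TRUE_PREF[game] + random.uniform(-0.2, 0.2)

    def pick_game(totals, counts, eps=0.2):
        if random.random() < eps or 0 in counts:       # explore
            return random.randrange(len(GAMES))
        means = [t / c for t, c in zip(totals, counts)]
        return means.index(max(means))                 # exploit best so far

    totals, counts = [0.0] * len(GAMES), [0] * len(GAMES)
    for _ in range(300):
        g = pick_game(totals, counts)
        totals[g] += observed_engagement(GAMES[g])
        counts[g] += 1
    print("Most offered game:", GAMES[counts.index(max(counts))])
    ```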
    “At first glance, the project is about designing a robot to enrich the lives of a family of cats by playing with them,” commented Professor Steve Benford from the University of Nottingham, who led the research. “Under the surface, however, it explores the question of what it takes to trust a robot to look after our loved ones and potentially ourselves.”
    Working with Blast Theory to develop and then study Cat Royale, the research team gained important insights into the design of robots and their interactions with cats. They had to design the robot to pick up toys and deploy them in ways that excited the cats, while it learned which games each cat liked. They also designed the entire world in which the cats and the robot lived, providing safe spaces for the cats to observe the robot and from which to sneak up on it, and decorating it so that the robot had the best chance of spotting the approaching cats.
    The implication is that designing robots involves interior design as well as engineering and AI. If you want to introduce robots into your home to look after your loved ones, you will likely need to redesign your home.
    Research workshops for Cat Royale were held at the University of Nottingham’s unique Cobot Maker Space, where stakeholders were brought together to think about the design of the robot and the welfare of the cats. Eike Schneiders, Transitional Assistant Professor in the Mixed Reality Lab at the University of Nottingham, worked on the design. He said: “As we learned through Cat Royale, creating a multispecies system — where cats, robots, and humans are all accounted for — takes more than just designing the robot. We had to ensure animal wellbeing at all times, while simultaneously ensuring that the interactive installation engaged the (human) audiences around the world. This involved consideration of many elements, including the design of the enclosure, the robot and its underlying systems, the various roles of the humans-in-the-loop, and, of course, the selection of the cats.”

  • New work extends the thermodynamic theory of computation

    Every computing system, biological or synthetic, from cells to brains to laptops, has a cost. This isn’t the price, which is easy to discern, but an energy cost connected to the work required to run a program and the heat dissipated in the process.
    Researchers at SFI and elsewhere have spent decades developing a thermodynamic theory of computation, but previous work on the energy cost has focused on basic symbolic computations — like the erasure of a single bit — that aren’t readily transferable to less predictable, real-world computing scenarios.
    In a paper published in Physical Review X on May 13, a quartet of physicists and computer scientists expand the modern theory of the thermodynamics of computation. By combining approaches from statistical physics and computer science, the researchers introduce mathematical equations that reveal the minimum and maximum predicted energy cost of computational processes that depend on randomness, which is a powerful tool in modern computers.
    In particular, the framework offers insights into how to compute energy-cost bounds on computational processes with an unpredictable finish. For example: A coin-flipping simulator may be instructed to stop flipping once it achieves 10 heads. In biology, a cell may stop producing a protein once it elicits a certain reaction from another cell. The “stopping times” of these processes, or the time required to achieve the goal for the first time, can vary from trial to trial. The new framework offers a straightforward way to calculate the lower bounds on the energy cost of those situations.
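    The coin-flip example is easy to reproduce, and makes the trial-to-trial variability of stopping times concrete:

    ```python
    # Simulate the stopping time of the article's coin-flip example:
    # the number of flips needed before seeing 10 heads.
    import random

    def flips_until_n_heads(n=10, p=0.5):
        heads = flips = 0
        while heads < n:
            flips += 1
            heads += random.random() < p
        return flips

    times = [flips_until_n_heads() for _ in range(10_000)]
    print("min:", min(times), "mean:", sum(times) / len(times),
          "max:", max(times))
    # The mean is about 20 flips for a fair coin (10 / 0.5), but
    # individual runs vary widely, which is the point of stopping times.
    ```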
    The research was conducted by SFI Professor David Wolpert, Gonzalo Manzano (Institute for Cross-Disciplinary Physics and Complex Systems, Spain), Édgar Roldán (Institute for Theoretical Physics, Italy), and SFI graduate fellow Gülce Kardes (CU Boulder). The study uncovers a way to lower-bound the energetic costs of arbitrary computational processes. For example: an algorithm that searches for a person’s first or last name in a database might stop running if it finds either, but we don’t know which one it found. “Many computational machines, when viewed as dynamical systems, have this property where if you jump from one state to another you really can’t go back to the original state in just one step,” says Kardes.
    Wolpert began investigating ways to apply ideas from nonequilibrium statistical physics to the theory of computation about a decade ago. Computers, he says, are a system out of equilibrium, and stochastic thermodynamics gives physicists a way to study nonequilibrium systems. “If you put those two together, it seemed like all kinds of fireworks would come out, in an SFI kind of spirit,” he says.
    In recent studies that laid the groundwork for this new paper, Wolpert and colleagues introduced the idea of a “mismatch cost,” or a measure of how much the cost of a computation exceeds Landauer’s bound. Proposed in 1961 by physicist Rolf Landauer, this limit defines the minimum amount of heat required to change information in a computer. Knowing the mismatch cost, Wolpert says, could inform strategies for reducing the overall energy cost of a system.
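    Landauer's bound itself is a one-line calculation: erasing one bit must dissipate at least k_B · T · ln 2 of heat, which at room temperature comes to a few zeptojoules.

    ```python
    # Landauer's limit: minimum heat dissipated to erase one bit.
    import math

    k_B = 1.380649e-23    # Boltzmann constant, J/K (exact, 2019 SI)
    T = 300.0             # room temperature, K
    E_min = k_B * T * math.log(2)
    print(f"Landauer limit at {T:.0f} K: {E_min:.2e} J per bit")  # ~2.87e-21 J
    ```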

    Across the Atlantic, co-authors Manzano and Roldán have been developing a tool from the mathematics of finance — martingale theory — to address the thermodynamic behavior of small fluctuating systems at stopping times. Roldán et al.’s “Martingales for Physicists” helped pave the way to successful applications of such a martingale approach in thermodynamics.
    In their PRX paper, Wolpert, Kardes, Roldán, and Manzano extend these tools from stochastic thermodynamics to calculate the mismatch cost of common computational problems.
    Taken together, their research points to a new avenue for finding the lowest energy needed for computation in any system, no matter how it’s implemented. “It’s exposing a vast new set of issues,” Wolpert says.
    It may also have a very practical application in pointing to new ways to make computing more energy efficient. The National Science Foundation estimates that computers use between 5% and 9% of globally generated power, but at current growth rates that could reach 20% by 2030. Previous work by SFI researchers suggests modern computers are grossly inefficient; biological systems, by contrast, are about 100,000 times more energy-efficient than human-built computers. Wolpert says that one of the primary motivations for a general thermodynamic theory of computation is to find new ways to reduce the energy consumption of real-world machines.
    For instance, a better understanding of how algorithms and devices use energy to do certain tasks could point to more efficient computer chip architectures. Right now, says Wolpert, there’s no clear way to make physical chips that can carry out computational tasks using less energy.
    “These kinds of techniques might provide a flashlight through the darkness,” he says.