More stories

  • An easy pill to swallow — new 3D printing research paves way for personalized medication

    A new technique for 3D printing medication has enabled the printing of multiple drugs in a single tablet, paving the way for personalised pills that can deliver timed doses.
    Researchers from the University of Nottingham’s Centre for Additive Manufacturing, working alongside the university’s School of Pharmacy, have fabricated personalised medicine using Multi-Material InkJet 3D Printing (MM-IJ3DP). The research has been published in Materials Today Advances.
    The team have developed a cutting-edge method that enables the fabrication of customised pharmaceutical tablets with tailored drug release profiles, ensuring more precise and effective treatment options for patients.
    Using MM-IJ3DP, tablets can be printed that release drugs at a controlled rate determined by the tablet’s design. This is made possible by a novel ink formulation based on molecules that are sensitive to ultraviolet light; when printed, these molecules form a water-soluble structure.
    The drug release rate is controlled by the tablet’s unique interior structure, allowing the timing of each dose to be set. The method can also print multiple drugs in a single tablet, allowing complex medication regimens to be simplified into a single dose.
    Dr Yinfeng He, Assistant Professor in the Faculty of Engineering’s Centre for Additive Manufacturing, who led the research, said: “This is an exciting step forwards in the development of personalised medication. This breakthrough not only highlights the potential of 3D printing in revolutionizing drug delivery but also opens up new avenues for the development of next-generation personalized medicines.”
    “While promising, the technology faces challenges, including the need for more formulations that support a wider range of materials. The ongoing research aims to refine these aspects, enhancing the feasibility of MM-IJ3DP for widespread application,” added Professor Ricky Wildman.
    This technology will be particularly beneficial in creating medication that needs to release drugs at specific times, making it ideal for treating diseases where timing and dosage accuracy are crucial. The ability to print 56 pills in a single batch demonstrates the scalability of the technology and its strong potential for the production of personalised medicines.
    Professor Felicity Rose of the University of Nottingham’s School of Pharmacy, a co-author on the research, said: “The future of prescribed medication lies in a personalised approach, and we know that up to 50% of people in the UK alone don’t take their medicines correctly, and this has an impact on poorer health outcomes, with conditions not being controlled or properly treated. A single pill approach would simplify taking multiple medications at different times, and this research is an exciting step towards that.”

  • Century of statistical ecology reviewed

    Crunching numbers isn’t exactly how Neil Gilbert, a postdoctoral researcher at Michigan State University, envisioned a career in ecology.
    “I think it’s a little funny that I’m doing this statistical ecology work because I was always OK at math, but never particularly enjoyed it,” he explained. “As an undergrad, I thought, I’ll be an ecologist — that means that I can be outside, looking at birds, that sort of thing.”
    “As it turns out,” he chuckled, “ecology is a very quantitative discipline.”
    Now, working in the Zipkin Quantitative Ecology lab, Gilbert is the lead author on a new article in a special collection of the journal Ecology that reviews the past century of statistical ecology.
    Statistical ecology, or the study of ecological systems using mathematical equations, probability and empirical data, has grown over the last century. As increasingly large datasets and complex questions took center stage in ecological research, new tools and approaches were needed to properly address them.
    To better understand how statistical ecology changed over the last century, Gilbert and his fellow authors examined a selection of 36 highly cited papers on statistical ecology — all published in Ecology since its inception in 1920.
    The team’s paper examines work on statistical models across a range of ecological scales, from individuals to populations, communities, ecosystems and beyond. The team also reviewed publications providing practical guidance on applying models. Gilbert noted that because “many practicing ecologists lack extensive quantitative training,” such publications are key to shaping studies.

    Ecology is an advantageous place for such papers because it is one of “the first internationally important journals in the field. It has played an outsized role in publishing important work,” said lab leader Elise Zipkin, a Red Cedar Distinguished Associate Professor in the Department of Integrative Biology.
    “It has a reputation of publishing some of the most influential papers on the development and application of analytical techniques from the very beginning of modern ecological research.”
    The team found a steady evolution of models and concepts in the field, especially over the past few decades, driven by refinements in techniques and exponential increases in computational power.
    “Statistical ecology has exploded in the last 20 to 30 years because of advances in both data availability and the continued improvement of high-performance computing clusters,” Gilbert explained.
    Included among the 36 reviewed papers were a landmark 1945 study by Lee R. Dice on predicting the co-occurrence of species in space — Ecology’s most highly cited paper of all time — and an influential 2002 paper led by Darryl MacKenzie on occupancy models. Ecologists use these models to identify the range and distribution of species in an environment.
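    For context, the association index Dice proposed measures how strongly two species co-occur: twice the number of sites where both are present, divided by the sum of the number of sites each occupies. A minimal Python sketch of that calculation (the site data here are made up for illustration):

        def dice_index(sites_a, sites_b):
            # Dice's (1945) association index: 2|A & B| / (|A| + |B|),
            # ranging from 0 (the species never co-occur) to 1 (always together).
            a, b = set(sites_a), set(sites_b)
            if not a and not b:
                return 0.0
            return 2 * len(a & b) / (len(a) + len(b))

        # Two species recorded across a handful of survey sites
        print(dice_index({"s1", "s2", "s3"}, {"s2", "s3", "s4"}))  # ~0.67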
    MacKenzie’s work on species detection and sampling “played an outsized role in the study of species distributions,” said Zipkin. MacKenzie’s paper, which has been cited more than 5,400 times, spawned various software packages that are now widely used by ecologists, she explained.
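    The core idea behind such occupancy models is that a species can occupy a site yet go undetected on any given survey, so raw detection records understate its true distribution. A small Python simulation sketch of that detection process (parameter values are illustrative, not taken from MacKenzie’s paper):

        import random

        def simulate_detection_histories(n_sites=100, n_visits=3, psi=0.6, p=0.4):
            # Basic occupancy setup: each site is occupied with probability psi;
            # an occupied site yields a detection on each visit with probability p.
            histories = []
            for _ in range(n_sites):
                occupied = random.random() < psi
                histories.append([int(occupied and random.random() < p)
                                  for _ in range(n_visits)])
            return histories

        # The naive estimate (share of sites with at least one detection) falls
        # below the true psi because p < 1; occupancy models correct for this.
        hist = simulate_detection_histories()
        print(sum(any(h) for h in hist) / len(hist))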

  • Coming out to a chatbot?

    Today, there are dozens of large language model (LLM) chatbots aimed at mental health care — addressing everything from loneliness among seniors to anxiety and depression in teens.
    But the efficacy of these apps is unclear. Even more unclear is how well these apps work in supporting specific, marginalized groups like LGBTQ+ communities.
    A team of researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences, Emory University, Vanderbilt University and the University of California, Irvine found that while large language models can offer fast, on-demand support, they frequently fail to grasp the specific challenges that many members of the LGBTQ+ community face.
    That failure could lead the chatbot to give at best unhelpful and at worst dangerous advice.
    The paper is being presented this week at the ACM (Association for Computing Machinery) conference on Human Factors in Computing Systems (CHI) in Honolulu, Hawai’i.
    The researchers interviewed 31 participants — 18 identifying as LGBTQ+ and 13 as non-LGBTQ+ — about their usage of LLM-based chatbots for mental health support and how the chatbots supported their individual needs.
    On one hand, many participants reported that the chatbots offered a sense of solidarity and a safe space to explore and express their identities. Some used the chatbots to practice coming out to friends and family; others used them to practice asking someone out for the first time.

    But many of the participants also noted the programs’ shortfalls.
    One participant wrote, “I don’t think I remember any time that it gave me a solution. It will just be like empathetic. Or maybe, if I would tell it that I’m upset with someone being homophobic. It will suggest, maybe talking to that person. But most of the time it just be like, ‘I’m sorry that happened to you.'”
    “The boilerplate nature of the chatbots’ responses highlights their failure to recognize the complex and nuanced LGBTQ+ identities and experiences, making the chatbots’ suggestions feel emotionally disengaged,” said Zilin Ma, a PhD student at SEAS and co-first author of the paper.
    Because these chatbots tend to be sycophantic, said Ma, they’re actually very bad at simulating hostility, which makes them ill-suited to practice potentially fraught conversations like coming out.
    They also gave some participants staggeringly bad advice — telling one person to quit their job after experiencing workplace homophobia, without considering the financial or personal consequences.
    Ma, who is in the lab of Krzysztof Gajos, the Gordon McKay Professor of Computer Science, stressed that while there are ways to improve these programs, such improvements are not a panacea.

    “There are ways we could improve these limitations by fine tuning the LLMs for contexts relevant to LGBTQ+ users or implementing context-sensitive guardrails or regularly updating feedback loops, but we wonder if this tendency to implement technology at every aspect of social problem is the right approach,” said Ma. “We can optimize all these LLMs all we want but there are aspects of LGBTQ+ mental health that cannot be solved with LLM chatbots — such as discrimination, bullying, the stress of coming out or the lack of representation. For that, we need a holistic support system for LGBTQ+ people.”
    One area where LLM chatbots could be useful is in the training of human counselors or online community moderators.
    “Rather than having teens in crisis talk to the chatbot directly, you could use the chatbot to train counselors,” said Ma. “Then you have a real human to talk to, but it empowers the counselors with technology, which is a socio-technical solution which I think works well in this case.”
    “Research in public health suggests that interventions that directly target the affected individuals — like the chatbots for improving individual well-being — risk leaving the most vulnerable people behind,” said Gajos. “It is harder but potentially more impactful to change the communities themselves through training counselors or online community moderators.”
    The research was co-authored by Yiyang Mei, Yinru Long, Zhaoyuan “Nick” Su and Gajos.

  • Chatbots tell people what they want to hear

    Chatbots share limited information, reinforce ideologies, and, as a result, can lead to more polarized thinking when it comes to controversial issues, according to new Johns Hopkins University-led research.
    The study challenges perceptions that chatbots are impartial and provides insight into how using conversational search systems could widen the public divide on hot-button issues and leave people vulnerable to manipulation.
    “Because people are reading a summary paragraph generated by AI, they think they’re getting unbiased, fact-based answers,” said lead author Ziang Xiao, an assistant professor of computer science at Johns Hopkins who studies human-AI interactions. “Even if a chatbot isn’t designed to be biased, its answers reflect the biases or leanings of the person asking the questions. So really, people are getting the answers they want to hear.”
    Xiao and his team share their findings at the Association for Computing Machinery’s CHI conference on Human Factors in Computing Systems at 5 p.m. ET on Monday, May 13.
    To see how chatbots influence online searches, the team compared how people interacted with different search systems and how they felt about controversial issues before and after using them.
    The researchers asked 272 participants to write out their thoughts about topics such as health care, student loans and sanctuary cities, and then look up more information online about that topic using either a chatbot or a traditional search engine built for the study. After considering the search results, participants wrote a second essay and answered questions about the topic. Researchers also had participants read two opposing articles and questioned them about how much they trusted the information and whether they found the viewpoints to be extreme.
    Because chatbots offered a narrower range of information than traditional web searches and provided answers that reflected the participants’ preexisting attitudes, the participants who used them became more invested in their original ideas and had stronger reactions to information that challenged their views, the researchers found.

    “People tend to seek information that aligns with their viewpoints, a behavior that often traps them in an echo chamber of like-minded opinions,” Xiao said. “We found that this echo chamber effect is stronger with the chatbots than traditional web searches.”
    The echo chamber stems, in part, from the way participants interacted with chatbots, Xiao said. Rather than typing in keywords, as people do for traditional search engines, chatbot users tended to type in full questions, such as “What are the benefits of universal health care?” or “What are the costs of universal health care?” A chatbot would answer with a summary that included only benefits or only costs.
    “With chatbots, people tend to be more expressive and formulate questions in a more conversational way. It’s a function of how we speak,” Xiao said. “But our language can be used against us.”
    AI developers can train chatbots to extract clues from questions and identify people’s biases, Xiao said. Once a chatbot knows what a person likes or doesn’t like, it can tailor its responses to match.
    In fact, when the researchers created a chatbot with a hidden agenda, designed to agree with people, the echo chamber effect was even stronger.
    To try to counteract the echo chamber effect, researchers trained a chatbot to provide answers that disagreed with participants. People’s opinions didn’t change, Xiao said. The researchers also programmed a chatbot to link to source information to encourage people to fact-check, but only a few participants did.
    “Given that AI-based systems are becoming easier to build, there are going to be opportunities for malicious actors to leverage AIs to make a more polarized society,” Xiao said. “Creating agents that always present opinions from the other side is the most obvious intervention, but we found they don’t work.”

  • Just believing that an AI is helping boosts your performance

    Sometimes it seems like an AI is helping, but the benefit is actually a placebo effect — people performing better simply because they expect to be doing so — according to new research from Aalto University in Finland. The study also shows how difficult it is to shake people’s trust in the capabilities of AI systems.
    In this study, participants were tasked with a simple letter recognition exercise. They performed the task once on their own and once supposedly aided by an AI system. Half of the participants were told the system was reliable and would enhance their performance; the other half were told it was unreliable and would worsen their performance.
    ‘In fact, neither AI system ever existed. Participants were led to believe an AI system was assisting them, when in reality, what the sham-AI was doing was completely random,’ explains doctoral researcher Agnes Kloft.
    The participants had to pair letters that popped up on screen at varying speeds. Surprisingly, both groups performed the exercise more efficiently — more quickly and attentively — when they believed an AI was involved.
    ‘What we discovered is that people have extremely high expectations of these systems, and we can’t make them AI doomers simply by telling them a program doesn’t work,’ says Assistant Professor Robin Welsch.
    Following the initial experiments, the researchers conducted an online replication study that produced similar results. They also introduced a qualitative component, inviting participants to describe their expectations of performing with an AI. Most had a positive outlook toward AI and, surprisingly, even skeptical people still had positive expectations about its performance.
    The findings pose a problem for the methods generally used to evaluate emerging AI systems. ‘This is the big realization coming from our study — that it’s hard to evaluate programmes that promise to help you because of this placebo effect’, Welsch says.

    While powerful technologies like large language models undoubtedly streamline certain tasks, subtle differences between versions may be amplified or masked by the placebo effect — and this is effectively harnessed through marketing.
    The results also pose a significant challenge for research on human-computer interaction, since participants’ expectations will influence outcomes unless placebo-controlled designs are used.
    ‘These results suggest that many studies in the field may have been skewed in favour of AI systems,’ concludes Welsch.
    The researchers will present the study at the CHI conference on May 14.

  • Cats purrfectly demonstrate what it takes to trust robots

    Would you trust a robot to look after your cat? New research suggests that it takes more than a carefully designed robot to care for your cat: the environment in which the robot operates is also vital, as is human involvement.
    Cat Royale is a unique collaboration between computer scientists from the University of Nottingham and artists at Blast Theory, who worked together to create a multispecies world centred around a bespoke enclosure in which three cats and a robot arm coexisted for six hours a day during a twelve-day installation as part of an artist-led project. The installation was launched in 2023 at the World Science Festival in Brisbane, Australia and has been touring since; it has just won a Webby Award for its creative experience.
    The research paper, “Designing Multispecies Worlds for Robots, Cats, and Humans,” has just been presented at the annual ACM CHI Conference on Human Factors in Computing Systems (CHI ’24), where it won best paper. It outlines how designing the technology and its interactions is not sufficient: it is equally important to consider the design of the ‘world’ in which the technology operates. The research also highlights the necessity of human involvement in areas such as breakdown recovery and animal welfare, and humans’ role as audience.
    Cat Royale centred around a robot arm offering activities to make the cats happier, including dragging a ‘mouse’ toy along the floor, raising a feather ‘bird’ into the air, and even offering them treats to eat. The team then trained an AI to learn which games the cats liked best so that it could personalise their experiences.
    “At first glance, the project is about designing a robot to enrich the lives of a family of cats by playing with them,” commented Professor Steve Benford from the University of Nottingham, who led the research. “Under the surface, however, it explores the question of what it takes to trust a robot to look after our loved ones and potentially ourselves.”
    Working with Blast Theory to develop and then study Cat Royale, the research team gained important insights into the design of the robot and its interactions with the cats. They had to design the robot to pick up toys and deploy them in ways that excited the cats, while it learned which games each cat liked. They also designed the entire world in which the cats and the robot lived, providing safe spaces for the cats to observe the robot and from which to sneak up on it, and decorating it so that the robot had the best chance of spotting the approaching cats.
    The implication is that designing robots involves interior design as well as engineering and AI. If you want to introduce robots into your home to look after your loved ones, you will likely need to redesign your home.
    Research workshops for Cat Royale were held at the University of Nottingham’s unique Cobotmaker Space, where stakeholders were brought together to think about the design of the robot and the welfare of the cats. Eike Schneiders, Transitional Assistant Professor in the Mixed Reality Lab at the University of Nottingham, worked on the design. He said: “As we learned through Cat Royale, creating a multispecies system — where cats, robots, and humans are all accounted for — takes more than just designing the robot. We had to ensure animal wellbeing at all times, while simultaneously ensuring that the interactive installation engaged the (human) audiences around the world. This involved consideration of many elements, including the design of the enclosure, the robot and its underlying systems, the various roles of the humans-in-the-loop, and, of course, the selection of the cats.”

  • New work extends the thermodynamic theory of computation

    Every computing system, biological or synthetic, from cells to brains to laptops, has a cost. This isn’t the price, which is easy to discern, but an energy cost connected to the work required to run a program and the heat dissipated in the process.
    Researchers at SFI and elsewhere have spent decades developing a thermodynamic theory of computation, but previous work on the energy cost has focused on basic symbolic computations — like the erasure of a single bit — that aren’t readily transferable to less predictable, real-world computing scenarios.
    In a paper published in Physical Review X on May 13, a quartet of physicists and computer scientists expand the modern theory of the thermodynamics of computation. By combining approaches from statistical physics and computer science, the researchers introduce mathematical equations that reveal the minimum and maximum predicted energy cost of computational processes that depend on randomness, which is a powerful tool in modern computers.
    In particular, the framework offers insights into how to compute energy-cost bounds on computational processes with an unpredictable finish. For example: A coin-flipping simulator may be instructed to stop flipping once it achieves 10 heads. In biology, a cell may stop producing a protein once it elicits a certain reaction from another cell. The “stopping times” of these processes, or the time required to achieve the goal for the first time, can vary from trial to trial. The new framework offers a straightforward way to calculate the lower bounds on the energy cost of those situations.
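    As a toy version of that coin-flipping example, here is a minimal Python sketch (illustrative only, not from the paper) showing why the stopping time is random: the number of flips needed to reach 10 heads differs from run to run.

        import random

        def flips_until_n_heads(n_heads=10, p_heads=0.5):
            # Flip until n_heads heads have appeared; return the total
            # number of flips, i.e. the random stopping time.
            heads = flips = 0
            while heads < n_heads:
                flips += 1
                heads += random.random() < p_heads
            return flips

        print([flips_until_n_heads() for _ in range(5)])  # e.g. [17, 23, 16, 21, 25]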
    The research was conducted by SFI Professor David Wolpert, Gonzalo Manzano (Institute for Cross-Disciplinary Physics and Complex Systems, Spain), Édgar Roldán (Institute for Theoretical Physics, Italy), and SFI graduate fellow Gülce Kardes (CU Boulder). The study uncovers a way to lower-bound the energetic costs of arbitrary computational processes. For example: an algorithm that searches for a person’s first or last name in a database might stop running if it finds either, but we don’t know which one it found. “Many computational machines, when viewed as dynamical systems, have this property where if you jump from one state to another you really can’t go back to the original state in just one step,” says Kardes.
    Wolpert began investigating ways to apply ideas from nonequilibrium statistical physics to the theory of computation about a decade ago. Computers, he says, are systems out of equilibrium, and stochastic thermodynamics gives physicists a way to study nonequilibrium systems. “If you put those two together, it seemed like all kinds of fireworks would come out, in an SFI kind of spirit,” he says.
    In recent studies that laid the groundwork for this new paper, Wolpert and colleagues introduced the idea of a “mismatch cost,” or a measure of how much the cost of a computation exceeds Landauer’s bound. Proposed in 1961 by physicist Rolf Landauer, this limit defines the minimum amount of heat required to change information in a computer. Knowing the mismatch cost, Wolpert says, could inform strategies for reducing the overall energy cost of a system.
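    For reference, Landauer’s bound sets the minimum heat dissipated when one bit of information is erased in an environment at temperature T:

        E_min = k_B * T * ln(2)

    where k_B is Boltzmann’s constant. At room temperature (about 300 K) this comes to roughly 3 x 10^-21 joules per bit, a floor that real hardware exceeds by many orders of magnitude.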

    Across the Atlantic, co-authors Manzano and Roldán have been developing a tool from the mathematics of finance — martingale theory — to address the thermodynamic behavior of small fluctuating systems at stopping times. Roldán et al.’s “Martingales for Physicists” helped pave the way to successful applications of such a martingale approach in thermodynamics.
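    In brief, a martingale is a stochastic process whose expected next value, conditioned on everything observed so far, equals its current value:

        E[X_{n+1} | X_0, ..., X_n] = X_n

    Optional-stopping results for such processes are what make it possible to evaluate thermodynamic bounds at random stopping times rather than only at fixed times.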
    In their PRX paper, Wolpert, Kardes, Roldán, and Manzano extend these tools from stochastic thermodynamics to calculate the mismatch cost of common computational problems.
    Taken together, their research points to a new avenue for finding the lowest energy needed for computation in any system, no matter how it’s implemented. “It’s exposing a vast new set of issues,” Wolpert says.
    It may also have a very practical application, pointing to new ways to make computing more energy efficient. The National Science Foundation estimates that computers use between 5% and 9% of globally generated power, and at current growth rates that could reach 20% by 2030. Previous work by SFI researchers suggests modern computers are grossly inefficient: biological systems, by contrast, are about 100,000 times more energy-efficient than human-built computers. Wolpert says that one of the primary motivations for a general thermodynamic theory of computation is to find new ways to reduce the energy consumption of real-world machines.
    For instance, a better understanding of how algorithms and devices use energy to do certain tasks could point to more efficient computer chip architectures. Right now, says Wolpert, there’s no clear way to make physical chips that can carry out computational tasks using less energy.
    “These kinds of techniques might provide a flashlight through the darkness,” he says.

  • Potential power and pitfalls of harnessing artificial intelligence for sleep medicine

    In a new research commentary, the Artificial Intelligence in Sleep Medicine Committee of the American Academy of Sleep Medicine highlights how artificial intelligence stands on the threshold of making monumental contributions to the field of sleep medicine. Through a strategic analysis, the committee examined advancements in AI within sleep medicine and spotlighted its potential in revolutionizing care in three critical areas: clinical applications, lifestyle management, and population health. The committee also reviewed barriers and challenges associated with using AI-enabled technologies.
    “AI is disrupting all areas of medicine, and the future of sleep medicine is poised at a transformational crossroad,” said lead author Dr. Anuja Bandyopadhyay, chair of the Artificial Intelligence in Sleep Medicine Committee. “This commentary outlines the powerful potential and challenges for sleep medicine physicians to be aware of as they begin leveraging AI to deliver precise, personalized patient care and enhance preventive health strategies on a larger scale while ensuring its ethical deployment.”
    According to the authors, AI has potential uses in the sleep field in three key areas:

    Clinical applications: In the clinical realm, AI-driven technologies offer comprehensive data analysis, nuanced pattern recognition and automation in diagnosis, all while addressing chronic problems like sleep-related breathing disorders. Despite understated beginnings, the utilization of AI can offer improvements in efficiency and patient access, which can contribute to a reduction in burnout among health care professionals.

    Lifestyle management: Incorporating AI also offers clear benefits for lifestyle management through the use of consumer sleep technology. These devices come in various forms, like fitness wristbands, smartphone apps and smart rings, and they contribute to better sleep health through tracking, assessment and enhancement. Wearable sleep technology and data-driven lifestyle recommendations can empower patients to take an active role in managing their health, as shown in a recent AASM survey in which 68% of adults who have used a sleep tracker said they have changed their behavior based on what they have learned. But as these AI-driven applications grow ever more intuitive, ongoing dialogue between patients and clinicians about the potential and limitations of these innovations remains vital.

    Population health: Beyond individual care, AI technology reveals a new approach to public health regarding sleep. “AI has the exciting potential to synthesize environmental, behavioral and physiological data, contributing to informed population-level interventions and bridging existing health care gaps,” noted Bandyopadhyay.

    The paper also offers warnings about the integration of AI into sleep medicine. Issues of data privacy, security, accuracy, and the potential for reinforcing existing biases present new challenges for health care professionals. Additionally, reliance on AI without sufficient clinical judgment could lead to complexities in patient treatment.
    “While AI can significantly strengthen the evaluation and management of sleep disorders, it is intended to complement, not replace, the expertise of a sleep medicine professional,” Bandyopadhyay stated.
    Navigating this emerging landscape requires comprehensive validation and standardization protocols to responsibly and ethically implement AI technologies in health care. It’s critical that AI tools are validated against varied datasets to ensure their reliability and accuracy in all patient populations.
    “Our commentary provides not just a vision, but a roadmap for leveraging the technology to promote better sleep health outcomes,” Bandyopadhyay said. “It lays the foundation for future discussions on the ethical deployment of AI, the importance of clinician education, and the harmonization of this new technology with existing practices to optimize patient care.”