More stories

  • Medical AI tool gets human thumbs-up

    A new artificial intelligence computer program created by researchers at the University of Florida and NVIDIA can generate doctors’ notes so well that two physicians couldn’t tell the difference, according to an early study from both groups.
    In this proof-of-concept study, physicians reviewed patient notes — some written by actual medical doctors while others were created by the new AI program — and the physicians identified the correct author only 49% of the time.
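    As a rough illustration of why 49% is statistically indistinguishable from coin-flipping, one can run a two-sided binomial test against the 50% chance level. This is a hypothetical sketch, not the study's actual analysis; the number of notes reviewed below is a made-up placeholder.

```python
# Hypothetical sketch: is 49% correct author identification distinguishable
# from chance? The note count below is a placeholder, not the study's sample size.
from scipy.stats import binomtest

n_notes = 100     # hypothetical number of notes each physician judged
n_correct = 49    # 49% correct identification of the author
result = binomtest(n_correct, n_notes, p=0.5, alternative="two-sided")
print(f"p-value vs. chance (50%): {result.pvalue:.3f}")  # large p => consistent with guessing
```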
    A team of 19 researchers from NVIDIA and the University of Florida said their findings, published Nov. 16 in the Nature journal npj Digital Medicine, open the door for AI to support health care workers with groundbreaking efficiencies.
    The researchers used a supercomputer to train GatorTronGPT, a new model that functions similarly to ChatGPT, to generate medical text. The free versions of GatorTron™ models have more than 430,000 downloads from Hugging Face, an open-source AI website. GatorTron™ models are the site’s only models available for clinical research, according to the article’s lead author Yonghui Wu, Ph.D., from the UF College of Medicine’s department of health outcomes and biomedical informatics.
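    For readers who want to experiment, a GatorTron™ encoder can in principle be loaded from Hugging Face with a few lines of the transformers library. This is a minimal sketch; the model identifier used here is an assumption, so check the Hugging Face hub for the exact name.

```python
# Minimal sketch: loading a GatorTron encoder from Hugging Face with transformers.
# The model id below is assumed for illustration; verify the exact name on the hub.
from transformers import AutoTokenizer, AutoModel

model_id = "UFNLP/gatortron-base"  # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

text = "Patient presents with shortness of breath and chest pain."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)                # contextual embeddings for clinical text
print(outputs.last_hidden_state.shape)   # (batch, tokens, hidden_size)
```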
    “In health care, everyone is talking about these models. GatorTron™ and GatorTronGPT are unique AI models that can power many aspects of medical research and health care. Yet, they require massive data and extensive computing power to build. We are grateful to have this supercomputer, HiPerGator, from NVIDIA to explore the potential of AI in health care,” Wu said.
    UF alumnus and NVIDIA co-founder Chris Malachowsky is the namesake of UF’s new Malachowsky Hall for Data Science & Information Technology. A public-private partnership between UF and NVIDIA helped to fund this $150 million structure. In 2021, UF upgraded its HiPerGator supercomputer to elite status with a multimillion-dollar infrastructure package from NVIDIA, the first at a university.
    For this research, Wu and his colleagues developed a large language model, a kind of model that allows computers to mimic natural human language. These models work well with standard writing or conversations, but medical records bring additional hurdles, such as the need to protect patients’ privacy and the highly technical nature of clinical language. Digital medical records cannot be Googled or shared on Wikipedia.

    To overcome these obstacles, the researchers stripped identifying information from the UF Health medical records of 2 million patients while keeping 82 billion useful medical words. Combining this set with another dataset of 195 billion words, they trained the GatorTronGPT model on the resulting corpus using GPT-3 (Generative Pre-trained Transformer) architecture, a form of neural network. That allowed GatorTronGPT to write clinical text similar to medical doctors’ notes.
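    Conceptually, GPT-style training reduces to one objective: predict the next token of text. The following toy sketch shows that objective with a tiny, randomly initialized GPT-2 configuration and a single fabricated clinical sentence; it is not the actual GatorTronGPT training setup, which ran on a supercomputer over billions of words.

```python
# Toy sketch of the next-token objective used to train GPT-style models such as
# GatorTronGPT; tiny config and fabricated text, not the real training setup.
import torch
from transformers import GPT2Config, GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel(GPT2Config(n_layer=2, n_head=2, n_embd=64))  # toy size
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

batch = tokenizer(["Patient denies fever, chills, or night sweats."],
                  return_tensors="pt")
outputs = model(input_ids=batch.input_ids, labels=batch.input_ids)  # labels shifted internally
outputs.loss.backward()   # cross-entropy on predicting each next token
optimizer.step()
print(f"next-token loss: {outputs.loss.item():.2f}")
```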
    “This GatorTronGPT model is one of the first major products from UF’s initiative to incorporate AI across the university. We are so pleased with how the partnership with NVIDIA is already bearing fruit and setting the stage for the future of medicine,” said Elizabeth Shenkman, Ph.D., a co-author and chair of UF’s department of health outcomes and biomedical informatics.
    Of the many possible uses for a medical GPT, one idea involves replacing the tedium of documentation with notes recorded and transcribed by AI. Wu says that UF has an innovation center that is pursuing a commercial version of the software.
    For an AI tool to reach such parity with human writing, programmers spend weeks programming supercomputers with clinical vocabulary and language usage based on billions upon billions of words. One resource providing the necessary clinical data is the OneFlorida+ Clinical Research Network, coordinated at UF and representing many health care systems.
    “It’s critical to have such massive amounts of UF Health clinical data not only available but ready for AI. Only a supercomputer could handle such a big dataset of 277 billion words. We are excited to implement GatorTron™ and GatorTronGPT models in real-world health care at UF Health,” said Jiang Bian, Ph.D., a co-author and UF Health’s chief data scientist and chief research information officer.
    A cross-section of 14 UF and UF Health faculty contributed to this study, including researchers from Research Computing, Integrated Data Repository Research Services within the Clinical and Translational Science Institute, and from departments and divisions within the College of Medicine, including neurosurgery, endocrinology, diabetes and metabolism, cardiovascular medicine, and health outcomes and biomedical informatics.
    The study was partially funded by grants from the Patient-Centered Outcomes Research Institute, the National Cancer Institute and the National Institute on Aging.
    Here are two paragraphs that reference two patient cases, one written by a human and one created by GatorTronGPT — can you tell whether the author was machine or human? More

  • Computer simulation suggests mutant strains of COVID-19 emerged in response to human behavior

    Using artificial intelligence technology and mathematical modeling, a research group led by Nagoya University has revealed that human behavior, such as lockdowns and isolation measures, affects the evolution of new strains of COVID-19. SARS-CoV-2, the virus that causes COVID-19, evolved to become more transmissible earlier in its infectious cycle. The researchers’ findings, published in Nature Communications, provide new insights into the relationship between how people behave and disease-causing agents.
    As with any other living organism, viruses evolve over time. Those with survival advantages become dominant in the gene pool. Many environmental factors influence this evolution, including human behavior. By isolating sick people and using lockdowns to control outbreaks, humans may alter virus evolution in complicated ways. Predicting how these changes occur is vital to develop adaptive treatments and interventions.
    An important concept in this interaction is viral load, which refers to the amount or concentration of a virus present per ml of a bodily fluid. In SARS-CoV-2, a higher viral load in respiratory secretions increases the risk of transmission through droplets. Viral load relates to the potential to transmit a virus to others. For example, a virus like Ebola has an exceptionally high viral load, whereas the common cold has a low one. However, viruses must perform a careful balancing act, as increasing the maximum viral load can be advantageous, but an excessive viral load may cause individuals to become too sick to transmit the virus to others.
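    This balancing act is often formalized with within-host viral dynamics models. The sketch below uses a generic textbook target-cell-limited model, not the Nagoya group's fitted model, and every parameter value is an illustrative placeholder.

```python
# Illustrative target-cell-limited model of within-host viral load dynamics.
# Generic textbook model, not the Nagoya group's fitted model; all parameter
# values are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

def viral_dynamics(t, y, beta, delta, p, c):
    T, I, V = y                      # target cells, infected cells, viral load
    dT = -beta * T * V               # infection of target cells
    dI = beta * T * V - delta * I    # infected-cell turnover
    dV = p * I - c * V               # virion production and clearance
    return [dT, dI, dV]

params = (1e-7, 1.0, 100.0, 10.0)    # beta, delta, p, c (illustrative)
sol = solve_ivp(viral_dynamics, (0, 20), [1e7, 0, 1e-2], args=params,
                dense_output=True)
t = np.linspace(0, 20, 200)
V = sol.sol(t)[2]
print(f"Peak viral load ~{V.max():.2e} at day {t[V.argmax()]:.1f}")
```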
    The research group led by Professor Shingo Iwami at the Nagoya University Graduate School of Science used mathematical modeling with an artificial intelligence component to investigate previously published clinical data and identify trends. They found that the SARS-CoV-2 variants that were most successful at spreading had an earlier and higher peak in viral load. However, as the virus evolved from the pre-Alpha to the Delta variants, its duration of infection grew shorter. The researchers also found that the decreased incubation period and the increased proportion of asymptomatic infections recorded as the virus mutated also affected virus evolution.
    The results showed a clear difference. As the virus evolved from the Wuhan strain to the Delta strain, they found a 5-fold increase in the maximum viral load and a 1.5-fold increase in the number of days before the viral load peaked.
    Iwami and his colleagues suggest that human behavioral changes in response to the virus, designed to limit transmission, increased the selection pressure on the virus. This caused SARS-CoV-2 to be transmitted mainly during the asymptomatic and presymptomatic periods, which occur earlier in its infectious cycle. As a result, the viral load peak moved earlier, allowing the virus to spread more effectively during the presymptomatic stage.
    When evaluating public health strategies in response to COVID-19 and any future potentially pandemic-causing pathogens, it is necessary to consider the impact of changes in human behavior on virus evolution patterns. “We expect that immune pressure from vaccinations and/or previous infections drives the evolution of SARS-CoV-2,” Iwami said. “However, our study found that human behavior can also contribute to the virus’s evolution in a more complicated manner, suggesting the need to reevaluate virus evolution.”
    Their study suggests the possibility that new strains of coronavirus evolved because of a complex interaction between clinical symptoms and human behavior. The group hopes that their research will speed up the establishment of testing regimes for adaptive treatment, effective screening, and isolation strategies. More

  • How we play together

    Intense focus pervades the EEG laboratory at the University of Konstanz on this day of experimentation. In separate labs, two participants, connected by screens, engage in the computer game Pacman. The burning question: Can strangers, unable to communicate directly, synchronize their efforts to conquer the digital realm together?
    Doctoral candidate Karl-Philipp Flösch is leading today’s experiment. He states: “Our research revolves around cooperative behaviour and the adoption of social roles.” However, understanding of the brain processes underlying cooperative behaviour is still in its infancy, presenting a central challenge for cognitive neuroscience: How can cooperative behaviour be brought into a highly structured EEG laboratory environment without making it feel artificial or boring for study participants?
    Pacman as a scientific “playground”
    The research team, led by Harald Schupp, Professor of Biological Psychology at the University of Konstanz, envisioned using the well-known computer game Pacman as a natural medium to study cooperative behaviour in the EEG laboratory. Conducting the study as part of the Cluster of Excellence Centre for the Advanced Study of Collective Behaviour, they recently published their findings in Psychophysiology.
    “Pacman is a cultural icon. Many have navigated the voracious Pacman through mazes in their youth, aiming to devour fruits and outsmart hostile ghosts,” reminisces Karl-Philipp Flösch. Collaborating with colleagues, co-author Tobias Flaisch adapted the game. In the EEG version, two players instead of one must collaboratively guide Pacman to the goal. Flaisch explains: “Success hinges on cooperative behaviour, as players must seamlessly work together.”
    However, the researchers built in a special hurdle: the labyrinth’s path is concealed, and only one of the two players can see where Pacman must go next. Flösch elaborates: “The active player can communicate the direction to the partner, but only indirectly, using pre-agreed symbols communicated solely through the computer screen.” A player who does not remember quickly enough that a crescent moon on the screen means Pacman should move right, and that only the banana key actually makes Pacman move right, will make mistakes. “From the perspective of classical psychological research, the game combines various skills inherent in natural social situations,” notes Harald Schupp.
    EEG measures event-related potentials
    During each game, the players’ brain reactions were measured using EEG. Calculating event-related potentials provides a detailed view of the effects elicited by the different game roles with millisecond-level temporal precision. The team hypothesized that game role significantly influences brain reactions, and therefore examined the P3 component, a well-studied brain response that shows a stronger deflection in the presence of significant, task-relevant stimuli. The results confirmed their assumption: “The P3 was increased not only when the symbol indicated the next move’s direction but also when observing whether the game partner selected the correct symbol,” says Flösch. The team concludes that the role we take on during cooperation determines, situation by situation, the informational value of environmental stimuli. EEG measurements allow the brain processes involved to be mapped dynamically.
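    For readers unfamiliar with the method, an event-related potential is simply the average of many EEG epochs time-locked to a stimulus, which cancels out activity that is not stimulus-locked. The sketch below demonstrates the idea on synthetic data with a fabricated P3-like bump; it is not the Konstanz team's analysis pipeline.

```python
# Generic sketch of computing an event-related potential (ERP) by averaging
# stimulus-locked EEG epochs; synthetic data, not the Konstanz pipeline.
import numpy as np

fs = 500                                   # sampling rate in Hz
n_trials, epoch_len = 80, int(0.8 * fs)    # 80 trials, 800 ms epochs
rng = np.random.default_rng(0)

t = np.arange(epoch_len) / fs
p3 = 5e-6 * np.exp(-((t - 0.35) ** 2) / (2 * 0.05 ** 2))  # synthetic P3 near 350 ms
epochs = p3 + 20e-6 * rng.standard_normal((n_trials, epoch_len))

erp = epochs.mean(axis=0)                  # averaging cancels non-time-locked noise
peak_ms = 1000 * t[np.abs(erp).argmax()]
print(f"ERP peak at ~{peak_ms:.0f} ms post-stimulus")
```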
    “Cooperative role adoption structures our entire society,” summarizes Schupp, providing context for the study. “An individual achieves little alone, but collectively, humanity even reaches the moon. Our technological society hinges on cooperative behavior,” says Flösch, adding that children take on individual roles early in life, thereby learning the art of complex cooperation. Consequently, this role adoption occurs nearly effortlessly and automatically for us every day. “Our brains are practically ‘built’ for it, as evidenced by the results of our study.” More

  • Long in the Bluetooth: Scientists develop a more efficient way to transmit data between our devices

    University of Sussex researchers have developed a more energy-efficient alternative to transmit data that could potentially replace Bluetooth in mobile phones and other tech devices. With more and more of us owning smart phones and wearable tech, researchers at the University of Sussex have found a more efficient way of connecting our devices and improving battery life. Applied to wearable devices, it could even see us unlocking doors by touch or exchanging phone numbers by shaking hands.
    Professor Robert Prance and Professor Daniel Roggen, of the University of Sussex, have developed the use of electric waves, rather than electromagnetic waves, for a low-power way to transmit data at close range, while maintaining the high throughput needed for multimedia applications.
    Bluetooth, Wi-Fi, and 5G currently rely on electromagnetic modulation, a form of wireless technology developed over 125 years ago. In the late 19th century, the focus was on transmitting data over long distances using electromagnetic waves. By contrast, electric field modulation uses short-range electric waves and consumes much less power than Bluetooth.
    As we tend to be in close proximity to our devices, electric field modulation offers a proven, more efficient method of connecting our devices, enabling longer lasting battery life when streaming music to headphones, taking calls, using fitness trackers, or interacting with smart home tech.
    The development could advance how we use tech in our day to day lives and evolve a wide range of futuristic applications too. For example, a bracelet using this technology could enable phone numbers to be exchanged simply by shaking hands or a door could be unlocked just by touching the handle.
    Daniel Roggen, Professor of Engineering and Design at the University of Sussex, explains:
    “We no longer need to rely on electromagnetic modulation, which is inherently battery hungry. We can improve the battery life of wearable technology and home assistants, for example, by using electric field modulation instead of Bluetooth. This solution will not only make our lives much more efficient, but it also opens novel opportunities to interact with devices in smart homes.
    “The technology is also low cost, meaning it could be rolled out to society quickly and easily. If this were mass produced, the solution can be miniaturised to a single chip and cost just a few pence per device, meaning that it could be used in all devices in the not-too-distant future.”
    The University of Sussex researchers are now seeking industrial partnerships to help further miniaturize the technology for personal devices. More

  • AI can ‘lie and BS’ like its maker, but still not intelligent like humans

    The emergence of artificial intelligence has caused differing reactions from tech leaders, politicians and the public. While some excitedly tout AI technology such as ChatGPT as an advantageous tool with the potential to transform society, others are alarmed that any tool with the word “intelligent” in its name also has the potential to overtake humankind.
    The University of Cincinnati’s Anthony Chemero, a professor of philosophy and psychology in the UC College of Arts and Sciences, contends that our understanding of AI is muddled by linguistics: while AI may be intelligent in a sense, it cannot be intelligent in the way that humans are, even though “it can lie and BS like its maker.”
    According to our everyday use of the word, AI is definitely intelligent, but intelligent computers have existed for years, Chemero explains in a paper he co-authored in the journal Nature Human Behaviour. To begin, the paper states that ChatGPT and other AI systems are large language models (LLMs), trained on massive amounts of data mined from the internet, much of which shares the biases of the people who post the data.
    “LLMs generate impressive text, but often make things up whole cloth,” he states. “They learn to produce grammatical sentences, but require much, much more training than humans get. They don’t actually know what the things they say mean,” he says. “LLMs differ from human cognition because they are not embodied.”
    The people who made LLMs call it “hallucinating” when the models make things up, although Chemero says “it would be better to call it ‘bullsh*tting,’” because LLMs just make sentences by repeatedly adding the most statistically likely next word — and they don’t know or care whether what they say is true.
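    That word-by-word procedure is known as greedy decoding. The following minimal sketch makes the point concrete with a small open model (chosen purely for illustration): at every step, the single most probable token is appended, with no check against reality.

```python
# Minimal sketch of greedy decoding: repeatedly appending the single most
# probable next token. The small model here is chosen only for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The patient was admitted because", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits[:, -1, :]           # scores for every candidate next token
        next_id = logits.argmax(dim=-1, keepdim=True)  # most likely token, no truth check
        ids = torch.cat([ids, next_id], dim=-1)

print(tokenizer.decode(ids[0]))
```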
    And with a little prodding, he says, one can get an AI tool to say “nasty things that are racist, sexist and otherwise biased.”
    The intent of Chemero’s paper is to stress that LLMs are not intelligent in the way humans are intelligent, because humans are embodied: living beings who are always surrounded by other humans and by material and cultural environments.
    “This makes us care about our own survival and the world we live in,” he says, noting that LLMs aren’t really in the world and don’t care about anything.
    The main takeaway is that LLMs are not intelligent in the way that humans are because they “don’t give a damn,” Chemero says, adding “Things matter to us. We are committed to our survival. We care about the world we live in.” More

  • Creativity in the age of generative AI: A new era of creative partnerships

    Recent advancements in generative artificial intelligence (AI) have showcased its potential in a wide range of creative activities, such as producing works of art, composing symphonies, and even drafting legal texts or slide presentations. These developments have raised concerns that AI will outperform humans in creative tasks and make knowledge workers redundant. Such concerns were most recently underlined by a Fortune article entitled “Elon Musk says AI will create a future where ‘no job is needed’: ‘The AI will be able to do everything’.”
    In a new paper in a Nature Human Behaviour special issue on AI, researcher Janet Rafner of the Aarhus Institute of Advanced Studies and the Center for Hybrid Intelligence at Aarhus University and Prof. Jacob Sherson, Director of the Center for Hybrid Intelligence, together with international collaborators, discuss the research and societal implications of creativity and AI.
    The team of researchers argues that we should direct our attention to understanding and nurturing co-creativity, the interaction between humans and machines, toward what is termed ‘human-centered AI’ and ‘hybrid intelligence.’ In this way, we will be able to develop interfaces that ensure both a high degree of automation through AI and human control, supporting a relationship in which each optimally empowers the other.
    Rafner comments: To date, most studies on human-AI co-creativity come from the field of human-computer interaction and focus on the abilities of the AI and on interaction design and dynamics. While these advances are key for understanding the dynamics between humans and algorithms and human attitudes towards the co-creative process and product, there is an urgent need to enrich these applications with the insights about creativity obtained over the past decades in the psychological sciences.
    “Right now, we need to move the conversation away from questions like Can AI be creative? One reason for this is that defining creativity is not cut and dry. When investigating human only, machine only, and human-AI co-creativity, we need to consider the type and level of creativity under question, from everyday creative activities (e.g. making new recipes, artwork or music) that are perhaps more amenable to machine automatization to paradigm-shifting contributions that may require higher-level human intervention. Additionally, it is much more meaningful to consider nuanced questions like, What are the similarities and differences in human cognition, behavior, motivation and self-efficacy between human-AI co-creativity and human creativity?” explains Rafner.
    Currently, we do not have sufficient knowledge of co-creativity between humans and machines, as the delineation between human and AI contributions (and processes) is not always clear. Looking ahead, researchers should balance predictive accuracy with theoretical understanding (i.e., explainability), towards the goal of developing intelligent systems that both measure and enhance human creativity. When designing co-creative systems such as virtual assistants, it will be essential to balance psychometric rigor with ecological validity. That is, co-creativity tasks should combine precise psychological measurement with state-of-the-art intuitive and engaging interface design.
    Interdisciplinary collaborations are needed
    The challenge of understanding and properly developing human-AI co-creative systems is not to be faced by a single discipline. Business and management scholars should be included to ensure that tasks sufficiently capture real-world professional challenges and to understand the implications of co-creativity for the future of work at macro and micro organizational scales, such as creativity in team dynamics with blended teams of humans and AI. Linguists and learning scientists are needed to help us understand the impact and nuances of prompt engineering in text-to-x systems. Developmental psychologists will have to study the impact on human learning processes.

    Ethical and meaningful developments
    Keeping humans closely in the loop when working with and developing AI is not only seen as more ethical; in most cases, it is also the most efficient long-term choice, the team of researchers argues.
    Beyond this, ethics and legal scholars will have to consider the costs and benefits of co-creativity in terms of intellectual property rights, human sense of purpose, and environmental impact. More

  • Study reveals bias in AI tools when diagnosing women’s health issue

    Machine learning algorithms designed to diagnose a common infection that affects women showed a diagnostic bias among ethnic groups, University of Florida researchers found.
    While artificial intelligence tools offer great potential for improving health care delivery, practitioners and scientists warn of their risk for perpetuating racial inequities. Published Friday in the Nature journal npj Digital Medicine, this is the first paper to evaluate the fairness of these tools in connection with a women’s health issue.
    “Machine learning can be a great tool in medical diagnostics, but we found it can show bias toward different ethnic groups,” said Ruogu Fang, an associate professor in the J. Crayton Pruitt Family Department of Biomedical Engineering and the study’s author. “This is alarming for women’s health as there already are existing disparities that vary by ethnicity.”
    The researchers evaluated the fairness of machine learning in diagnosing bacterial vaginosis, or BV, a common condition affecting women of reproductive age, which has clear diagnostic differences among ethnic groups.
    Fang and co-corresponding author Ivana Parker, both faculty members in the Herbert Wertheim College of Engineering, pulled data from 400 women, comprising 100 from each of the ethnic groups represented — white, Black, Asian, and Hispanic.
    In investigating the ability of four machine learning models to predict BV in women with no symptoms, the researchers found that accuracy varied among ethnicities. Hispanic women had the most false-positive diagnoses, and Asian women received the most false-negative results.
    “The models performed highest for white women and lowest for Asian women,” said Parker, an assistant professor of bioengineering. “This tells us machine learning methods are not treating ethnic groups equally well.”
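    Disparities of this kind are typically quantified as per-group false-positive and false-negative rates. The sketch below shows that computation on a tiny fabricated dataset; it does not reproduce the study's data or models.

```python
# Sketch of the fairness check described above: false-positive and
# false-negative rates computed per ethnic group. All data here are fabricated.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 0, 1, 0, 0, 1])   # fabricated BV labels
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 1])   # fabricated model predictions
groups = np.array(["White", "Black", "Asian", "Hispanic"] * 2)

for g in np.unique(groups):
    mask = groups == g
    tn, fp, fn, tp = confusion_matrix(
        y_true[mask], y_pred[mask], labels=[0, 1]).ravel()
    fpr = fp / (fp + tn) if (fp + tn) else float("nan")  # false-positive rate
    fnr = fn / (fn + tp) if (fn + tp) else float("nan")  # false-negative rate
    print(f"{g}: FPR={fpr:.2f}, FNR={fnr:.2f}")
```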
    Parker said that while they were interested in understanding how AI tools predict disease for specific ethnicities, their study also helps medical scientists understand the factors associated with bacteria in women of varying ethnic backgrounds, which can lead to improved treatments.

    BV, one of the most common vaginal infections, can cause discomfort and pain, and happens when natural bacteria levels are out of balance. While there are symptoms associated with BV, many people have no symptoms, making it difficult to diagnose.
    It doesn’t often cause complications, but in some cases, BV can increase the risk of sexually transmitted infections, miscarriage, and premature births.
    The researchers said their findings demonstrate the need for improved methods for building the AI tools to mitigate health care bias. More

  • Personalized cancer medicine: Humans make better treatment decisions than AI

    Treating cancer is becoming increasingly complex, but also offers more and more possibilities. After all, the better a tumor’s biology and genetic features are understood, the more treatment approaches there are. To be able to offer patients personalized therapies tailored to their disease, laborious and time-consuming analysis and interpretation of various data is required. Researchers at Charité — Universitätsmedizin Berlin and Humboldt-Universität zu Berlin have now studied whether generative artificial intelligence (AI) tools such as ChatGPT can help with this step. This is one of many projects at Charité analyzing the opportunities unlocked by AI in patient care.
    If the body can no longer repair certain genetic mutations itself, cells begin to grow unchecked, producing a tumor. The crucial factor in this phenomenon is an imbalance of growth-inducing and growth-inhibiting factors, which can result from changes in oncogenes — genes with the potential to cause cancer — for example. Precision oncology, a specialized field of personalized medicine, leverages this knowledge by using specific treatments, such as low-molecular-weight inhibitors and antibodies, to target and disable hyperactive oncogenes.
    The first step in identifying which genetic mutations are potential targets for treatment is to analyze the genetic makeup of the tumor tissue. The molecular variants of the tumor DNA that are necessary for precision diagnosis and treatment are determined. Then the doctors use this information to craft individual treatment recommendations. In especially complex cases, this requires knowledge from various fields of medicine. At Charité, this is when the “molecular tumor board” (MTB) meets: Experts from the fields of pathology, molecular pathology, oncology, human genetics, and bioinformatics work together to analyze which treatments seem most promising based on the latest studies. It is a very involved process, ultimately culminating in a personalized treatment recommendation.
    Can artificial intelligence help with treatment decisions?
    Dr. Damian Rieke, a doctor at Charité, Prof. Ulf Leser and Xing David Wang of Humboldt-Universität zu Berlin, and Dr. Manuela Benary, a bioinformatics specialist at Charité, wondered whether artificial intelligence might be able to help at this juncture. In a study just recently published in the journal JAMA Network Open, they worked with other researchers to examine the possibilities and limitations of large language models such as ChatGPT in automatically scanning scientific literature with an eye to selecting personalized treatments.
    “We prompted the models to identify personalized treatment options for fictitious cancer patients and then compared the results with the recommendations made by experts,” Rieke explains. His conclusion: “AI models were able to identify personalized treatment options in principle — but they weren’t even close to the abilities of human experts.”
    The team created ten molecular tumor profiles of fictitious patients for the experiment. A specialist physician and four large language models were then tasked with identifying a personalized treatment option for each. These results were presented to the members of the MTB for assessment, without revealing which recommendation came from which source.
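    To make the setup concrete, a query of this general kind can be sent to a large language model through an API. The sketch below is purely illustrative: the prompt, model name, and tumor profile are placeholders, not the Charité study's actual protocol.

```python
# Illustrative sketch of querying a large language model for treatment options
# from a molecular tumor profile. Prompt, model name, and profile are all
# placeholders; this is not the Charité study's protocol.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

profile = "Fictitious patient: lung adenocarcinoma, EGFR exon 19 deletion, PD-L1 5%."
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model choice
    messages=[
        {"role": "system",
         "content": "You are assisting a molecular tumor board."},
        {"role": "user",
         "content": f"Suggest personalized treatment options for: {profile}"},
    ],
)
print(response.choices[0].message.content)
```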

    Improved AI models hold promise for future uses
    “There were some surprisingly good treatment options identified by AI in isolated cases,” Benary reports. “But large language models perform much worse than human experts.” Beyond that, data protection, privacy, and reproducibility pose particular challenges in relation to the use of artificial intelligence with real-world patients, she notes.
    Still, Rieke is fundamentally optimistic about the potential uses of AI in medicine: “In the study, we also showed that the performance of AI models is continuing to improve as the models advance. This could mean that AI can provide more support for even complex diagnostic and treatment processes in the future — as long as humans are the ones to check the results generated by AI and have the final say about treatment.”
    AI projects at Charité aim to improve patient care
    Prof. Felix Balzer, Director of the Institute of Medical Informatics, is also certain medicine will benefit from AI. In his role as Chief Medical Information Officer (CMIO) within IT, he is responsible for the digital transformation of patient care at Charité. “One special area of focus when it comes to greater efficiency in patient care is digitalization, which also means the use of automation and artificial intelligence,” Balzer explains.
    His institute is working on AI models to help with fall prevention in long-term care, for example. Other areas at Charité are also conducting extensive research on AI: The Charité Lab for Artificial Intelligence in Medicine is working to develop tools for AI-based prognosis following strokes, and the TEF-Health project, led by Prof. Petra Ritter of the Berlin Institute of Health at Charité (BIH), is working to facilitate the validation and certification of AI and robotics in medical devices. More