More stories

  • ChatGPT is an effective tool for planning field work, school trips and even holidays

    Researchers exploring ways to utilise ChatGPT for work say it could save organisations and individuals a lot of time and money when it comes to planning trips.
    A new study, published in Innovations in Education and Teaching International (IETI), has tested whether ChatGPT can be used to design university field studies. It found that the free-to-use AI model is not only an effective tool for planning educational trips around the world, but could also be used by other industries.
    The research, led by scientists from the University of Portsmouth and University of Plymouth, specifically focused on marine biology courses. It involved the creation of a brand new field course using ChatGPT, and the integration of the AI-planned activities into an existing university module.
    The team developed a comprehensive guide for using the chatbot, and successfully organised a single-day trip in the UK using the AI’s suggestion of a beach clean-up activity to raise awareness about marine pollution and its impact on marine ecosystems.
    They say the established workflow could also be easily adapted to support other projects and professions outside of education, including environmental impact studies, travel itineraries, and business trips.
    Dr Mark Tupper, from the University of Portsmouth’s School of Biological Sciences, said: “It’s well known that universities and schools across the UK are stretched thin when it comes to resources. We set out to find a way to utilise ChatGPT for planning field work, because of the considerable amount of effort that goes into organising these trips. There’s a lot to consider, including safety procedures, risks, and design logistics. This process can take several days, but we found ChatGPT effectively does most of the leg work in just a few hours. The simple framework we’ve created can be used across the whole education sector, not just by universities. With many facing budget constraints and staffing limitations, this could save a lot of time and money.”
    Chatbots like ChatGPT are trained on large amounts of data and use computational techniques to predict how to string words together in a meaningful way. They not only draw on a vast vocabulary and store of information, but also interpret words in context.

    Since OpenAI launched ChatGPT in November 2022, millions of users have used the technology to improve their personal lives and boost productivity. Some workers have used it to write papers, make music, develop code, and create lesson plans.
    “If you’re a school teacher and want to plan a class with 40 kids, our ChatGPT roadmap will be a game changer,” said Dr Reuben Shipway, Lecturer in Marine Biology at the University of Plymouth. “All a person needs to do is input some basic data, and the AI model will be able to design a course or trip based on their needs and requirements. It can competently handle various tasks, from setting learning objectives to outlining assessment criteria. For businesses, ChatGPT is like having a personal planning assistant at your fingertips. Imagine trips with itineraries that unfold effortlessly, or fieldwork logistics handled with the ease of conversation.”
    The paper notes that while the AI model is adaptable and user-friendly, it has limitations when it comes to field course planning, including risk assessments.
    Dr Ian Hendy, from the University of Portsmouth, explained: “We asked ChatGPT to identify the potential hazards of this course and assess the overall risk of this activity from low to high, and the results were mixed. In some instances, ChatGPT was able to identify hazards specific to the activity — like the increased risk of slipping on seaweed-covered rocks exposed at low tide — but in other instances, ChatGPT exaggerated threats. For example, we find the risk of students suffering from physical strain and fatigue from carrying bags of collected litter to be low. That’s why there still needs to be a human element in the planning stages, to iron out any issues. It’s also important that the individual sifting through the results understands the nuances of successful field courses so they can recognise these discrepancies.”
    The paper concludes with a series of recommendations for best practices in using ChatGPT for field course design, underscoring the need for thoughtful human input, logical prompt sequencing, critical evaluation, and adaptive management to refine course designs.
    Top tips to help potential users get the most out of ChatGPT (an illustrative sketch of this prompting workflow follows the list):
    – Get the ball rolling with ChatGPT: Ask what details it thrives on for crafting the perfect assignment plan. By understanding the key information it needs, you’ll be well-equipped to structure your prompts effectively and ensure ChatGPT provides tailored and insightful assistance.
    – Time Management Made Easy: Share your preferred schedule, and let ChatGPT handle the logistics. Whether you’re a back-to-back meetings person or prefer a more relaxed pace, ChatGPT creates an itinerary that suits your working style.
    – Flexible Contingency Plans: Anticipate the unexpected. ChatGPT can help you create contingency plans in case of unforeseen events, ensuring that the trip remains adaptable to changing circumstances without compromising the educational goals.
    – Cultural Etiquette Guidance: Familiarise yourself with local cultural norms and business etiquette. ChatGPT can provide tips on appropriate greetings, gift-giving customs, and other cultural considerations, ensuring smooth interactions with local business partners.
    – Become a proficient Prompt Engineer: There are many quality, low-cost courses in the field of ChatGPT prompt engineering, available from online learning platforms such as Udemy, Coursera, and LinkedIn Learning. Poor input leads to poor ChatGPT output, so improving your prompt engineering will always lead to better results.
    – Use your unique experiences to improve ChatGPT output: Remember that AI knowledge cannot replace personal experience, but AI can learn from your experiences and use them to improve its recommendations.
    – Remember, planning is a two-way street! Engage in feedback with ChatGPT. Don’t hesitate to tweak and refine the itinerary until it feels just right. It’s your trip, after all.
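    To make the workflow above concrete, here is a minimal sketch of how such a staged prompting sequence could be scripted against OpenAI’s chat API. The model name, prompt wording, and helper function are illustrative assumptions, not the framework published by the authors.

```python
# A minimal sketch (not the authors' published framework) of a staged prompting
# workflow for planning a one-day marine biology field trip.
# Assumes the `openai` Python package and an OPENAI_API_KEY environment variable;
# the model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def ask(messages):
    """Send the running conversation to the chat API and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    return response.choices[0].message.content

conversation = [
    {"role": "system", "content": "You are an assistant that plans university field courses."},
    # Step 1: ask the model what details it needs (the 'get the ball rolling' tip).
    {"role": "user", "content": "What details do you need from me to plan a one-day "
                                "marine biology field trip for 40 students?"},
]
conversation.append({"role": "assistant", "content": ask(conversation)})

# Step 2: supply the basic data and constraints, then request a full plan.
conversation.append({
    "role": "user",
    "content": (
        "Location: a rocky shore in the UK at low tide. Budget: minimal. "
        "Activity: a beach clean-up linked to marine pollution. "
        "Please draft learning objectives, a timed itinerary, assessment criteria, "
        "and a preliminary hazard list for a human reviewer to check."
    ),
})
print(ask(conversation))
```

    As the researchers stress, output like this is only a starting point: the hazard list in particular still needs review by someone who understands the nuances of successful field courses.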

  • AI-based app can help physicians find skin melanoma

    A mobile app that uses artificial intelligence (AI) to analyse images of suspected skin lesions can diagnose melanoma with very high precision. This is shown in a study led by researchers at Linköping University in Sweden, in which the app was tested in primary care. The results have been published in the British Journal of Dermatology.
    “Our study is the first in the world to test an AI-based mobile app for melanoma in primary care in this way. A great many studies have been done on previously collected images of skin lesions, and those studies broadly agree that AI is good at distinguishing dangerous lesions from harmless ones. We were quite surprised by the fact that no one had done a study on primary care patients,” says Magnus Falk, senior associate professor at the Department of Health, Medicine and Caring Sciences at Linköping University, specialist in general practice at Region Östergötland, who led the current study.
    Melanoma can be difficult to differentiate from other skin changes, even for experienced physicians. However, it is important to detect melanoma as early as possible, as it is a serious type of skin cancer.
    There is currently no established AI-based support for assessing skin lesions in Swedish healthcare.
    “Primary care physicians encounter many skin lesions every day and with limited resources need to make decisions about treatment in cases of suspected skin melanoma. This often results in an abundance of referrals to specialists or the removal of skin lesions, which in the majority of cases turn out to be harmless. We wanted to see if the AI support tool in the app could perform better than primary care physicians when it comes to identifying pigmented skin lesions as dangerous or not, in comparison with the final diagnosis,” says Panos Papachristou, researcher affiliated with Karolinska Institutet and specialist in general practice, main author of the study and co-founder of the company that developed the app.
    And the results are promising.
    “First of all, the app missed no melanoma. This disease is so dangerous that it’s essential not to miss it. But it’s almost equally important that the AI decision support tool could rule out many suspected skin lesions and determine that they were harmless,” says Magnus Falk.

    In the study, primary care physicians followed the usual procedure for diagnosing suspected skin tumours. If the physicians suspected melanoma, they either referred the patient to a dermatologist for diagnosis, or the skin lesion was cut away for tissue analysis and diagnosis.
    Only after the physician had decided how to handle the suspected melanoma did they use the AI-based app. This involved the physician taking a picture of the skin lesion with a mobile phone fitted with a magnifying lens attachment called a dermatoscope. The app analyses the image and provides guidance on whether or not the skin lesion appears to be melanoma.
    To find out how well the AI-based app worked as a decision support tool, the researchers compared the app’s response to the diagnoses made by the regular diagnostic procedure.
    Of the more than 250 skin lesions examined, physicians found 11 melanomas and 10 precursors of cancer, known as in situ melanoma. The app found all the melanomas, and missed only one precursor. In cases where the app responded that a suspected lesion was not a melanoma, including in situ melanoma, there was a 99.5 percent probability that this was correct.
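    As a rough check on what those figures imply, the short sketch below recomputes sensitivity and negative predictive value from counts consistent with the article; the exact number of “not melanoma” responses is an assumption for illustration, not a figure taken from the paper.

```python
# Illustrative only: reconstructs approximate performance figures from the counts
# reported in the article (11 melanomas, 10 in situ melanomas, ~250 lesions,
# one missed precursor, 99.5% correct negative answers). The exact split of
# benign lesions is an assumption, not data from the paper.
true_positives = 20              # 11 melanomas + 10 in situ, minus the 1 missed precursor
false_negatives = 1              # the single missed in situ melanoma
negatives_called_by_app = 200    # assumed count of "not melanoma" responses

sensitivity = true_positives / (true_positives + false_negatives)
npv = (negatives_called_by_app - false_negatives) / negatives_called_by_app

print(f"Sensitivity: {sensitivity:.1%}")               # ~95% across melanoma + in situ
print(f"Negative predictive value: {npv:.1%}")         # 99.5%, matching the article
```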
    “It seems that this method could be useful. But in this study, physicians weren’t allowed to let their decision be influenced by the app’s response, so we don’t know what happens in practice if you use an AI-based decision support tool. So even if this is a very positive result, there is uncertainty and we need to continue to evaluate the usefulness of this tool with scientific studies,” says Magnus Falk.
    The researchers now plan to proceed with a large follow-up primary care study in several countries, where use of the app as an active decision support tool will be compared to not using it at all.
    The study was funded with support from Region Östergötland and the Analytic Imaging Diagnostics Arena, AIDA, in Linköping, which is funded by the strategic innovation programme Medtech4Health.

  • AI ethics are ignoring children, say researchers

    Researchers from the Oxford Martin Programme on Ethical Web and Data Architectures (EWADA), University of Oxford, have called for a more considered approach when embedding ethical principles in the development and governance of AI for children.
    In a perspective paper published today in Nature Machine Intelligence, the authors highlight that although there is a growing consensus around what high-level AI ethical principles should look like, too little is known about how to effectively apply them in practice for children. The study mapped the global landscape of existing ethics guidelines for AI and identified four main challenges in adapting such principles for children’s benefit:
    – A lack of consideration for the developmental side of childhood, especially the complex and individual needs of children, age ranges, development stages, backgrounds, and characters.
    – Minimal consideration for the role of guardians (e.g. parents) in childhood. For example, parents are often portrayed as having superior experience to children, when the digital world may need to reflect on this traditional role of parents.
    – Too few child-centred evaluations that consider children’s best interests and rights. Quantitative assessments are the norm when assessing issues like safety and safeguarding in AI systems, but these tend to fall short when considering factors like the developmental needs and long-term wellbeing of children.
    – The absence of a coordinated, cross-sectoral, and cross-disciplinary approach to formulating ethical AI principles for children that is necessary to effect impactful practice changes.
    The researchers also drew on real-life examples and experiences when identifying these challenges. They found that although AI is being used to keep children safe, typically by identifying inappropriate content online, there has been a lack of initiative to incorporate safeguarding principles into AI innovations, including those supported by Large Language Models (LLMs). Such integration is crucial to prevent children from being exposed to biased content based on factors such as ethnicity, or to harmful content, especially for vulnerable groups, and the evaluation of such methods should go beyond mere quantitative metrics such as accuracy or precision. Through their partnership with the University of Bristol, the researchers are also designing tools to help children with ADHD, carefully considering their needs and designing interfaces to support their sharing of data with AI-related algorithms in ways that are aligned with their daily routines, digital literacy skills, and need for simple yet effective interfaces.
    In response to these challenges, the researchers recommended:
    – increasing the involvement of key stakeholders, including parents and guardians, AI developers, and children themselves;
    – providing more direct support for industry designers and developers of AI systems, especially by involving them more in the implementation of ethical AI principles;
    – establishing legal and professional accountability mechanisms that are child-centred; and
    – increasing multidisciplinary collaboration around a child-centred approach involving stakeholders in areas such as human-computer interaction, design, algorithms, policy guidance, data protection law, and education.
    Dr Jun Zhao, Oxford Martin Fellow, Senior Researcher at the University’s Department of Computer Science, and lead author of the paper, said:
    “The incorporation of AI in children’s lives and our society is inevitable. While there are increased debates about who should ensure technologies are responsible and ethical, a substantial proportion of such burdens falls on parents and children to navigate this complex landscape.”
    “This perspective article examined existing global AI ethics principles and identified crucial gaps and future development directions. These insights are critical for guiding our industries and policymakers. We hope this research will serve as a significant starting point for cross-sectoral collaborations in creating ethical AI technologies for children and global policy development in this space.”
    The authors outlined several ethical AI principles that would especially need to be considered for children. They include ensuring fair, equal, and inclusive digital access, delivering transparency and accountability when developing AI systems, safeguarding privacy and preventing manipulation and exploitation, guaranteeing the safety of children, and creating age-appropriate systems while actively involving children in their development.
    Professor Sir Nigel Shadbolt, co-author, Director of the EWADA Programme, Principal of Jesus College Oxford and a Professor of Computing Science at the Department of Computer Science, said:
    “In an era of AI-powered algorithms, children deserve systems that meet their social, emotional, and cognitive needs. Our AI systems must be ethical and respectful at all stages of development, but this is especially critical during childhood.”

  • Powerful new AI can predict people’s attitudes to vaccines

    A powerful new tool in artificial intelligence is able to predict whether someone is willing to be vaccinated against COVID-19.
    The predictive system uses a small set of data from demographics and personal judgments such as aversion to risk or loss.
    The findings frame a new technology that could have broad applications for predicting mental health and result in more effective public health campaigns.
    A team led by researchers at the University of Cincinnati and Northwestern University created a predictive model by combining machine learning with an integrated system of mathematical equations describing the lawful patterns in reward and aversion judgment.
    “We used a small number of variables and minimal computational resources to make predictions,” said lead author Nicole Vike, a senior research associate in UC’s College of Engineering and Applied Science.
    “COVID-19 is unlikely to be the last pandemic we see in the next decades. Having a new form of AI for prediction in public health provides a valuable tool that could help prepare hospitals for predicting vaccination rates and consequential infection rates.”
    The study was published in the Journal of Medical Internet Research Public Health and Surveillance.

    Researchers surveyed 3,476 adults across the United States in 2021 during the COVID-19 pandemic. At the time of the survey, the first vaccines had been available for more than a year.
    Respondents provided information such as where they live, income, highest education level completed, ethnicity and access to the internet. The respondents’ demographics mirrored those of the United States based on U.S. Census Bureau figures.
    Participants were asked if they had received either of the available COVID-19 vaccines. About 73% of respondents said they were vaccinated, slightly more than the 70% of the nation’s population that had been vaccinated in 2021.
    Further, they were asked if they routinely followed four recommendations designed to prevent the spread of the virus: wearing a mask, social distancing, washing their hands and not gathering in large groups.
    Participants were asked to rate how much they liked or disliked a randomly sequenced set of 48 pictures on a seven-point scale from −3 to 3. The pictures were from the International Affective Picture Set, a large set of emotionally evocative color photographs, in six categories: sports, disasters, cute animals, aggressive animals, nature and food.
    Vike said the goal of this exercise is to quantify mathematical features of people’s judgments as they observe mildly emotional stimuli. Measures from this task include concepts familiar to behavioral economists — or even people who gamble — such as aversion to risk (the point at which someone is willing to accept potential loss for a potential reward) and aversion to loss (the willingness to avoid risk by, for example, obtaining insurance).

    “The framework by which we judge what is rewarding or aversive is fundamental to how we make medical decisions,” said co-senior author Hans Breiter, a professor of computer science at UC. “A seminal paper in 2017 hypothesized the existence of a standard model of the mind. Using a small set of variables from mathematical psychology to predict medical behavior would support such a model. The work of this collaborative team has provided such support and argues that the mind is a set of equations akin to what is used in particle physics.”
    The judgment variables and demographics were compared between respondents who were vaccinated and those who were not. Three machine learning approaches were used to test how well the respondents’ judgment, demographics and attitudes toward COVID-19 precautions predicted whether they would get the vaccine.
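    For readers curious what such an analysis can look like in code, the sketch below trains a simple classifier on a handful of demographic and judgment variables. The synthetic data, feature names, and choice of a random forest are hypothetical placeholders; the study’s actual three machine learning approaches are not reproduced here.

```python
# Illustrative sketch only, not the authors' actual pipeline: trains a simple
# classifier on a few demographic and judgment variables. The synthetic data
# below stands in for real survey responses.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 3476  # matches the number of survey respondents reported in the article
df = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "income_bracket": rng.integers(1, 6, n),
    "education_level": rng.integers(1, 5, n),
    "risk_aversion": rng.normal(size=n),
    "loss_aversion": rng.normal(size=n),
})
# Synthetic label: vaccination loosely associated with the judgment variables.
df["vaccinated"] = (df["risk_aversion"] + rng.normal(scale=1.0, size=n) > 0).astype(int)

X, y = df.drop(columns="vaccinated"), df["vaccinated"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Report precision and recall on held-out respondents.
print(classification_report(y_test, model.predict(X_test)))
```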
    The study demonstrates that artificial intelligence can make accurate predictions about human attitudes with surprisingly little data or reliance on expensive and time-consuming clinical assessments.
    “We found that a small set of demographic variables and 15 judgment variables predict vaccine uptake with moderate to high accuracy and high precision,” the study said. “In an age of big-data machine learning approaches, the current work provides an argument for using fewer but more interpretable variables.”
    “The study is anti-big-data,” said co-senior author Aggelos Katsaggelos, an endowed professor of electrical engineering and computer science at Northwestern University. “It can work very simply. It doesn’t need super-computation, it’s inexpensive and can be applied by anyone who has a smartphone. We refer to it as computational cognition AI. It is likely you will be seeing other applications regarding alterations in judgment in the very near future.”

  • Bendable energy storage materials by cool science

    Imagine being able to wear your smartphone on your wrist, not as a watch, but literally as a flexible band that wraps around your arm. How about clothes that charge your gadgets just by wearing them?
    Recently, a collaborative team led by Professor Jin Kon Kim and Dr. Keon-Woo Kim of Pohang University of Science and Technology (POSTECH), Professor Taesung Kim and M.S./Ph.D. student Hyunho Seok of Sungkyunkwan University (SKKU), and Professor Hong Chul Moon of University of Seoul (UOS) has brought us a step closer to achieving this reality. This research work was published in Advanced Materials.
    Mesoporous metal oxides (MMOs) are characterized by pores ranging from 2 to 50 nanometers (nm) in size. Due to their extensive surface area, MMOs have various applications, such as high-performance energy storage and efficient catalysis, semiconductors, and sensors. However, integrating MMOs into wearable and flexible devices remains a great challenge, because plastic substrates cannot maintain their integrity at the elevated temperatures (350°C or above) at which MMOs are typically synthesized.
    The research team tackled this problem by using the synergistic effect of heat and plasma to synthesize various MMOs, including vanadium oxide (V2O5), a renowned high-performance energy storage material, as well as V6O13, TiO2, Nb2O5, and WO3, on flexible materials at much lower temperatures (150–200 °C). Highly reactive chemical species in the plasma supply the energy that would otherwise have to come from high temperature. The fabricated devices could be bent thousands of times without losing their energy storage performance.
    Professor Jin Kon Kim, the leading researcher, expressed his opinion, stating: “We’re on the brink of a revolution in wearable tech.”
    “Our breakthrough could lead to gadgets that are not only more flexible but also much more adaptable to our daily needs.”
    This research was supported by the National Creative Initiative Research Program, the Basic Research in Science & Engineering Program, and the Nano & Material Technology Development Program.

  • Brain-inspired wireless system to gather data from salt-sized sensors

    Tiny chips may equal a big breakthrough for a team of scientists led by Brown University engineers.
    Writing in Nature Electronics, the research team describes a novel approach for a wireless communication network that can efficiently transmit, receive and decode data from thousands of microelectronic chips that are each no larger than a grain of salt.
    The sensor network is designed so the chips can be implanted into the body or integrated into wearable devices. Each submillimeter-sized silicon sensor mimics how neurons in the brain communicate through spikes of electrical activity. The sensors detect specific events as spikes and then transmit that data wirelessly in real time using radio waves, saving both energy and bandwidth.
    “Our brain works in a very sparse way,” said Jihun Lee, a postdoctoral researcher at Brown and study lead author. “Neurons do not fire all the time. They compress data and fire sparsely so that they are very efficient. We are mimicking that structure here in our wireless telecommunication approach. The sensors would not be sending out data all the time — they’d just be sending relevant data as needed as short bursts of electrical spikes, and they would be able to do so independently of the other sensors and without coordinating with a central receiver. By doing this, we would manage to save a lot of energy and avoid flooding our central receiver hub with less meaningful data.”
    This radiofrequency transmission scheme also makes the system scalable and tackles a common problem with current sensor communication networks: they all need to be perfectly synced to work well.
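    As an intuition aid for this event-driven, unsynchronised scheme, here is a toy simulation of threshold-based “spike” reporting compared with continuous streaming; the signal and threshold are invented for illustration and do not model the team’s actual radiofrequency protocol.

```python
# Minimal illustration (not the Brown team's protocol): each simulated sensor
# reports only when its reading changes by more than a threshold, mimicking
# sparse, spike-like transmission instead of continuous streaming.
import random

def spike_events(readings, threshold=0.5):
    """Yield (sample_index, value) only when the reading jumps past the threshold."""
    last_reported = readings[0]
    for i, value in enumerate(readings):
        if abs(value - last_reported) > threshold:
            yield i, value
            last_reported = value

random.seed(0)
# A slowly drifting signal with occasional abrupt events.
signal = [0.0]
for _ in range(999):
    step = random.gauss(0, 0.05) + (random.random() < 0.01) * 2.0
    signal.append(signal[-1] + step)

events = list(spike_events(signal))
print(f"Continuous streaming would send {len(signal)} samples;")
print(f"event-driven reporting sends only {len(events)} spikes.")
```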
    The researchers say the work marks a significant step forward in large-scale wireless sensor technology and may one day help shape how scientists collect and interpret information from these little silicon devices, especially since electronic sensors have become ubiquitous as a result of modern technology.
    “We live in a world of sensors,” said Arto Nurmikko, a professor in Brown’s School of Engineering and the study’s senior author. “They are all over the place. They’re certainly in our automobiles, they are in so many places of work and increasingly getting into our homes. The most demanding environment for these sensors will always be inside the human body.”
    That’s why the researchers believe the system can help lay the foundation for the next generation of implantable and wearable biomedical sensors. There is a growing need in medicine for microdevices that are efficient, unobtrusive and unnoticeable but that also operate as part of large ensembles to map physiological activity across an entire area of interest.

    “This is a milestone in terms of actually developing this type of spike-based wireless microsensor,” Lee said. “If we continue to use conventional methods, we cannot collect the high channel data these applications will require in these kinds of next-generation systems.”
    The events the sensors identify and transmit can be specific occurrences such as changes in the environment they are monitoring, including temperature fluctuations or the presence of certain substances.
    The sensors are able to use as little energy as they do because external transceivers supply wireless power to the sensors as they transmit their data — meaning they just need to be within range of the energy waves sent out by the transceiver to get a charge. This ability to operate without needing to be plugged into a power source or battery makes them convenient and versatile for use in many different situations.
    The team designed and simulated the complex electronics on a computer and has worked through several fabrication iterations to create the sensors. The work builds on previous research from Nurmikko’s lab at Brown that introduced a new kind of neural interface system called “neurograins.” This system used a coordinated network of tiny wireless sensors to record and stimulate brain activity.
    “These chips are pretty sophisticated as miniature microelectronic devices, and it took us a while to get here,” said Nurmikko, who is also affiliated with Brown’s Carney Institute for Brain Science. “The amount of work and effort that is required in customizing the several different functions in manipulating the electronic nature of these sensors — that being basically squeezed to a fraction of a millimeter space of silicon — is not trivial.”
    The researchers demonstrated the efficiency of their system as well as just how much it could potentially be scaled up. They tested the system using 78 sensors in the lab and found they were able to collect and send data with few errors, even when the sensors were transmitting at different times. Through simulations, they were able to show how to decode data collected from the brains of primates using about 8,000 hypothetically implanted sensors.
    The researchers say next steps include optimizing the system for reduced power consumption and exploring broader applications beyond neurotechnology.
    “The current work provides a methodology we can further build on,” Lee said.

  • Artificial nanofluidic synapses can store computational memory

    Memory, or the ability to store information in a readily accessible way, is an essential operation in computers and human brains. A key difference is that while brain information processing involves performing computations directly on stored data, computers shuttle data back and forth between a memory unit and a central processing unit (CPU). This inefficient separation (the von Neumann bottleneck) contributes to the rising energy cost of computers.
    Since the 1970s, researchers have been working on the concept of a memristor (memory resistor): an electronic component that can, like a synapse, both compute and store data. But Aleksandra Radenovic in the Laboratory of Nanoscale Biology (LBEN) in EPFL’s School of Engineering set her sights on something even more ambitious: a functional nanofluidic memristive device that relies on ions, rather than electrons and their oppositely charged counterparts (holes). Such an approach would more closely mimic the brain’s own — much more energy efficient — way of processing information.
    “Memristors have already been used to build electronic neural networks, but our goal is to build a nanofluidic neural network that takes advantage of changes in ion concentrations, similar to living organisms,” Radenovic says.
    “We have fabricated a new nanofluidic device for memory applications that is significantly more scalable and much more performant than previous attempts,” says LBEN postdoctoral researcher Théo Emmerich. “This has enabled us, for the very first time, to connect two such ‘artificial synapses’, paving the way for the design of brain-inspired liquid hardware.”
    The research has recently been published in Nature Electronics.
    Just add water
    Memristors can switch between two conductance states — on and off — through manipulation of an applied voltage. While electronic memristors rely on electrons and holes to process digital information, LBEN’s memristor can take advantage of a range of different ions. For their study, the researchers immersed their device in an electrolyte water solution containing potassium ions, but others could be used, including sodium and calcium.

    “We can tune the memory of our device by changing the ions we use, which affects how it switches from on to off, or how much memory it stores,” Emmerich explains.
    The device was fabricated on a chip at EPFL’s Center of MicroNanoTechnology by creating a nanopore at the center of a silicon nitride membrane. The researchers added palladium and graphite layers to create nano-channels for ions. As a current flows through the chip, the ions percolate through the channels and converge at the pore, where their pressure creates a blister between the chip surface and the graphite. As the graphite layer is forced up by the blister, the device becomes more conductive, switching its memory state to ‘on’. Since the graphite layer stays lifted, even without a current, the device ‘remembers’ its previous state. A negative voltage puts the layers back into contact, resetting the memory to the ‘off’ state.
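    To make the switching behaviour easier to picture, below is a toy model of a two-state, voltage-controlled memristive element with retention; the thresholds and conductance values are invented for illustration and are not taken from the paper.

```python
# Toy model only: a two-state memristive element that switches 'on' above a
# positive voltage threshold, resets 'off' below a negative threshold, and
# otherwise remembers its previous state (retention). All numbers are invented.
class ToyMemristor:
    ON_CONDUCTANCE = 1.0     # arbitrary units
    OFF_CONDUCTANCE = 0.05

    def __init__(self, set_threshold=0.4, reset_threshold=-0.4):
        self.set_threshold = set_threshold
        self.reset_threshold = reset_threshold
        self.state_on = False

    def apply_voltage(self, v):
        """Update the state for one voltage sample and return the conductance."""
        if v >= self.set_threshold:
            self.state_on = True     # blister forms, channel becomes conductive
        elif v <= self.reset_threshold:
            self.state_on = False    # layers pushed back into contact
        # Between thresholds the device keeps its previous state (memory).
        return self.ON_CONDUCTANCE if self.state_on else self.OFF_CONDUCTANCE

device = ToyMemristor()
for v in [0.0, 0.5, 0.0, 0.1, -0.5, 0.0]:
    print(f"V = {v:+.1f} V -> conductance {device.apply_voltage(v):.2f}")
```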
    “Ion channels in the brain undergo structural changes inside a synapse, so this also mimics biology,” says LBEN PhD student Yunfei Teng, who worked on fabricating the devices — dubbed highly asymmetric channels (HACs) in reference to the shape of the ion flow toward the central pores.
    LBEN PhD student Nathan Ronceray adds that the team’s observation of the HAC’s memory action in real time is also a novel achievement in the field. “Because we were dealing with a completely new memory phenomenon, we built a microscope to watch it in action.”
    By collaborating with Riccardo Chiesa and Edoardo Lopriore of the Laboratory of Nanoscale Electronics and Structures, led by Andras Kis, the researchers succeeded in connecting two HACs with an electrode to form a logic circuit based on ion flow. This achievement represents the first demonstration of digital logic operations based on synapse-like ionic devices. But the researchers aren’t stopping there: their next goal is to connect a network of HACs with water channels to create fully liquid circuits. In addition to providing an in-built cooling mechanism, the use of water would facilitate the development of bio-compatible devices with potential applications in brain-computer interfaces or neuromedicine.

  • Researchers develop deep learning model to predict breast cancer

    Researchers have developed a new, interpretable artificial intelligence (AI) model to predict 5-year breast cancer risk from mammograms, according to a new study published today in Radiology, a journal of the Radiological Society of North America (RSNA).
    One in 8 women, or approximately 13% of the female population in the U.S., will develop invasive breast cancer in their lifetime and 1 in 39 women (3%) will die from the disease, according to the American Cancer Society. Breast cancer screening with mammography, for many women, is the best way to find breast cancer early when treatment is most effective. Having regularly scheduled mammograms can significantly lower the risk of dying from breast cancer. However, it remains unclear how to precisely predict which women will develop breast cancer through screening alone.
    Mirai, a state-of-the-art, deep learning-based algorithm, has demonstrated proficiency as a tool to help predict breast cancer but, because little is known about its reasoning process, the algorithm has the potential for overreliance by radiologists and incorrect diagnoses.
    “Mirai is a black box — a very large and complex neural network, similar in construction to ChatGPT — and no one knew how it made its decisions,” said the study’s lead author, Jon Donnelly, B.S., a Ph.D. student in the Department of Computer Science at Duke University in Durham, North Carolina. “We developed an interpretable AI method that allows us to predict breast cancer from mammograms 1 to 5 years in advance. AsymMirai is much simpler and much easier to understand than Mirai.”
    For the study, Donnelly and colleagues in the Department of Computer Science and Department of Radiology compared their newly developed mammography-based deep learning model called AsymMirai to Mirai’s 1- to 5-year breast cancer risk predictions. AsymMirai was built on the “front end” deep learning portion of Mirai, while replacing the rest of that complicated method with an interpretable module: local bilateral dissimilarity, which looks at tissue differences between the left and right breasts.
    “Previously, differences between the left and right breast tissue were used only to help detect cancer, not to predict it in advance,” Donnelly said. “We discovered that Mirai uses comparisons between the left and right sides, which is how we were able to design a substantially simpler network that also performs comparisons between the sides.”
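    As a rough illustration of the bilateral-dissimilarity idea, the sketch below scores the difference between feature vectors extracted from left and right mammograms; the random stand-in features and the cosine-distance choice are assumptions for demonstration, not AsymMirai’s actual module.

```python
# Illustration only: scoring left/right asymmetry from precomputed feature
# vectors. How AsymMirai actually extracts and compares features differs;
# the cosine-distance choice here is an assumption for demonstration.
import numpy as np

def bilateral_dissimilarity(left_features: np.ndarray, right_features: np.ndarray) -> float:
    """Return a dissimilarity score (0 = identical, larger = more asymmetric)."""
    left = left_features / np.linalg.norm(left_features)
    right = right_features / np.linalg.norm(right_features)
    return float(1.0 - np.dot(left, right))

rng = np.random.default_rng(0)
left = rng.normal(size=128)                      # stand-in for deep features of the left breast
right = left + rng.normal(scale=0.1, size=128)   # right breast: similar tissue, small differences

print(f"Dissimilarity score: {bilateral_dissimilarity(left, right):.3f}")
```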
    For the study, the researchers compared 210,067 mammograms from 81,824 patients in the EMory BrEast imaging Dataset (EMBED) from January 2013 to December 2020 using both Mirai and AsymMirai models. The researchers found that their simplified deep learning model performed almost as well as the state-of-the-art Mirai for 1- to 5-year breast cancer risk prediction.
    The results also supported the clinical importance of breast asymmetry and, as a result, highlight the potential of bilateral dissimilarity as a future imaging marker for breast cancer risk.
    Since the reasoning behind AsymMirai’s predictions is easy to understand, it could be a valuable adjunct to human radiologists in breast cancer diagnoses and risk prediction, Donnelly said.
    “We can, with surprisingly high accuracy, predict whether a woman will develop cancer in the next 1 to 5 years based solely on localized differences between her left and right breast tissue,” he said. “This could have public impact because it could, in the not-too-distant future, affect how often women receive mammograms.”