More stories

  •

    Artificial Intelligence beats doctors in accurately assessing eye problems

    The clinical knowledge and reasoning skills of GPT-4 are approaching the level of specialist eye doctors, a study led by the University of Cambridge has found.
    GPT-4 — a ‘large language model’ — was tested against doctors at different stages in their careers, including unspecialised junior doctors, and trainee and expert eye doctors. Each was presented with a series of 87 patient scenarios involving a specific eye problem, and asked to give a diagnosis or advise on treatment by selecting from four options.
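The comparison boils down to multiple-choice accuracy over the 87 scenarios. As a minimal sketch, with an invented answer key and invented responses rather than the study's actual data:

```python
# Toy sketch of scoring the multiple-choice comparison described above.
# The answer key and all responses are invented examples, not the study's data:
# each respondent picks one of four options (A-D) per patient scenario.

answer_key = ["B", "D", "A", "C", "B"]  # correct option for 5 example scenarios

responses = {
    "GPT-4":         ["B", "D", "A", "C", "A"],
    "junior_doctor": ["B", "C", "A", "D", "A"],
    "expert":        ["B", "D", "A", "C", "B"],
}

def accuracy(answers, key):
    """Fraction of scenarios answered correctly."""
    return sum(a == k for a, k in zip(answers, key)) / len(key)

scores = {name: accuracy(ans, answer_key) for name, ans in responses.items()}
# With these invented answers: expert 1.0, GPT-4 0.8, junior_doctor 0.4
```

The study's headline finding is simply where GPT-4's tally landed on this kind of scale: above the unspecialised junior doctors, level with the trainees and experts, below the very best specialists.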
    GPT-4 scored significantly better in the test than unspecialised junior doctors, who are comparable to general practitioners in their level of specialist eye knowledge.
    GPT-4 gained similar scores to trainee and expert eye doctors — although the top performing doctors scored higher.
    The researchers say that large language models aren’t likely to replace healthcare professionals, but have the potential to improve healthcare as part of the clinical workflow.
    They say state-of-the-art large language models like GPT-4 could be useful for providing eye-related advice, diagnosis, and management suggestions in well-controlled contexts, like triaging patients, or where access to specialist healthcare professionals is limited.
    “We could realistically deploy AI in triaging patients with eye issues to decide which cases are emergencies that need to be seen by a specialist immediately, which can be seen by a GP, and which don’t need treatment,” said Dr Arun Thirunavukarasu, lead author of the study, which he carried out while a student at the University of Cambridge’s School of Clinical Medicine.

    He added: “The models could follow clear algorithms already in use, and we’ve found that GPT-4 is as good as expert clinicians at processing eye symptoms and signs to answer more complicated questions.
    “With further development, large language models could also advise GPs who are struggling to get prompt advice from eye doctors. People in the UK are waiting longer than ever for eye care.
    Large volumes of clinical text are needed to help fine-tune and develop these models, and work is ongoing around the world to facilitate this.
    The researchers say that their study is superior to similar, previous studies because they compared the abilities of AI to practicing doctors, rather than to sets of examination results.
    “Doctors aren’t revising for exams for their whole career. We wanted to see how AI fared when pitted against the on-the-spot knowledge and abilities of practicing doctors, to provide a fair comparison,” said Thirunavukarasu, who is now an Academic Foundation Doctor at Oxford University Hospitals NHS Foundation Trust.
    He added: “We also need to characterise the capabilities and limitations of commercially available models, as patients may already be using them — rather than the internet — for advice.”
    The test included questions about a huge range of eye problems, including extreme light sensitivity, decreased vision, lesions, itchy and painful eyes, taken from a textbook used to test trainee eye doctors. This textbook is not freely available on the internet, making it unlikely that its content was included in GPT-4’s training datasets.

    The results are published today in the journal PLOS Digital Health.
    “Even taking the future use of AI into account, I think doctors will continue to be in charge of patient care. The most important thing is to empower patients to decide whether they want computer systems to be involved or not. That will be an individual decision for each patient to make,” said Thirunavukarasu.
    GPT-4 and GPT-3.5 — or ‘Generative Pre-trained Transformers’ — are trained on datasets containing hundreds of billions of words from articles, books, and other internet sources. These are two examples of large language models; others in wide use include Pathways Language Model 2 (PaLM 2) and Large Language Model Meta AI 2 (LLaMA 2).
    The study also tested GPT-3.5, PaLM 2, and LLaMA with the same set of questions. GPT-4 gave more accurate responses than all of them.
    GPT-4 powers the online chatbot ChatGPT to provide bespoke responses to human queries. In recent months, ChatGPT has attracted significant attention in medicine for attaining passing level performance in medical school examinations, and providing more accurate and empathetic messages than human doctors in response to patient queries.
    The field of artificially intelligent large language models is moving very rapidly. Since the study was conducted, more advanced models have been released — which may be even closer to the level of expert eye doctors.

  •

    AI speeds up drug design for Parkinson’s by ten-fold

    Researchers have used artificial intelligence techniques to massively accelerate the search for Parkinson’s disease treatments.
    The researchers, from the University of Cambridge, designed and used an AI-based strategy to identify compounds that block the clumping, or aggregation, of alpha-synuclein, the protein that characterises Parkinson’s.
    The team used machine learning techniques to quickly screen a chemical library containing millions of entries, and identified five highly potent compounds for further investigation.
    Parkinson’s affects more than six million people worldwide, with that number projected to triple by 2040. No disease-modifying treatments for the condition are currently available. The process of screening large chemical libraries for drug candidates — which needs to happen well before potential treatments can be tested on patients — is enormously time-consuming and expensive, and often unsuccessful.
    Using machine learning, the researchers were able to speed up the initial screening process by ten-fold, and reduce the cost by a thousand-fold, which could mean that potential treatments for Parkinson’s reach patients much faster. The results are reported in the journal Nature Chemical Biology.
    Parkinson’s is the fastest-growing neurological condition worldwide. In the UK, one in 37 people alive today will be diagnosed with Parkinson’s in their lifetime. In addition to motor symptoms, Parkinson’s can also affect the gastrointestinal system, nervous system, sleeping patterns, mood and cognition, and can contribute to a reduced quality of life and significant disability.
    Proteins are responsible for important cell processes, but when people have Parkinson’s, these proteins go rogue and cause the death of nerve cells. When proteins misfold, they can form abnormal clusters called Lewy bodies, which build up within brain cells, stopping them from functioning properly.

    “One route to search for potential treatments for Parkinson’s requires the identification of small molecules that can inhibit the aggregation of alpha-synuclein, which is a protein closely associated with the disease,” said Professor Michele Vendruscolo from the Yusuf Hamied Department of Chemistry, who led the research. “But this is an extremely time-consuming process — just identifying a lead candidate for further testing can take months or even years.”
    While clinical trials for Parkinson’s are currently underway, no disease-modifying drug has been approved, reflecting the inability to directly target the molecular species that cause the disease.
    This has been a major obstacle in Parkinson’s research, because of the lack of methods to identify the correct molecular targets and engage with them. This technological gap has severely hampered the development of effective treatments.
    The Cambridge team developed a machine learning method in which chemical libraries containing millions of compounds are screened to identify small molecules that bind to the amyloid aggregates and block their proliferation.
    A small number of top-ranking compounds were then tested experimentally to select the most potent inhibitors of aggregation. The information gained from these experimental assays was fed back into the machine learning model in an iterative manner, so that after a few iterations, highly potent compounds were identified.
    “Instead of screening experimentally, we screen computationally,” said Vendruscolo, who is co-Director of the Centre for Misfolding Diseases. “By using the knowledge we gained from the initial screening with our machine learning model, we were able to train the model to identify the specific regions on these small molecules responsible for binding, then we can re-screen and find more potent molecules.”
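The loop Vendruscolo describes (screen computationally, assay the top hits, feed the results back into the model) is a form of active learning. The sketch below is a toy illustration only: the compounds, the stand-in "assay" function, and the simple nearest-neighbour surrogate are all invented, not the team's actual model or data.

```python
import random

# Toy active-learning loop: rank a large virtual library with a cheap
# surrogate model, "experimentally" test the top-ranked compounds, and
# feed the results back to sharpen the next round of ranking.
random.seed(0)

def make_compound():
    # A compound is just a random 4-dimensional feature vector here.
    return tuple(random.random() for _ in range(4))

library = [make_compound() for _ in range(1000)]

def assay(c):
    # Hidden "ground truth" potency, standing in for a real aggregation
    # assay; only revealed for compounds we choose to test.
    return 1.0 - abs(c[0] - 0.7) - abs(c[1] - 0.3)

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def surrogate(c, labelled, k=5):
    # Predict potency as the mean assay value of the k nearest tested compounds.
    nearest = sorted(labelled, key=lambda pair: distance(c, pair[0]))[:k]
    return sum(p for _, p in nearest) / len(nearest)

# Start from a small random batch of "experiments".
labelled = [(c, assay(c)) for c in random.sample(library, 20)]

for iteration in range(5):
    # Computational screen: rank the whole library with the surrogate model.
    ranked = sorted(library, key=lambda c: surrogate(c, labelled), reverse=True)
    # "Experimentally" test the top-ranked untested compounds...
    tested = {c for c, _ in labelled}
    batch = [c for c in ranked if c not in tested][:10]
    # ...and feed the results back into the model for the next round.
    labelled += [(c, assay(c)) for c in batch]

best = max(labelled, key=lambda pair: pair[1])
```

Only 70 of the 1,000 compounds are ever "assayed", which is the source of the time and cost savings the researchers report: most of the ranking work happens in the computer.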
    Using this method, the Cambridge team developed compounds to target pockets on the surfaces of the aggregates, which are responsible for the exponential proliferation of the aggregates themselves. These compounds are hundreds of times more potent, and far cheaper to develop, than previously reported ones.
    “Machine learning is having a real impact on the drug discovery process — it’s speeding up the whole process of identifying the most promising candidates,” said Vendruscolo. “For us this means we can start work on multiple drug discovery programmes — instead of just one. So much is possible due to the massive reduction in both time and cost — it’s an exciting time.”
    The research was conducted in the Chemistry of Health Laboratory in Cambridge, which was established with the support of the UK Research Partnership Investment Fund (UKRPIF) to promote the translation of academic research into clinical programmes.

  •

    Novel robotic training program reduces physician errors placing central lines

    More than five million central lines are placed in patients who need prolonged drug delivery, such as those undergoing cancer treatments, in the United States every year, yet the common procedure can lead to a bevy of complications in almost a million of those cases. To help decrease the rate of infections, blood clots and other complications associated with placing a central line catheter, Penn State researchers developed an online curriculum coupled with a hands-on simulation training to provide trainee physicians with more practice.
    The training was deployed in 2022 at the Penn State College of Medicine, and the researchers recently assessed how it impacted the prevalence of central line complications by comparing error rates from 2022-23, when the training had been fully implemented, to two prior years, 2016-17 and 2017-18, from before implementing the training. They found that all complication types — mechanical issues, infections and blood clots — were significantly lower after the training was launched.
    They published their results in the Journal of Surgical Education. The researchers hold patents on the technology used in this work. In addition to working to improve the central line placement training, the team is also applying the framework to other common procedures with high complication rates, such as colonoscopies and laparoscopic surgeries.
    “Our approach is focused on reducing preventable errors — this paper is the first significant clinical evidence that we are moving the needle on the gap in clinical education and clinical practice,” said Scarlett Miller, professor of industrial engineering and of mechanical engineering at Penn State and principal investigator on the project. “If we ensure physicians going through residency training are proficient in a skill, like placing central lines, we can minimize the risk on human life.”
    Traditional training for placing a central line and other routine surgical procedures starts with a resident watching a more senior doctor complete the process. Then, the resident is expected to do the procedure themselves, and, finally, they teach someone else to do the procedure.
    “The problem with that approach is that there are very few checks in the process, and the resident only improves by working with patients — who are at risk of complications,” Miller said. “The simulation approach allows someone to try the procedure hundreds, thousands of times without putting anyone at risk.”
    The new approach — the result of interdisciplinary work between engineers and clinicians, Miller said — uses online- and simulation-based training to perform standardized ultrasound-guided internal jugular central venous catheterization (US-IJCVC), which is a central line placed into the internal jugular vein via the neck.

    Residents first complete online training, which includes pre- and post-tests to evaluate knowledge gained. They then take that knowledge and apply it in a skills lab, where they practice placing the central line on a novel dynamic haptic robotic trainer that can simulate various conditions and reactions. Residents can use ultrasound to image the line placement, like they would on a real person, on the robotic trainer, which offers automated feedback.
    “We started with 25 surgical residents at the Penn State Health Milton S. Hershey Medical Center, then expanded to all of the residents at Hershey and partnered with Cedars-Sinai Medical Center in Los Angeles to bring the training to their residents,” Miller said. “In total, we have trained about 700 physicians to date, and we train about 200 a year with our current funding.”
    It seems practice may get physicians closer to perfect, without the risk to human life, according to Miller. In this study, Miller and her team compared error rates from 2022, the first year the simulation training was fully deployed, to error rates from 2016 and 2017, when the training was not yet established. They did not use data from 2018-21, as the training was partially implemented but undergoing startup adjustments and challenges related to COVID that could not be controlled for a direct comparison. The researchers found that the reported error rate for mechanical complications — such as puncturing an artery or misplacing the catheter — increased from 10.4% in 2016 to 12.4% in 2017 but dropped to 7.3% in 2022. The same trend held for error rates related to infections, with the 6.6% rate in 2016 increasing to 7.6% in 2017 and dropping to 4.1% in 2022. For blood clots, the error rates decreased from 12.3% in 2016 to 11.4% in 2017 to 8.1% in 2022.
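A quick check on those figures: expressed relative to the 2017 rates, the drops are larger than the absolute percentage-point differences suggest. This can be computed directly from the rates quoted above:

```python
# Reported complication rates (%) from the comparison years quoted above.
rates = {
    "mechanical": {"2017": 12.4, "2022": 7.3},
    "infection":  {"2017": 7.6,  "2022": 4.1},
    "blood_clot": {"2017": 11.4, "2022": 8.1},
}

def relative_reduction(before, after):
    """Percentage drop relative to the earlier rate."""
    return 100 * (before - after) / before

for name, r in rates.items():
    drop = relative_reduction(r["2017"], r["2022"])
    print(f"{name}: {drop:.1f}% relative reduction")
# mechanical: 41.1%, infection: 46.1%, blood_clot: 28.9%
```

In other words, the post-training cohort saw roughly a third to nearly half fewer complications of each type than the 2017 cohort.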
    “We’re very motivated by the results to improve the system and hopefully expand it to other hospitals,” Miller said. “We’re reducing the error rates in a significant way, but we want more. We want zero errors.”
    Miller is also affiliated with the School of Engineering Design in the Penn State College of Engineering, the College of Information Sciences and Technology and the Department of Surgery in the Penn State College of Medicine. Paper co-authors include Jessica M. Gonzalez-Vargas, postdoctoral scholar in industrial engineering at Penn State; Elizabeth Sinz, associate medical director of the West Virginia University Critical Care and Trauma Institute; and Jason Moore, professor of mechanical engineering at Penn State.
    The U.S. National Science Foundation and the National Institutes of Health’s National Heart, Lung, and Blood Institute supported this work.

  •

    Quantum electronics: Charge travels like light in bilayer graphene

    An international research team led by the University of Göttingen has demonstrated experimentally that electrons in naturally occurring double-layer graphene move like particles without any mass, in the same way that light travels. Furthermore, they have shown that the current can be “switched” on and off, which has potential for developing tiny, energy-efficient transistors — like the light switch in your house but at a nanoscale. The Massachusetts Institute of Technology (MIT), USA, and the National Institute for Materials Science (NIMS), Japan, were also involved in the research. The results were published in Nature Communications.
    Graphene was identified in 2004 and is a single layer of carbon atoms. Among its many unusual properties, graphene is known for its extraordinarily high electrical conductivity due to the high and constant velocity of electrons travelling through this material. This unique feature has made scientists dream of using graphene for much faster and more energy-efficient transistors. The challenge has been that to make a transistor, the material needs to be controlled to have a highly insulating state in addition to its highly conductive state. In graphene, however, such a “switch” in the speed of the carrier cannot be easily achieved. In fact, graphene usually has no insulating state, which has limited graphene’s potential as a transistor.
    The Göttingen University team have now found that two graphene layers, as found in the naturally occurring form of double-layer graphene, combine the best of both worlds: a structure that supports the amazingly fast motion of electrons moving like light as if they had no mass, in addition to an insulating state. The researchers showed that this condition can be changed by applying an electric field perpendicular to the material, making the double-layer graphene insulating. This property of fast-moving electrons had been theoretically predicted as early as 2009, but it took significantly enhanced sample quality, as enabled by materials supplied by NIMS, and close collaboration on theory with MIT before it was possible to identify this experimentally. While these experiments were carried out at cryogenic temperatures — around 273 degrees Celsius below freezing — they show the potential of bilayer graphene to make highly efficient transistors.
    “We were already aware of the theory. However, now we have carried out experiments which actually show the light-like dispersion of electrons in bilayer graphene. It was a very exciting moment for the entire team,” says Professor Thomas Weitz, at Göttingen University’s Faculty of Physics. Dr Anna Seiler, Postdoctoral researcher and first author also at Göttingen University, adds: “Our work is very much a first step but a crucial one. The next step for researchers will be to see if bilayer graphene really can improve transistors or to investigate the potential of this effect in other areas of technology.”

  •

    Crucial connection for ‘quantum internet’ made for the first time

    The ability to share quantum information is crucial for developing quantum networks for distributed computing and secure communication. Quantum computing will be useful for solving some important types of problems, such as optimising financial risk, decrypting data, designing molecules, and studying the properties of materials.
    However, this development is being held up because quantum information can be lost when transmitted over long distances. One way to overcome this barrier is to divide the network into smaller segments and link them all up with a shared quantum state.
    To do this requires a means to store the quantum information and retrieve it again: that is, a quantum memory device. This must ‘talk’ to another device that allows the creation of quantum information in the first place.
    For the first time, researchers have created such a system that interfaces these two key components, and uses regular optical fibres to transmit the quantum data.
    The feat was achieved by researchers at Imperial College London, the University of Southampton, and the Universities of Stuttgart and Würzburg in Germany, with the results published in Science Advances.
    Co-first author Dr Sarah Thomas, from the Department of Physics at Imperial College London, said: “Interfacing two key devices together is a crucial step forward in allowing quantum networking, and we are really excited to be the first team to have been able to demonstrate this.”
    Co-first author Lukas Wagner, from the University of Stuttgart, added: “Allowing long-distance locations, and even quantum computers, to connect is a critical task for future quantum networks.”
    Long-distance communication

    In regular telecommunications — like the internet or phone lines — information can be lost over large distances. To combat this, these systems use ‘repeaters’ at regular points, which read and re-amplify the signal, ensuring it gets to its destination intact.
    Classical repeaters, however, cannot be used with quantum information, as any attempt to read and copy the information would destroy it. This is an advantage in one way, as quantum connections cannot be ‘tapped’ without destroying the information and alerting the users. But it is a challenge to be tackled for long-distance quantum networking.
    One way to overcome this problem is to share quantum information in the form of entangled particles of light, or photons. Entangled photons share properties in such a way that you cannot understand one without the other. To share entanglement over long distances across a quantum network you need two devices: one to create the entangled photons, and one to store them and allow them to be retrieved later.
    There are several devices used to create quantum information in the form of entangled photons and to store it, but both generating these photons on demand and having a compatible quantum memory in which to store them eluded researchers for a long time.
    Photons have certain wavelengths (which, in visible light, create different colours), but devices for creating and storing them are often tuned to work with different wavelengths, preventing them from interfacing.
    To make the devices interface, the team created a system where both devices used the same wavelength. A ‘quantum dot’ produced (non-entangled) photons, which were then passed to a quantum memory system that stored the photons within a cloud of rubidium atoms. A laser turned the memory ‘on’ and ‘off’, allowing the photons to be stored and released on demand.

    Not only did the wavelengths of the two devices match, but they also match the wavelength used by today’s telecommunications networks — allowing the photons to be transmitted over the regular fibre-optic cables familiar from everyday internet connections.
    European collaboration
    The quantum dot light source was created by researchers at the University of Stuttgart with support from the University of Würzburg, and then brought to the UK to interface with the quantum memory device created by the Imperial and Southampton team. The system was assembled in a basement lab at Imperial College London.
    While independent quantum dots and quantum memories have been created that are more efficient than the new system, this is the first proof that devices can be made to interface, and at telecommunications wavelengths.
    The team will now look to improve the system, including making sure all the photons are produced at the same wavelength, improving how long the photons can be stored, and making the whole system smaller.
    As a proof of concept, however, this is an important step forward, says a co-author from the University of Southampton: “Members of the quantum community have been actively attempting this link for some time. This includes us, having tried this experiment twice before with different memory and quantum dot devices, going back more than five years, which just shows how hard it is to do.
    “The breakthrough this time was convening experts to develop and run each part of the experiment with specialist equipment and working together to synchronise the devices.”

  •

    AI enhances physician-patient communication

    As one of the first health systems in the country to pilot the use of generative artificial intelligence (GenAI) to draft replies to patient messages inside the Epic Systems electronic health record, UC San Diego Health is a pioneer in shaping the future of digital health.
    The results of a new University of California San Diego School of Medicine study indicate that, although AI-generated replies did not reduce physician response time, they have contributed to relieving cognitive burden by starting an empathetic draft, which physicians can edit rather than starting from scratch.
    The study, published in the April 15, 2024 online edition of the Journal of the American Medical Association’s Network Open, is the first randomized prospective evaluation of AI-drafted physician messaging.
    “We are very interested in using AI to help solve health system challenges, including the increase in patient messages that are contributing to physician burnout,” said study senior author Christopher Longhurst, MD, executive director of the Joan and Irwin Jacobs Center for Health Innovation, chief medical officer and chief digital officer at UC San Diego Health. “The evidence that the messages are longer suggests that they are higher quality, and the data is clear that physicians appreciated the help, which lowered cognitive burden.”
    This quality improvement study evaluates patient-physician correspondence and suggests that the integration of generative AI into digital health care interactions has the potential to positively impact patient care by improving communication quality, efficiency and engagement. In addition, by alleviating some of the physician workload, the goal is for generative AI to help reduce burnout by allowing doctors to focus on more complex aspects of patient care.
    “This study shows that generative AI can be a collaborative tool,” said study lead author Ming Tai-Seale, PhD, MPH, professor of family medicine at UC San Diego School of Medicine. “Our physicians receive about 200 messages a week. AI could help break ‘writer’s block’ by providing physicians an empathy-infused draft upon which to craft thoughtful responses to patients.”
    The COVID-19 pandemic sparked unprecedented use of digital communications between patients and doctors that have remained in high demand. Portals, such as MyUCSDChart, used by UC San Diego Health, make it simple to email a doctor directly and have created heightened pressure for prompt provider responses that many can no longer efficiently handle.

    Generative AI drafting of replies to non-emergency patient questions has been tested in a pilot program with electronic health record vendor Epic Systems, initiated in April 2023 at UC San Diego Health, to offer virtual physician assistance to help meet the rising demand of patient messages. For full transparency, the replies include a notification that they have been automatically generated by AI before being reviewed and edited by the physician who signs them.
    Time-crunched physicians, who may only have time for a brief, facts-only response, found that generative AI helps draft longer, compassionate responses that are appreciated and understood by patients.
    “AI doesn’t get tired, so even at the end of a long day, it still has the capacity to help draft an empathetic message while synthesizing the request and relevant data into the response,” said study co-author Marlene Millen, MD, chief medical information officer for ambulatory care at UC San Diego Health. “So, while we were surprised by the study’s findings that AI messaging didn’t save doctors time, we see that it may help prevent burnout by providing a detailed draft as a starting point.”
    The study’s findings suggest a potential paradigm shift in health care communication by leveraging AI, noting that further analysis is needed to gauge how beneficial patients deem the increased empathy and reply length to be.
    UC San Diego Health, in conjunction with the Jacobs Center for Health Innovation, has been extensively testing GenAI models since May 2023. These transformative projects will help explore the safe, effective and novel use of GenAI in health care.
    Co-authors of the study include: Sally L. Baxter, Florin Vaida, Amanda Walker, Amy M. Sitapati, Chad Osborne, Joseph Diaz, Nimit Desai, Sophie Webb, Gregory Polston, Teresa Helsten, Erin Gross, Jessica Thackaberry, Ammar Mandvi, Dustin Lillie, Steve Li, Geneen Gin, Suraj Achar, Heather Hofflick, and Marlene Millen, all of UC San Diego; and Christopher Sharp of Stanford.

  •

    Are these newly found rare cells a missing link in color perception?

    Scientists have long wondered how the eye’s three cone photoreceptor types work together to allow humans to perceive color. In a new study in the Journal of Neuroscience, researchers at the University of Rochester used adaptive optics to identify rare retinal ganglion cells (RGCs) that could help fill in the gaps in existing theories of color perception.
    The retina has three types of cones to detect color that are sensitive to either short, medium, or long wavelengths of light. Retinal ganglion cells transmit input from these cones to the central nervous system.
    In the 1980s, David Williams, the William G. Allyn Professor of Medical Optics, helped map the “cardinal directions” that explain color detection. However, there are differences in the way the eye detects color and how color appears to humans. Scientists suspected that while most RGCs follow the cardinal directions, they may work in tandem with small numbers of non-cardinal RGCs to create more complex perceptions.
    Recently, a team of researchers from Rochester’s Center for Visual Science, the Institute of Optics, and the Flaum Eye Institute identified some of these elusive non-cardinal RGCs in the fovea that could explain how humans see red, green, blue, and yellow.
    “We don’t really know anything for certain yet about these cells other than that they exist,” says Sara Patterson, a postdoctoral researcher at the Center for Visual Science who led the study. “There’s so much more that we have to learn about how their response properties operate, but they’re a compelling option as a missing link in how our retina processes color.”
    Using adaptive optics to overcome light distortion in the eye
    The team leveraged adaptive optics, which uses a deformable mirror to overcome light distortion and was first developed by astronomers to reduce image blur in ground-based telescopes. In the 1990s, Williams and his colleagues began applying adaptive optics to study the human eye. They created a camera that compensated for distortions caused by the eye’s natural aberrations, producing a clear image of individual photoreceptor cells.

    “The optics of the eye’s lens are imperfect and really reduce the amount of resolution you can get with an ophthalmoscope,” says Patterson. “Adaptive optics detects and corrects for these aberrations and gives us a crystal-clear look into the eye. This gives us unprecedented access to the retinal ganglion cells, which are the sole source of visual information to the brain.”
    Patterson says improving our understanding of the retina’s complex processes could ultimately help lead to better methods for restoring vision for people who have lost it.
    “Humans have more than 20 ganglion cell types and our models of human vision are only based on three,” says Patterson. “There’s so much going on in the retina that we don’t know about. This is one of the rare areas where engineering has totally outpaced visual basic science. People are out there with retinal prosthetics in their eyes right now, but if we knew what all those cells do, we could actually have retinal prosthetics drive ganglion cells in accordance with their actual functional roles.”
    The work was supported through funding by the National Institutes of Health, Air Force Office of Scientific Research, and Research to Prevent Blindness.

  •

    Millions of gamers advance biomedical research

    Leveraging gamers and video game technology can dramatically boost scientific research, according to a new study published today in Nature Biotechnology.
    4.5 million gamers around the world have advanced medical science by helping to reconstruct microbial evolutionary histories using a minigame included inside the critically and commercially successful video game, Borderlands 3. Their playing has led to a significantly refined estimate of the relationships of microbes in the human gut. The results of this collaboration will both substantially advance our knowledge of the microbiome and improve on the AI programs that will be used to carry out this work in future.
    Tracing the evolutionary relationships of bacteria
    By playing Borderlands Science, a mini-game within the looter-shooter video game Borderlands 3, these players have helped trace the evolutionary relationships of more than a million different kinds of bacteria that live in the human gut, some of which play a crucial role in our health. This information represents a vast increase over what we had discovered about the microbiome to date. By aligning rows of tiles representing the genetic building blocks of different microbes, players have taken on tasks that even the best existing computer algorithms have so far been unable to solve.
    The project was led by McGill University researchers and developed in collaboration with the award-winning interactive entertainment company Gearbox Entertainment and Massively Multiplayer Online Science (MMOS), a Swiss IT company connecting scientists to video games. It was supported by expertise and genomic material from the Microsetta Initiative, led by Rob Knight of the Departments of Pediatrics, Bioengineering, and Computer Science & Engineering at the University of California San Diego.
    Humans improve on existing algorithms and lay groundwork for the future
    Not only have the gamers improved on the results produced by the existing programs used to analyze DNA sequences, but they are also helping lay the groundwork for improved AI programs that can be used in the future.

    “We didn’t know whether the players of a popular game like Borderlands 3 would be interested or whether the results would be good enough to improve on what was already known about microbial evolution. But we’ve been amazed by the results,” says Jérôme Waldispühl, an associate professor in McGill’s School of Computer Science and senior author on the paper published today. “In half a day, the Borderlands Science players collected five times more data about microbial DNA sequences than our earlier game, Phylo, had collected over a 10-year period.”
    The idea for integrating DNA analysis into a commercial video game with mass market appeal came from Attila Szantner, an adjunct professor in McGill’s School of Computer Science and CEO and co-founder of MMOS. “As almost half of the world population is playing with videogames, it is of utmost importance that we find new creative ways to extract value from all this time and brainpower that we spend gaming,” says Szantner. “Borderlands Science shows how far we can get by teaming up with the game industry and its communities to tackle the big challenges of our times.”
    “Gearbox’s developers were eager to engage millions of Borderlands players globally with our creation of an appealing in-game experience to demonstrate how clever minds playing Borderlands are capable of producing tangible, useful, and valuable scientific data at a level not approachable with non-interactive technology and mediums,” said Randy Pitchford, founder and CEO of Gearbox Entertainment Company. “I’m proud that Borderlands Science has become one of the largest and most accomplished citizen science projects of all time, forecasting the opportunity for similar projects in future video games and pushing the boundaries of the positive effect that video games can make on the world.”
    Relating microbes to disease and lifestyle
    The tens of trillions of microbes that colonize our bodies play a crucial role in maintaining human health. But microbial communities can change over time in response to factors such as diet, medications, and lifestyle habits.
    Because of the sheer number of microbes involved, scientists are still only in the early days of being able to identify which microorganisms are affected by, or can affect, which conditions. That is why the researchers’ project and the results from the gamers are so important.

    “We expect to be able to use this information to relate specific kinds of microbes to what we eat, to how we age, and to the many diseases ranging from inflammatory bowel disease to Alzheimer’s that we now know microbes to be involved in,” adds Knight, who also directs the Center for Microbiome Innovation at UC San Diego. “Because evolution is a great guide to function, having a better tree relating our microbes to one another gives us a more precise view of what they are doing within and around us.”
    Building communities to advance knowledge
    “Here we have 4.5 million people who contributed to science. In a sense, this result is theirs too and they should feel proud about it,” says Waldispühl. “It shows that we can fight the fear or misconceptions that members of the public may have about science and start building communities who work collectively to advance knowledge.”
    “Borderlands Science created an incredible opportunity to engage with citizen scientists on a novel and important problem, using data generated by a separate massive citizen science project,” adds Daniel McDonald, the Scientific Director of the Microsetta Initiative. “These results demonstrate the remarkable value of open access data, and the scale of what is possible with inclusive practices in scientific endeavors.”