More stories

  • Scientists develop artificial intelligence method to predict anti-cancer immunity

    Researchers and data scientists at UT Southwestern Medical Center and MD Anderson Cancer Center have developed an artificial intelligence technique that can identify which cell surface peptides produced by cancer cells, called neoantigens, are recognized by the immune system.
    The pMTnet technique, detailed online in Nature Machine Intelligence, could lead to new ways to predict cancer prognosis and potential responsiveness to immunotherapies.
    “Determining which neoantigens bind to T cell receptors and which don’t has seemed like an impossible feat. But with machine learning, we’re making progress,” said senior author Tao Wang, Ph.D., Assistant Professor of Population and Data Sciences, with the Harold C. Simmons Comprehensive Cancer Center and the Center for Genetics of Host Defense at UT Southwestern.
    Mutations in the genome of cancer cells cause them to display different neoantigens on their surfaces. Some of these neoantigens are recognized by immune T cells that hunt for signs of cancer and foreign invaders, allowing cancer cells to be destroyed by the immune system. However, others seem invisible to T cells, allowing cancers to grow unchecked.
    “For the immune system, the presence of neoantigens is one of the biggest differences between normal and tumor cells,” said Tianshi Lu, co-first author with Ze Zhang; both are doctoral students in the Tao Wang lab, which uses state-of-the-art bioinformatics and biostatistics approaches to study the implications of tumor immunology for tumorigenesis, metastasis, prognosis, and treatment response in a variety of cancers. “If we can figure out which neoantigens stimulate an immune response, then we may be able to use this knowledge in a variety of different ways to fight cancer,” Ms. Lu said.
    Being able to predict which neoantigens are recognized by T cells could help researchers develop personalized cancer vaccines, engineer better T cell-based therapies, or predict how well patients might respond to other types of immunotherapies. But there are tens of thousands of different neoantigens, and methods to predict which ones trigger a T cell response have proven to be time-consuming, technically challenging, and costly.
    Searching for a better technique with support of grants from the National Institutes of Health (NIH) and the Cancer Prevention and Research Institute of Texas (CPRIT), the research team looked to machine learning. They trained a deep learning-based algorithm that they named pMTnet using data from known binding or nonbinding combinations of three different components: neoantigens; proteins called major histocompatibility complexes (MHCs) that present neoantigens on cancer cell surfaces; and the T cell receptors (TCRs) responsible for recognizing the neoantigen-MHC complexes. They then tested the algorithm against a dataset developed from 30 different studies that had experimentally identified binding or nonbinding neoantigen T cell-receptor pairs. This experiment showed that the new algorithm had a high level of accuracy.
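    The modeling details of pMTnet are in the Nature Machine Intelligence paper; as a rough illustration of the general framing only, and not the published architecture, the sketch below treats TCR-neoantigen recognition as a binary classification problem over sequence-derived features. The example sequences and the simple amino-acid-composition encoding are placeholders, not data from the study.
    ```python
    # Minimal sketch (not the published pMTnet model): frame TCR-neoantigen
    # recognition as binary classification over sequence-derived features.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def composition(seq: str) -> np.ndarray:
        """Encode a peptide or TCR CDR3 sequence as its amino-acid composition."""
        counts = np.array([seq.count(a) for a in AMINO_ACIDS], dtype=float)
        return counts / max(len(seq), 1)

    # Toy (hypothetical) training pairs: (neoantigen peptide, TCR CDR3, binds?)
    pairs = [
        ("SIINFEKL",  "CASSIRSSYEQYF",    1),
        ("GILGFVFTL", "CASSIRSAYEQYF",    1),
        ("GLCTLVAML", "CASSLAPGATNEKLFF", 0),
        ("NLVPMVATV", "CASSPGQGNYGYTF",   0),
    ]

    X = np.array([np.concatenate([composition(p), composition(t)]) for p, t, _ in pairs])
    y = np.array([label for _, _, label in pairs])

    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # Score a new (hypothetical) neoantigen-TCR pair.
    query = np.concatenate([composition("SIINFEKM"), composition("CASSIRSSYEQYF")])
    print("predicted binding probability:", clf.predict_proba([query])[0, 1])
    ```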
    The researchers used this new tool to gather insights on neoantigens cataloged in The Cancer Genome Atlas, a public database that holds information from more than 11,000 primary tumors. pMTnet showed that neoantigens generally trigger a stronger immune response compared with tumor-associated antigens. It also predicted which patients had better responses to immune checkpoint blockade therapies and had better overall survival rates.
    “As an immunologist, the most significant hurdle currently facing immunotherapy is the ability to determine which antigens are recognized by which T cells in order to leverage these pairings for therapeutic purposes,” said corresponding author Alexandre Reuben, Ph.D., Assistant Professor of Thoracic-Head & Neck Medical Oncology at MD Anderson. “pMTnet outperforms its current alternatives and brings us significantly closer to this objective.”
    Other UTSW researchers who contributed to this study include James Zhu, Yunguan Wang, Xue Xiao, and Lin Xu. Other MD Anderson scientists who contributed to this work include Peixin Jiang, Chantale Bernatchez, John V. Heymach, and Don L. Gibbons. Dr. Jun Wang from NYU Langone Health also contributed to this work.
    UT Southwestern’s Simmons Cancer Center and MD Anderson Cancer Center are among the 51 comprehensive cancer centers designated by the National Cancer Institute, which, together with the National Human Genome Research Institute, oversees The Cancer Genome Atlas project. The study was supported by the NIH (grants 5P30CA142543/TW and R01CA258584/TW), CPRIT (RP190208/TW), MD Anderson (Lung Cancer Moon Shot), the University Cancer Foundation at MD Anderson, the Waun Ki Hong Lung Cancer Research Fund, the Exon 20 Group, and Rexanna’s Foundation for Fighting Lung Cancer.

  • Peering into the Moon's shadows with AI

    The Moon’s polar regions are home to craters and other depressions that never receive sunlight. Today, a group of researchers led by the Max Planck Institute for Solar System Research (MPS) in Germany presents the highest-resolution images to date covering 17 such craters in the journal Nature Communications.
    Craters of this type could contain frozen water, making them attractive targets for future lunar missions, and the researchers focused further on relatively small and accessible craters surrounded by gentle slopes. In fact, three of the craters have turned out to lie within the just-announced mission area of NASA’s Volatiles Investigating Polar Exploration Rover (VIPER), which is scheduled to touch down on the Moon in 2023. Imaging the interior of permanently shadowed craters is difficult, and efforts so far have relied on long exposure times resulting in smearing and lower resolution. By taking advantage of reflected sunlight from nearby hills and a novel image processing method, the researchers have now produced images at 1-2 meters per pixel, which is at or very close to the best capability of the cameras.
    The Moon is a cold, dry desert. Unlike the Earth, it is not surrounded by a protective atmosphere, and water that existed during the Moon’s formation has long since evaporated under the influence of solar radiation and escaped into space. Nevertheless, craters and depressions in the polar regions give some reason to hope for limited water resources. Scientists from MPS, the University of Oxford and the NASA Ames Research Center have now taken a closer look at some of these regions.
    “Near the lunar north and south poles, the incident sunlight enters the craters and depressions at a very shallow angle and never reaches some of their floors,” MPS-scientist Dr. Valentin Bickel, first author of the new paper, explains. In this “eternal night,” temperatures in some places are so cold that frozen water is expected to have lasted for millions of years. Impacts from comets or asteroids could have delivered it, or it could have been outgassed by volcanic eruptions, or formed by the interaction of the surface with the solar wind. Measurements of neutron flux and infrared radiation obtained by space probes in recent years indicate the presence of water in these regions. Eventually, NASA’s Lunar Crater Observation and Sensing Satellite (LCROSS) provided direct proof: twelve years ago, the probe fired a projectile into the shadowed south pole crater Cabeus. As later analysis showed, the dust cloud emitted into space contained a considerable amount of water.
    However, permanently shadowed regions are not only of scientific interest. If humans are to ever spend extended periods of time on the Moon, naturally occurring water will be a valuable resource — and shadowed craters and depressions will be an important destination. NASA’s uncrewed VIPER rover, for example, will explore the South Pole region in 2023 and enter such craters. In order to get a precise picture of their topography and geology in advance — for mission planning purposes, for example — images from space probes are indispensable. NASA’s Lunar Reconnaissance Orbiter (LRO) has been providing such images since 2009.
    However, capturing images within the deep darkness of permanently shadowed regions is exceptionally difficult; after all, the only sources of light are scattered light, such as that reflecting off the Earth and the surrounding topography, and faint starlight. “Because the spacecraft is in motion, the LRO images are completely blurred at long exposure times,” explains Ben Moseley of the University of Oxford, a co-author of the study. At short exposure times, the spatial resolution is much better. However, due to the small amounts of light available, these images are dominated by noise, making it hard to distinguish real geological features.
    To address this problem, the researchers have developed a machine learning algorithm called HORUS (Hyper-effective nOise Removal U-net Software) that “cleans up” such noisy images. It uses more than 70,000 LRO calibration images taken on the dark side of the Moon as well as information about camera temperature and the spacecraft’s trajectory to distinguish which structures in the image are artifacts and which are real. This way, the researchers can achieve a resolution of about 1-2 meters per pixel, which is five to ten times higher than the resolution of all previously available images.
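    The published HORUS pipeline is a U-net trained on tens of thousands of LRO calibration frames together with camera temperature and trajectory information; the minimal sketch below shows only the general idea of learned denoising, using a tiny convolutional network and synthetic stand-in data rather than the real architecture or images.
    ```python
    # Minimal sketch of learned image denoising (not the published HORUS U-net):
    # train a small convolutional network to map noisy frames to clean frames,
    # using synthetic data in place of real LRO calibration images.
    import torch
    import torch.nn as nn

    # Synthetic stand-in data: "clean" 64x64 frames plus additive noise.
    clean = torch.rand(32, 1, 64, 64)
    noisy = clean + 0.3 * torch.randn_like(clean)

    denoiser = nn.Sequential(                     # tiny encoder-decoder, illustrative only
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, kernel_size=3, padding=1),
    )

    optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(200):                      # short training loop on the toy data
        optimizer.zero_grad()
        loss = loss_fn(denoiser(noisy), clean)
        loss.backward()
        optimizer.step()

    print("final reconstruction loss:", loss.item())
    ```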
    Using this method, the researchers have now re-evaluated images of 17 shadowed regions from the lunar south pole region which measure between 0.18 and 54 square kilometers in size. In the resulting images, small geological structures only a few meters across can be discerned much more clearly than before. These structures include boulders or very small craters, which can be found everywhere on the lunar surface. Since the Moon has no atmosphere, very small meteorites repeatedly fall onto its surface and create such mini-craters.
    “With the help of the new HORUS images, it is now possible to understand the geology of lunar shadowed regions much better than before,” explains Moseley. For example, the number and shape of the small craters provide information about the age and composition of the surface. It also makes it easier to identify potential obstacles and hazards for rovers or astronauts. In one of the studied craters, located on the Leibnitz Plateau, the researchers discovered a strikingly bright mini-crater. “Its comparatively bright color may indicate that this crater is relatively young,” says Bickel. Because such a fresh scar provides fairly unhindered insight into deeper layers, this site could be an interesting target for future missions, the researchers suggest.
    The new images do not provide evidence of frozen water on the surface, such as bright patches. “Some of the regions we’ve targeted might be slightly too warm,” Bickel speculates. It is likely that lunar water does not exist as a clearly visible deposit on the surface at all — instead, it could be intermixed with the regolith and dust, or may be hidden underground.
    To address this and other questions, the researchers’ next step is to use HORUS to study as many shadowed regions as possible. “In the current publication, we wanted to show what our algorithm can do. Now we want to apply it as comprehensively as possible,” says Bickel.
    This work has been enabled by the Frontier Development Lab (FDL.ai). FDL is a co-operative agreement between NASA, the SETI Institute and Trillium Technologies Inc, in partnership with the Luxembourg Space Agency and Google Cloud.

  • Mathematical constructions of COVID virus activity could provide new insight for vaccines, treatment

    Mathematical constructions of the action of SARS-CoV-2 and its multiple spikes, which enable its success at infecting cells, can give vaccine developers and pharmaceutical companies alike a more precise picture of what the virus is doing inside us and help fine tune prevention and treatment, mathematical modelers say.
    Mathematical construction enables examination of the activity of individual virus particles including the emergence of new spikes — and more severe infection potential — that can result when a single virus particle infects a human cell, says Dr. Arni S.R. Rao, director of the Laboratory for Theory and Mathematical Modeling in the Section of Infectious Diseases at the Medical College of Georgia.
    The number of spikes and the way they are distributed on a virus particle are believed to be key in the spread of the virus, Rao and his colleague Dr. Steven G. Krantz, professor of mathematics at Washington University in St. Louis, Missouri, write in the Journal of Mathematical Analysis and Applications.
    Laboratory experiments on virus particles and their bonding with, or infection of, cells are more typically done on a group of viruses, they write.
    “Right now, we don’t know when a spike bonds with a cell, what happens with that virus particle’s other spikes,” says Rao, the study’s corresponding author. “How many new infected cells are being produced has never been studied for the coronavirus. We need quantification because ultimately the vaccine or pharmaceutical industry needs to target those infected cells,” he says of the additional insight provided by their mathematical methodology, which also enables the study of the growth of the virus’ spikes over time.
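    As a purely illustrative toy calculation, and not the authors' construction, the snippet below shows one reason the spike count matters for this kind of quantification: if each of a virion's k spikes were to bond to a cell independently with probability p, the chance of infecting at least one cell would be 1 - (1 - p)^k, which rises quickly with k.
    ```python
    # Purely illustrative toy calculation (not the authors' model): probability
    # that a virion with k independent spikes bonds to at least one cell.
    def infection_probability(k: int, p: float) -> float:
        """P(at least one of k spikes bonds) = 1 - (1 - p)^k."""
        return 1.0 - (1.0 - p) ** k

    for k in (10, 25, 50, 100):
        print(f"{k:4d} spikes -> P(infects a cell) = {infection_probability(k, 0.02):.3f}")
    ```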
    Viral load is considered one of the strong predictors of the severity of sickness and risk of death, and the number of spikes successfully bonding with cells is an indicator of the viral load, Rao says. PCR, or polymerase chain reaction, which is used for COVID testing, for example, provides viral load by assessing the amount of the virus’ RNA present in a test sample.

  • A computer algorithm called ‘Eva’ may have saved lives in Greece

    A prescriptive computer program developed for Greece by the USC Marshall School of Business and the Wharton School of the University of Pennsylvania to identify asymptomatic, infected travelers may have slowed COVID-19’s spread across the country’s borders, a new study in the journal Nature indicates.
    “It was a very high-impact artificial intelligence project, and I believe we saved lives by developing a cutting edge, novel system for targeted testing during the pandemic,” said Kimon Drakopoulos, a USC Marshall assistant professor of Data Sciences and Operations and one of the study’s authors.
    In July 2020, Greece largely reopened its borders to spare its tourism-dependent economy from the devastating impact of long-term shutdowns amid COVID-19.
    Greece collaborated with USC Marshall and Wharton to create “Eva,” an artificial intelligence algorithm that uses real-time data to identify high-risk visitors for testing. Evidence shows the algorithm caught nearly twice as many asymptomatic infected travelers as would have been caught if Greece had relied on only travel restrictions and randomized COVID testing.
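    Eva's published design is considerably more sophisticated than can be shown here; the sketch below conveys only the core idea of steering a fixed daily testing budget toward arrival groups whose estimated risk is highest, using a simple bandit-style rule. The origin labels and prevalence figures are hypothetical.
    ```python
    # Minimal sketch of bandit-style test allocation under a limited budget
    # (not Eva itself; origins and prevalences below are hypothetical).
    import numpy as np

    rng = np.random.default_rng(1)
    true_prevalence = {"A": 0.002, "B": 0.010, "C": 0.030}   # hidden from the algorithm
    tests_per_day = 100

    # Beta(1, 1) prior on each origin's prevalence, updated from test results.
    posterior = {c: [1.0, 1.0] for c in true_prevalence}

    for day in range(30):
        # Thompson sampling: draw a plausible prevalence for each origin,
        # then spend the testing budget where the drawn risk is highest.
        draws = {c: rng.beta(a, b) for c, (a, b) in posterior.items()}
        weights = np.array([draws[c] for c in posterior])
        allocation = np.floor(tests_per_day * weights / weights.sum()).astype(int)

        for c, tests in zip(posterior, allocation):
            positives = rng.binomial(tests, true_prevalence[c])
            posterior[c][0] += positives              # update Beta posterior
            posterior[c][1] += tests - positives

    print("tests allocated on the last day:", dict(zip(posterior, allocation.tolist())))
    ```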
    “Our work with Eva proves that carefully integrating real-time data, artificial intelligence and lean operations offers huge benefits over conventional, widely used approaches to managing the pandemic,” said Vishal Gupta, a USC Marshall associate professor of data science and another of the study’s authors.
    The joint study was published Wednesday in the journal Nature.

  • Switching on a superfluid

    We can learn a lot by studying microscopic and macroscopic changes in a material as it crosses from one phase to another, for example from ice to water to steam.
    But while these phase transitions are well understood in the case of water, much less is known about the dynamics when a system goes from being a normal fluid to a superfluid, which can flow with zero friction, i.e., without losing any energy.
    A new Swinburne study observing the transition of an atomic gas from normal fluid to superfluid provides new insights into the formation of these remarkable states, with a view to future, superfluid-based quantum technologies, such as ultra-low energy electronics.
    Superfluid formation was seen to involve a number of different timescales, associated with different dynamical processes that take place upon crossing the phase boundary.
    UNDERSTANDING DYNAMIC TRANSITIONS, TOWARDS FUTURE TECHNOLOGIES
    As nonequilibrium, dynamic processes, phase transitions into these fascinating and potentially useful states of matter are challenging to understand from a theoretical perspective.

  • Artificial intelligence may be set to reveal climate-change tipping points

    Researchers are developing artificial intelligence that could assess climate change tipping points. The deep learning algorithm could act as an early warning system against runaway climate change.
    Chris Bauch, a professor of applied mathematics at the University of Waterloo, is co-author of a recent research paper reporting results on the new deep-learning algorithm. The research looks at thresholds beyond which rapid or irreversible change happens in a system, Bauch said.
    “We found that the new algorithm was able to not only predict the tipping points more accurately than existing approaches but also provide information about what type of state lies beyond the tipping point,” Bauch said. “Many of these tipping points are undesirable, and we’d like to prevent them if we can.”
    Some tipping points that are often associated with runaway climate change include melting Arctic permafrost, which could release massive amounts of methane and spur further rapid heating; breakdown of oceanic current systems, which could lead to almost immediate changes in weather patterns; or ice sheet disintegration, which could lead to rapid sea-level change.
    The innovative approach with this AI, according to the researchers, is that it was programmed to learn not just about one type of tipping point but the characteristics of tipping points generally.
    The approach gains its strength from hybridizing AI and mathematical theories of tipping points, accomplishing more than either method could on its own. After training the AI on what they characterize as a “universe of possible tipping points” that included some 500,000 models, the researchers tested it on specific real-world tipping points in various systems, including historical climate core samples.
    “Our improved method could raise red flags when we’re close to a dangerous tipping point,” said Timothy Lenton, director of the Global Systems Institute at the University of Exeter and one of the study’s co-authors. “Providing improved early warning of climate tipping points could help societies adapt and reduce their vulnerability to what is coming, even if they cannot avoid it.”
    Deep learning is making huge strides in pattern recognition and classification, and the researchers have, for the first time, converted tipping-point detection into a pattern-recognition problem: the algorithm is trained to detect the patterns that occur before a tipping point and to say whether one is coming.
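    The study's classifier is a deep neural network trained on a large library of simulated models; the sketch below stands in for it with a much simpler pipeline that captures the same framing: simulate time series that do or do not approach a bifurcation, extract classic early-warning features (rising variance and lag-1 autocorrelation), and train a classifier to tell the two apart.
    ```python
    # Minimal sketch of casting tipping-point detection as classification
    # (the published work trains a deep neural network on ~500,000 simulated
    # models; a simple feature-based classifier stands in for it here).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def simulate(drifting: bool, n: int = 400) -> np.ndarray:
        """Noisy relaxation dynamics; if `drifting`, the restoring force weakens
        over time, as happens on the approach to a fold bifurcation."""
        x = np.zeros(n)
        for t in range(1, n):
            k = 1.0 - 0.9 * t / n if drifting else 1.0   # weakening feedback
            x[t] = x[t - 1] - 0.1 * k * x[t - 1] + 0.1 * rng.normal()
        return x

    def features(x: np.ndarray) -> list:
        """Classic early-warning indicators: variance and lag-1 autocorrelation."""
        return [np.var(x), np.corrcoef(x[:-1], x[1:])[0, 1]]

    X = [features(simulate(drifting=d)) for d in [True, False] for _ in range(200)]
    y = [1] * 200 + [0] * 200                      # 1 = approaching a tipping point

    clf = LogisticRegression().fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```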
    “People are familiar with tipping points in climate systems, but there are tipping points in ecology and epidemiology and even in the stock markets,” said Thomas Bury, a postdoctoral researcher at McGill University and another of the co-authors on the paper. “What we’ve learned is that AI is very good at detecting features of tipping points that are common to a wide variety of complex systems.”
    The new deep learning algorithm is a “game-changer for the ability to anticipate big shifts, including those associated with climate change,” said Madhur Anand, another of the researchers on the project and director of the Guelph Institute for Environmental Research.
    Now that their AI has learned how tipping points function, the team is working on the next stage, which is to give it data on contemporary trends in climate change. But Anand issued a word of caution about what may happen with such knowledge.
    “It definitely gives us a leg up,” she said. “But of course, it’s up to humanity in terms of what we do with this knowledge. I just hope that these new findings will lead to equitable, positive change.”
    Story Source:
    Materials provided by University of Waterloo.

  • Contact-tracing apps could improve vaccination strategies

    Mathematical modeling of disease spread suggests that herd immunity could be achieved with fewer vaccine doses by using Bluetooth-based contact-tracing apps to identify people who have more exposure to others — and targeting them for vaccination. Mark Penney, Yigit Yargic and their colleagues from the Perimeter Institute for Theoretical Physics in Ontario, Canada, present this approach in the open-access journal PLOS ONE on September 22, 2021.
    The COVID-19 pandemic has raised questions about how to best allocate limited supplies of vaccines for the greatest benefit against a disease. Mathematical models suggest that vaccines like those available for COVID-19 are most effective at reducing transmission when they are targeted to people who have more exposure to others. However, it can be challenging to identify such individuals.
    Penney and colleagues hypothesized that this challenge could be addressed by harnessing existing apps that anonymously alert users to potential COVID-19 exposure. These apps use Bluetooth technology to determine the duration of contact between any pair of individuals who both have the same app downloaded on their smart phones. The researchers wondered whether this technology could also be used to help identify and target vaccines to those with greater exposure — a strategy analogous to a wildfire-fighting practice called “hot-spotting,” which targets sites with intense fires.
    To explore the effectiveness of a hot-spotting approach to vaccination, Penney and colleagues used mathematical modeling to simulate how a disease would spread among a population with such a strategy in place. Specifically, they applied an analytical technique borrowed from statistical physics known as percolation theory.
    The findings suggest that a Bluetooth-based hot-spotting approach to vaccination could reduce the number of vaccine doses needed to achieve herd immunity by up to one half. The researchers found improvements even for simulations in which relatively few people use contact-tracing apps — a situation mirroring reality for COVID-19 in many regions.
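    A minimal percolation-style sketch of the intuition, not the paper's model, is below: on a contact network with a heavy-tailed degree distribution, removing (vaccinating) the highest-contact nodes fragments the network far faster than removing the same number of nodes at random, which is why fewer doses can suffice.
    ```python
    # Minimal percolation-style sketch (not the paper's model): compare removing
    # the highest-contact nodes versus random nodes from a contact network and
    # check the size of the largest connected component (a proxy for epidemic
    # potential) after each strategy.
    import random
    import networkx as nx

    random.seed(0)
    G = nx.barabasi_albert_graph(n=5000, m=3)      # heavy-tailed contact network

    def giant_component_fraction(graph, removed):
        """Fraction of nodes left in the largest connected component after
        'vaccinating' (removing) the given nodes."""
        H = graph.copy()
        H.remove_nodes_from(removed)
        if H.number_of_nodes() == 0:
            return 0.0
        return len(max(nx.connected_components(H), key=len)) / graph.number_of_nodes()

    budget = 500                                    # vaccine doses available
    by_degree = sorted(G.nodes, key=lambda v: G.degree[v], reverse=True)[:budget]
    at_random = random.sample(list(G.nodes), budget)

    print("giant component, targeted vaccination:", giant_component_fraction(G, by_degree))
    print("giant component, random vaccination:  ", giant_component_fraction(G, at_random))
    ```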
    In the future, the modeling approach used for this study could be refined, such as by incorporating the effects of strains on the healthcare system. Nonetheless, the researchers note, the new findings highlight a technically feasible way to implement a strategy that previous research already supports.
    The authors add: “The technology underlying digital contact tracing apps has made it possible to implement novel decentralized and efficient vaccine strategies.”
    Story Source:
    Materials provided by PLOS.

  • Quantum cryptography records with higher-dimensional photons

    Quantum cryptography is one of the most promising quantum technologies of our time: Exactly the same information is generated at two different locations, and the laws of quantum physics guarantee that no third party can intercept this information. This creates a code with which information can be perfectly encrypted.
    The team of Prof. Marcus Huber from the Atomic Institute of TU Wien developed a new type of quantum cryptography protocol, which has now been tested in practice in cooperation with Chinese research groups. While until now photons that can be in two different states have normally been used, the situation here is more complicated: each photon can take any of eight different paths. As the team has now been able to show, this makes the generation of the quantum cryptographic key faster and also significantly more robust against interference. The results have now been published in the scientific journal Physical Review Letters.
    Two states, two dimensions
    “There are many different ways of using photons to transmit information,” says Marcus Huber. “Often, experiments focus on the photons’ polarisation. For example, whether they oscillate horizontally or vertically — or whether they are in a quantum-mechanical superposition state in which, in a sense, they assume both states simultaneously. Similar to how you can describe a point on a two-dimensional plane with two coordinates, the state of the photon can be represented as a point in a two-dimensional space.”
    But a photon can also carry information independently of the direction of polarization. One can, for example, use the information about which path the photon is currently travelling on. This is exactly what has now been exploited: “A laser beam generates photon pairs in a special kind of crystal. There are eight different points in the crystal where this can happen,” explains Marcus Huber. Depending on the point at which the photon pair was created, each of the two photons can move along eight different paths — or along several paths at the same time, which is also permitted according to the laws of quantum theory.
    These two photons can be directed to completely different places and analysed there. One of the eight possibilities is measured, completely at random — but as the two photons are quantum-physically entangled, the same result is always obtained at both places. Whoever is standing at the first measuring device knows what another person is currently detecting at the second measuring device — and no one else in the universe can get hold of this information.
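    An idealised numerical sketch of that last point, ignoring loss and noise, is below: for a maximally entangled pair of eight-dimensional "path" states, sampling joint measurements in the shared basis always yields identical outcomes on both sides, which is what lets the two parties build a shared key.
    ```python
    # Minimal sketch of why both parties always see the same outcome: sample
    # joint measurements of a maximally entangled pair of 8-dimensional photons
    # in the shared "path" basis (an idealised model, ignoring loss and noise).
    import numpy as np

    rng = np.random.default_rng(7)
    d = 8                                           # eight possible paths per photon

    # Maximally entangled state |psi> = (1/sqrt(d)) * sum_i |i>|i>
    psi = np.zeros((d, d))
    np.fill_diagonal(psi, 1 / np.sqrt(d))

    # Joint outcome probabilities when both sides measure in the path basis.
    probs = np.abs(psi) ** 2                        # P(Alice sees i, Bob sees j)

    outcomes = rng.choice(d * d, size=10, p=probs.ravel())
    alice, bob = np.unravel_index(outcomes, (d, d))
    print("Alice:", alice)
    print("Bob:  ", bob)                            # identical to Alice's results
    print("shared key symbols (0-7):", alice)
    ```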