More stories

  • 2020 and 2016 tie for the hottest years on record

    2020 is in a “dead heat” with 2016 for the hottest year on record, scientists with NASA and the National Oceanic and Atmospheric Administration announced January 14.
    Based on ocean temperature data from buoys, floats and ships, as well as temperatures measured over land at weather stations around the globe, the U.S. agencies conducted independent analyses and arrived at a similar conclusion.
    NASA’s analysis showed 2020 to be slightly hotter, while NOAA’s showed that 2016 was still slightly ahead. But the differences in those assessments are within margins of error, “so it’s effectively a statistical tie,” said NASA climatologist Gavin Schmidt of the Goddard Institute for Space Studies in New York City at a January 14 news conference.
    NOAA climate scientist Russell Vose described at the news conference the extreme warmth that occurred over land last year, including a months-long heat wave in Siberia (SN: 12/21/20). Europe and Asia had their hottest average temperatures on record in 2020, and South America had its second warmest.
    It’s possible that 2020’s temperatures in some areas might have been even higher if not for massive wildfires. Vose noted that smoke lofted high into the stratosphere by Australia’s intense fires in early 2020 may have slightly decreased temperatures in the Northern Hemisphere, though the size of that effect is not yet known (SN: 12/15/20).
    The ocean-climate pattern known as the El Niño-Southern Oscillation can boost or lower global temperatures, depending on whether it’s in an El Niño or La Niña phase, respectively, Schmidt said (SN: 5/2/16). The El Niño phase was waning at the start of 2020, and a La Niña was starting, so the pattern’s overall impact was muted for the year. By contrast, 2016 got a large temperature boost from El Niño. Without that, “2020 would have been by far the warmest year on record,” he said.
    But placed in the bigger picture, these rankings “don’t tell the whole story,” Vose said. “The last six to seven years really stand out above the rest of the record, suggesting the kind of rapid warming we’re seeing. [And] each of the past four decades was warmer than the one preceding it.”

  • Model analyzes how viruses escape the immune system

    One reason it’s so difficult to produce effective vaccines against some viruses, including influenza and HIV, is that these viruses mutate very rapidly. This allows them to evade the antibodies generated by a particular vaccine, through a process known as “viral escape.”
    MIT researchers have now devised a new way to computationally model viral escape, based on models that were originally developed to analyze language. The model can predict which sections of viral surface proteins are more likely to mutate in a way that enables viral escape, and it can also identify sections that are less likely to mutate, making them good targets for new vaccines.
    “Viral escape is a big problem,” says Bonnie Berger, the Simons Professor of Mathematics and head of the Computation and Biology group in MIT’s Computer Science and Artificial Intelligence Laboratory. “Viral escape of the surface protein of influenza and the envelope surface protein of HIV are both highly responsible for the fact that we don’t have a universal flu vaccine, nor do we have a vaccine for HIV, both of which cause hundreds of thousands of deaths a year.”
    In a study appearing today in Science, Berger and her colleagues identified possible targets for vaccines against influenza, HIV, and SARS-CoV-2. Since that paper was accepted for publication, the researchers have also applied their model to the new variants of SARS-CoV-2 that recently emerged in the United Kingdom and South Africa. That analysis, which has not yet been peer-reviewed, flagged viral genetic sequences that should be further investigated for their potential to escape the existing vaccines, the researchers say.
    Berger and Bryan Bryson, an assistant professor of biological engineering at MIT and a member of the Ragon Institute of MGH, MIT, and Harvard, are the senior authors of the paper, and the lead author is MIT graduate student Brian Hie.
    The language of proteins
    Different types of viruses acquire genetic mutations at different rates, and HIV and influenza are among those that mutate the fastest. For these mutations to promote viral escape, they must help the virus change the shape of its surface proteins so that antibodies can no longer bind to them. However, the protein can’t change in a way that makes it nonfunctional.

    The MIT team decided to model these criteria using a type of computational model known as a language model, from the field of natural language processing (NLP). These models were originally designed to analyze patterns in language, specifically the frequency with which certain words occur together. The models can then predict which words could complete a sentence such as “Sally ate eggs for …” The chosen word must be both grammatically correct and have the right meaning. In this example, an NLP model might predict “breakfast” or “lunch.”
    The researchers’ key insight was that this kind of model could also be applied to biological information such as genetic sequences. In that case, grammar is analogous to the rules that determine whether the protein encoded by a particular sequence is functional or not, and semantic meaning is analogous to whether the protein can take on a new shape that helps it evade antibodies. Therefore, a mutation that enables viral escape must maintain the grammaticality of the sequence but change the protein’s structure in a useful way.
    “If a virus wants to escape the human immune system, it doesn’t want to mutate itself so that it dies or can’t replicate,” Hie says. “It wants to preserve fitness but disguise itself enough so that it’s undetectable by the human immune system.”
    To model this process, the researchers trained an NLP model to analyze patterns found in genetic sequences, which allows it to predict new sequences that have new functions but still follow the biological rules of protein structure. One significant advantage of this kind of modeling is that it requires only sequence information, which is much easier to obtain than protein structures. The model can be trained on a relatively small amount of information — in this study, the researchers used 60,000 HIV sequences, 45,000 influenza sequences, and 4,000 coronavirus sequences.
    “Language models are very powerful because they can learn this complex distributional structure and gain some insight into function just from sequence variation,” Hie says. “We have this big corpus of viral sequence data for each amino acid position, and the model learns these properties of amino acid co-occurrence and co-variation across the training data.”
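    As a rough illustration of the approach (a sketch, not the authors’ actual model or code), the snippet below scores candidate single-site mutations along the two axes described above: “grammaticality,” approximated by a language model’s likelihood of the mutated sequence, and “semantic change,” approximated by the distance between sequence embeddings. The ToyLM class is a hypothetical stand-in for a trained sequence model, and the combined score is a simplification of the paper’s joint ranking.

    ```python
    import numpy as np

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    class ToyLM:
        """Hypothetical stand-in for a trained protein language model."""

        def __init__(self, seed=0):
            rng = np.random.default_rng(seed)
            # Fixed random per-residue embeddings; a real model learns these.
            self.table = {aa: rng.normal(size=16) for aa in AMINO_ACIDS}

        def embed(self, seq):
            # Sequence embedding = mean of per-residue embeddings.
            return np.mean([self.table[aa] for aa in seq], axis=0)

        def log_prob(self, seq):
            # Placeholder for the model's log-likelihood ("grammaticality").
            return float(sum(self.table[aa][0] for aa in seq))

    def rank_escape_candidates(lm, seq, top_k=5):
        """Rank single-site mutations by grammaticality plus semantic change."""
        base = lm.embed(seq)
        scored = []
        for i, wild in enumerate(seq):
            for aa in AMINO_ACIDS:
                if aa == wild:
                    continue
                mutant = seq[:i] + aa + seq[i + 1:]
                grammaticality = lm.log_prob(mutant)  # virus stays functional
                sem_change = np.linalg.norm(lm.embed(mutant) - base)  # "looks" different
                scored.append((grammaticality + sem_change, f"{wild}{i + 1}{aa}"))
        return sorted(scored, reverse=True)[:top_k]

    if __name__ == "__main__":
        for score, mutation in rank_escape_candidates(ToyLM(), "MKTAYIAKQR"):
            print(f"{mutation}: {score:.2f}")
    ```

    In the actual study, the embeddings come from a model trained on the tens of thousands of real viral sequences noted above, and the two criteria are ranked jointly rather than summed; the toy scoring here only illustrates the shape of the search.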
    Blocking escape

    Once the model was trained, the researchers used it to predict sequences of the coronavirus spike protein, HIV envelope protein, and influenza hemagglutinin (HA) protein that would be more or less likely to generate escape mutations.
    For influenza, the model revealed that the sequences least likely to mutate and produce viral escape were in the stalk of the HA protein. This is consistent with recent studies showing that antibodies that target the HA stalk (which most people infected with the flu or vaccinated against it do not develop) can offer near-universal protection against any flu strain.
    The model’s analysis of coronaviruses suggested that a part of the spike protein called the S2 subunit is least likely to generate escape mutations. It is still unclear how rapidly SARS-CoV-2 mutates, so it is unknown how long the vaccines now being deployed to combat the Covid-19 pandemic will remain effective. Initial evidence suggests that the virus does not mutate as rapidly as influenza or HIV. However, the researchers recently identified new mutations that have appeared in Singapore, South Africa, and Malaysia that they believe should be investigated for potential viral escape (these new data are not yet peer-reviewed).
    In their studies of HIV, the researchers found that the V1-V2 hypervariable region of the protein has many possible escape mutations, which is consistent with previous findings, and they also found sequences that would have a lower probability of escape.
    The researchers are now working with others to use their model to identify possible targets for cancer vaccines that stimulate the body’s own immune system to destroy tumors. They say it could also be used to design small-molecule drugs that might be less likely to provoke resistance, for diseases such as tuberculosis.
    “There are so many opportunities, and the beautiful thing is all we need is sequence data, which is easy to produce,” Bryson says.
    The research was funded by a National Defense Science and Engineering Graduate Fellowship from the Department of Defense and a National Science Foundation Graduate Research Fellowship.

  • New state of matter in one-dimensional quantum gas

    By adding some magnetic flair to an exotic quantum experiment, physicists produced an ultra-stable one-dimensional quantum gas with never-before-seen ‘scar’ states – a feature that could someday be useful for securing quantum information.

  • Deep learning outperforms standard machine learning in biomedical research applications

    Compared to standard machine learning models, deep learning models are largely superior at discerning patterns and discriminative features in brain imaging, despite being more complex in their architecture, according to a new study in Nature Communications led by Georgia State University.
    Advanced biomedical technologies such as structural and functional magnetic resonance imaging (MRI and fMRI) or genomic sequencing have produced an enormous volume of data about the human body. By extracting patterns from this information, scientists can glean new insights into health and disease. This is a challenging task, however, given the complexity of the data and the fact that the relationships among types of data are poorly understood.
    Deep learning, built on advanced neural networks, can characterize these relationships by combining and analyzing data from many sources. At the Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State researchers are using deep learning to learn more about how mental illness and other disorders affect the brain.
    Although deep learning models have been used to solve problems and answer questions in a number of different fields, some experts remain skeptical. Recent critical commentaries have unfavorably compared deep learning with standard machine learning approaches for analyzing brain imaging data.
    However, as demonstrated in the study, these conclusions are often based on pre-processed inputs that deprive deep learning of its main advantage: the ability to learn from the data with little to no preprocessing. Anees Abrol, research scientist at TReNDS and lead author of the paper, compared representative models from classical machine learning and deep learning and found that, if trained properly, deep-learning methods can offer substantially better results, generating superior representations for characterizing the human brain.
    “We compared these models side-by-side, observing statistical protocols so everything is apples to apples. And we show that deep learning models perform better, as expected,” said co-author Sergey Plis, director of machine learning at TReNDS and associate professor of computer science.

    Plis said there are some cases where standard machine learning can outperform deep learning. For example, diagnostic algorithms that plug in single-number measurements such as a patient’s body temperature or whether the patient smokes cigarettes would work better using classical machine learning approaches.
    “If your application involves analyzing images or if it involves a large array of data that can’t really be distilled into a simple measurement without losing information, deep learning can help,” Plis said. “These models are made for really complex problems that require bringing in a lot of experience and intuition.”
    The downside of deep learning models is they are “data hungry” at the outset and must be trained on lots of information. But once these models are trained, said co-author Vince Calhoun, director of TReNDS and Distinguished University Professor of Psychology, they are just as effective at analyzing reams of complex data as they are at answering simple questions.
    “Interestingly, in our study we looked at sample sizes from 100 to 10,000 and in all cases the deep learning approaches were doing better,” he said.
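    The kind of comparison Plis and Calhoun describe can be sketched in a few lines. The snippet below is illustrative only, not the study’s code: it pits two classical models against a small neural network on synthetic high-dimensional data at several sample sizes, scoring every model with the same cross-validation folds so the comparison stays “apples to apples.” The dataset parameters and model sizes are arbitrary assumptions; the real study used brain imaging data and more careful statistical protocols.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def compare_models(n_samples):
        # Synthetic high-dimensional features loosely mimicking imaging-derived data.
        X, y = make_classification(n_samples=n_samples, n_features=500,
                                   n_informative=50, random_state=0)
        models = {
            "logistic regression": LogisticRegression(max_iter=2000),
            "SVM (RBF kernel)": SVC(),
            "small MLP": MLPClassifier(hidden_layer_sizes=(128, 64),
                                       max_iter=500, random_state=0),
        }
        for name, model in models.items():
            pipe = make_pipeline(StandardScaler(), model)
            # cv=5 without shuffling gives every model identical folds.
            scores = cross_val_score(pipe, X, y, cv=5)
            print(f"n={n_samples:5d}  {name:20s} accuracy "
                  f"{scores.mean():.3f} ± {scores.std():.3f}")

    for n in (100, 1000, 10000):
        compare_models(n)
    ```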
    Another advantage is that scientists can reverse analyze deep-learning models to understand how they are reaching conclusions about the data. As the published study shows, the trained deep learning models learn to identify meaningful brain biomarkers.
    “These models are learning on their own, so we can uncover the defining characteristics that they’re looking into that allows them to be accurate,” Abrol said. “We can check the data points a model is analyzing and then compare it to the literature to see what the model has found outside of where we told it to look.”
    The researchers envision that deep learning models can extract explanations and representations not already known to the field, serving as an aid in growing our knowledge of how the human brain functions. They conclude that although more research is needed to find and address the weaknesses of deep-learning models, from a mathematical point of view it’s clear these models outperform standard machine learning models in many settings.
    “Deep learning’s promise perhaps still outweighs its current usefulness to neuroimaging, but we are seeing a lot of real potential for these techniques,” Plis said.

  • New way to control electrical charge in 2D materials: Put a flake on it

    Physicists at Washington University in St. Louis have discovered how to locally add electrical charge to an atomically thin graphene device by layering flakes of another thin material, alpha-RuCl3, on top of it.
    A paper published in the journal Nano Letters describes the charge transfer process in detail. Gaining control of the flow of electrical current through atomically thin materials is important to potential future applications in photovoltaics or computing.
    “In my field, where we study van der Waals heterostructures made by custom-stacking atomically thin materials together, we typically control charge by applying electric fields to the devices,” said Erik Henriksen, assistant professor of physics in Arts & Sciences and corresponding author of the new study, along with Ken Burch at Boston College. “But here it now appears we can just add layers of RuCl3. It soaks up a fixed amount of electrons, allowing us to make ‘permanent’ charge transfers that don’t require the external electric field.”
    Jesse Balgley, a graduate student in Henriksen’s laboratory at Washington University, is second author of the study. Li Yang, professor of physics, and his graduate student Xiaobo Lu, also both at Washington University, helped with computational work and calculations, and are also co-authors.
    Physicists who study condensed matter are intrigued by alpha-RuCl3 because they would like to exploit some of its antiferromagnetic properties in the search for quantum spin liquids.
    In this new study, the scientists report that alpha-RuCl3 is able to transfer charge to several different types of materials — not just graphene, Henriksen’s personal favorite.
    They also found that they only needed to place a single layer of alpha-RuCl3 on top of their devices to create and transfer charge. The process still works, even if the scientists slip a thin sheet of an electrically insulating material between the RuCl3 and the graphene.
    “We can control how much charge flows in by varying the thickness of the insulator,” Henriksen said. “Also, we are able to physically and spatially separate the source of charge from where it goes — this is called modulation doping.”
    Adding charge to a quantum spin liquid is one mechanism thought to underlie the physics of high-temperature superconductivity.
    “Anytime you do this, it could get exciting,” Henriksen said. “And usually you have to add atoms to bulk materials, which causes lots of disorder. But here, the charge flows right in, no need to change the chemical structure, so it’s a ‘clean’ way to add charge.”

    Story Source:
    Materials provided by Washington University in St. Louis. Original written by Talia Ogliore.

  • Drones could help create a quantum internet

    The quantum internet may be coming to you via drone.
    Scientists have now used drones to transmit particles of light, or photons, that share the quantum linkage called entanglement. The photons were sent to two locations a kilometer apart, researchers from Nanjing University in China report in a study to appear in Physical Review Letters.
    Entangled quantum particles can retain their interconnected properties even when separated by long distances. Such counterintuitive behavior can be harnessed to allow new types of communication. Eventually, scientists aim to build a global quantum internet that relies on transmitting quantum particles, which would enable ultrasecure communications: the particles could be used to create secret codes to encrypt messages. A quantum internet could also allow distant quantum computers to work together, or could be used to perform experiments that test the limits of quantum physics.
    Quantum networks made with fiber-optic cables are already beginning to be used (SN: 9/28/20). And a quantum satellite can transmit photons across China (SN: 6/15/17). Drones could serve as another technology for such networks, with the advantages of being easily movable as well as relatively quick and cheap to deploy.
    The researchers used two drones to transmit the photons. One drone created pairs of entangled particles, sending one particle to a station on the ground while relaying the other to the second drone. That machine then transmitted the particle it received to a second ground station a kilometer away from the first. In the future, fleets of drones could work together to send entangled particles to recipients in a variety of locations.