More stories

  • From atoms to materials: Algorithmic breakthrough unlocks path to sustainable technologies

    New research by the University of Liverpool could signal a step change in the quest to design the new materials that are needed to meet the challenge of net zero and a sustainable future.
    Publishing in the journal Nature, the Liverpool researchers have shown that a mathematical algorithm can predict, with a guarantee of correctness, the structure of any material based solely on knowledge of the atoms that make it up.
    Developed by an interdisciplinary team of researchers from the University of Liverpool’s Departments of Chemistry and Computer Science, the algorithm systematically evaluates entire sets of possible structures at once, rather than considering them one at a time, to accelerate identification of the correct solution.
    This breakthrough makes it possible to identify which materials can be made and, in many cases, to predict their properties. The new method was demonstrated on quantum computers, which have the potential to solve many problems faster than classical computers and could therefore speed up the calculations even further.
    Our way of life depends on materials — “everything is made of something.” New materials are needed to meet the challenge of net zero, from batteries and solar absorbers for clean power to low-energy computing and the catalysts that will make the clean polymers and chemicals for our sustainable future.
    This search is slow and difficult because there are so many ways that atoms could be combined to make materials, and in particular so many structures that could form. In addition, materials with transformative properties are likely to have structures that are different from those that are known today, and predicting a structure that nothing is known about is a tremendous scientific challenge.
    Professor Matt Rosseinsky, from the University’s Department of Chemistry and Materials Innovation Factory, said: “Having certainty in the prediction of crystal structures now offers the opportunity to identify from the whole of the space of chemistry exactly which materials can be synthesised and the structures that they will adopt, giving us for the first time the ability to define the platform for future technologies.
    “With this new tool, we will be able to define how to use those chemical elements that are widely available and begin to create materials to replace those based on scarce or toxic elements, as well as to find materials that outperform those we rely on today, meeting the future challenges of a sustainable society.”
    Professor Paul Spirakis, from the University’s Department of Computer Science, said: “We managed to provide a general algorithm for crystal structure prediction that can be applied to a diversity of structures. Coupling local minimization to integer programming allowed us to explore the unknown atomic positions in the continuous space using strong optimization methods in a discrete space.
    “Our aim is to explore and use more algorithmic ideas in the nice adventure of discovering new and useful materials. Joining efforts of chemists and computer scientists was the key to this success.”
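    The algorithm itself is set out in the Nature paper; purely to illustrate the general idea Spirakis describes, pairing a discrete search over candidate atomic arrangements with continuous local minimisation, here is a minimal toy sketch. The grid, the pair-energy model and every parameter are invented for the example; the published method formulates the discrete search as an integer program with provable guarantees, which this brute-force loop does not attempt.

    ```python
    # Illustrative sketch only: discrete candidate structures + continuous refinement.
    # The real Liverpool method encodes the discrete search as an integer program;
    # here we simply enumerate placements of a few atoms on a small grid.
    import itertools
    import numpy as np
    from scipy.optimize import minimize

    def pair_energy(flat_coords):
        """Toy Lennard-Jones-like cluster energy (not a real material model)."""
        pts = flat_coords.reshape(-1, 2)
        e = 0.0
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                r = np.linalg.norm(pts[i] - pts[j]) + 1e-9
                e += 1.0 / r**12 - 2.0 / r**6
        return e

    grid = [np.array([x, y], dtype=float) for x in range(3) for y in range(3)]
    best = (np.inf, None)
    for combo in itertools.combinations(grid, 3):           # discrete candidate structures
        x0 = np.concatenate(combo)
        res = minimize(pair_energy, x0, method="L-BFGS-B")  # continuous local refinement
        if res.fun < best[0]:
            best = (res.fun, res.x.reshape(-1, 2))

    print("lowest toy energy:", best[0])
    print("refined positions:\n", best[1])
    ```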
    The research team includes researchers from the University of Liverpool’s Departments of Computer Science and Chemistry, the Materials Innovation Factory and the Leverhulme Research Centre for Functional Materials Design, which was established to develop new approaches to the design of functional materials at the atomic scale through interdisciplinary research.
    This project has received funding from the Leverhulme Trust and the Royal Society. More

  • Deciphering the thermodynamic arrow of time in large-scale complex networks

    Life, from the perspective of thermodynamics, is a system out of equilibrium, resisting the tendency toward increasing disorder. In such a state, the dynamics are irreversible over time. This link between the tendency toward disorder and irreversibility was described as the arrow of time by the English physicist Arthur Eddington in 1927.
    Now, an international team including researchers from Kyoto University, Hokkaido University, and the Basque Center for Applied Mathematics has developed an exact solution for temporal asymmetry, furthering our understanding of the behavior of biological systems, machine learning, and AI tools.
    “The study offers, for the first time, an exact mathematical solution of the temporal asymmetry — also known as entropy production — of nonequilibrium disordered Ising networks,” says co-author Miguel Aguilera of the Basque Center for Applied Mathematics.
    The researchers focused on a prototype of large-scale complex networks called the Ising model, a tool used to study recurrently connected neurons. When connections between neurons are symmetric, the Ising model is in a state of equilibrium and presents complex disordered states called spin glasses. The mathematical solution of this state led to the award of the 2021 Nobel Prize in physics to Giorgio Parisi.
    Unlike living systems, however, spin glasses are in equilibrium and their dynamics are time-reversible. The researchers instead worked on the time-irreversible Ising dynamics caused by asymmetric connections between neurons.
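    For orientation, the class of model in question can be sketched in a few lines: a kinetic Ising network with parallel Glauber updates, in which asymmetric couplings make the dynamics irreversible. The sketch below simulates such a network with arbitrary parameters; it does not reproduce the paper's exact entropy-production solution.

    ```python
    # Minimal kinetic (Glauber) Ising simulation with asymmetric couplings.
    # Illustrates the model class only; the study derives exact results for the
    # entropy production of such networks, which this sketch does not compute.
    import numpy as np

    rng = np.random.default_rng(0)
    N, T = 20, 1000
    J = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))   # asymmetric: J[i, j] != J[j, i]
    np.fill_diagonal(J, 0.0)
    s = rng.choice([-1, 1], size=N).astype(float)

    trajectory = [s.copy()]
    for _ in range(T):
        h = J @ s                                  # local fields from the previous state
        p_up = 1.0 / (1.0 + np.exp(-2.0 * h))      # parallel Glauber update probabilities
        s = np.where(rng.random(N) < p_up, 1.0, -1.0)
        trajectory.append(s.copy())

    print("mean activity over the run:", np.mean(trajectory))
    ```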
    The exact solutions obtained serve as benchmarks for developing approximate methods for training artificial neural networks. The development of learning methods used in multiple phases may advance machine learning studies.
    “The Ising model underpins recent advances in deep learning and generative artificial neural networks. So, understanding its behavior offers critical insights into both biological and artificial intelligence in general,” added Hideaki Shimazaki at KyotoU’s Graduate School of Informatics.
    “Our findings are the result of an exciting collaboration involving insights from physics, neuroscience and mathematical modeling,” remarked Aguilera. “The multidisciplinary approach has opened the door to novel ways to understand the organization of large-scale complex networks and perhaps decipher the thermodynamic arrow of time.” More

  • Growing bio-inspired polymer brains for artificial neural networks

    A new method for connecting neurons in neuromorphic wetware has been developed by researchers from Osaka University and Hokkaido University. The wetware comprises conductive polymer wires grown in a three-dimensional configuration by applying a square-wave voltage to electrodes submerged in a precursor solution. Voltage pulses can then modify wire conductance, allowing the network to be trained. This fabricated network is able to perform unsupervised Hebbian learning and spike-based learning.
    The development of neural networks to create artificial intelligence in computers was originally inspired by how biological systems work. These ‘neuromorphic’ networks, however, run on hardware that looks nothing like a biological brain, which limits performance. Now, researchers from Osaka University and Hokkaido University plan to change this by creating neuromorphic ‘wetware’.
    While neural-network models have achieved remarkable success in applications such as image generation and cancer diagnosis, they still lag far behind the general processing abilities of the human brain. In part, this is because they are implemented in software using traditional computer hardware that is not optimized for the millions of parameters and connections that these models typically require.
    Neuromorphic wetware, based on memristive devices, could address this problem. A memristive device is a device whose resistance is set by its history of applied voltage and current. In this approach, electropolymerization is used to link electrodes immersed in a precursor solution using wires made of conductive polymer. The resistance of each wire is then tuned using small voltage pulses, resulting in a memristive device.
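    As a generic illustration of that definition (not a model of the polymer wires used in this study), a memristive element can be sketched as a conductance whose internal state drifts with the history of applied voltage; all constants below are arbitrary.

    ```python
    # Toy memristor: conductance depends on the history of applied voltage.
    # Generic illustration only; not a model of the PEDOT:PSS wires in the study.
    def simulate_memristor(voltages, dt=1e-3, g_min=1e-6, g_max=1e-3, mu=5.0):
        """Return the conductance trace for a sequence of applied voltages."""
        w, trace = 0.5, []                            # w: internal state in [0, 1]
        for v in voltages:
            w = min(1.0, max(0.0, w + mu * v * dt))   # state drifts with applied voltage
            g = g_min + w * (g_max - g_min)           # conductance set by the state
            trace.append(g)
        return trace

    # A train of positive pulses raises the conductance; negative pulses lower it again.
    pulses = [1.0] * 50 + [0.0] * 20 + [-1.0] * 50
    print(simulate_memristor(pulses)[::30])
    ```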
    “The potential to create fast and energy-efficient networks has been shown using 1D or 2D structures,” says senior author Megumi Akai-Kasaya. “Our aim was to extend this approach to the construction of a 3D network.”
    The researchers were able to grow polymer wires from a common polymer mixture called ‘PEDOT:PSS’, which is highly conductive, transparent, flexible, and stable. A 3D structure of top and bottom electrodes was first immersed in a precursor solution. The PEDOT:PSS wires were then grown between selected electrodes by applying a square-wave voltage on these electrodes, mimicking the formation of synaptic connections through axon guidance in an immature brain.
    Once the wire was formed, the characteristics of the wire, especially the conductance, were controlled using small voltage pulses applied to one electrode, which changes the electrical properties of the film surrounding the wires.
    “The process is continuous and reversible,” explains lead author Naruki Hagiwara, “and this characteristic is what enables the network to be trained, just like software-based neural networks.”
    The fabricated network was used to demonstrate unsupervised Hebbian learning (i.e., when synapses that often fire together strengthen their shared connection over time). What’s more, the researchers were able to precisely control the conductance values of the wires so that the network could complete its tasks. Spike-based learning, another approach to neural networks that more closely mimics the processes of biological neural networks, was also demonstrated by controlling the diameter and conductivity of the wires.
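    For readers unfamiliar with the term, the sketch below shows the software analogue of unsupervised Hebbian learning, strengthening a connection whenever its two endpoints are active together. It illustrates the learning rule only, not the wetware hardware; the firing probabilities and learning rate are arbitrary.

    ```python
    # Hebbian update: connections whose endpoints are co-active get strengthened.
    # Software analogy of the behaviour the wetware network demonstrated.
    import numpy as np

    rng = np.random.default_rng(1)
    n_units, n_steps, lr = 4, 500, 0.01
    W = np.zeros((n_units, n_units))

    for _ in range(n_steps):
        x = (rng.random(n_units) < [0.8, 0.8, 0.1, 0.1]).astype(float)  # units 0 and 1 often co-fire
        W += lr * np.outer(x, x)                                        # Hebbian strengthening
        np.fill_diagonal(W, 0.0)

    print(np.round(W, 2))   # the 0-1 connection ends up much stronger than 0-2 or 0-3
    ```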
    Next, by fabricating a chip with a larger number of electrodes and using microfluidic channels to supply the precursor solution to each electrode, the researchers hope to build a larger and more powerful network. Overall, the approach determined in this study is a big step toward the realization of neuromorphic wetware and closing the gap between the cognitive abilities of humans and computers. More

  • Antarctic sea ice has been hitting record lows for most of this year

    Something strange is happening to the Antarctic’s sea ice. The areal expanse of floating ice fringing the continent is not only at a record low for this time of year — surpassing a record just set in 2022 — but ice extent has been hitting record lows throughout the year.

    “What’s happened here is unlike the Arctic sea ice expanse,” says Mark Serreze, a climate scientist and the director of the U.S. National Snow and Ice Data Center, or NSIDC, in Boulder, Colo. We’ve come to expect a dramatic decline in sea ice at Earth’s other pole, he says (SN: 9/25/19). “Not much has happened to Antarctica’s sea ice until the last few years. But it’s just plummeted.”

    NSIDC uses satellite-gleaned data, collected daily, to keep an eye on the spread of sea ice at both poles. Throughout most of 2023, the ring of sea ice around Antarctica has repeatedly set new record lows, staying well below the average extent from 1981 to 2010. On February 21 — the height of the Southern Hemisphere’s summer — the sea ice expanse hit an all-time low of 1.79 million square kilometers, the smallest since record-keeping began in 1978. That’s 130,000 square kilometers — about the size of the state of New York — smaller than the previous recorded minimum, reached on February 25, 2022.

    Subpar sea ice

    The amount of ocean around Antarctica covered in sea ice in 2023 (red) has stayed well below the average from 1981 to 2010 (black). Sea ice expanse hit a record low in late February — surpassing a record set just in 2022 (blue). The sea ice expanse for every year from 1981 to 2021 is shown in gray.

    Even as the Southern Hemisphere shifted into winter, Antarctic sea ice remained at record low levels. On June 27, the ice was dotted across about 11.7 million square kilometers of ocean. That’s about 2.6 million square kilometers below the 1981–2010 average, and about 1.2 million square kilometers below the previous lowest extent on record for June 27, set in 2022.

    Unlike Arctic ice, whose dwindling is known to be closely tied to global warming, it’s been harder to parse the reasons for changes in Antarctic sea ice extent. That difficulty has made it unclear whether changes are the result of natural variability or whether “something big has changed,” Serreze says.

    As of June 28, the sea ice surrounding Antarctica, as measured by satellite, covered a smaller area of ocean than the average extent from 1981 to 2010 for this time of year. Yellow lines and dots represent missing satellite data. (Image: U.S. National Snow and Ice Data Center)

    The last few years have given scientists pause (SN: 6/27/17). “We’re kind of dropping off an edge,” Serreze says. It’s not yet clear whether this year’s extent is part of a larger trend, he notes. But “the longer that persists, the more likely it is that something big is happening.”

    The Arctic and the Antarctic regions are polar opposites, so to speak, in their geographic setting. Ice in the Arctic Ocean is confined to a relatively small body of water ringed by land. The Antarctic, by contrast, is a landmass surrounded by ocean, which means the sea ice around the continent is much more mobile than up north, with a larger seasonal range as it expands in the Southern Hemisphere’s winter and shrinks in summer. Climate simulations have, accordingly, consistently predicted that the Arctic would show bigger sea ice losses as the planet warms, at least at first, while Antarctica would be slower to respond.

    As to why the Antarctic ice has tracked so low this year, there are a few possible culprits. Regional climate patterns — particularly an air pressure pattern known as the Southern Annular Mode that shifts the direction of winds blowing around the continent — can pack or diffuse the sea ice cover around Antarctica. And other regional patterns, such as the El Niño Southern Oscillation, can affect both ocean and air circulation in the southern high latitudes.

    Right now, scientists are concerned most with what lies beneath the ice (SN: 12/13/21). “There’s growing evidence that there has been some kind of change in ocean circulation that is bringing more heat” to the region, which affects the ice cover, Serreze says. “There are a bunch of people looking into this; we’re really blitzing to get the data. We need to understand what the heck is going on in the ocean.” More

  • ‘Workplace AI revolution isn’t happening yet,’ survey shows

    The UK risks a growing divide between organisations that have invested in new, artificial intelligence-enabled digital technologies and those that haven’t, new research suggests.
    Only 36% of UK employers have invested in AI-enabled technologies like industrial robots, chat bots, smart assistants and cloud computing over the past five years, according to a nationally representative survey from the Digital Futures at Work Research Centre (Digit). The survey was carried out between November 2021 and June 2022, with a second wave now underway.
    Academics at the University of Leeds, with colleagues at the Universities of Sussex and Cambridge, led the research, finding that just 10% of employers who hadn’t already invested in AI-enabled technologies were planning to invest in the next two years.
    The new data also points to a growing skills problem. Less than 10% of employers anticipated a need to make an investment in digital skills training in the coming years, despite 75% finding it difficult to recruit people with the right skills. Almost 60% of employers reported that none of their employees had received formal digital skills training in the past year.
    Lead researcher Professor Mark Stuart, Pro Dean for Research and Innovation at Leeds University Business School, said: “A mix of hope, speculation, and hype is fuelling a runaway narrative that the adoption of new AI-enabled digital technologies will rapidly transform the UK’s labour market, boosting productivity and growth. These hopes are often accompanied by fears about the consequences for jobs and even of existential risk.
    “However, our findings suggest there is a need to focus on a different policy challenge. The workplace AI revolution is not happening quite yet. Policymakers will need to address both low employer investment in digital technologies and low investment in digital skills, if the UK economy is to realise the potential benefits of digital transformation.”
    Stijn Broecke, Senior Economist at the Organisation for Economic Co-operation and Development (OECD), said: “At a time when AI is shifting digitalisation into a higher gear, it is important to move beyond the hype and have a debate that is driven by evidence rather than fear and anecdote. This new report by the Digital Futures at Work Research Centre (Digit) does exactly this and provides a nuanced picture of the impact of digital technologies on the workplace, highlighting both the risks and the opportunities.”
    The main reasons for investing were improving efficiency, productivity and product and service quality, according to the survey. On the other hand, the key reasons for non-investment were AI being irrelevant to the business activity, wider business risks and the nature of skills demanded.
    There was little evidence in this survey to suggest that investing in AI-enabled technology leads to job losses. In fact, digital adopters were more likely to have increased their employment in the five-year period before the survey.
    As policymakers race to keep up with new developments in technology, the researchers are now urging politicians to focus on the facts of AI in the workplace.
    The Employers’ Digital Practices at Work Survey is a key output of the Digital Futures at Work Research Centre, which is funded by the Economic and Social Research Council (ESRC) and co-led by the business schools of the Universities of Sussex and Leeds. The First Findings report will be available on the Digit website on Tuesday 4 July. More

  • Counting Africa’s largest bat colony

    Once a year, a small forest in Zambia becomes the site of one of the world’s greatest natural spectacles. In November, straw-colored fruit bats migrate from across the African continent to a patch of trees in Kasanka National Park. For reasons not yet known, the bats converge for three months in a small area of the park, forming the largest colony of bats anywhere in Africa.
    The exact number of bats in this colony, however, has never been known. Estimates range anywhere from 1 to 10 million.
    A new method developed by the Max Planck Institute of Animal Behavior (MPI-AB) has counted the colony with the greatest accuracy yet. The method uses GoPro cameras to record bats and then applies artificial intelligence (AI) to detect animals without the need for human observers. The method, published in the journal Ecosphere, produced an overall estimate of between 750,000 and 1,000,000 bats in Kasanka — making the colony the largest for bats by biomass anywhere in the world.
    “We’ve shown that cheap cameras, combined with AI, can be used to monitor large animal populations in ways that would otherwise be impossible,” says Ben Koger who is first author on the paper. “This approach will change what we know about the natural world and how we work to maintain it in the face of rapid human development and climate change.”
    Africa’s secret gardeners
    Even amongst the charismatic fauna of the African continent, the straw-colored fruit bat shines bright. By some estimates, it’s the most abundant mammal anywhere on the continent. And, by traveling up to two thousand kilometers every year, it’s also the most extreme long-distance migrant of any flying fox. From an environmental perspective, these merits matter a lot. By dispersing seeds as they fly over vast distances, the fruit bats are cardinal reforesters of degraded land — making them a “keystone” species on the African continent.
    Scientists have long sought to estimate colony sizes of this important species, but the challenges of manually counting very large populations have led to widely fluctuating numbers. That’s always frustrated Dina Dechmann, a biologist from the MPI-AB, who has studied straw-colored fruit bats for over 10 years. Concerned that she has witnessed a decline in numbers of these fruit bats over her career, Dechmann wanted a tool that could accurately reveal if populations were changing. That is, she needed a way of counting bats that was reproducible and comparable across time.
    “Straw-colored fruit bats are the secret gardeners of Africa,” says Dechmann. “They connect the continent in ways that no other seed disperser does. A loss of the species would be devastating for the ecosystem. So, if the population is decreasing at all, we urgently need to know.”
    Dechmann began talking to longtime collaborators Roland Kays from NC State University and Teague O’Mara from Southeastern Louisiana University, as well as Kasanka Trust, the Zambian conservation organization responsible for managing Kasanka National Park and protecting its colony of bats. Together, they wondered if advances in computer vision and artificial intelligence could improve the accuracy and efficiency of counting large and complex bat populations. To find out, they approached Ben Koger, then a doctoral student at the MPI-AB, who was an expert in using automated approaches to create ecological datasets.

    Accurate and comparable bat counts
    Koger worked to devise a method that could be used by scientists and conservation managers to efficiently quantify the complex system. His method comprised two main steps. First, nine GoPro cameras were set up evenly around the colony to record the bats as they left the roost at dusk. Second, Koger trained deep learning models to automatically detect and count bats in the videos. To test the method’s accuracy, the team manually counted bats in a sample of clips and found the AI was 95% accurate — it even worked well in dark conditions.
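    The detection pipeline itself is described in the Ecosphere paper; the sketch below illustrates only the final bookkeeping, combining per-camera detection counts into a nightly total and checking the detector against manually counted clips. All numbers and names are placeholders, not data from the Kasanka study.

    ```python
    # Illustrative bookkeeping only: combine per-camera detection counts into a
    # nightly total and check detector accuracy against manually counted clips.
    # All values are placeholders, not data from the study.

    camera_counts = {f"cam{i}": c for i, c in enumerate(
        [95_000, 88_000, 102_000, 91_000, 85_000, 99_000, 93_000, 87_000, 90_000])}

    nightly_estimate = sum(camera_counts.values())
    print(f"estimated emergence this night: {nightly_estimate:,} bats")

    # Validation: compare automated counts with manual counts on a sample of clips.
    manual = [412, 388, 450, 301]
    automated = [405, 392, 431, 298]
    accuracy = 1 - sum(abs(a - m) for a, m in zip(automated, manual)) / sum(manual)
    print(f"detector agreement with manual counts: {accuracy:.1%}")
    ```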
    “Using more sophisticated technology to monitor a colony as giant as Kasanka’s could be prohibitively expensive because you’d need so much equipment,” says Koger. “But we could show that cheap cameras paired with our custom software algorithms did very well at detecting and counting bats at our study site. This is hugely important for monitoring the site in the future.”
    Recording bats over five nights, the new method counted an average of between around 750,000 and 1,000,000 animals per night. This result falls below previous counts at Kasanka, and the authors state that the study might not have caught the peak of bat migration, and some animals might have arrived after the count period. Even so, the study’s estimate makes Kasanka’s colony the heaviest congregation of bats anywhere in the world.
    Says Dechmann: “This is a game-changer for counting and conserving large populations of animals. Now, we have an efficient and reproducible way of monitoring animals over time. If we use this same method to census animals every year, we can actually say if the population is going up or down.”
    For the Kasanka colony, which is facing threats from agriculture and construction, Dechmann says that the need for accurate monitoring has never been more urgent than now.
    “It’s easy to assume that losing a few animals here and there from large populations won’t make a dent. But if we are to maintain the ecosystem services provided by these animals, we need to maintain their populations at meaningful levels. The Kasanka colony isn’t just one of many; it’s a sink colony of bats from across the subcontinent. Losing this colony would be devastating for Africa as a whole.” More

  • Limiting loss in leaky fibers

    A theoretical understanding of the relationship between the geometrical structure of hollow-core optical fibres and their leakage loss will inspire the design of novel low-loss fibres.
    Immense progress has been made in recent years to increase the efficiency of optical fibres through the design of cables that allow data to be transmitted both faster and at broader bandwidths. The greatest improvements have been made in the area of hollow-core fibres — a type of fibre that is notoriously ‘leaky’ yet also essential for many applications.
    Now, for the first time, scientists have figured out why some air-filled fibre designs work so much more efficiently than others.
    The puzzle has been solved by recent PhD graduate Dr Leah Murphy and Emeritus Professor David Bird from the Centre for Photonics and Photonic Materials at the University of Bath.
    The researchers’ theoretical and computational analysis gives a clear explanation for a phenomenon that other physicists have observed in practice: that a hollow-centred optical fibre incorporating glass filaments into its design exhibits ultra-low loss of light as it travels from source to destination.
    Dr Murphy said: “The work is exciting because it adds a new perspective to a 20-year-long conversation about how antiresonant, hollow-core fibres guide light. I’m really optimistic that this will encourage researchers to try out interesting new hollow-core fibre designs where light loss is kept ultra-low.”
    The communication revolution

    Optical fibres have transformed communications in recent years, playing a vital role in enabling the enormous growth of fast data transmission. Specially designed fibres have also become key in the fields of imaging, lasers and sensing (as seen, for instance, in pressure and temperature sensors used in harsh environments).
    The best fibres have some astounding properties — for example, a pulse of light can travel over 50km along a standard silica glass fibre and still retain more than 10% of its original intensity (an equivalent would be the ability to see through 50km of water).
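    As a quick back-of-envelope check (not given in the article, but consistent with the roughly 0.2 dB/km attenuation of modern silica telecom fibre), retaining 10% of the launched intensity over 50 km corresponds to

    $$
    10\log_{10}(0.1) = -10\ \text{dB over }50\ \text{km}
    \quad\Longrightarrow\quad
    \frac{10\ \text{dB}}{50\ \text{km}} = 0.2\ \text{dB/km}.
    $$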
    But the fact that light is guided through a solid material means current fibres have some drawbacks. Silica glass becomes opaque when the light it is attempting to transmit falls within the mid-infrared and ultraviolet ends of the electromagnetic spectrum. This means applications that need light at these wavelengths (such as spectrometry and instruments used by astrophysicists) cannot use standard fibres.
    In addition, high-intensity light pulses are distorted in standard fibres and they can even destroy the fibre itself.
    Researchers have been working hard to find solutions to these drawbacks, putting their efforts into developing optical fibres that guide light through air rather than glass.

    This, however, brings its own set of problems: a fundamental property of light is that it doesn’t like to be confined in a low-density region like air. Optical fibres that use air rather than glass are intrinsically leaky (the way a hosepipe would be if water could seep through the sides).
    The confinement loss (or leakage loss) is a measure of how much light intensity is lost as it moves through the fibres, and a key research goal is to improve the design of the fibre’s structure to minimise this loss.
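    The article does not define the quantity mathematically, but in the fibre-optics literature confinement loss is conventionally obtained from the imaginary part of a guided mode's effective index; the expression below is that standard textbook formula, given here for orientation rather than taken from the Bath paper:

    $$
    \alpha_{\mathrm{CL}} = \frac{20}{\ln 10}\,k_0\,\operatorname{Im}(n_{\mathrm{eff}})
    = \frac{40\pi}{\lambda\,\ln 10}\,\operatorname{Im}(n_{\mathrm{eff}})
    \quad [\text{dB per unit length}],
    $$

    where $k_0 = 2\pi/\lambda$ is the free-space wavenumber and $n_{\mathrm{eff}}$ is the complex effective index of the leaky mode.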
    Hollow cores
    The most promising designs involve a central hollow core surrounded and confined by a specially designed cladding. Slotted within the cladding are hollow, ultra-thin-walled glass capillaries attached to an outer glass jacket.
    Using this set-up, the loss performance of the hollow-core fibre is close to that of a conventional fibre.
    An intriguing feature of these hollow-core fibres is that a theoretical understanding of how and why they guide light so well has not kept up with experimental progress.
    For around two decades, scientists have had a good physical understanding of how the thin glass capillary walls that face the hollow core (green in the diagram) act to reflect light back into the core and thus prevent leakage. But a theoretical model that includes only this mechanism greatly overestimates the confinement loss, and the question of why real fibres guide light far more effectively than the simple theoretical model would predict has, until now, remained unanswered.
    Dr Murphy and Professor Bird describe their model in a paper published this week in the leading journal Optica.
    The theoretical and computational analysis focuses on the role played by sections of the glass capillary walls (red in the diagram) that face neither the inner core nor the outer wall of the fibre structure.
    The Bath researchers show that, as well as supporting the core-facing elements of the cladding, these sections play a crucial role in guiding light by imposing a structure on the wave fields of the propagating light (grey curved lines in the diagram). The authors have named the effect of these structures ‘azimuthal confinement’.
    Although the basic idea of how azimuthal confinement works is simple, the concept is shown to be remarkably powerful in explaining the relationship between the geometry of the cladding structure and the confinement loss of the fibre.
    Dr Murphy, first author of the paper, said: “We expect the concept of azimuthal confinement to be important to other researchers who are studying the effect of light leakage from hollow-core fibres, as well as those who are involved in developing and fabricating new designs.”
    Professor Bird, who led the project, added: “This was a really rewarding project that needed the time and space to think about things in a different way and then work through all the details.
    “We started working on the problem in the first lockdown and it has now been keeping me busy through the first year of my retirement. The paper provides a new way for researchers to think about leakage of light in hollow-core fibres, and I’m confident it will lead to new designs being tried out.”
    Dr Murphy was funded by the UK Engineering and Physical Sciences Research Council. More

  • AI and CRISPR precisely control gene expression

    Artificial intelligence can predict on- and off-target activity of CRISPR tools that target RNA instead of DNA, according to new research published in Nature Biotechnology.
    The study by researchers at New York University, Columbia Engineering, and the New York Genome Center combines a deep learning model with CRISPR screens to control the expression of human genes in different ways, like flicking a light switch to shut them off completely or using a dimmer knob to partially turn down their activity. These precise gene controls could be used to develop new CRISPR-based therapies.
    CRISPR is a gene editing technology with many uses in biomedicine and beyond, from treating sickle cell anemia to engineering tastier mustard greens. It often works by targeting DNA using an enzyme called Cas9. In recent years, scientists discovered another type of CRISPR that instead targets RNA using an enzyme called Cas13.
    RNA-targeting CRISPRs can be used in a wide range of applications, including RNA editing, knocking down RNA to block expression of a particular gene, and high-throughput screening to determine promising drug candidates. Researchers at NYU and the New York Genome Center created a platform for RNA-targeting CRISPR screens using Cas13 to better understand RNA regulation and to identify the function of non-coding RNAs. Because RNA is the main genetic material in viruses including SARS-CoV-2 and flu, RNA-targeting CRISPRs also hold promise for developing new methods to prevent or treat viral infections. Also, in human cells, when a gene is expressed, one of the first steps is the creation of RNA from the DNA in the genome.
    A key goal of the study is to maximize the activity of RNA-targeting CRISPRs on the intended target RNA and minimize activity on other RNAs, which could have detrimental side effects for the cell. Off-target activity includes both mismatches between the guide and target RNA and insertion and deletion mutations. Earlier studies of RNA-targeting CRISPRs focused only on on-target activity and mismatches; predicting off-target activity, particularly insertion and deletion mutations, has not been well studied. In human populations, about one in five mutations are insertions or deletions, so these are important types of potential off-targets to consider for CRISPR design.
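    To make that off-target design space concrete, the snippet below enumerates the variant classes the study profiles (single-nucleotide mismatches, single-base insertions and single-base deletions) for a made-up guide sequence. The sequence is invented for illustration and is not taken from the screens.

    ```python
    # Enumerate off-target variants of a guide sequence: single mismatches,
    # single-base insertions and single-base deletions (illustrative guide only).
    BASES = "ACGU"
    guide = "GACUGGACUUGCAUCGAAUC"   # made-up 20-nt guide, not from the study

    mismatches = {guide[:i] + b + guide[i+1:]
                  for i in range(len(guide)) for b in BASES if b != guide[i]}
    insertions = {guide[:i] + b + guide[i:] for i in range(len(guide) + 1) for b in BASES}
    deletions = {guide[:i] + guide[i+1:] for i in range(len(guide))}

    print(len(mismatches), "single-mismatch variants")   # 20 positions x 3 alternative bases = 60
    print(len(insertions), "single-insertion variants")
    print(len(deletions), "single-deletion variants")
    ```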
    “Similar to DNA-targeting CRISPRs such as Cas9, we anticipate that RNA-targeting CRISPRs such as Cas13 will have an outsized impact in molecular biology and biomedical applications in the coming years,” said Neville Sanjana, associate professor of biology at NYU, associate professor of neuroscience and physiology at NYU Grossman School of Medicine, a core faculty member at New York Genome Center, and the study’s co-senior author. “Accurate guide prediction and off-target identification will be of immense value for this newly developing field and therapeutics.”
    In their study in Nature Biotechnology, Sanjana and his colleagues performed a series of pooled RNA-targeting CRISPR screens in human cells. They measured the activity of 200,000 guide RNAs targeting essential genes in human cells, including both “perfect match” guide RNAs and off-target mismatches, insertions, and deletions.

    Sanjana’s lab teamed up with the lab of machine learning expert David Knowles to engineer a deep learning model they named TIGER (Targeted Inhibition of Gene Expression via guide RNA design) that was trained on the data from the CRISPR screens. Comparing the predictions generated by the deep learning model and laboratory tests in human cells, TIGER was able to predict both on-target and off-target activity, outperforming previous models developed for Cas13 on-target guide design and providing the first tool for predicting off-target activity of RNA-targeting CRISPRs.
    “Machine learning and deep learning are showing their strength in genomics because they can take advantage of the huge datasets that can now be generated by modern high-throughput experiments. Importantly, we were also able to use ‘interpretable machine learning’ to understand why the model predicts that a specific guide will work well,” said Knowles, assistant professor of computer science and systems biology at Columbia Engineering, a core faculty member at New York Genome Center, and the study’s co-senior author.
    “Our earlier research demonstrated how to design Cas13 guides that can knock down a particular RNA. With TIGER, we can now design Cas13 guides that strike a balance between on-target knockdown and avoiding off-target activity,” said Hans-Hermann (Harm) Wessels, the study’s co-first author and a senior scientist at the New York Genome Center, who was previously a postdoctoral fellow in Sanjana’s laboratory.
    The researchers also demonstrated that TIGER’s off-target predictions can be used to precisely modulate gene dosage — the amount of a particular gene that is expressed — by enabling partial inhibition of gene expression in cells with mismatch guides. This may be useful for diseases in which there are too many copies of a gene, such as Down syndrome, certain forms of schizophrenia, Charcot-Marie-Tooth disease (a hereditary nerve disorder), or in cancers where aberrant gene expression can lead to uncontrolled tumor growth.
    “Our deep learning model can tell us not only how to design a guide RNA that knocks down a transcript completely, but can also ‘tune’ it — for instance, having it produce only 70% of the transcript of a specific gene,” said Andrew Stirn, a PhD student at Columbia Engineering and the New York Genome Center, and the study’s co-first author.
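    Conceptually, that dosage-tuning step amounts to choosing, from a table of predicted knockdown levels, the guide whose residual expression is closest to the desired level. The sketch below uses a hypothetical table of predicted values and invented guide names; it is not the TIGER interface.

    ```python
    # Pick the guide whose predicted residual expression is closest to a desired level.
    # 'predicted' is a hypothetical {guide: predicted fraction of transcript remaining}
    # table; the study derives such scores from the TIGER model.
    predicted = {
        "perfect_match":    0.05,   # near-complete knockdown
        "mismatch_guide_A": 0.32,
        "mismatch_guide_B": 0.68,
        "mismatch_guide_C": 0.88,
    }

    target_residual = 0.70   # e.g. keep roughly 70% of the transcript, as in the quote above
    best_guide = min(predicted, key=lambda g: abs(predicted[g] - target_residual))
    print(best_guide, predicted[best_guide])   # -> mismatch_guide_B 0.68
    ```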
    By combining artificial intelligence with an RNA-targeting CRISPR screen, the researchers envision that TIGER’s predictions will help avoid undesired off-target CRISPR activity and further spur development of a new generation of RNA-targeting therapies.
    “As we collect larger datasets from CRISPR screens, the opportunities to apply sophisticated machine learning models are growing rapidly. We are lucky to have David’s lab next door to ours to facilitate this wonderful, cross-disciplinary collaboration. And, with TIGER, we can predict off-targets and precisely modulate gene dosage which enables many exciting new applications for RNA-targeting CRISPRs for biomedicine,” said Sanjana.
    Additional study authors include Alejandro Méndez-Mancilla and Sydney K. Hart of NYU and the New York Genome Center, and Eric J. Kim of Columbia University. The research was supported by grants from the National Institutes of Health (DP2HG010099, R01CA218668, R01GM138635), DARPA (D18AP00053), the Cancer Research Institute, and the Simons Foundation for Autism Research Initiative. More