More stories

  • Better models of atmospheric ‘detergent’ can help predict climate change

    Earth’s atmosphere has a unique ability to cleanse itself by way of invisible molecules in the air that act as minuscule cleanup crews. The most important molecule in that crew is the hydroxyl radical (OH), nicknamed the “detergent of the atmosphere” because of its dominant role in removing pollutants. When OH reacts with a variety of harmful gases, including the potent greenhouse gas methane, it breaks the pollutants down into forms that can be removed from Earth’s atmosphere.
    It is difficult to measure OH, however, and it is not directly emitted. Instead, researchers predict the presence of OH based on its chemical production from other, “precursor” gases. To make these predictions, researchers use computer simulations.
    In a new paper published in the journal PNAS, Lee Murray, an assistant professor of earth and environmental sciences at the University of Rochester, outlines why computer models used to predict future levels of OH — and, therefore, how long air pollutants and reactive greenhouse gases last in the atmosphere — have traditionally produced widely varying forecasts. The study is the latest in Murray’s efforts to develop models of the dynamics and composition of Earth’s atmosphere and has important implications in advancing policies to combat climate change.
    “We need to understand what controls changes in hydroxyl radical in Earth’s atmosphere in order to give us a better idea of the measures we need to take to rid the atmosphere of pollutants and reactive greenhouse gases,” Murray says.
    Building accurate computer models to predict OH levels is similar to baking: just as you must add precise ingredients in the proper amounts and order to make an edible cake, precise data and metrics must be input into computer models to make them more accurate.
    The various existing computer models used to predict OH levels have traditionally been run with identical input emissions of OH precursor gases. Murray and his colleagues, however, demonstrated that OH levels depend strongly on how much of those precursor emissions is lost before the gases can react to produce OH. In the baking analogy, different bakers follow the same recipe of ingredients (emissions) but end up with different sizes of cake (OH levels), because each baker throws out a different portion of the batter partway through the process.
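    To make the bakers’ problem concrete, here is a minimal zero-dimensional sketch (not Murray’s model; all rates and units are invented for illustration) of how identical precursor emissions yield different steady-state OH levels once each model assumes a different pre-reaction loss fraction:

    ```python
    # Toy illustration (not Murray's model): identical precursor emissions E
    # produce different steady-state OH when models assume different fractions
    # of the precursor are lost before it can react to form OH.

    def steady_state_oh(emissions, loss_fraction, oh_yield=1.0, sink_rate=1.0):
        """Steady state: OH production from surviving precursor balances a linear OH sink."""
        production = emissions * (1.0 - loss_fraction) * oh_yield
        return production / sink_rate

    E = 100.0  # the same "recipe of ingredients" for every model (arbitrary units)
    for loss in (0.1, 0.3, 0.5):  # hypothetical per-model loss assumptions
        print(f"loss fraction {loss:.0%} -> OH = {steady_state_oh(E, loss):.0f}")
    # Same emissions, very different OH (90, 70, 50): the spread mirrors
    # the divergence between the models' forecasts.
    ```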
    “Uncertainties in future predictions are primarily driven by uncertainties in how models implement the fate of reactive gases that are directly emitted,” Murray says.
    As Murray and his colleagues show, the computer models used to predict OH levels must evaluate the loss processes of reactive precursor gases, before they may be used for accurate future predictions.
    But more data is needed about these processes, Murray says.
    “Performing new measurements to constrain these processes will allow us to provide more accurate data about the amount of hydroxyl in the atmosphere and how it may change in the future,” he says.
    Story Source:
    Materials provided by University of Rochester. Original written by Lindsey Valich. Note: Content may be edited for style and length.

  • Scientists build on AI modeling to understand more about protein-sugar structures

    New research building on AI algorithms has enabled scientists to create more complete models of the protein structures in our bodies — paving the way for faster design of therapeutics and vaccines.
    The study — led by the University of York — used artificial intelligence (AI) to help researchers understand more about the sugar that surrounds most proteins in our bodies.
    Up to 70 per cent of human proteins are surrounded or scaffolded with sugar, which plays an important part in how they look and act. Moreover, some viruses, like those behind AIDS, flu, Ebola and COVID-19, are also shielded behind sugars (glycans). The addition of these sugars to a protein is a form of modification known as glycosylation.
    To study the proteins, the researchers created software that adds the missing sugar components to models produced by AlphaFold, the artificial intelligence program developed by Google’s DeepMind that predicts protein structures.
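    As a rough illustration of where such missing sugars belong (this is not the York group’s software), a few lines of Python can scan a protein sequence for N-glycosylation “sequons,” the Asn-X-Ser/Thr motifs (X ≠ Pro) where N-linked glycans are typically attached. The sequence fragment below is hypothetical:

    ```python
    import re

    # Minimal sketch (not the York group's software): find N-glycosylation
    # sequons (Asn-X-Ser/Thr, X != Pro) -- the sites where N-linked glycans
    # attach and where a bare AlphaFold model is therefore missing sugars.

    def find_sequons(sequence: str) -> list[int]:
        """Return 0-based positions of N-X-[S/T] motifs (X != P)."""
        return [m.start() for m in re.finditer(r"N(?=[^P][ST])", sequence)]

    fragment = "MKTNASGGLPNVTQWNPTA"  # hypothetical fragment, for illustration only
    print(find_sequons(fragment))     # [3, 10]: candidate glycosylation sites
    ```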
    Senior author Dr Jon Agirre from the Department of Chemistry said: “The proteins of the human body are tiny machines that, in their billions, make up our flesh and bones, transport our oxygen, allow us to function, and defend us from pathogens. And just like a hammer relies on a metal head to strike pointy objects including nails, proteins have specialised shapes and compositions to get their jobs done.”
    “The AlphaFold method for protein structure prediction has the potential to revolutionise workflows in biology, allowing scientists to understand a protein and the impact of mutations faster than ever.”
    “However, the algorithm does not account for essential modifications that affect protein structure and function, which gives us only part of the picture. Our research has shown that this can be addressed in a relatively straightforward manner, leading to a more complete structural prediction.”
    The recent introduction of AlphaFold and the accompanying database of protein structures has enabled scientists to have accurate structure predictions for all known human proteins.
    Dr Agirre added: “It is always great to watch an international collaboration grow to bear fruit, but this is just the beginning for us. Our software was used in the glycan structural work that underpinned the mRNA vaccines against SARS-CoV-2, but now there is so much more we can do thanks to the AlphaFold technological leap. It is still early stages, but the objective is to move on from reacting to changes in a glycan shield to anticipating them.”
    The research was conducted with Dr Elisa Fadda and Carl A. Fogarty from Maynooth University. Haroldas Bagdonas, PhD student at the York Structural Biology Laboratory, which is part of the Department of Chemistry, also worked on the study with Dr Agirre.
    Story Source:
    Materials provided by University of York. Note: Content may be edited for style and length.

  • Lithium-ion batteries made with recycled materials can outlast newer counterparts

    Lithium-ion batteries with recycled cathodes can outperform batteries with cathodes made from pristine materials, lasting for thousands of additional charging cycles, a study finds. Growing demand for these batteries — which power devices from smartphones to electric vehicles — may outstrip the world’s supply of some crucial ingredients, such as cobalt (SN: 5/7/19). Ramping up recycling could help avert a potential shortage. But some manufacturers worry that impurities in recycled materials may cause battery performance to falter.

    “Based on our study, recycled materials can perform as well as, or even better than, virgin materials,” says materials scientist Yan Wang of Worcester Polytechnic Institute in Massachusetts.

    Using shredded spent batteries, Wang and colleagues extracted the electrodes and dissolved the metals from those battery bits in an acidic solution. By tweaking the solution’s pH, the team removed impurities such as iron and copper and recovered over 90 percent of three key metals: nickel, manganese and cobalt. The recovered metals formed the basis for the team’s cathode material.

    In tests of how well batteries maintain their capacity to store energy after repeated use and recharging, batteries with recycled cathodes outperformed ones made with brand-new commercial materials of the same composition. It took 11,600 charging cycles for the batteries with recycled cathodes to lose 30 percent of their initial capacity. That’s about 50 percent better than the respectable 7,600 cycles for the batteries with new cathodes, the team reports October 15 in Joule. Those thousands of extra cycles could translate into years of better battery performance, Wang says.
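    For readers who want to check the comparison, the “about 50 percent” figure follows directly from the reported cycle counts:

    ```python
    # Cycle-life comparison using the figures reported in the Joule study.
    recycled_cycles = 11_600  # cycles to 30% capacity loss, recycled cathode
    new_cycles = 7_600        # cycles to 30% capacity loss, new cathode

    improvement = (recycled_cycles - new_cycles) / new_cycles
    print(f"Recycled cathodes lasted {improvement:.0%} longer")  # ~53%, i.e. "about 50 percent better"
    ```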

  • Researchers move closer to controlling two-dimensional graphene

    The device you are currently reading this article on was born from the silicon revolution. To build modern electrical circuits, researchers control silicon’s current-conducting capabilities via doping, a process that introduces either negatively charged electrons or positively charged “holes” where electrons used to be. For silicon, doping involves injecting other atomic elements that adjust the number of electrons, known as dopants, into its three-dimensional (3D) atomic lattice, allowing the flow of electricity to be controlled.
    Silicon’s 3D lattice, however, is too big for next-generation electronics, which include ultra-thin transistors, new devices for optical communication, and flexible bio-sensors that can be worn or implanted in the human body. To slim things down, researchers are experimenting with materials no thicker than a single sheet of atoms, such as graphene. But the tried-and-true method for doping 3D silicon doesn’t work with 2D graphene, which consists of a single layer of carbon atoms that doesn’t normally conduct a current.
    Rather than injecting dopants, researchers have tried layering on a “charge-transfer layer” intended to add or pull away electrons from the graphene. However, previous methods used “dirty” materials in their charge-transfer layers; impurities in these would leave the graphene unevenly doped and impede its ability to conduct electricity.
    Now, a new study in Nature Electronics proposes a better way. An interdisciplinary team of researchers, led by James Hone and James Teherani at Columbia University, and Won Jong Yoo at Sungkyunkwan University in Korea, describes a clean technique to dope graphene via a charge-transfer layer made of low-impurity tungsten oxyselenide (TOS).
    The team generated the new “clean” layer by oxidizing a single atomic layer of another 2D material, tungsten selenide. When TOS was layered on top of graphene, they found that it left the graphene riddled with electricity-conducting holes. Those holes could be fine-tuned to better control the material’s electricity-conducting properties by adding a few atomic layers of tungsten selenide between the TOS and the graphene.
    The researchers found that graphene’s electrical mobility, or how easily charges move through it, was higher with their new doping method than previous attempts. Adding tungsten selenide spacers further increased the mobility to the point where the effect of the TOS becomes negligible, leaving mobility to be determined by the intrinsic properties of graphene itself. This combination of high doping and high mobility gives graphene greater electrical conductivity than that of highly conductive metals like copper and gold.
    As the doped graphene got better at conducting electricity, it also became more transparent, the researchers said. This is due to Pauli blocking, a phenomenon where charges manipulated by doping block the material from absorbing light. At the infrared wavelengths used in telecommunications, the graphene became more than 99 percent transparent. Achieving a high rate of transparency and conductivity is crucial to moving information through light-based photonic devices. If too much light is absorbed, information gets lost. The team found a much smaller loss for TOS-doped graphene than for other conductors, suggesting that this method could hold potential for next-generation ultra-efficient photonic devices.
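    As a back-of-envelope check (the single-pass assumption is ours, not the paper’s), 99 percent transparency corresponds to a very small insertion loss in decibels, the usual figure of merit for photonic links:

    ```python
    import math

    # Back-of-envelope (our assumption of a single pass, not the paper's
    # measurement): convert transparency into insertion loss in decibels.
    transparency = 0.99  # ">99 percent transparent" at telecom infrared wavelengths
    loss_db = -10 * math.log10(transparency)
    print(f"{loss_db:.3f} dB loss per pass")  # ~0.044 dB: very little light absorbed
    ```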
    “This is a new way to tailor the properties of graphene on demand,” Hone said. “We have just begun to explore the possibilities of this new technique.”
    One promising direction is to alter graphene’s electronic and optical properties by changing the pattern of the TOS, and to imprint electrical circuits directly on the graphene itself. The team is also working to integrate the doped material into novel photonic devices, with potential applications in transparent electronics, telecommunications systems, and quantum computers.
    Story Source:
    Materials provided by Columbia University. Original written by Ellen Neff. Note: Content may be edited for style and length.

  • Researchers discover predictable behavior in promising material for computer memory

    In the last few years, a class of materials called antiferroelectrics has been increasingly studied for its potential applications in modern computer memory devices. Research has shown that antiferroelectric-based memories might have greater energy efficiency and faster read and write speeds than conventional memories, among other appealing attributes. Further, the same compounds that can exhibit antiferroelectric behavior are already integrated into existing semiconductor chip manufacturing processes.
    Now, a team led by Georgia Tech researchers has discovered unexpectedly familiar behavior in the antiferroelectric material known as zirconium dioxide, or zirconia. They show that as the microstructure of the material is reduced in size, it behaves similarly to much better understood materials known as ferroelectrics. The findings were recently published in the journal Advanced Electronic Materials.
    Miniaturization of circuits has played a key role in improving memory performance over the last fifty years. Knowing how the properties of an antiferroelectric change with shrinking size should enable the design of more effective memory components.
    The researchers also note that the findings should have implications for many other areas besides memory.
    “Antiferroelectrics have a range of unique properties, like high reliability, high voltage endurance, and broad operating temperatures, that make them useful in a wealth of different devices, including high-energy-density capacitors, transducers, and electro-optic circuits,” said Nazanin Bassiri-Gharb, coauthor of the paper and professor in the Woodruff School of Mechanical Engineering and the School of Materials Science and Engineering at Georgia Tech. “But size scaling effects had gone largely under the radar for a long time.”
    “You can design your device and make it smaller knowing exactly how the material is going to perform,” said Asif Khan, coauthor of the paper and assistant professor in the School of Electrical and Computer Engineering and the School of Materials Science and Engineering at Georgia Tech. “From our standpoint, it really opens a new field of research.”

  • Trapping spins with sound

    Color centers are defects in a crystal’s atomic lattice that can capture electrons. The captured electrons typically absorb light in the visible spectrum, so that a transparent material becomes colored in the presence of such centers, for instance in diamond. “Color centers often come along with certain magnetic properties, making them promising systems for applications in quantum technologies, like quantum memories — the qubits — or quantum sensors. The challenge here is to develop efficient methods to control the magnetic quantum property of electrons, or, in this case, their spin states,” Dr. Georgy Astakhov from HZDR’s Institute of Ion Beam Physics and Materials Research explains.
    His team colleague Dr. Alberto Hernández-Mínguez from the Paul-Drude-Institut expands on the subject: “This is typically realized by applying electromagnetic fields, but an alternative method is the use of mechanical vibrations like surface acoustic waves. These are sound waves confined to the surface of a solid that resemble water waves on a lake. They are commonly integrated in microchips as radio frequency filters, oscillators and transformers in current electronic devices like mobile phones, tablets and laptops.”
    Tuning the spin to the sound of a surface
    In their paper, the researchers demonstrate the use of surface acoustic waves for on-chip control of electron spins in silicon carbide, a semiconductor expected to replace silicon in many applications requiring high-power electronics, for instance in electric vehicles. “You might think of this control like the tuning of a guitar with a regular electronic tuner,” says Dr. Alexander Poshakinskiy from the Ioffe Physical-Technical Institute in St. Petersburg, “only that in our experiment it is a bit more complicated: a magnetic field tunes the resonant frequencies of the electron spin to the frequency of the acoustic wave, while a laser induces transitions between the ground and excited state of the color center.”
    These optical transitions play a fundamental role: they enable the optical detection of the spin state by registering the light quanta emitted when the electron returns to the ground state. Due to a giant interaction between the periodic vibrations of the crystal lattice and the electrons trapped in the color centers, the scientists realize simultaneous control of the electron spin by the acoustic wave, in both its ground and excited state.
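    The tuning condition Poshakinskiy describes can be sketched numerically. Assuming a spin with g ≈ 2 (a gyromagnetic ratio of roughly 28 GHz per tesla) and a hypothetical 1 GHz surface acoustic wave, the required magnetic field follows from the Larmor relation f = γB:

    ```python
    # Hedged sketch of the tuning condition: pick the magnetic field B so the
    # spin's Larmor precession frequency matches the surface-acoustic-wave
    # frequency, f = gamma * B. The numbers below are illustrative only.

    GAMMA_E = 28.0e9  # electron gyromagnetic ratio, Hz per tesla (assumes g ~ 2)

    def resonant_field(saw_frequency_hz: float) -> float:
        """Magnetic field (tesla) tuning the spin precession to the SAW frequency."""
        return saw_frequency_hz / GAMMA_E

    f_saw = 1.0e9  # hypothetical 1 GHz surface acoustic wave
    print(f"B = {resonant_field(f_saw) * 1e3:.1f} mT")  # ~35.7 mT
    ```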
    At this point, Hernández-Mínguez calls into play another physical process: precession. “Anybody who played with a spinning top as a child has experienced precession as a change in the orientation of the rotational axis while trying to tilt it. An electron spin can be imagined as a tiny spinning top as well, in our case with a precession axis that, under the influence of an acoustic wave, changes orientation every time the color center jumps between the ground and excited states. Now, since the amount of time the color center spends in the excited state is random, the large difference in the alignment of the precession axes in the ground and excited states changes the orientation of the electron spin in an uncontrolled way.”
    After several such jumps, the quantum information stored in the electron spin is lost. In their work, the researchers show a way to prevent this: by appropriately tuning the resonant frequencies of the color center, the precession axes of the spin in the ground and excited states become what the scientists call collinear, so the spins keep their precession orientation along a well-defined direction even when they jump between the ground and excited states.
    Under this specific condition, the quantum information stored in the electron spin becomes decoupled from the jumps between ground and excited state caused by the laser. This technique of acoustic manipulation provides new opportunities for the processing of quantum information in quantum devices with dimensions similar to those of current microchips. This should have a significant impact on the fabrication cost and, therefore, the availability of quantum technologies to the general public.
    Story Source:
    Materials provided by Helmholtz-Zentrum Dresden-Rossendorf. Note: Content may be edited for style and length.

  • Spiders' web secrets unraveled

    Johns Hopkins University researchers discovered precisely how spiders build webs by using night vision and artificial intelligence to track and record every movement of all eight legs as spiders worked in the dark.
    Their creation of a web-building playbook or algorithm brings new understanding of how creatures with brains a fraction of the size of a human’s are able to create structures of such elegance, complexity and geometric precision. The findings, now available online, are set to publish in the November issue of Current Biology.
    “I first got interested in this topic while I was out birding with my son. After seeing a spectacular web I thought, ‘If you went to a zoo and saw a chimpanzee building this, you’d think that’s one amazing and impressive chimpanzee.’ Well, this is even more amazing because a spider’s brain is so tiny, and I was frustrated that we didn’t know more about how this remarkable behavior occurs,” said senior author Andrew Gordus, a Johns Hopkins behavioral biologist. “Now we’ve defined the entire choreography for web building, which has never been done for any animal architecture at this fine a resolution.”
    Web-weaving spiders, which build blindly using only the sense of touch, have fascinated humans for centuries. Not all spiders build webs, but those that do are among a subset of animal species known for their architectural creations, like nest-building birds and the puffer fish that create elaborate sand circles when mating.
    The first step to understanding how the relatively small brains of these animal architects support their high-level construction projects is to systematically document and analyze the behaviors and motor skills involved, which until now had never been done, mainly because of the challenges of capturing and recording the actions, Gordus said.
    Here his team studied a hackled orb weaver, a spider native to the western United States that’s small enough to sit comfortably on a fingertip. To observe the spiders during their nighttime web-building work, the lab designed an arena with infrared cameras and infrared lights. With that set-up they monitored and recorded six spiders every night as they constructed webs. They tracked the millions of individual leg actions with machine vision software designed specifically to detect limb movement.
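    As a schematic illustration only (this is not the Hopkins lab’s tracker, and the video file name is hypothetical), the core idea of detecting limb movement in infrared footage can be sketched with simple frame differencing in OpenCV; real limb tracking would add per-leg identification and pose estimation on top:

    ```python
    import cv2  # OpenCV; pip install opencv-python

    # Minimal machine-vision sketch (not the Hopkins lab's software): flag
    # moving regions between consecutive infrared frames by frame differencing.

    cap = cv2.VideoCapture("spider_ir_night.mp4")  # hypothetical recording
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)  # pixel-wise change between frames
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        moving, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                     cv2.CHAIN_APPROX_SIMPLE)
        print(f"{len(moving)} moving regions")  # candidate limb motions
        prev = gray
    ```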

  • Key to resilient energy-efficient AI/machine learning may reside in human brain

    A clearer understanding of how brain cells known as astrocytes function, and how they can be emulated in the physics of hardware devices, may result in artificial intelligence (AI) and machine learning that autonomously self-repairs and consumes far less energy than current technologies do, according to a team of Penn State researchers.
    Astrocytes are named for their star shape and are a type of glial cell, which are support cells for neurons in the brain. They play a crucial role in brain functions such as memory, learning, self-repair and synchronization.
    “This project stemmed from recent observations in computational neuroscience, as there has been a lot of effort and understanding of how the brain works and people are trying to revise the model of simplistic neuron-synapse connections,” said Abhronil Sengupta, assistant professor of electrical engineering and computer science. “It turns out there is a third component in the brain, the astrocytes, which constitute a significant portion of the cells in the brain, but their role in machine learning and neuroscience has kind of been overlooked.”
    At the same time, the AI and machine learning fields are experiencing a boom. According to the analytics firm Burning Glass Technologies, demand for AI and machine learning skills is expected to increase at a compound growth rate of 71% by 2025. However, AI and machine learning face a challenge as the use of these technologies increases: they consume a lot of energy.
    “An often-underestimated issue of AI and machine learning is the amount of power consumption of these systems,” Sengupta said. “A few years back, for instance, IBM tried to simulate the brain activity of a cat, and in doing so ended up consuming around a few megawatts of power. And if we were to just extend this number to simulate brain activity of a human being on the best possible supercomputer we have today, the power consumption would be even higher than megawatts.”
    All this power usage is due to the complex dance of switches, semiconductors and other mechanical and electrical processes that happens in computer processing, which greatly increases when the processes are as complex as what AI and machine learning demand. A potential solution is neuromorphic computing, which mimics brain functions. Neuromorphic computing is of interest to researchers because the human brain has evolved to use much less energy for its processes than a computer does, so mimicking those functions would make AI and machine learning more energy-efficient.
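    To give a flavor of what neuromorphic computing means in practice (a generic textbook sketch, not the Penn State astrocyte model), here is a minimal leaky integrate-and-fire neuron; such spiking elements compute with sparse events rather than continuous dense arithmetic, which is where much of the energy saving comes from:

    ```python
    # Minimal leaky integrate-and-fire neuron (illustrative; not the Penn
    # State astrocyte model). Neuromorphic hardware saves energy by computing
    # with sparse spike events like these instead of dense arithmetic.

    def lif_neuron(inputs, leak=0.9, threshold=1.0):
        """Integrate input current with leakage; emit a spike and reset at threshold."""
        v, spikes = 0.0, []
        for current in inputs:
            v = leak * v + current  # leaky integration of input current
            if v >= threshold:
                spikes.append(1)
                v = 0.0             # reset membrane potential after firing
            else:
                spikes.append(0)
        return spikes

    print(lif_neuron([0.3, 0.4, 0.5, 0.1, 0.6, 0.7]))  # [0, 0, 1, 0, 0, 1]
    ```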