More stories

  • Scientists build on AI modeling to understand more about protein-sugar structures

    New research building on AI algorithms has enabled scientists to create more complete models of the protein structures in our bodies — paving the way for faster design of therapeutics and vaccines.
    The study — led by the University of York — used artificial intelligence (AI) to help researchers understand more about the sugar that surrounds most proteins in our bodies.
    Up to 70 per cent of human proteins are surrounded or scaffolded with sugar, which plays an important part in how they look and act. Moreover, some viruses, like those behind AIDS, flu, Ebola and COVID-19, are also shielded behind sugars (glycans). The addition of these sugars to proteins is a form of modification known as glycosylation.
    To study the proteins, researchers created software that adds the missing sugar components to models created with AlphaFold, an artificial intelligence program developed by Google’s DeepMind that predicts protein structures.
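    As a rough illustration of the kind of post-processing described above (taking a predicted protein model and deciding where sugars belong), the hedged sketch below scans a protein sequence for N-glycosylation sequons (N-X-S/T, where X is not proline), the motifs where N-linked glycans are typically attached. It is a minimal, hypothetical example rather than the York group’s software; the sequence and function name are invented for illustration.

```python
import re

def find_n_glycosylation_sequons(sequence: str) -> list[int]:
    """Return 0-based positions of N-X-S/T sequons (X != proline),
    the classic motif where N-linked glycans are attached."""
    # A lookahead keeps overlapping matches, e.g. in "...NNST...".
    pattern = re.compile(r"N(?=[^P][ST])")
    return [m.start() for m in pattern.finditer(sequence.upper())]

# Hypothetical example sequence (not a real protein).
seq = "MKTNVSAANPTGLNNTWQPSTR"
print(find_n_glycosylation_sequons(seq))  # [3, 13] -> candidate glycosylation sites
```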
    Senior author, Dr Jon Agirre from the Department of Chemistry said: “The proteins of the human body are tiny machines that in their billions, make up our flesh and bones, transport our oxygen, allow us to function, and defend us from pathogens. And just like a hammer relies on a metal head to strike pointy objects including nails, proteins have specialised shapes and compositions to get their jobs done.”
    “The AlphaFold method for protein structure prediction has the potential to revolutionise workflows in biology, allowing scientists to understand a protein and the impact of mutations faster than ever.”
    “However, the algorithm does not account for essential modifications that affect protein structure and function, which gives us only part of the picture. Our research has shown that this can be addressed in a relatively straightforward manner, leading to a more complete structural prediction.”
    The recent introduction of AlphaFold and the accompanying database of protein structures has enabled scientists to have accurate structure predictions for all known human proteins.
    Dr Agirre added: “It is always great to watch an international collaboration grow to bear fruit, but this is just the beginning for us. Our software was used in the glycan structural work that underpinned the mRNA vaccines against SARS-CoV-2, but now there is so much more we can do thanks to the AlphaFold technological leap. It is still early stages, but the objective is to move on from reacting to changes in a glycan shield to anticipating them.”
    The research was conducted with Dr Elisa Fadda and Carl A. Fogarty from Maynooth University. Haroldas Bagdonas, PhD student at the York Structural Biology Laboratory, which is part of the Department of Chemistry, also worked on the study with Dr Agirre.
    Story Source:
    Materials provided by University of York.

  • Researchers move closer to controlling two-dimensional graphene

    The device you are currently reading this article on was born from the silicon revolution. To build modern electrical circuits, researchers control silicon’s current-conducting capabilities via doping, a process that introduces either negatively charged electrons or positively charged “holes” where electrons used to be. This allows the flow of electricity to be controlled; for silicon, it involves injecting other atomic elements that adjust the supply of electrons — known as dopants — into its three-dimensional (3D) atomic lattice.
    Silicon’s 3D lattice, however, is too big for next-generation electronics, which include ultra-thin transistors, new devices for optical communication, and flexible bio-sensors that can be worn or implanted in the human body. To slim things down, researchers are experimenting with materials no thicker than a single sheet of atoms, such as graphene. But the tried-and-true method for doping 3D silicon doesn’t work with 2D graphene, a single layer of carbon atoms that, left undoped, has too few free charge carriers to conduct current well.
    Rather than injecting dopants, researchers have tried layering on a “charge-transfer layer” intended to add or pull away electrons from the graphene. However, previous methods used “dirty” materials in their charge-transfer layers; impurities in these would leave the graphene unevenly doped and impede its ability to conduct electricity.
    Now, a new study in Nature Electronics proposes a better way. An interdisciplinary team of researchers, led by James Hone and James Teherani at Columbia University, and Won Jong Yoo at Sungkyunkwan University in Korea, describes a clean technique to dope graphene via a charge-transfer layer made of low-impurity tungsten oxyselenide (TOS).
    The team generated the new “clean” layer by oxidizing a single atomic layer of another 2D material, tungsten selenide. When TOS was layered on top of graphene, they found that it left the graphene riddled with electricity-conducting holes. Those holes could be fine-tuned to better control the material’s electricity-conducting properties by adding a few atomic layers of tungsten selenide in between the TOS and the graphene.
    The researchers found that graphene’s electrical mobility, or how easily charges move through it, was higher with their new doping method than previous attempts. Adding tungsten selenide spacers further increased the mobility to the point where the effect of the TOS becomes negligible, leaving mobility to be determined by the intrinsic properties of graphene itself. This combination of high doping and high mobility gives graphene greater electrical conductivity than that of highly conductive metals like copper and gold.
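    The quoted combination of high doping and high mobility maps onto a simple relation: sheet conductivity is carrier density times the elementary charge times mobility. The sketch below is only a back-of-the-envelope check of that relation; the carrier density and mobility values are assumed placeholders, not figures reported in the paper.

```python
# Back-of-the-envelope sheet conductivity of doped graphene: sigma = n * e * mu.
# The carrier density and mobility below are illustrative placeholders,
# not values from the Nature Electronics paper.
E_CHARGE = 1.602e-19  # elementary charge, in coulombs

def sheet_conductivity(n_per_cm2: float, mobility_cm2_per_Vs: float) -> float:
    """Sheet conductivity in siemens per square (S/sq)."""
    return n_per_cm2 * E_CHARGE * mobility_cm2_per_Vs

n = 1e13   # holes per cm^2 (assumed strong charge-transfer doping)
mu = 1e4   # cm^2/(V*s) (assumed high mobility)
sigma = sheet_conductivity(n, mu)
print(f"sheet conductivity ~ {sigma:.3e} S/sq, sheet resistance ~ {1/sigma:.0f} ohm/sq")
```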
    As the doped graphene got better at conducting electricity, it also became more transparent, the researchers said. This is due to Pauli blocking, a phenomenon where charges manipulated by doping block the material from absorbing light. At the infrared wavelengths used in telecommunications, the graphene became more than 99 percent transparent. Achieving both high transparency and high conductivity is crucial to moving information through light-based photonic devices. If too much light is absorbed, information gets lost. The team found a much smaller loss for TOS-doped graphene than for other conductors, suggesting that this method could hold potential for next-generation ultra-efficient photonic devices.
    “This is a new way to tailor the properties of graphene on demand,” Hone said. “We have just begun to explore the possibilities of this new technique.”
    One promising direction is to alter graphene’s electronic and optical properties by changing the pattern of the TOS, and to imprint electrical circuits directly on the graphene itself. The team is also working to integrate the doped material into novel photonic devices, with potential applications in transparent electronics, telecommunications systems, and quantum computers.
    Story Source:
    Materials provided by Columbia University. Original written by Ellen Neff.

  • Researchers discover predictable behavior in promising material for computer memory

    In the last few years, a class of materials called antiferroelectrics has been increasingly studied for its potential applications in modern computer memory devices. Research has shown that antiferroelectric-based memories might have greater energy efficiency and faster read and write speeds than conventional memories, among other appealing attributes. Further, the same compounds that can exhibit antiferroelectric behavior are already integrated into existing semiconductor chip manufacturing processes.
    Now, a team led by Georgia Tech researchers has discovered unexpectedly familiar behavior in the antiferroelectric material known as zirconium dioxide, or zirconia. They show that as the microstructure of the material is reduced in size, it behaves similarly to much better understood materials known as ferroelectrics. The findings were recently published in the journal Advanced Electronic Materials.
    Miniaturization of circuits has played a key role in improving memory performance over the last fifty years. Knowing how the properties of an antiferroelectric change with shrinking size should enable the design of more effective memory components.
    The researchers also note that the findings should have implications in many other areas besides memory.
    “Antiferroelectrics have a range of unique properties like high reliability, high voltage endurance, and broad operating temperatures that make them useful in a wealth of different devices, including high-energy-density capacitors, transducers, and electro-optic circuits,” said Nazanin Bassiri-Gharb, coauthor of the paper and professor in the Woodruff School of Mechanical Engineering and the School of Materials Science and Engineering at Georgia Tech. “But size scaling effects had gone largely under the radar for a long time.”
    “You can design your device and make it smaller knowing exactly how the material is going to perform,” said Asif Khan, coauthor of the paper and assistant professor in the School of Electrical and Computer Engineering and the School of Materials Science and Engineering at Georgia Tech. “From our standpoint, it opens really a new field of research.”
    Lasting Fields

  • Trapping spins with sound

    Electrons trapped at certain crystal defects, known as color centers, typically absorb light in the visible spectrum, so that a transparent material becomes colored in the presence of these centers, for instance in diamond. “Color centers are often coming along with certain magnetic properties, making them promising systems for applications in quantum technologies, like quantum memories — the qubits — or quantum sensors. The challenge here is to develop efficient methods to control the magnetic quantum property of electrons, or, in this case, their spin states,” Dr. Georgy Astakhov from HZDR’s Institute of Ion Beam Physics and Materials Research explains.
    His team colleague Dr. Alberto Hernández-Mínguez from the Paul-Drude-Institut expands on the subject: “This is typically realized by applying electromagnetic fields, but an alternative method is the use of mechanical vibrations like surface acoustic waves. These are sound waves confined to the surface of a solid that resemble water waves on a lake. They are commonly integrated in microchips as radio frequency filters, oscillators and transformers in current electronic devices like mobile phones, tablets and laptops.”
    Tuning the spin to the sound of a surface
    In their paper, the researchers demonstrate the use of surface acoustic waves for on-chip control of electron spins in silicon carbide, a semiconductor expected to replace silicon in many high-power electronics applications, for instance in electric vehicles. “You might think of this control like the tuning of a guitar with a regular electronic tuner,” says Dr. Alexander Poshakinskiy from the Ioffe Physical-Technical Institute in St. Petersburg, who continues: “Only that in our experiment it is a bit more complicated: a magnetic field tunes the resonant frequencies of the electron spin to the frequency of the acoustic wave, while a laser induces transitions between the ground and excited state of the color center.”
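    To give a rough sense of what it means to tune the spin resonance to the frequency of the acoustic wave, the sketch below estimates the magnetic field at which a free-electron-like spin resonance matches a chosen surface-acoustic-wave frequency. It ignores the zero-field splitting of real silicon carbide color centers, and the 1 GHz wave frequency is an assumption, so treat it only as an order-of-magnitude illustration.

```python
# Simplified resonance matching: at what magnetic field does an electron spin's
# Larmor frequency equal a given surface-acoustic-wave (SAW) frequency?
#   f = g * mu_B * B / h   =>   B = h * f / (g * mu_B)
# Real SiC color centers have zero-field splitting that shifts these numbers;
# this is only an order-of-magnitude sketch with an assumed SAW frequency.
H_PLANCK = 6.626e-34  # Planck constant, J*s
MU_BOHR = 9.274e-24   # Bohr magneton, J/T
G_FACTOR = 2.0        # approximate electron g-factor

def matching_field(saw_frequency_hz: float) -> float:
    """Magnetic field (tesla) at which the spin resonance matches the SAW frequency."""
    return H_PLANCK * saw_frequency_hz / (G_FACTOR * MU_BOHR)

f_saw = 1.0e9  # 1 GHz surface acoustic wave (illustrative)
print(f"B ~ {matching_field(f_saw) * 1e3:.1f} mT for a {f_saw / 1e9:.1f} GHz SAW")
```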
    These optical transitions play a fundamental role: they enable the optical detection of the spin state by registering the light quanta emitted when the electron returns to the ground state. Due to a giant interaction between the periodic vibrations of the crystal lattice and the electrons trapped in the color centers, the scientists realize simultaneous control of the electron spin by the acoustic wave, in both its ground and excited state.
    At this point, Hernández-Mínguez calls into play another physical process: precession. “Anybody who played as a child with a spinning top experienced precession as a change in the orientation of the rotational axis while trying to tilt it. An electronic spin can be imagined as a tiny spinning top as well, in our case with a precession axis that, under the influence of an acoustic wave, changes orientation every time the color center jumps between ground and excited state. Now, since the amount of time spent by the color center in the excited state is random, the large difference in the alignment of the precession axes in the ground and excited states changes the orientation of the electron spin in an uncontrolled way.”
    This change causes the quantum information stored in the electron spin to be lost after several jumps. In their work, the researchers show a way to prevent this: by appropriately tuning the resonant frequencies of the color center, the precession axes of the spin in the ground and excited states become what the scientists call collinear, meaning the spins keep their precession orientation along a well-defined direction even when they jump between the ground and excited states.
    Under this specific condition, the quantum information stored in the electron spin becomes decoupled from the jumps between ground and excited state caused by the laser. This technique of acoustic manipulation provides new opportunities for the processing of quantum information in quantum devices with dimensions similar to those of current microchips. This should have a significant impact on the fabrication cost and, therefore, the availability of quantum technologies to the general public.
    Story Source:
    Materials provided by Helmholtz-Zentrum Dresden-Rossendorf.

  • Spiders' web secrets unraveled

    Johns Hopkins University researchers discovered precisely how spiders build webs by using night vision and artificial intelligence to track and record every movement of all eight legs as spiders worked in the dark.
    Their creation of a web-building playbook or algorithm brings new understanding of how creatures with brains a fraction of the size of a human’s are able to create structures of such elegance, complexity and geometric precision. The findings, now available online, are set to publish in the November issue of Current Biology.
    “I first got interested in this topic while I was out birding with my son. After seeing a spectacular web I thought, ‘if you went to a zoo and saw a chimpanzee building this you’d think that’s one amazing and impressive chimpanzee.’ Well this is even more amazing because a spider’s brain is so tiny and I was frustrated that we didn’t know more about how this remarkable behavior occurs,” said senior author Andrew Gordus, a Johns Hopkins behavioral biologist. “Now we’ve defined the entire choreography for web building, which has never been done for any animal architecture at this fine of a resolution.”
    Web-weaving spiders, which build blindly using only the sense of touch, have fascinated humans for centuries. Not all spiders build webs, but those that do are among a subset of animal species known for their architectural creations, like nest-building birds and puffer fish that create elaborate sand circles when mating.
    The first step to understanding how the relatively small brains of these animal architects support their high-level construction projects is to systematically document and analyze the behaviors and motor skills involved, which until now has never been done, mainly because of the challenges of capturing and recording the actions, Gordus said.
    Here his team studied a hackled orb weaver, a spider native to the western United States that’s small enough to sit comfortably on a fingertip. To observe the spiders during their nighttime web-building work, the lab designed an arena with infrared cameras and infrared lights. With that set-up they monitored and recorded six spiders every night as they constructed webs. They tracked the millions of individual leg actions with machine vision software designed specifically to detect limb movement.
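    A minimal sketch of the sort of analysis that follows limb detection: given per-frame coordinates for each of the eight legs, compute per-leg speeds and flag which legs are moving in each frame. The random test data, array shapes and movement threshold are assumptions for illustration, not the Johns Hopkins pipeline.

```python
import numpy as np

# Hypothetical post-processing after limb tracking: per-frame (x, y) positions
# for eight legs -> per-leg speeds -> a boolean "is this leg moving?" mask.
rng = np.random.default_rng(0)
n_frames, n_legs = 1000, 8
positions = np.cumsum(rng.normal(scale=0.5, size=(n_frames, n_legs, 2)), axis=0)

speeds = np.linalg.norm(np.diff(positions, axis=0), axis=2)  # shape (n_frames - 1, n_legs)
moving = speeds > 1.0                                        # assumed movement threshold

print("mean speed per leg:", speeds.mean(axis=0).round(2))
print("fraction of frames each leg moves:", moving.mean(axis=0).round(2))
```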

  • Key to resilient energy-efficient AI/machine learning may reside in human brain

    A clearer understanding of how a type of brain cell known as an astrocyte functions, and how it can be emulated in the physics of hardware devices, may result in artificial intelligence (AI) and machine learning that autonomously self-repairs and consumes much less energy than the technologies currently do, according to a team of Penn State researchers.
    Astrocytes are named for their star shape and are a type of glial cell, which are support cells for neurons in the brain. They play a crucial role in brain functions such as memory, learning, self-repair and synchronization.
    “This project stemmed from recent observations in computational neuroscience, as there has been a lot of effort and understanding of how the brain works and people are trying to revise the model of simplistic neuron-synapse connections,” said Abhronil Sengupta, assistant professor of electrical engineering and computer science. “It turns out there is a third component in the brain, the astrocytes, which constitutes a significant section of the cells in the brain, but its role in machine learning and neuroscience has kind of been overlooked.”
    At the same time, the AI and machine learning fields are experiencing a boom. According to the analytics firm Burning Glass Technologies, demand for AI and machine learning skills is expected to increase at a compound growth rate of 71% by 2025. However, AI and machine learning face a challenge as the use of these technologies increases — they use a lot of energy.
    “An often-underestimated issue of AI and machine learning is the amount of power consumption of these systems,” Sengupta said. “A few years back, for instance, IBM tried to simulate the brain activity of a cat, and in doing so ended up consuming around a few megawatts of power. And if we were to just extend this number to simulate brain activity of a human being on the best possible supercomputer we have today, the power consumption would be even higher than megawatts.”
    All this power usage is due to the complex dance of switches, semiconductors and other mechanical and electrical processes that happens in computer processing, which greatly increases when the processes are as complex as what AI and machine learning demand. A potential solution is neuromorphic computing, which is computing that mimics brain functions. Neuromorphic computing is of interest to researchers because the human brain has evolved to use much less energy for its processes than a computer does, so mimicking those functions would make AI and machine learning a more energy-efficient process.
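    For readers unfamiliar with neuromorphic computing, the sketch below simulates a leaky integrate-and-fire neuron, the basic building block of many spiking, brain-inspired systems. It is a generic textbook example, not the Penn State astrocyte model, and all parameters are illustrative.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, a common building block of
# spiking / neuromorphic models. Generic sketch; parameters are illustrative.
def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        v += (-(v - v_rest) + i_in) * (dt / tau)  # leaky integration step
        if v >= v_thresh:                         # threshold crossing -> spike
            spike_times.append(t)
            v = v_reset                           # reset membrane potential
    return spike_times

current = np.full(1000, 1.5)  # constant drive for 1 s at 1 ms time steps
print(f"{len(simulate_lif(current))} spikes in 1 s of simulated time")
```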

  • Innovative chip resolves quantum headache

    Quantum physicists at the University of Copenhagen are reporting an international achievement for Denmark in the field of quantum technology. By simultaneously operating multiple spin qubits on the same quantum chip, they surmounted a key obstacle on the road to the supercomputer of the future. The result bodes well for the use of semiconductor materials as a platform for solid-state quantum computers.
    One of the engineering headaches in the global marathon towards a large functional quantum computer is the control of many basic memory devices — qubits — simultaneously. This is because the control of one qubit is typically negatively affected by simultaneous control pulses applied to another qubit. Now, a pair of young quantum physicists at the University of Copenhagen’s Niels Bohr Institute, PhD student (now postdoc) Federico Fedele, 29, and Asst. Prof. Anasua Chatterjee, 32, working in the group of Assoc. Prof. Ferdinand Kuemmeth, have managed to overcome this obstacle.
    Global qubit research is based on various technologies. While Google and IBM have come far with quantum processors based on superconductor technology, the UCPH research group is betting on semiconductor qubits — known as spin qubits.
    “Broadly speaking, they consist of electron spins trapped in semiconducting nanostructures called quantum dots, such that individual spin states can be controlled and entangled with each other,” explains Federico Fedele.
    Spin qubits have the advantage of maintaining their quantum states for a long time. This potentially allows them to perform faster and less error-prone computations than other platform types. And they are so minuscule that far more of them can be squeezed onto a chip than with other qubit approaches. The more qubits, the greater a computer’s processing power. The UCPH team has extended the state of the art by fabricating and operating four qubits in a 2×2 array on a single chip.
    Circuitry is ‘the name of the game’
    Thus far, the greatest focus of quantum technology has been on producing better and better qubits. Now it’s about getting them to communicate with each other, explains Anasua Chatterjee.

  • Researchers set ‘ultrabroadband’ record with entangled photons

    Quantum entanglement — or what Albert Einstein once referred to as “spooky action at a distance” — occurs when two quantum particles are connected to each other, even when millions of miles apart. Any observation of one particle affects the other as if they were communicating with each other. When this entanglement involves photons, interesting possibilities emerge, including entangling the photons’ frequencies, the bandwidth of which can be controlled.
    Researchers at the University of Rochester have taken advantage of this phenomenon to generate an incredibly large bandwidth by using a thin-film nanophotonic device they describe in Physical Review Letters.
    The breakthrough could lead to:
      • Enhanced sensitivity and resolution for experiments in metrology and sensing, including spectroscopy, nonlinear microscopy, and quantum optical coherence tomography
      • Higher-dimensional encoding of information in quantum networks for information processing and communications
    “This work represents a major leap forward in producing ultrabroadband quantum entanglement on a nanophotonic chip,” says Qiang Lin, professor of electrical and computer engineering. “And it demonstrates the power of nanotechnology for developing future quantum devices for communication, computing, and sensing.”
    No more tradeoff between bandwidth and brightness
    To date, most devices used to generate broadband entanglement of light have resorted to dividing up a bulk crystal into small sections, each with slightly varying optical properties and each generating different frequencies of the photon pairs. The frequencies are then added together to give a larger bandwidth.
    “This is quite inefficient and comes at a cost of reduced brightness and purity of the photons,” says lead author Usman Javid, a PhD student in Lin’s lab. In those devices, “there will always be a tradeoff between the bandwidth and the brightness of the generated photon pairs, and one has to make a choice between the two. We have completely circumvented this tradeoff with our dispersion engineering technique to get both: a record-high bandwidth at a record-high brightness.”
    The thin-film lithium niobate nanophotonic device created by Lin’s lab uses a single waveguide with electrodes on both sides. Whereas a bulk device can be millimeters across, the thin-film device has a thickness of 600 nanometers — more than a million times smaller in its cross-sectional area than a bulk crystal, according to Javid. This makes the propagation of light extremely sensitive to the dimensions of the waveguide.
    Indeed, even a variation of a few nanometers can cause significant changes to the phase and group velocity of the light propagating through it. As a result, the researchers’ thin-film device allows precise control over the bandwidth in which the pair-generation process is momentum-matched. “We can then solve a parameter optimization problem to find the geometry that maximizes this bandwidth,” Javid says.
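    The geometry search Javid describes can be pictured as a small numerical optimization. The sketch below maximizes a made-up, smooth stand-in for bandwidth over a waveguide width and film thickness; the objective function, parameter values and starting point are assumptions, not the dispersion model used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy version of "find the geometry that maximizes bandwidth": optimize a
# waveguide width and film thickness against a stand-in bandwidth model.
# The objective is a made-up smooth function, not the real dispersion calculation.
def toy_bandwidth_thz(params):
    width_um, thickness_nm = params
    # Fictitious model peaking near width ~1.8 um and thickness ~600 nm.
    return 100.0 * np.exp(-((width_um - 1.8) / 0.4) ** 2
                          - ((thickness_nm - 600.0) / 80.0) ** 2)

result = minimize(lambda p: -toy_bandwidth_thz(p),  # maximize by minimizing the negative
                  x0=[1.2, 500.0], method="Nelder-Mead")
width_opt, thickness_opt = result.x
print(f"optimum ~ {width_opt:.2f} um wide, {thickness_opt:.0f} nm thick, "
      f"bandwidth ~ {toy_bandwidth_thz(result.x):.1f} THz (toy model)")
```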
    The device is ready to be deployed in experiments, but only in a lab setting, Javid says. In order to be used commercially, a more efficient and cost-effective fabrication process is needed. And although lithium niobate is an important material for light-based technologies, lithium niobate fabrication is “still in its infancy, and it will take some time to mature enough to make financial sense,” he says.
    Other collaborators include coauthors Jingwei Ling, Mingxiao Li, and Yang He of the Department of Electrical and Computer Engineering, and Jeremy Staffa of the Institute of Optics. Ling, Li, and Staffa are graduate students; Yang He is a postdoctoral researcher.
    The National Science Foundation, the Defense Threat Reduction Agency, and the Defense Advanced Research Projects Agency helped fund the research.
    Story Source:
    Materials provided by University of Rochester. Original written by Bob Marcotte.