More stories

  •

    Researchers learn to control electron spin at room temperature to make devices more efficient and faster

    As our devices become smaller, faster, more energy efficient, and capable of holding larger amounts of data, spintronics may continue that trajectory. Whereas electronics is based on the flow of electrons, spintronics is based on the spin of electrons.
    An electron has a spin degree of freedom, meaning that it not only holds a charge but also acts like a little magnet. In spintronics, a key task is to use an electric field to control electron spin and rotate the north pole of the magnet in any given direction.
    The spintronic field effect transistor harnesses the so-called Rashba or Dresselhaus spin-orbit coupling effect, which suggests that one can control electron spin with an electric field. Although the method holds promise for efficient, high-speed computing, certain challenges must be overcome before the technology reaches its full potential: miniature, powerful, and eco-friendly.
    For decades, scientists have been attempting to use electric fields to control spin at room temperature but achieving effective control has been elusive. In research recently published in Nature Photonics, a research team led by Jian Shi and Ravishankar Sundararaman of Rensselaer Polytechnic Institute and Yuan Ping of the University of California at Santa Cruz took a step forward in solving the dilemma.
    “You want the Rashba or Dresselhaus magnetic field to be large to make the electron spin precess quickly,” said Dr. Shi, associate professor of materials science and engineering. “If it’s weak, the electron spin precesses slowly and it would take too much time to turn the spin transistor on or off. However, often a larger internal magnetic field, if not arranged well, leads to poor control of electron spin.”
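    Shi's point can be made quantitative with the textbook Larmor relation: a spin precesses at a rate proportional to the effective (Rashba or Dresselhaus) magnetic field, so a weak field means a slow switch. A back-of-envelope sketch, with illustrative field values not taken from the paper:

```python
import math

# Larmor precession: a spin in an effective field B precesses at
# omega = g * mu_B * B / hbar. The field values used below are
# illustrative, not from the paper.
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
MU_B = 9.2740100783e-24  # Bohr magneton, J/T
G_FACTOR = 2.0           # free-electron g-factor (an assumption)

def precession_period(b_eff_tesla):
    """Time for one full spin precession in an effective field B_eff."""
    omega = G_FACTOR * MU_B * b_eff_tesla / HBAR  # angular frequency, rad/s
    return 2 * math.pi / omega                    # seconds

# A 1 mT effective field precesses 1000x more slowly than a 1 T field,
# which is why a weak internal field makes a spin transistor slow to switch.
print(round(precession_period(0.001) / precession_period(1.0)))  # → 1000
```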
    The team demonstrated that a ferroelectric van der Waals layered perovskite crystal, carrying unique crystal symmetry and strong spin-orbit coupling, is a promising model material for understanding Rashba-Dresselhaus spin physics at room temperature. Its nonvolatile, reconfigurable, spin-related optoelectronic properties at room temperature may inspire design principles for a room-temperature spin field effect transistor.
    Simulations revealed that this material was particularly exciting, according to Dr. Sundararaman, associate professor of materials science and engineering. “The internal magnetic field is simultaneously large and perfectly distributed in a single direction, which allows the spins to rotate predictably and in perfect concert,” he said. “This is a key requirement to use spins for reliably transmitting information.”
    “It’s a step forward toward the practical realization of a spintronic transistor,” Dr. Shi said.
    The first authors of this article include graduate student Lifu Zhang and postdoctoral associate Jie Jiang from Dr. Shi’s group, as well as graduate student Christian Multunas from Dr. Sundararaman’s group.
    This work was supported by the United States Army Research Office (Physical Properties of Materials program by Dr. Pani Varanasi), the Air Force Office of Scientific Research, and the National Science Foundation.
    Story Source:
    Materials provided by Rensselaer Polytechnic Institute. Original written by Katie Malatino. Note: Content may be edited for style and length.

  •

    Most complex protein knots

    Theoretical physicists at Johannes Gutenberg University Mainz have put Google’s artificial intelligence AlphaFold to the test and have found the most complex protein knots so far.
    The question of how the chemical composition of a protein, the amino acid sequence, determines its 3D structure has been one of the biggest challenges in biophysics for more than half a century. This knowledge about the so-called “folding” of proteins is in great demand, as it contributes significantly to the understanding of various diseases and their treatment, among other things. For these reasons, Google’s DeepMind research team has developed AlphaFold, an artificial intelligence that predicts 3D structures.
    A team consisting of researchers from Johannes Gutenberg University Mainz (JGU) and the University of California, Los Angeles, has now taken a closer look at these structures and examined them with respect to knots. We know knots primarily from shoelaces and cables, but they also occur on the nanoscale in our cells. Knotted proteins can not only be used to assess the quality of structure predictions but also raise important questions about folding mechanisms and the evolution of proteins.
    The most complex knots as a test for AlphaFold
    “We investigated numerically all — that is some 100,000 — predictions of AlphaFold for new protein knots,” said Maarten A. Brems, a PhD student in the group of Dr. Peter Virnau at Mainz University. The goal was to identify rare, high-quality structures containing complex and previously unknown protein knots to provide a basis for experimental verification of AlphaFold’s predictions. The study not only discovered the most complex knotted protein to date but also the first composite knots in proteins. The latter can be thought of as two separate knots on the same string. “These new discoveries also provide insight into the evolutionary mechanisms behind such rare proteins,” added Robert Runkel, a theoretical physicist also involved in the project. The results of this study were recently published in Protein Science.
    Dr. Peter Virnau is pleased with the results: “We have already established a collaboration with our colleague Todd Yeates from UCLA to confirm these structures experimentally. This line of research will shape the biophysics community’s view of artificial intelligence — and we are fortunate to have an expert like Dr. Yeates involved.”
    Story Source:
    Materials provided by Johannes Gutenberg Universitaet Mainz. Note: Content may be edited for style and length.

  •

    Virtual reality app trial shown to reduce common phobias

    Results from a University of Otago, Christchurch trial suggest fresh hope for the estimated one-in-twelve people worldwide suffering from a fear of flying, needles, heights, spiders and dogs.
    The trial, led by Associate Professor Cameron Lacey, from the Department of Psychological Medicine, studied phobia patients using a headset and a smartphone app treatment programme — a combination of Virtual Reality (VR) 360-degree video exposure therapy and cognitive behavioural therapy (CBT).
    Participants downloaded a fully self-guided smartphone app called “oVRcome,” developed by Christchurch tech entrepreneur Adam Hutchinson, aimed at treating patients with phobia and anxiety.
    The app was paired with a headset to immerse participants in virtual environments to help treat their phobia.
    The results from the trial, just published in the Australian and New Zealand Journal of Psychiatry, showed a 75 per cent reduction in phobia symptoms after six weeks of the treatment programme.
    “The improvements they reported suggest there’s great potential for the use of VR and mobile phone apps as a means of self-guided treatment for people struggling with often-crippling phobias,” Associate Professor Lacey says.

  •

    In the battle of human vs. water, ‘Water Always Wins’

    Water Always Wins
    Erica Gies
    Univ. of Chicago, $26

    Humans have long tried to wrangle water. We’ve straightened once-meandering rivers for shipping purposes. We’ve constructed levees along rivers and lakes to protect people from flooding. We’ve erected entire cities on drained and filled-in wetlands. We’ve built dams on rivers to hoard water for later use.

    “Water seems malleable, cooperative, willing to flow where we direct it,” environmental journalist Erica Gies writes in Water Always Wins. But it’s not, she argues.

    Levees, which narrow channels causing water to flow higher and faster, nearly always break. Cities on former wetlands flood regularly — often catastrophically. Dams starve downstream environs of sediment needed to protect coastal areas against rising seas. Straightened streams flow faster than meandering ones, scouring away riverbed ecosystems and giving water less time to seep downward and replenish groundwater supplies.

    In addition to laying out this damage done by supposed water control, Gies takes readers on a hopeful global tour of solutions to these woes. Along the way, she introduces “water detectives”— scientists, engineers, urban planners and many others who, instead of trying to control water, ask: What does water want?


    These water detectives have found ways to give the slippery substance the time and space it needs to trickle underground. Around Seattle’s Thornton Creek, for instance, reclaimed land now allows for regular flooding, which has rejuvenated depleted riverbed habitat and created an urban oasis. In California’s Central Valley, scientists want to find ways to shunt unpolluted stormwater into ancient, sediment-filled subsurface canyons that make ideal aquifers. Feeding groundwater supplies will in turn nourish rivers from below, helping to maintain water levels and ecosystems.

    While some people are exploring new ways to manage water, others are leaning on ancestral knowledge. Without the use of hydrologic mapping tools, Indigenous peoples of the Andes have a detailed understanding of the plumbing that links surface waters with underground storage. Researchers in Peru are now studying Indigenous methods of water storage, which don’t require dams, in hopes of ensuring a steady flow of water to Lima — Peru’s populous capital that’s periodically afflicted by water scarcity. These studies may help convince those steeped in concrete-centric solutions to try something new. “Decision makers come from a culture of concrete,” Gies writes, in which dams, pipes and desalination plants are standard.

    Understanding how to work with, not against, water will help humankind weather this age of drought and deluge that’s being exacerbated by climate change. Controlling water, Gies convincingly argues, is an illusion. Instead, we must learn to live within our water means because water will undoubtedly win.

    Buy Water Always Wins from Bookshop.org. Science News is a Bookshop.org affiliate and will earn a commission on purchases made from links in this article.

  •

    Machine learning identifies gun purchasers at risk of suicide

    A new study from the Violence Prevention Research Program (VPRP) at UC Davis suggests machine learning, a type of artificial intelligence, may help identify handgun purchasers who are at high risk of suicide. It also identified individual and community characteristics that are predictive of firearm suicide. The study was published in JAMA Network Open.
    Previous research has shown the risk of suicide is particularly high immediately after purchase, suggesting that acquisition itself is an indicator of elevated suicide risk.
    Risk factors identified by the algorithm to be predictive of firearm suicide included: older age, first-time firearm purchaser, white race, living in close proximity to the gun dealer, and purchasing a revolver.
    “While limiting access to firearms among individuals at increased risk for suicide presents a critical opportunity to save lives, accurately identifying those at risk remains a key challenge. Our results suggest the potential utility of handgun records in identifying high-risk individuals to aid suicide prevention,” said Hannah S. Laqueur, an assistant professor in the Department of Emergency Medicine and lead author of the study.
    In 2020, almost 48,000 Americans died by suicide, of which more than 24,000 were firearm suicides. Firearms are by far the most lethal method of suicide. Access to firearms has been identified as a major risk factor for suicide and is a potential focus for suicide prevention.
    Methodology
    To see if an algorithm could identify gun purchasers at risk of firearm suicide, the researchers looked at data from almost five million firearm transactions from the California Dealer Record of Sale database (DROS). The records, which spanned from 1996 to 2015, represented almost two million individuals. They also looked at firearm suicide data from California death records between 1996 and 2016.

  •

    Researchers use quantum-inspired approach to increase lidar resolution

    Researchers have shown that a quantum-inspired technique can be used to perform lidar imaging with a much higher depth resolution than is possible with conventional approaches. Lidar, which uses laser pulses to acquire 3D information about a scene or object, is usually best suited for imaging large objects such as topographical features or built structures due to its limited depth resolution.
    “Although lidar can be used to image the overall shape of a person, it typically doesn’t capture finer details such as facial features,” said research team leader Ashley Lyons from the University of Glasgow in the United Kingdom. “By adding extra depth resolution, our approach could capture enough detail to not only see facial features but even someone’s fingerprints.”
    In the Optica Publishing Group journal Optics Express, Lyons and first author Robbie Murray describe the new technique, which they call imaging two-photon interference lidar. They show that it can distinguish reflective surfaces less than 2 millimeters apart and create high-resolution 3D images with micron-scale resolution.
    “This work could lead to much higher resolution 3D imaging than is possible now, which could be useful for facial recognition and tracking applications that involve small features,” said Lyons. “For practical use, conventional lidar could be used to get a rough idea of where an object might be and then the object could be carefully measured with our method.”
    Using classically entangled light
    The new technique uses “quantum inspired” interferometry, which extracts information from the way that two light beams interfere with each other. Entangled pairs of photons — or quantum light — are often used for this type of interferometry, but approaches based on photon entanglement tend to perform poorly in situations with high levels of light loss, which is almost always the case for lidar. To overcome this problem, the researchers applied what they’ve learned from quantum sensing to classical (non-quantum) light.
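    The resolution gap Lyons describes follows from the timing limit of conventional time-of-flight lidar: a depth difference d changes the photon round-trip time by only 2d/c, so the smallest resolvable depth is set by how finely arrival times can be measured. A minimal sketch (the jitter value is illustrative, not from the paper):

```python
# Conventional time-of-flight lidar: depth resolution is limited by how
# precisely the photon round-trip time can be measured. The 100 ps jitter
# below is an illustrative figure, not a number from the paper.
C = 299_792_458.0  # speed of light, m/s

def depth_resolution(timing_jitter_s):
    """Smallest resolvable depth difference for a given timing jitter."""
    return C * timing_jitter_s / 2  # the factor 2 accounts for the round trip

# 100 ps timing electronics resolve only ~15 mm, far coarser than the
# sub-2-mm surface separation (and micron-scale imaging) reported here,
# which is why an interferometric approach is attractive.
print(round(depth_resolution(100e-12) * 1e3, 1))  # → 15.0 (millimeters)
```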

  •

    Researchers develop computer model to predict whether a pesticide will harm bees

    Researchers in the Oregon State University College of Engineering have harnessed the power of artificial intelligence to help protect bees from pesticides.
    Cory Simon, assistant professor of chemical engineering, and Xiaoli Fern, associate professor of computer science, led the project, which involved training a machine learning model to predict whether any proposed new herbicide, fungicide or insecticide would be toxic to honey bees based on the compound’s molecular structure.
    The findings, featured on the cover of The Journal of Chemical Physics in a special issue, “Chemical Design by Artificial Intelligence,” are important because many fruit, nut, vegetable and seed crops rely on bee pollination.
    Without bees to transfer the pollen needed for reproduction, almost 100 commercial crops in the United States would vanish. Bees’ global economic impact is estimated to exceed $100 billion annually.
    “Pesticides are widely used in agriculture, which increase crop yield and provide food security, but pesticides can harm off-target species like bees,” Simon said. “And since insects, weeds, etc. eventually evolve resistance, new pesticides must continually be developed, ones that don’t harm bees.”
    Graduate students Ping Yang and Adrian Henle used honey bee toxicity data from pesticide exposure experiments, involving nearly 400 different pesticide molecules, to train an algorithm to predict if a new pesticide molecule would be toxic to honey bees.
    “The model represents pesticide molecules by the set of random walks on their molecular graphs,” Yang said.
    A random walk is a mathematical concept that describes any meandering path, such as on the complicated chemical structure of a pesticide, where each step along the path is decided by chance, as if by coin tosses.
    Imagine, Yang explains, that you’re out for an aimless stroll along a pesticide’s chemical structure, making your way from atom to atom via the bonds that hold the compound together. You travel in random directions but keep track of your route, the sequence of atoms and bonds that you visit. Then you go out on a different molecule, comparing the series of twists and turns to what you’ve done before.
    “The algorithm declares two molecules similar if they share many walks with the same sequence of atoms and bonds,” Yang said. “Our model serves as a surrogate for a bee toxicity experiment and can be used to quickly screen proposed pesticide molecules for their toxicity.”
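    The walk-comparison idea Yang describes can be sketched in a few lines. The toy implementation below illustrates a random-walk similarity on hand-made labeled graphs; it is an analogy for the approach, not the authors' actual model or molecular featurization:

```python
import random
from collections import Counter

def random_walks(bonds, atoms, n_walks=300, length=3, seed=0):
    """Sample walk label sequences (atom, bond, atom, ...) from a molecular graph.

    bonds: node -> list of (neighbor, bond label); atoms: node -> atom label.
    """
    rng = random.Random(seed)
    nodes = list(bonds)
    walks = Counter()
    for _ in range(n_walks):
        node = rng.choice(nodes)
        seq = [atoms[node]]
        for _ in range(length):
            if not bonds[node]:
                break
            node, bond = rng.choice(bonds[node])
            seq += [bond, atoms[node]]  # record the bond taken and atom reached
        walks[tuple(seq)] += 1
    return walks

def walk_similarity(w1, w2):
    """Crude kernel: fraction of sampled walks whose label sequences match."""
    shared = sum((w1 & w2).values())
    total = sum((w1 | w2).values())
    return shared / total if total else 0.0

# Two toy three-atom fragments, single bonds throughout: C-C-O vs. C-C-C.
cco = random_walks({0: [(1, "s")], 1: [(0, "s"), (2, "s")], 2: [(1, "s")]},
                   {0: "C", 1: "C", 2: "O"})
ccc = random_walks({0: [(1, "s")], 1: [(0, "s"), (2, "s")], 2: [(1, "s")]},
                   {0: "C", 1: "C", 2: "C"})
print(walk_similarity(cco, cco))        # identical graphs share every walk
print(walk_similarity(cco, ccc) < 1.0)  # differing atom labels share fewer
```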
    The National Science Foundation supported this research.
    Story Source:
    Materials provided by Oregon State University. Original written by Steve Lundeberg. Note: Content may be edited for style and length.

  •

    A robot learns to imagine itself

    As every athletic or fashion-conscious person knows, our body image is not always accurate or realistic, but it’s an important piece of information that determines how we function in the world. When you get dressed or play ball, your brain is constantly planning ahead so that you can move your body without bumping, tripping, or falling over.
    We humans acquire our body-model as infants, and robots are following suit. A Columbia Engineering team announced today they have created a robot that — for the first time — is able to learn a model of its entire body from scratch, without any human assistance. In a new study published in Science Robotics, the researchers demonstrate how their robot created a kinematic model of itself, and then used its self-model to plan motion, reach goals, and avoid obstacles in a variety of situations. It even automatically recognized and then compensated for damage to its body.
    Robot watches itself like an infant exploring itself in a hall of mirrors
    The researchers placed a robotic arm inside a circle of five streaming video cameras. The robot watched itself through the cameras as it undulated freely. Like an infant exploring itself for the first time in a hall of mirrors, the robot wiggled and contorted to learn how exactly its body moved in response to various motor commands. After about three hours, the robot stopped. Its internal deep neural network had finished learning the relationship between the robot’s motor actions and the volume it occupied in its environment.
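    What the network learned is, in effect, an occupancy query: given a set of motor commands, which points in space does the body fill, and do any of them hit an obstacle? As an analogy only (the paper's self-model is a learned deep network, not hand-written kinematics), here is a closed-form two-link planar arm answering the same kind of query:

```python
import math

# A hand-written 2-link planar arm, used only as an analogy for the query
# the robot's learned self-model answers: "given these motor angles, what
# space does my body occupy?" Link lengths are arbitrary illustrative units.
LEN1, LEN2 = 1.0, 0.8

def arm_points(theta1, theta2, samples=20):
    """Points along both links for joint angles theta1, theta2 (radians)."""
    elbow = (LEN1 * math.cos(theta1), LEN1 * math.sin(theta1))
    tip = (elbow[0] + LEN2 * math.cos(theta1 + theta2),
           elbow[1] + LEN2 * math.sin(theta1 + theta2))
    pts = []
    for i in range(samples + 1):
        t = i / samples
        pts.append((t * elbow[0], t * elbow[1]))              # along link 1
        pts.append((elbow[0] + t * (tip[0] - elbow[0]),
                    elbow[1] + t * (tip[1] - elbow[1])))      # along link 2
    return pts

def collides(theta1, theta2, obstacle_center, obstacle_radius):
    """Does any sampled body point fall inside a circular obstacle?"""
    cx, cy = obstacle_center
    return any(math.hypot(x - cx, y - cy) < obstacle_radius
               for x, y in arm_points(theta1, theta2))

# A straight arm along +x passes through the obstacle at (1.5, 0)...
print(collides(0.0, 0.0, (1.5, 0.0), 0.1))           # → True
# ...while an arm raised along +y avoids it, so a planner would pick that.
print(collides(math.pi / 2, 0.0, (1.5, 0.0), 0.1))   # → False
```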
    “We were really curious to see how the robot imagined itself,” said Hod Lipson, professor of mechanical engineering and director of Columbia’s Creative Machines Lab, where the work was done. “But you can’t just peek into a neural network, it’s a black box.” After the researchers struggled with various visualization techniques, the self-image gradually emerged. “It was a sort of gently flickering cloud that appeared to engulf the robot’s three-dimensional body,” said Lipson. “As the robot moved, the flickering cloud gently followed it.” The robot’s self-model was accurate to about 1% of its workspace.
    Self-modeling robots will lead to more self-reliant autonomous systems
    The ability of robots to model themselves without being assisted by engineers is important for many reasons: Not only does it save labor, but it also allows the robot to keep up with its own wear-and-tear, and even detect and compensate for damage. The authors argue that this ability is important as we need autonomous systems to be more self-reliant. A factory robot, for instance, could detect that something isn’t moving right, and compensate or call for assistance.
    “We humans clearly have a notion of self,” explained the study’s first author Boyuan Chen, who led the work and is now an assistant professor at Duke University. “Close your eyes and try to imagine how your own body would move if you were to take some action, such as stretch your arms forward or take a step backward. Somewhere inside our brain we have a notion of self, a self-model that informs us what volume of our immediate surroundings we occupy, and how that volume changes as we move.”
    Self-awareness in robots
    The work is part of Lipson’s decades-long quest to find ways to grant robots some form of self-awareness. “Self-modeling is a primitive form of self-awareness,” he explained. “If a robot, animal, or human, has an accurate self-model, it can function better in the world, it can make better decisions, and it has an evolutionary advantage.”
    The researchers are aware of the limits, risks, and controversies surrounding granting machines greater autonomy through self-awareness. Lipson is quick to admit that the kind of self-awareness demonstrated in this study is, as he noted, “trivial compared to that of humans, but you have to start somewhere. We have to go slowly and carefully, so we can reap the benefits while minimizing the risks.”