More stories

  •

    Neuromorphic memory device simulates neurons and synapses

    Researchers have reported a nano-sized neuromorphic memory device that emulates neurons and synapses simultaneously in a unit cell, another step toward the goal of neuromorphic computing: semiconductor devices that closely mimic the human brain.
    Neuromorphic computing aims to realize artificial intelligence (AI) by mimicking the mechanisms of the neurons and synapses that make up the human brain. Inspired by cognitive functions of the human brain that current computers cannot provide, neuromorphic devices have been widely investigated. However, current Complementary Metal-Oxide-Semiconductor (CMOS)-based neuromorphic circuits simply connect artificial neurons and synapses without synergistic interactions, and implementing neurons and synapses concomitantly remains a challenge. To address these issues, a research team led by Professor Keon Jae Lee from the Department of Materials Science and Engineering emulated the brain's biological working mechanisms by introducing neuron-synapse interactions in a single memory cell, rather than taking the conventional approach of electrically connecting separate artificial neuronal and synaptic devices.
    The artificial synaptic devices studied previously were often used, like commercial graphics cards, to accelerate parallel computations, which differs clearly from the operational mechanisms of the human brain. The research team instead implemented synergistic interactions between neurons and synapses in the neuromorphic memory device, emulating the mechanisms of the biological neural network. In addition, the developed neuromorphic device can replace complex CMOS neuron circuits with a single device, providing high scalability and cost efficiency.
    The human brain consists of a complex network of 100 billion neurons and 100 trillion synapses. The functions and structures of neurons and synapses can flexibly change according to the external stimuli, adapting to the surrounding environment. The research team developed a neuromorphic device in which short-term and long-term memories coexist using volatile and non-volatile memory devices that mimic the characteristics of neurons and synapses, respectively. A threshold switch device is used as volatile memory and phase-change memory is used as a non-volatile device. Two thin-film devices are integrated without intermediate electrodes, implementing the functional adaptability of neurons and synapses in the neuromorphic memory.
    Professor Keon Jae Lee explained, “Neurons and synapses interact with each other to establish cognitive functions such as memory and learning, so simulating both is an essential element for brain-inspired artificial intelligence. The developed neuromorphic memory device also mimics the retraining effect that allows quick learning of the forgotten information by implementing a positive feedback effect between neurons and synapses.”
    Story Source:
    Materials provided by The Korea Advanced Institute of Science and Technology (KAIST). Note: Content may be edited for style and length.

  •

    Superconductivity and charge density waves caught intertwining at the nanoscale

    Room-temperature superconductors could transform everything from electrical grids to particle accelerators to computers — but before they can be realized, researchers need to better understand how existing high-temperature superconductors work.
    Now, researchers from the Department of Energy’s SLAC National Accelerator Laboratory, the University of British Columbia, Yale University and others have taken a step in that direction by studying the fast dynamics of a material called yttrium barium copper oxide, or YBCO.
    The team reports May 20 in Science that YBCO's superconductivity is intertwined in unexpected ways with another phenomenon known as charge density waves (CDWs), or ripples in the density of electrons in the material. As the researchers expected, the CDWs got stronger when they switched off YBCO's superconductivity. But they were surprised to find that the CDWs also suddenly became more spatially organized, suggesting that superconductivity somehow fundamentally shapes the form of the CDWs at the nanoscale.
    “A big part of what we don’t know is the relationship between charge density waves and superconductivity,” said Giacomo Coslovich, a staff scientist at the Department of Energy’s SLAC National Accelerator Laboratory, who led the study. “As one of the cleanest high-temperature superconductors that can be grown, YBCO offers us the opportunity to understand this physics in a very direct way, minimizing the effects of disorder.”
    He added, “If we can better understand these materials, we can make new superconductors that work at higher temperatures, enabling many more applications and potentially addressing a lot of societal challenges — from climate change to energy efficiency to availability of fresh water.”
    Observing fast dynamics
    The researchers studied YBCO's dynamics at SLAC's Linac Coherent Light Source (LCLS) X-ray laser. They switched off superconductivity in the YBCO samples with infrared laser pulses, then bounced X-ray pulses off those samples. For each shot of X-rays, the team pieced together a kind of snapshot of the CDWs' electron ripples. By pasting those snapshots together, they recreated the CDWs' rapid evolution.

  •

    A century ago, Alexander Friedmann envisioned the universe’s expansion

    For millennia, the universe did a pretty good job of keeping its secrets from science.

    Ancient Greeks thought the universe was a sphere of fixed stars surrounding smaller spheres carrying planets around the central Earth. Even Copernicus, who in the 16th century correctly replaced the Earth with the sun, viewed the universe as a single solar system encased by the star-studded outer sphere.

    But in the centuries that followed, the universe revealed some of its vastness. It contained countless stars agglomerated in huge clusters, now called galaxies.

    Then, at the end of the 1920s, the cosmos disclosed its most closely held secret of all: It was getting bigger. Rather than static and stable, an everlasting and ever-the-same entity encompassing all of reality, the universe continually expanded. Observations of distant galaxies showed them flying apart from each other, suggesting the current cosmos to be just the adult phase of a universe born long ago in the burst of a tiny blotch of energy.

    It was a surprise that shook science at its foundations, undercutting philosophical preconceptions about existence and launching a new era in cosmology, the study of the universe. But even more surprising, in retrospect, is that such a deep secret had already been suspected by a mathematician whose specialty was predicting the weather.

    A century ago this month (May 1922), Russian mathematician-meteorologist Alexander Friedmann composed a paper, based on Einstein’s general theory of relativity, that outlined multiple possible histories of the universe. One such possibility described cosmic expansion, starting from a singular point. In essence, even without considering any astronomical evidence, Friedmann had anticipated the modern Big Bang theory of the birth and evolution of the universe.

    “The new vision of the universe opened by Friedmann,” writes Russian physicist Vladimir Soloviev in a recent paper, “has become a foundation of modern cosmology.”

    Friedmann was not well known at the time. He had graduated in 1910 from St. Petersburg University in Russia, having studied math along with some physics. In graduate school he investigated the use of math in meteorology and atmospheric dynamics. He applied that expertise in aiding the Russian air force during World War I, using math to predict the optimum release point for dropping bombs on enemy targets.

    After the war, Friedmann learned of Einstein’s general theory of relativity, which describes gravity as a manifestation of the geometry of space (or more accurately, spacetime). In Einstein’s theory, mass distorts spacetime, producing spacetime “curvature,” which makes masses appear to attract each other.

    Friedmann was especially intrigued by Einstein’s 1917 paper (and a similar paper by Willem de Sitter) applying general relativity to the universe as a whole. Einstein found that his original equations allowed the universe to grow or shrink. But he considered that unthinkable, so he added a term representing a repulsive force that (he thought) would keep the size of the cosmos constant. Einstein concluded that space had a positive spatial curvature (like the surface of a ball), implying a “closed,” or finite universe.

    Friedmann accepted the new term, called the cosmological constant, but pointed out that for various values of that constant, along with other assumptions, the universe might exhibit very different behaviors. Einstein’s static universe was a special case; the universe might also expand forever, or expand for a while, then contract to a point and then begin expanding again.
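
    The range of behaviors Friedmann found can be read off what is now called the Friedmann equation, shown here in modern notation (the symbols below are today's standard ones, not Friedmann's original choices):

    ```latex
    % Friedmann equation in modern notation: a(t) is the cosmic scale factor,
    % \rho the mass-energy density, k the spatial curvature (+1, 0 or -1),
    % and \Lambda the cosmological constant Einstein introduced.
    \[
      \left(\frac{\dot{a}}{a}\right)^{2}
        = \frac{8\pi G}{3}\,\rho
        - \frac{k c^{2}}{a^{2}}
        + \frac{\Lambda c^{2}}{3}
    \]
    % Depending on the balance of the three terms on the right, a(t) can grow
    % forever, recollapse, or, for one finely tuned choice, stay constant:
    % Einstein's static universe.
    ```

    Einstein's static universe corresponds to the special choice of density, curvature and cosmological constant for which the right-hand side vanishes; for other values the scale factor must change with time.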

    Friedmann’s paper describing dynamic universes, titled “On the Curvature of Space,” was accepted for publication in the prestigious Zeitschrift für Physik on June 29, 1922.

    Einstein objected. He wrote a note to the journal contending that Friedmann had committed a mathematical error. But the error was Einstein’s. He later acknowledged that Friedmann’s math was correct, while still denying that it had any physical validity.

    Friedmann insisted otherwise.

    He was not just a pure mathematician, oblivious to the physical meanings of his symbols on paper. His in-depth appreciation of the relationship between equations and the atmosphere persuaded him that the math meant something physical. He even wrote a book (The World as Space and Time) delving deeply into the connection between the math of spatial geometry and the motion of physical bodies. Physical bodies “interpret” the “geometrical world,” he declared, enabling scientists to test which of the various possible geometrical worlds humans actually inhabit. Because of the physics-math connection, he averred, “it becomes possible to determine the geometry of the geometrical world through experimental studies of the physical world.”

    So when Friedmann derived solutions to Einstein’s equations, he translated them into the possible physical meanings for the universe. Depending on various factors, the universe could be expanding from a point, or from a finite but smaller initial state, for instance. In one case he envisioned, the universe began to expand at a decelerating rate, but then reached an inflection point, whereupon it began expanding at a faster and faster rate. At the end of the 20th century, astronomers measuring the brightness of distant supernovas concluded that the universe had taken just such a course, a shock almost as surprising as the expansion of the universe itself. But Friedmann’s math had already forecast such a possibility.

    In 1929, Edwin Hubble reported that distant galaxies appear to be flying away from us faster than nearby galaxies, key evidence that the universe is expanding. (Image: Pictorial Press Ltd/Alamy Stock Photo)

    No doubt Friedmann’s deep appreciation for the synergy of abstract math and concrete physics prepared his mind to consider the notion that the universe could be expanding. But maybe he had some additional help. Although he was the first scientist to seriously propose an expanding universe, he wasn’t the first person. Almost 75 years before Friedmann’s paper, the poet Edgar Allan Poe had published an essay (or “prose poem”) called Eureka. In that essay Poe described the history of the universe as expanding from the explosion of a “primordial particle.” Poe even described the universe as growing and then contracting back to a point again, just as envisioned in one of Friedmann’s scenarios.

    Although Poe had studied math during his brief time as a student at West Point, he had used no equations in Eureka, and his essay was not recognized as a contribution to science. At least not directly. It turns out, though, that Friedmann was an avid reader, and among his favorite authors were Dostoevsky and Poe. So perhaps that’s why Friedmann was more receptive to an expanding universe than other scientists of his day.

    Today Friedmann’s math remains at the core of modern cosmological theory. “The fundamental equations he derived still provide the basis for the current cosmological theories of the Big Bang and the accelerating universe,” Israeli mathematician and historian Ari Belenkiy noted in a 2013 paper. “He introduced the fundamental idea of modern cosmology — that the universe is dynamic and may evolve in different manners.”

    Friedmann emphasized that astronomical knowledge in his day was insufficient to reveal which of the possible mathematical histories the universe has chosen. Now scientists have much more data, and have narrowed the possibilities in a way that confirms the prescience of Friedmann’s math.

    Friedmann did not live to see the triumphs of his insights, though, or even the early evidence that the universe really does expand. He died in 1925 from typhoid fever, at the age of 37. But he died knowing that he had deciphered a secret about the universe deeper than any suspected by any scientist before him. As his wife remembered, he liked to quote a passage from Dante: “The waters I am entering, no one yet has crossed.”

  •

    Interplay between charge order and superconductivity at nanoscale

    Scientists have been relentlessly working on understanding the fundamental mechanisms at the base of high-temperature superconductivity with the ultimate goal to design and engineer new quantum materials superconducting close to room temperature.
    High-temperature superconductivity is something of a holy grail for researchers studying quantum materials. Superconductors, which conduct electricity without dissipating energy, promise to revolutionize our energy and telecommunication systems. However, superconductors typically work only at extremely low temperatures, requiring elaborate freezers or expensive coolants. For this reason, scientists have been relentlessly working to understand the fundamental mechanisms behind high-temperature superconductivity, with the ultimate goal of designing and engineering new quantum materials that superconduct close to room temperature.
    Fabio Boschini, Professor at the Institut national de la recherche scientifique (INRS), and North American scientists studied the dynamics of the superconductor yttrium barium copper oxide (YBCO), which offers superconductivity at higher-than-normal temperatures, via time-resolved resonant x-ray scattering at the Linac Coherent Light Source (LCLS) free-electron laser, SLAC (US). The research was published on May 19 in the journal Science. In this new study, researchers have been able to track how charge density waves in YBCO react to a sudden “quenching” of the superconductivity, induced by an intense laser pulse.
    “We are learning that charge density waves — self-organized electrons behaving like ripples in water — and superconductivity are interacting at the nanoscale on ultrafast timescales. There is a very deep connection between superconductivity emergence and charge density waves,” says Fabio Boschini, co-investigator on this project and affiliate investigator at the Stewart Blusson Quantum Matter Institute (Blusson QMI).
    “Up until a few years ago, researchers underestimated the importance of the dynamics inside these materials,” said Giacomo Coslovich, lead investigator and Staff Scientist at the SLAC National Accelerator Laboratory in California. “Until this collaboration came together, we really didn’t have the tools to assess the charge density wave dynamics in these materials. The opportunity to look at the evolution of charge order is only possible thanks to teams like ours sharing resources, and by the use of a free-electron laser to offer new insight into the dynamical properties of matter.”
    Owing to a better picture of the dynamical interactions underlying high-temperature superconductors, the researchers are optimistic that they can work with theoretical physicists to develop a framework for a more nuanced understanding of how high-temperature superconductivity emerges.
    Collaboration is key
    The present work came about from a collaboration of researchers from several leading research centres and beamlines. “We began running our first experiments at the end of 2015 with the first characterization of the material at the Canadian Light Source,” says Boschini. “Over time, the project came to involve many Blusson QMI researchers, such as MengXing Na, whom I mentored and introduced to this work. She was integral to the data analysis.”
    “This work is meaningful for a number of reasons, but it also really showcases the importance of forming long-lasting, meaningful collaborations and relationships,” said Na. “Some projects take a really long time, and it’s a credit to Giacomo’s leadership and perseverance that we got here.”
    The project has linked at least three generations of scientists, following some as they progressed through their postdoctoral careers and into faculty positions. The researchers are excited to expand upon this work, by using light as an optical knob to control the on-off state of superconductivity.
    Story Source:
    Materials provided by Institut national de la recherche scientifique – INRS. Original written by Audrey-Maude Vézina. Note: Content may be edited for style and length.

  •

    Virtual immune system roadmap unveiled

    An article published May 20 in Nature’s npj Digital Medicine provides a step-by-step plan for an international effort to create a digital twin of the human immune system.
    “This paper outlines a road map that the scientific community should take in building, developing and applying a digital twin of the immune system,” said Tomas Helikar, a University of Nebraska-Lincoln biochemist who is one of 10 co-authors from six universities around the world. Earlier this year, the National Institutes of Health renewed a five-year, $1.8 million grant for Helikar to continue his work in the area.
    “This is an effort that will require the collaboration of computational biologists, immunologists, clinicians, mathematicians and computer scientists,” he said. “Trying to break this complexity down into measurable and achievable steps has been a challenge. This paper is addressing that.”
    A digital twin of the immune system would be a breakthrough that could offer precision medicine for a wide array of ailments, including cancer, autoimmune disease and viral infections like COVID-19.
    Helikar’s involvement has been inspired in part by his 7-year-old son, who required a lung transplant as an infant. The transplant has meant a lifelong, careful balancing of his son's immune system: powerful immunosuppression drugs prevent organ rejection while infections and other diseases must be kept at bay.
    While the first step is to create a generic model that reflects common biological mechanisms, the eventual goal is to make virtual models at the individual level. That would enable doctors to deliver treatments precisely designed for the individual.

  •

    Using everyday WiFi to help robots see and navigate better indoors

    Engineers at the University of California San Diego have developed a low-cost, low-power technology to help robots accurately map their way indoors, even in poor lighting and without recognizable landmarks or features.
    The technology consists of sensors that use WiFi signals to help the robot map where it’s going. It’s a new approach to indoor robot navigation. Most systems rely on optical light sensors such as cameras and LiDARs. In this case, the so-called “WiFi sensors” use radio frequency signals rather than light or visual cues to see, so they can work in conditions where cameras and LiDARs struggle — in low light, changing light, and repetitive environments such as long corridors and warehouses.
    And by using WiFi, the technology could offer an economical alternative to expensive and power hungry LiDARs, the researchers noted.
    A team of researchers from the Wireless Communication Sensing and Networking Group, led by UC San Diego electrical and computer engineering professor Dinesh Bharadia, will present their work at the 2022 International Conference on Robotics and Automation (ICRA), which will take place from May 23 to 27 in Philadelphia.
    “We are surrounded by wireless signals almost everywhere we go. The beauty of this work is that we can use these everyday signals to do indoor localization and mapping with robots,” said Bharadia.
    “Using WiFi, we have built a new kind of sensing modality that fills in the gaps left behind by today’s light-based sensors, and it can enable robots to navigate in scenarios where they currently cannot,” added Aditya Arun, who is an electrical and computer engineering Ph.D. student in Bharadia’s lab and the first author of the study.

  •

    Is it topological? A new materials database has the answer

    What will it take to make our electronics smarter, faster, and more resilient? One idea is to build them from materials that are topological.
    Topology stems from a branch of mathematics that studies shapes that can be manipulated or deformed without losing certain core properties. A donut is a common example: If it were made of rubber, a donut could be twisted and squeezed into a completely new shape, such as a coffee mug, while retaining a key trait — namely, its center hole, which takes the form of the cup’s handle. The hole, in this case, is a topological trait, robust against certain deformations.
    In recent years, scientists have applied concepts of topology to the discovery of materials with similarly robust electronic properties. In 2007, researchers predicted the first electronic topological insulators — materials in which electrons behave in ways that are “topologically protected,” or persistent in the face of certain disruptions.
    Since then, scientists have searched for more topological materials with the aim of building better, more robust electronic devices. Until recently, only a handful of such materials were identified, and were therefore assumed to be a rarity.
    Now researchers at MIT and elsewhere have discovered that, in fact, topological materials are everywhere, if you know how to look for them.
    In a paper published in Science, the team, led by Nicolas Regnault of Princeton University and the École Normale Supérieure Paris, reports harnessing the power of multiple supercomputers to map the electronic structure of more than 96,000 natural and synthetic crystalline materials. They applied sophisticated filters to determine whether, and what kind of, topological traits exist in each structure.

  •

    Human behavior is key to building a better long-term COVID forecast

    From extreme weather to another wave of COVID-19, forecasts give decision-makers valuable time to prepare. When it comes to COVID, though, long-term forecasting is a challenge, because it involves human behavior.
    While it can sometimes seem like there is no logic to human behavior, new research is working to improve COVID forecasts by incorporating that behavior into prediction models.
    UConn College of Agriculture, Health and Natural Resources Allied Health researcher Ran Xu, along with collaborators Hazhir Rahmandad of the Massachusetts Institute of Technology and Navid Ghaffarzadegan of Virginia Tech, has a paper out today in PLOS Computational Biology detailing how they applied relatively simple but nuanced variables to enhance modeling capabilities. Their approach outperformed a majority of the models currently used to inform decisions made by the federal Centers for Disease Control and Prevention (CDC).
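    The core idea, coupling disease spread to how people react to it, can be illustrated with a toy model. The sketch below is not the authors' model; it is a minimal SIR-style simulation in which a hypothetical response curve lowers the contact rate as recent deaths rise, standing in for the behavioral feedback the paper argues long-term forecasts need. All parameter values are illustrative assumptions.

    ```python
    # Toy SIR model with behavioral feedback (illustrative sketch only, not
    # the published model). Transmission falls as recent deaths rise, the way
    # people reduce contacts when a wave makes the news.

    def simulate(days=200, n=1_000_000, beta0=0.3, gamma=0.1, ifr=0.01, k=50.0):
        s, i, r, d = n - 100.0, 100.0, 0.0, 0.0
        deaths_per_day = []
        for _ in range(days):
            # Behavioral feedback: yesterday's deaths dampen today's contacts.
            recent = deaths_per_day[-1] if deaths_per_day else 0.0
            beta = beta0 / (1.0 + k * recent / n * 1e4)  # hypothetical response curve
            new_inf = beta * s * i / n      # new infections today
            new_rec = gamma * i             # recoveries (including deaths)
            new_dead = ifr * new_rec        # fraction of resolved cases that die
            s -= new_inf
            i += new_inf - new_rec
            r += new_rec - new_dead
            d += new_dead
            deaths_per_day.append(new_dead)
        return deaths_per_day

    series = simulate()
    ```

    With feedback switched on, the epidemic self-limits: deaths rise until the induced caution pushes the effective transmission rate down toward the recovery rate, flattening the wave rather than letting it burn through the whole population at once.
    
    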
    Xu explains that he and his collaborators are methodologists, and they were interested in examining which parameters impacted the forecasting accuracy of the COVID prediction models. To begin, they turned to the CDC prediction hub, which serves as a repository of models from across the United States.
    “Currently there are over 70 different models, mostly from universities and some from companies, that are updated weekly,” says Xu. “Each week, these models give predictions for cases and number of deaths in the next couple of weeks. The CDC uses this information to inform their decisions; for example, where to strategically focus their efforts or whether to advise people to do social distancing.”
    The Human Factor
    The data comprised over 490,000 point forecasts of weekly incident deaths across 57 US locations over the course of one year. The researchers analyzed the length of prediction and the relative accuracy of the predictions across a period of 14 weeks. On further analysis, Xu says, they noticed something interesting when they categorized the models based on their methodologies.