More stories

  • Big step with small whirls

    Many of us may still be familiar with the simple physical principles of magnetism from school. However, this general knowledge about north and south poles quickly becomes far more complex when one looks at what happens at the atomic level. At such minute scales, the magnetic interactions between atoms can create unique states, such as skyrmions.
    Skyrmions have very special properties and can exist in certain material systems, such as a “stack” of different sub-nanometer-thick metal layers. Modern computer technology based on skyrmions — which are only a few nanometers in size — promises an extremely compact and ultrafast way of storing and processing data. For example, data could be stored by representing the bits “1” and “0” as the presence and absence of a given skyrmion; this concept could be used in “racetrack” memories (see info box). However, such a scheme requires that the distance between the skyrmion representing “1” and the skyrmion gap representing “0” remain constant during data transport; otherwise, large read errors could occur.
    A better alternative is to use skyrmions of different sizes to represent “0” and “1.” These could then be transported like pearls on a string, with the distances between the pearls playing no significant role. The existence of two different types of skyrmions (the skyrmion and the skyrmion bobber) had so far only been predicted theoretically and had only been shown experimentally in a specially grown monocrystalline material. In those experiments, however, the skyrmions exist only at extremely low temperatures. These limitations make that material unsuitable for practical applications.
    Experience with ferromagnetic multilayer systems and magnetic force microscopy
    The research group led by Hans Josef Hug at Empa has now succeeded in solving this problem: “We have produced a multilayer system consisting of various sub-nanometer-thick ferromagnetic, noble metal and rare-earth metal layers, in which two different skyrmion states can coexist at room temperature,” says Hug. His team had been studying skyrmion properties in ultra-thin ferromagnetic multilayer systems using the magnetic force microscope that they developed at Empa. For their latest experiments, they fabricated material layers made from the following metals: iridium (Ir), iron (Fe), cobalt (Co), platinum (Pt) and the rare-earth metals terbium (Tb) and gadolinium (Gd).
    Between the two ferromagnetic multilayers that generate skyrmions — in which the combination of Ir/Fe/Co/Pt layers is overlaid five times — the researchers inserted a ferrimagnetic multilayer consisting of a TbGd alloy layer and a Co layer. The special feature of this layer is that it cannot generate skyrmions on its own. The outer two layers, on the other hand, generate skyrmions in large numbers.


    The researchers adjusted the mixing ratio of the two metals Tb and Gd and the thicknesses of the TbGd and Co layers in the central layer in such a way that its magnetic properties can be influenced by the outer layers: the ferromagnetic layers “force” skyrmions into the central ferrimagnetic layer. This results in a multilayer system where two different types of skyrmions exist.
    Experimental and theoretical evidence
    The two types of skyrmions can easily be distinguished from each other with the magnetic force microscope thanks to their different sizes and signal intensities. The larger skyrmion, which also creates the stronger magnetic field, penetrates the entire multilayer system, including the middle ferrimagnetic multilayer. The smaller, weaker skyrmion, on the other hand, exists only in the two outer multilayers. Herein lies the great significance of the latest results for a possible use of skyrmions in data processing: if binary data — 0 and 1 — are to be stored and read, the two states must be clearly distinguishable, which the two different types of skyrmions make possible here.
    Using the magnetic force microscope, individual parts of these multilayers were compared with each other. This allowed Hug’s team to determine in which layers the different skyrmions occur. Furthermore, micromagnetic computer simulations confirmed the experimental results. These simulations were carried out in collaboration with theoreticians from the universities of Vienna and Messina.
    Empa researcher Andrada-Oana Mandru, the first author of the study, is hopeful that a major challenge towards practical applications has been overcome: “The multilayers we have developed using sputtering technology can in principle also be produced on an industrial scale,” she said. In addition, similar systems could possibly be used in the future to build three-dimensional data storage devices with even greater storage density. The team recently published their work in the renowned journal Nature Communications.
    Racetrack Memory
    The concept of such a memory was devised at IBM in 2004. Information is written in one place by means of magnetic domains — i.e. magnetically aligned areas — which are then moved rapidly within the device by electric currents. One bit corresponds to one such magnetic domain; this task could be performed by a skyrmion, for example. The carrier material for these magnetic information units is a nanowire, more than a thousand times thinner than a human hair, which promises an extremely compact form of data storage. Transporting data along the wires is also extremely fast — about 100,000 times faster than in a conventional flash memory — and consumes far less energy.
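As a rough illustration of the idea (a toy sketch, not the Empa team's implementation), the two-skyrmion encoding can be modeled as a shift register in a few lines of Python. The type labels and the cell layout here are purely hypothetical:

```python
from collections import deque

# Toy model of a skyrmion racetrack: each cell holds a large "tube"
# skyrmion (bit 1) or a smaller "bobber" (bit 0). Because the bit is
# carried by the skyrmion TYPE rather than by presence/absence, the
# spacing between carriers no longer has to stay constant.
TUBE, BOBBER = "T", "B"  # hypothetical labels for the two skyrmion types

def write_bits(bits):
    """Encode a bit string as a train of skyrmions on the track."""
    return deque(TUBE if b == "1" else BOBBER for b in bits)

def shift_and_read(track):
    """A current pulse shifts the whole train one cell; read the carrier
    that passes the fixed sensor at the end of the wire."""
    carrier = track.pop()  # skyrmion arriving at the read head
    return "1" if carrier == TUBE else "0"

track = write_bits("1011")
out = "".join(shift_and_read(track) for _ in range(4))
print(out)  # bits arrive in reverse write order: "1101"
```

The point of the sketch is the encoding itself: shifting the train faster or slower, or with uneven spacing, cannot flip a bit, because the information sits in each carrier's type.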

  • Developing smarter, faster machine intelligence with light

    Researchers at the George Washington University, together with researchers at the University of California, Los Angeles, and the deep-tech venture startup Optelligence LLC, have developed an optical convolutional neural network accelerator capable of processing large amounts of information, on the order of petabytes, per second. This innovation, which harnesses the massive parallelism of light, heralds a new era of optical signal processing for machine learning with numerous applications, including self-driving cars, 5G networks, data centers, biomedical diagnostics, data security and more.


    Global demand for machine learning hardware is dramatically outpacing the current supply of computing power. State-of-the-art electronic hardware, such as graphics processing units and tensor processing unit accelerators, helps mitigate this, but is intrinsically limited by serial, iterative data processing and by delays arising from wiring and circuit constraints. Optical alternatives to electronic hardware could help speed up machine learning by processing information in a non-iterative way. However, photonic machine learning is typically limited by the number of components that can be placed on a photonic integrated circuit, which restricts interconnectivity, while free-space spatial light modulators are limited to slow programming speeds.
    To achieve a breakthrough in this optical machine learning system, the researchers replaced spatial light modulators with digital mirror-based technology, thus developing a system over 100 times faster. The non-iterative timing of this processor, in combination with rapid programmability and massive parallelization, enables this optical machine learning system to outperform even the top-of-the-line graphics processing units by over one order of magnitude, with room for further optimization beyond the initial prototype.
    Unlike the current paradigm in electronic machine learning hardware, which processes information sequentially, this processor uses Fourier optics, a frequency-filtering concept that allows the required convolutions of the neural network to be performed as much simpler element-wise multiplications using the digital mirror technology.
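The convolution theorem behind this Fourier-optics trick can be sketched with NumPy: a circular 2-D convolution reduces to an element-wise product in the frequency domain, which is the operation the optical hardware evaluates in a single pass. The array sizes and the brute-force reference below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
kernel = rng.standard_normal((8, 8))

def circular_conv2d(img, ker):
    """Direct circular convolution: a quadruple loop, one multiply-add
    at a time. This is the work the optical Fourier transform avoids."""
    n, m = img.shape
    out = np.zeros_like(img)
    for i in range(n):
        for j in range(m):
            for a in range(n):
                for b in range(m):
                    out[i, j] += img[a, b] * ker[(i - a) % n, (j - b) % m]
    return out

# Fourier route: transform, multiply element-wise, transform back.
# The element-wise product is exactly what the mirror array applies.
fourier = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))

assert np.allclose(circular_conv2d(image, kernel), fourier)
```

In the optical system the two transforms are performed by lenses essentially for free, so the whole convolution collapses into the single element-wise step, which is where the claimed parallelism comes from.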
    “Optics allows for processing large-scale matrices in a single time-step, which allows for new scaling vectors of performing convolutions optically. This can have significant potential for machine learning applications as demonstrated here,” said Puneet Gupta, professor and vice chair of computer engineering at UCLA.


    Story Source:
    Materials provided by George Washington University. Note: Content may be edited for style and length.

    Journal Reference:
    Mario Miscuglio, Zibo Hu, Shurui Li, Jonathan K. George, Roberto Capanna, Hamed Dalir, Philippe M. Bardet, Puneet Gupta, Volker J. Sorger. Massively parallel amplitude-only Fourier neural network. Optica, 2020; 7 (12): 1812 DOI: 10.1364/OPTICA.408659


  • New curriculum improves students' understanding of electric circuits in schools

    The topic of electricity often poses difficulties for many secondary school students in physics lessons. Researchers have now developed and empirically evaluated a new, intuitive curriculum as part of a major comparative study. The result: not only do secondary school students gain a better conceptual understanding of electric circuits, but teachers also perceive the curriculum as a significant improvement in their teaching.

  • Artificial Intelligence that can run a simulation faithful to physical laws

    A research group led by Associate Professor YAGUCHI Takaharu (Graduate School of System Informatics) and Associate Professor MATSUBARA Takashi (Graduate School of Engineering Science, Osaka University) has succeeded in developing technology to simulate phenomena whose detailed mechanism or governing formula is unknown. They did this by using artificial intelligence (AI) to create a model, faithful to the laws of physics, from observational data.
    It is hoped that this development will make it possible to predict phenomena that have been difficult to simulate up until now because their detailed underlying mechanisms were unknown. It is also expected to increase the speed of the simulations themselves.
    These research achievements were presented on December 7 at the Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS 2020), a conference on artificial intelligence technology. Of the 9,454 papers submitted to NeurIPS 2020, 1,900 were accepted; this paper was one of only 105 selected for oral presentation, placing it in the top 1.1%.
    Main Points
    Being able to apply artificial intelligence to the prediction of physical phenomena could result in extremely precise, high-speed simulations.
    Prediction methods up to now have been prone to overestimating or underestimating results, because the difficulty of digitizing phenomena means that the laws of physics (such as the law of energy conservation) are not preserved.
    This research group developed AI-based technology that can run simulations while preserving the laws of physics. To do so, they used discrete analysis to reformulate the physics in a form the computer can recognize in the digital world.
    It is expected that this technology will enable phenomena for which the detailed mechanism or formula is unclear (e.g. wave motion, fracture mechanics (such as crack growth) and the growth of crystal structures) to be simulated as long as there is sufficient observational data.
    Research Background
    Ordinarily, predictions of physical phenomena can be carried out via simulations on supercomputers, using equations based on the laws of physics. Even though these equations are highly versatile, they are not always capable of perfectly replicating the distinct characteristics of individual phenomena. For example, many people learn about the physics behind the motion of a pendulum in high school. If you were to actually build a pendulum and swing it, however, a slight manufacturing defect could cause it not to move in accordance with the theory, resulting in an error in the simulation’s prediction. Consequently, research into feeding observational data of phenomena into simulations via artificial intelligence has been advancing in recent years. If this can be fully realized, it will be possible to develop custom simulations of real phenomena, which should improve the accuracy of the simulations’ predictions.


    However, it is difficult to introduce the laws of physics that govern real-world phenomena into AI-based prediction technology, because computers are digital: it has been hard to perfectly replicate physical laws such as the law of energy conservation. Consequently, unnatural increases or decreases in energy may occur in long-term predictions. This can cause quantities such as object speed or wave height to be overestimated or underestimated, leaving the reliability of the prediction uncertain.
    Research Findings
    This research group developed a new artificial intelligence-based technology that can be utilized to predict various phenomena by strictly preserving physical laws such as the energy conservation law.
    This newly developed approach was born from the notion ‘if the world were digital’. Based on this way of thinking, physical laws that must be preserved in such a digital world were introduced. Focusing on the fact that physical laws are written in calculus terms such as ‘differentiation’ and ‘integration’, the researchers rewrote them using digital calculus.
    To achieve this technically, the researchers developed a new digital version of backpropagation (*1), which is used in machine learning, by means of automatic differentiation. With this new approach, physical laws such as the law of energy conservation can be preserved in the digital world, and can therefore be correctly realized by the AI even within simulations. The new methodology makes highly reliable predictions possible and prevents the unnatural increases and decreases in energy seen in conventional models.
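The paper's discrete construction is more general than this, but the flavor of "preserving physics in the digital world" can be illustrated with a standard symplectic (leapfrog) integrator for a harmonic oscillator, contrasted with explicit Euler, whose energy drifts. This is a generic textbook sketch, not the authors' method:

```python
# Harmonic oscillator H(q, p) = p^2/2 + q^2/2. Explicit Euler lets the
# energy grow without bound; the symplectic leapfrog scheme keeps it
# bounded, so the digital dynamics respect the conservation law.
def euler(q, p, dt, steps):
    for _ in range(steps):
        q, p = q + dt * p, p - dt * q
    return q, p

def leapfrog(q, p, dt, steps):
    for _ in range(steps):
        p -= 0.5 * dt * q   # half kick
        q += dt * p         # drift
        p -= 0.5 * dt * q   # half kick
    return q, p

H = lambda q, p: 0.5 * (p * p + q * q)
q0, p0, dt, n = 1.0, 0.0, 0.1, 5000

e_euler = H(*euler(q0, p0, dt, n))
e_leap = H(*leapfrog(q0, p0, dt, n))
print(e_euler, e_leap)  # Euler's energy explodes; leapfrog stays near 0.5
```

The same long-run contrast is what separates a prediction that slowly becomes unphysical from one whose energy budget remains trustworthy.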


    In the technique developed in this study, the AI learns the energy function from observational data of the physical phenomena and then generates equations of motion in the digital world. These equations of motion can be utilized as-is by the simulation program, and it is expected that the application of such equations will result in new scientific discoveries (Figure 1). In addition, it is not necessary for these equations of motion to be rewritten for the computer simulation, so physical laws such as the energy conservation law can be replicated.
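A minimal sketch of this pipeline follows, with the true pendulum energy standing in for the learned energy function and central differences standing in for automatic differentiation; both stand-ins are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def H(q, p):
    # Stand-in for the learned energy function: the true pendulum
    # energy. In the actual method, a neural network fit to
    # observational data would take its place.
    return 0.5 * p ** 2 + (1.0 - np.cos(q))

def grad_H(q, p, eps=1e-6):
    # Central differences stand in for automatic differentiation
    # of the learned model.
    dHdq = (H(q + eps, p) - H(q - eps, p)) / (2 * eps)
    dHdp = (H(q, p + eps) - H(q, p - eps)) / (2 * eps)
    return dHdq, dHdp

def step(q, p, dt):
    # Hamilton's equations dq/dt = dH/dp, dp/dt = -dH/dq, discretized
    # with the symplectic Euler scheme so the energy stays bounded.
    p = p - dt * grad_H(q, p)[0]
    q = q + dt * grad_H(q, p)[1]
    return q, p

q, p = 1.0, 0.0
for _ in range(1000):
    q, p = step(q, p, 0.01)
print(H(q, p))  # remains close to the initial energy H(1, 0) ≈ 0.46
```

Once the energy function is learned, the equations of motion fall out of it automatically, which is why the same trained model can be handed directly to a simulation program.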
    To introduce physical laws into the digital world, geometric approaches such as those of symplectic geometry (*2) and Riemannian geometry (*3) were also utilized, making it possible to apply the technique to a wider range of phenomena. For example, the phenomenon of two droplets merging into one can be explained in terms of the energy lost when they become a single droplet; this kind of phenomenon is well described by Riemannian geometry. In fact, from a geometric viewpoint, energy-conserving and energy-dissipating phenomena can be expressed by equations of similar form, which could enable a unified framework that handles both types of phenomena. By incorporating this way of thinking, the model developed through this research was extended to handle energy-dissipation phenomena as well, making it possible to accurately estimate the reduction in energy.
    Examples of such phenomena include the structural organization of materials, crystal growth and crack extension mechanics, and it is hoped that further developments in AI technology will enable these kinds of phenomena to be predicted.
    Moreover, the research group also succeeded in making the AI’s learning more efficient; experiments showed it to be about 10 times faster than current methods.
    Further Research
    The approach developed by this research suggests that, when predicting physical phenomena, it would be possible to produce custom simulations that imitate detailed aspects of those phenomena that are difficult for humans to capture by hand. This would increase the accuracy of the simulations while also making predictions more efficient, reducing the calculation time of various physics simulations.
    Furthermore, using AI to extract physical laws from observational data will make it possible to predict phenomena that were previously difficult to simulate due to their detailed mechanisms being unknown.
    Predictions made by AI have often been called ‘black boxes’ and are prone to reliability issues. The approach developed through this research, however, is highly reliable because it accurately replicates phenomena while adhering to physical laws such as the law of energy conservation, meaning that overestimates and underestimates are unlikely to occur.
    This technique also extends backpropagation itself, which is commonly used in AI learning. It could therefore improve the speed of various types of machine learning beyond the technology in this research study.
    Glossary
    1. Backpropagation (BP): Backpropagation is an algorithm used in machine learning. It is used to calculate how best to correct incorrect responses given by the AI during the learning period (based on differentiation calculations).
    2. Symplectic geometry: The geometry underlying mechanical theories such as Newton’s laws. It can describe physical laws such as mechanics without reference to coordinates, since those laws hold regardless of any specific coordinate system. Equations of motion can therefore be described and analyzed using symplectic geometry.
    3. Riemannian geometry: Riemannian geometry is used to study curved surfaces. It enables the concepts of length and angle to be introduced to a variety of subjects. By using this geometric approach, phenomena such as the dissipation of energy can be modelled in terms of a point moving down a slope.

  • Researchers use artificial intelligence to ID mosquitoes

    Rapid and accurate identification of mosquitoes that transmit human pathogens such as malaria is an essential part of mosquito-borne disease surveillance. Now, researchers have shown the effectiveness of an artificial intelligence system — a convolutional neural network — in classifying mosquito sex, genus, species and strain.