More stories

  • Developing smarter, faster machine intelligence with light

    Researchers at the George Washington University, together with researchers at the University of California, Los Angeles, and the deep-tech venture startup Optelligence LLC, have developed an optical convolutional neural network accelerator capable of processing large amounts of information, on the order of petabytes per second. This innovation, which harnesses the massive parallelism of light, heralds a new era of optical signal processing for machine learning, with numerous applications including self-driving cars, 5G networks, data centers, biomedical diagnostics, data security and more.

    Global demand for machine learning hardware is dramatically outpacing current computing power supplies. State-of-the-art electronic hardware, such as graphics processing units and tensor processing unit accelerators, helps mitigate this, but is intrinsically limited by serial data processing that requires iterative computation and suffers delays from wiring and circuit constraints. Optical alternatives to electronic hardware could speed up machine learning by processing information in parallel rather than iteratively. However, photonic machine learning is typically limited by the number of components that can be placed on photonic integrated circuits, which restricts interconnectivity, while free-space spatial light modulators are confined to slow programming speeds.
    To achieve a breakthrough in this optical machine learning system, the researchers replaced the spatial light modulators with digital mirror-based technology, producing a system more than 100 times faster. The non-iterative operation of this processor, combined with rapid programmability and massive parallelization, enables the optical machine learning system to outperform even top-of-the-line graphics processing units by more than an order of magnitude, with room for further optimization beyond the initial prototype.
    Unlike the current paradigm in electronic machine learning hardware, which processes information sequentially, this processor uses Fourier optics, a frequency-filtering technique that allows the required convolutions of the neural network to be performed as much simpler element-wise multiplications using the digital mirror technology.
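    The principle being exploited is the convolution theorem: a convolution in the spatial domain is equivalent to an element-wise multiplication in the Fourier domain. The short NumPy sketch below checks that identity numerically; it illustrates only the underlying mathematics, not the authors' optical hardware.

        import numpy as np

        # Convolution theorem: a (circular) convolution in the spatial domain
        # equals an element-wise product in the Fourier domain.
        rng = np.random.default_rng(0)
        n = 8
        image = rng.standard_normal((n, n))
        kernel = rng.standard_normal((n, n))

        # FFT route: two transforms, one element-wise product, one inverse.
        fft_conv = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))

        # Brute-force circular convolution for comparison.
        direct = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    for l in range(n):
                        direct[i, j] += image[k, l] * kernel[(i - k) % n, (j - l) % n]

        assert np.allclose(fft_conv, direct)  # identical up to rounding error

    The optical processor performs the element-wise multiplication step in hardware, in a single pass of light, which is what removes the sequential bottleneck.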
    “Optics allows for processing large-scale matrices in a single time-step, which allows for new scaling vectors of performing convolutions optically. This can have significant potential for machine learning applications as demonstrated here,” said Puneet Gupta, professor and vice chair of computer engineering at UCLA.

    Story Source:
    Materials provided by George Washington University. Note: Content may be edited for style and length.

    Journal Reference:
    Mario Miscuglio, Zibo Hu, Shurui Li, Jonathan K. George, Roberto Capanna, Hamed Dalir, Philippe M. Bardet, Puneet Gupta, Volker J. Sorger. Massively parallel amplitude-only Fourier neural network. Optica, 2020; 7 (12): 1812 DOI: 10.1364/OPTICA.408659

    Cite This Page:

    George Washington University. “Developing smarter, faster machine intelligence with light: Researchers invent an optical convolutional neural network accelerator for machine learning.” ScienceDaily. ScienceDaily, 18 December 2020. www.sciencedaily.com/releases/2020/12/201218131856.htm.
    George Washington University. (2020, December 18). Developing smarter, faster machine intelligence with light: Researchers invent an optical convolutional neural network accelerator for machine learning. ScienceDaily. Retrieved December 19, 2020 from www.sciencedaily.com/releases/2020/12/201218131856.htm
    George Washington University. “Developing smarter, faster machine intelligence with light: Researchers invent an optical convolutional neural network accelerator for machine learning.” ScienceDaily. www.sciencedaily.com/releases/2020/12/201218131856.htm (accessed December 19, 2020).

  • New curriculum improves students' understanding of electric circuits in schools

    The topic of electricity often poses difficulties for many secondary school students in physics lessons. Researchers have now developed and empirically evaluated a new, intuitive curriculum as part of a major comparative study. The result: not only do secondary school students gain a better conceptual understanding of electric circuits, but teachers also perceive the curriculum as a significant improvement in their teaching.

  • Artificial Intelligence that can run a simulation faithful to physical laws

    A research group led by Associate Professor YAGUCHI Takaharu (Graduate School of System Informatics) and Associate Professor MATSUBARA Takashi (Graduate School of Engineering Science, Osaka University) has succeeded in developing technology to simulate phenomena whose detailed mechanisms or governing equations are not yet understood. They did this by using artificial intelligence (AI) to create, from observational data, a model that is faithful to the laws of physics.
    It is hoped that this development will make it possible to predict phenomena that have been difficult to simulate up until now because their detailed underlying mechanisms were unknown. It is also expected to increase the speed of the simulations themselves.
    These research achievements were presented on December 7 at the Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS 2020), a meeting on artificial intelligence technology. Of the 9,454 papers submitted to NeurIPS 2020, 1,900 were accepted, and this paper was one of only 105 (the top 1.1% of submissions) selected for oral presentation at the conference.
    Main Points
    Being able to apply artificial intelligence to the prediction of physical phenomena could result in extremely precise, high-speed simulations.
    Prediction methods up to now have been prone to overestimating or underestimating results because the difficulty of digitizing phenomena means that the laws of physics (such as the energy conservation law) are not preserved.
    This research group developed AI-based technology that can run simulations while preserving the laws of physics, by using digital analysis to express those laws in a form the computer can recognize in the digital world.
    It is expected that this technology will enable phenomena whose detailed mechanism or formula is unclear (e.g., wave motion, fracture mechanics such as crack growth, and the growth of crystal structures) to be simulated, as long as there is sufficient observational data.
    Research Background
    Ordinarily, it is possible to carry out predictions of physical phenomena via simulations using supercomputers, and these simulations use equations based on the laws of physics. Even though these equations are highly versatile, this does not always mean that they are capable of perfectly replicating the distinct characteristics of individual phenomena. For example, many people learn about the physics behind the motion of a pendulum in high school. However, if you were to actually make a pendulum and try swinging it, a slight manufacturing defect in the pendulum could cause it not to move in accordance with the theory and this would result in an error in the simulation’s prediction. Consequently, research into applying observational data of phenomena to simulations via artificial intelligence has been advancing in recent years. If this can be fully realized, it will be possible to develop custom simulations of real phenomena, which should improve the accuracy of simulations’ predictions.

    However, it is difficult to introduce the laws of physics that govern real-world phenomena into current AI-based prediction technology because computers are digital, and it has been hard to perfectly replicate physical laws such as the energy conservation law. Consequently, unnatural increases or decreases in energy may occur in long-term predictions, causing quantities such as object speed or wave height to be overestimated or underestimated and leaving the reliability of the predictions uncertain.
    Research Findings
    This research group developed a new artificial intelligence-based technology that can be utilized to predict various phenomena by strictly preserving physical laws such as the energy conservation law.
    This newly developed approach was born from the notion ‘if the world were digital’. Based on this way of thinking, the researchers introduced the physical laws that must be preserved in such a digital world. Focusing on the fact that physical laws are written in the language of calculus, with operations such as ‘differentiation’ and ‘integration’, they rewrote those laws using digital (discrete) counterparts of these operations.
    To realize this technically, the researchers developed a new digital version of backpropagation (*1), the automatic differentiation technique utilized in machine learning. This makes it possible to preserve physical laws such as the energy conservation law in the digital world, so that the laws are correctly realized by the AI even in simulations. The new methodology yields highly reliable predictions and prevents the unnatural increases and decreases in energy seen in conventional models.
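    As a rough software analogue of this idea, the sketch below is written in the spirit of Hamiltonian neural networks: a small network learns an energy function H(q, p) from trajectory data, and the equations of motion are derived from H by automatic differentiation, so the learned dynamics inherit Hamiltonian structure. The network sizes, toy data and training signal are illustrative assumptions, not the authors' exact formulation.

        import torch
        import torch.nn as nn

        class EnergyNet(nn.Module):
            """Small MLP that learns a scalar energy H(q, p) from data."""
            def __init__(self, dim=1, hidden=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(2 * dim, hidden), nn.Tanh(),
                    nn.Linear(hidden, hidden), nn.Tanh(),
                    nn.Linear(hidden, 1),
                )

            def forward(self, q, p):
                return self.net(torch.cat([q, p], dim=-1))

        def learned_dynamics(model, q, p):
            # Hamilton's equations dq/dt = dH/dp, dp/dt = -dH/dq, obtained by
            # automatic differentiation of the learned energy.
            q = q.detach().requires_grad_(True)
            p = p.detach().requires_grad_(True)
            H = model(q, p).sum()
            dHdq, dHdp = torch.autograd.grad(H, (q, p), create_graph=True)
            return dHdp, -dHdq

        # Train on a toy trajectory (unit harmonic oscillator) by matching the
        # model's (dq/dt, dp/dt) to finite differences of the observed data.
        model = EnergyNet()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        t = torch.linspace(0, 10, 500)
        q_obs = torch.cos(t).unsqueeze(-1)   # observed positions
        p_obs = -torch.sin(t).unsqueeze(-1)  # observed momenta
        dq_obs = torch.gradient(q_obs, spacing=(t,), dim=0)[0]
        dp_obs = torch.gradient(p_obs, spacing=(t,), dim=0)[0]

        for _ in range(200):
            dq, dp = learned_dynamics(model, q_obs, p_obs)
            loss = ((dq - dq_obs) ** 2 + (dp - dp_obs) ** 2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()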

    In the technique developed in this study, the AI learns the energy function from observational data of the physical phenomena and then generates equations of motion in the digital world. These equations of motion can be utilized as-is by the simulation program, and it is expected that the application of such equations will result in new scientific discoveries (Figure 1). In addition, it is not necessary for these equations of motion to be rewritten for the computer simulation, so physical laws such as the energy conservation law can be replicated.
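    The kind of conservation ‘in the digital world’ that such a simulation relies on can be seen even without learning. The generic sketch below uses a standard implicit-midpoint update (chosen here for illustration; it is not the paper's scheme): for a quadratic oscillator energy, the discrete update conserves H exactly over long runs, whereas a naive explicit step would drift.

        def H(q, p):
            return 0.5 * (p ** 2 + q ** 2)  # harmonic-oscillator energy

        def midpoint_step(q, p, dt, iters=20):
            # Implicit midpoint rule: conserves quadratic energies exactly,
            # a discrete analogue of the energy conservation law.
            qn, pn = q, p
            for _ in range(iters):  # fixed-point iteration on the implicit update
                qm, pm = 0.5 * (q + qn), 0.5 * (p + pn)
                qn = q + dt * pm    # dq/dt =  dH/dp = p
                pn = p - dt * qm    # dp/dt = -dH/dq = -q
            return qn, pn

        q, p = 1.0, 0.0
        e0 = H(q, p)
        for _ in range(10000):
            q, p = midpoint_step(q, p, 0.01)
        print(abs(H(q, p) - e0))  # near machine precision: no energy drift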
    To introduce physical laws into the digital world, geometric approaches such as those of symplectic geometry (*2) and Riemannian geometry (*3) were also utilized, making it possible to apply the technique to a wider range of phenomena. For example, the phenomenon of two droplets merging into one can be explained in terms of the energy lost when they become a single droplet, and this kind of dissipative behavior is well described by Riemannian geometry. In fact, from a geometric standpoint, energy-conserving and energy-dissipating phenomena can be expressed by similar equations, which could enable a unified framework that handles both types of phenomena. By incorporating this way of thinking, the model developed through this research was extended to handle energy dissipation as well, making it possible to accurately estimate the reduction in energy.
    Examples of such phenomena include the structural organization of materials, crystal growth and crack extension mechanics, and it is hoped that further developments in AI technology will enable these kinds of phenomena to be predicted.
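    Dissipation has an equally simple caricature: a gradient flow dx/dt = -dH/dx moves ‘down the slope’ of the energy, so H decreases monotonically (cf. the glossary entry on Riemannian geometry below). The toy sketch that follows only illustrates that geometric picture; the paper's Riemannian treatment is far more general.

        def H(x):
            return 0.5 * x ** 2  # toy energy

        def grad_H(x):
            return x

        x, dt = 1.0, 0.1
        for _ in range(5):
            x -= dt * grad_H(x)  # explicit Euler step on the gradient flow
            print(H(x))          # 0.405, 0.328..., 0.266...: energy decays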
    Moreover, the research group also successfully increased the efficiency of the AI's learning; experiments showed it to be 10 times faster than current methods.
    Further Research
    The approach developed by this research suggests that, when predicting physical phenomena, it would be possible to produce custom simulations that imitate detailed aspects of these phenomena that are difficult for humans to model by hand. This would make it possible to increase the accuracy of simulations while also making predictions more efficient, leading to improvements in calculation time for various physics simulations.
    Furthermore, using AI to extract physical laws from observational data will make it possible to predict phenomena that were previously difficult to simulate due to their detailed mechanisms being unknown.
    Predictions made by AI have often been termed ‘black boxes’ and are prone to reliability issues. However, the approach developed through this research is highly reliable because it accurately replicates phenomena while adhering to physical laws such as the energy conservation law, meaning that overpredictions and underpredictions are unlikely to occur.
    This technique also extends backpropagation, which is commonly utilized in AI learning, so it could improve the speed of various types of machine learning beyond the technology in this research study.
    Glossary
    1. Backpropagation (BP): Backpropagation is an algorithm used in machine learning. It is used to calculate how best to correct incorrect responses given by the AI during the learning period (based on differentiation calculations).
    2. Symplectic geometry: The geometry underlying mechanical theories such as Newton's laws. It can describe physical laws such as those of mechanics without reference to coordinates, since the laws hold regardless of any specific coordinate system. This makes it possible to describe and analyze equations of motion using symplectic geometry.
    3. Riemannian geometry: Riemannian geometry is used to study curved surfaces. It enables the concepts of length and angle to be introduced to a variety of subjects. By using this geometric approach, phenomena such as the dissipation of energy can be modelled in terms of a point moving down a slope.

  • Researchers use artificial intelligence to ID mosquitoes

    Rapid and accurate identification of mosquitoes that transmit human pathogens such as malaria is an essential part of mosquito-borne disease surveillance. Now, researchers have shown the effectiveness of an artificial intelligence system — known as a convolutional neural network — to classify mosquito sex, genus, species and strain.

  • Scientists create entangled photons 100 times more efficiently than previously possible

    Super-fast quantum computers and communication devices could revolutionize countless aspects of our lives — but first, researchers need a fast, efficient source of the entangled pairs of photons such systems use to transmit and manipulate information. Researchers at Stevens Institute of Technology have done just that, not only creating a chip-based photon source 100 times more efficient than previously possible, but bringing massive quantum device integration within reach.
    “It’s long been suspected that this was possible in theory, but we’re the first to show it in practice,” said Yuping Huang, Gallagher associate professor of physics and director of the Center for Quantum Science and Engineering.
    To create photon pairs, researchers trap light in carefully sculpted nanoscale microcavities; as light circulates in the cavity, its photons resonate and split into entangled pairs. But there’s a catch: at present, such systems are extremely inefficient, requiring a torrent of incoming laser light comprising hundreds of millions of photons before a single entangled photon pair will grudgingly drip out at the other end.
    Huang and colleagues at Stevens have now developed a new chip-based photon source that’s 100 times more efficient than any previous device, allowing the creation of tens of millions of entangled photon pairs per second from a single microwatt-powered laser beam.
    “This is a huge milestone for quantum communications,” said Huang, whose work will appear in the Dec. 17 issue of Physical Review Letters.
    Working with Stevens graduate students Zhaohui Ma and Jiayang Chen, Huang built on his laboratory’s previous research to carve extremely high-quality microcavities into flakes of lithium niobate crystal. The racetrack-shaped cavities internally reflect photons with very little loss of energy, enabling light to circulate longer and interact with greater efficiency.
    By fine-tuning additional factors such as temperature, the team was able to create an unprecedentedly bright source of entangled photon pairs. In practice, that allows photon pairs to be produced in far greater quantities for a given amount of incoming light, dramatically reducing the energy needed to power quantum components.
    The team is already working on ways to further refine their process, and they expect to soon attain the true Holy Grail of quantum optics: a system that can turn a single incoming photon into an entangled pair of outgoing photons, with virtually no waste energy along the way. “It’s definitely achievable,” said Chen. “At this point we just need incremental improvements.”
    Until then, the team plans to continue refining their technology, and seeking ways to use their photon source to drive logic gates and other quantum computing or communication components. “Because this technology is already chip-based, we’re ready to start scaling up by integrating other passive or active optical components,” explained Huang.
    The ultimate goal, Huang said, is to make quantum devices so efficient and cheap to operate that they can be integrated into mainstream electronic devices. “We want to bring quantum technology out of the lab, so that it can benefit every single one of us,” he explained. “Someday soon we want kids to have quantum laptops in their backpacks, and we’re pushing hard to make that a reality.”

    Story Source:
    Materials provided by Stevens Institute of Technology. Note: Content may be edited for style and length.

  • Scientists simulate a large-scale virus, M13

    Scientists have developed a procedure that combines various resolution levels in a computer simulation of a biological virus. Their procedure maps a large-scale model that includes features such as the virus structure and nanoparticles to its corresponding coarse-grained molecular model. This approach opens the prospect of whole-length virus simulation at the molecular level.