More stories

  • Engineering roboticists discover alternative physics

    A precursor step to understanding physics is identifying relevant variables. Columbia Engineers developed an AI program to tackle a longstanding problem: whether it is possible to identify state variables from only high-dimensional observational data. Using video recordings of a variety of physical dynamical systems, the algorithm discovered the intrinsic dimension of the observed dynamics and identified candidate sets of state variables — without prior knowledge of the underlying physics.
    Energy, mass, velocity. These three variables make up Einstein’s iconic equation E=mc². But how did Einstein know about these concepts in the first place? A precursor step to understanding physics is identifying relevant variables. Without the concepts of energy, mass, and velocity, not even Einstein could have discovered relativity. But can such variables be discovered automatically? Doing so could greatly accelerate scientific discovery.
    This is the question that researchers at Columbia Engineering posed to a new AI program. The program was designed to observe physical phenomena through a video camera, then try to search for the minimal set of fundamental variables that fully describe the observed dynamics. The study was published on July 25 in Nature Computational Science.
    The researchers began by feeding the system raw video footage of phenomena for which they already knew the answer. For example, they fed a video of a swinging double-pendulum known to have exactly four “state variables” — the angle and angular velocity of each of the two arms. After a few hours of analysis, the AI outputted the answer: 4.7.
    “We thought this answer was close enough,” said Hod Lipson, director of the Creative Machines Lab in the Department of Mechanical Engineering, where the work was primarily done. “Especially since all the AI had access to was raw video footage, without any knowledge of physics or geometry. But we wanted to know what the variables actually were, not just their number.”
    The researchers then proceeded to visualize the actual variables that the program identified. Extracting the variables themselves was not easy, since the program cannot describe them in any intuitive way that would be understandable to humans. After some probing, it appeared that two of the variables the program chose loosely corresponded to the angles of the arms, but the other two remain a mystery. “We tried correlating the other variables with anything and everything we could think of: angular and linear velocities, kinetic and potential energy, and various combinations of known quantities,” explained Boyuan Chen PhD ’22, now an assistant professor at Duke University, who led the work. “But nothing seemed to match perfectly.” The team was confident that the AI had found a valid set of four variables, since it was making good predictions, “but we don’t yet understand the mathematical language it is speaking,” he explained.
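    The full method is described in the paper; the basic task it solves, estimating how many independent variables underlie high-dimensional observations, can be illustrated with a standard intrinsic-dimension estimator. The sketch below is a minimal, hypothetical Python example (not the authors' code): it flattens video frames into vectors and applies the Levina-Bickel maximum-likelihood estimator, which, like the system described above, returns a non-integer estimate that would then be read as "about this many state variables."

    ```python
    # Minimal sketch (not the authors' method): estimate the intrinsic
    # dimension of a stack of video frames with the Levina-Bickel MLE.
    import numpy as np

    def intrinsic_dimension(frames, k=10):
        """frames: array of shape (n_frames, height, width)."""
        X = frames.reshape(len(frames), -1).astype(float)       # flatten each frame
        sq = (X ** 2).sum(axis=1)
        d2 = np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0.0)
        np.fill_diagonal(d2, np.inf)                             # exclude self-distances
        knn = np.sqrt(np.sort(d2, axis=1)[:, :k])                # k nearest-neighbour distances
        m = (k - 1) / np.log(knn[:, -1:] / knn[:, :-1]).sum(1)   # local MLE at each frame
        return m.mean()

    # Toy usage: 32x32 "frames" generated from a hidden two-variable state.
    rng = np.random.default_rng(0)
    state = rng.uniform(0, 2 * np.pi, size=(400, 2))             # two hidden variables
    frames = np.sin(state @ rng.normal(size=(2, 32 * 32))).reshape(-1, 32, 32)
    print(intrinsic_dimension(frames))                           # typically prints a value close to 2
    ```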

  • Seeing the light: Researchers develop new AI system using light to learn associatively

    Researchers at Oxford University’s Department of Materials, working in collaboration with colleagues from Exeter and Münster, have developed an on-chip optical processor capable of detecting similarities in datasets up to 1,000 times faster than conventional machine learning algorithms running on electronic processors.
    The new research published in Optica took its inspiration from Nobel Prize laureate Ivan Pavlov’s discovery of classical conditioning. In his experiments, Pavlov found that by providing another stimulus during feeding, such as the sound of a bell or metronome, his dogs began to link the two experiences and would salivate at the sound alone. The repeated associations of two unrelated events paired together could produce a learned response — a conditional reflex.
    Co-first author Dr James Tan You Sian, who did this work as part of his DPhil in the Department of Materials, University of Oxford, said: ‘Pavlovian associative learning is regarded as a basic form of learning that shapes the behaviour of humans and animals — but adoption in AI systems is largely unheard of. Our research on Pavlovian learning in tandem with optical parallel processing demonstrates the exciting potential for a variety of AI tasks.’
    The neural networks used in most AI systems often require a substantial number of data examples during a learning process — training a model to reliably recognise a cat could use up to 10,000 cat/non-cat images — at a computational and processing cost.
    Rather than relying on backpropagation favoured by neural networks to ‘fine-tune’ results, the Associative Monadic Learning Element (AMLE) uses a memory material that learns patterns to associate together similar features in datasets — mimicking the conditional reflex observed by Pavlov in the case of a ‘match’.
    The AMLE inputs are paired with the correct outputs to supervise the learning process, and the memory material can be reset using light signals. In testing, the AMLE could correctly identify cat/non-cat images after being trained with just five pairs of images.
    The considerable performance advantage of the new optical chip over a conventional electronic chip is down to two key differences in design: a unique network architecture that incorporates associative learning as a building block rather than neurons and a neural network, and the use of ‘wavelength-division multiplexing’ to send multiple optical signals on different wavelengths along a single channel to increase computational speed. The chip hardware uses light to send and retrieve data to maximise information density — several signals on different wavelengths are sent simultaneously for parallel processing, which increases the detection speed of recognition tasks. Each additional wavelength increases the computational speed.
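    The release describes the hardware only at a high level. As a rough software analogue of the associative element (a sketch under assumptions, not the optical implementation), one can model the memory material as a stored pattern that is reinforced whenever an input is paired with a positive supervisory signal, and that later ‘fires’ when a new input is sufficiently similar:

    ```python
    # Software analogue of a Pavlovian associative element (illustrative only;
    # the actual AMLE is an optical device with a resettable memory material).
    import numpy as np

    class AssociativeElement:
        def __init__(self, n_features, threshold=0.8):
            self.memory = np.zeros(n_features)    # stands in for the memory material
            self.threshold = threshold

        def associate(self, stimulus, label):
            """Supervised pairing: reinforce the stored pattern for positive examples."""
            if label == 1:
                self.memory += stimulus           # Hebbian-style strengthening

        def reset(self):
            self.memory[:] = 0.0                  # analogous to resetting with a light pulse

        def match(self, stimulus):
            """Produce a 'conditioned response' when the input resembles the memory."""
            m, s = np.linalg.norm(self.memory), np.linalg.norm(stimulus)
            if m == 0 or s == 0:
                return False
            return (stimulus @ self.memory) / (m * s) > self.threshold   # cosine similarity

    # Toy usage with five training pairs, echoing the cat/non-cat example.
    rng = np.random.default_rng(1)
    cat_template = rng.normal(size=64)
    element = AssociativeElement(64)
    for _ in range(5):
        element.associate(cat_template + 0.1 * rng.normal(size=64), label=1)
        element.associate(rng.normal(size=64), label=0)
    print(element.match(cat_template + 0.1 * rng.normal(size=64)))   # True: association learned
    print(element.match(rng.normal(size=64)))                        # False: no association
    ```

    The real device performs this comparison optically, and across many wavelengths at once, which is where the reported speed advantage comes from.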
    Professor Wolfram Pernice, co-author from Münster University explained: ‘The device naturally captures similarities in datasets while doing so in parallel using light to increase the overall computation speed — which can far exceed the capabilities of conventional electronic chips.’
    An associative learning approach could complement neural networks rather than replace them, clarified co-first author Professor Zengguang Cheng, now at Fudan University.
    ‘It is more efficient for problems that don’t need substantial analysis of highly complex features in the datasets,’ said Professor Cheng. ‘Many learning tasks are volume based and don’t have that level of complexity — in these cases, associative learning can complete the tasks more quickly and at a lower computational cost.’
    ‘It is increasingly evident that AI will be at the centre of many innovations we will witness in the coming phase of human history. This work paves the way towards realising fast optical processors that capture data associations for particular types of AI computations, although there are still many exciting challenges ahead,’ said Professor Harish Bhaskaran, who led the study.
    Story Source:
    Materials provided by University of Oxford. Note: Content may be edited for style and length.

  • Improving image sensors for machine vision

    Image sensors measure light intensity, but angle, spectrum, and other aspects of light must also be extracted to significantly advance machine vision.
    In Applied Physics Letters, published by AIP Publishing, researchers at the University of Wisconsin-Madison, Washington University in St. Louis, and OmniVision Technologies highlight the latest nanostructured components integrated on image sensor chips that are most likely to make the biggest impact in multimodal imaging.
    The developments could enable autonomous vehicles to see around corners instead of just a straight line, biomedical imaging to detect abnormalities at different tissue depths, and telescopes to see through interstellar dust.
    “Image sensors will gradually undergo a transition to become the ideal artificial eyes of machines,” co-author Yurui Qu, from the University of Wisconsin-Madison, said. “An evolution leveraging the remarkable achievement of existing imaging sensors is likely to generate more immediate impacts.”
    Image sensors, which convert light into electrical signals, are composed of millions of pixels on a single chip. The challenge is how to combine and miniaturize multifunctional components as part of the sensor.
    In their own work, the researchers detailed a promising approach to detect multiple-band spectra by fabricating an on-chip spectrometer. They deposited photonic crystal filters made up of silicon directly on top of the pixels to create complex interactions between incident light and the sensor.
    The pixels beneath the films record the distribution of light energy, from which light spectral information can be inferred. The device — less than a hundredth of a square inch in size — is programmable to meet various dynamic ranges, resolution levels, and almost any spectral regime from visible to infrared.
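    The release does not give the reconstruction procedure, but filter-based on-chip spectrometers generally recover the spectrum by inverting the calibrated transmission of the filters. The following is a minimal sketch with made-up numbers (the filter matrix, noise level, and regularisation strength are all assumptions, not device data):

    ```python
    # Illustrative spectral reconstruction for a filter-based on-chip spectrometer.
    import numpy as np

    rng = np.random.default_rng(2)
    n_bands, n_filters = 40, 64                              # spectral bins and filters
    A = rng.uniform(0.0, 1.0, size=(n_filters, n_bands))     # assumed calibrated filter transmissions
    true_spectrum = np.exp(-0.5 * ((np.arange(n_bands) - 15) / 3.0) ** 2)   # a single spectral peak

    # What the pixels beneath the filters record: filtered light energy plus noise.
    pixel_readings = A @ true_spectrum + 0.005 * rng.normal(size=n_filters)

    # Recover the spectrum with regularised least squares, then enforce non-negativity.
    lam = 1e-2
    recovered = np.linalg.solve(A.T @ A + lam * np.eye(n_bands), A.T @ pixel_readings)
    recovered = np.clip(recovered, 0, None)
    print(np.round(recovered[12:19], 2))   # values rising to about 1 around bin 15, matching the true peak
    ```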
    The researchers built a component that detects angular information to measure depth and construct 3D shapes at subcellular scales. Their work was inspired by directional hearing sensors found in animals, like geckos, whose heads are too small to determine where sound is coming from in the same way humans and other animals can. Instead, they use coupled eardrums to measure the direction of sound within a size that is orders of magnitude smaller than the corresponding acoustic wavelength.
    Similarly, pairs of silicon nanowires were constructed as resonators to support optical resonance. The optical energy stored in two resonators is sensitive to the incident angle. The wire closest to the light sends the strongest current. By comparing the strongest and weakest currents from both wires, the angle of the incoming light waves can be determined.
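    As a toy model of that readout (an illustration only, not the device physics), one can assume a calibrated response in which the normalised difference between the two photocurrents varies with the incidence angle, and simply invert it:

    ```python
    # Toy model: recover the incidence angle from the photocurrents of a
    # pair of coupled nanowire resonators (assumed linear calibration).
    def currents(angle_deg, sensitivity=0.02):
        """Assumed calibrated response: the contrast (I1 - I2) / (I1 + I2)
        varies linearly with the incidence angle over a small range."""
        contrast = sensitivity * angle_deg
        i1 = 0.5 * (1 + contrast)      # wire closer to the light carries more current
        i2 = 0.5 * (1 - contrast)
        return i1, i2

    def estimate_angle(i1, i2, sensitivity=0.02):
        """Invert the current contrast to get the angle of the incoming light."""
        return ((i1 - i2) / (i1 + i2)) / sensitivity

    i1, i2 = currents(12.0)            # light arriving at 12 degrees
    print(estimate_angle(i1, i2))      # ~12.0
    ```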
    Millions of these nanowires can be placed on a 1-square-millimeter chip. The research could support advances in lensless cameras, augmented reality, and robotic vision.
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • 'IcePic' algorithm outperforms humans in predicting ice crystal formation

    Cambridge scientists have developed an artificially intelligent algorithm capable of beating scientists at predicting how and when different materials form ice crystals.
    The program — IcePic — could help atmospheric scientists improve climate change models in the future. Details are published today in the journal PNAS.
    Water has some unusual properties, such as expanding when it turns into ice. Understanding water and how it freezes around different molecules has wide-reaching implications in a broad range of areas, from weather systems that can affect whole continents to storing biological tissue samples in a hospital.
    The Celsius temperature scale was designed around the premise that 0°C is the transition temperature between water and ice; however, whilst ice always melts at 0°C, water doesn’t necessarily freeze at 0°C. Water can still be in liquid form at -40°C, and it is impurities in water that enable ice to freeze at higher temperatures. One of the biggest aims of the field has been to predict the ability of different materials to promote the formation of ice — known as a material’s “ice nucleation ability.”
    Researchers at the University of Cambridge have developed a ‘deep learning’ tool able to predict the ice nucleation ability of different materials — one that was able to beat scientists in an online ‘quiz’ in which they were asked to predict when ice crystals would form.
    Deep learning is how artificial intelligence (AI) learns to draw insights from raw data. It finds its own patterns in the data, freeing it of the need for human input so that it can process results faster and more precisely. In the case of IcePic, it can infer different ice crystal formation properties around different materials. IcePic has been trained on thousands of images so that it can look at completely new systems and infer accurate predictions from them.
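    The architecture details are given in the PNAS paper. As a purely illustrative sketch of the general approach, a convolutional network that maps an image to a predicted nucleation property might look like the following (hypothetical layer sizes and random stand-in data; not the IcePic model):

    ```python
    # Hypothetical sketch: a small convolutional network mapping an image
    # to a predicted ice-nucleation property (regression).
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
        nn.Linear(64, 1),                         # predicted nucleation ability / temperature
    )

    # One dummy training step on random data, just to show the loop shape.
    images = torch.randn(8, 1, 64, 64)            # stand-ins for images of water near a surface
    targets = torch.randn(8, 1)                   # stand-ins for measured nucleation properties
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss = nn.functional.mse_loss(model(images), targets)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    print(loss.item())
    ```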
    The team set up a quiz in which scientists were asked to predict when ice crystals would form in different conditions shown by 15 different images. These results were then measured against IcePic’s performance. When put to the test, IcePic was far more accurate in determining a material’s ice nucleation ability than over 50 researchers from across the globe. Moreover, it helped identify where humans were going wrong.
    Michael Davies, a PhD student in the ICE lab at the Yusuf Hamied Department of Chemistry, Cambridge, and University College London, London, first author of the study, said: “It was fascinating to learn that the images of water we showed IcePic contain enough information to actually predict ice nucleation.
    “Despite us — that is, human scientists — having a 75-year head start in terms of the science, IcePic was still able to do something we couldn’t.”
    Determining the formation of ice has become especially relevant in climate change research.
    Water continuously moves within the Earth and its atmosphere, condensing to form clouds, and precipitating in the form of rain and snow. Different foreign particles affect how ice forms in these clouds, for example, smoke particles from pollution compared to smoke particles from a volcano. Understanding how different conditions affect our cloud systems is essential for more accurate weather predictions.
    “The nucleation of ice is really important for the atmospheric science community and climate modelling,” said Davies. “At the moment there is no viable way to predict ice nucleation other than direct experiments or expensive simulations. IcePic should open up a lot more applications for discovery.”
    Story Source:
    Materials provided by University of Cambridge. Note: Content may be edited for style and length.

  • A new leap in understanding nickel oxide superconductors

    A new study shows that nickel oxide superconductors, which conduct electricity with no loss at higher temperatures than conventional superconductors do, contain a type of quantum matter called charge density waves, or CDWs, that can accompany superconductivity.
    The presence of CDWs shows that these recently discovered materials, also known as nickelates, are capable of forming correlated states — “electron soups” that can host a variety of quantum phases, including superconductivity, researchers from the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University reported in Nature Physics today.
    “Unlike in any other superconductor we know about, CDWs appear even before we dope the material by replacing some atoms with others to change the number of electrons that are free to move around,” said Wei-Sheng Lee, a SLAC lead scientist and investigator with the Stanford Institute for Materials and Energy Science (SIMES) who led the study.
    “This makes the nickelates a very interesting new system — a new playground for studying unconventional superconductors.”
    Nickelates and cuprates
    In the 35 years since the first unconventional “high-temperature” superconductors were discovered, researchers have been racing to find one that could carry electricity with no loss at close to room temperature. This would be a revolutionary development, allowing things like perfectly efficient power lines, maglev trains and a host of other futuristic, energy-saving technologies.

  • Using AI to train teams of robots to work together

    When communication lines are open, individual agents such as robots or drones can work together to collaborate and complete a task. But what if they aren’t equipped with the right hardware or the signals are blocked, making communication impossible? University of Illinois Urbana-Champaign researchers started with this more difficult challenge. They developed a method to train multiple agents to work together using multi-agent reinforcement learning, a type of artificial intelligence.
    “It’s easier when agents can talk to each other,” said Huy Tran, an aerospace engineer at Illinois. “But we wanted to do this in a way that’s decentralized, meaning that they don’t talk to each other. We also focused on situations where it’s not obvious what the different roles or jobs for the agents should be.”
    Tran said this scenario is much more complex and a harder problem because it’s not clear what one agent should do versus another agent.
    “The interesting question is how do we learn to accomplish a task together over time,” Tran said.
    Tran and his collaborators used machine learning to solve this problem by creating a utility function that tells the agent when it is doing something useful or good for the team.
    “With team goals, it’s hard to know who contributed to the win,” he said. “We developed a machine learning technique that allows us to identify when an individual agent contributes to the global team objective. If you look at it in terms of sports, one soccer player may score, but we also want to know about actions by other teammates that led to the goal, like assists. It’s hard to understand these delayed effects.”
    The algorithms the researchers developed can also identify when an agent or robot is doing something that doesn’t contribute to the goal. “It’s not so much the robot chose to do something wrong, just something that isn’t useful to the end goal.”
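    The release does not spell out the utility function. One standard way to credit individual agents toward a shared objective, sketched below as an assumption rather than the authors' exact method, is a ‘difference reward’: compare the team score with a counterfactual in which the agent’s action is replaced by a default, so an agent whose action did not matter receives no credit.

    ```python
    # Minimal sketch of credit assignment via difference rewards (a standard
    # technique; not necessarily the utility function used in this study).
    from typing import Callable, Dict

    def difference_rewards(joint_actions: Dict[str, str],
                           team_score: Callable[[Dict[str, str]], float],
                           default_action: str = "noop") -> Dict[str, float]:
        """Score each agent by how much the team outcome drops when its action
        is replaced with a default (counterfactual) action."""
        g = team_score(joint_actions)
        credits = {}
        for agent in joint_actions:
            counterfactual = dict(joint_actions, **{agent: default_action})
            credits[agent] = g - team_score(counterfactual)
        return credits

    # Toy team task: a "goal" is scored only if someone passes and someone shoots.
    def team_score(actions: Dict[str, str]) -> float:
        return 1.0 if ("pass" in actions.values() and "shoot" in actions.values()) else 0.0

    print(difference_rewards({"a1": "pass", "a2": "shoot", "a3": "noop"}, team_score))
    # expected: {'a1': 1.0, 'a2': 1.0, 'a3': 0.0}; a3's action did not contribute to the goal
    ```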
    They tested their algorithms using simulated games like Capture the Flag and StarCraft, a popular computer game.
    You can watch a video of Huy Tran demonstrating related research using deep reinforcement learning to help robots evaluate their next move in Capture the Flag.
    “StarCraft can be a little bit more unpredictable — we were excited to see our method work well in this environment too.”
    Tran said this type of algorithm is applicable to many real-life situations, such as military surveillance, robots working together in a warehouse, traffic signal control, autonomous vehicles coordinating deliveries, or controlling an electric power grid.
    Tran said Seung Hyun Kim did most of the theory behind the idea when he was an undergraduate student studying mechanical engineering, with Neale Van Stralen, an aerospace student, helping with the implementation. Tran and Girish Chowdhary advised both students. The work was recently presented to the AI community at the Autonomous Agents and Multi-Agent Systems peer-reviewed conference.
    Story Source:
    Materials provided by University of Illinois Grainger College of Engineering. Original written by Debra Levey Larson. Note: Content may be edited for style and length.

  • The structure of the smallest semiconductor was elucidated

    A semiconductor is a material whose conductivity lies somewhere between that of a conductor and an insulator. This property allows semiconductors to serve as the base material for modern electronics and transistors. It is no exaggeration to say that the technological progress in the latter part of the 20th century was largely spearheaded by the semiconductor industry.
    Technological advancements in semiconductor nanocrystals are still ongoing today. For example, quantum dots and wires made from semiconducting materials are of great interest for displays, photocatalysis, and other electronic applications. However, numerous aspects of colloidal nanocrystals remain to be understood at the fundamental level. An important one among them is the elucidation of the molecular-level mechanisms of the formation and growth of the nanocrystals.
    These semiconducting nanocrystals are grown starting from tiny individual precursors made of a small number of atoms. These precursors are called “nanoclusters.” Isolation and molecular structure determination of such nanoclusters (or simply clusters) have been the subject of immense interest in the past several decades. The structural details of clusters, typical nuclei of the nanocrystals, are anticipated to provide critical insights into the evolution of the properties of the nanocrystals.
    Different ‘seed’ nanoclusters result in the growth of different nanocrystals. As such, it is important to have a homogeneous mixture of identical nanoclusters if one wishes to grow identical nanocrystals. However, the synthesis of nanoclusters often results in the production of clusters with all sorts of different sizes and configurations, and purifying the mixture to obtain only the desirable particles is very challenging.
    Therefore, producing nanoclusters with homogeneous sizes is important. “Magic-sized nanoclusters” (MSCs), which form preferentially over other sizes in a uniform manner, range in size from 0.5 to 3.0 nm. Among these, MSCs with a non-stoichiometric (non-1:1) cadmium-to-chalcogenide ratio are the most studied. A new class of MSCs with a 1:1 stoichiometric metal-to-chalcogenide ratio has come under the spotlight owing to predictions of intriguing structures. For example, Cd13Se13, Cd33Se33 and Cd34Se34, which consist of equal numbers of cadmium and selenium atoms, have been synthesized and characterized.
    Recently, researchers at the Center for Nanoparticle Research (led by Professor HYEON Taeghwan) within the Institute for Basic Science (IBS), in collaboration with teams at Xiamen University (led by Professor Nanfeng ZHENG) and the University of Toronto (led by Professor Oleksandr VOZNYY), reported the colloidal synthesis and atomic-level structure of a stoichiometric semiconductor cadmium selenide (CdSe) cluster. This is the smallest such nanocluster synthesized to date.

  • Next generation atomic clocks are a step closer to real world applications

    Quantum clocks are shrinking, thanks to new technologies developed at the University of Birmingham-led UK Quantum Technology Hub Sensors and Timing.
    Working in collaboration with and partly funded by the UK’s Defence Science and Technology Laboratory (Dstl), a team of quantum physicists have devised new approaches that not only reduce the size of their clock, but also make it robust enough to be transported out of the laboratory and employed in the ‘real world’.
    Quantum — or atomic — clocks are widely seen as essential for increasingly precise approaches to areas such as online communications across the world, navigation systems, or global trading in stocks, where fractions of seconds could make a huge economic difference. Atomic clocks with optical clock frequencies can be 10,000 times more accurate than their microwave counterparts, opening up the possibility of redefining the second, the standard (SI) unit of time.
    Even more advanced optical clocks could one day make a significant difference both in everyday life and in fundamental science. By allowing longer periods between needing to resynchronise than other kinds of clock, they offer increased resilience for national timing infrastructure and unlock future positioning and navigation applications for autonomous vehicles. The unparalleled accuracy of these clocks can also help us see beyond standard models of physics and understand some of the most mysterious aspects of the universe, including dark matter and dark energy. Such clocks will also help to address fundamental physics questions, such as whether the fundamental constants really are ‘constants’ or whether they vary with time.
    Lead researcher, Dr Yogeshwar Kale, said: “The stability and precision of optical clocks make them crucial to many future information networks and communications. Once we have a system that is ready for use outside the laboratory, we can use them, for example, in on-ground navigation networks where all such clocks are connected via optical fibre and start talking to each other. Such networks will reduce our dependence on GPS systems, which can sometimes fail.
    “These transportable optical clocks not only will help to improve geodetic measurements — the fundamental properties of the Earth’s shape and gravity variations — but will also serve as precursors to monitor and identify geodynamic signals like earthquakes and volcanoes at early stages.”
    Although such quantum clocks are advancing rapidly, key barriers to deploying them are their size — current models come in a van or a car trailer and are about 1,500 litres — and their sensitivity to environmental conditions, which limits their transport between different places.