More stories

  • in

    Anti-butterfly effect enables new benchmarking of quantum-computer performance

    Research drawing on the quantum “anti-butterfly effect” solves a longstanding experimental problem in physics and establishes a method for benchmarking the performance of quantum computers.
    “Using the simple, robust protocol we developed, we can determine the degree to which quantum computers can effectively process information, and it applies to information loss in other complex quantum systems, too,” said Bin Yan, a quantum theorist at Los Alamos National Laboratory.
    Yan is corresponding author of the paper on benchmarking information scrambling published today in Physical Review Letters. “Our protocol quantifies information scrambling in a quantum system and unambiguously distinguishes it from fake positive signals in the noisy background caused by quantum decoherence,” he said.
    Noise in the form of decoherence erases all the quantum information in a complex system such as a quantum computer as it couples with the surrounding environment. Information scrambling through quantum chaos, on the other hand, spreads information across the system, protecting it and allowing it to be retrieved.
    Coherence is a quantum state that enables quantum computing, and decoherence refers to the loss of that state as information leaks to the surrounding environment.
    “Our method, which draws on the quantum anti-butterfly effect we discovered two years ago, evolves a system forward and backward through time in a single loop, so we can apply it to any system with time-reversing the dynamics, including quantum computers and quantum simulators using cold atoms,” Yan said.
    The Los Alamos team demonstrated the protocol with simulations on IBM cloud-based quantum computers.
    The inability to distinguish decoherence from information scrambling has stymied experimental research into the phenomenon. First studied in black-hole physics, information scrambling has proved relevant across a wide range of research areas, including quantum chaos in many-body systems, phase transitions, quantum machine learning and quantum computing. Experimental platforms for studying information scrambling include superconductors, trapped ions and cloud-based quantum computers.
    Practical application of the quantum anti-butterfly effect
    Yan and co-author Nikolai Sinitsyn published a paper in 2020 proving that evolving quantum processes backwards on a quantum computer to damage information in the simulated past causes little change when returned to the present. In contrast, a classical-physics system smears the information irrecoverably during the back-and-forth time loop.
    Building on this discovery, Yan, Sinitsyn and co-author Joseph Harris, a University of Edinburgh graduate student who worked on the current paper as a participant in the Los Alamos Quantum Computing Summer School, developed the protocol. It prepares a quantum system and subsystem, evolves the full system forward in time, causes a change in a different subsystem, then evolves the system backward for the same amount of time. Measuring the overlap of information between the two subsystems shows how much information has been preserved by scrambling and how much lost to decoherence.
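The prepare, evolve forward, perturb, evolve backward loop described above can be sketched as a toy numerical echo experiment. The following is an illustrative simulation with a random Hamiltonian standing in for chaotic dynamics, not the Los Alamos team's actual protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hamiltonian(n_qubits):
    """Random Hermitian matrix as a stand-in for chaotic dynamics."""
    d = 2 ** n_qubits
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (a + a.conj().T) / 2

def evolve(state, h, t):
    """Apply exp(-iHt) to the state via the eigendecomposition of H."""
    vals, vecs = np.linalg.eigh(h)
    return vecs @ (np.exp(-1j * vals * t) * (vecs.conj().T @ state))

n = 3
d = 2 ** n
h = random_hamiltonian(n)

# Pauli-X "butterfly" perturbation acting on the last qubit only.
x = np.array([[0, 1], [1, 0]], dtype=complex)
w = np.kron(np.eye(d // 2), x)

psi0 = np.zeros(d, dtype=complex)
psi0[0] = 1.0

# Forward evolution, local perturbation, backward evolution for equal time.
t = 2.0
psi = evolve(psi0, h, t)
psi = w @ psi
psi = evolve(psi, h, -t)

# Overlap with the initial state: how far the echo falls below 1 reflects
# how thoroughly the dynamics scrambled the local perturbation.
echo = abs(np.vdot(psi0, psi)) ** 2
print(f"echo fidelity: {echo:.3f}")
```

In the real protocol the same loop is run on hardware, where decoherence also degrades the echo; the point of the published method is to separate those two contributions.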
    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory. Note: Content may be edited for style and length.

  • in

    Engineering roboticists discover alternative physics

    A precursor step to understanding physics is identifying relevant variables. Columbia Engineers developed an AI program to tackle a longstanding problem: whether it is possible to identify state variables from only high-dimensional observational data. Using video recordings of a variety of physical dynamical systems, the algorithm discovered the intrinsic dimension of the observed dynamics and identified candidate sets of state variables — without prior knowledge of the underlying physics.
    Energy, mass, velocity. These three variables make up Einstein’s iconic equation E = mc². But how did Einstein know about these concepts in the first place? A precursor step to understanding physics is identifying relevant variables. Without the concepts of energy, mass, and velocity, not even Einstein could have discovered relativity. But can such variables be discovered automatically? Doing so could greatly accelerate scientific discovery.
    This is the question that researchers at Columbia Engineering posed to a new AI program. The program was designed to observe physical phenomena through a video camera, then try to search for the minimal set of fundamental variables that fully describe the observed dynamics. The study was published on July 25 in Nature Computational Science.
    The researchers began by feeding the system raw video footage of phenomena for which they already knew the answer. For example, they fed a video of a swinging double-pendulum known to have exactly four “state variables” — the angle and angular velocity of each of the two arms. After a few hours of analysis, the AI outputted the answer: 4.7.
    “We thought this answer was close enough,” said Hod Lipson, director of the Creative Machines Lab in the Department of Mechanical Engineering, where the work was primarily done. “Especially since all the AI had access to was raw video footage, without any knowledge of physics or geometry. But we wanted to know what the variables actually were, not just their number.”
    The researchers then proceeded to visualize the actual variables that the program identified. Extracting the variables themselves was not easy, since the program cannot describe them in any intuitive way that would be understandable to humans. After some probing, it appeared that two of the variables the program chose loosely corresponded to the angles of the arms, but the other two remain a mystery. “We tried correlating the other variables with anything and everything we could think of: angular and linear velocities, kinetic and potential energy, and various combinations of known quantities,” explained Boyuan Chen PhD ’22, now an assistant professor at Duke University, who led the work. “But nothing seemed to match perfectly.” The team was confident that the AI had found a valid set of four variables, since it was making good predictions, “but we don’t yet understand the mathematical language it is speaking,” he explained.
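The first step such a program performs, estimating the intrinsic dimension of high-dimensional observations, can be illustrated with a standard two-nearest-neighbor estimator. This is a generic stand-in for that step, not the Columbia team's method, and uses synthetic data in place of video frames:

```python
import numpy as np

def two_nn_dimension(points):
    """Two-nearest-neighbor intrinsic dimension estimate: for each point,
    the ratio of second- to first-nearest-neighbor distances carries
    information about the local dimension of the data manifold."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    sorted_d = np.sort(dists, axis=1)
    mu = sorted_d[:, 1] / sorted_d[:, 0]
    return len(points) / np.sum(np.log(mu))

rng = np.random.default_rng(1)

# A 2-variable dynamical state (think: angle and angular velocity) embedded
# in a 10-dimensional "pixel" space by a random linear map, as a crude
# stand-in for frames of a video.
state = rng.uniform(-1, 1, size=(500, 2))
embedding = rng.normal(size=(2, 10))
observations = state @ embedding

est = two_nn_dimension(observations)
print(f"estimated intrinsic dimension: {est:.2f}")
```

Despite the 10-dimensional observations, the estimate comes out near 2, the true number of state variables, which mirrors the double-pendulum result (4 true variables, 4.7 estimated) described above.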

  • in

    Seeing the light: Researchers develop new AI system using light to learn associatively

    Researchers at Oxford University’s Department of Materials, working in collaboration with colleagues from Exeter and Münster, have developed an on-chip optical processor capable of detecting similarities in datasets up to 1,000 times faster than conventional machine learning algorithms running on electronic processors.
    The new research published in Optica took its inspiration from Nobel Prize laureate Ivan Pavlov’s discovery of classical conditioning. In his experiments, Pavlov found that by providing another stimulus during feeding, such as the sound of a bell or metronome, his dogs began to link the two experiences and would salivate at the sound alone. The repeated associations of two unrelated events paired together could produce a learned response — a conditional reflex.
    Co-first author Dr James Tan You Sian, who did this work as part of his DPhil in the Department of Materials, University of Oxford said: ‘Pavlovian associative learning is regarded as a basic form of learning that shapes the behaviour of humans and animals — but adoption in AI systems is largely unheard of. Our research on Pavlovian learning in tandem with optical parallel processing demonstrates the exciting potential for a variety of AI tasks.’
    The neural networks used in most AI systems often require a substantial number of data examples during a learning process — training a model to reliably recognise a cat could use up to 10,000 cat/non-cat images — at a computational and processing cost.
    Rather than relying on backpropagation favoured by neural networks to ‘fine-tune’ results, the Associative Monadic Learning Element (AMLE) uses a memory material that learns patterns to associate together similar features in datasets — mimicking the conditional reflex observed by Pavlov in the case of a ‘match’.
    The AMLE inputs are paired with the correct outputs to supervise the learning process, and the memory material can be reset using light signals. In testing, the AMLE could correctly identify cat/non-cat images after being trained with just five pairs of images.
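The train-with-pairs, recall-on-match behaviour described above can be mimicked in software. The following toy class is only a loose analogy of the optical AMLE, with hypothetical feature vectors standing in for images:

```python
import numpy as np

class AssociativeLearner:
    """Toy software analogy of an associative learning element: inputs are
    paired with correct outputs during supervised training, and a new input
    triggers the output associated with its closest stored pattern."""

    def __init__(self):
        self.patterns = []
        self.labels = []

    def associate(self, pattern, label):
        """Store one supervised input/output pair."""
        self.patterns.append(np.asarray(pattern, dtype=float))
        self.labels.append(label)

    def reset(self):
        # The optical device resets its memory material with light signals;
        # here we simply clear the stored associations.
        self.patterns.clear()
        self.labels.clear()

    def recall(self, pattern):
        """Return the label of the most similar stored pattern (cosine)."""
        pattern = np.asarray(pattern, dtype=float)
        sims = [float(p @ pattern / (np.linalg.norm(p) * np.linalg.norm(pattern)))
                for p in self.patterns]
        return self.labels[int(np.argmax(sims))]

learner = AssociativeLearner()
# Five training pairs, echoing the five-image cat/non-cat test above
# (tiny made-up feature vectors stand in for images).
learner.associate([1.0, 0.9, 0.1], "cat")
learner.associate([0.9, 1.0, 0.0], "cat")
learner.associate([0.1, 0.0, 1.0], "non-cat")
learner.associate([0.0, 0.2, 0.9], "non-cat")
learner.associate([0.8, 0.8, 0.2], "cat")

print(learner.recall([0.95, 0.85, 0.05]))  # resembles the cat patterns
```

The contrast with backpropagation is the point: nothing is iteratively fine-tuned here, associations are simply stored and matched, which is what makes the few-example regime workable.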
    The considerable performance gains of the new optical chip over a conventional electronic chip come down to two key differences in design: a unique network architecture that incorporates associative learning as a building block, rather than neurons and a neural network; and the use of ‘wavelength-division multiplexing’ to send multiple optical signals on different wavelengths along a single channel, increasing computational speed. The chip hardware uses light to send and retrieve data to maximise information density: several signals on different wavelengths are sent simultaneously for parallel processing, which increases the detection speed of recognition tasks, with each additional wavelength raising the computational speed further.
    Professor Wolfram Pernice, co-author from Münster University explained: ‘The device naturally captures similarities in datasets while doing so in parallel using light to increase the overall computation speed — which can far exceed the capabilities of conventional electronic chips.’
    An associative learning approach could complement neural networks rather than replace them, clarified co-first author Professor Zengguang Cheng, now at Fudan University.
    ‘It is more efficient for problems that don’t need substantial analysis of highly complex features in the datasets’ said Professor Cheng. ‘Many learning tasks are volume based and don’t have that level of complexity — in these cases, associative learning can complete the tasks more quickly and at a lower computational cost.’
    ‘It is increasingly evident that AI will be at the centre of many innovations we will witness in the coming phase of human history. This work paves the way towards realising fast optical processors that capture data associations for particular types of AI computations, although there are still many exciting challenges ahead,’ said Professor Harish Bhaskaran, who led the study.
    Story Source:
    Materials provided by University of Oxford. Note: Content may be edited for style and length.

  • in

    Improving image sensors for machine vision

    Image sensors measure light intensity, but angle, spectrum, and other aspects of light must also be extracted to significantly advance machine vision.
    In Applied Physics Letters, published by AIP Publishing, researchers at the University of Wisconsin-Madison, Washington University in St. Louis, and OmniVision Technologies highlight the latest nanostructured components integrated on image sensor chips that are most likely to make the biggest impact in multimodal imaging.
    The developments could enable autonomous vehicles to see around corners instead of just a straight line, biomedical imaging to detect abnormalities at different tissue depths, and telescopes to see through interstellar dust.
    “Image sensors will gradually undergo a transition to become the ideal artificial eyes of machines,” co-author Yurui Qu, from the University of Wisconsin-Madison, said. “An evolution leveraging the remarkable achievement of existing imaging sensors is likely to generate more immediate impacts.”
    Image sensors, which convert light into electrical signals, are composed of millions of pixels on a single chip. The challenge is how to combine and miniaturize multifunctional components as part of the sensor.
    In their own work, the researchers detailed a promising approach to detect multiple-band spectra by fabricating an on-chip spectrometer. They deposited photonic crystal filters made of silicon directly on top of the pixels to create complex interactions between incident light and the sensor.
    The pixels beneath the films record the distribution of light energy, from which light spectral information can be inferred. The device — less than a hundredth of a square inch in size — is programmable to meet various dynamic ranges, resolution levels, and almost any spectral regime from visible to infrared.
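Inferring the spectrum from the pixel readings beneath the filters is, at its core, a linear inverse problem: each filter weights the spectral bands differently, and the spectrum is recovered by inverting those weightings. A minimal sketch with made-up transmission curves (real devices add noise handling, regularization, and non-negativity constraints):

```python
import numpy as np

rng = np.random.default_rng(2)

# Each photonic-crystal filter has a distinct transmission curve over
# n_bands spectral bands; the pixel under it records the filtered energy.
n_filters, n_bands = 16, 8
transmission = rng.uniform(0.0, 1.0, size=(n_filters, n_bands))

true_spectrum = np.array([0.1, 0.3, 0.9, 0.5, 0.2, 0.0, 0.4, 0.7])
pixel_readings = transmission @ true_spectrum  # what the sensor measures

# Recover the spectrum from the readings via least squares.
recovered, *_ = np.linalg.lstsq(transmission, pixel_readings, rcond=None)
print(np.round(recovered, 2))
```

Having more filters than spectral bands (16 vs. 8 here) makes the system overdetermined, which is what allows robust reconstruction across different dynamic ranges and resolution levels.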
    The researchers built a component that detects angular information to measure depth and construct 3D shapes at subcellular scales. Their work was inspired by directional hearing sensors found in animals, like geckos, whose heads are too small to determine where sound is coming from in the same way humans and other animals can. Instead, they use coupled eardrums to measure the direction of sound within a size that is orders of magnitude smaller than the corresponding acoustic wavelength.
    Similarly, pairs of silicon nanowires were constructed as resonators to support optical resonance. The optical energy stored in two resonators is sensitive to the incident angle. The wire closest to the light sends the strongest current. By comparing the strongest and weakest currents from both wires, the angle of the incoming light waves can be determined.
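The compare-two-currents idea can be sketched with a hypothetical forward model in which the normalized current imbalance varies as the sine of the incidence angle; the real coupled-nanowire response will differ, so treat this purely as an illustration of the inversion:

```python
import numpy as np

def incidence_angle(i_left, i_right, k=1.0):
    """Infer the incidence angle (degrees) from the photocurrents of a
    nanowire pair, assuming (hypothetically) that the normalized current
    imbalance varies as k * sin(angle)."""
    imbalance = (i_left - i_right) / (i_left + i_right)
    return np.degrees(np.arcsin(np.clip(imbalance / k, -1.0, 1.0)))

# Forward model for a known angle, then invert it from the two currents.
theta = 25.0
imbalance = np.sin(np.radians(theta))
i_left, i_right = 1.0 + imbalance, 1.0 - imbalance
print(f"recovered angle: {incidence_angle(i_left, i_right):.1f} degrees")
```

Using the ratio of the difference to the sum is the key trick: it cancels the overall light intensity, so only the direction of the incoming wave remains.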
    Millions of these nanowires can be placed on a 1-square-millimeter chip. The research could support advances in lensless cameras, augmented reality, and robotic vision.
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • in

    How to make jet fuel from sunlight, air and water vapor

    Jet fuel can now be siphoned from the air.

    Or at least that’s the case in Móstoles, Spain, where researchers demonstrated that an outdoor system could produce kerosene, used as jet fuel, with three simple ingredients: sunlight, carbon dioxide and water vapor. Solar kerosene could replace petroleum-derived jet fuel in aviation and help stabilize greenhouse gas emissions, the researchers report in the July 20 Joule.

    Burning solar-derived kerosene releases carbon dioxide, but only as much as is used to make it, says Aldo Steinfeld, an engineer at ETH Zurich. “That makes the fuel carbon neutral, especially if we use carbon dioxide captured directly from the air.”

    Kerosene is the fuel of choice for aviation, a sector responsible for around 5 percent of human-caused greenhouse gas emissions. Finding sustainable alternatives has proven difficult, especially for long-distance aviation, because kerosene is packed with so much energy, says chemical physicist Ellen Stechel of Arizona State University in Tempe, who was not involved in the study.

    In 2015, Steinfeld and his colleagues synthesized solar kerosene in the laboratory, but no one had produced the fuel entirely in a single system in the field. So Steinfeld and his team positioned 169 sun-tracking mirrors to reflect and focus radiation equivalent to about 2,500 suns into a solar reactor atop a 15-meter-tall tower. The reactor has a window to let the light in, ports that supply carbon dioxide and water vapor as well as a material used to catalyze chemical reactions called porous ceria.

    Within the solar reactor, porous ceria (shown) gets heated by sunlight and reacts with carbon dioxide and water vapor to produce syngas, a mixture of hydrogen gas and carbon monoxide. (Image: ETH Zurich)

    When heated with solar radiation, the ceria reacts with carbon dioxide and water vapor in the reactor to produce syngas — a mixture of hydrogen gas and carbon monoxide. The syngas is then piped to the tower’s base where a machine converts it into kerosene and other hydrocarbons.

    Over nine days of operation, the researchers found that the tower converted about 4 percent of the solar energy it captured into 5,191 liters of syngas, which was used to synthesize both kerosene and diesel. This proof-of-principle setup produced about a liter of kerosene a day, Steinfeld says.
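A quick back-of-the-envelope from the figures reported above and below shows the scale-up still required before a tower like this could serve aviation:

```python
# Back-of-the-envelope scale-up from the reported figures.
kerosene_per_day_l = 1.0        # demonstration tower output, liters/day
boeing_747_takeoff_l = 19_000   # fuel burned in takeoff and ascent, liters

days_for_one_takeoff = boeing_747_takeoff_l / kerosene_per_day_l
print(f"{days_for_one_takeoff:.0f} days of tower output per 747 takeoff")
# → 19000 days of tower output per 747 takeoff
```

Hence the emphasis on raising the roughly 4 percent efficiency toward the 20-plus percent the researchers project.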

    “It’s a major milestone,” Stechel says, though the efficiency needs to be improved for the technology to be useful to industry. For context, a Boeing 747 passenger jet burns around 19,000 liters of fuel during takeoff and the ascent to cruising altitude. Recovering heat unused by the system and improving the ceria’s heat absorption could boost the tower’s efficiency to more than 20 percent, making it economically practical, the researchers say.

  • in

    'IcePic' algorithm outperforms humans in predicting ice crystal formation

    Cambridge scientists have developed an artificially intelligent algorithm capable of beating scientists at predicting how and when different materials form ice crystals.
    The program — IcePic — could help atmospheric scientists improve climate change models in the future. Details are published today in the journal PNAS.
    Water has some unusual properties, such as expanding when it turns into ice. Understanding water and how it freezes around different molecules has wide-reaching implications in a broad range of areas, from weather systems that can affect whole continents to storing biological tissue samples in a hospital.
    The Celsius temperature scale was designed around the premise that 0°C is the transition temperature between water and ice; however, whilst ice always melts at 0°C, water doesn’t necessarily freeze at 0°C. Water can still be in liquid form at -40°C, and it is impurities in water that enable ice to form at higher temperatures. One of the biggest aims of the field has been to predict the ability of different materials to promote the formation of ice — known as a material’s “ice nucleation ability.”
    Researchers at the University of Cambridge have developed a ‘deep learning’ tool able to predict the ice nucleation ability of different materials, one that beat scientists in an online ‘quiz’ in which they were asked to predict when ice crystals would form.
    Deep learning is how artificial intelligence (AI) learns to draw insights from raw data. It finds its own patterns in the data, freeing it of the need for human input so that it can process results faster and more precisely. In the case of IcePic, it can infer different ice crystal formation properties around different materials. IcePic has been trained on thousands of images so that it can look at completely new systems and infer accurate predictions from them.
    The team set up a quiz in which scientists were asked to predict when ice crystals would form in different conditions shown by 15 different images. These results were then measured against IcePic’s performance. When put to the test, IcePic was far more accurate in determining a material’s ice nucleation ability than over 50 researchers from across the globe. Moreover, it helped identify where humans were going wrong.
    Michael Davies, a PhD student in the ICE lab at the Yusuf Hamied Department of Chemistry, Cambridge, and University College London, London, first author of the study, said: “It was fascinating to learn that the images of water we showed IcePic contain enough information to actually predict ice nucleation.
    “Despite us — that is, human scientists — having a 75-year head start in terms of the science, IcePic was still able to do something we couldn’t.”
    Determining the formation of ice has become especially relevant in climate change research.
    Water continuously moves within the Earth and its atmosphere, condensing to form clouds, and precipitating in the form of rain and snow. Different foreign particles affect how ice forms in these clouds, for example, smoke particles from pollution compared to smoke particles from a volcano. Understanding how different conditions affect our cloud systems is essential for more accurate weather predictions.
    “The nucleation of ice is really important for the atmospheric science community and climate modelling,” said Davies. “At the moment there is no viable way to predict ice nucleation other than direct experiments or expensive simulations. IcePic should open up a lot more applications for discovery.”
    Story Source:
    Materials provided by University of Cambridge. Note: Content may be edited for style and length.

  • in

    A new leap in understanding nickel oxide superconductors

    A new study shows that nickel oxide superconductors, which conduct electricity with no loss at higher temperatures than conventional superconductors do, contain a type of quantum matter called charge density waves, or CDWs, that can accompany superconductivity.
    The presence of CDWs shows that these recently discovered materials, also known as nickelates, are capable of forming correlated states — “electron soups” that can host a variety of quantum phases, including superconductivity, researchers from the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University reported in Nature Physics today.
    “Unlike in any other superconductor we know about, CDWs appear even before we dope the material by replacing some atoms with others to change the number of electrons that are free to move around,” said Wei-Sheng Lee, a SLAC lead scientist and investigator with the Stanford Institute for Materials and Energy Science (SIMES) who led the study.
    “This makes the nickelates a very interesting new system — a new playground for studying unconventional superconductors.”
    Nickelates and cuprates
    In the 35 years since the first unconventional “high-temperature” superconductors were discovered, researchers have been racing to find one that could carry electricity with no loss at close to room temperature. This would be a revolutionary development, allowing things like perfectly efficient power lines, maglev trains and a host of other futuristic, energy-saving technologies.

  • in

    Using AI to train teams of robots to work together

    When communication lines are open, individual agents such as robots or drones can work together to collaborate and complete a task. But what if they aren’t equipped with the right hardware or the signals are blocked, making communication impossible? University of Illinois Urbana-Champaign researchers started with this more difficult challenge. They developed a method to train multiple agents to work together using multi-agent reinforcement learning, a type of artificial intelligence.
    “It’s easier when agents can talk to each other,” said Huy Tran, an aerospace engineer at Illinois. “But we wanted to do this in a way that’s decentralized, meaning that they don’t talk to each other. We also focused on situations where it’s not obvious what the different roles or jobs for the agents should be.”
    Tran said this scenario is much more complex and a harder problem because it’s not clear what one agent should do versus another agent.
    “The interesting question is how do we learn to accomplish a task together over time,” Tran said.
    Tran and his collaborators used machine learning to solve this problem by creating a utility function that tells the agent when it is doing something useful or good for the team.
    “With team goals, it’s hard to know who contributed to the win,” he said. “We developed a machine learning technique that allows us to identify when an individual agent contributes to the global team objective. If you look at it in terms of sports, one soccer player may score, but we also want to know about actions by other teammates that led to the goal, like assists. It’s hard to understand these delayed effects.”
    The algorithms the researchers developed can also identify when an agent or robot is doing something that doesn’t contribute to the goal. “It’s not so much the robot chose to do something wrong, just something that isn’t useful to the end goal.”
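One classic way to build the kind of utility function described above is a "difference reward": score the team with and without each agent's contribution, so duplicated effort earns no credit. The paper's actual technique may differ; this is a generic toy sketch:

```python
def global_reward(actions):
    """Toy team objective: the team scores one point per distinct target
    covered (a removed agent, marked None, covers nothing)."""
    return len({a for a in actions if a is not None})

def difference_rewards(actions):
    """Credit each agent by comparing the team score with and without its
    action: D_i = G(joint actions) - G(joint actions with agent i removed)."""
    g = global_reward(actions)
    credits = []
    for i in range(len(actions)):
        counterfactual = list(actions)
        counterfactual[i] = None  # remove agent i's contribution
        credits.append(g - global_reward(counterfactual))
    return credits

# Three agents pick targets; agents 0 and 1 redundantly cover the same one.
actions = ["A", "A", "B"]
print(difference_rewards(actions))  # → [0, 0, 1]
```

The output captures the soccer analogy: the agent covering target B gets full credit, while the two agents duplicating each other on target A each get zero, since removing either one leaves the team score unchanged.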
    They tested their algorithms using simulated games like Capture the Flag and StarCraft, a popular computer game.
    You can watch a video of Huy Tran demonstrating related research using deep reinforcement learning to help robots evaluate their next move in Capture the Flag.
    “StarCraft can be a little bit more unpredictable — we were excited to see our method work well in this environment too.”
    Tran said this type of algorithm is applicable to many real-life situations, such as military surveillance, robots working together in a warehouse, traffic signal control, autonomous vehicles coordinating deliveries, or controlling an electric power grid.
    Tran said Seung Hyun Kim did most of the theory behind the idea when he was an undergraduate student studying mechanical engineering, with Neale Van Stralen, an aerospace student, helping with the implementation. Tran and Girish Chowdhary advised both students. The work was recently presented to the AI community at the Autonomous Agents and Multi-Agent Systems peer-reviewed conference.
    Story Source:
    Materials provided by University of Illinois Grainger College of Engineering. Original written by Debra Levey Larson. Note: Content may be edited for style and length.