More stories

  • AI reveals unsuspected math underlying search for exoplanets

    Artificial intelligence (AI) algorithms trained on real astronomical observations now outperform astronomers in sifting through massive amounts of data to find new exploding stars, identify new types of galaxies and detect the mergers of massive stars, accelerating the rate of new discovery in the world’s oldest science.
    But a form of AI called machine learning can reveal something deeper, University of California, Berkeley, astronomers have found: unsuspected connections hidden in the complex mathematics arising from general relativity — in particular, in how that theory is applied to finding new planets around other stars.
    In a paper appearing this week in the journal Nature Astronomy, the researchers describe how an AI algorithm developed to detect exoplanets more quickly when a foreground planetary system passes in front of a background star and briefly brightens it — a process called gravitational microlensing — revealed that the decades-old theories now used to explain these observations are woefully incomplete.
    In 1936, Albert Einstein himself used his new theory of general relativity to show how the light from a distant star can be bent by the gravity of a foreground star, not only brightening it as seen from Earth, but often splitting it into several points of light or distorting it into a ring, now called an Einstein ring. This is similar to the way a hand lens can focus and intensify light from the sun.
    But when the foreground object is a star with a planet, the brightening over time — the light curve — is more complicated (the simplest, single-lens case is sketched at the end of this story). What’s more, there are often multiple planetary orbits that can explain a given light curve equally well — so-called degeneracies. That’s where humans simplified the math and missed the bigger picture.
    The AI algorithm, however, pointed to a mathematical way to unify the two major kinds of degeneracy in interpreting what telescopes detect during microlensing, showing that the two “theories” are really special cases of a broader theory that, the researchers admit, is likely still incomplete.
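    For a feel for the signal involved, here is a minimal Python sketch of the simplest case: the smooth brightening produced by a single lensing star, using the standard point-source point-lens magnification formula with illustrative parameter values. The planetary signals and degeneracies discussed in the paper come from far more complicated binary-lens models, which this sketch does not attempt.
        import numpy as np
        def magnification(u):
            """Point-source, point-lens magnification A(u), with the lens-source
            separation u measured in units of the Einstein radius."""
            return (u**2 + 2) / (u * np.sqrt(u**2 + 4))
        def light_curve(t, t0=0.0, u0=0.1, tE=20.0):
            """Single-lens light curve: t0 = time of closest approach (days),
            u0 = minimum separation, tE = Einstein-radius crossing time (days)."""
            u = np.sqrt(u0**2 + ((t - t0) / tE)**2)
            return magnification(u)
        t = np.linspace(-60, 60, 601)                # days around closest approach
        A = light_curve(t)
        print(f"peak magnification: {A.max():.2f}")  # about 10 for u0 = 0.1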

  • Breakthrough in quantum universal gate sets: A high-fidelity iToffoli gate

    High-fidelity quantum logic gates applied to quantum bits (qubits) are the basic building blocks of programmable quantum circuits. Researchers at the Advanced Quantum Testbed (AQT) at Lawrence Berkeley National Laboratory (Berkeley Lab) conducted the first experimental demonstration of a high-fidelity, three-qubit iToffoli native gate, executed in a single step on a superconducting quantum information processor.
    Noisy intermediate-scale quantum processors typically support one- or two-qubit native gates, the types of gates that can be implemented directly by hardware. More complex gates are implemented by breaking them up into sequences of native gates. The team’s demonstration adds a novel and robust native three-qubit iToffoli gate for universal quantum computing. Furthermore, the team operated the gate with a very high fidelity of 98.26%. The team’s experimental breakthrough was published in Nature Physics this May.
    Quantum Logic Gates, Quantum Circuits
    The Toffoli gate, or controlled-controlled-NOT (CCNOT), is a key logic gate in classical computing because it is universal: circuits built from it alone can compute any desired binary operation. Furthermore, it is reversible, which allows the binary inputs (bits) to be determined and recovered from the outputs, so no information is lost.
    In quantum circuits, the input qubit can be in a superposition of 0 and 1 states. The qubit is physically connected to other qubits in the circuit, which makes it more difficult to implement a high-fidelity quantum gate as the number of qubits increases. The fewer quantum gates needed to compute an operation, the shorter the quantum circuit, which improves the chance of completing an algorithm before the qubits decohere and introduce errors into the final result. Therefore, reducing the complexity and running time of quantum gates is critical.
    In tandem with the Hadamard gate, the Toffoli gate forms a universal quantum gate set, which allows researchers to run any quantum algorithm. Experiments implementing multi-qubit gates in the major computing technologies — superconducting circuits, trapped ions, and Rydberg atoms — have successfully demonstrated three-qubit Toffoli gates with fidelities averaging between 87% and 90%. However, such demonstrations required researchers to break up the Toffoli gates into one- and two-qubit gates, making the gate operation time longer and degrading the fidelity.
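    As an illustration of the reversibility and truth table described above, here is a small NumPy sketch of the plain Toffoli (CCNOT) gate written as an 8×8 permutation matrix. The iToffoli gate demonstrated at AQT differs by an additional phase factor of i (hence the name), which this sketch does not model.
        import numpy as np
        # 8x8 unitary of the Toffoli (CCNOT) gate: flip the target (third) qubit
        # only when both controls are 1, i.e. swap basis states |110> and |111>.
        toffoli = np.eye(8)
        toffoli[[6, 7]] = toffoli[[7, 6]]
        # Reversibility: the gate is its own inverse, so applying it twice is the identity.
        assert np.allclose(toffoli @ toffoli, np.eye(8))
        # Classical truth table: (a, b, c) -> (a, b, c XOR (a AND b)).
        for a in (0, 1):
            for b in (0, 1):
                for c in (0, 1):
                    out = int(np.argmax(toffoli @ np.eye(8)[4 * a + 2 * b + c]))
                    print((a, b, c), "->", (out >> 2 & 1, out >> 1 & 1, out & 1))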

  • Computer model predicts dominant SARS-CoV-2 variants

    Scientists at the Broad Institute of MIT and Harvard and the University of Massachusetts Medical School have developed a machine learning model that can analyze millions of SARS-CoV-2 genomes and predict which viral variants will likely dominate and cause surges in COVID-19 cases. The model, called PyR0 (pronounced “pie-are-nought”), could help researchers identify which parts of the viral genome will be less likely to mutate and hence be good targets for vaccines that will work against future variants. The findings appear today in Science.
    The researchers trained the machine-learning model using 6 million SARS-CoV-2 genomes that were in the GISAID database in January 2022. They showed how their tool can also estimate the effect of genetic mutations on the virus’s fitness — its ability to multiply and spread through a population. When the team tested their model on viral genomic data from January 2022, it predicted the rise of the BA.2 variant, which became dominant in many countries in March 2022. PyR0 would have also identified the alpha variant (B.1.1.7) by late November 2020, a month before the World Health Organization listed it as a variant of concern. (A toy version of this kind of growth-advantage estimate is sketched at the end of this story.)
    The research team includes first author Fritz Obermeyer, a machine learning fellow at the Broad Institute when the study began, and senior authors Jacob Lemieux, an instructor of medicine at Harvard Medical School and Massachusetts General Hospital, and Pardis Sabeti, an institute member at Broad, a professor at the Center for Systems Biology and the Department of Organismic and Evolutionary Biology at Harvard University, and a professor in the Department of Immunology and Infectious Disease at the Harvard T. H. Chan School of Public Health. Sabeti is also a Howard Hughes Medical Institute investigator.
    PyR0 is based on a machine learning framework called Pyro, which was originally developed by a team at Uber AI Labs. In 2020, three members of that team including Obermeyer and Martin Jankowiak, the study’s second author, joined the Broad Institute and began applying the framework to biology.
    “This work was the result of biologists and geneticists coming together with software engineers and computer scientists,” Lemieux said. “We were able to tackle some really challenging questions in public health that no single disciplinary approach could have answered on its own.”
    “This kind of machine learning-based approach that looks at all the data and combines that into a single prediction is extremely valuable,” said Sabeti. “It gives us a leg up on identifying what’s emerging and could be a potential threat.”
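    To make the notion of a fitness estimate concrete, here is a toy Python sketch: given made-up weekly counts for two lineages, it reads the relative growth advantage off the slope of the log-odds over time. PyR0 itself, built on Pyro, is a far richer model fit to millions of genomes; this two-lineage, least-squares version only illustrates the underlying idea of inferring growth advantages from sequence counts.
        import numpy as np
        # Toy weekly sequence counts for two lineages (made-up numbers, not GISAID data).
        weeks    = np.arange(8)
        counts_a = np.array([900, 850, 780, 700, 600, 480, 360, 250])  # resident lineage
        counts_b = np.array([  5,  12,  30,  70, 150, 300, 520, 800])  # emerging lineage
        # Under logistic growth, the log-odds of the emerging lineage rise linearly with
        # time; the slope is its relative growth advantage (a simple proxy for fitness).
        log_odds = np.log(counts_b / counts_a)
        slope, intercept = np.polyfit(weeks, log_odds, 1)
        print(f"growth advantage: {slope:.2f} per week (about {np.exp(slope):.2f}x weekly)")
        # Week at which the emerging lineage is expected to pass 50% of sequences.
        print(f"predicted to become dominant around week {-intercept / slope:.1f}")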

  • Traveling wave of light sparks simple liquid crystal microposts to perform complex dance

    When humans twist and turn, it is the result of complex internal functions: the body’s nervous system signals our intentions; the musculoskeletal system supports the motion; and the digestive system generates the energy to power the move. The body seamlessly integrates these activities without our even being aware that coordinated, dynamic processes are taking place. Reproducing similar, integrated functioning in a single synthetic material has proven difficult: few one-component materials naturally encompass the spatial and temporal coordination needed to mimic the spontaneity and dexterity of biological behavior.
    However, through a combination of experiments and modeling, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and University of Pittsburgh Swanson School of Engineering created a single-material, self-regulating system that controllably twists and bends to undergo biomimetic motion. 
    The senior author is Joanna Aizenberg, the Amy Smith Berylson Professor of Materials Science and Professor of Chemistry & Chemical Biology at SEAS. Inspired by experiments performed in the Aizenberg lab, contributing authors at the University of Pittsburgh, Anna Balazs and James Waters, developed the theoretical and computational models to design liquid crystal elastomers (LCEs) that imitate the seamless coupling of dynamic processes observed in living systems.
    “Our movements occur spontaneously because the human body contains several interconnected structures, and the performance of each structure is highly coordinated in space and time, allowing one event to instigate the behavior in another part of the body,” explained Balazs, Distinguished Professor of Chemical Engineering and the John A. Swanson Chair of Engineering. “For example, the firing of neurons in the spine triggers a signal that causes a particular muscle to contract; the muscle expands when the neurons have stopped firing, allowing the body to return to its relaxed shape. If we could replicate this level of interlocking, multi-functionality in a synthetic material, we could ultimately devise effective self-regulating, autonomously operating devices.”
    The LCE material used in this collaborative Harvard-Pitt study was composed of long polymer chains with rod-like groups (mesogens) attached via side branches; photo-responsive crosslinkers were used to make the LCE responsive to UV light. The material was molded into micron-scale posts anchored to an underlying surface. The Harvard team then demonstrated an extremely diverse set of complex motions that the microstructures can display when exposed to light. “The coupling among microscopic units — the polymers, side chains, mesogens and crosslinkers — within this material could remind you of the interlocking of different components within a human body,” said Balazs, “suggesting that with the right trigger, the LCE might display rich spatiotemporal behavior.”
    To devise the most effective triggers, Waters formulated a model that describes the simultaneous optical, chemical and mechanical phenomena occurring over the range of length and time scales that characterize the LCE. The simulations also provided an effective means of uncovering and visualizing the complex interactions within this responsive opto-chemo-mechanical system.
    “Our model can accurately predict the spatial and temporal evolution of the posts and reveal how different behaviors can be triggered by varying the materials’ properties and features of the imposed light,” Waters said, further noting, “The model serves as a particularly useful predictive tool when the complexity of the system is increased by, for example, introducing multiple interacting posts, which can be arranged in an essentially infinite number of ways.”
    According to Balazs, these combined modeling and experimental studies pave the way for creating the next generation of light-responsive, soft machines or robots that begin to exhibit life-like autonomy. “Light is a particularly useful stimulus for activating these materials since the light source can be easily moved to instigate motion in different parts of the post or collection of posts,” she said.
    In future studies, Waters and Balazs will investigate how arrays of posts and posts with different geometries behave under the influence of multiple or more localized beams of light. Preliminary results indicate that in the presence of multiple light beams, the LCE posts can mimic the movement and flexibility of fingers, suggesting new routes for designing soft robotic hands that can be manipulated with light.
    “The vast design space for individual and collective motions is potentially transformative for soft robotics, micro-walkers, sensors, and robust information encryption systems,” said Aizenberg.
    Story Source:
    Materials provided by University of Pittsburgh.

  • Emulating impossible 'unipolar' laser pulses paves the way for processing quantum information

    A laser pulse that sidesteps the inherent symmetry of light waves could manipulate quantum information, potentially bringing us closer to room temperature quantum computing.
    The study, led by researchers at the University of Regensburg and the University of Michigan, could also accelerate conventional computing.
    Quantum computing has the potential to accelerate solutions to problems that need to explore many variables at the same time, including drug discovery, weather prediction and encryption for cybersecurity. Conventional computer bits encode either a 1 or 0, but quantum bits, or qubits, can encode both at the same time. This essentially enables quantum computers to work through multiple scenarios simultaneously, rather than exploring them one after the other. However, these superposition states don’t last long, so the information processing must be faster than electronic circuits can muster.
    While laser pulses can be used to manipulate the energy states of qubits, different ways of computing are possible if charge carriers used to encode quantum information could be moved around — including a room-temperature approach. Terahertz light, which sits between infrared and microwave radiation, oscillates fast enough to provide the speed, but the shape of the wave is also a problem. Namely, electromagnetic waves are obliged to produce oscillations that are both positive and negative, which sum to zero.
    The positive cycle may move charge carriers, such as electrons. But then the negative cycle pulls the charges back to where they started. To reliably control the quantum information, an asymmetric light wave is needed.
    “The optimum would be a completely directional, unipolar ‘wave,’ so there would be only the central peak, no oscillations. That would be the dream. But the reality is that light fields that propagate have to oscillate, so we try to make the oscillations as small as we can,” said Mackillo Kira, U-M professor of electrical engineering and computer science and leader of the theory aspects of the study in Light: Science & Applications.
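    The zero-sum constraint Kira describes can be checked in a few lines of NumPy: a generic propagating pulse, modeled as a Gaussian envelope multiplying a carrier oscillation, has positive and negative half-cycles that cancel almost perfectly, so its net time-integrated field is essentially zero. The parameter values below are illustrative and not taken from the study.
        import numpy as np
        # A generic few-cycle pulse: Gaussian envelope times a carrier oscillation.
        # Illustrative, roughly terahertz-scale parameters (not from the study).
        t     = np.linspace(-2e-12, 2e-12, 20001)     # time axis, +/- 2 ps
        tau   = 300e-15                               # envelope width, 300 fs
        f0    = 1.0e12                                # carrier frequency, 1 THz
        field = np.exp(-(t / tau) ** 2) * np.sin(2 * np.pi * f0 * t)
        dt = t[1] - t[0]
        net_area   = np.sum(field) * dt               # signed area: half-cycles cancel
        total_area = np.sum(np.abs(field)) * dt       # total swing, for comparison
        # The ratio is vanishingly small: a propagating wave cannot push charge carriers
        # in only one direction, which is why a truly unipolar pulse is "impossible".
        print(f"net area / total swing = {abs(net_area) / total_area:.1e}")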

  • Developing next-generation superconducting cables

    Researchers at Florida State University’s Center for Advanced Power Systems (CAPS), in collaboration with Colorado-based Advanced Conductor Technologies, have demonstrated a new, ready-to-use superconducting cable system — an improvement to superconductor technology that drives the development of technologies such as all-electric ships or airplanes.
    In a paper published in Superconductor Science and Technology, the researchers demonstrated a system that uses helium gas for crucial cooling. Superconducting cables can move electrical current with no resistance, but they need very cold temperatures to function.
    “We want to make these cables smaller, with lower weight and lower volume,” said paper co-author Sastry Pamidi, a FAMU-FSU College of Engineering professor and CAPS associate director. “These are very efficient power cables, and this research is focused on improving efficiency and practicality needed to achieve the promise of next-generation superconductor technology.”
    Previous work showed that the body of superconducting cables could be cooled with helium gas, but the cable ends needed another medium for cooling, such as liquid nitrogen. In this paper, researchers overcame that obstacle and were able to cool an entire cable system with helium gas.
    The work gives engineers more design flexibility because helium remains a gas in a wider range of temperatures than other mediums. Liquid nitrogen, for example, isn’t a suitable cooling medium for some applications, and this research moves superconducting technology closer to practical solutions for those scenarios.
    The paper is the latest outcome of the partnership between researchers at CAPS and Advanced Conductor Technologies (ACT). Previous teamwork has led to other publications and to the development of Conductor on Round Core (CORC®) cables that were the subject of this research.
    “Removing the need for liquid nitrogen to pre-cool the current leads of the superconducting cable and instead using the same helium gas that cools the cable allowed us to make a highly compact superconducting power cable that can be operated in a continuous mode,” said Danko van der Laan, ACT’s founder. “It therefore has become an elegant system that’s small and lightweight and it allows much easier integration into electric ships and aircraft.”
    The ongoing collaboration has been funded by Small Business Innovation Research (SBIR) grants from the U.S. Navy. The grants encourage businesses to partner with universities to conduct high-level research.
    The collaboration provides benefits for all involved. Companies receive help creating new products. Students see how their classwork applies to real-life engineering problems. Taxpayers get the technical and economic benefits that come from the innovations. And faculty members receive a share of a company’s research funding and the opportunity to tackle exciting work.
    “We like challenges,” Pamidi said. “These grants come with challenges that have a clear target. The company says ‘This is what we want to develop. Can you help us with this?’ It is motivating, and it also provides students with connections. The small businesses we work with not only provide money, but they also see the skills our students are gaining.”
    CAPS researcher Chul Kim and ACT researcher Jeremy Weiss were co-authors on this work. Along with the U.S. Navy grant, this research was supported by the U.S. Department of Energy.
    Story Source:
    Materials provided by Florida State University. Original written by Bill Wellock.

  • Scientists use quantum computers to simulate quantum materials

    Scientists achieve important milestone in making quantum computing more effective.
    Quantum computers promise to revolutionize science by enabling computations that were once thought impossible. But for quantum computers to become an everyday reality, there is a long way to go and many challenging tests to pass.
    One of the tests involves using quantum computers to simulate the properties of materials for next-generation quantum technologies.
    In a new study from the U.S. Department of Energy’s (DOE) Argonne National Laboratory and the University of Chicago, researchers performed quantum simulations of spin defects, which are specific impurities in materials that could offer a promising basis for new quantum technologies. The study improved the accuracy of calculations on quantum computers by correcting for noise introduced by quantum hardware (one common noise-mitigation approach is sketched at the end of this story).
    “We want to learn how to use new computational technologies that are up-and-coming. Developing robust strategies in the early days of quantum computing is an important first step in being able to understand how to use these machines efficiently in the future.” — Giulia Galli, Argonne and University of Chicago
    The research was conducted as part of the Midwest Integrated Center for Computational Materials (MICCoM), a DOE computational materials science program headquartered at Argonne, as well as Q-NEXT, a DOE National Quantum Information Science Research Center.
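    The story does not spell out the noise-correction scheme, but one widely used idea, zero-noise extrapolation, can be sketched in a few lines of Python: run the same circuit at deliberately amplified noise levels and extrapolate the measured expectation value back to zero noise. The numbers below are synthetic, and this illustrates the general technique rather than the specific procedure used in the Argonne and University of Chicago study.
        import numpy as np
        # Zero-noise extrapolation on synthetic data: evaluate the same circuit at
        # deliberately amplified noise levels, then extrapolate the measured
        # expectation value back to the zero-noise limit.
        rng = np.random.default_rng(0)
        exact_value  = -1.25                          # hypothetical noiseless observable
        noise_scales = np.array([1.0, 2.0, 3.0])      # noise amplification factors
        # Toy noise model: the measured value drifts linearly with the noise scale,
        # plus a little shot noise from finite sampling.
        measured = exact_value * (1 - 0.12 * noise_scales) + rng.normal(0.0, 0.01, 3)
        coeffs = np.polyfit(noise_scales, measured, 1)  # fit value vs. noise scale
        mitigated = np.polyval(coeffs, 0.0)             # evaluate the fit at zero noise
        print(f"raw (scale 1): {measured[0]:+.3f}")
        print(f"extrapolated:  {mitigated:+.3f}   (exact {exact_value:+.3f})")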

  • Significant energy savings using neuromorphic hardware

    For the first time, TU Graz’s Institute of Theoretical Computer Science and Intel Labs have demonstrated experimentally that a large neural network can process sequences such as sentences while consuming four to sixteen times less energy on neuromorphic hardware than on non-neuromorphic hardware. The new research is based on Intel Labs’ Loihi neuromorphic research chip, which draws on insights from neuroscience to create chips that function similarly to the biological brain.
    The research was funded by the Human Brain Project (HBP), one of the largest research projects in the world, with more than 500 scientists and engineers across Europe studying the human brain. The results are published in the paper “Memory for AI Applications in Spike-based Neuromorphic Hardware” (DOI 10.1038/s42256-022-00480-w) in Nature Machine Intelligence.
    Human brain as a role model
    Smart machines and intelligent computers that can autonomously recognize and infer objects and relationships between different objects are the subjects of worldwide artificial intelligence (AI) research. Energy consumption is a major obstacle on the path to a broader application of such AI methods. It is hoped that neuromorphic technology will provide a push in the right direction. Neuromorphic technology is modelled after the human brain, which is highly efficient in using energy. To process information, its hundred billion neurons consume only about 20 watts, not much more power than an average energy-saving light bulb.
    In the research, the group focused on algorithms that work with temporal processes. For example, the system had to answer questions about a previously told story and grasp the relationships between objects or people from the context. The hardware tested consisted of 32 Loihi chips.
    Loihi research chip: up to sixteen times more energy-efficient than non-neuromorphic hardware
    “Our system is four to sixteen times more energy-efficient than other AI models on conventional hardware,” says Philipp Plank, a doctoral student at TU Graz’s Institute of Theoretical Computer Science. Plank expects further efficiency gains as these models are migrated to the next generation of Loihi hardware, which significantly improves the performance of chip-to-chip communication.
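    For readers unfamiliar with spiking hardware, the Python sketch below simulates a single leaky integrate-and-fire neuron, the kind of event-driven unit that chips like Loihi implement; computation and communication happen only at sparse spike events, which is where the energy savings come from. The parameters and inputs are illustrative and not taken from the paper or from Loihi’s actual neuron model.
        import numpy as np
        # Minimal leaky integrate-and-fire (LIF) neuron, the kind of event-driven unit a
        # spiking chip implements: downstream work happens only at the sparse spike events.
        rng = np.random.default_rng(1)
        steps, dt = 1000, 1e-3        # simulate 1 s at 1 ms resolution
        tau, v_th = 20e-3, 1.0        # membrane time constant (s), firing threshold
        weight    = 0.6               # synaptic weight of the input
        inputs    = rng.random(steps) < 0.05   # sparse, random input spike train
        v, out_spikes = 0.0, []
        for step in range(steps):
            v += (dt / tau) * (-v) + weight * inputs[step]  # leak plus weighted input spike
            if v >= v_th:                                   # threshold crossing: emit a spike
                out_spikes.append(step)
                v = 0.0                                     # reset the membrane potential
        print(f"{int(inputs.sum())} input spikes -> {len(out_spikes)} output spikes in 1 s")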