More stories

  •

    Toward error-free quantum computing

    For quantum computers to be useful in practice, errors must be detected and corrected. At the University of Innsbruck, Austria, a team of experimental physicists has now implemented a universal set of computational operations on fault-tolerant quantum bits for the first time, demonstrating how an algorithm can be programmed on a quantum computer so that errors do not spoil the result.
    In modern computers, errors during the processing and storage of information have become a rarity thanks to high-quality fabrication. For critical applications, however, where even single errors can have serious consequences, error-correction mechanisms based on redundancy of the processed data are still used.
    Quantum computers are inherently far more susceptible to disturbances and will therefore probably always require error-correction mechanisms, because otherwise errors would propagate uncontrolled through the system and information would be lost. Since the fundamental laws of quantum mechanics forbid copying quantum information, redundancy is achieved instead by distributing the logical quantum information across an entangled state of several physical systems, for example multiple individual atoms.
    The team led by Thomas Monz of the Department of Experimental Physics at the University of Innsbruck and Markus Müller of RWTH Aachen University and Forschungszentrum Jülich in Germany has now succeeded for the first time in realizing a set of computational operations on two logical quantum bits that can be used to implement any possible operation. “For a real-world quantum computer, we need a universal set of gates with which we can program all algorithms,” explains Lukas Postler, an experimental physicist from Innsbruck.
    Fundamental quantum operation realized
    The team of researchers implemented this universal gate set on an ion-trap quantum computer featuring 16 trapped atoms. The quantum information was stored in two logical quantum bits, each distributed over seven atoms.
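    The redundancy idea described above, where logical information is spread across several physical carriers so that an error can be located without reading out the data, can be illustrated with the simplest textbook case, the three-qubit bit-flip code. This is a minimal NumPy sketch, not the seven-atom-per-logical-qubit encoding used in the Innsbruck experiment:

```python
# Minimal sketch of redundancy via entanglement: the 3-qubit bit-flip code.
# Textbook illustration only, not the code used in the experiment.
import numpy as np

def encode(alpha, beta):
    """Encode one logical qubit a|0>+b|1> into a|000>+b|111> (8-dim vector)."""
    state = np.zeros(8, dtype=complex)
    state[0b000] = alpha   # amplitude of |000>
    state[0b111] = beta    # amplitude of |111>
    return state

def flip(state, qubit):
    """Apply a bit-flip (X) error to one of the three physical qubits."""
    out = np.zeros_like(state)
    for basis in range(8):
        out[basis ^ (1 << qubit)] = state[basis]
    return out

def syndrome(state):
    """Measure the parities Z0Z1 and Z1Z2 without touching the logical info."""
    # After at most one X error the parities are identical on every nonzero
    # amplitude, so reading them off one basis state is enough.
    basis = next(b for b in range(8) if abs(state[b]) > 1e-12)
    p01 = ((basis >> 0) & 1) ^ ((basis >> 1) & 1)
    p12 = ((basis >> 1) & 1) ^ ((basis >> 2) & 1)
    return p01, p12

def correct(state):
    """Decode the syndrome and undo the single bit-flip it points to."""
    s = syndrome(state)
    qubit = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[s]
    return state if qubit is None else flip(state, qubit)

alpha, beta = 0.6, 0.8                       # an arbitrary logical state
noisy = flip(encode(alpha, beta), qubit=1)   # error hits physical qubit 1
fixed = correct(noisy)
assert np.allclose(fixed, encode(alpha, beta))  # logical info recovered
```

    The key point is that the syndrome measurements reveal only where an error sat, never the encoded amplitudes themselves.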

  •

    Secure communication with light particles

    While quantum computers offer many novel possibilities, they also pose a threat to internet security, since these powerful machines make common encryption methods vulnerable. Using so-called quantum key distribution, researchers at TU Darmstadt have developed a new, tap-proof communication network.
    The new system is used to exchange symmetric keys between parties in order to encrypt messages so that they cannot be read by third parties. In cooperation with Deutsche Telekom, the researchers led by physics professor Thomas Walther succeeded in operating a quantum network that is scalable in terms of the number of users and at the same time robust without the need for trusted nodes. In the future, such systems could protect critical infrastructure from the growing danger of cyberattacks. In addition, tap-proof connections could be installed between different government sites in larger cities.
    The system developed by the Darmstadt researchers enables so-called quantum key exchange, providing several parties in a star-shaped network with a common random number. Individual light quanta, so-called photons, are distributed to the users in the communication network in order to derive the random number and thus the digital key. Quantum-physical effects make these keys particularly secure, so communication is highly protected and eavesdropping attacks can be detected.
    Until now, such quantum key methods have been technically complex and sensitive to external influences. The system of the Darmstadt group from the Collaborative Research Center CROSSING is based on a special protocol: it distributes photons from a central source to all users in the network and establishes the security of the quantum keys through the effect of so-called quantum entanglement. This quantum-physical effect produces correlations between two light particles that remain observable even when the particles are far apart. By measuring a property of one light particle from a pair, the corresponding property of its partner can be predicted.
    Polarization is often used as a property, but this is typically disturbed in the glass fibers used for transmission due to environmental influences such as vibrations or small temperature changes. However, the Darmstadt system uses a protocol in which the quantum information is encoded in the phase and arrival time of the photons and is therefore particularly insensitive to such disturbances. For the first time, the group has succeeded in providing a network of users with quantum keys by means of this robust protocol.
    The high stability of the transmission and the scalability in principle were successfully demonstrated in a field test together with Deutsche Telekom Technik GmbH. As a next step, the researchers at TU Darmstadt are planning to connect other buildings in the city to their system.
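    The sifting step at the heart of entanglement-based key exchange can be sketched in a few lines. This toy simulation assumes idealized, noise-free pairs and simple two-outcome bases; the Darmstadt system's phase and arrival-time encoding is more involved:

```python
# Hedged sketch of basis sifting in entanglement-based key distribution.
# Idealized: no noise, no eavesdropper, abstract 0/1 bases.
import random

def run_sifting(n_pairs, rng):
    alice_key, bob_key = [], []
    for _ in range(n_pairs):
        a_basis = rng.randint(0, 1)   # each side picks a basis at random
        b_basis = rng.randint(0, 1)
        outcome = rng.randint(0, 1)   # shared randomness from the photon pair
        if a_basis == b_basis:
            # Matching bases: entanglement correlates the outcomes, so both
            # sides record the same key bit.
            alice_key.append(outcome)
            bob_key.append(outcome)
        # Mismatched bases are discarded during public sifting.
    return alice_key, bob_key

rng = random.Random(42)
a, b = run_sifting(1000, rng)
assert a == b                                  # sifted keys agree
print(len(a), "key bits from 1000 pairs")      # roughly half survive sifting
```

    In a real link, a fraction of the sifted bits would also be sacrificed to estimate the error rate and detect eavesdropping.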
    Story Source:
    Materials provided by Technische Universität Darmstadt. Note: Content may be edited for style and length.

  •

    AI can predict cancer risk of lung nodules

    An artificial intelligence (AI) tool helps doctors predict the cancer risk in lung nodules seen on CT, according to a new study published in the journal Radiology.
    Pulmonary nodules appear as small spots on the lungs on chest imaging. They have become a much more common finding as CT has gained favor over X-rays for chest imaging.
    “A nodule would appear on somewhere between 5% and 8% of chest X-rays,” said study senior author Anil Vachani, M.D., director of clinical research in the section of Interventional Pulmonology and Thoracic Oncology at the Perelman School of Medicine, University of Pennsylvania in Philadelphia. “Chest CT is such a sensitive test, you’ll see a small nodule in upwards of a third to a half of cases. We’ve gone from a problem that was relatively uncommon to one that affects 1.6 million people in the U.S. every year.”
    Dr. Vachani and colleagues evaluated an AI-based computer-aided diagnosis tool developed by Optellum Ltd. of Oxford, England, to assist clinicians in assessing pulmonary nodules on chest CT. While CT scans show many aspects of the nodule, such as size and border characteristics, AI can delve even deeper.
    “AI can go through very large datasets to come up with unique patterns that can’t be seen through the naked eye and end up being predictive of malignancy,” Dr. Vachani said.
    In the study, six radiologists and six pulmonologists made estimates of malignancy risk for nodules using CT imaging data alone. They also made management recommendations such as CT surveillance or a diagnostic procedure for each case without and with the AI tool.
    A total of 300 chest CTs of indeterminate pulmonary nodules were used in the study. The researchers defined indeterminate nodules as those between 5 and 30 millimeters in diameter.
    Analysis showed that use of the AI tool improved estimation of nodule malignancy risk on chest CT. It also improved agreement among the different readers for both risk stratification and management recommendations.
    “The readers judge malignant or benign with a reasonable level of accuracy based on imaging itself, but when you combine their clinical interpretation with the AI algorithm, the accuracy level improves significantly,” Dr. Vachani said. “The level of improvement suggests that this tool has the potential to change how we judge cancer versus benign and hopefully improve how we manage patients.”
    The model appears to work equally well on diagnostic CT and low-dose screening CT, Dr. Vachani said, but more study is needed before the AI tool can be used in the clinic.
    “We’ve taken the first step here and shown that decision making is better if the AI tool is incorporated into radiology or pulmonology practice,” Dr. Vachani said. “The next step is to take the tool and do some prospective trials where physicians use the AI tool in a real-world setting. We are in the process of designing those trials.”
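    As a purely illustrative sketch of the kind of decision support described above, the toy rule below combines the study's 5 to 30 millimeter definition of an indeterminate nodule with a model score. The score threshold and the `ai_score` input are invented for illustration; they are not Optellum's algorithm and have no clinical meaning:

```python
# Toy triage rule: size buckets first, with a (hypothetical) AI malignancy
# score deciding only the indeterminate 5-30 mm middle range from the study.
def triage(diameter_mm: float, ai_score: float) -> str:
    """Return a toy management bucket for a pulmonary nodule."""
    if diameter_mm < 5:                  # below the study's indeterminate range
        return "routine follow-up"
    if diameter_mm > 30:                 # above the indeterminate range
        return "diagnostic workup"
    # 5-30 mm: indeterminate per the study; let the score tip the decision
    # (the 0.65 cutoff is an arbitrary placeholder, not a validated threshold)
    return "diagnostic workup" if ai_score >= 0.65 else "CT surveillance"

print(triage(8.0, 0.2))    # small indeterminate nodule, low score
print(triage(12.0, 0.9))   # indeterminate nodule, high score
```

    The point is only the shape of the workflow: the model informs the ambiguous middle range, mirroring how readers in the study combined imaging judgment with the tool.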
    Story Source:
    Materials provided by Radiological Society of North America.

  •

    AI reveals unsuspected math underlying search for exoplanets

    Artificial intelligence (AI) algorithms trained on real astronomical observations now outperform astronomers in sifting through massive amounts of data to find new exploding stars, identify new types of galaxies and detect the mergers of massive stars, accelerating the rate of new discovery in the world’s oldest science.
    But AI, also called machine learning, can reveal something deeper, University of California, Berkeley, astronomers found: unsuspected connections hidden in the complex mathematics arising from general relativity — in particular, how that theory is applied to finding new planets around other stars.
    In a paper appearing this week in the journal Nature Astronomy, the researchers describe how an AI algorithm, developed to more quickly detect exoplanets when a planetary system passes in front of a background star and briefly brightens it (a process called gravitational microlensing), revealed that the decades-old theories now used to explain these observations are woefully incomplete.
    In 1936, Albert Einstein himself used his new theory of general relativity to show how the light from a distant star can be bent by the gravity of a foreground star, not only brightening it as seen from Earth, but often splitting it into several points of light or distorting it into a ring, now called an Einstein ring. This is similar to the way a hand lens can focus and intensify light from the sun.
    But when the foreground object is a star with a planet, the brightening over time — the light curve — is more complicated. What’s more, there are often multiple planetary orbits that can explain a given light curve equally well — so-called degeneracies. That’s where humans simplified the math and missed the bigger picture.
    The AI algorithm, however, pointed to a mathematical way to unify the two major kinds of degeneracy in interpreting what telescopes detect during microlensing, showing that the two “theories” are really special cases of a broader theory that, the researchers admit, is likely still incomplete.
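    The single-lens light curve the article refers to has a standard textbook form, the Paczynski point-source, point-lens model. The sketch below is background only and does not reproduce the paper's new degeneracy analysis; `u0` (impact parameter) and `tE` (Einstein crossing time) are the conventional symbols:

```python
# Standard point-source, point-lens microlensing magnification and the
# resulting light curve (textbook background, not the paper's new result).
import math

def magnification(u):
    """Magnification at lens-source separation u (in Einstein radii)."""
    return (u * u + 2) / (u * math.sqrt(u * u + 4))

def light_curve(t, t0=0.0, u0=0.1, tE=20.0):
    """Brightening vs. time for a lens passing the line of sight:
    u0 is the minimum separation, tE the Einstein-radius crossing time."""
    u = math.hypot(u0, (t - t0) / tE)
    return magnification(u)

peak = light_curve(0.0)          # closest approach gives maximum brightening
assert peak > light_curve(10.0) > light_curve(40.0)   # smooth, symmetric decay
print(f"peak magnification at u0=0.1: {peak:.1f}")
```

    A planet around the lens star adds short-lived structure on top of this smooth curve, and it is in fitting that structure that the degeneracies arise.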

  •

    Breakthrough in quantum universal gate sets: A high-fidelity iToffoli gate

    High-fidelity quantum logic gates applied to quantum bits (qubits) are the basic building blocks of programmable quantum circuits. Researchers at the Advanced Quantum Testbed (AQT) at Lawrence Berkeley National Laboratory (Berkeley Lab) conducted the first experimental demonstration of a high-fidelity three-qubit iToffoli native gate, executed in a single step on a superconducting quantum information processor.
    Noisy intermediate-scale quantum processors typically support one- or two-qubit native gates, the types of gates that can be implemented directly by hardware. More complex gates are implemented by breaking them up into sequences of native gates. The team’s demonstration adds a novel and robust native three-qubit iToffoli gate for universal quantum computing. Furthermore, the team operated the gate at a very high fidelity of 98.26%. The experimental breakthrough was published in Nature Physics this May.
    Quantum Logic Gates, Quantum Circuits
    The Toffoli gate, or controlled-controlled-NOT (CCNOT) gate, is a key logic gate in classical computing because it is universal: all logic circuits for computing any desired binary operation can be built from it. Furthermore, it is reversible, which allows the binary inputs (bits) to be determined and recovered from the outputs, so no information is lost.
    In quantum circuits, the input qubit can be in a superposition of 0 and 1 states. The qubit is physically connected to other qubits in the circuit, which makes it more difficult to implement a high-fidelity quantum gate as the number of qubits increases. The fewer quantum gates needed to compute an operation, the shorter the quantum circuit, thereby improving the implementation of an algorithm before the qubits decohere causing errors in the final result. Therefore, reducing the complexity and running time of quantum gates is critical.
    In tandem with the Hadamard gate, the Toffoli gate forms a universal quantum gate set, which allows researchers to run any quantum algorithm. Experiments in the major computing technologies — superconducting circuits, trapped ions, and Rydberg atoms — have successfully demonstrated Toffoli gates on three qubits with fidelities averaging between 87% and 90%. However, such demonstrations required breaking the Toffoli gate up into one- and two-qubit gates, which makes the gate operation time longer and degrades the fidelity.
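    The reversibility described above can be checked directly: as a matrix, the Toffoli gate is a permutation that undoes itself. A minimal NumPy sketch follows; the placement of the factor of i on the flipped block is an assumption made for illustration, only loosely modeled on the iToffoli variant:

```python
# The Toffoli (CCNOT) gate as an 8x8 matrix: it swaps the basis states
# |110> and |111> (flip the target only when both controls are 1) and is
# therefore its own inverse, i.e. reversible.
import numpy as np

def toffoli(phase=1.0):
    """8x8 CCNOT matrix; phase=1j gives an iToffoli-style phased variant
    (illustrative placement of the phase, not the exact AQT gate)."""
    gate = np.eye(8, dtype=complex)
    gate[6, 6] = gate[7, 7] = 0          # clear the diagonal for |110>, |111>
    gate[6, 7] = gate[7, 6] = phase      # swap them, optionally with a phase
    return gate

ccnot = toffoli()
assert np.allclose(ccnot @ ccnot, np.eye(8))      # self-inverse: reversible
itoffoli = toffoli(phase=1j)
assert np.allclose(itoffoli.conj().T @ itoffoli, np.eye(8))  # still unitary
```

    The phased variant is no longer its own inverse, but it remains unitary, which is all quantum mechanics requires of a gate.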

  •

    Computer model predicts dominant SARS-CoV-2 variants

    Scientists at the Broad Institute of MIT and Harvard and the University of Massachusetts Medical School have developed a machine learning model that can analyze millions of SARS-CoV-2 genomes and predict which viral variants will likely dominate and cause surges in COVID-19 cases. The model, called PyR0 (pronounced “pie-are-nought”), could help researchers identify which parts of the viral genome will be less likely to mutate and hence be good targets for vaccines that will work against future variants. The findings appear today in Science.
    The researchers trained the machine-learning model using 6 million SARS-CoV-2 genomes that were in the GISAID database in January 2022. They showed how their tool can also estimate the effect of genetic mutations on the virus’s fitness — its ability to multiply and spread through a population. When the team tested their model on viral genomic data from January 2022, it predicted the rise of the BA.2 variant, which became dominant in many countries in March 2022. PyR0 would have also identified the alpha variant (B.1.1.7) by late November 2020, a month before the World Health Organization listed it as a variant of concern.
    The research team includes first author Fritz Obermeyer, a machine learning fellow at the Broad Institute when the study began, and senior authors Jacob Lemieux, an instructor of medicine at Harvard Medical School and Massachusetts General Hospital, and Pardis Sabeti, an institute member at Broad, a professor at the Center for Systems Biology and the Department of Organismic and Evolutionary Biology at Harvard University, and a professor in the Department of Immunology and Infectious Disease at the Harvard T. H. Chan School of Public Health. Sabeti is also a Howard Hughes Medical Institute investigator.
    PyR0 is based on a machine-learning framework called Pyro, which was originally developed by a team at Uber AI Labs. In 2020, three members of that team, including Obermeyer and Martin Jankowiak, the study’s second author, joined the Broad Institute and began applying the framework to biology.
    “This work was the result of biologists and geneticists coming together with software engineers and computer scientists,” Lemieux said. “We were able to tackle some really challenging questions in public health that no single disciplinary approach could have answered on its own.”
    “This kind of machine learning-based approach that looks at all the data and combines that into a single prediction is extremely valuable,” said Sabeti. “It gives us a leg up on identifying what’s emerging and could be a potential threat.”
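    A hedged sketch of the underlying forecasting idea: if each lineage multiplies at its own relative rate, future proportions follow from exponentially reweighting current counts. The numbers below are invented, and PyR0's hierarchical Bayesian model in Pyro is far richer than this two-line projection:

```python
# Toy multinomial-logistic projection of variant shares: each lineage's
# count is reweighted by exp(fitness * time), then normalized.
import math

def forecast_shares(counts, fitness, dt):
    """Project lineage proportions dt time units ahead.

    counts:  current observed counts per lineage
    fitness: per-lineage relative growth rates (hypothetical numbers here)
    """
    weights = [c * math.exp(f * dt) for c, f in zip(counts, fitness)]
    total = sum(weights)
    return [w / total for w in weights]

# Invented numbers: a rare lineage with a clear fitness edge overtakes the rest.
counts = [900, 90, 10]           # resident, declining, and emerging lineages
fitness = [0.00, -0.05, 0.12]    # relative growth rates (illustrative)
shares = forecast_shares(counts, fitness, dt=60.0)
assert shares[2] == max(shares)  # the fit emerging lineage comes to dominate
```

    This is the sense in which a fitness estimate for each mutation translates into a forecast of which variants will surge.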
    The future of SARS-CoV-2

  •

    Traveling wave of light sparks simple liquid crystal microposts to perform complex dance

    When humans twist and turn, it is the result of complex internal functions: the body’s nervous system signals our intentions; the musculoskeletal system supports the motion; and the digestive system generates the energy to power the move. The body seamlessly integrates these activities without our even being aware that coordinated, dynamic processes are taking place. Reproducing similar, integrated functioning in a single synthetic material has proven difficult; few one-component materials naturally encompass the spatial and temporal coordination needed to mimic the spontaneity and dexterity of biological behavior.
    However, through a combination of experiments and modeling, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and University of Pittsburgh Swanson School of Engineering created a single-material, self-regulating system that controllably twists and bends to undergo biomimetic motion. 
    The senior author is Joanna Aizenberg, the Amy Smith Berylson Professor of Materials Science and Professor of Chemistry & Chemical Biology at SEAS. Inspired by experiments performed in the Aizenberg lab, contributing authors at the University of Pittsburgh, Anna Balazs and James Waters, developed the theoretical and computational models to design liquid crystal elastomers (LCEs) that imitate the seamless coupling of dynamic processes observed in living systems.
    “Our movements occur spontaneously because the human body contains several interconnected structures, and the performance of each structure is highly coordinated in space and time, allowing one event to instigate the behavior in another part of the body,” explained Balazs, Distinguished Professor of Chemical Engineering and the John A. Swanson Chair of Engineering. “For example, the firing of neurons in the spine triggers a signal that causes a particular muscle to contract; the muscle expands when the neurons have stopped firing, allowing the body to return to its relaxed shape. If we could replicate this level of interlocking, multi-functionality in a synthetic material, we could ultimately devise effective self-regulating, autonomously operating devices.”
    The LCE material used in this collaborative Harvard-Pitt study was composed of long polymer chains with rod-like groups (mesogens) attached via side branches; photo-responsive crosslinkers were used to make the LCE responsive to UV light. The material was molded into micron-scale posts anchored to an underlying surface. The Harvard team then demonstrated an extremely diverse set of complex motions that the microstructures can display when exposed to light. “The coupling among microscopic units — the polymers, side chains, mesogens and crosslinkers — within this material could remind you of the interlocking of different components within a human body,” said Balazs, “suggesting that with the right trigger, the LCE might display rich spatiotemporal behavior.”
    To devise the most effective triggers, Waters formulated a model that describes the simultaneous optical, chemical and mechanical phenomena occurring over the range of length and time scales that characterize the LCE. The simulations also provided an effective means of uncovering and visualizing the complex interactions within this responsive opto-chemo-mechanical system.
    “Our model can accurately predict the spatial and temporal evolution of the posts and reveal how different behaviors can be triggered by varying the materials’ properties and features of the imposed light,” Waters said, further noting “The model serves as a particularly useful predictive tool when the complexity of the system is increased by, for example, introducing multiple interacting posts, which can be arranged in an essentially infinite number of ways.”
    According to Balazs, these combined modeling and experimental studies pave the way for creating the next generation of light-responsive, soft machines or robots that begin to exhibit life-like autonomy. “Light is a particularly useful stimulus for activating these materials since the light source can be easily moved to instigate motion in different parts of the post or collection of posts,” she said.
    In future studies, Waters and Balazs will investigate how arrays of posts and posts with different geometries behave under the influence of multiple or more localized beams of light. Preliminary results indicate that in the presence of multiple light beams, the LCE posts can mimic the movement and flexibility of fingers, suggesting new routes for designing soft robotic hands that can be manipulated with light.
    “The vast design space for individual and collective motions is potentially transformative for soft robotics, micro-walkers, sensors, and robust information encryption systems,” said Aizenberg.
    Story Source:
    Materials provided by University of Pittsburgh.

  •

    Emulating impossible 'unipolar' laser pulses paves the way for processing quantum information

    A laser pulse that sidesteps the inherent symmetry of light waves could manipulate quantum information, potentially bringing us closer to room temperature quantum computing.
    The study, led by researchers at the University of Regensburg and the University of Michigan, could also accelerate conventional computing.
    Quantum computing has the potential to accelerate solutions to problems that need to explore many variables at the same time, including drug discovery, weather prediction and encryption for cybersecurity. Conventional computer bits encode either a 1 or 0, but quantum bits, or qubits, can encode both at the same time. This essentially enables quantum computers to work through multiple scenarios simultaneously, rather than exploring them one after the other. However, these mixed states don’t last long, so the information processing must be faster than electronic circuits can muster.
    While laser pulses can be used to manipulate the energy states of qubits, different ways of computing are possible if charge carriers used to encode quantum information could be moved around — including a room-temperature approach. Terahertz light, which sits between infrared and microwave radiation, oscillates fast enough to provide the speed, but the shape of the wave is also a problem. Namely, electromagnetic waves are obliged to produce oscillations that are both positive and negative, which sum to zero.
    The positive cycle may move charge carriers, such as electrons. But then the negative cycle pulls the charges back to where they started. To reliably control the quantum information, an asymmetric light wave is needed.
    “The optimum would be a completely directional, unipolar ‘wave,’ so there would be only the central peak, no oscillations. That would be the dream. But the reality is that light fields that propagate have to oscillate, so we try to make the oscillations as small as we can,” said Mackillo Kira, U-M professor of electrical engineering and computer science and leader of the theory aspects of the study in Light: Science & Applications.
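    The zero-sum constraint the article describes can be checked numerically: for a propagating few-cycle pulse, the time-integral of the field (its "area") vanishes even though the central peak is large. A minimal sketch with an assumed Gaussian-envelope cosine pulse:

```python
# Numerical check of the zero-area property: the positive and negative
# half-cycles of a propagating pulse's field cancel when integrated in time.
import math

def field(t, carrier=10.0, width=1.0):
    """A symmetric few-cycle pulse: Gaussian envelope times a cosine carrier
    (an assumed model waveform, not the Regensburg pulse shape)."""
    return math.exp(-(t / width) ** 2) * math.cos(carrier * t)

def area(f, t_min=-20.0, t_max=20.0, n=40001):
    """Trapezoidal time-integral of the field over a window where it decays."""
    dt = (t_max - t_min) / (n - 1)
    samples = [f(t_min + i * dt) for i in range(n)]
    return dt * (sum(samples) - 0.5 * (samples[0] + samples[-1]))

peak = field(0.0)        # large central peak of the oscillating pulse
print(f"peak field: {peak:.2f}, pulse area: {area(field):.2e}")
```

    A truly unipolar pulse would keep the peak while having a nonzero area, which is exactly what a freely propagating wave cannot do; hence the effort to make the residual oscillations as small as possible.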