More stories

  • Cheap, potent pathway to pandemic therapeutics

    By capitalizing on a convergence of chemical, biological and artificial intelligence advances, University of Pittsburgh School of Medicine scientists have developed an unusually fast and efficient method for discovering tiny antibody fragments with big potential for development into therapeutics against deadly diseases.
    The technique, published today in the journal Cell Systems, is the same process the Pitt team used to extract tiny SARS-CoV-2 antibody fragments from llamas, which could become an inhalable COVID-19 treatment for humans. This approach has the potential to quickly identify multiple potent nanobodies that target different parts of a pathogen — thwarting variants.
    “Most of the vaccines and treatments against SARS-CoV-2 target the spike protein, but if that part of the virus mutates, which we know it is, those vaccines and treatments may be less effective,” said senior author Yi Shi, Ph.D., assistant professor of cell biology at Pitt. “Our approach is an efficient way to develop therapeutic cocktails consisting of multiple nanobodies that can launch a multipronged attack to neutralize the pathogen.”
    Shi and his team specialize in finding nanobodies — small, highly specific fragments of antibodies produced by llamas and other camelids. Nanobodies are particularly attractive for development into therapeutics because they are easy to produce and bioengineer. In addition, they feature high stability and solubility, and can be aerosolized and inhaled, rather than administered through intravenous infusion, like traditional antibodies.
    When a llama is immunized with a piece of a pathogen, the animal’s immune system produces a plethora of mature nanobodies in about two months. Then it’s a matter of teasing out which nanobodies are best at neutralizing the pathogen — and most promising for development into therapies for humans.
    That’s where Shi’s “high-throughput proteomics strategy” comes into play.

    “Using this new technique, in a matter of days we’re typically able to identify tens of thousands of distinct, highly potent nanobodies from the immunized llama serum and survey them for certain characteristics, such as where they bind to the pathogen,” Shi said. “Prior to this approach, it has been extremely challenging to identify high-affinity nanobodies.”
    After drawing a llama blood sample rich in mature nanobodies, the researchers isolate those nanobodies that bind specifically to the target of interest on the pathogen. The nanobodies are then broken down to release small “fingerprint” peptides that are unique to each nanobody. These fingerprint peptides are run through a mass spectrometer, which measures their mass. From the mass, the scientists can deduce the amino acid sequence — the protein building blocks that determine the nanobody’s structure. Then, from the amino acids, the researchers can work backward to DNA — the directions for building more nanobodies.
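    To make those two bookkeeping steps concrete, here is a toy Python sketch: it computes a fingerprint peptide’s monoisotopic mass (the quantity the mass spectrometer reports) from standard residue masses, and reverse-translates an amino acid sequence into one possible DNA sequence. The peptide and the codon choices are hypothetical, and because the genetic code is degenerate this recovers a coding sequence rather than the llama’s own gene.
    ```python
    # Toy sketch: from a fingerprint peptide to its mass, and from amino
    # acids back to one possible DNA sequence. Residue masses are standard
    # monoisotopic values (daltons), abbreviated to the residues used below.
    MONO = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "T": 101.04768,
            "F": 147.06841, "Y": 163.06333}
    WATER = 18.01056  # terminal H and OH, added once per peptide

    # One arbitrary codon per amino acid. The genetic code is degenerate,
    # so this recovers *a* coding DNA sequence, not *the* llama gene.
    CODON = {"G": "GGT", "A": "GCT", "S": "TCT", "T": "ACT",
             "F": "TTT", "Y": "TAT"}

    def peptide_mass(seq):
        """Monoisotopic peptide mass, as a mass spectrometer would report it."""
        return sum(MONO[aa] for aa in seq) + WATER

    def reverse_translate(seq):
        """Work backward from amino acids to one possible DNA sequence."""
        return "".join(CODON[aa] for aa in seq)

    peptide = "GFTFSSYA"  # hypothetical fingerprint peptide
    print(f"{peptide}: {peptide_mass(peptide):.4f} Da")
    print("one possible coding DNA:", reverse_translate(peptide))
    ```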
    Simultaneously, the amino acid sequence is uploaded to a computer outfitted with artificial intelligence software. By rapidly sifting through mountains of data, the program “learns” which nanobodies bind the tightest to the pathogen and where on the pathogen they bind. In the case of most of the currently available COVID-19 therapeutics, this is the spike protein, but recently it has become clear that some sites on the spike are prone to mutations that change its shape and allow for antibody “escape.” Shi’s approach can select for binding sites on the spike that are evolutionarily stable, and therefore less likely to allow new variants to slip past.
    Finally, the directions for building the most potent and diverse nanobodies can be fed into vats of bacterial cells, which act as mini factories, churning out orders of magnitude more nanobodies than the human cells required to produce traditional antibodies. Bacterial cells double in 10 minutes, effectively doubling the nanobodies with them, whereas human cells take 24 hours to do the same.
    “This drastically reduces the cost of producing these therapeutics,” said Shi.
    Shi and his team believe their technology could be beneficial for more than just developing therapeutics against COVID-19 — or even the next pandemic.
    “The possible uses of highly potent and specific nanobodies that can be identified quickly and inexpensively are tremendous,” said Shi. “We’re exploring their use in treating cancer and neurodegenerative diseases. Our technique could even be used in personalized medicine, developing specific treatments for mutated superbugs for which every other antibiotic has failed.”
    Additional researchers on this publication are Yufei Xiang and Jianquan Xu, Ph.D., both of Pitt; Zhe Sang of Pitt and Carnegie Mellon University; and Lirane Bitton and Dina Schneidman-Duhovny, Ph.D., both of the Hebrew University of Jerusalem.
    This research was supported by the UPMC Aging Institute, National Institutes of Health grant 1R35GM137905-01, Israel Science Foundation grant 1466/18, the Ministry of Science and Technology of Israel and the Hebrew University of Jerusalem Center for Interdisciplinary Data Science Research.

  • A machine-learning approach to finding treatment options for COVID-19

    When the Covid-19 pandemic struck in early 2020, doctors and researchers rushed to find effective treatments. There was little time to spare. “Making new drugs takes forever,” says Caroline Uhler, a computational biologist in MIT’s Department of Electrical Engineering and Computer Science and the Institute for Data, Systems and Society, and an associate member of the Broad Institute of MIT and Harvard. “Really, the only expedient option is to repurpose existing drugs.”
    Uhler’s team has now developed a machine learning-based approach to identify drugs already on the market that could potentially be repurposed to fight Covid-19, particularly in the elderly. The system accounts for changes in gene expression in lung cells caused by both the disease and aging. That combination could allow medical experts to more quickly seek drugs for clinical testing in elderly patients, who tend to experience more severe symptoms. The researchers pinpointed the protein RIPK1 as a promising target for Covid-19 drugs, and they identified three approved drugs that act on the expression of RIPK1.
    The research appears today in the journal Nature Communications. Co-authors include MIT PhD students Anastasiya Belyaeva, Adityanarayanan Radhakrishnan, Chandler Squires, and Karren Dai Yang, as well as PhD student Louis Cammarata of Harvard University and long-term collaborator G.V. Shivashankar of ETH Zurich in Switzerland.
    Early in the pandemic, it became clear that Covid-19 harmed older patients more than younger ones, on average. Uhler’s team wondered why. “The prevalent hypothesis is the aging immune system,” she says. But Uhler and Shivashankar suggested an additional factor: “One of the main changes in the lung that happens through aging is that it becomes stiffer.”
    The stiffening lung tissue shows different patterns of gene expression than lung tissue in younger people, even in response to the same signal. “Earlier work by the Shivashankar lab showed that if you stimulate cells on a stiffer substrate with a cytokine, similar to what the virus does, they actually turn on different genes,” says Uhler. “So, that motivated this hypothesis. We need to look at aging together with SARS-CoV-2 — what are the genes at the intersection of these two pathways?” To select approved drugs that might act on these pathways, the team turned to big data and artificial intelligence.
    The researchers zeroed in on the most promising drug repurposing candidates in three broad steps. First, they generated a large list of possible drugs using a machine-learning technique called an autoencoder. Next, they mapped the network of genes and proteins involved in both aging and SARS-CoV-2 infection. Finally, they used statistical algorithms to understand causality in that network, allowing them to pinpoint “upstream” genes that caused cascading effects throughout the network. In principle, drugs targeting those upstream genes and proteins should be promising candidates for clinical trials.
    To generate an initial list of potential drugs, the team’s autoencoder relied on two key datasets of gene expression patterns. One dataset showed how expression in various cell types responded to a range of drugs already on the market, and the other showed how expression responded to infection with SARS-CoV-2. The autoencoder scoured the datasets to highlight drugs whose impacts on gene expression appeared to counteract the effects of SARS-CoV-2. “This application of autoencoders was challenging and required foundational insights into the working of these neural networks, which we developed in a paper recently published in PNAS,” notes Radhakrishnan.
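    The core scoring idea can be sketched in a few lines: embed expression signatures in a latent space, then rank drugs by how strongly their latent shift opposes the infection’s shift. Everything below (the random linear stand-in for a trained encoder and the synthetic signatures) is illustrative only, not the paper’s actual model.
    ```python
    # Sketch of the drug-ranking idea: encode expression signatures, then
    # score each drug by how strongly its latent-space shift opposes the
    # infection's shift (negative cosine similarity). All data is synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    n_genes, n_latent = 1000, 32

    # Stand-in "encoder": a fixed random linear map. In the study, the
    # encoder of a trained autoencoder would play this role.
    W = rng.normal(size=(n_genes, n_latent)) / np.sqrt(n_genes)

    def encode(expr):
        return expr @ W

    infection = encode(rng.normal(size=n_genes))  # infected minus healthy
    drugs = {f"drug_{i}": encode(rng.normal(size=n_genes)) for i in range(50)}

    def counteraction(drug_z, disease_z):
        """High when the drug shifts expression opposite to the disease."""
        return -drug_z @ disease_z / (
            np.linalg.norm(drug_z) * np.linalg.norm(disease_z))

    ranked = sorted(drugs, key=lambda d: counteraction(drugs[d], infection),
                    reverse=True)
    print("top repurposing candidates:", ranked[:5])
    ```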
    Next, the researchers narrowed the list of potential drugs by homing in on key genetic pathways. They mapped the interactions of proteins involved in the aging and SARS-CoV-2 infection pathways. Then they identified areas of overlap between the two maps. That effort pinpointed the precise gene expression network that a drug would need to target to combat Covid-19 in elderly patients.
    “At this point, we had an undirected network,” says Belyaeva, meaning the researchers had yet to identify which genes and proteins were “upstream” (i.e. they have cascading effects on the expression of other genes) and which were “downstream” (i.e. their expression is altered by prior changes in the network). An ideal drug candidate would target the genes at the upstream end of the network to minimize the impacts of infection.
    “We want to identify a drug that has an effect on all of these differentially expressed genes downstream,” says Belyaeva. So the team used algorithms that infer causality in interacting systems to turn their undirected network into a causal network. The final causal network identified RIPK1 as a target gene/protein for potential Covid-19 drugs, since it has numerous downstream effects. The researchers identified a list of approved drugs that act on RIPK1 and may have potential to treat Covid-19. These drugs were previously approved for use in cancer. Other drugs that were also identified, including ribavirin and quinapril, are already in clinical trials for Covid-19.
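    As a toy illustration of why a node like RIPK1 surfaces from such an analysis, the sketch below builds a small hypothetical directed network and ranks each gene by how many downstream genes it influences; the edges, and RIPK1’s upstream placement, are invented for illustration only.
    ```python
    # Toy causal network: once edges are oriented, rank genes by how many
    # downstream genes they influence. Edges here are hypothetical.
    import networkx as nx

    G = nx.DiGraph([
        ("RIPK1", "TNF"), ("RIPK1", "NFKB1"), ("NFKB1", "IL6"),
        ("TNF", "IL6"), ("IL6", "STAT3"), ("EGFR", "STAT3"),
    ])

    # A drug acting on an "upstream" gene propagates to all its descendants.
    for gene in sorted(G.nodes):
        downstream = nx.descendants(G, gene)
        print(f"{gene}: {len(downstream)} downstream genes {sorted(downstream)}")
    ```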
    Uhler plans to share the team’s findings with pharmaceutical companies. She emphasizes that before any of the drugs they identified can be approved for repurposed use in elderly Covid-19 patients, clinical testing is needed to determine efficacy. While this particular study focused on Covid-19, the researchers say their framework is extendable. “I’m really excited that this platform can be more generally applied to other infections or diseases,” says Belyaeva. Radhakrishnan emphasizes the importance of gathering information on how various diseases impact gene expression. “The more data we have in this space, the better this could work,” he says.
    This research was supported, in part, by the Office of Naval Research, the National Science Foundation, the Simons Foundation, IBM, and the MIT Jameel Clinic for Machine Learning and Health.

  • Scientists manipulate magnets at the atomic scale

    Fast and energy-efficient future data processing technologies are on the horizon after an international team of scientists successfully manipulated magnets at the atomic level.
    Physicist Dr Rostislav Mikhaylovskiy from Lancaster University said: “With stalling efficiency trends of current technology, new scientific approaches are especially valuable. Our discovery of the atomically-driven ultrafast control of magnetism opens broad avenues for fast and energy-efficient future data processing technologies essential to keep up with our data hunger.”
    Magnetic materials are heavily used in modern life, with applications ranging from fridge magnets to the data centers Google and Amazon use to store digital information.
    These materials host trillions of mutually aligned elementary magnetic moments, or “spins,” whose alignment is largely governed by the arrangement of the atoms in the crystal lattice.
    The spin can be seen as an elementary “needle of a compass,” typically depicted as an arrow pointing from the North to the South pole. In magnets, all spins are aligned along the same direction by a force called the exchange interaction, one of the strongest quantum effects, which is responsible for the very existence of magnetic materials.
    The ever-growing demand for efficient magnetic data processing calls for novel means of manipulating the magnetic state, and manipulating the exchange interaction would be the most efficient and ultimately the fastest way to control magnetism.

    To achieve this result, the researchers used the fastest and the strongest stimulus available: ultrashort laser pulse excitation. They used light to optically stimulate specific atomic vibrations of the magnet’s crystal lattice, extensively disturbing and distorting the structure of the material.
    The results of this study are published in the journal Nature Materials by the international team from Lancaster, Delft, Nijmegen, Liege and Kiev.
    PhD student Jorrit Hortensius from the Technical University of Delft said: “We optically shake the lattice of a magnet that is made up of alternating up and down small magnetic moments and therefore does not have a net magnetization, unlike the familiar fridge magnets.”
    After shaking the crystal for a very short period of time, the researchers measured how the magnetic properties evolve directly in time. Following the shaking, the magnetic system of the antiferromagnet changes, such that a net magnetization appears: for a fraction of time the material becomes similar to the everyday fridge magnets.
    This all occurs within an unprecedentedly short time of less than a few picoseconds (a picosecond is a millionth of a millionth of a second). This time is not only orders of magnitude shorter than the recording time in modern computer hard drives, but also exactly matches the fundamental limit for magnetization switching.
    Dr Rostislav Mikhaylovskiy from Lancaster University explains: “It has long been thought that the control of magnetism by atomic vibrations is restricted to acoustic excitations (sound waves) and cannot be faster than nanoseconds. We have reduced the magnetic switching time by a factor of 1,000, which is a major milestone in itself.”
    Dr Dmytro Afanasiev from the Technical University of Delft adds: “We believe that our findings will stimulate further research into exploring and understanding the exact mechanisms governing the ultrafast lattice control of the magnetic state.”

    Story Source:
    Materials provided by Lancaster University.

  • Algorithm that performs as accurately as dermatologists

    A study has now been presented that boosts the evidence for using AI solutions in skin cancer diagnostics. With an algorithm they devised themselves, scientists at the University of Gothenburg show that the technology can perform at the same level as dermatologists in assessing the severity of skin melanoma.
    The study, published in the Journal of the American Academy of Dermatology, is the work of a research group at the Department of Dermatology and Venereology at Sahlgrenska Academy, University of Gothenburg.
    The study was conducted at Sahlgrenska University Hospital in Gothenburg. Its purpose was to use machine learning (ML) to train an algorithm to determine whether a skin melanoma is invasive, carrying a risk of spreading (metastasizing), or whether it remains at a growth stage in which it is confined to the epidermis, with no risk of metastasis.
    The algorithm was trained and validated on 937 dermatoscopic images of melanoma, and subsequently tested on 200 cases. All the cases included were diagnosed by a dermatopathologist.
    The majority of melanomas are found by patients rather than doctors. This suggests that, in most cases, diagnosis is relatively easy. Before surgery, however, it is often much more difficult to determine the stage the melanoma has reached.
    To make the classifications more accurate, dermatologists use dermatoscopes — instruments that combine a type of magnifying glass with bright illumination. In recent years, interest in using ML for skin tumor classifications has increased, and several publications have shown that ML algorithms can perform on par with, or even better than, experienced dermatologists.
    The current study is now giving a further boost to research in this field. When the same classification task was performed by the algorithm on the one hand and seven independent dermatologists on the other, the result was a draw.
    “None of the dermatologists significantly outperformed the ML algorithm,” states Sam Polesie, a researcher at the University of Gothenburg and specialist doctor at Sahlgrenska University Hospital, who is the corresponding author of the study.
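    An evaluation of this kind typically scores the model by the area under the ROC curve on the held-out cases, with each dermatologist contributing a sensitivity/specificity point for comparison. The sketch below shows the shape of such an analysis on synthetic stand-in data, not the study’s actual 200 cases or reader assessments.
    ```python
    # Shape of the model-versus-readers comparison, on synthetic data: the
    # model gets an ROC AUC; each reader a sensitivity/specificity point.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    y_true = rng.integers(0, 2, size=200)          # 1 = invasive melanoma
    model_prob = np.clip(0.6 * y_true + rng.normal(0.2, 0.25, 200), 0, 1)
    print("model AUC:", round(roc_auc_score(y_true, model_prob), 3))

    for reader in range(7):                        # seven dermatologists
        calls = (model_prob + rng.normal(0, 0.3, 200)) > 0.5
        sens = (calls & (y_true == 1)).sum() / (y_true == 1).sum()
        spec = (~calls & (y_true == 0)).sum() / (y_true == 0).sum()
        print(f"reader {reader + 1}: sensitivity {sens:.2f}, specificity {spec:.2f}")
    ```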
    In a developed form, the algorithm could serve as support in the task of assessing the severity of skin melanoma before surgery. The classification affects how extensive an operation needs to be, and is therefore important for both the patient and the surgeon.
    “The results of the study are interesting, and the hope is that the algorithm can be used as clinical decision support in the future. But it needs refining further, and prospective studies that monitor patients over time are necessary, too,” Polesie concludes.

    Story Source:
    Materials provided by University of Gothenburg.

  • Detecting single molecules and diagnosing diseases with a smartphone

    Biomarkers play a central role in the diagnosis of disease and assessment of its course. Among the markers now in use are genes, proteins, hormones, lipids and other classes of molecules. Biomarkers can be found in the blood, in cerebrospinal fluid, urine and various types of tissues, but most of them have one thing in common: They occur in extremely low concentrations, and are therefore technically challenging to detect and quantify.
    Many detection procedures use molecular probes, such as antibodies or short nucleic-acid sequences, which are designed to bind to specific biomarkers. When a probe recognizes and binds to its target, chemical or physical reactions give rise to fluorescence signals. Such methods work well, provided they are sensitive enough to recognize the relevant biomarker in a high percentage of all patients who carry it in their blood. In addition, before such fluorescence-based tests can be used in practice, the biomarkers themselves or their signals must be amplified. The ultimate goal is to enable medical screening to be carried out directly on patients, without having to send the samples to a distant laboratory for analysis.
    Molecular antennas amplify fluorescence signals
    Philip Tinnefeld, who holds a Chair in Physical Chemistry at LMU, has developed a strategy for determining levels of biomarkers present in low concentrations. He has succeeded in coupling DNA probes to tiny particles of gold or silver. Pairs of particles (‘dimers’) act as nano-antennas that amplify the fluorescence signals. The trick works as follows: Interactions between the nanoparticles and incoming light waves intensify the local electromagnetic fields, and this in turn leads to a massive increase in the amplitude of the fluorescence. In this way, bacteria that contain antibiotic resistance genes and even viruses can be specifically detected.
    “DNA-based nano-antennas have been studied for the last few years,” says Kateryna Trofymchuk, joint first author of the study. “But the fabrication of these nanostructures presents challenges.” Philip Tinnefeld’s research group has now succeeded in configuring the components of their nano-antennas more precisely, and in positioning the DNA molecules that serve as capture probes at the site of signal amplification. Together, these modifications enable the fluorescence signal to be more effectively amplified. Furthermore, in the minuscule volume involved, which is on the order of zeptoliters (a zeptoliter equals 10⁻²¹ of a liter), even more molecules can be captured.
    The high degree of positioning control is made possible by DNA nanotechnology, which exploits the structural properties of DNA to guide the assembly of all sorts of nanoscale objects — in extremely large numbers. “In one sample, we can simultaneously produce billions of these nano-antennas, using a procedure that basically consists of pipetting a few solutions together,” says Trofymchuk.
    Routine diagnostics on the smartphone
    “In the future,” says Viktorija Glembockyte, also joint first author of the publication, “our technology could be utilized for diagnostic tests even in areas in which access to electricity or laboratory equipment is restricted. We have shown that we can directly detect small fragments of DNA in blood serum, using a portable, smartphone-based microscope that runs on a conventional USB power pack to monitor the assay.” Newer smartphones are usually equipped with pretty good cameras. Apart from that, all that’s needed is a laser and a lens — two readily available and cheap components. The LMU researchers used this basic recipe to construct their prototypes.
    They went on to demonstrate that DNA fragments that are specific for antibiotic resistance genes in bacteria could be detected by this set-up. But the assay could be easily modified to detect a whole range of interesting target types, such as viruses. Tinnefeld is optimistic: “The past year has shown that there is always a need for new and innovative diagnostic methods, and perhaps our technology can one day contribute to the development of an inexpensive and reliable diagnostic test that can be carried out at home.”
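    The readout itself, picking out fluorescent spots that rise above the camera’s background, can be sketched as a simple thresholding exercise. The frame, spot brightness and five-sigma cutoff below are arbitrary illustrative choices, not the LMU group’s actual analysis pipeline.
    ```python
    # Toy readout: find bright fluorescent spots in a camera frame by
    # thresholding above the background. The frame is synthetic.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(2)
    frame = rng.normal(100, 5, size=(256, 256))    # background counts
    for y, x in [(40, 50), (120, 200), (200, 80)]: # three "molecules"
        frame[y - 2:y + 3, x - 2:x + 3] += 60      # bright 5x5 spots

    threshold = frame.mean() + 5 * frame.std()     # five-sigma cutoff
    labels, n_spots = ndimage.label(frame > threshold)
    print(f"detected {n_spots} candidate single-molecule spots")
    ```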

    Story Source:
    Materials provided by Ludwig-Maximilians-Universität München.

  • New machine learning theory raises questions about nature of science

    A novel computer algorithm, or set of rules, that accurately predicts the orbits of planets in the solar system could be adapted to better predict and control the behavior of the plasma that fuels fusion facilities, which are designed to harvest on Earth the fusion energy that powers the sun and stars.
    The algorithm, devised by a scientist at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL), applies machine learning, the form of artificial intelligence (AI) that learns from experience, to develop the predictions. “Usually in physics, you make observations, create a theory based on those observations, and then use that theory to predict new observations,” said PPPL physicist Hong Qin, author of a paper detailing the concept in Scientific Reports. “What I’m doing is replacing this process with a type of black box that can produce accurate predictions without using a traditional theory or law.”
    Qin (pronounced Chin) created a computer program into which he fed data from past observations of the orbits of Mercury, Venus, Earth, Mars, Jupiter, and the dwarf planet Ceres. This program, along with an additional program known as a “serving algorithm,” then made accurate predictions of the orbits of other planets in the solar system without using Newton’s laws of motion and gravitation. “Essentially, I bypassed all the fundamental ingredients of physics. I go directly from data to data,” Qin said. “There is no law of physics in the middle.”
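    A miniature version of the “data to data” idea: given only observed positions along an orbit, fit a one-step propagator by least squares and roll it forward, with no force law anywhere in the code. The circular orbit and the linear propagator below are deliberate simplifications; Qin’s method handles far more general dynamics.
    ```python
    # "Data to data" in miniature: learn a one-step propagator from observed
    # orbital positions and roll it forward, never invoking Newton's laws.
    # A circular orbit is used because a linear map propagates it exactly.
    import numpy as np

    t = np.arange(200) * 0.05
    orbit = np.stack([np.cos(t), np.sin(t)], axis=1)   # observed (x, y)

    X, Y = orbit[:-1], orbit[1:]                       # state_t -> state_t+1
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)          # learned propagator

    state, predicted = orbit[-1], []
    for _ in range(100):                               # predict 100 new steps
        state = state @ A
        predicted.append(state)
    print("predicted next positions:", np.round(predicted[:3], 4))
    ```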
    The program does not happen upon accurate predictions by accident. “Hong taught the program the underlying principle used by nature to determine the dynamics of any physical system,” said Joshua Burby, a physicist at the DOE’s Los Alamos National Laboratory who earned his Ph.D. at Princeton under Qin’s mentorship. “The payoff is that the network learns the laws of planetary motion after witnessing very few training examples. In other words, his code really ‘learns’ the laws of physics.”
    Machine learning is what makes computer programs like Google Translate possible. Google Translate sifts through a vast amount of information to determine how frequently one word in one language has been translated into a word in the other language. In this way, the program can make an accurate translation without actually learning either language.
    The process also appears in philosophical thought experiments like John Searle’s Chinese Room. In that scenario, a person who did not know Chinese could nevertheless “translate” a Chinese sentence into English or any other language by using a set of instructions, or rules, that would substitute for understanding. The thought experiment raises questions about what, at root, it means to understand anything at all, and whether understanding implies that something else is happening in the mind besides following rules.

    Qin was inspired in part by Oxford philosopher Nick Bostrom’s conjecture that the universe is a computer simulation. If that were true, then fundamental physical laws should reveal that the universe consists of individual chunks of space-time, like pixels in a video game. “If we live in a simulation, our world has to be discrete,” Qin said. The black box technique Qin devised does not require that physicists believe the simulation conjecture literally, though it builds on this idea to create a program that makes accurate physical predictions.
    The resulting pixelated view of the world, akin to what is portrayed in the movie The Matrix, is known as a discrete field theory, which views the universe as composed of individual bits and differs from the theories that people normally create. While scientists typically devise overarching concepts of how the physical world behaves, computers just assemble a collection of data points.
    Qin and Eric Palmerduca, a graduate student in the Princeton University Program in Plasma Physics, are now developing ways to use discrete field theories to predict the behavior of particles of plasma in fusion experiments conducted by scientists around the world. The most widely used fusion facilities are doughnut-shaped tokamaks that confine the plasma in powerful magnetic fields.
    Fusion, the power that drives the sun and stars, combines light elements in the form of plasma — the hot, charged state of matter composed of free electrons and atomic nuclei that represents 99% of the visible universe — to generate massive amounts of energy. Scientists are seeking to replicate fusion on Earth for a virtually inexhaustible supply of power to generate electricity.
    “In a magnetic fusion device, the dynamics of plasmas are complex and multi-scale, and the effective governing laws or computational models for a particular physical process that we are interested in are not always clear,” Qin said. “In these scenarios, we can apply the machine learning technique that I developed to create a discrete field theory and then apply this discrete field theory to understand and predict new experimental observations.”
    This process opens up questions about the nature of science itself. Don’t scientists want to develop physics theories that explain the world, instead of simply amassing data? Aren’t theories fundamental to physics and necessary to explain and understand phenomena?

    “I would argue that the ultimate goal of any scientist is prediction,” Qin said. “You might not necessarily need a law. For example, if I can perfectly predict a planetary orbit, I don’t need to know Newton’s laws of gravitation and motion. You could argue that by doing so you would understand less than if you knew Newton’s laws. In a sense, that is correct. But from a practical point of view, making accurate predictions is not doing anything less.”
    Machine learning could also open up possibilities for more research. “It significantly broadens the scope of problems that you can tackle because all you need to get going is data,” Palmerduca said.
    The technique could also lead to the development of a traditional physical theory. “While in some sense this method precludes the need of such a theory, it can also be viewed as a path toward one,” Palmerduca said. “When you’re trying to deduce a theory, you’d like to have as much data at your disposal as possible. If you’re given some data, you can use machine learning to fill in gaps in that data or otherwise expand the data set.”
    Support for this research came from the DOE Office of Science (Fusion Energy Sciences).

  • Applying quantum computing to a particle process

    A team of researchers at Lawrence Berkeley National Laboratory (Berkeley Lab) used a quantum computer to successfully simulate an aspect of particle collisions that is typically neglected in high-energy physics experiments, such as those that occur at CERN’s Large Hadron Collider.
    The quantum algorithm they developed accounts for the complexity of parton showers, the complicated cascades of particle production and decay that are set off by the collisions.
    Classical algorithms typically used to model parton showers, such as the popular Markov Chain Monte Carlo algorithms, overlook several quantum-based effects, the researchers note in a study published online Feb. 10 in the journal Physical Review Letters that details their quantum algorithm.
    “We’ve essentially shown that you can put a parton shower on a quantum computer with efficient resources,” said Christian Bauer, who is Theory Group leader and serves as principal investigator for quantum computing efforts in Berkeley Lab’s Physics Division, “and we’ve shown there are certain quantum effects that are difficult to describe on a classical computer that you could describe on a quantum computer.” Bauer led the recent study.
    Their approach meshes quantum and classical computing: It uses the quantum solution only for the part of the particle collisions that cannot be addressed with classical computing, and uses classical computing to address all of the other aspects of the particle collisions.
    Researchers constructed a so-called “toy model,” a simplified theory that can be run on an actual quantum computer while still containing enough complexity to prevent it from being simulated using classical methods.

    “What a quantum algorithm does is compute all possible outcomes at the same time, then picks one,” Bauer said. “As the data gets more and more precise, our theoretical predictions need to get more and more precise. And at some point these quantum effects become big enough that they actually matter,” and need to be accounted for.
    In constructing their quantum algorithm, researchers factored in the different particle processes and outcomes that can occur in a parton shower, accounting for particle state, particle emission history, whether emissions occurred, and the number of particles produced in the shower, including separate counts for bosons and for two types of fermions.
    The quantum computer “computed these histories at the same time, and summed up all of the possible histories at each intermediate stage,” Bauer noted.
    The research team used the IBM Q Johannesburg chip, a quantum computer with 20 qubits. Each qubit, or quantum bit, is capable of representing a zero, one, and a state of so-called superposition in which it represents both a zero and a one simultaneously. This superposition is what makes qubits uniquely powerful compared to standard computing bits, which can represent a zero or one.
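    Superposition can be made concrete with a few lines of linear algebra: a qubit state is a two-component vector of amplitudes, a Hadamard gate rotates |0> into an equal superposition, and measurement statistics follow from the squared amplitudes. This is textbook bookkeeping, not the team’s parton-shower algorithm.
    ```python
    # A qubit state as a 2-vector of amplitudes: the Hadamard gate turns |0>
    # into an equal superposition, and measurement probabilities are the
    # squared amplitude magnitudes.
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    zero = np.array([1.0, 0.0])                    # the state |0>

    state = H @ zero                               # equal superposition
    probs = np.abs(state) ** 2                     # [0.5, 0.5]
    samples = np.random.default_rng(3).choice([0, 1], size=1000, p=probs)
    print("P(0), P(1) =", probs, "| ones measured out of 1000:", samples.sum())
    ```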
    Researchers constructed a four-step quantum computer circuit using five qubits, and the algorithm requires 48 operations. They noted that noise in the quantum computer is the likely cause of differences between its results and those of a quantum simulator.

    While the team’s pioneering efforts to apply quantum computing to a simplified portion of particle collider data are promising, Bauer said that he doesn’t expect quantum computers to have a large impact on the high-energy physics field for several years — at least until the hardware improves.
    Quantum computers will need more qubits and much lower noise to have a real breakthrough, Bauer said. “A lot depends on how quickly the machines get better.” But he noted that there is a huge and growing effort to make that happen, and it’s important to start thinking about these quantum algorithms now to be ready for the coming advances in hardware.
    Such quantum leaps in technology are a prime focus of an Energy Department-supported collaborative quantum R&D center that Berkeley Lab is a part of, called the Quantum Systems Accelerator.
    As hardware improves, it will be possible to account for more types of bosons and fermions in the quantum algorithm, which will improve its accuracy.
    Such algorithms should eventually have broad impact in the high-energy physics field, he said, and could also find application in heavy-ion-collider experiments.
    Also participating in the study were Benjamin Nachman and Davide Provasoli of the Berkeley Lab Physics Division, and Wibe de Jong of the Berkeley Lab Computational Research Division.
    This work was supported by the U.S. Department of Energy Office of Science. It used resources at the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science user facility.

  • Spontaneous quantum error correction demonstrated

    To build a universal quantum computer from fragile quantum components, effective implementation of quantum error correction (QEC) is an essential requirement and a central challenge. Quantum computing, which has the potential to solve scientific problems beyond the scope of supercomputers, relies on QEC to protect quantum information from errors caused by various sources of noise.
    Published by the journal Nature, research co-authored by University of Massachusetts Amherst physicist Chen Wang, graduate students Jeffrey Gertler and Shruti Shirol, and postdoctoral researcher Juliang Li takes a step toward building a fault-tolerant quantum computer. They have realized a novel type of QEC where the quantum errors are spontaneously corrected.
    Today’s computers are built with transistors representing classical bits (0’s or 1’s). Quantum computing is an exciting new paradigm of computation using quantum bits (qubits) where quantum superposition can be exploited for exponential gains in processing power. Fault-tolerant quantum computing may immensely advance new materials discovery, artificial intelligence, biochemical engineering and many other disciplines.
    Since qubits are intrinsically fragile, the most outstanding challenge of building such powerful quantum computers is efficient implementation of quantum error correction. Existing demonstrations of QEC are active, meaning that they require periodically checking for errors and immediately fixing them, which is very demanding in hardware resources and hence hinders the scaling of quantum computers.
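    The “active” scheme described here can be caricatured classically: encode one bit as three, then repeatedly check and repair by majority vote. Real QEC measures error syndromes without reading out the encoded quantum state, but the sketch conveys the resource-hungry, periodic check-and-correct cycle being contrasted.
    ```python
    # Classical caricature of *active* error correction: redundant encoding
    # plus a periodic check-and-fix cycle (majority vote).
    import random

    def encode(bit):
        return [bit] * 3                   # 0 -> [0,0,0], 1 -> [1,1,1]

    def noise(word, p=0.05):
        return [b ^ (random.random() < p) for b in word]  # random bit flips

    def correct(word):
        majority = int(sum(word) >= 2)     # the active correction step
        return [majority] * 3

    word = encode(1)
    for _ in range(10):                    # ten noisy cycles, each corrected
        word = correct(noise(word))
    print("decoded bit after 10 cycles:", word[0])
    ```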
    In contrast, the researchers’ experiment achieves passive QEC by tailoring the friction (or dissipation) experienced by the qubit. Because friction is commonly considered the nemesis of quantum coherence, this result may appear quite surprising. The trick is that the dissipation has to be designed specifically in a quantum manner. This general strategy has been known in theory for about two decades, but a practical way to obtain such dissipation and put it in use for QEC has been a challenge.
    “Although our experiment is still a rather rudimentary demonstration, we have finally fulfilled this counterintuitive theoretical possibility of dissipative QEC,” says Chen. “Looking forward, the implication is that there may be more avenues to protect our qubits from errors and do so less expensively. Therefore, this experiment raises the outlook of potentially building a useful fault-tolerant quantum computer in the mid to long run.”
    Chen describes in layman’s terms how strange the quantum world can be. “As in German physicist Erwin Schrödinger’s famous (or infamous) example, a cat packed in a closed box can be dead or alive at the same time. Each logical qubit in our quantum processor is very much like a mini-Schrödinger’s cat. In fact, we quite literally call it a ‘cat qubit.’ Having lots of such cats can help us solve some of the world’s most difficult problems.
    “Unfortunately, it is very difficult to keep a cat staying that way, since any gas, light, or anything else leaking into the box will destroy the magic: The cat will become either dead or just a regular live cat,” explains Chen. “The most straightforward strategy to protect a Schrödinger’s cat is to make the box as tight as possible, but that also makes it harder to use it for computation. What we just demonstrated was akin to painting the inside of the box in a special way, and that somehow helps the cat better survive the inevitable harm of the outside world.”

    Story Source:
    Materials provided by University of Massachusetts Amherst.