More stories

  • Designed antiviral proteins inhibit SARS-CoV-2 in the lab

    Computer-designed small proteins have now been shown to protect lab-grown human cells from SARS-CoV-2, the coronavirus that causes COVID-19.
    The findings are reported today, Sept. 9, in Science.
    In the experiments, the lead antiviral candidate, named LCB1, rivaled the best-known SARS-CoV-2 neutralizing antibodies in its protective actions. LCB1 is currently being evaluated in rodents.
    Coronaviruses are studded with so-called Spike proteins, which latch onto human cells to enable the virus to break in and infect them. Drugs that interfere with this entry mechanism could lead to the treatment, or even the prevention, of infection.
    Institute for Protein Design researchers at the University of Washington School of Medicine used computers to originate new proteins that bind tightly to SARS-CoV-2 Spike protein and obstruct it from infecting cells.
    Beginning in January, more than two million candidate Spike-binding proteins were designed on the computer. Over 118,000 were then produced and tested in the lab.

    “Although extensive clinical testing is still needed, we believe the best of these computer-generated antivirals are quite promising,” said lead author Longxing Cao, a postdoctoral scholar at the Institute for Protein Design.
    “They appear to block SARS-CoV-2 infection at least as well as monoclonal antibodies, but are much easier to produce and far more stable, potentially eliminating the need for refrigeration,” he added.
    The researchers created antiviral proteins through two approaches. First, a segment of the ACE2 receptor, which SARS-CoV-2 naturally binds to on the surface of human cells, was incorporated into a series of small protein scaffolds.
    Second, completely synthetic proteins were designed from scratch. The latter method produced the most potent antivirals, including LCB1, which is roughly six times more potent on a per mass basis than the most effective monoclonal antibodies reported thus far.
    Scientists from the University of Washington School of Medicine in Seattle and Washington University School of Medicine in St. Louis collaborated on this work.
    “Our success in designing high-affinity antiviral proteins from scratch is further proof that computational protein design can be used to create promising drug candidates,” said senior author and Howard Hughes Medical Institute Investigator David Baker, professor of biochemistry at the UW School of Medicine and head of the Institute for Protein Design. In 2019, Baker gave a TED talk on how protein design might be used to stop viruses.
    To confirm that the new antiviral proteins attached to the coronavirus Spike protein as intended, the team collected snapshots of the two molecules interacting by using cryo-electron microscopy. These experiments were performed by researchers in the laboratories of David Veesler, assistant professor of biochemistry at the UW School of Medicine, and Michael S. Diamond, the Herbert S. Gasser Professor in the Division of Infectious Diseases at Washington University School of Medicine in St. Louis.
    “The hyperstable minibinders provide promising starting points for new SARS-CoV-2 therapeutics,” the antiviral research team wrote in their study pre-print, “and illustrate the power of computational protein design for rapidly generating potential therapeutic candidates against pandemic threats.”

    Story Source:
    Materials provided by University of Washington Health Sciences/UW Medicine. Original written by Ian Haydon, Institute for Protein Design. Note: Content may be edited for style and length.

  • Seeing objects through clouds and fog

    Like a comic book come to life, researchers at Stanford University have developed a kind of X-ray vision — only without the X-rays. Working with hardware similar to what enables autonomous cars to “see” the world around them, the researchers enhanced their system with a highly efficient algorithm that can reconstruct three-dimensional hidden scenes based on the movement of individual particles of light, or photons. In tests, detailed in a paper published Sept. 9 in Nature Communications, their system successfully reconstructed shapes obscured by 1-inch-thick foam. To the human eye, it’s like seeing through walls.
    “A lot of imaging techniques make images look a little bit better, a little bit less noisy, but this is really something where we make the invisible visible,” said Gordon Wetzstein, assistant professor of electrical engineering at Stanford and senior author of the paper. “This is really pushing the frontier of what may be possible with any kind of sensing system. It’s like superhuman vision.”
    This technique complements other vision systems that can see through barriers on the microscopic scale — for applications in medicine — because it’s more focused on large-scale situations, such as navigating self-driving cars in fog or heavy rain and satellite imaging of the surface of Earth and other planets through hazy atmospheres.
    Supersight from scattered light
    In order to see through environments that scatter light every which way, the system pairs a laser with a super-sensitive photon detector that records every bit of laser light that hits it. As the laser scans an obstruction like a wall of foam, an occasional photon will manage to pass through the foam, hit the objects hidden behind it and pass back through the foam to reach the detector. The algorithm-supported software then uses those few photons — and information about where and when they hit the detector — to reconstruct the hidden objects in 3D.
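    To make the idea concrete, here is a minimal sketch (in Python, with illustrative numbers) of how a depth could be estimated from the arrival times of a handful of photons at a single scan point. It is a simplified time-of-flight calculation, not the confocal reconstruction algorithm described in the paper.

    ```python
    import numpy as np

    C = 3.0e8  # speed of light in m/s

    def depth_from_photon_arrivals(arrival_times_s, bin_width_s=50e-12):
        # Histogram the few recorded photon arrival times, take the dominant
        # peak as the round-trip time, and convert to a one-way distance.
        # This is a simplified time-of-flight estimate for one scan point,
        # not the paper's reconstruction through scattering media.
        if len(arrival_times_s) == 0:
            return float("nan")  # no photons made it back from this point
        bins = np.arange(0.0, max(arrival_times_s) + bin_width_s, bin_width_s)
        counts, edges = np.histogram(arrival_times_s, bins=bins)
        peak = int(np.argmax(counts))
        round_trip_s = 0.5 * (edges[peak] + edges[peak + 1])
        return C * round_trip_s / 2.0  # metres

    # Example: most photons cluster near a 6.7 ns round trip (about 1 m away),
    # with one stray scattered photon arriving late.
    times = [6.68e-9, 6.71e-9, 6.69e-9, 6.70e-9, 9.2e-9]
    print(f"estimated depth: {depth_from_photon_arrivals(times):.2f} m")
    ```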
    This is not the first system with the ability to reveal hidden objects through scattering environments, but it circumvents limitations associated with other techniques. For example, some require knowledge about how far away the object of interest is. It is also common that these systems only use information from ballistic photons, which are photons that travel to and from the hidden object through the scattering field but without actually scattering along the way.

    “We were interested in being able to image through scattering media without these assumptions and to collect all the photons that have been scattered to reconstruct the image,” said David Lindell, a graduate student in electrical engineering and lead author of the paper. “This makes our system especially useful for large-scale applications, where there would be very few ballistic photons.”
    In order to make their algorithm amenable to the complexities of scattering, the researchers had to closely co-design their hardware and software, although the hardware components they used are only slightly more advanced than what is currently found in autonomous cars. Depending on the brightness of the hidden objects, scanning in their tests took anywhere from one minute to one hour, but the algorithm reconstructed the obscured scene in real time and could be run on a laptop.
    “You couldn’t see through the foam with your own eyes, and even just looking at the photon measurements from the detector, you really don’t see anything,” said Lindell. “But, with just a handful of photons, the reconstruction algorithm can expose these objects — and you can see not only what they look like, but where they are in 3D space.”
    Space and fog
    Someday, a descendant of this system could be sent through space to other planets and moons to help see through icy clouds to deeper layers and surfaces. In the nearer term, the researchers would like to experiment with different scattering environments to simulate other circumstances where this technology could be useful.
    “We’re excited to push this further with other types of scattering geometries,” said Lindell. “So, not just objects hidden behind a thick slab of material but objects that are embedded in densely scattering material, which would be like seeing an object that’s surrounded by fog.”
    Lindell and Wetzstein are also enthusiastic about how this work represents a deeply interdisciplinary intersection of science and engineering.
    “These sensing systems are devices with lasers, detectors and advanced algorithms, which puts them in an interdisciplinary research area between hardware and physics and applied math,” said Wetzstein. “All of those are critical, core fields in this work and that’s what’s the most exciting for me.”

    Story Source:
    Materials provided by Stanford University. Original written by Taylor Kubota. Note: Content may be edited for style and length.

  • As collegiate esports become more professional, women are being left out

    A new study from North Carolina State University reports that the rapidly growing field of collegiate esports is effectively becoming a two-tiered system, with club-level programs that are often supportive of gender diversity being clearly distinct from well-funded varsity programs that are dominated by men.
    “Five years ago, we thought collegiate esports might be an opportunity to create a welcoming, diverse competitive arena, which was a big deal given how male-dominated the professional esports scene was,” says Nick Taylor, co-author of the study and an associate professor of communication at NC State. “Rapid growth of collegiate esports over the past five years has led to it becoming more professional, with many universities having paid esports positions, recruiting players, and so on. We wanted to see how that professionalization has affected collegiate esports and what that means for gender diversity. The findings did not give us reason to be optimistic.”
    For this qualitative study, the researchers conducted in-depth interviews with 21 collegiate esports leaders from the U.S. and Canada. Eight of the study participants were involved in varsity-level esports, such as coaches or administrators, while the remaining 13 participants were presidents of collegiate esports clubs. Six of the participants identified as women; 15 identified as men.
    “Essentially, we found that women are effectively pushed out of esports at many colleges when they start investing financial resources in esports programs,” says Bryce Stout, co-author of the study and a Ph.D. student at NC State. “We thought collegiate esports might help to address the disenfranchisement of women in esports and in gaming more generally; instead, it seems to simply be an extension of that disenfranchisement.”
    “Higher education has been spending increasing amounts of time, money and effort on professionalizing esports programs,” Taylor says. “With some key exceptions, these institutions are clearly not putting as much effort into encouraging diversity in these programs. That effectively cuts out women and minorities.
    “Some leaders stress that they will welcome any player onto their team, as long as the player has a certain skill level,” Taylor says. “But this ignores the systemic problems that effectively drive most women out of gaming — such as harassment. There needs to be a focus on cultivating skill and developing players, rather than focusing exclusively on recruitment.”

    Story Source:
    Materials provided by North Carolina State University. Note: Content may be edited for style and length.

  • Transistor-integrated cooling for a more powerful chip

    Managing the heat generated in electronics is a huge problem, especially with the constant push to shrink devices and pack as many transistors as possible into the same chip. The challenge is how to manage such high heat fluxes efficiently. Usually, the electronics (designed by electrical engineers) and the cooling systems (designed by mechanical engineers) are developed independently of each other. But now EPFL researchers have quietly revolutionized the process by combining these two design steps into one: they have developed a microfluidic cooling technology, integrated into the chip together with the electronics, that can efficiently manage the large heat fluxes generated by transistors. Their research, which has been published in Nature, will lead to even more compact electronic devices and enable the integration of power converters, with several high-voltage devices, into a single chip.
    The best of both worlds
    In this ERC-funded project, Professor Elison Matioli, his doctoral student Remco Van Erp, and their team from the School of Engineering’s Power and Wide-band-gap Electronics Research Laboratory (POWERLAB) set out to change the mentality behind designing electronic devices: conceiving the electronics and the cooling together, right from the beginning, with the aim of extracting heat very near the regions of the device that heat up the most. “We wanted to combine skills in electrical and mechanical engineering in order to create a new kind of device,” says Van Erp.
    The team was looking to solve the issue of how to cool electronic devices, and especially transistors. “Managing the heat produced by these devices is one of the biggest challenges in electronics going forward,” says Elison Matioli. “It’s becoming increasingly important to minimize the environmental impact, so we need innovative cooling technologies that can efficiently process the large amounts of heat produced in a sustainable and cost-effective way.”
    Microfluidic channels and hot spots
    Their technology is based on integrating microfluidic channels inside the semiconductor chip, together with the electronics, so a cooling liquid flows inside an electronic chip. “We placed microfluidic channels very close to the transistor’s hot spots, with a straightforward and integrated fabrication process, so that we could extract the heat in exactly the right place and prevent it from spreading throughout the device,” says Matioli. The cooling liquid they used was deionized water, which doesn’t conduct electricity. “We chose this liquid for our experiments, but we’re already testing other, more effective liquids so that we can extract even more heat out of the transistor,” says Van Erp.
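    As a rough back-of-the-envelope illustration of why a liquid circulating inside the chip can carry away so much heat, the sketch below estimates the heat removed by an assumed water flow. The flow rate, temperature rise, and chip area are illustrative assumptions, not figures from the EPFL study.

    ```python
    # Back-of-the-envelope estimate of heat carried away by water flowing
    # through on-chip microchannels. All input numbers are illustrative
    # assumptions, not values reported in the study.

    RHO_WATER = 1000.0   # density of water, kg/m^3
    CP_WATER = 4186.0    # specific heat of water, J/(kg*K)

    flow_rate_ml_per_s = 1.0   # assumed total coolant flow through the channels
    delta_T_K = 40.0           # assumed rise in coolant temperature across the chip
    chip_area_cm2 = 1.0        # assumed chip footprint

    m_dot = RHO_WATER * flow_rate_ml_per_s * 1e-6      # mass flow in kg/s
    heat_removed_W = m_dot * CP_WATER * delta_T_K      # Q = m_dot * c_p * dT
    heat_flux = heat_removed_W / chip_area_cm2

    print(f"heat removed: {heat_removed_W:.0f} W "
          f"({heat_flux:.0f} W/cm^2 over the assumed footprint)")
    ```

    Even these modest assumed numbers yield on the order of a hundred watts removed per square centimetre, which is why placing the coolant directly at the hot spots is so effective.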
    Reducing energy consumption
    “This cooling technology will enable us to make electronic devices even more compact and could considerably reduce energy consumption around the world,” says Matioli. “We’ve eliminated the need for large external heat sinks and shown that it’s possible to create ultra-compact power converters in a single chip. This will prove useful as society becomes increasingly reliant on electronics.” The researchers are now looking at how to manage heat in other devices, such as lasers and communications systems.

    Story Source:
    Materials provided by Ecole Polytechnique Fédérale de Lausanne. Original written by Valérie Geneux. Note: Content may be edited for style and length.

  • AI used to show how hydrogen becomes a metal inside giant planets

    Dense metallic hydrogen — a phase of hydrogen which behaves like an electrical conductor — makes up the interior of giant planets, but it is difficult to study and poorly understood. By combining artificial intelligence and quantum mechanics, researchers have found how hydrogen becomes a metal under the extreme pressure conditions of these planets.
    The researchers, from the University of Cambridge, IBM Research and EPFL, used machine learning to mimic the interactions between hydrogen atoms in order to overcome the size and timescale limitations of even the most powerful supercomputers. They found that instead of happening as a sudden, or first-order, transition, the hydrogen changes in a smooth and gradual way. The results are reported in the journal Nature.
    Hydrogen, consisting of one proton and one electron, is both the simplest and the most abundant element in the Universe. It is the dominant component of the interior of the giant planets in our solar system — Jupiter, Saturn, Uranus, and Neptune — as well as exoplanets orbiting other stars.
    At the surfaces of giant planets, hydrogen remains a molecular gas. Deeper in the interiors of these planets, however, the pressure exceeds millions of standard atmospheres. Under this extreme compression, hydrogen undergoes a phase transition: the covalent bonds inside hydrogen molecules break, and the gas becomes a metal that conducts electricity.
    “The existence of metallic hydrogen was theorised a century ago, but what we haven’t known is how this process occurs, due to the difficulties in recreating the extreme pressure conditions of the interior of a giant planet in a laboratory setting, and the enormous complexities of predicting the behaviour of large hydrogen systems,” said lead author Dr Bingqing Cheng from Cambridge’s Cavendish Laboratory.
    Experimentalists have attempted to investigate dense hydrogen using a diamond anvil cell, in which two diamonds apply high pressure to a confined sample. Although diamond is the hardest substance on Earth, the device will fail under extreme pressure and high temperatures, especially when in contact with hydrogen, contrary to the claim that a diamond is forever. This makes the experiments both difficult and expensive.
    Theoretical studies are also challenging: although the motion of hydrogen atoms can be solved using equations based on quantum mechanics, the computational power needed to calculate the behaviour of systems with more than a few thousand atoms for longer than a few nanoseconds exceeds the capability of the world’s largest and fastest supercomputers.
    It is commonly assumed that the transition of dense hydrogen is first-order, which is accompanied by abrupt changes in all physical properties. A common example of a first-order phase transition is boiling liquid water: once the liquid becomes a vapour, its appearance and behaviour completely change despite the fact that the temperature and the pressure remain the same.
    In the current theoretical study, Cheng and her colleagues used machine learning to mimic the interactions between hydrogen atoms, in order to overcome limitations of direct quantum mechanical calculations.
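    The general idea of such a machine-learned surrogate can be sketched in miniature: fit a cheap regression model to energies from an expensive reference calculation, then evaluate the cheap model at scale. The one-dimensional pair potential and kernel regression below are stand-ins chosen purely for illustration; the actual study used far more sophisticated atomic descriptors, models and reference data.

    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    # Toy stand-in for an expensive quantum-mechanical calculation:
    # a Morse-type pair energy as a function of interatomic distance r.
    def reference_energy(r, D=4.5, a=1.9, r0=0.74):
        return D * (1.0 - np.exp(-a * (r - r0))) ** 2 - D

    # "Expensive" reference data: a few hundred sampled distances.
    rng = np.random.default_rng(0)
    r_train = rng.uniform(0.4, 3.0, size=300)
    E_train = reference_energy(r_train)

    # Cheap surrogate fitted to the reference data; once trained it can be
    # evaluated millions of times at negligible cost, which is the role a
    # machine-learned potential plays in large hydrogen simulations.
    model = KernelRidge(kernel="rbf", alpha=1e-6, gamma=10.0)
    model.fit(r_train.reshape(-1, 1), E_train)

    r_test = np.linspace(0.5, 2.5, 5)
    for r, e_fit, e_ref in zip(r_test, model.predict(r_test.reshape(-1, 1)),
                               reference_energy(r_test)):
        print(f"r = {r:.2f}  surrogate = {e_fit:+.3f}  reference = {e_ref:+.3f}")
    ```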
    “We reached a surprising conclusion and found evidence for a continuous molecular to atomic transition in the dense hydrogen fluid, instead of a first-order one,” said Cheng, who is also a Junior Research Fellow at Trinity College.
    The transition is smooth because the associated ‘critical point’ is hidden. Critical points are ubiquitous in all phase transitions between fluids: all substances that can exist in two phases have critical points. A system with an exposed critical point, such as the one for vapour and liquid water, has clearly distinct phases. However, the dense hydrogen fluid, with the hidden critical point, can transform gradually and continuously between the molecular and the atomic phases. Furthermore, this hidden critical point also induces other unusual phenomena, including density and heat capacity maxima.
    The finding about the continuous transition provides a new way of interpreting the contradictory body of experiments on dense hydrogen. It also implies a smooth transition between insulating and metallic layers in giant gas planets. The study would not have been possible without combining machine learning, quantum mechanics, and statistical mechanics. Without any doubt, this approach will uncover more physical insights about hydrogen systems in the future. As the next step, the researchers aim to answer the many open questions concerning the solid phase diagram of dense hydrogen.

  • Artificial intelligence aids gene activation discovery

    Scientists have long known that human genes spring into action through instructions delivered by the precise order of our DNA, directed by the four different types of individual links, or “bases,” coded A, C, G and T.
    Nearly 25% of our genes are widely known to be transcribed by sequences that resemble TATAAA, which is called the “TATA box.” How the other three-quarters are turned on, or promoted, has remained a mystery due to the enormous number of DNA base sequence possibilities, which has kept the activation information shrouded.
    Now, with the help of artificial intelligence, researchers at the University of California San Diego have identified a DNA activation code that’s used at least as frequently as the TATA box in humans. Their discovery, which they termed the downstream core promoter region (DPR), could eventually be used to control gene activation in biotechnology and biomedical applications. The details are described September 9 in the journal Nature.
    “The identification of the DPR reveals a key step in the activation of about a quarter to a third of our genes,” said James T. Kadonaga, a distinguished professor in UC San Diego’s Division of Biological Sciences and the paper’s senior author. “The DPR has been an enigma — it’s been controversial whether or not it even exists in humans. Fortunately, we’ve been able to solve this puzzle by using machine learning.”
    In 1996, Kadonaga and his colleagues working in fruit flies identified a novel gene activation sequence, termed the DPE (which corresponds to a portion of the DPR), that enables genes to be turned on in the absence of the TATA box. Then, in 1997, they found a single DPE-like sequence in humans. However, since that time, deciphering the details and prevalence of the human DPE has been elusive. Most strikingly, there have been only two or three active DPE-like sequences found in the tens of thousands of human genes. To crack this case after more than 20 years, Kadonaga worked with lead author and post-doctoral scholar Long Vo ngoc, Cassidy Yunjing Huang, Jack Cassidy, a retired computer scientist who helped the team leverage the powerful tools of artificial intelligence, and Claudia Medrano.
    In what Kadonaga describes as “fairly serious computation” brought to bear on a biological problem, the researchers made a pool of 500,000 random versions of DNA sequences and evaluated the DPR activity of each. From there, 200,000 versions were used to create a machine learning model that could accurately predict DPR activity in human DNA.
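    The flavor of such a sequence-to-activity model can be sketched as follows: DNA sequences are one-hot encoded and a regressor is trained to predict a measured activity. The toy data, the encoding and the model below are illustrative assumptions, not the architecture or the half-million experimentally measured sequences used in the study.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    BASES = "ACGT"

    def one_hot(seq):
        # Encode a DNA string as a flat vector with four entries per base.
        vec = np.zeros((len(seq), 4))
        for i, base in enumerate(seq):
            vec[i, BASES.index(base)] = 1.0
        return vec.ravel()

    # Toy stand-in for a measured library: random 19-base sequences with a
    # synthetic "activity" signal (here simply rewarding G or C at positions 5-8).
    rng = np.random.default_rng(1)
    seqs = ["".join(rng.choice(list(BASES), size=19)) for _ in range(2000)]
    activity = np.array([sum(b in "GC" for b in s[5:9]) + rng.normal(scale=0.3)
                         for s in seqs])

    X = np.array([one_hot(s) for s in seqs])
    model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    model.fit(X[:1500], activity[:1500])            # train on part of the library
    print(f"held-out R^2 on toy data: {model.score(X[1500:], activity[1500:]):.2f}")
    ```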

    The results, as Kadonaga describes them, were “absurdly good.” So good, in fact, that they created a similar machine learning model as a new way to identify TATA box sequences. They evaluated the new models with thousands of test cases in which the TATA box and DPR results were already known and found that the predictive ability was “incredible,” according to Kadonaga.
    These results clearly revealed the existence of the DPR motif in human genes. Moreover, the frequency of occurrence of the DPR appears to be comparable to that of the TATA box. In addition, they observed an intriguing duality between the DPR and TATA. Genes that are activated with TATA box sequences lack DPR sequences, and vice versa.
    Kadonaga says finding the six bases in the TATA box sequence was straightforward. At 19 bases, cracking the code for DPR was much more challenging.
    “The DPR could not be found because it has no clearly apparent sequence pattern,” said Kadonaga. “There is hidden information that is encrypted in the DNA sequence that makes it an active DPR element. The machine learning model can decipher that code, but we humans cannot.”
    Going forward, the further use of artificial intelligence for analyzing DNA sequence patterns should increase researchers’ ability to understand as well as to control gene activation in human cells. This knowledge will likely be useful in biotechnology and in the biomedical sciences, said Kadonaga.
    “In the same manner that machine learning enabled us to identify the DPR, it is likely that related artificial intelligence approaches will be useful for studying other important DNA sequence motifs,” said Kadonaga. “A lot of things that are unexplained could now be explainable.”
    This study was supported by the National Institute of General Medical Sciences (NIGMS) at the National Institutes of Health.

  • How AI-controlled sensors could save lives in 'smart' hospitals and homes

    As many as 400,000 Americans die each year because of medical errors, but many of these deaths could be prevented by using electronic sensors and artificial intelligence to help medical professionals monitor and treat vulnerable patients in ways that improve outcomes while respecting privacy.
    “We have the ability to build technologies into the physical spaces where health care is delivered to help cut the rate of fatal errors that occur today due to the sheer volume of patients and the complexity of their care,” said Arnold Milstein, a professor of medicine and director of Stanford’s Clinical Excellence Research Center (CERC).
    Milstein, computer science professor Fei-Fei Li and graduate student Albert Haque are co-authors of a Nature paper that reviews the field of “ambient intelligence” in health care — an interdisciplinary effort to create smart hospital rooms equipped with AI systems that can do a range of things to improve outcomes. For example, sensors and AI can immediately alert clinicians and patient visitors when they fail to sanitize their hands before entering a hospital room. AI tools can also be built into smart homes, where the technology could unobtrusively monitor the frail elderly for behavioral clues of impending health crises and prompt in-home caregivers, remotely located clinicians and the patients themselves to make timely, life-saving interventions.
    Li, who is co-director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), said ambient technologies have many potential benefits, but they also raise legal and regulatory issues, as well as privacy concerns that must be identified and addressed in a public way to win the trust of patients and providers, as well as the various agencies and institutions that pay health care costs. “Technology to protect the health of medically fragile populations is inherently human-centered,” Li said. “Researchers must listen to all the stakeholders in order to create systems that supplement and complement the efforts of nurses, doctors and other caregivers, as well as patients themselves.”
    Li and Milstein co-direct the 8-year-old Stanford Partnership in AI-Assisted Care (PAC), one of a growing number of centers, including those at Johns Hopkins University and the University of Toronto, where technologists and clinicians have teamed up to develop ambient intelligence technologies to help health care providers manage patient volumes so huge — roughly 24 million Americans required an overnight hospital stay in 2018 — that even the tiniest margin of error can cost many lives.
    “We are in a foot race with the complexity of bedside care,” Milstein said. “By one recent count, clinicians in a hospital’s neonatal intensive care unit took 600 bedside actions, per patient, per day. Without technology assistance, perfect execution of this volume of complex actions is well beyond what is reasonable to expect of even the most conscientious clinical teams.”
    The Fix: Invisible light guided by AI?

    Haque, who compiled the 170 scientific papers cited in the Nature article, said the field is based largely on the convergence of two technological trends: the availability of infrared sensors that are inexpensive enough to build into high-risk care-giving environments, and the rise of machine learning systems as a way to use sensor input to train specialized AI applications in health care.
    The infrared technologies are of two types. The first is active infrared, such as the invisible light beams used by TV remote controls. But instead of simply beaming invisible light in one direction, like a TV remote, new active infrared systems use AI to compute how long it takes the invisible rays to bounce back to the source, like a light-based form of radar that maps the 3D outlines of a person or object.
    Such infrared depth sensors are already being used outside hospital rooms, for instance, to discern whether a person washed their hands before entering and, if not, issue an alert. In one Stanford experiment, a tablet computer hung near the door shows a solid green screen that transitions to red, or some other alert color that might be tested, should a hygiene failure occur. Researchers had considered using audible warnings until medical professionals advised otherwise. “Hospitals are already full of buzzes and beeps,” Milstein said. “Our human-centered design interviews with clinicians taught us that a visual cue would likely be more effective and less annoying.”
    These alert systems are being tested to see if they can reduce the number of ICU patients who get nosocomial infections — potentially deadly illnesses contracted by patients because others in the hospital failed to fully adhere to infection prevention protocols.
    The second type of infrared technology is the passive detector, of the sort that allows night vision goggles to create thermal images from the infrared rays generated by body heat. In a hospital setting, a thermal sensor above an ICU bed would enable the governing AI to detect twitching or writhing beneath the sheets and alert clinical team members to impending health crises, without anyone having to go constantly from room to room.
    So far, the researchers have avoided using high-definition video sensors, such as those in smartphones, as capturing video imagery could unnecessarily intrude on the privacy of clinicians and patients. “The silhouette images provided by infrared sensors may provide data that is sufficiently accurate to train AI algorithms for many clinically important applications,” Haque said.
    Constant monitoring by ambient intelligence systems in a home environment could also be used to detect clues of serious illness or potential accidents, and alert caregivers to make timely interventions. For instance, when frail seniors start moving more slowly or stop eating regularly, such behaviors can presage depression, a greater likelihood of a fall or the rapid onset of a dangerous health crisis. Researchers are developing activity recognition algorithms that can sift through infrared sensing data to detect changes in habitual behaviors, and help caregivers get a more holistic view of patient well-being.
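    One simple way such behavioral-drift detection could work, sketched below with made-up features and thresholds, is to summarize each day's sensor readings into a few numbers (say, walking speed and meals detected) and flag days that deviate strongly from that person's own rolling baseline. The features, thresholds, and z-score rule are assumptions for illustration, not the researchers' algorithms.

    ```python
    import numpy as np

    def flag_behavior_change(daily_features, baseline_days=14, z_threshold=2.5):
        # daily_features: array of shape (n_days, n_features), e.g. columns for
        # mean walking speed and number of meal events inferred from sensors.
        # Returns indices of days that deviate strongly from the rolling baseline.
        flagged = []
        for day in range(baseline_days, len(daily_features)):
            baseline = daily_features[day - baseline_days:day]
            mu, sigma = baseline.mean(axis=0), baseline.std(axis=0) + 1e-9
            z = np.abs((daily_features[day] - mu) / sigma)
            if np.any(z > z_threshold):
                flagged.append(day)
        return flagged

    # Toy month of data: [walking speed (m/s), meals per day]; the final days
    # show slower movement and skipped meals, the kind of drift that can
    # presage a fall, depression or other decline.
    rng = np.random.default_rng(2)
    normal = np.column_stack([rng.normal(0.9, 0.05, 27), rng.normal(3.0, 0.3, 27)])
    decline = np.array([[0.55, 1.0], [0.50, 1.0], [0.48, 0.0]])
    print(flag_behavior_change(np.vstack([normal, decline])))
    ```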
    Privacy is of particular concern in homes, assisted living settings and nursing homes, but “the preliminary results we’re getting from hospitals and daily living spaces confirm that ambient sensing technologies can provide the data we need to curb medical errors,” Milstein said. “Our Nature review tells the field that we’re on the right track.”

  • New method prevents quantum computers from crashing

    Quantum information is fragile, which is why quantum computers must be able to correct errors. But what if whole qubits are lost? Researchers are now presenting a method that allows quantum computers to keep going even if they lose some qubits along the way.
    Qubits — the carriers of quantum information — are prone to errors induced by undesired environmental interactions. These errors accumulate during a quantum computation and correcting them is thus a key requirement for a reliable use of quantum computers.
    It is by now well known that quantum computers can withstand a certain amount of computational errors, such as bit flip or phase flip errors. However, in addition to computational errors, qubits might get lost altogether. Depending on the type of quantum computer, this can be due to actual loss of particles, such as atoms or ions, or due to quantum particles transitioning for instance to unwanted energy states, so that they are no longer recognized as a qubit. When a qubit gets lost, the information in the remaining qubits becomes scrambled and unprotected, rendering this process a potentially fatal type of error.
    Detect and correct loss in real time
    A team of physicists led by Rainer Blatt from the Department of Experimental Physics at the University of Innsbruck, in collaboration with theoretical physicists from Germany and Italy, has now developed and implemented advanced techniques that allow their trapped-ion quantum computer to adapt in real-time to loss of qubits and to maintain protection of the fragile stored quantum information. “In our trapped-ion quantum computer, ions hosting the qubits can be trapped for very long times, even days,” says Innsbruck physicist Roman Stricker. “However, our ions are much more complex than a simplified description as a two-level qubit captures. This offers great potential and additional flexibility in controlling our quantum computer, but unfortunately it also provides a possibility for quantum information to leak out of the qubit space due to imperfect operations or radiative decay.” Using an approach developed by Markus Müller’s theoretical quantum technology group at RWTH Aachen University and Forschungszentrum Jülich, in collaboration with Davide Vodola from the University of Bologna, the Innsbruck team has demonstrated that such leakage can be detected and corrected in real-time. Müller emphasizes that “combining quantum error correction with correction of qubit loss and leakage is a necessary next step towards large-scale and robust quantum computing.”
    Widely applicable techniques
    The researchers had to develop two key techniques to protect their quantum computer from the loss of qubits. The first challenge was to detect the loss of a qubit in the first place: “Measuring the qubit directly was not an option as this would destroy the quantum information that is stored in it,” explains Philipp Schindler from the University of Innsbruck. “We managed to overcome this problem by developing a technique where we used an additional ion to probe whether the qubit in question was still there or not, without disturbing it,” explains Martin Ringbauer. The second challenge was to adapt the rest of the computation in real time in case the qubit was indeed lost. This adaptation is crucial to unscramble the quantum information after a loss and maintain protection of the remaining qubits. Thomas Monz, who led the Innsbruck team, emphasizes that “all the building blocks developed in this work are readily applicable to other quantum computer architectures and other leading quantum error correction protocols.”
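    Why a heralded loss is so much easier to handle than a silent one can be illustrated with a toy classical analogue: in a small repetition code, an error at a known, flagged position costs far less than one at an unknown position. The simulation below is only an analogy chosen for illustration, not the trapped-ion protocol or quantum error-correction code used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def trial(p_flip, heralded, n_copies=4):
        # One round of a toy 4-bit repetition code protecting the logical bit 0.
        # One physical copy is always "lost" here (it reads out as a random value).
        # If the loss is heralded, the decoder simply ignores that position;
        # if not, the random value is counted in the majority vote.
        bits = (rng.random(n_copies) < p_flip).astype(int)   # flip errors on 0
        lost = rng.integers(n_copies)                        # which copy is lost
        bits[lost] = rng.integers(2)                         # lost copy reads randomly
        votes = np.delete(bits, lost) if heralded else bits
        wrong = votes.sum()
        if 2 * wrong == len(votes):          # tie: guess at random
            return rng.integers(2) == 1
        return wrong > len(votes) - wrong    # True means a logical error

    p = 0.05
    for heralded in (True, False):
        errs = sum(trial(p, heralded) for _ in range(20000))
        label = "loss heralded  " if heralded else "loss unheralded"
        print(f"{label}: logical error rate ~ {errs / 20000:.3f}")
    ```

    In this toy setting the flagged loss roughly matches an ordinary erasure and the decoder recovers almost every time, while an unflagged loss behaves like a much noisier, unlocated error; the real experiment achieves the analogous benefit by using the ancilla ion to herald the loss before adapting the circuit.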
    The research was financed by the Austrian Science Fund FWF, the Austrian Research Promotion Agency FFG and the European Union, among others.

    Story Source:
    Materials provided by University of Innsbruck. Note: Content may be edited for style and length.