More stories

•

    Thermodynamics of computation: A quest to find the cost of running a Turing machine

    Turing machines were first proposed by British mathematician Alan Turing in 1936, and are a theoretical mathematical model of what it means for a system to “be a computer.”
    At a high level, these machines are similar to real-world modern computers because they have storage for digital data and programs (somewhat like a hard drive), a little central processing unit (CPU) to perform computations, and can read programs from their storage, run them, and produce outputs. Amazingly, Turing proposed his model before real-world electronic computers existed.
    In a paper published in the American Physical Society’s Physical Review Research, Santa Fe Institute researchers Artemy Kolchinsky and David Wolpert present their work exploring the thermodynamics of computation within the context of Turing machines.
    “Our hunch was that the physics of Turing machines would show a lot of rich and novel structure because they have special properties that simpler models of computation lack, such as universality,” says Kolchinsky.
    Turing machines are widely believed to be universal, in the sense that any computation done by any system can also be done by a Turing machine.
    The quest to find the cost of running a Turing machine began with Wolpert trying to use information theory — the quantification, storage, and communication of information — to formalize how complex a given operation of a computer is. While not restricting his attention to Turing machines per se, it was clear that any results he derived would have to apply to them as well.
    During the process, Wolpert stumbled onto the field of stochastic thermodynamics. “I realized, very grudgingly, that I had to throw out the work I had done trying to reformulate nonequilibrium statistical physics, and instead adopt stochastic thermodynamics,” he says. “Once I did that, I had the tools to address my original question by rephrasing it as: In terms of stochastic thermodynamics cost functions, what’s the cost of running a Turing machine? In other words, I reformulated my question as a thermodynamics of computation calculation.”
    Thermodynamics of computation is a subfield of physics that explores what the fundamental laws of physics say about the relationship between energy and computation. It has important implications for the absolute minimum amount of energy required to perform computations.
    Wolpert and Kolchinsky’s work shows that relationships exist between energy and computation that can be stated in terms of algorithmic information (which defines information as compression length), rather than “Shannon information” (which defines information as reduction of uncertainty about the state of the computer).
Put another way: The energy required by a computation depends on how much more compressible the output of the computation is than the input. “To stretch a Shakespeare analogy, imagine a Turing machine reads in the entire works of Shakespeare, and then outputs a single sonnet,” explains Kolchinsky. “The output has a much shorter compression than the input. Any physical process that carries out that computation would, relatively speaking, require a lot of energy.”
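    The quoted analogy can be made concrete with ordinary file compression. True algorithmic information (Kolmogorov complexity) is uncomputable, so the sketch below uses zlib's compressed length as a crude, computable stand-in, with made-up placeholder strings rather than actual texts.

```python
import zlib

def compressed_len(text: str) -> int:
    """Length in bytes of the zlib-compressed text: a rough, computable
    stand-in for algorithmic (Kolmogorov) information."""
    return len(zlib.compress(text.encode("utf-8"), 9))

# Hypothetical input and output of a computation: a long "input" versus a
# short "output" (think: complete works in, one sonnet out).
complete_works = "Shall I compare thee to a summer's day? " * 5000
one_sonnet = "Shall I compare thee to a summer's day? " * 14

drop = compressed_len(complete_works) - compressed_len(one_sonnet)
print(f"compressed-length drop from input to output: {drop} bytes")
# In this picture, the more compressible the output is relative to the
# input (the bigger the drop), the larger the minimal thermodynamic cost
# of any physical process performing that mapping.
```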
While important earlier work also proposed relationships between algorithmic information and energy, Wolpert and Kolchinsky derived these relationships using the formal tools of modern statistical physics. This allows them to analyze a broader range of scenarios, and to pin down the conditions under which their results hold more precisely than earlier researchers could.
    “Our results point to new kinds of relationships between energy and computation,” says Kolchinsky. “This broadens our understanding of the connection between contemporary physics and information, which is one of the most exciting research areas in physics.”

    Story Source:
Materials provided by Santa Fe Institute. Note: Content may be edited for style and length.

•

    U.S. political parties become extremist to get more votes

    New research shows that U.S. political parties are becoming increasingly polarized due to their quest for voters — not because voters themselves are becoming more extremist.
The research team, which includes Northwestern University researchers, found that extremism is a strategy that has worked over the years even though voters’ views have remained in the center. Voters are not looking for a perfect representative but for a “satisficing,” meaning “good enough,” candidate.
    “Our assumption is not that people aren’t trying to make the perfect choice, but in the presence of uncertainty, misinformation or a lack of information, voters move toward satisficing,” said Northwestern’s Daniel Abrams, a senior author of the study.
    The study is now available online and will be published in SIAM Review’s printed edition on Sept. 1.
    Abrams is an associate professor of engineering sciences and applied mathematics in Northwestern’s McCormick School of Engineering. Co-authors include Adilson Motter, the Morrison Professor of Physics and Astronomy in Northwestern’s Weinberg College of Arts and Sciences, and Vicky Chuqiao Yang, a postdoctoral fellow at the Santa Fe Institute and former student in Abrams’ laboratory.
    To accommodate voters’ “satisficing” behavior, the team developed a mathematical model using differential equations to understand how a rational political party would position itself to get the most votes. The tool is reactive, with the past influencing future behaviors of the parties.
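    The paper’s actual differential-equation model is not reproduced here, but a crude numerical sketch of the satisficing idea conveys the intuition: voters accept any candidate within some tolerance of their own position, and a party’s expected vote share can then be computed for different positionings. All numbers below are made up for illustration; this is not the authors’ model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy electorate: ideal points clustered near the center (moderate voters).
voters = rng.normal(loc=0.0, scale=1.0, size=100_000)
TOLERANCE = 1.5  # a voter deems any candidate within this distance "good enough"

def vote_shares(pos_a: float, pos_b: float) -> tuple[float, float]:
    """Satisficing rule: back a party only if it is good enough; if both
    qualify, split evenly between them; if neither does, abstain."""
    ok_a = np.abs(voters - pos_a) <= TOLERANCE
    ok_b = np.abs(voters - pos_b) <= TOLERANCE
    share_a = (ok_a & ~ok_b).mean() + 0.5 * (ok_a & ok_b).mean()
    share_b = (ok_b & ~ok_a).mean() + 0.5 * (ok_a & ok_b).mean()
    return share_a, share_b

print(vote_shares(0.0, 0.0))    # both parties at the center
print(vote_shares(-0.8, 0.8))   # both parties shifted toward their wings
```

    Under this toy rule, moving the parties apart can raise both vote shares, since each captures its own wing outright while centrist voters still find either party good enough, which echoes the minimizing-overlap interpretation discussed below.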

The team tested the model against 150 years of U.S. Congressional voting data and found its predictions consistent with the political parties’ historical trajectories: Congressional voting has shifted to the margins, but voters’ positions have not changed much.
    “The two major political parties have been getting more and more polarized since World War II, while historical data indicates the average American voter remains just as moderate on key issues and policies as they always have been,” Abrams said.
    The team found that polarization is instead tied to the ideological homogeneity within the constituencies of the two major parties. To differentiate themselves, the politicians of the parties move further away from the center.
The new model helps explain why. The moves to the extremes can be interpreted as attempts by the Democratic and Republican parties to minimize the overlap of their constituencies. Test runs of the model show how staying within party lines creates a winning strategy.
    “Right now, we have one party with a lot of support from minorities and women, and another party with a lot of support from white men,” Motter said.

    Why not have both parties appeal to everyone? “Because of the perception that if you gain support from one group, it comes at the expense of the other group,” he added. “The model shows that the increased polarization is not voters’ fault. It is a way to get votes. This study shows that we don’t need to assume that voters have a hidden agenda driving polarization in Congress. There is no mastermind behind the policy. It is an emergent phenomenon.”
    The researchers caution that many other factors — political contributions, gerrymandering and party primaries — also contribute to election outcomes, which future work can examine.
    The work challenges a model introduced in the late 1950s by economist Anthony Downs, which assumes everyone votes and makes well-informed, completely rational choices, picking the candidate closest to their opinions. The Downsian model predicts that political parties over time would move closer to the center.
    However, U.S. voters’ behaviors don’t necessarily follow those patterns, and the parties’ positions have become dramatically polarized.
“People aren’t perfectly rational, but they’re not totally irrational either,” Abrams said. “They’ll vote for the candidate that’s good enough — or not too bad — without making fine distinctions among those that meet their perhaps low bar for good enough. If we want to reduce political polarization between the parties, we need both parties to be more tolerant of the diversity within their own ranks.”

•

    New neural network differentiates Middle and Late Stone Age toolkits

Middle Stone Age (MSA) toolkits first appear some 300 thousand years ago, at the same time as the earliest fossils of Homo sapiens, and were still in use as recently as 30 thousand years ago. However, from 67 thousand years ago, changes in stone tool production indicate a marked shift in behaviour; the new toolkits that emerge are labelled Late Stone Age (LSA) and remained in use into the recent past. A growing body of evidence suggests that the transition from MSA to LSA was not a linear process, but occurred at different times in different places. Understanding this process is important for examining what drives cultural innovation and creativity, and what explains this critical behavioural change. Defining differences between the MSA and LSA is an important step towards this goal.
    “Eastern Africa is a key region to examine this major cultural change, not only because it hosts some of the youngest MSA sites and some of the oldest LSA sites, but also because the large number of well excavated and dated sites make it ideal for research using quantitative methods,” says Dr. Jimbob Blinkhorn, an archaeologist from the Pan African Evolution Research Group, Max Planck Institute for the Science of Human History and the Centre for Quaternary Research, Department of Geography, Royal Holloway. “This enabled us to pull together a substantial database of changing patterns of stone tool production and use, spanning 130 to 12 thousand years ago, to examine the MSA-LSA transition.”
    The study examines the presence or absence of 16 alternate tool types across 92 stone tool assemblages, but rather than focusing on them individually, emphasis is placed on the constellations of tool forms that frequently occur together.
    “We’ve employed an Artificial Neural Network (ANN) approach to train and test models that differentiate LSA assemblages from MSA assemblages, as well as examining chronological differences between older (130-71 thousand years ago) and younger (71-28 thousand years ago) MSA assemblages with a 94% success rate,” says Dr. Matt Grove, an archaeologist at the University of Liverpool.
    Artificial Neural Networks (ANNs) are computer models intended to mimic the salient features of information processing in the brain. Like the brain, their considerable processing power arises not from the complexity of any single unit but from the action of many simple units acting in parallel. Despite the widespread use of ANNs today, applications in archaeological research remain limited.
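    Neither the paper’s architecture nor its training setup is reproduced here; as a minimal sketch of the general approach (binary presence/absence features in, an MSA/LSA label out), a small feed-forward network could be set up as below, with random placeholder data standing in for the 92 recorded assemblages.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# Placeholder data: 92 assemblages x 16 binary tool-type indicators
# (presence/absence) with a made-up MSA=0 / LSA=1 label. The study uses
# recorded archaeological data, not random values, so accuracy here is
# meaningless; the point is only the shape of the pipeline.
X = rng.integers(0, 2, size=(92, 16))
y = rng.integers(0, 2, size=92)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# A small feed-forward network: one hidden layer acting on the 16 inputs.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```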
    “ANNs have sometimes been described as a ‘black box’ approach, as even when they are highly successful, it may not always be clear exactly why,” says Grove. “We employed a simulation approach that breaks open this black box to understand which inputs have a significant impact on the results. This enabled us to identify how patterns of stone tool assemblage composition vary between the MSA and LSA, and we hope this demonstrates how such methods can be used more widely in archaeological research in the future.”
    “The results of our study show that MSA and LSA assemblages can be differentiated based on the constellation of artefact types found within an assemblage alone,” Blinkhorn adds. “The combined occurrence of backed pieces, blade and bipolar technologies together with the combined absence of core tools, Levallois flake technology, point technology and scrapers robustly identifies LSA assemblages, with the opposite pattern identifying MSA assemblages. Significantly, this provides quantified support to qualitative differences noted by earlier researchers that key typological changes do occur with this cultural transition.”
The team plans to expand the use of these methods to dig deeper into different regional trajectories of cultural change in the African Stone Age. “The approach we’ve employed offers a powerful toolkit to examine the categories we use to describe the archaeological record and to help us examine and explain cultural change amongst our ancestors,” says Blinkhorn.

•

    Don't forget to clean robotic support pets, study says

Robotic support pets used to reduce depression in older adults and people with dementia acquire bacteria over time, but a simple cleaning procedure can keep them from spreading illness, according to a new study published August 26, 2020 in the open-access journal PLOS ONE by Hannah Bradwell of the University of Plymouth, UK, and colleagues.
    There is a wealth of research on the use of social robots, or companion robots, in care and long-term nursing homes. “Paro the robot seal” and other robotic animals have been linked to reductions in depression, agitation, loneliness, nursing staff stress, and medication use — especially relevant during this period of pandemic-related social isolation.
    In the new study, researchers measured the microbial load found on the surface of eight different robot animals (Paro, Miro, Pleo rb, Joy for All dog, Joy for All cat, Furby Connect, Perfect Petzzz dog, and Handmade Hedgehog) after interaction with four care home residents, and again after cleaning by a researcher or care home staff member. The animals ranged in material from fur to soft plastic to solid plastic. The cleaning process involved spraying with anti-bacterial product, brushing any fur, and vigorous cleaning with anti-bacterial wipes.
    Most of the devices gathered enough harmful microbes during 20 minutes of standard use to have a microbial load above the acceptable threshold of 2.5 CFU/cm2 (colony forming units per square centimetre). Only the Joy for All cat and the MiRo robot remained below this level when microbes were measured after a 48 hour incubation period; microbial loads on the other 6 robots ranged from 2.56 to 17.28 CFU/cm2. The post-cleaning microbial load, however, demonstrated that regardless of material type, previous microbial load, or who carried out the cleaning procedure, all robots could be brought to well below acceptable levels. 5 of the 8 robots had undetectable levels of microbes after cleaning and 48 hours of incubation, and the remaining 3 robots had only 0.04 to 0.08 CFU/cm2 after this protocol.
Hannah Bradwell, a researcher at the Centre for Health Technology, says: “Robot pets may be beneficial for older adults and people with dementia living in care homes, likely improving wellbeing and providing company. This benefit could be particularly relevant at present, in light of social isolation; however, our study has shown the strong requirement for considerations around infection control for these devices.”

    Story Source:
Materials provided by PLOS. Note: Content may be edited for style and length.

•

    Scientists take new spin on quantum research

    Army researchers discovered a way to further enhance quantum systems to provide Soldiers with more reliable and secure capabilities on the battlefield.
Specifically, this research informs how future quantum networks will be designed to deal with the effects of noise and decoherence, or the loss of information from a quantum system into its environment.
    As one of the U.S. Army’s priority research areas in its Modernization Strategy, quantum research will help transform the service into a multi-domain force by 2035 and deliver on its enduring responsibility as part of the joint force providing for the defense of the United States.
    “Quantum networking, and quantum information science as a whole, will potentially lead to unsurpassed capabilities in computation, communication and sensing,” said Dr. Brian Kirby, researcher at the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory. “Example applications of Army interest include secure secret sharing, distributed network sensing and efficient decision making.”
    This research effort considers how dispersion, a very common effect found in optical systems, impacts quantum states of three or more particles of light.
    Dispersion is an effect where a pulse of light spreads out in time as it is transmitted through a medium, such as a fiber optic. This effect can destroy time correlations in communication systems, which can result in reduced data rates or the introduction of errors.

    To understand this, Kirby said, consider the situation where two light pulses are created simultaneously and the goal is to send them to two different detectors so that they arrive at the same time. If each light pulse goes through a different dispersive media, such as two different fiber optic paths, then each pulse will be spread in time, ultimately making the arrival time of the pulses less correlated.
    “Amazingly, it was shown that the situation is different in quantum mechanics,” Kirby said. “In quantum mechanics, it is possible to describe the behavior of individual particles of light, called photons. Here, it was shown by research team member Professor James Franson from the University of Maryland, Baltimore County, that quantum mechanics allows for certain situations where the dispersion on each photon can actually cancel out so that the arrival times remain correlated.”
    The key to this is something called entanglement, a strong correlation between quantum systems, which is not possible in classical physics, Kirby said.
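    Schematically, the earlier two-photon result can be summarized as follows (a standard textbook statement, not the notation of the new paper): for energy-time-entangled pairs whose frequencies are anticorrelated, the broadening of the arrival-time correlation is governed, to leading order, by the sum of the two channels’ dispersion parameters, so equal and opposite dispersion leaves the correlation intact.

```latex
% Two-photon nonlocal dispersion cancellation, stated schematically.
% \beta_A, \beta_B: group-velocity-dispersion parameters of the two channels;
% \Delta(t_A - t_B): spread in the difference of the photons' arrival times.
\Delta(t_A - t_B) \;\propto\; \lvert \beta_A + \beta_B \rvert
\quad\Longrightarrow\quad
\beta_B = -\beta_A \ \text{ restores the original timing correlation.}
```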
In this new work, Nonlocal Dispersion Cancellation for Three or More Photons, published in the peer-reviewed Physical Review A, the researchers extend the analysis to systems of three or more entangled photons and identify the scenarios in which quantum systems outperform classical ones. This sets the work apart from similar research, which has focused primarily on the effects of noise on two-qubit entangled systems.
    “This informs how future quantum networks will be designed to deal with the effects of noise and decoherence, in this case, dispersion specifically,” Kirby said.

Additionally, based on the success of Franson’s initial work on two-photon systems, it was reasonable to assume that dispersion on one part of a quantum system could always be cancelled out with the proper application of dispersion on another part of the system.
    “Our work clarifies that perfect compensation is not, in general, possible when you move to entangled systems of three or more photons,” Kirby said. “Therefore, dispersion mitigation in future quantum networks may need to take place in each communication channel independently.”
    Further, Kirby said, this work is valuable for quantum communications because it allows for increased data rates.
    “Precise timing is required to correlate detection events at different nodes of a network,” Kirby said. “Conventionally the reduction in time correlations between quantum systems due to dispersion would necessitate the use of larger timing windows between transmissions to avoid confusing sequential signals.”
    Since Kirby and his colleagues’ new work describes how to limit the uncertainty in joint detection times of networks, it will allow subsequent transmissions in quicker succession.
The next step for this research is to determine if these results can be readily verified in an experimental setting.

•

    Cosmic rays may soon stymie quantum computing

    The practicality of quantum computing hangs on the integrity of the quantum bit, or qubit.
    Qubits, the logic elements of quantum computers, are coherent two-level systems that represent quantum information. Each qubit has the strange ability to be in a quantum superposition, carrying aspects of both states simultaneously, enabling a quantum version of parallel computation. Quantum computers, if they can be scaled to accommodate many qubits on one processor, could be dizzyingly faster, and able to handle far more complex problems, than today’s conventional computers.
    But that all depends on a qubit’s integrity, or how long it can operate before its superposition and the quantum information are lost — a process called decoherence, which ultimately limits the computer run-time. Superconducting qubits — a leading qubit modality today — have achieved exponential improvement in this key metric, from less than one nanosecond in 1999 to around 200 microseconds today for the best-performing devices.
But researchers at MIT, MIT Lincoln Laboratory, and Pacific Northwest National Laboratory (PNNL) have found that a qubit’s performance will soon hit a wall. In a paper published in Nature, the team reports that the low-level, otherwise harmless background radiation emitted by trace elements in concrete walls, together with incoming cosmic rays, is enough to cause decoherence in qubits. They found that this effect, if left unmitigated, will limit the coherence time of qubits to just a few milliseconds.
    Given the rate at which scientists have been improving qubits, they may hit this radiation-induced wall in just a few years. To overcome this barrier, scientists will have to find ways to shield qubits — and any practical quantum computers — from low-level radiation, perhaps by building the computers underground or designing qubits that are tolerant to radiation’s effects.
“These decoherence mechanisms are like an onion, and we’ve been peeling back the layers for the past 20 years, but there’s another layer that, left unabated, is going to limit us in a couple of years, which is environmental radiation,” says William Oliver, associate professor of electrical engineering and computer science and Lincoln Laboratory Fellow at MIT. “This is an exciting result, because it motivates us to think of other ways to design qubits to get around this problem.”
    The paper’s lead author is Antti Vepsäläinen, a postdoc in MIT’s Research Laboratory of Electronics.

    “It is fascinating how sensitive superconducting qubits are to the weak radiation. Understanding these effects in our devices can also be helpful in other applications such as superconducting sensors used in astronomy,” Vepsäläinen says.
    Co-authors at MIT include Amir Karamlou, Akshunna Dogra, Francisca Vasconcelos, Simon Gustavsson, and physics professor Joseph Formaggio, along with David Kim, Alexander Melville, Bethany Niedzielski, and Jonilyn Yoder at Lincoln Laboratory, and John Orrell, Ben Loer, and Brent VanDevender of PNNL.
    A cosmic effect
    Superconducting qubits are electrical circuits made from superconducting materials. They comprise multitudes of paired electrons, known as Cooper pairs, that flow through the circuit without resistance and work together to maintain the qubit’s tenuous superposition state. If the circuit is heated or otherwise disrupted, electron pairs can split up into “quasiparticles,” causing decoherence in the qubit that limits its operation.
    There are many sources of decoherence that could destabilize a qubit, such as fluctuating magnetic and electric fields, thermal energy, and even interference between qubits.

    Scientists have long suspected that very low levels of radiation may have a similar destabilizing effect in qubits.
“In the last five years, the quality of superconducting qubits has become much better, and now we’re within a factor of 10 of where the effects of radiation are going to matter,” adds Kim, a technical staff member at MIT Lincoln Laboratory.
    So Oliver and Formaggio teamed up to see how they might nail down the effect of low-level environmental radiation on qubits. As a neutrino physicist, Formaggio has expertise in designing experiments that shield against the smallest sources of radiation, to be able to see neutrinos and other hard-to-detect particles.
    “Calibration is key”
    The team, working with collaborators at Lincoln Laboratory and PNNL, first had to design an experiment to calibrate the impact of known levels of radiation on superconducting qubit performance. To do this, they needed a known radioactive source — one which became less radioactive slowly enough to assess the impact at essentially constant radiation levels, yet quickly enough to assess a range of radiation levels within a few weeks, down to the level of background radiation.
The group chose to irradiate a foil of high-purity copper. When exposed to a high flux of neutrons, copper produces copious amounts of copper-64, an unstable isotope with exactly the desired properties.
    “Copper just absorbs neutrons like a sponge,” says Formaggio, who worked with operators at MIT’s Nuclear Reactor Laboratory to irradiate two small disks of copper for several minutes. They then placed one of the disks next to the superconducting qubits in a dilution refrigerator in Oliver’s lab on campus. At temperatures about 200 times colder than outer space, they measured the impact of the copper’s radioactivity on qubits’ coherence while the radioactivity decreased — down toward environmental background levels.
    The radioactivity of the second disk was measured at room temperature as a gauge for the levels hitting the qubit. Through these measurements and related simulations, the team understood the relation between radiation levels and qubit performance, one that could be used to infer the effect of naturally occurring environmental radiation. Based on these measurements, the qubit coherence time would be limited to about 4 milliseconds.
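    As a back-of-the-envelope illustration (assuming independent decoherence channels whose rates simply add, a standard rule of thumb rather than a calculation from the paper), the roughly 4-millisecond radiation ceiling can be combined with the ~200-microsecond coherence times of today’s best devices mentioned above:

```python
# Independent decoherence channels: the rates (1/T) add.
# Figures are the approximate values quoted in the article, not new data.
T_other_us = 200.0       # ~200 us from material defects, stray fields, etc.
T_radiation_us = 4000.0  # ~4 ms ceiling attributed to environmental radiation

T_total_us = 1.0 / (1.0 / T_other_us + 1.0 / T_radiation_us)
print(f"combined coherence time: {T_total_us:.0f} microseconds")
# ~190 us: radiation barely matters today, but once the other channels are
# engineered away, the ~4 ms radiation ceiling becomes the binding limit.
```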
    “Not game over”
    The team then removed the radioactive source and proceeded to demonstrate that shielding the qubits from the environmental radiation improves the coherence time. To do this, the researchers built a 2-ton wall of lead bricks that could be raised and lowered on a scissor lift, to either shield or expose the refrigerator to surrounding radiation.
    “We built a little castle around this fridge,” Oliver says.
    Every 10 minutes, and over several weeks, students in Oliver’s lab alternated pushing a button to either lift or lower the wall, as a detector measured the qubits’ integrity, or “relaxation rate,” a measure of how the environmental radiation impacts the qubit, with and without the shield. By comparing the two results, they effectively extracted the impact attributed to environmental radiation, confirming the 4 millisecond prediction and demonstrating that shielding improved qubit performance.
    “Cosmic ray radiation is hard to get rid of,” Formaggio says. “It’s very penetrating, and goes right through everything like a jet stream. If you go underground, that gets less and less. It’s probably not necessary to build quantum computers deep underground, like neutrino experiments, but maybe deep basement facilities could probably get qubits operating at improved levels.”
    Going underground isn’t the only option, and Oliver has ideas for how to design quantum computing devices that still work in the face of background radiation.
    “If we want to build an industry, we’d likely prefer to mitigate the effects of radiation above ground,” Oliver says. “We can think about designing qubits in a way that makes them ‘rad-hard,’ and less sensitive to quasiparticles, or design traps for quasiparticles so that even if they’re constantly being generated by radiation, they can flow away from the qubit. So it’s definitely not game-over, it’s just the next layer of the onion we need to address.”
This research was funded, in part, by the U.S. Department of Energy Office of Nuclear Physics, the U.S. Army Research Office, the U.S. Department of Defense, and the U.S. National Science Foundation.

•

    Microscopic robots 'walk' thanks to laser tech

    A Cornell University-led collaboration has created the first microscopic robots that incorporate semiconductor components, allowing them to be controlled — and made to walk — with standard electronic signals.
These robots, roughly the size of a paramecium, provide a template for building even more complex versions that utilize silicon-based intelligence, can be mass produced, and may someday travel through human tissue and blood.
The collaboration is led by Itai Cohen, professor of physics; Paul McEuen, the John A. Newman Professor of Physical Science; and their former postdoctoral researcher Marc Miskin, who is now an assistant professor at the University of Pennsylvania.
    The walking robots are the latest iteration, and in many ways an evolution, of Cohen and McEuen’s previous nanoscale creations, from microscopic sensors to graphene-based origami machines.
    The new robots are about 5 microns thick (a micron is one-millionth of a meter), 40 microns wide and range from 40 to 70 microns in length. Each bot consists of a simple circuit made from silicon photovoltaics — which essentially functions as the torso and brain — and four electrochemical actuators that function as legs.
    The researchers control the robots by flashing laser pulses at different photovoltaics, each of which charges up a separate set of legs. By toggling the laser back and forth between the front and back photovoltaics, the robot walks.
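    In control terms, the scheme amounts to nothing more than alternating the laser target. The sketch below is purely schematic: aim_laser_at() and the pulse timing are hypothetical placeholders for illustration, not an interface from the Cornell study.

```python
import time

FRONT_PV, BACK_PV = "front", "back"
PULSE_S = 0.05  # made-up pulse duration, for illustration only

def aim_laser_at(photovoltaic: str) -> None:
    """Hypothetical stand-in: in the experiment, a steered laser pulse
    charges the chosen photovoltaic, which drives its set of legs."""
    print(f"pulse -> {photovoltaic} photovoltaic")

def walk(steps: int) -> None:
    # Toggling between the front and back photovoltaics alternately
    # actuates the front and back legs, producing the walking gait.
    for _ in range(steps):
        for pv in (FRONT_PV, BACK_PV):
            aim_laser_at(pv)
            time.sleep(PULSE_S)

walk(steps=3)
```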
    The robots are certainly high-tech, but they operate with low voltage (200 millivolts) and low power (10 nanowatts), and remain strong and robust for their size. Because they are made with standard lithographic processes, they can be fabricated in parallel: About 1 million bots fit on a 4-inch silicon wafer.
    The researchers are exploring ways to soup up the robots with more complicated electronics and onboard computation — improvements that could one day result in swarms of microscopic robots crawling through and restructuring materials, or suturing blood vessels, or being dispatched en masse to probe large swaths of the human brain.
    “Controlling a tiny robot is maybe as close as you can come to shrinking yourself down. I think machines like these are going to take us into all kinds of amazing worlds that are too small to see,” said Miskin, the study’s lead author.
    “This research breakthrough provides exciting scientific opportunity for investigating new questions relevant to the physics of active matter and may ultimately lead to futuristic robotic materials,” said Sam Stanton, program manager for the Army Research Office, an element of the Combat Capabilities Development Command’s Army Research Laboratory, which supported the research.

    Story Source:
Materials provided by Cornell University. Original written by David Nutt. Note: Content may be edited for style and length.

•

    Natural radiation can interfere with quantum computers

    A multidisciplinary research team has shown that radiation from natural sources in the environment can limit the performance of superconducting quantum bits, known as qubits. The discovery, reported today in the journal Nature, has implications for the construction and operation of quantum computers, an advanced form of computing that has attracted billions of dollars in public and private investment globally.
The collaboration between teams at the U.S. Department of Energy’s Pacific Northwest National Laboratory (PNNL) and the Massachusetts Institute of Technology (MIT) helps explain a mysterious source of interference limiting qubit performance.
    “Our study is the first to show clearly that low-level ionizing radiation in the environment degrades the performance of superconducting qubits,” said John Orrell, a PNNL research physicist, senior author of the study and expert in low-level radiation measurement. “These findings suggest that radiation shielding will be necessary to attain long-sought performance in quantum computers of this design.”
    Natural radiation wreaks havoc with computers
    Computer engineers have known for at least a decade that natural radiation emanating from materials like concrete and pulsing through our atmosphere in the form of cosmic rays can cause digital computers to malfunction. But digital computers aren’t nearly as sensitive as a quantum computer.
    “We found that practical quantum computing with these devices will not be possible unless we address the radiation issue,” said PNNL physicist Brent VanDevender, a co-investigator on the study.

    The researchers teamed up to solve a puzzle that has been vexing efforts to keep superconducting quantum computers working for long enough to make them reliable and practical. A working quantum computer would be thousands of times faster than even the fastest supercomputer operating today. And it would be able to tackle computing challenges that today’s digital computers are ill-equipped to take on. But the immediate challenge is to have the qubits maintain their state, a feat called “coherence,” said Orrell. This desirable quantum state is what gives quantum computers their power.
    MIT physicist Will Oliver was working with superconducting qubits and became perplexed at a source of interference that helped push the qubits out of their prepared state, leading to “decoherence,” and making them non-functional. After ruling out a number of different possibilities, he considered the idea that natural radiation from sources like metals found in the soil and cosmic radiation from space might be pushing the qubits into decoherence.
    A chance conversation between Oliver, VanDevender, and his long-time collaborator, MIT physicist Joe Formaggio, led to the current project.
    It’s only natural
    To test the idea, the research team measured the performance of prototype superconducting qubits in two different experiments:
    They exposed the qubits to elevated radiation from copper metal activated in a reactor.
    They built a shield around the qubits that lowered the amount of natural radiation in their environment.
    The pair of experiments clearly demonstrated the inverse relationship between radiation levels and length of time qubits remain in a coherent state.

    “The radiation breaks apart matched pairs of electrons that typically carry electric current without resistance in a superconductor,” said VanDevender. “The resistance of those unpaired electrons destroys the delicately prepared state of a qubit.”
    The findings have immediate implications for qubit design and construction, the researchers concluded. For example, the materials used to construct quantum computers should exclude material that emits radiation, the researchers said. In addition, it may be necessary to shield experimental quantum computers from radiation in the atmosphere.
    At PNNL, interest has turned to whether the Shallow Underground Laboratory, which reduces surface radiation exposure by 99%, could serve future quantum computer development. Indeed, a recent study by a European research team corroborates the improvement in qubit coherence when experiments are conducted underground.
    “Without mitigation, radiation will limit the coherence time of superconducting qubits to a few milliseconds, which is insufficient for practical quantum computing,” said VanDevender.
    The researchers emphasize that factors other than radiation exposure are bigger impediments to qubit stability for the moment. Things like microscopic defects or impurities in the materials used to construct qubits are thought to be primarily responsible for the current performance limit of about one-tenth of a millisecond. But once those limitations are overcome, radiation begins to assert itself as a limit and will eventually become a problem without adequate natural radiation shielding strategies, the researchers said.
    Findings affect global search for dark matter
In addition to helping explain a source of qubit instability, the research findings may also have implications for the global search for dark matter, which is thought to make up roughly 85% of the matter in the universe but has so far escaped detection with existing instruments. One search approach relies on superconducting detectors of similar design to qubits. These dark matter detectors also need to be shielded from external sources of radiation, because radiation can trigger false recordings that obscure the sought-after dark matter signals.
“Improving our understanding of this process may lead to improved designs for these superconducting sensors and lead to more sensitive dark matter searches,” said Ben Loer, a PNNL research physicist who works on both dark matter detection and the effects of radiation on superconducting qubits. “We may also be able to use our experience with these particle physics sensors to improve future superconducting qubit designs.”