More stories

  • Most important global supply chain linkages

    In today’s global economy, production of goods depends on inputs from many trade partners around the world. Companies and governments need a deeper understanding of the global value chain to reduce costs, maintain a profitable production system, and anticipate ripple effects of disruptions in the supply chain.
    Applied economists from the University of Illinois have developed a new model for in-depth analysis of global supply chain linkages across countries and industries, providing a rich tool that delivers valuable insights for businesses and policy makers around the world.
    “We live in a time when production processes are very much fragmented. In order to end up with one type of good, a car for example, many inputs are assembled abroad and imported from different places around the world. For instance, a car sold by leading U.S. companies may have anywhere from just 2% to 85% of U.S. and Canadian parts in it,” says Sandy Dall’Erba, professor in the Department of Agricultural and Consumer Economics and director of the Regional Economics Applications Laboratory (REAL) at U of I. Dall’Erba is a co-author of the study.
    “Coordination of the entire supply chain system becomes more and more complicated and sensitive to disruptions at any stage throughout the process. If just one element in your supply chain is missing, it will have a ripple effect on the entire industry,” Dall’Erba notes. “An example of this was the global semiconductor shortage that recently forced U.S. automakers to halt production.”
    The researchers started with a widely used economic growth model called shift-share decomposition and expanded its components to include interregional and inter-sectoral linkages. This allows them to identify, for each industrial sector and each country, whether the growth of the sector of interest stems from supply chain linkages at the domestic level or at the international level. The latter can be further split into linkages with trade agreement partners (such as NAFTA for the U.S.) and linkages with countries in the rest of the world, highlighting the benefits of trade agreements.
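    The classical decomposition they build on can be sketched in a few lines of code. The toy example below uses made-up employment figures and does not reproduce the authors’ interregional and inter-sectoral extension; it simply splits a region-sector’s growth into a national component, an industry-mix component, and a region-specific competitive component.

      def shift_share(emp_start, emp_end, nat_start, nat_end):
          """Decompose each region-sector's growth into national, industry-mix and
          region-specific (competitive) components. emp_* map sector -> regional
          employment; nat_* map sector -> national employment."""
          G = sum(nat_end.values()) / sum(nat_start.values()) - 1        # national growth rate
          result = {}
          for sector, e0 in emp_start.items():
              Gi = nat_end[sector] / nat_start[sector] - 1               # national growth of this sector
              gi = emp_end[sector] / e0 - 1                              # regional growth of this sector
              result[sector] = {
                  "national": e0 * G,                # growth expected if the sector tracked the whole economy
                  "industry_mix": e0 * (Gi - G),     # effect of being in a nationally fast or slow sector
                  "regional_shift": e0 * (gi - Gi),  # region-specific competitive effect
                  "total_change": emp_end[sector] - e0,
              }
          return result

      # hypothetical region with two linked sectors (employment in thousands)
      region_2019 = {"food_manufacturing": 100.0, "grain_production": 50.0}
      region_2020 = {"food_manufacturing": 108.0, "grain_production": 49.0}
      national_2019 = {"food_manufacturing": 1000.0, "grain_production": 800.0}
      national_2020 = {"food_manufacturing": 1040.0, "grain_production": 816.0}
      for sector, parts in shift_share(region_2019, region_2020, national_2019, national_2020).items():
          print(sector, {k: round(v, 2) for k, v in parts.items()})

    The three components always sum to the sector’s actual change, which is what makes the decomposition useful as a bookkeeping device before linkage terms are added.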
    “When we apply our technique to understand the drivers of growth in a particular sector, we not only can say whether it is growing faster or slower than another sector or region, we can also identify other sectors that are important for the growth of this particular sector,” says Claudia Montania, the study’s lead author. Montania was a visiting scholar in REAL when she conducted the study and is currently a researcher at the United Nations Development Accelerator Lab in Asuncion, Paraguay.

    Traditional shift-share decomposition includes information about changes in the industry mix and in region-specific features such as taxes, regulations, or characteristics of the labor force. But it does not include connections among different regions or different industry sectors.
    “The information provided by the traditional shift-share model is not enough,” Dall’Erba notes. “For example, it would be a mistake to study only the food manufacturing sector in order to know what is happening in that sector, because it obviously depends on grain and livestock production which, in turn, depends on water and fertilizers among other inputs.
    “In addition, grains are not always used for food manufacturing but they may end up as fuel. The supply chain of any sector is intertwined with that of many other sectors,” he adds.
    In the paper, Dall’Erba and Montania apply their model to country-sector linkages in the European Union, allowing them to compare three levels of connections (domestic, within the EU, and with the rest of the world) and to identify which ones matter most for each sector. The analysis covered 35 industrial sectors in 15 countries from 1995 to 2006.
    Overall, the researchers found the most important linkages were among EU trade partners; the second-most important were domestic ties; and the least important linkages were with the rest of the world. They emphasize the results vary across sectors and countries. For example, the supply-chain linkages in place to manufacture a French car are different from those that exist for a German car. Their multi-dynamic model can provide detailed, specific information for each country-sector combination as needed for preemptive and tailored planning and policy making.
    “Knowing which type of linkages are the most important for your product or your sector can be very useful for local governments, for companies, and for producers, because you can make better plans to achieve the expected growth for your sector,” Montania states. “You can also promote trade and diplomatic relationships in regions where you have strong sectoral linkages.”
    Dall’Erba points out this information can help countries and industries protect against supply chain disruptions, which can take many forms, ranging from natural disasters such as droughts or earthquakes to political upheaval, trade wars, and even the global pandemic. For instance, as demand for air travel collapsed in 2020, both Boeing and Airbus significantly reduced their production, and so did the many companies manufacturing airplane components, from fuselages to seat belts.
    “COVID-19 has pushed several governments to consider bringing back some industries in order to get better control over all the supply chain links. However, it is not necessarily a viable option as many companies have already de-located their unskilled labor-intensive production to low-wage countries while maintaining high-skilled workers at home,” Dall’Erba concludes.

  • Game theory may be useful in explaining and combating viruses

    A team of researchers concludes that a game-theory approach may offer new insights into both the spread and disruption of viruses, such as SARS-CoV-2. Its work, described in the Journal of the Royal Society Interface, applies a “signaling game” to the analysis of cellular processes in order to illuminate molecular behavior.
    “We need new models and technologies at many levels in order to understand how to tame viral pandemics,” explains Bud Mishra, a professor at NYU’s Courant Institute of Mathematical Sciences and one of the paper’s authors. “At the biomolecular level, we explain how cellularization may be understood in ways that stymie disease and encourage healthy functioning.”
    The analysis, which also included William Casey, an assistant professor in the U.S. Naval Academy’s Cyber Science Department, and Steven Massey, an assistant professor in the Department of Biology at the University of Puerto Rico, centered on the biological and evolutionary phenomenon of “mimicry,” in which one organism takes on the appearance of another.
    The researchers, in particular, focused on two types of mimicry: “Batesian” and “Muellerian.” Batesian mimicry, named after the naturalist Henry Walter Bates, involves conflict or deception between the sender and receiver — for example, a harmless hoverfly mimics a more dangerous wasp in order to deter predators. By contrast, Muellerian mimicry, named after the zoologist and naturalist Johann Friedrich Theodor Mueller, occurs when there is a common interest between the sender and receiver — for instance, two species that adopt each other’s warning signals as a means to offer protection for both.
    These types of mimicry also occur at the molecular level.
    “The gene for an RNA or a protein macro-molecule can be considered as the sender, while the signal consists of the three-dimensional conformation of the expressed gene product,” write the authors. “The receiver is the macro-molecule, which specifically interacts with the signal macro-molecule, typically a protein, but could also be an RNA or DNA molecule.”
    The SARS-CoV-2 virus, they add, makes multiple uses of molecular mimicry in its efforts to exploit its human host by mimicking, in Batesian fashion, healthy cells in order to infect the host organism. By contrast, vaccines deceive the human immune system into sensing that it is being attacked by a virus. While this deception is costly to the vaccinated subject in the short term — in the form of reactions to the injection — the immune system retains a memory and so is already prepared for a future encounter with the real virus.
    This dynamic plays out annually in the creation of flu shots — vaccines are altered each year in order to accurately mimic a newly evolved flu virus.
    With this in mind, the researchers sought to determine if a signaling game could provide a framework for analyzing the different types of mimicry. Under a signaling game, a sender aims to persuade the receiver that it carries a message that benefits both — independent of the veracity of the claim.
    In their analysis, the paper’s authors constructed a mathematical model that mapped out a series of signaling strategies that, theoretically, could be adopted by both a virus (Batesian mimicry) and a vaccine (Muellerian mimicry). Their results offered a range of blueprints of how mimicry is formed, maintained, and destroyed in cellular populations.
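    The core logic of such a signaling game can be illustrated with a small calculation. The sketch below is a hypothetical toy model, not the authors’ published one, and its payoff values are arbitrary assumptions; it shows why Batesian deception only pays while mimics remain rare, after which the receiver’s best response flips from trusting the signal to ignoring it.

      def receiver_expected_payoffs(p_mimic,
                                    gain_accept_genuine=1.0,   # benefit of engaging a genuine signal
                                    loss_accept_mimic=-3.0,    # cost of being deceived (e.g. infection)
                                    payoff_reject=0.0):        # ignoring the signal is neutral
          """Expected payoff to a receiver who either trusts or ignores a benign-looking
          signal when a fraction p_mimic of senders are Batesian mimics."""
          trust = (1 - p_mimic) * gain_accept_genuine + p_mimic * loss_accept_mimic
          return trust, payoff_reject

      # Deception is sustainable only while receivers still do better, on average, by trusting.
      for p in (0.0, 0.1, 0.25, 0.5):
          trust, ignore = receiver_expected_payoffs(p)
          best = "trust" if trust > ignore else "ignore"
          print(f"mimic fraction {p:.2f}: E[trust] = {trust:+.2f}, E[ignore] = {ignore:+.2f} -> {best}")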
    “Better knowledge of the deceptive strategies of SARS-CoV-2 will help to inform vaccine design,” the researchers conclude.
    The research was supported by the Office of Naval Research (N0001420WX01716), a National Cancer Institute Physical Sciences-Oncology Center grant (U54 CA193313-01), and a U.S. Army grant (W911NF1810427).

    Story Source:
    Materials provided by New York University. Note: Content may be edited for style and length.

  • Machine learning aids in simulating dynamics of interacting atoms

    A revolutionary machine-learning (ML) approach to simulating the motions of atoms in materials such as aluminum is described this week in the journal Nature Communications. This automated approach to “interatomic potential development” could transform the field of computational materials discovery.
    “This approach promises to be an important building block for the study of materials damage and aging from first principles,” said project lead Justin Smith of Los Alamos National Laboratory. “Simulating the dynamics of interacting atoms is a cornerstone of understanding and developing new materials. Machine learning methods are providing computational scientists new tools to accurately and efficiently conduct these atomistic simulations. Machine learning models like this are designed to emulate the results of highly accurate quantum simulations, at a small fraction of the computational cost.”
    To maximize the general accuracy of these machine learning models, he said, it is essential to design a highly diverse dataset from which to train the model. A challenge is that it is not obvious, a priori, what training data will be most needed by the ML model. The team’s recent work presents an automated “active learning” methodology for iteratively building a training dataset.
    At each iteration, the method uses the current-best machine learning model to perform atomistic simulations; when new physical situations are encountered that are beyond the ML model’s knowledge, new reference data is collected via expensive quantum simulations, and the ML model is retrained. Through this process, the active learning procedure collects data regarding many different types of atomic configurations, including a variety of crystal structures, and a variety of defect patterns appearing within crystals.
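    The loop can be sketched in a few dozen lines. The example below is a generic illustration of ensemble-based active learning on a toy one-dimensional potential, not the team’s actual code: a cheap analytic Lennard-Jones function stands in for the expensive quantum reference calculation, and disagreement within a small model ensemble stands in for “situations beyond the ML model’s knowledge.”

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(0)

      def reference_energy(r):
          """Stand-in for an expensive quantum calculation: a Lennard-Jones pair energy."""
          return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

      def fit_ensemble(X, y, n_models=5):
          """Train a small bootstrap ensemble; disagreement flags unfamiliar inputs."""
          models = []
          for seed in range(n_models):
              idx = rng.choice(len(X), size=len(X), replace=True)
              models.append(GradientBoostingRegressor(random_state=seed).fit(X[idx], y[idx]))
          return models

      def predict(models, X):
          preds = np.stack([m.predict(X) for m in models])
          return preds.mean(axis=0), preds.std(axis=0)   # mean energy, uncertainty estimate

      # initial training data: a few configurations near equilibrium
      X_train = rng.uniform(1.0, 1.6, size=(8, 1))
      y_train = reference_energy(X_train[:, 0])

      threshold = 0.05   # uncertainty above which we fall back to the reference method
      for iteration in range(5):
          models = fit_ensemble(X_train, y_train)
          # a "simulation" proposes new configurations, some outside the known region
          X_new = rng.uniform(0.9, 2.5, size=(200, 1))
          _, sigma = predict(models, X_new)
          unfamiliar = X_new[sigma > threshold]
          if len(unfamiliar) == 0:
              print(f"iteration {iteration}: model is confident everywhere, stopping")
              break
          # collect reference data only where the model is uncertain, then retrain
          X_train = np.vstack([X_train, unfamiliar])
          y_train = np.concatenate([y_train, reference_energy(unfamiliar[:, 0])])
          print(f"iteration {iteration}: added {len(unfamiliar)} new reference calculations")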

    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory. Note: Content may be edited for style and length.

  • Measuring hemoglobin levels with AI microscope, microfluidic chips

    One of the most frequently performed medical diagnostic tests for assessing a patient’s health is the complete blood count, which typically includes an estimate of the hemoglobin concentration. The hemoglobin level in the blood is an important biochemical parameter that can indicate a host of medical conditions including anemia, polycythemia, and pulmonary fibrosis.
    In AIP Advances, by AIP Publishing, researchers from SigTuple Technologies and the Indian Institute of Science describe a new AI-powered, imaging-based tool to estimate hemoglobin levels. The setup combines a microfluidic chip with an AI-powered automated microscope originally designed to derive total and differential blood cell counts.
    Medical diagnostic equipment capable of multiparameter assessment, such as hematology analyzers, often has dedicated subcompartments with separate optical detection systems. This increases both the required sample volume and the cost of the equipment.
    “In this study, we demonstrate that the applicability of a system originally designed for the purposes of imaging can be extended towards the performance of biochemical tests without any additional modifications to the hardware unit, thereby retaining the cost and laboratory footprint of the original device,” said author Srinivasan Kandaswamy.
    The hemoglobin testing solution is possible thanks to the design behind the microfluidic chip, a customized biochemical reagent, optimized imaging, and an image analysis procedure specifically tailored to enable the good clinical performance of the medical diagnostic test.
    The data obtained from the microfluidic chip in combination with the automated microscope were comparable with the readings of standard hematology analyzers (Pearson correlation of 0.99). The validation study showed the method meets regulatory standards, which means doctors and hospitals are likely to accept it.
    The automated microscope, which normally uses a combination of red, green, and blue LEDs, used only the green LED in hemoglobin estimation mode, because the optimized reagent-hemoglobin (SDS-HB) complex absorbs light at green wavelengths.
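    The underlying colorimetric calculation is straightforward. The sketch below is illustrative only, with made-up intensity and calibration values rather than numbers from the paper: green-channel absorbance of the reacted sample is computed with the Beer-Lambert relation and mapped to a concentration through a linear calibration.

      import numpy as np

      def absorbance(green_sample, green_blank):
          """Beer-Lambert absorbance A = -log10(I / I0) from mean green-channel intensities."""
          return -np.log10(green_sample / green_blank)

      def hemoglobin_g_per_dl(A, slope=11.0, intercept=0.0):
          """Map absorbance to concentration with a linear calibration; in practice the
          slope and intercept would be fit against a reference hematology analyzer."""
          return slope * A + intercept

      # hypothetical mean green-channel pixel intensities of a blank and a reacted sample
      I0, I = 212.0, 14.6
      A = absorbance(I, I0)
      print(f"absorbance = {A:.3f}, estimated hemoglobin = {hemoglobin_g_per_dl(A):.1f} g/dL")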
    Chip-based microfluidic diagnostic platforms are on the verge of revolutionizing the field of health care, and colorimetric biochemical assays are among the most widely performed diagnostic tests.
    “This paper lays the foundation and will also serve as a guide for future attempts to translate conventional biochemical assays onto a chip, from the point of view of both chip design and reagent development,” said Kandaswamy.
    Besides measuring hemoglobin in the blood, a similar setup with minor modifications could be used to measure protein content, cholesterol, and glycated hemoglobin.

    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • Environmental policies not always bad for business, study finds

    Critics claim environmental regulations hurt productivity and profits, but the reality is more nuanced, according to an analysis of environmental policies in China by a pair of Cornell economists.
    The analysis found that, contrary to conventional wisdom, market-based or incentive-based policies may actually benefit regulated firms in the traditional and “green” energy sectors, by spurring innovation and improvements in production processes. Policies that mandate environmental standards and technologies, on the other hand, may broadly harm output and profits.
    “The conventional wisdom is not entirely accurate,” said Shuyang Si, a doctoral student in applied economics and management. “The type of policy matters, and policy effects vary by firm, industry and sector.”
    Si is the lead author of “The Effects of Environmental Policies in China on GDP, Output, and Profits,” published in the current issue of the journal Energy Economics. C.-Y. Cynthia Lin Lawell, associate professor in the Charles H. Dyson School of Applied Economics and Management and the Robert Dyson Sesquicentennial Chair in Environmental, Energy and Resource Economics, is a co-author.
    Si mined Chinese provincial government websites and other online sources to compile a comprehensive data set of nearly 2,700 environmental laws and regulations in effect in at least one of 30 provinces between 2002 and 2013. This period came just before China declared a “war on pollution,” instituting major regulatory changes that moved the country away from its longtime prioritization of economic growth over environmental concerns.
    “We really looked deep into the policies and carefully examined their features and provisions,” Si said.

    The researchers categorized each policy as one of four types: “command and control,” such as mandates to use a portion of electricity from renewable sources; financial incentives, including taxes, subsidies and loans; monetary awards for cutting pollution or improving efficiency and technology; and nonmonetary awards, such as public recognition.
    They assessed how each type of policy impacted China’s gross domestic product, industrial output in traditional energy industries and the profits of new energy sector companies, using publicly available data on economic indicators and publicly traded companies.
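    The flavor of such an assessment can be sketched with a toy panel regression. The example below is purely illustrative, using synthetic data and a generic two-way fixed-effects specification rather than the study’s actual econometric model, to show how policy counts by type can be related to an economic outcome across provinces and years.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      rows = []
      for province in [f"P{i:02d}" for i in range(30)]:
          for year in range(2002, 2014):
              cc, fin, mon, non = rng.poisson([2, 1, 1, 1])   # synthetic policy counts by type
              gdp_growth = 8 - 0.3 * cc + 0.2 * fin + 0.1 * mon - 0.1 * non + rng.normal(0, 1)
              rows.append(dict(province=province, year=year, command_control=cc,
                               financial=fin, monetary_award=mon, nonmonetary_award=non,
                               gdp_growth=gdp_growth))
      df = pd.DataFrame(rows)

      # two-way fixed-effects regression of the outcome on counts of each policy type
      model = smf.ols("gdp_growth ~ command_control + financial + monetary_award"
                      " + nonmonetary_award + C(province) + C(year)", data=df).fit()
      print(model.params[["command_control", "financial", "monetary_award", "nonmonetary_award"]])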
    Command and control policies and nonmonetary award policies had significant negative effects on GDP, output and profits, Si and Lin Lawell concluded. But a financial incentive — loans for increasing renewable energy consumption — improved industrial output in the petroleum and nuclear energy industries, and monetary awards for reducing pollution boosted new energy sector profits.
    “Environmental policies do not necessarily lead to a decrease in output or profits,” the researchers wrote.
    That finding, they said, is consistent with the “Porter hypothesis” — Harvard Business School Professor Michael Porter’s 1991 proposal that environmental policies could stimulate growth and development, by spurring technology and business innovation to reduce both pollution and costs.
    While certain policies benefited regulated firms and industries, the study found that those benefits came at a cost to other sectors and to the overall economy. Nevertheless, Si and Lin Lawell said, these costs should be weighed against the benefits of these policies to the environment and society, and to the regulated firms and industries.
    Economists generally prefer market-based or incentive-based environmental policies, Lin Lawell said, with a carbon tax or tradeable permit system representing the gold standard. The new study led by Si, she said, provides more support for those types of policies.
    “This work will make people aware, including firms that may be opposed to environmental regulation, that it’s not necessarily the case that these regulations will be harmful to their profits and productivity,” Lin Lawell said. “In fact, if policies promoting environmental protection are designed carefully, there are some that these firms might actually like.”
    Additional co-authors contributing to the study were Mingjie Lyu of Shanghai Lixin University of Accounting and Finance, and Song Chen of Tongji University. The authors acknowledged financial support from the Shanghai Science and Technology Development Fund and an Exxon-Mobil ITS-Davis Corporate Affiliate Fellowship.

  • Scientists use machine-learning approach to track disease-carrying mosquitoes

    You might not like mosquitoes, but they like you, says Utah State University biologist Norah Saarman. And where you lead, they will follow.
    In addition to annoying bites and buzzing, some mosquitoes carry harmful diseases. Aedes aegypti, the so-called Yellow Fever mosquito and the subject of a recent study by Saarman and colleagues, is the primary vector for transmission of viruses causing dengue fever, chikungunya and Zika, as well as yellow fever, in humans.
    “Aedes aegypti is an invasive species to North America that’s become widespread in the eastern United States,” says Saarman, assistant professor in USU’s Department of Biology and the USU Ecology Center, whose research focuses on evolutionary ecology and population genomics. “We’re examining the genetic connectivity of this species as it adapts to new landscapes and expands its range.”
    With Evlyn Pless of the University of California, Davis and Jeffrey Powell, Adalgisa Caccone and Giuseppe Amatulli of Yale University, Saarman published findings from a machine-learning approach to mapping landscape connectivity in the February 22, 2021 issue of the Proceedings of the National Academy of Sciences (PNAS).
    The team’s research was supported by the National Institutes of Health.
    “We’re excited about this approach, which uses a random forest algorithm that allows us to overcome some of the constraints of classical spatial models,” Saarman says. “Our approach combines the advantages of a machine-learning framework and an iterative optimization process that integrates genetic and environmental data.”
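    A stripped-down version of the idea, not the authors’ actual PNAS pipeline, is sketched below: a random forest relates pairwise genetic distance between sampling sites to environmental variables describing the landscape between them, and the fitted model’s feature importances indicate which variables shape connectivity. All data and variable names here are hypothetical.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(42)
      n_pairs = 500
      feature_names = ["geographic_distance", "road_density", "temperature_difference", "population_density"]
      # hypothetical per-pair predictors describing the landscape between two sampling sites
      X = rng.uniform(0, 1, size=(n_pairs, len(feature_names)))
      # synthetic "genetic distance": in this toy example, roads and people ease movement,
      # lowering differentiation, while geographic distance raises it
      y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * X[:, 2] - 0.2 * X[:, 3] + rng.normal(0, 0.05, n_pairs)

      rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
      for name, importance in sorted(zip(feature_names, rf.feature_importances_), key=lambda t: -t[1]):
          print(f"{name:25s} importance = {importance:.2f}")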
    In its native Africa, Aedes aegypti was a forest dweller, drawing sustenance from landscapes uninhabited or sparsely populated by humans. The mosquito has since specialized to feed on humans, and thrives in human-impacted areas, favoring trash piles, littered highways and well-irrigated gardens.

    “Using our machine-learning model and NASA-supplied satellite imagery, we can combine this spatial data with the genetic data we have already collected to drill down into very specific movement of these mosquitoes,” Saarman says. “For example, our data reveal their attraction to human transportation networks, indicating that activities such as plant nurseries are inadvertently transporting these insects to new areas.”
    Public officials and land managers once relied on pesticides, including DDT, to keep the pesky mosquitoes at bay.
    “As we now know, those pesticides caused environmental harm, including harm to humans,” she says. “At the same time, mosquitos are evolving resistance to the pesticides that we have found to be safe for the environment. This creates a challenge that can only be solved by more information on where mosquitos live and how they get around.”
    Saarman adds that the rugged survivors are not only adapting to different food sources and resisting pesticides, but also adapting to varied temperatures, which allows them to expand into colder ranges.
    Current methods to curb disease-carrying mosquitoes focus on biotechnological solutions, including cutting-edge genetic modification.
    “We hope the tools we’re developing can help managers identify effective methods of keeping mosquito populations small enough to avoid disease transmission,” Saarman says. “While native species play an important role in the food chain, invasive species, such as Aedes aegypti, pose a significant public health risk that requires our vigilant attention.”

    Story Source:
    Materials provided by Utah State University. Original written by Mary-Ann Muffoletto. Note: Content may be edited for style and length.

  • 'Beautiful marriage' of quantum enemies

    Cornell University scientists have identified a new contender when it comes to quantum materials for computing and low-temperature electronics.
    Using nitride-based materials, the researchers created a material structure that simultaneously exhibits superconductivity — in which electrical resistance vanishes completely — and the quantum Hall effect, which produces resistance with extreme precision when a magnetic field is applied.
    “This is a beautiful marriage of the two things we know, at the microscale, that give electrons the most startling quantum properties,” said Debdeep Jena, the David E. Burr Professor of Engineering in the School of Electrical and Computer Engineering and Department of Materials Science and Engineering. Jena led the research, published Feb. 19 in Science Advances, with doctoral student Phillip Dang and research associate Guru Khalsa, the paper’s senior authors.
    The two physical properties are rarely seen simultaneously because magnetism is like kryptonite for superconducting materials, according to Jena.
    “Magnetic fields destroy superconductivity, but the quantum Hall effect only shows up in semiconductors at large magnetic fields, so you’re having to play with these two extremes,” Jena said. “Researchers in the past few years have been trying to identify materials which show both properties with mixed success.”
    The research is the latest validation from the Jena-Xing Lab that nitride materials may have more to offer science than previously thought. Nitrides have traditionally been used for manufacturing LEDs and transistors for products like smartphones and home lighting, giving them a reputation as an industrial class of materials that has been overlooked for quantum computation and cryogenic electronics.

    “The material itself is not as perfect as silicon, meaning it has a lot more defects,” said co-author Huili Grace Xing, the William L. Quackenbush Professor of Electrical and Computer Engineering and of Materials Science and Engineering. “But because of its robustness, this material has thrown pleasant surprises to the research community more than once despite its extremely large irregularities in structure. There may be a path forward for us to truly integrate different modalities of quantum computing — computation, memory, communication.”
    Such integration could help to condense the size of quantum computers and other next-generation electronics, just as classical computers have shrunk from warehouse to pocket size.
    “We’re wondering what this sort of material platform can enable because we see that it’s checking off a lot of boxes,” said Jena, who added that new physical phenomena and technological applications could emerge with further research. “It has a superconductor, a semiconductor, a filter material — it has all kinds of other components, but we haven’t put them all together. We’ve just discovered they can coexist.”
    For this research, the Cornell team began engineering epitaxial nitride heterostructures — atomically thin layers of gallium nitride and niobium nitride — and searching for the magnetic fields and temperatures at which the layers would retain their respective quantum Hall and superconducting properties.
    They eventually discovered a small window in which the properties were observed simultaneously, thanks to advances in the quality of the materials and structures produced in close collaboration with colleagues at the Naval Research Laboratory.
    “The quality of the niobium-nitride superconductor was improved enough that it can survive higher magnetic fields, and simultaneously we had to improve the quality of the gallium-nitride semiconductor enough that it could exhibit the quantum Hall effect at lower magnetic fields,” Dang said. “And that’s what will really allow for potential new physics to be seen at low temperature.”
    Potential applications for the material structure include more efficient electronics, such as data centers cooled to extremely low temperatures to eliminate heat waste. And the structure is the first to lay the groundwork for the use of nitride semiconductors and superconductors in topological quantum computing, in which the movement of electrons must be resilient to the material defects typically seen in nitrides.
    “What we’ve shown is that the ingredients you need to make this topological phase can be in the same structure,” Khalsa said, “and I think the flexibility of the nitrides really opens up new possibilities and ways to explore topological states of matter.”
    The research was funded by the Office of Naval Research and the National Science Foundation.

    Story Source:
    Materials provided by Cornell University. Original written by Syl Kacapyr. Note: Content may be edited for style and length.

  • Lack of symmetry in qubits can't fix errors in quantum computing, might explain matter/antimatter imbalance

    A team of quantum theorists seeking to cure a basic problem with quantum annealing computers — they have to run at a relatively slow pace to operate properly — found something intriguing instead. While probing how quantum annealers perform when operated faster than desired, the team unexpectedly discovered a new effect that may account for the imbalanced distribution of matter and antimatter in the universe and that suggests a novel approach to separating isotopes.
    “Although our discovery did not cure the annealing time restriction, it brought a class of new physics problems that can now be studied with quantum annealers without requiring they be too slow,” said Nikolai Sinitsyn, a theoretical physicist at Los Alamos National Laboratory. Sinitsyn is an author of the paper published Feb. 19 in Physical Review Letters, with coauthors Bin Yan and Wojciech Zurek, both also of Los Alamos, and Vladimir Chernyak of Wayne State University.
    Significantly, this finding hints at how at least two famous scientific problems may be resolved in the future. The first one is the apparent asymmetry between matter and antimatter in the universe.
    “We believe that small modifications to recent experiments with quantum annealing of interacting qubits made of ultracold atoms across phase transitions will be sufficient to demonstrate our effect,” Sinitsyn said.
    Explaining the Matter/Antimatter Discrepancy
    Both matter and antimatter resulted from the energy excitations that were produced at the birth of the universe. The symmetry between how matter and antimatter interact was broken but very weakly. It is still not completely clear how this subtle difference could lead to the large observed domination of matter compared to antimatter at the cosmological scale.

    The newly discovered effect demonstrates that such an asymmetry is physically possible. It happens when a large quantum system passes through a phase transition, that is, a very sharp rearrangement of quantum state. In such circumstances, strong but symmetric interactions roughly compensate each other. Then subtle, lingering differences can play the decisive role.
    Making Quantum Annealers Slow Enough
    Quantum annealing computers are built to solve complex optimization problems by associating variables with quantum states or qubits. Unlike a classical computer’s binary bits, which can only take a value of 0 or 1, qubits can be in a quantum superposition of in-between values. That superposition is the source of all quantum computers’ awesome, if still largely unexploited, powers.
    In a quantum annealing computer, the qubits are initially prepared in a simple lowest energy state by applying a strong external magnetic field. This field is then slowly switched off, while the interactions between the qubits are slowly switched on.
    “Ideally an annealer runs slow enough to run with minimal errors, but because of decoherence, one has to run the annealer faster,” Yan explained. The team studied the effects that emerge when annealers are operated faster than the adiabatic ideal allows, within a finite operation time.

    “According to the adiabatic theorem in quantum mechanics, if all changes are very slow, so-called adiabatically slow, then the qubits must always remain in their lowest energy state,” Sinitsyn said. “Hence, when we finally measure them, we find the desired configuration of 0s and 1s that minimizes the function of interest, which would be impossible to get with a modern classical computer.”
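    The adiabatic trade-off can be seen in a toy numerical experiment. The sketch below is not a model of any real annealer: it sweeps a single qubit from a transverse field to a longitudinal one over a total time T and reports how much population ends up in the excited state. Slower sweeps leave the qubit closer to its ground state, while faster sweeps introduce excitations (errors).

      import numpy as np
      from scipy.integrate import solve_ivp

      def excitation_probability(T, gap=1.0):
          """Sweep H(s) = -(1-s)*gap*X - s*gap*Z over a total time T (units with hbar = 1)
          and return the probability of ending in the excited state of the final Hamiltonian."""
          sx = np.array([[0, 1], [1, 0]], dtype=complex)
          sz = np.array([[1, 0], [0, -1]], dtype=complex)

          def schrodinger(t, psi):
              s = t / T
              H = -(1 - s) * gap * sx - s * gap * sz
              return -1j * (H @ psi)

          psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)   # ground state of -X
          psi_final = solve_ivp(schrodinger, (0.0, T), psi0, rtol=1e-8, atol=1e-8).y[:, -1]
          return abs(psi_final[1]) ** 2   # population of |1>, the excited state of -Z

      for T in (1, 5, 20, 80):
          print(f"anneal time {T:>2}: excitation probability = {excitation_probability(T):.4f}")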
    Hobbled by Decoherence
    However, currently available quantum annealers, like all quantum computers so far, are hobbled by their qubits’ interactions with the surrounding environment, which causes decoherence. Those interactions restrict the purely quantum behavior of qubits to about one millionth of a second. In that timeframe, computations have to be fast — nonadiabatic — and unwanted energy excitations alter the quantum state, introducing inevitable computational mistakes.
    The Kibble-Zurek theory, co-developed by Wojciech Zurek, predicts that errors occur predominantly when the qubits encounter a phase transition, that is, a very sharp rearrangement of their collective quantum state.
    For this paper, the team studied a known solvable model where identical qubits interact only with their neighbors along a chain; the model verifies the Kibble-Zurek theory analytically. In the theorists’ quest to cure limited operation time in quantum annealing computers, they increased the complexity of that model by assuming that the qubits could be partitioned into two groups with identical interactions within each group but slightly different interactions for qubits from the different groups.
    In such a mixture, they discovered an unusual effect: One group still produced a large amount of energy excitations during the passage through a phase transition, but the other group remained in the energy minimum as if the system did not experience a phase transition at all.
    “The model we used is highly symmetric in order to be solvable, and we found a way to extend the model, breaking this symmetry and still solving it,” Sinitsyn explained. “Then we found that the Kibble-Zurek theory survived but with a twist — half of the qubits did not dissipate energy and behaved ‘nicely.’ In other words, they maintained their ground states.”
    Unfortunately, the other half of the qubits did produce many computational errors — thus, no cure so far for a passage through a phase transition in quantum annealing computers.
    A New Way to Separate Isotopes
    Another long-standing problem that can benefit from this effect is isotope separation. For instance, natural uranium often must be separated into enriched and depleted fractions so that the enriched uranium can be used for nuclear power or national security purposes. The current separation process is costly and energy intensive. The discovered effect means that, by making a mixture of interacting ultracold atoms pass dynamically through a quantum phase transition, different isotopes can be selectively excited or left unexcited and then separated using available magnetic deflection techniques.
    Funding: This work was carried out with the support of the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, Condensed Matter Theory Program. Bin Yan also acknowledges support from the Center for Nonlinear Studies at LANL.