More stories

  • New simulations can improve avalanche forecasting

    Computer simulations of snow cover can accurately forecast avalanche hazard, according to a new international study involving researchers from Simon Fraser University.
    Currently, avalanche forecasts in Canada are made by experienced professionals who rely on data from local weather stations and on-the-ground observations from ski and backcountry ski operators, avalanche control workers for transportation and industry, and volunteers who manually test the snowpack.
    But snow cover simulation models developed by a team of researchers can detect and track weak layers of snow and identify avalanche hazard in a completely different way — and can provide forecasters with another reliable tool when local data is insufficient or unavailable, according to a new study published in the journal Cold Regions Science and Technology.
    “As far as natural hazards go, avalanches are still one of the leading causes of fatalities in Canada,” says Simon Horton, a post-doctoral fellow with the SFU Centre for Natural Hazards Research and a forecaster with Avalanche Canada. “We’ve had these complex models that simulate the layers in the snowpack for a few decades now and they’re getting more and more accurate, but it’s been difficult to find out how to apply that to actual decision-making and improving safety.”
    Researchers took 16 years’ worth of daily meteorological, snow cover and avalanche data from two sites in Canada (Whistler and Rogers Pass, both in British Columbia) and Weissfluhjoch in Davos, Switzerland and ran computer simulations that could classify different avalanche situations.
    The simulations could determine avalanche risk, for either natural or artificial release, for problem types such as new snow, wind slab, persistent weak layers and wet snow conditions.
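    To make this concrete, the sketch below is a purely illustrative rule-based classifier, not the study’s snow cover model: it maps a few hypothetical simulated snowpack features (new snow amount, wind speed, presence of a persistent weak layer, liquid water content) to the problem types listed above, using placeholder thresholds.

        # Illustrative sketch only: a toy rule-based mapping from simulated
        # snowpack features to avalanche problem types. Feature names and
        # thresholds are hypothetical placeholders, not values from the study.

        def classify_problem_types(new_snow_cm, wind_speed_kmh,
                                   has_persistent_weak_layer, liquid_water_content):
            """Return the avalanche problem types suggested by one day's simulated snowpack."""
            problems = []
            if new_snow_cm >= 30:                            # heavy recent snowfall
                problems.append("new snow")
            if new_snow_cm >= 10 and wind_speed_kmh >= 25:   # wind-transported snow
                problems.append("wind slab")
            if has_persistent_weak_layer:                    # buried facets, surface hoar, etc.
                problems.append("persistent weak layer")
            if liquid_water_content > 0.03:                  # wet, weakening snowpack
                problems.append("wet snow")
            return problems or ["no distinct problem"]

        print(classify_problem_types(35, 40, True, 0.0))
        # ['new snow', 'wind slab', 'persistent weak layer']
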
    “In the avalanche forecasting world, describing avalanche problems — the common scenarios that you might expect to find — is a good way for forecasters to describe avalanche hazard and communicate it to the public, so they know what kind of conditions to expect when they head out,” says Horton. “So that information is already available, except those are all done through expert assessment based on what they know from available field observations. In a lot of situations, there’s a fair bit of uncertainty about the human assessment of what these types of avalanche problems will be.
    “That’s where having more automated tools that can help predict potential hazards can help forecasters better prepare an accurate, precise forecast.”
    The results of the study showed the modelling was consistent with the real observed frequencies of avalanches over those 16 years and that the approach has potential to support avalanche forecasting in the future.
    Researchers also believe the modelling might be useful to study the future impacts of climate change on snow instability.
    Story Source:
    Materials provided by Simon Fraser University. Note: Content may be edited for style and length.

  • Scientists achieve key elements for fault-tolerant quantum computation in silicon spin qubits

    Researchers from RIKEN and QuTech — a collaboration between TU Delft and the Netherlands Organisation for Applied Scientific Research (TNO) — have achieved a key milestone toward the development of a fault-tolerant quantum computer. They were able to demonstrate a two-qubit gate fidelity of 99.5 percent — higher than the 99 percent considered to be the threshold for building fault-tolerant computers — using electron spin qubits in silicon, which are promising for large-scale quantum computers as the nanofabrication technology for building them already exists. This study was published in Nature.
    The world is currently in a race to develop large-scale quantum computers that could vastly outperform classical computers in certain areas. However, these efforts have been hindered by a number of factors, in particular the problem of decoherence, or noise generated in the qubits. This problem becomes more serious as the number of qubits increases, hampering scale-up. To achieve a large-scale computer that could be used for practical applications, it is believed that a two-qubit gate fidelity of at least 99 percent is required to implement the surface code for error correction. This has been achieved in certain types of computers, using qubits based on superconducting circuits, trapped ions, and nitrogen-vacancy centers in diamond, but these are hard to scale up to the millions of qubits required to implement practical quantum computation with error correction.
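    To see why the 99 percent figure matters, a commonly quoted rule of thumb for the surface code puts the logical error rate of a distance-d code at roughly A(p/p_th)^((d+1)/2), where p is the physical gate error rate and p_th is the threshold near 1 percent. The short sketch below uses assumed round numbers for A and p_th (they are not taken from the paper) to show that adding qubits only suppresses errors once the gates are better than the threshold.

        # Rule-of-thumb surface-code scaling: p_logical ~ A * (p / p_th)**((d + 1) / 2).
        # A and p_th are assumed round numbers, not figures from the paper.
        A = 0.1        # assumed prefactor
        p_th = 0.01    # ~1 percent error-correction threshold

        def logical_error_rate(p_phys, distance):
            return A * (p_phys / p_th) ** ((distance + 1) / 2)

        for p_phys in (0.005, 0.009, 0.011):   # below, near, and above threshold
            rates = [f"d={d}: {logical_error_rate(p_phys, d):.1e}" for d in (3, 11, 25)]
            print(f"physical error {p_phys:.3f} -> " + ", ".join(rates))
        # Below threshold, a larger code distance d means fewer logical errors;
        # above threshold, growing the code only makes things worse.
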
    To address these problems, the group decided to experiment with a quantum dot structure that was nanofabricated on a strained silicon/silicon germanium quantum well substrate, using a controlled-NOT (CNOT) gate. In previous experiments, the gate fidelity was limited due to slow gate speed. To improve the gate speed, they carefully designed the device and tuned it by applying different voltages to the gate electrodes. This combined an established fast single-spin rotation technique using micromagnets with large two-qubit coupling. The result was a gate speed that was 10 times better than previous attempts. Interestingly, although it had been thought that increasing gate speed would always lead to better fidelity, they found that there was a limit beyond which increasing the speed actually made the fidelity worse.
    In the course of the experiments, they discovered that a property called the Rabi frequency — a marker of how the qubits change states in response to an oscillating field — is key to the performance of the system, and they found a range of frequencies for which the single-qubit gate fidelity was 99.8 percent and the two-qubit gate fidelity was 99.5 percent, clearing the required threshold.
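    For a resonantly driven spin, the probability of a flip after drive time t is sin²(π·f_Rabi·t), so a full flip (a π-pulse) takes 1/(2·f_Rabi) and a higher Rabi frequency means a faster gate. The snippet below illustrates this textbook relationship with an assumed Rabi frequency, not a value reported in the study.

        import numpy as np

        # Resonantly driven spin qubit: flip probability after time t is
        # sin^2(pi * f_rabi * t); a pi pulse takes 1 / (2 * f_rabi).
        # The Rabi frequency below is an illustrative assumption.
        f_rabi = 5e6                        # drive (Rabi) frequency in Hz
        t_pi = 1.0 / (2.0 * f_rabi)         # pi-pulse duration (100 ns here)

        t = np.linspace(0.0, 2 * t_pi, 5)
        p_flip = np.sin(np.pi * f_rabi * t) ** 2
        print(f"pi-pulse duration: {t_pi * 1e9:.0f} ns")
        print(np.round(p_flip, 3))          # oscillates 0 -> 1 -> 0 over one cycle
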
    Through this, they demonstrated that they could achieve universal operations, meaning that all the basic operations that constitute quantum operations, consisting of a single qubit operation and a two-qubit operation, could be performed at gate fidelities above the error correction threshold.
    To test the capability of the new system, the researchers implemented a two-qubit Deutsch-Jozsa algorithm and the Grover search algorithm. Both algorithms output correct results with a high fidelity of 96%-97%, demonstrating that silicon quantum computers can perform quantum calculations with high accuracy.
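    As a point of reference for what these algorithms do, the following minimal state-vector simulation (an idealized software sketch, not the authors’ hardware experiment) runs one Grover iteration on two qubits and finds the single marked item with certainty.

        import numpy as np

        # Two-qubit Grover search, simulated with ideal state vectors: with one
        # marked item out of four, a single Grover iteration succeeds with
        # probability 1. The marked index is an arbitrary choice.
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        H2 = np.kron(H, H)                       # Hadamard on both qubits
        I4 = np.eye(4)

        marked = 2                               # marked basis state |10>
        oracle = I4.copy()
        oracle[marked, marked] = -1              # phase-flip the marked state

        s = H2 @ np.array([1.0, 0.0, 0.0, 0.0])  # uniform superposition from |00>
        diffusion = 2 * np.outer(s, s) - I4      # inversion about the mean

        state = diffusion @ (oracle @ s)         # one Grover iteration
        print(np.round(np.abs(state) ** 2, 3))   # -> [0. 0. 1. 0.]: marked item found
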
    Akito Noiri, the first author of the study, says, “We are very happy to have achieved a high-fidelity universal quantum gate set, one of the key challenges for silicon quantum computers.”
    Seigo Tarucha, leader of the research groups, said, “The presented result makes spin qubits, for the first time, competitive against superconducting circuits and ion traps in terms of universal quantum control performance. This study demonstrates that silicon quantum computers are promising candidates, along with superconductivity and ion traps, for research and development toward the realization of large-scale quantum computers.”
    Story Source:
    Materials provided by RIKEN. Note: Content may be edited for style and length.

  • Quantum computing in silicon hits 99% accuracy

    UNSW Sydney-led research paves the way for large silicon-based quantum processors for real-world manufacturing and application.
    Australian researchers have proven that near error-free quantum computing is possible, paving the way to build silicon-based quantum devices compatible with current semiconductor manufacturing technology.
    “Today’s publication in Nature shows our operations were 99 per cent error-free,” says Professor Andrea Morello of UNSW, who led the work.
    “When the errors are so rare, it becomes possible to detect them and correct them when they occur. This shows that it is possible to build quantum computers that have enough scale, and enough power, to handle meaningful computation.”
    “This piece of research is an important milestone on the journey that will get us there,” Prof. Morello says.
    Quantum computing in silicon hits the 99% threshold
    Morello’s paper is one of three published today in Nature that independently confirm that robust, reliable quantum computing in silicon is now a reality. This breakthrough features on the front cover of the journal. Morello et al. achieved 1-qubit operation fidelities up to 99.95 per cent, and 2-qubit fidelity of 99.37 per cent with a three-qubit system comprising an electron and two phosphorus atoms, introduced in silicon via ion implantation. A Delft team in the Netherlands led by Lieven Vandersypen achieved 99.87 per cent 1-qubit and 99.65 per cent 2-qubit fidelities using electron spins in quantum dots formed in a stack of silicon and silicon-germanium alloy (Si/SiGe). A RIKEN team in Japan led by Seigo Tarucha similarly achieved 99.84 per cent 1-qubit and 99.51 per cent 2-qubit fidelities in a two-electron system using Si/SiGe quantum dots.

  • Inner workings of quantum computers

    A precision diagnostic developed at the Department of Energy’s Sandia National Laboratories is emerging as a gold standard for detecting and describing problems inside quantum computing hardware.
    Two papers published today in the scientific journal Nature describe how separate research teams — one including Sandia researchers — used a Sandia technique called gate set tomography (GST) to develop and validate highly reliable quantum processors. Sandia has been developing gate set tomography since 2012, with funding from the DOE Office of Science through the Advanced Scientific Computing Research program.
    Sandia scientists collaborated with Australian researchers at the University of New South Wales in Sydney, led by Professor Andrea Morello, to publish one of today’s papers. Together, they used GST to show that a sophisticated, three-qubit system comprising two atomic nuclei and one electron in a silicon chip could be manipulated reliably with 99%-plus accuracy.
    In another Nature article appearing today, a group led by Professor Lieven Vandersypen at Delft University of Technology in the Netherlands used gate set tomography, implemented using Sandia software, to demonstrate the important milestone of 99%-plus accuracy but with a different approach, controlling electrons trapped within quantum dots instead of isolated atomic nuclei.
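    Gate set tomography itself fits a detailed error model to the outcomes of many circuits, which is beyond a short example, but a simpler related quantity is the average gate fidelity. For a purely unitary error it follows from the standard formula F_avg = (|Tr(U†V)|²/d + 1)/(d + 1), with d = 4 for two qubits; the sketch below applies it to a CNOT gate with a small, hypothetical over-rotation to show what a “99%-plus” figure corresponds to.

        import numpy as np

        # Average gate fidelity of a unitary error against an ideal CNOT, using
        # F_avg = (|Tr(U^dag V)|^2 / d + 1) / (d + 1). This is not gate set
        # tomography; the over-rotation angle is an arbitrary illustrative error.
        CNOT = np.array([[1, 0, 0, 0],
                         [0, 1, 0, 0],
                         [0, 0, 0, 1],
                         [0, 0, 1, 0]], dtype=complex)

        def avg_gate_fidelity(U_ideal, U_actual):
            d = U_ideal.shape[0]
            overlap = abs(np.trace(U_ideal.conj().T @ U_actual)) ** 2
            return (overlap / d + 1) / (d + 1)

        theta = 0.05                                  # assumed stray rotation, radians
        stray_rz = np.diag(np.exp(-1j * theta / 2 * np.array([1, -1, 1, -1])))  # Z rotation on qubit 2
        noisy_cnot = stray_rz @ CNOT

        print(f"average gate fidelity: {avg_gate_fidelity(CNOT, noisy_cnot):.4f}")  # ~0.9995
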
    “We want researchers everywhere to know they have access to a powerful, cutting-edge tool that will help them make their breakthroughs,” said Sandia scientist Robin Blume-Kohout.
    Future quantum processors with many more qubits, or quantum bits, could enable users working in national security, science and industry to perform some tasks faster than they ever could with a conventional computer. But flaws in current system controls cause computational errors. A quantum computer can correct some errors, but the more errors it must correct, the larger and more expensive that computer becomes to build.

  • Solving a crystal's structure when you've only got powder

    Crystals reveal the hidden geometry of molecules to the naked eye. Scientists use crystals to figure out the atomic structure of new materials, but many materials can’t be grown into crystals large enough to study. Now, a team of researchers reports in the January 19 issue of Nature a new technique that can determine the crystalline structure of any material.
    To truly understand a chemical, a scientist needs to know how its atoms are arranged. Sometimes that’s easy: for example, both diamond and gold are made of a single kind of atom (carbon or gold, respectively) arranged in a cubic grid. But often it’s harder to figure out more complicated ones.
    “Every single one of these is a special snowflake — growing them is really difficult,” says UConn chemical physicist Nate Hohman. Hohman studies metal organic chalcogenolates. They’re made of a metal combined with an organic polymer and an element from column 16 of the periodic table (sulfur, selenium, tellurium or polonium). Some are brightly colored pigments; others become more electrically conductive when light is shined on them; others make good solid lubricants that don’t burn up in the high temperatures of oil refineries or mines.
    It’s a large, useful family of chemicals. But the ones Hohman studies — hybrid chalcogenolates — are really difficult to crystallize. Hohman’s lab couldn’t solve the atomic structures, because they couldn’t grow large perfect crystals. Even the tiny powdered crystals they could get were imperfect and messy.
    X-ray crystallography is the standard way to figure out the atomic arrangements of more complicated materials. A famous, early example was how Rosalind Franklin used it to figure out the structure of DNA. She isolated large, perfect pieces of DNA in crystalline form, and then illuminated them with x-rays. X-ray wavelengths are so short that the rays diffract through the spaces between atoms, the same way visible light diffracts through slits in metal. By doing the math on the diffraction pattern, you can figure out the spacing of the slits — or atoms — that made it.
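    Peak by peak, the “math on the diffraction pattern” rests on Bragg’s law, nλ = 2d·sinθ: each diffraction angle encodes a spacing d between planes of atoms. The short example below uses the common copper Kα wavelength and an arbitrary example peak angle (not data from this work) to recover a lattice spacing.

        import math

        # Bragg's law: n * wavelength = 2 * d * sin(theta). The wavelength is the
        # standard Cu K-alpha line; the peak angle is an arbitrary example.
        wavelength_nm = 0.15406        # Cu K-alpha X-ray wavelength
        two_theta_deg = 28.4           # example peak position (2*theta, degrees)
        n = 1                          # first-order reflection

        theta = math.radians(two_theta_deg / 2)
        d_spacing = n * wavelength_nm / (2 * math.sin(theta))
        print(f"lattice spacing d = {d_spacing:.3f} nm")   # ~0.314 nm
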
    Once you know the atomic structure of a material, a whole new world opens up. Materials scientists use that information to design specific materials to do special things. For example, maybe you have a material that bends light in cool ways, so that it becomes invisible under ultraviolet light. If you understand the atomic structure, you might be able to tweak it — substitute a similar element of a different size in a specific spot, say — and make it do the same thing in visible light. Voila, an invisibility cloak!

  • Smart windows can significantly reduce indoor pathogens

    Daylight passing through smart windows results in almost complete disinfection of surfaces within 24 hours while still blocking harmful ultraviolet (UV) light, according to new research from UBC’s Okanagan campus.
    Dr. Sepideh Pakpour is an Assistant Professor at UBC Okanagan’s School of Engineering. For this research, she tested four strains of hazardous bacteria — methicillin-resistant Staphylococcus aureus, Klebsiella pneumoniae, E. coli and Pseudomonas aeruginosa — using a mini living-lab set-up. The lab had smart windows, which tint dynamically based on outdoor conditions, and traditional windows with blinds. The researchers found that, compared to windows with blinds, the smart windows significantly reduced bacterial growth rates and viability.
    In their darkest tint state, Dr. Pakpour says smart windows blocked more than 99.9 per cent of UV light, but still let in short-wavelength, high-energy daylight which acts as a disinfectant. This shorter wavelength light effectively eliminated contamination on glass, plastic and fabric surfaces.
    In contrast, traditional window blinds blocked almost all daylight, preventing surfaces from being disinfected. Blinds also collect dust and germs that get resuspended into the air whenever they are adjusted, with Dr. Pakpour noting that previous research has shown 92 per cent of hospital curtains become contaminated within a week of being cleaned.
    “We know that daylight kills bacteria and fungi,” she says. “But the question is, are there ways to harness that benefit in buildings, while still protecting us from glare and UV radiation? Our findings demonstrate the benefits of smart windows for disinfection, and have implications for infectious disease transmission in laboratories, health-care facilities and the buildings in which we live and work.”
    The pandemic has elevated concerns about how buildings might influence the health of the people inside. While particular attention has been paid to ventilation, cleaning and filtration, the importance of daylight has been ignored. According to research shared in a recent Harvard Business Review, office workers are pushing for “healthy buildings” as part of the return to work and consistently rank access to daylight and views among their most desired amenities.

  • Self-organization of complex structures: A matter of time

    LMU researchers have developed a new strategy for manufacturing nanoscale structures in a time- and resource-efficient manner.
    Macromolecular assemblies such as cellular structures or virus capsids can emerge from small building blocks without external control to form complex spatial structures. This self-organization is a central feature of biological systems. But such self-organized processes are also becoming increasingly important for the building of complex nanoparticles in nanotechnological applications. In DNA origami, for instance, larger structures are created out of individual DNA strands.
    But how can these reactions be optimized? This is the question that LMU physicist Prof. Erwin Frey and his team are investigating. The researchers have now developed an approach based on the concept of time complexity, which allows new strategies to be created for the more efficient synthesizing of complex structures, as they report in the journal PNAS.
    A concept from computer science
    Time complexity is originally a concept from computer science. It describes how the amount of time an algorithm needs grows as there is more data to process. When the volume of data doubles, for example, the time required could double, quadruple, or increase by an even higher power. In the worst case, the running time of the algorithm increases so much that a result can no longer be produced within a reasonable timeframe.
    “We applied this concept to self-organization,” explains Frey. “Our approach was: How does the time required to build large structures change when the number of individual building blocks increases?” If we assume — analogously to the case in computing — that the requisite period of time increases by a very high power as the number of components increases, this would practically render syntheses of large structures impossible. “As such, people want to develop methods in which the time depends as little as possible on the number of components,” explains Frey.
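    As a toy numerical illustration of that point (the scaling laws and the time unit are assumptions for illustration, not results from the paper), compare how quickly the assembly time grows when it scales as N, N², or N³ in the number of building blocks.

        # Toy illustration of time complexity in self-assembly: how the build time
        # would grow if it scaled as N**1, N**2 or N**3. Purely illustrative.
        def assembly_time(n_components, exponent):
            """Time units needed if T(N) scales as N**exponent."""
            return float(n_components) ** exponent

        for n in (10, 100, 1000):
            row = "  ".join(f"N^{a}={assembly_time(n, a):.0e}" for a in (1, 2, 3))
            print(f"N={n:5d}  {row}")
        # Going from 10 to 1000 components raises the linear time 100-fold,
        # but the cubic time a million-fold.
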
    The LMU researchers have now carried out such time complexity analyses using computer simulations and mathematical analysis and developed a new method for manufacturing complex structures. Their theory shows that different strategies for building complex molecules have completely different time complexities — and thus also different efficiencies. Some methods are more, and others less, suitable for synthesizing complex structures in nanotechnology. “Our time complexity analysis leads to a simple but informative description of self-assembly processes in order to precisely predict how the parameters of a system must be controlled to achieve optimum efficiency,” explains Florian Gartner, a member of Frey’s group and lead author of the paper.
    The team demonstrated the practicability of the new approach using a well-known example from the field of nanotechnology: The scientists analyzed how to efficiently manufacture a highly symmetrical viral envelope. Computer simulations showed that two different assembly protocols led to high yields in a short window of time.
    A new strategy for self-organization
    Until now, scientists carrying out such experiments have relied on an experimentally complicated method that involves modifying the bond strengths between individual building blocks. “By contrast, our model is based exclusively on controlling the availability of the individual building blocks, thus offering a simpler and more effective option for regulating artificial self-organization processes,” explains Gartner. With regard to its time efficiency, the new technique is comparable to, and in some cases better than, established methods. “Most of all, this schema promises to be more versatile and practical than conventional assembly strategies,” reports the physicist.
    “Our work presents a new conceptual approach to self-organization, which we are convinced will be of great interest for physics, chemistry, and biology,” summarizes Frey. “In addition, it puts forward concrete practical suggestions for new experimental protocols in nanotechnology and synthetic and molecular biology.”

  • New models assess bridge support repairs after earthquakes

    Steel-reinforced concrete columns that support many of the world’s bridges are designed to withstand earthquakes, but always require inspection and often repair once the shaking is over.
    These repairs usually involve replacing loose concrete and fractured steel bars and adding extra materials around the damaged area to further strengthen it against future loads.
    Engineers at Rice University’s George R. Brown School of Engineering and Texas A&M University have developed an innovative computational modeling strategy to make planning these repairs more effective.
    The study by Rice postdoctoral research associate Mohammad Salehi and civil and environmental engineers Reginald DesRoches of Rice and Petros Sideris of Texas A&M appears in the journal Engineering Structures. DesRoches is also the current provost and the incoming president of Rice.
    “When we design bridges and other structures for earthquakes, the goal is collapse prevention,” DesRoches said. “But particularly in larger earthquakes, we fully expect them to be damaged. In this study, we show analytically that those damages can be repaired in a way that the original, or close to the original, performance can be achieved.”
    Their models simulate how columns are likely to respond globally (in terms of base shear and lateral displacement) and locally (in terms of stress and strain) in future earthquakes when various repair methods are used.
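    As a rough illustration of the “global” quantities named here (and emphatically not the authors’ modeling strategy), the sketch below treats a column as an elastic single-degree-of-freedom oscillator, integrates its response to a synthetic ground-acceleration pulse with the central-difference method, and reports the peak lateral displacement and peak base shear. Every parameter value is an assumption for illustration.

        import numpy as np

        # Elastic single-degree-of-freedom idealization of a bridge column under a
        # synthetic ground-acceleration pulse, integrated with the central-difference
        # method. All parameter values are illustrative assumptions.
        m = 200e3                        # tributary deck mass, kg
        k = 8.0e7                        # lateral stiffness of the column, N/m
        c = 2 * 0.05 * np.sqrt(k * m)    # 5% of critical viscous damping

        dt = 0.001
        t = np.arange(0.0, 10.0, dt)
        ag = 3.0 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.5 * t)  # ground accel., m/s^2

        # Central-difference integration of m*u'' + c*u' + k*u = -m*ag(t), starting at rest.
        khat = m / dt**2 + c / (2 * dt)
        a_coef = m / dt**2 - c / (2 * dt)
        b_coef = k - 2 * m / dt**2

        u = np.zeros_like(t)             # lateral displacement relative to the ground
        for i in range(1, len(t) - 1):
            u[i + 1] = (-m * ag[i] - a_coef * u[i - 1] - b_coef * u[i]) / khat

        print(f"peak lateral displacement: {1e3 * np.max(np.abs(u)):.1f} mm")
        print(f"peak base shear:           {np.max(np.abs(k * u)) / 1e3:.0f} kN")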