More stories

  • Simulating 800,000 years of California earthquake history to pinpoint risks

    Massive earthquakes are, fortunately, rare events. But that scarcity of information blinds us in some ways to their risks, especially when it comes to determining the risk for a specific location or structure.
    “We haven’t observed most of the possible events that could cause large damage,” explained Kevin Milner, a computer scientist and seismology researcher at the Southern California Earthquake Center (SCEC) at the University of Southern California. “Using Southern California as an example, we haven’t had a truly big earthquake since 1857 — that was the last time the southern San Andreas broke in a massive magnitude 7.9 earthquake. A San Andreas earthquake could impact a much larger area than the 1994 Northridge earthquake, and other large earthquakes can occur too. That’s what we’re worried about.”
    The traditional ways of getting around this lack of data involve digging trenches to learn more about past ruptures, collating information from lots of earthquakes around the world and creating a statistical model of hazard, or using supercomputers to simulate a specific earthquake in a specific place with a high degree of fidelity.
    However, a new framework for predicting the likelihood and impact of earthquakes over an entire region, developed by a team of researchers associated with SCEC over the past decade, has found a middle ground and perhaps a better way to ascertain risk.
    A new study led by Milner and Bruce Shaw of Columbia University, published in the Bulletin of the Seismological Society of America in January 2021, presents results from a prototype Rate-State earthquake simulator, or RSQSim, that simulates hundreds of thousands of years of seismic history in California. Coupled with another code, CyberShake, the framework can calculate the amount of shaking that would occur for each quake. Their results compare well with historical earthquakes and the results of other methods, and display a realistic distribution of earthquake probabilities.
    According to the developers, the new approach improves the ability to pinpoint how large an earthquake might be expected at a given location, allowing building code developers, architects, and structural engineers to design more resilient buildings that can survive earthquakes at a specific site.
    “For the first time, we have a whole pipeline from start to finish where earthquake occurrence and ground-motion simulation are physics-based,” Milner said. “It can simulate up to 100,000s of years on a really complicated fault system.”
    Applying massive computer power to big problems
    RSQSim transforms mathematical representations of the geophysical forces at play in earthquakes — the standard model of how ruptures nucleate and propagate — into algorithms, and then solves them on some of the most powerful supercomputers on the planet. The computationally intensive research was enabled over several years by government-sponsored supercomputers, including Frontera at the Texas Advanced Computing Center — the most powerful system at any university in the world — Blue Waters at the National Center for Supercomputing Applications, and Summit at the Oak Ridge Leadership Computing Facility.
    “One way we might be able to do better in predicting risk is through physics-based modeling, by harnessing the power of systems like Frontera to run simulations,” said Milner. “Instead of an empirical statistical distribution, we simulate the occurrence of earthquakes and the propagation of their waves.”
    “We’ve made a lot of progress on Frontera in determining what kind of earthquakes we can expect, on which fault, and how often,” said Christine Goulet, Executive Director for Applied Science at SCEC, also involved in the work. “We don’t prescribe or tell the code when the earthquakes are going to happen. We launch a simulation of hundreds of thousands of years, and just let the code transfer the stress from one fault to another.”
    The simulations began with the geological topography of California and simulated, over 800,000 virtual years, how stresses form and dissipate as tectonic forces act on the Earth. From these simulations, the framework generated a catalog — a record that an earthquake occurred at a certain place with a certain magnitude and attributes at a given time. The catalog that the SCEC team produced on Frontera and Blue Waters was among the largest ever made, Goulet said. The outputs of RSQSim were then fed into CyberShake, which again used computer models of geophysics to predict how much shaking (in terms of ground acceleration, velocity, and duration) would occur as a result of each quake.
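    As a purely illustrative sketch of this kind of pipeline, the hypothetical snippet below turns a long synthetic catalog into a site-specific hazard estimate by counting how often simulated shaking at a site exceeds a threshold and dividing by the catalog length; the record fields, the toy ground-motion function, and all numbers are invented stand-ins rather than the actual RSQSim or CyberShake formats and models.
    ```python
    # Hypothetical sketch: synthetic earthquake catalog -> site hazard estimate.
    # Fields, the toy ground-motion function, and all numbers are illustrative only.
    import random

    random.seed(0)
    CATALOG_YEARS = 800_000

    # Stand-in for a long simulated catalog: one record per rupture.
    catalog = [
        {"year": random.uniform(0, CATALOG_YEARS),
         "magnitude": random.uniform(6.0, 8.0),
         "distance_km": random.uniform(5, 200)}   # distance from rupture to the site
        for _ in range(50_000)
    ]

    def peak_ground_accel(magnitude, distance_km):
        """Toy ground-motion proxy (placeholder for a physics-based simulation):
        bigger and closer ruptures shake the site more."""
        return 10 ** (0.3 * magnitude - 1.3) / (distance_km + 10)

    THRESHOLD_G = 0.2   # shaking level of interest at the site, in g
    exceedances = sum(
        peak_ground_accel(e["magnitude"], e["distance_km"]) > THRESHOLD_G
        for e in catalog
    )

    # Annual rate at which the site experiences shaking above the threshold
    annual_rate = exceedances / CATALOG_YEARS
    print(f"annual exceedance rate: {annual_rate:.2e} per year")
    ```
    In the actual framework, the placeholder function is replaced by CyberShake's physics-based ground-motion simulations, which is what distinguishes the approach from purely empirical statistical models.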
    “The framework outputs a full slip-time history: where a rupture occurs and how it grew,” Milner explained. “We found it produces realistic ground motions, which tells us that the physics implemented in the model is working as intended.” They have more work planned for validation of the results, which is critical before acceptance for design applications.
    The researchers found that the RSQSim framework produces rich, variable earthquakes overall — a sign it is producing reasonable results — while also generating repeatable source and path effects.
    “For lots of sites, the shaking hazard goes down, relative to state-of-practice estimates,” Milner said. “But for a couple of sites that have special configurations of nearby faults or local geological features, like near San Bernardino, the hazard went up. We are working to better understand these results and to define approaches to verify them.”
    The work is helping to determine the probability of an earthquake occurring along any of California’s hundreds of earthquake-producing faults, the scale of earthquake that could be expected, and how it may trigger other quakes.
    Support for the project comes from the U.S. Geological Survey (USGS), National Science Foundation (NSF), and the W.M. Keck Foundation. Frontera is NSF’s leadership-class national resource. Compute time on Frontera was provided through a Large-Scale Community Partnership (LSCP) award to SCEC that allows hundreds of U.S. scholars access to the machine to study many aspects of earthquake science. LSCP awards provide extended allocations of up to three years to support long-lived research efforts. SCEC — which was founded in 1991 and has computed on TACC systems for over a decade — is a premier example of such an effort.
    The creation of the catalog required eight days of continuous computing on Frontera and used more than 3,500 processors in parallel. Simulating the ground shaking at 10 sites across California required a comparable amount of computing on Summit, the second fastest supercomputer in the world.
    “Adoption by the broader community will be understandably slow,” said Milner. “Because such results will impact safety, it is part of our due diligence to make sure these results are technically defensible by the broader community,” added Goulet. But research results such as these are important in order to move beyond generalized building codes that in some cases may inadequately represent the risk a region faces and in other cases may be too conservative.
    “The hope is that these types of models will help us better characterize seismic hazard so we’re spending our resources to build strong, safe, resilient buildings where they are needed the most,” Milner said.
    Video: https://www.youtube.com/watch?v=AdGctQsjKpU&feature=emb_logo

  • Adding or subtracting single quanta of sound

    Researchers perform experiments that can add or subtract a single quantum of sound — with surprising results when applied to noisy sound fields.
    Quantum mechanics tells us that physical objects can have both wave and particle properties. For instance, a single particle — or quantum — of light is known as a photon, and, in a similar fashion, a single quantum of sound is known as a phonon, which can be thought of as the smallest unit of sound energy.
    A team of researchers spanning Imperial College London, University of Oxford, the Niels Bohr Institute, University of Bath, and the Australian National University have performed an experiment that can add or subtract a single phonon to a high-frequency sound field using interactions with laser light.
    The team’s findings aid the development of future quantum technologies, such as hardware components in a future ‘quantum internet’, and help pave the way for tests of quantum mechanics on a more macroscopic scale. The details of their research are published today in the journal Physical Review Letters.
    To add or subtract a single quantum of sound, the team experimentally implement a technique proposed in 2013 that exploits correlations between photons and phonons created inside a resonator. More specifically, laser light is injected into a crystalline microresonator that supports both the light and the high-frequency sound waves.
    The two types of waves then couple to one another via an electromagnetic interaction that creates light at a new frequency. Then, to subtract a single phonon, the team detect a single photon that has been up-shifted in frequency. “Detecting a single photon gives us an event-ready signal that we have subtracted a single phonon,” says lead author of the project Georg Enzian.
    When the experiment is performed at a finite temperature, the sound field has random fluctuations from thermal noise. Thus, at any one time, the exact number of sound quanta present is unknown but on average there will be n phonons initially.
    What happens now when you add or subtract a single phonon? At first thought, you might expect this would simply change the average to n + 1 or n - 1, respectively; however, the actual outcome defies this intuition. Indeed, quite counterintuitively, when you subtract a single phonon, the average number of phonons actually goes up to 2n.
    This surprising result, in which the mean number of quanta doubles, has previously been observed in all-optical photon-subtraction experiments and is observed here for the first time outside of optics. “One way to think of the experiment is to imagine a claw machine that you often see in video arcades, except that you can’t see how many toys there are inside the machine. Before you agree to play, you’ve been told that on average there are n toys inside, but the exact number changes randomly each time you play. Then, immediately after a successful grab with the claw, the average number of toys actually goes up to 2n,” describes Michael Vanner, Principal Investigator of the Quantum Measurement Lab at Imperial College London.
    It’s important to note that this result certainly does not violate energy conservation and comes about due to the statistics of thermal phonons.
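    The doubling can be checked with a short Monte Carlo sketch: draw phonon numbers from a thermal (Bose-Einstein) distribution with mean n, weight each draw by the heralding probability, which grows with the phonon number, and compute the mean of the heralded state with one phonon removed. This is an idealized illustration of the thermal statistics only, not a model of the team's optomechanical experiment.
    ```python
    # Idealized check of heralded single-phonon subtraction from a thermal state.
    import numpy as np

    rng = np.random.default_rng(0)
    n_bar = 3.0                                # initial mean thermal phonon number
    p = 1.0 / (1.0 + n_bar)                    # Bose-Einstein (geometric) parameter
    n = rng.geometric(p, size=1_000_000) - 1   # thermal samples with support 0, 1, 2, ...

    # The heralding click (the up-shifted photon) occurs with probability
    # proportional to the phonon number n, so weight each sample by n and
    # remove one phonon from the heralded state.
    weights = n.astype(float)
    mean_after_subtraction = np.average(n - 1, weights=weights)

    print(f"initial mean:           {n.mean():.3f}")                  # ~ n_bar
    print(f"mean after subtraction: {mean_after_subtraction:.3f}")    # ~ 2 * n_bar
    ```
    The weighting is the key: the herald fires more often when the field happens to contain many phonons, so the post-subtraction ensemble is biased toward the noisier realizations, and its mean ends up at 2n.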
    The team’s results, combined with their recent experiment that reported strong coupling between light and sound in a microresonator, open a new path for quantum science and technology with sound waves.

    Story Source:
    Materials provided by Imperial College London.

  • Sport may fast-track numeracy skills for Indigenous children

    Greater sports participation among Aboriginal and Torres Strait Islander children is linked with better academic performance, according to new research from the University of South Australia.
    Conducted in partnership with the University of Sydney and the University of Technology Sydney, the world-first study found that Aboriginal and Torres Strait Islander children who played organised sports every year over four years had numeracy skills that were advanced by seven months compared with children who did less sport.
    The study used data from four successive waves of Australia’s Longitudinal Study of Indigenous Children, following 303 students (with a baseline age of five to six years old) to assess cumulative sports participation against academic performance in standardised NAPLAN and PAT outcomes.
    Sports participation has been linked with better cognitive function and memory in many child populations, but this is the first study to confirm the beneficial association between ongoing involvement in sport and academic performance among Aboriginal and Torres Strait Islander children.
    Lead researcher, UniSA’s Dr Dot Dumuid, says the study highlights the importance of sports as a strategy to help close the gap for Australia’s First Nations peoples.
    “Playing sport has always had strong cultural importance to Aboriginal and Torres Strait Islanders, so understanding how sports can boost numeracy among Indigenous children is a valuable step towards improving health and reducing disadvantage,” Dr Dumuid says.
    “When children play sport, they’re learning the social structures of a team, how to work within rules, how to focus their attention, and key strategies for success.
    “Interestingly, when children play sport, they’re not only activating parts of the brain that are involved in learning, but they’re also inadvertently practising mathematical computations such as ‘how much time is left in the game?’ and ‘how many points do we need to win?’, and it’s this that may well be contributing to improved numeracy.”
    Aboriginal and Torres Strait Islanders comprise a relatively large proportion of athletes in Australia’s leading sports teams. While only representing about three per cent of the population, they make up nine per cent of AFL players and 22 per cent of State of Origin players.
    Encouraging sports in Aboriginal and Torres Strait Islander communities could have many other benefits for health and wellbeing, says co-researcher and Professor of Indigenous Health Education at UTS, John Evans.
    “Playing sport creates a sense of belonging, and builds self-esteem, coherence and purpose,” Professor Evans says.
    “This is especially important for people living in rural and remote areas where opportunities for social interaction and structured activities can be limited.
    “If we can find ways to encourage greater participation among Aboriginal and Torres Strait Islander communities, while removing key barriers — such as financial costs and lack of transport — we could promote healthier living and more cohesive societies while also boosting academic performance among Indigenous children.”

    Story Source:
    Materials provided by University of South Australia.

  • New blueprint for more stable quantum computers

    Researchers at the Paul Scherrer Institute PSI have put forward a detailed plan of how faster and better defined quantum bits — qubits — can be created. The central elements are magnetic atoms from the class of so-called rare-earth metals, which would be selectively implanted into the crystal lattice of a material. Each of these atoms represents one qubit. The researchers have demonstrated how these qubits can be activated, entangled, used as memory bits, and read out. They have now published their design concept and supporting calculations in the journal PRX Quantum.
    On the way to quantum computers, an initial requirement is to create so-called quantum bits or “qubits”: memory bits that can, unlike classical bits, take on not only the binary values of zero and one, but also any arbitrary combination of these states. “With this, an entirely new kind of computation and data processing becomes possible, which for specific applications means an enormous acceleration of computing power,” explains PSI researcher Manuel Grimm, first author of a new paper on the topic of qubits.
    The authors describe how logical bits and basic computer operations on them can be realised in a magnetic solid: qubits would reside on individual atoms from the class of rare-earth elements, built into the crystal lattice of a host material. On the basis of quantum physics, the authors calculate that the nuclear spin of the rare-earth atoms would be suitable for use as an information carrier, that is, a qubit. They further propose that targeted laser pulses could momentarily transfer the information to the atom’s electrons and thus activate the qubits, whereby their information becomes visible to surrounding atoms. Two such activated qubits communicate with each other and thus can be “entangled.” Entanglement is a special property of quantum systems of multiple particles or qubits that is essential for quantum computers: The result of measuring one qubit directly depends on the measurement results of other qubits, and vice versa.
    Faster means less error-prone
    The researchers demonstrate how these qubits can be used to produce logic gates, most notably the “controlled NOT gate” (CNOT gate). Logic gates are the basic building blocks that classical computers also use to perform calculations. If sufficiently many such CNOT gates as well as single-qubit gates are combined, every conceivable computational operation becomes possible. They thus form the basis for quantum computers.
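    As a generic numerical illustration, independent of the rare-earth scheme proposed here, the CNOT gate is a 4x4 matrix that flips the target qubit only when the control qubit is in state |1>; preceded by a single-qubit Hadamard gate, it turns a product state into an entangled Bell state.
    ```python
    # Generic two-qubit CNOT illustration (qubit order: control (x) target).
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard gate
    I = np.eye(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    # CNOT flips the target only when the control is |1>: |10> -> |11>
    ket10 = np.array([0, 0, 1, 0])
    print(CNOT @ ket10)              # -> [0, 0, 0, 1], i.e. |11>

    # Hadamard on the control followed by CNOT entangles the pair:
    # |00> -> (|00> + |11>) / sqrt(2), a Bell state
    ket00 = np.array([1, 0, 0, 0])
    bell = CNOT @ np.kron(H, I) @ ket00
    print(bell)                      # -> [0.707, 0, 0, 0.707]
    ```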
    This paper is not the first to propose quantum-based logic gates. “Our method of activating and entangling the qubits, however, has a decisive advantage over previous comparable proposals: It is at least ten times faster,” says Grimm. The advantage, though, is not only the speed with which a quantum computer based on this concept could calculate; above all, it addresses the system’s susceptibility to errors. “Qubits are not very stable. If the entanglement processes are too slow, there is a greater probability that some of the qubits will lose their information in the meantime,” Grimm explains. Ultimately, what the PSI researchers have discovered is a way of making this type of quantum computer not only at least ten times as fast as comparable systems, but also less error-prone by the same factor.

    Story Source:
    Materials provided by Paul Scherrer Institute. Original written by Laura Hennemann.

  • AI trained to read electric vehicle charging station reviews to find infrastructure gaps

    Although electric vehicles that reduce greenhouse gas emissions attract many drivers, the lack of confidence in charging services deters others. Building a reliable network of charging stations is difficult in part because it’s challenging to aggregate data from independent station operators. But now, researchers reporting January 22 in the journal Patterns have developed an AI that can analyze user reviews of these stations, allowing it to accurately identify places where there are insufficient or out-of-service stations.
    “We’re spending billions of both public and private dollars on electric vehicle infrastructure,” says Omar Asensio (@AsensioResearch), principal investigator and assistant professor in the School of Public Policy at the Georgia Institute of Technology. “But we really don’t have a good understanding of how well these investments are serving the public and public interest.”
    Electric vehicle drivers have started to address the problem of uncertain charging infrastructure by forming communities on charging station locator apps and leaving reviews. The researchers sought to analyze these reviews to better understand the problems facing users.
    With the aid of their AI, Asensio and colleagues were able to predict whether a specific station was functional on a particular day. They also found that micropolitan areas, where the population is between 10,000 and 50,000 people, may be underserved, with more frequent reports of station availability issues. These communities are mostly located in states in the West and Midwest, such as Oregon, Utah, South Dakota, and Nebraska, along with Hawaii.
    “When users are engaging and sharing information about charging experiences, they are often engaging in prosocial or pro-environmental behavior, which gives us rich behavioral information for machine learning,” says Asensio. But compared to analyzing data tables, texts can be challenging for computers to process. “A review could be as short as three words. It could also be as long as 25 or 30 words with misspellings and multiple topics,” says co-author Sameer Dharur of Georgia Institute of Technology. Users sometimes even throw smiley faces or emojis into the texts.
    To address the problem, Asensio and his team tailored their algorithm to electric vehicle transportation lingo. They trained it with reviews from 12,720 US charging stations to classify reviews into eight different categories: functionality, availability, cost, location, dealership, user interaction, service time, and range anxiety. The AI achieved a 91% accuracy and high learning efficiency in parsing the reviews in minutes. “That’s a milestone in the transition for us to deploy these AI tools because it’s no longer ‘can the AI do as good as human?'” says Asensio. “In some cases, the AI exceeded the performance of human experts.”
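    For a rough sense of the task, a review classifier of this kind can be prototyped with an off-the-shelf bag-of-words pipeline; the sketch below uses invented example reviews and a simple TF-IDF plus logistic-regression baseline, not the authors' actual model or data.
    ```python
    # Minimal sketch of review-category classification with invented examples;
    # the study's own model and training data differ.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical charging-station reviews and category labels
    reviews = [
        "station was broken again, no charge :(",
        "charger worked fine, quick top-up",
        "all four plugs occupied, had to wait an hour",
        "way too expensive per kWh at this location",
        "hard to find, tucked behind the dealership",
        "friendly staff helped me get connected",
    ]
    labels = [
        "functionality", "functionality", "availability",
        "cost", "location", "user interaction",
    ]

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(reviews, labels)

    # Predicted category for a new review; with this toy data it should
    # lean toward 'functionality'.
    print(model.predict(["the charger was broken"]))
    ```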
    Unlike previous evaluations of charging infrastructure performance, which rely on costly and infrequent self-reported surveys, the AI approach can reduce research costs while providing real-time, standardized data. The electric vehicle charging market is expected to grow to $27.6 billion by 2027. The new method can give insight into consumers’ behavior, enabling rapid policy analysis and making infrastructure management easier for governments and companies. For instance, the team’s findings suggest that it may be more effective to subsidize infrastructure development than the sale of electric cars.
    While the technology still faces some limitations — such as the need to reduce its computer processing power requirements — before it can be rolled out at scale across the electric vehicle charging market, Asensio and his team hope that, as the science progresses, their research can open doors to more in-depth studies of social equity on top of meeting consumer needs.
    “This is a wake-up call for us because, given the massive investment in electric vehicle infrastructure, we’re doing it in a way that is not necessarily attentive to the social equity and distributional issues of access to this enabling infrastructure,” says Asensio. “That is a topic of discussion that’s not going away and we’re only beginning to understand.”

    Story Source:
    Materials provided by Cell Press.

  • Defects may help scientists understand the exotic physics of topology

    Real-world materials are usually messier than the idealized scenarios found in textbooks. Imperfections can add complications and even limit a material’s usefulness. To get around this, scientists routinely strive to remove defects and dirt entirely, pushing materials closer to perfection. Now, researchers at the University of Illinois at Urbana-Champaign have turned this problem around and shown that for some materials defects could act as a probe for interesting physics, rather than a nuisance.
    The team, led by professors Gaurav Bahl and Taylor Hughes, studied artificial materials, or metamaterials, which they engineered to include defects. The team used these customizable circuits as a proxy for studying exotic topological crystals, which are often imperfect, difficult to synthesize, and notoriously tricky to probe directly. In a new study, published in the January 20th issue of Nature, the researchers showed that defects and structural deformations can provide insights into a real material’s hidden topological features.
    “Most studies in this field have focused on materials with perfect internal structure. Our team wanted to see what happens when we account for imperfections. We were surprised to discover that we could actually use defects to our advantage,” said Bahl, an associate professor in the Department of Mechanical Science and Engineering. With that unexpected assist, the team has created a practical and systematic approach for exploring the topology of unconventional materials.
    Topology is a way of mathematically classifying objects according to their overall shape, rather than every small detail of their structure. One common illustration of this is a coffee mug and a bagel, which have the same topology because both objects have only one hole that you can wrap your fingers through.
    Materials can also have topological features related to the classification of their atomic structure and energy levels. These features lead to unusual, yet possibly useful, electron behaviors. But verifying and harnessing topological effects can be tricky, especially if a material is new or unknown. In recent years, scientists have used metamaterials to study topology with a level of control that is nearly impossible to achieve with real materials.
    “Our group developed a toolkit for being able to probe and confirm topology without having any preconceived notions about a material,” says Hughes, who is a professor in the Department of Physics. “This has given us a new window into understanding the topology of materials, and how we should measure it and confirm it experimentally.”
    In an earlier study published in Science, the team established a novel technique for identifying insulators with topological features. Their findings were based on translating experimental measurements made on metamaterials into the language of electronic charge. In this new work, the team went a step further — they used an imperfection in the material’s structure to trap a feature that is equivalent to fractional charges in real materials.
    A single electron by itself cannot carry half a charge or some other fractional amount. But, fragmented charges can show up within crystals, where many electrons dance together in a ballroom of atoms. This choreography of interactions induces odd electronic behaviors that are otherwise disallowed. Fractional charges have not been measured in either naturally occurring or custom-grown crystals, but this team showed that analogous quantities can be measured in a metamaterial.
    The team assembled arrays of centimeter-scale microwave resonators onto a chip. “Each of these resonators plays the role of an atom in a crystal and, similar to an atom’s energy levels, has a specific frequency where it easily absorbs energy — in this case the frequency is similar to that of a conventional microwave oven,” said lead author Kitt Peterson, a former graduate student in Bahl’s group.
    The resonators are arranged into squares, repeating across the metamaterial. The team included defects by disrupting this square pattern — either by removing one resonator to make a triangle or adding one to create a pentagon. Since all the resonators are connected together, these singular disclination defects ripple out, warping the overall shape of the material and its topology.
    The team injected microwaves into each resonator of the array and recorded the amount of absorption. Then, they mathematically translated their measurements to predict how electrons act in an equivalent material. From this, they concluded that fractional charges would be trapped on disclination defects in such a crystal. With further analysis, the team also demonstrated that trapped fractional charge signals the presence of certain kinds of topology.
    “In these crystals, fractional charge turns out to be the most fundamental observable signature of interesting underlying topological features,” said Tianhe Li, a theoretical physics graduate student in Hughes’ research group and a co-author on the study.
    Observing fractional charges directly remains a challenge, but metamaterials offer an alternative way to test theories and learn about manipulating topological forms of matter. According to the researchers, reliable probes for topology are also critical for developing future applications for topological quantum materials.
    The connection between the topology of a material and its imperfect geometry is also broadly interesting for theoretical physics. “Engineering a perfect material does not necessarily reveal much about real materials,” says Hughes. “Thus, studying the connection between defects, like the ones in this study, and topological matter may increase our understanding of realistic materials, with all of their inherent complexities.”

  • Record-breaking laser link could help us test whether Einstein was right

    Scientists from the International Centre for Radio Astronomy Research (ICRAR) and The University of Western Australia (UWA) have set a world record for the most stable transmission of a laser signal through the atmosphere.
    In a study published today in the journal Nature Communications, Australian researchers teamed up with researchers from the French National Centre for Space Studies (CNES) and the French metrology lab Systèmes de Référence Temps-Espace (SYRTE) at Paris Observatory.
    The team set the world record for the most stable laser transmission by combining the Aussies’ ‘phase stabilisation’ technology with advanced self-guiding optical terminals.
    Together, these technologies allowed laser signals to be sent from one point to another without interference from the atmosphere.
    Lead author Benjamin Dix-Matthews, a PhD student at ICRAR and UWA, said the technique effectively eliminates atmospheric turbulence.
    “We can correct for atmospheric turbulence in 3D, that is, left-right, up-down and, critically, along the line of flight,” he said.
    “It’s as if the moving atmosphere has been removed and doesn’t exist.
    “It allows us to send highly-stable laser signals through the atmosphere while retaining the quality of the original signal.”
    The result is the world’s most precise method for comparing the flow of time between two separate locations using a laser system transmitted through the atmosphere.
    ICRAR-UWA senior researcher Dr Sascha Schediwy said the research has exciting applications.
    “If you have one of these optical terminals on the ground and another on a satellite in space, then you can start to explore fundamental physics,” he said.
    “Everything from testing Einstein’s theory of general relativity more precisely than ever before, to discovering if fundamental physical constants change over time.”
    The technology’s precise measurements also have practical uses in earth science and geophysics.
    “For instance, this technology could improve satellite-based studies of how the water table changes over time, or to look for ore deposits underground,” Dr Schediwy said.
    There are further potential benefits for optical communications, an emerging field that uses light to carry information.
    Optical communications can securely transmit data between satellites and Earth with much higher data rates than current radio communications.
    “Our technology could help us increase the data rate from satellites to ground by orders of magnitude,” Dr Schediwy said.
    “The next generation of big data-gathering satellites would be able to get critical information to the ground faster.”
    The phase stabilisation technology behind the record-breaking link was originally developed to synchronise incoming signals for the Square Kilometre Array telescope.
    The multi-billion-dollar telescope is set to be built in Western Australia and South Africa from 2021.

  • Bringing atoms to a standstill: Miniaturizing laser cooling

    It’s cool to be small. Scientists at the National Institute of Standards and Technology (NIST) have miniaturized the optical components required to cool atoms down to a few thousandths of a degree above absolute zero, the first step in employing them on microchips to drive a new generation of super-accurate atomic clocks, enable navigation without GPS, and simulate quantum systems.
    Cooling atoms is equivalent to slowing them down, which makes them a lot easier to study. At room temperature, atoms whiz through the air at nearly the speed of sound, some 343 meters per second. The rapid, randomly moving atoms have only fleeting interactions with other particles, and their motion can make it difficult to measure transitions between atomic energy levels. When atoms slow to a crawl — about 0.1 meters per second — researchers can measure the particles’ energy transitions and other quantum properties accurately enough to use as reference standards in a myriad of navigation and other devices.
    For more than two decades, scientists have cooled atoms by bombarding them with laser light, a feat for which NIST physicist Bill Phillips shared the 1997 Nobel Prize in physics. Although laser light would ordinarily energize atoms, causing them to move faster, if the frequency and other properties of the light are chosen carefully, the opposite happens. Upon striking the atoms, the laser photons reduce the atoms’ momentum until they are moving slowly enough to be trapped by a magnetic field.
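    A back-of-the-envelope photon-recoil estimate shows the scale of the job. The sketch below assumes rubidium-87 atoms and the 780-nanometer transition commonly used in laser cooling (the article does not name the species); each absorbed photon removes only a few millimeters per second of velocity, so slowing an atom from thermal speeds takes tens of thousands of scattering events.
    ```python
    # Rough photon-recoil estimate, assuming rubidium-87 and 780 nm cooling light.
    import numpy as np

    hbar = 1.054571817e-34            # J*s
    m_rb = 86.909 * 1.66053907e-27    # kg, mass of rubidium-87
    wavelength = 780e-9               # m, assumed cooling wavelength
    k = 2 * np.pi / wavelength        # photon wavenumber

    v_recoil = hbar * k / m_rb        # velocity change per absorbed photon
    v_initial = 343.0                 # m/s, thermal speed quoted in the article
    v_final = 0.1                     # m/s, the "crawl" speed quoted in the article

    n_photons = (v_initial - v_final) / v_recoil
    print(f"recoil velocity per photon: {v_recoil * 1e3:.2f} mm/s")   # ~5.9 mm/s
    print(f"scattering events needed:   {n_photons:.0f}")             # ~58,000
    ```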
    But to prepare the laser light so that it has the properties to cool atoms typically requires an optical assembly as big as a dining-room table. That’s a problem because it limits the use of these ultracold atoms outside the laboratory, where they could become a key element of highly accurate navigation sensors, magnetometers and quantum simulations.
    Now NIST researcher William McGehee and his colleagues have devised a compact optical platform, only about 15 centimeters (5.9 inches) long, that cools and traps gaseous atoms in a 1-centimeter-wide region. Although other miniature cooling systems have been built, this is the first one that relies solely on flat, or planar, optics, which are easy to mass produce.
    “This is important as it demonstrates a pathway for making real devices and not just small versions of laboratory experiments,” said McGehee. The new optical system, while still about 10 times too big to fit on a microchip, is a key step toward employing ultracold atoms in a host of compact, chip-based navigation and quantum devices outside a laboratory setting. Researchers from the Joint Quantum Institute, a collaboration between NIST and the University of Maryland in College Park, along with scientists from the University of Maryland’s Institute for Research in Electronics and Applied Physics, also contributed to the study.
    The apparatus, described online in the New Journal of Physics, consists of three optical elements. First, light is launched from an optical integrated circuit using a device called an extreme mode converter. The converter enlarges the narrow laser beam, initially about 500 nanometers (nm) in diameter (about five thousandths the thickness of a human hair), to 280 times that width. The enlarged beam then strikes a carefully engineered, ultrathin film known as a “metasurface” that’s studded with tiny pillars, about 600 nm in length and 100 nm wide.
    The nanopillars act to further widen the laser beam by another factor of 100. The dramatic widening is necessary for the beam to efficiently interact with and cool a large collection of atoms. Moreover, by accomplishing that feat within a small region of space, the metasurface miniaturizes the cooling process.
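    Multiplying out the expansion factors quoted above gives a sense of scale: the 500-nanometer beam grows to roughly 140 micrometers after the mode converter and to roughly 1.4 centimeters after the metasurface, comparable to the centimeter-wide trapping region. A quick arithmetic check:
    ```python
    # Beam widths implied by the expansion factors quoted in the article.
    initial = 500e-9                            # m, beam diameter leaving the photonic circuit
    after_converter = initial * 280             # extreme mode converter: ~140 micrometers
    after_metasurface = after_converter * 100   # metasurface: ~1.4 centimeters

    print(f"after mode converter: {after_converter * 1e6:.0f} um")
    print(f"after metasurface:    {after_metasurface * 1e2:.1f} cm")
    ```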
    The metasurface reshapes the light in two other important ways, simultaneously altering the intensity and polarization (direction of vibration) of the light waves. Ordinarily, the intensity follows a bell-shaped curve, in which the light is brightest at the center of the beam, with a gradual falloff on either side. The NIST researchers designed the nanopillars so that the tiny structures modify the intensity, creating a beam that has a uniform brightness across its entire width. The uniform brightness allows more efficient use of the available light. Polarization of the light is also critical for laser cooling.
    The expanding, reshaped beam then strikes a diffraction grating that splits the single beam into three pairs of equal and oppositely directed beams. Combined with an applied magnetic field, the four beams, pushing on the atoms in opposing directions, serve to trap the cooled atoms.
    Each component of the optical system — the converter, the metasurface and the grating — had been developed at NIST but was in operation at separate laboratories on the two NIST campuses, in Gaithersburg, Maryland and Boulder, Colorado. McGehee and his team brought the disparate components together to build the new system.
    “That’s the fun part of this story,” he said. “I knew all the NIST scientists who had independently worked on these different components, and I realized the elements could be put together to create a miniaturized laser cooling system.”
    Although the optical system will have to be 10 times smaller to laser-cool atoms on a chip, the experiment “is proof of principle that it can be done,” McGehee added.
    “Ultimately, making the light preparation smaller and less complicated will enable laser-cooling based technologies to exist outside of laboratories,” he said.