More stories

  • Quantum battles in attoscience: Following three debates

    The field of attoscience has been kickstarted by new advances in laser technology. Research began with studies of three particular processes. The first, ‘above-threshold ionization’ (ATI), occurs when an atom absorbs more photons than the minimum required to ionize it. The second, ‘high harmonic generation’ (HHG), occurs when a target illuminated by an intense laser pulse emits high-frequency harmonics as a nonlinear response. The third, ‘laser-induced nonsequential double ionization’ (NSDI), occurs when the laser field induces correlated dynamics within systems of multiple electrons.
    Using powerful, ultrashort laser pulses, researchers can now study how these processes unfold on timescales of just 10⁻¹⁸ seconds. This gives opportunities to study phenomena such as the motions of electrons within atoms, the dynamics of charges within molecules, and oscillations of electric fields within laser pulses.
    Today, many theoretical approaches are used to study attosecond physics. Within this landscape, two broadly opposing viewpoints have emerged: the ‘analytical’ approach, in which systems are studied using suitable approximations of physical processes; and the ‘ab-initio’ approach, where systems are broken down into their elemental parts, then analysed using fundamental physics.
    Using ATI, HHG, and NSDI as case studies, the first of the Quantum Battles papers explores this tension through a dialogue between two hypothetical theorists, each representing viewpoints expressed by the workshop’s discussion panel. The study investigates three main questions, relating to the scope and nature of both approaches, their relative advantages and disadvantages, and their complementary roles in scientific discovery so far.
    Another source of tension within the attoscience community relates to quantum tunnelling — describing how quantum particles can travel directly through energy barriers. Here, a long-standing debate exists over whether tunnelling occurs instantaneously, or if it requires some time; and if so, how much.
    The second paper follows this debate through analysis of the panel’s viewpoints, as they discussed the physical observables of tunnelling experiments; theoretical approaches to assessing tunnelling time; and the nature of tunnelling itself. The study aims to explain why so many approaches reach differing conclusions, given the lack of any universally agreed definition of tunnelling.
    The wave-like properties of matter are a further key concept in quantum mechanics. On attosecond timescales, intense laser fields can be used to exploit interference between matter waves of electrons. This allows researchers to create images with sub-atomic resolutions, while maintaining the ability to capture dynamics occurring on ultra-short timescales.
    The final ‘battle’ paper explores several questions which are rarely asked about this technique. In particular, it explores the physical differences between the roles of matter waves in HHG — which can be used to extend imaging capabilities; and ATI — which is used to generate packets of electron matter waves.
    The Quantum Battles workshop oversaw a wide variety of lively, highly interactive debates between a diverse range of participants: from leading researchers, to those just starting out in their careers. In many cases, the discussions clarified the points of tension that exist within the attoscience community. This format was seen as particularly innovative by the community and the general public, who could follow the discussions via dedicated social media platforms. One participant even referred to the Quantum Battles as a ‘breath of fresh air’.
    Quantum Battles promoted the view that while initial discoveries may stem from a specific perspective, scientific progress happens when representatives of many different viewpoints collaborate with each other. One immediate outcome is the “AttoFridays” online seminar series, which arose from the success of the workshop. With their fresh and open approach, Quantum Battles and AttoFridays will lead to more efficient and constructive discussions across institutional, scientific, and national borders.
    Story Source:
    Materials provided by Springer. Note: Content may be edited for style and length.

  • Novel advanced light design and fabrication process could revolutionize sensing technologies

    Vanderbilt and Penn State engineers have developed a novel approach to design and fabricate thin-film infrared light sources with near-arbitrary spectral output driven by heat. They paired it with a machine learning methodology called inverse design, which reduced the optimization time for these devices from weeks or months on a multi-core computer to a few minutes on a consumer-grade desktop.
    The ability to develop inexpensive, efficient, designer infrared light sources could revolutionize molecular sensing technologies. Additional applications include free-space communications, infrared beacons for search and rescue, and molecular sensors for monitoring industrial gases, environmental pollutants and toxins.
    The research team’s approach, detailed today in Nature Materials, uses simple thin-film deposition, one of the most mature nano-fabrication techniques, aided by key advances in materials and machine learning.
    Standard thermal emitters, such as incandescent lightbulbs, generate broadband thermal radiation that restricts their use to simple applications. In contrast, lasers and light emitting diodes offer the narrow frequency emission desired for many applications but are typically too inefficient and/or expensive. That has directed research toward wavelength-selective thermal emitters to provide the narrow bandwidth of a laser or LED, but with the simple design of a thermal emitter. However, to date most thermal emitters with user-defined output spectra have required patterned nanostructures fabricated with high-cost, low-throughput methods.
    The research team led by Joshua Caldwell, Vanderbilt associate professor of mechanical engineering, and Jon-Paul Maria, professor of materials science and engineering at Penn State, set out to conquer long-standing challenges and create a more efficient process. Their approach leverages the broad spectral tunability of the semiconductor cadmium oxide in concert with a one-dimensional photonic crystal fabricated with alternating layers of dielectrics referred to as a distributed Bragg reflector.
    The combination of these multiple layers of materials gives rise to a so-called “Tamm-polariton,” where the emission wavelength of the device is dictated by the interactions between these layers. Until now, such designs were limited to a single designed wavelength output. But creating multiple resonances at multiple frequencies with user-controlled wavelength, linewidth, and intensity is imperative for matching the absorption spectra of most molecules.
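    To make the inverse-design idea concrete, here is a minimal, hypothetical sketch, not the team’s actual code or electromagnetic solver: a toy emissivity model stands in for a full simulation of the layered stack, and an optimizer searches for layer thicknesses whose predicted spectrum matches a two-peak target. All function names, parameter values, and the thickness-to-resonance mapping below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for a full electromagnetic solver: emissivity of a
# layered stack modeled as a sum of Lorentzian resonances whose centers shift
# with the layer thicknesses. A real inverse design would call a
# transfer-matrix or full-wave simulation here instead.
def emissivity(thicknesses_nm, wavelengths_um):
    centers = 4.0 + 0.002 * thicknesses_nm        # toy thickness-to-resonance map
    widths = 0.15 + 0.0002 * thicknesses_nm
    spectrum = np.zeros_like(wavelengths_um)
    for c, w in zip(centers, widths):
        spectrum += w**2 / ((wavelengths_um - c)**2 + w**2)
    return np.clip(spectrum, 0.0, 1.0)

# Target: two narrow emission peaks matched to hypothetical molecular
# absorption lines at 4.3 um and 6.0 um.
wl = np.linspace(3.0, 8.0, 500)
target = np.exp(-((wl - 4.3) / 0.1)**2) + np.exp(-((wl - 6.0) / 0.1)**2)

# Inverse design: search for layer thicknesses whose simulated spectrum best
# matches the target, instead of sweeping candidate designs by hand.
def objective(thicknesses_nm):
    return np.mean((emissivity(thicknesses_nm, wl) - target)**2)

x0 = np.array([200.0, 900.0])                     # initial guess, in nm
result = minimize(objective, x0, method="Nelder-Mead")
print("optimized thicknesses (nm):", result.x)
print("spectral mismatch (MSE):", result.fun)
```

    In a real workflow, the toy model would be replaced by a rigorous simulation of the cadmium oxide and Bragg-reflector layers, and the search would run over many more structural parameters; the sketch only shows the shape of the optimization loop.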

  • Physicists describe photons’ characteristics to protect future quantum computing

    Consumers need to be confident that transactions they make online are safe and secure. A main method to protect customer transactions and other information is through encryption, where vital information is encoded with a key using complex mathematical problems that are difficult even for computers to solve.
    But even that may have a weakness: Encrypted information could be decoded by future quantum computers that would try many keys simultaneously and rapidly find the right one.
    To prepare for this future possibility, researchers are working to develop codes that cannot be broken by quantum computers. These codes rely on distributing single photons — single particles of light — that share a quantum character solely among the parties that wish to communicate. The new quantum codes require these photons to have the same color, so they are impossible to distinguish from each other, and the resulting devices, networks, and systems form the backbone of a future “quantum internet.”
    Researchers at the University of Iowa have been studying the properties of photons emitted from solids and are now able to predict how sharp the color of each emitted photon can be. In a new study, the researchers describe theoretically how many of these indistinguishable photons can be sent simultaneously down a fiber-optic cable to establish secure communications, and how rapidly these quantum codes can send information.
    “Up to now, there has not been a well-founded quantitative description of the noise in the color of light emitted by these qubits, and the noise leading to loss of quantum coherence in the qubits themselves that’s essential for calculations,” says Michael Flatté, professor in the Department of Physics and Astronomy and the study’s corresponding author. “This work provides that.”
    Story Source:
    Materials provided by University of Iowa. Original written by Richard Lewis. Note: Content may be edited for style and length.

  • New photonic chip for isolating light may be key to miniaturizing quantum devices

    Light offers an irreplaceable way to interact with our universe. It can travel across galactic distances and collide with our atmosphere, creating a shower of particles that tell a story of past astronomical events. Here on Earth, controlling light lets us send data from one side of the planet to the other.
    Given its broad utility, it’s no surprise that light plays a critical role in enabling 21st century quantum information applications. For example, scientists use laser light to precisely control atoms, turning them into ultra-sensitive measures of time, acceleration, and even gravity. Currently, such early quantum technology is limited by size — state-of-the-art systems would not fit on a dining room table, let alone a chip. For practical use, scientists and engineers need to miniaturize quantum devices, which requires re-thinking certain components for harnessing light.
    Now IQUIST member Gaurav Bahl and his research group have designed a simple, compact photonic circuit that uses sound waves to rein in light. The new study, published in the October 21 issue of the journal Nature Photonics, demonstrates a powerful way to isolate light, or control its directionality. The team’s measurements show that their approach to isolation currently outperforms all previous on-chip alternatives and is optimized for compatibility with atom-based sensors.
    “Atoms are the perfect references anywhere in nature and provide a basis for many quantum applications,” said Bahl, a professor in Mechanical Science and Engineering (MechSe) at the University of Illinois at Urbana-Champaign. “The lasers that we use to control atoms need isolators that block undesirable reflections. But so far the isolators that work well in large-scale experiments have proved tough to miniaturize.”
    Even in the best of circumstances, light is difficult to control — it will reflect, absorb, and refract when encountering a surface. A mirror sends light back where it came from, a shard of glass bends light while letting it through, and dark rocks absorb light and convert it to heat. Essentially, light will gladly scatter every which way off anything in its path. This unwieldy behavior is why even a smidgen of light is beneficial for seeing in the dark.
    Controlling light within large quantum devices is normally an arduous task that involves a vast sea of mirrors, lenses, fibers, and more. Miniaturization requires a different approach to many of these components. In the last several years, scientists and engineers have made significant advances in designing various light-controlling elements on microchips. They can fabricate waveguides, which are channels for transporting light, and can even change its color using certain materials. But forcing light, which is made from tiny blips called photons, to move in one direction while suppressing undesirable backwards reflections is tricky.

  • Two beams are better than one

    Han and Leia. George and Amal. Kermit and Miss Piggy. Gomez and Morticia. History’s greatest couples rely on communication to make them so strong their power cannot be denied.
    But that’s not just true for people (or Muppets), it’s also true for lasers.
    According to new research from the USC Viterbi School of Engineering, recently published in Nature Photonics, adding two lasers together as a sort of optical “it couple” promises to make wireless communications faster and more secure than ever before.
    But first, a little background. Most laser-based communications — think fiber optics, commonly used for things like high-speed internet — is transmitted in the form of a laser (optical) beam traveling through a cable. Optical communications is exceptionally fast but is limited by the fact that it must travel through physical cables. Bringing the high-capacity capabilities of lasers to untethered and roving applications — such as to airplanes, drones, submarines, and satellites — is truly exciting and potentially game-changing.
    The USC Viterbi researchers have gotten us one step closer to that goal by focusing on something called Free Space Optical Communication (FSOC). This is no small feat, and it is a challenge researchers have been working on for some time. One major roadblock has been something called “atmospheric turbulence.”
    As a single optical laser beam carrying information travels through the air, it experiences natural turbulence, much like a plane does. Wind and temperature changes in the atmosphere around it cause the beam to become less stable. Our inability to control that turbulence is what has kept FSOC from reaching performance comparable to radio and optical-fiber systems, leaving us stuck with slower, older radio waves for most wireless communication.
    “While FSOC has been around a while, it has been a fundamental challenge to efficiently recover information from an optical beam that has been affected by atmospheric turbulence,” said Runzhou Zhang, the lead author and a Ph.D. student at USC Viterbi’s Optical Communications Laboratory in the Ming Hsieh Department of Electrical and Computer Engineering.
    The researchers made an advance toward solving this problem by sending a second laser beam (called a “pilot” beam) traveling along with the first to act as a partner. Traveling as a couple, the two beams are sent through the same air, experience the same turbulence, and have the same distortion. If only one beam is sent, the receiver must calculate all the distortion the beam experienced along the way before it can decode the data. This severely limits the system’s performance.
    But, when the pilot beam travels alongside the original beam, the distortion is automatically removed. Like Kermit duetting “Rainbow Connection” with Miss Piggy, the information in that beam arrives at its destination clear, crisp and easy to understand. From an engineering perspective, this accomplishment is no small feat. “The problem with radio waves, our current best bet for most wireless communication, is that it is much slower in data rate and much less secure than optical communications,” said Alan Willner, team lead on the paper and USC Viterbi professor of electrical and computer engineering. “With our new approach, we are one step closer to mitigating turbulence in high-capacity optical links.”
    Perhaps most impressively, the researchers did not solve this problem with a new device or material. They simply looked at the physics and changed their perspective. “We used the underlying physics of a well-known device called a photo detector, usually used for detecting intensity of light, and realized it could be used in a new way to make an advance towards solving the turbulence problem for laser communication systems,” said Zhang.
    Think about it this way: When Kermit and Miss Piggy sing their song, both their voices get distorted through the air in a similar way. That makes sense; they’re standing right next to each other, and their sound is traveling through the same atmosphere. What this photo detector does is turn the distortion of Kermit’s voice into the opposite of the distortion for Miss Piggy’s voice. Now, when they are mixed back together, the distortion is automatically canceled in both voices and we hear the song clearly and crisply.
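    As a simplified, single-mode illustration of why the cancellation works (the actual experiment deals with many spatial modes of the turbulent beam, so this is only a sketch of the principle): suppose the data beam and the pilot beam pick up the same turbulent phase $\phi(t)$ along their shared path,

    $$E_{\mathrm{data}}(t) = A(t)\,e^{\,i[\omega_1 t + \phi(t)]}, \qquad E_{\mathrm{pilot}}(t) = B\,e^{\,i[\omega_2 t + \phi(t)]}.$$

    A square-law detector measures the intensity of their sum, $|E_{\mathrm{data}} + E_{\mathrm{pilot}}|^2$, and its beat (cross) term is $2A(t)B\cos[(\omega_1 - \omega_2)\,t]$: the common distortion $\phi(t)$ drops out of the difference, leaving the data-carrying envelope $A(t)$ on a clean difference-frequency carrier.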
    With this newly realized application of physics, the team plans to continue exploring how to make the performance even better. “We hope that our approach will one day enable higher-performance and secure wireless links,” said Willner. Such links may be used for anything from high-resolution imaging to high-performance computing.
    Story Source:
    Materials provided by University of Southern California. Original written by Ben Paul. Note: Content may be edited for style and length.

  • Machine learning can be fair and accurate

    Carnegie Mellon University researchers are challenging a long-held assumption that there is a trade-off between accuracy and fairness when using machine learning to make public policy decisions.
    As the use of machine learning has increased in areas such as criminal justice, hiring, health care delivery and social service interventions, concerns have grown over whether such applications introduce new or amplify existing inequities, especially among racial minorities and people with economic disadvantages. To guard against this bias, adjustments are made to the data, labels, model training, scoring systems and other aspects of the machine learning system. The underlying theoretical assumption is that these adjustments make the system less accurate.
    A CMU team aims to dispel that assumption in a new study, recently published in Nature Machine Intelligence. Rayid Ghani, a professor in the School of Computer Science’s Machine Learning Department (MLD) and the Heinz College of Information Systems and Public Policy; Kit Rodolfa, a research scientist in MLD; and Hemank Lamba, a post-doctoral researcher in SCS, tested that assumption in real-world applications and found the trade-off was negligible in practice across a range of policy domains.
    “You actually can get both. You don’t have to sacrifice accuracy to build systems that are fair and equitable,” Ghani said. “But it does require you to deliberately design systems to be fair and equitable. Off-the-shelf systems won’t work.”
    Ghani and Rodolfa focused on situations where in-demand resources are limited, and machine learning systems are used to help allocate those resources. The researchers looked at systems in four areas: prioritizing limited mental health care outreach based on a person’s risk of returning to jail to reduce reincarceration; predicting serious safety violations to better deploy a city’s limited housing inspectors; modeling the risk of students not graduating from high school in time to identify those most in need of additional support; and helping teachers reach crowdfunding goals for classroom needs.
    In each context, the researchers found that models optimized for accuracy — standard practice for machine learning — could effectively predict the outcomes of interest but exhibited considerable disparities in recommendations for interventions. However, when the researchers applied adjustments to the outputs of the models that targeted improving their fairness, they discovered that disparities based on race, age or income — depending on the situation — could be removed without a loss of accuracy.
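    The flavor of such an output-level adjustment can be sketched in a few lines of code. The example below is purely illustrative, with synthetic data and invented numbers rather than the CMU models or datasets: a single global score threshold is compared with per-group thresholds chosen to shrink the gap in recall between two groups while holding the total number of interventions fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Purely synthetic illustration, not the CMU data: risk scores for two groups
# whose score distributions differ slightly, plus the true outcomes.
n = 10_000
group = rng.integers(0, 2, n)                   # 0 = group A, 1 = group B
latent = rng.normal(0.0, 1.0, n) + 0.3 * group  # group B tends to score higher
label = (latent + rng.normal(0.0, 1.0, n) > 0.8).astype(int)
score = 1.0 / (1.0 + np.exp(-latent))           # model's predicted risk

def recall(selected, y):
    """Fraction of truly high-need cases that the selection reaches."""
    return selected[y == 1].mean()

def recall_gap(selected):
    return abs(recall(selected[group == 0], label[group == 0]) -
               recall(selected[group == 1], label[group == 1]))

# Baseline: one global threshold that fills a fixed intervention budget (20%).
budget = int(0.2 * n)
cut = np.sort(score)[::-1][budget - 1]
sel_global = score >= cut

# Output-level fairness adjustment: one threshold per group, searched so the
# recall gap shrinks while the total number of selections stays near budget.
def select(quantiles):
    sel = np.zeros(n, dtype=bool)
    for g in (0, 1):
        mask = group == g
        sel[mask] = score[mask] >= np.quantile(score[mask], quantiles[g])
    return sel

best_sel, best_gap = sel_global, recall_gap(sel_global)
for qa in np.linspace(0.7, 0.9, 21):
    for qb in np.linspace(0.7, 0.9, 21):
        sel = select((qa, qb))
        if abs(sel.sum() - budget) <= 0.01 * n and recall_gap(sel) < best_gap:
            best_sel, best_gap = sel, recall_gap(sel)

for name, sel in [("global threshold", sel_global), ("per-group thresholds", best_sel)]:
    print(f"{name:22s} precision={label[sel].mean():.3f} recall gap={recall_gap(sel):.3f}")
```

    In this toy setting, the precision of the two selection rules can then be compared directly; the paper’s finding is that, in the real policy applications studied, adjustments of this general kind removed disparities with negligible loss of accuracy.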
    Ghani and Rodolfa hope this research will start to change the minds of fellow researchers and policymakers as they consider the use of machine learning in decision making.
    “We want the artificial intelligence, computer science and machine learning communities to stop accepting this assumption of a trade-off between accuracy and fairness and to start intentionally designing systems that maximize both,” Rodolfa said. “We hope policymakers will embrace machine learning as a tool in their decision making to help them achieve equitable outcomes.”
    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Aaron Aupperlee. Note: Content may be edited for style and length.

  • Quantum material to boost terahertz frequencies

    They are regarded as one of the most interesting materials for future electronics: Topological insulators conduct electricity in a special way and hold the promise of novel circuits and faster mobile communications. Under the leadership of the Helmholtz-Zentrum Dresden-Rossendorf (HZDR), a research team from Germany, Spain and Russia has now unravelled a fundamental property of this new class of materials: How exactly do the electrons in the material respond when they are “startled” by short pulses of so-called terahertz radiation? The results are not just significant for our basic understanding of this novel quantum material, but could herald faster mobile data communication or high-sensitivity detector systems for exploring distant worlds in years to come, the team reports in NPJ Quantum Materials.
    Topological insulators are a very recent class of materials which have a special quantum property: on their surface they can conduct electricity almost loss-free while their interior functions as an insulator — no current can flow there. Looking to the future, this opens up interesting prospects: Topological insulators could form the basis for high efficiency electronic components, which makes them an interesting research field for physicists.
    But a number of fundamental questions are still unanswered. What happens, for example, when you give the electrons in the material a “nudge” using specific electromagnetic waves — so-called terahertz radiation — thus generating an excited state? One thing is clear: the electrons want to rid themselves of the energy boost forced upon them as quickly as possible, such as by heating up the crystal lattice surrounding them. In the case of topological insulators, however, it was previously unclear whether getting rid of this energy happened faster in the conducting surface than in the insulating core. “So far, we simply didn’t have the appropriate experiments to find out,” explains study leader Dr. Sergey Kovalev from the Institute of Radiation Physics at HZDR. “Up to now, at room temperature, it was extremely difficult to differentiate the surface reaction from that in the interior of the material.”
    In order to overcome this hurdle, he and his international team developed an ingenious test set-up: intensive terahertz pulses hit a sample and excite the electrons. Immediately after, laser flashes illuminate the material and register how the sample responds to the terahertz stimulation. In a second test series, special detectors measure to what extent the sample exhibits an unusual non-linear effect and multiplies the frequency of the terahertz pulses applied. Kovalev and his colleagues conducted these experiments using the TELBE terahertz light source at HZDR’s ELBE Center for High-Power Radiation Sources. Researchers from the Catalan Institute of Nanoscience and Nanotechnology in Barcelona, Bielefeld University, the German Aerospace Center (DLR), the Technical University of Berlin, and Lomonosov University and the Kotelnikov Institute of Radio Engineering and Electronics in Moscow were involved.
    Rapid energy transfer
    The decisive thing was that the international team did not only investigate a single material. Instead, the Russian project partners produced three different topological insulators with different, precisely determined properties: in one case, only the electrons on the surface could directly absorb the terahertz pulses. In the others, the electrons were mainly excited in the interior of the sample. “By comparing these three experiments we were able to differentiate precisely between the behavior of the surface and the interior of the material,” Kovalev explains. “And it emerged that the electrons in the surface relaxed significantly faster than those in the interior of the material.” Apparently, they were able to transfer their energy to the crystal lattice immediately.
    Put into figures: while the surface electrons reverted to their original energetic state in a few hundred femtoseconds, the “inner” electrons took approximately ten times as long, that is, a few picoseconds. “Topological insulators are highly-complex systems. The theory is anything but easy to understand,” emphasizes Michael Gensch, former head of the TELBE facility at HZDR and now head of department in the Institute of Optical Sensor Systems at the German Aerospace Center (DLR) and professor at TU Berlin. “Our results can help decide which of the theoretical ideas hold true.”
    Highly effective multiplication
    But the experiment also augurs well for interesting developments in digital communication like WLAN and mobile communications. Today, technologies such as 5G function in the gigahertz range. If we could harness higher frequencies in the terahertz range, significantly more data could be transmitted over a single radio channel. Frequency multipliers could play an important role here: they are able to translate relatively low radio frequencies into significantly higher ones.
    Some time ago, the research team had already realized that, under certain conditions, graphene — a two-dimensional, ultrathin form of carbon — can act as an efficient frequency multiplier. It is able to convert 300 gigahertz radiation into frequencies of several terahertz. The problem is that when the applied radiation is extremely intense, there is a significant drop in the efficiency of the graphene. Topological insulators, on the other hand, continue to function even under the most intense stimulation, the new study discovered. “This might mean it’s possible to multiply frequencies from a few terahertz to several dozen terahertz,” surmises HZDR physicist Jan-Christoph Deinert, who heads the TELBE team together with Sergey Kovalev. “At the moment, there is no end in sight when it comes to topological insulators.”
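    In simplified terms, a frequency multiplier exploits a nonlinear response: if the current driven in the material contains a term proportional to the cube of the field, $j(t) \propto \cos^3(2\pi f t)$, the identity $\cos^3 x = \tfrac{3}{4}\cos x + \tfrac{1}{4}\cos 3x$ means the material re-radiates at the third harmonic $3f$; driving at $f = 0.3$ THz thus yields output at $0.9$ THz, and stronger nonlinearities add higher odd harmonics ($1.5$ THz, $2.1$ THz, and so on). This back-of-the-envelope picture is only meant to show where the multiplied frequencies come from, not to describe the full physics of the measured samples.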
    If such a development comes about, the new quantum materials could be used in a much wider frequency range than with graphene. “At DLR, we are very interested in using quantum materials of this kind in high-performance heterodyne receivers for astronomy, especially in space telescopes,” Gensch explains.

  • Unmasking the magic of superconductivity in twisted graphene

    The discovery in 2018 of superconductivity in two single-atom-thick layers of graphene stacked at a precise angle of 1.1 degrees (called ‘magic’-angle twisted bilayer graphene) came as a big surprise to the scientific community. Since the discovery, physicists have asked whether magic graphene’s superconductivity can be understood using existing theory, or whether fundamentally new approaches are required — such as those being marshalled to understand the mysterious ceramic compounds that superconduct at high temperatures. Now, as reported in the journal Nature, Princeton researchers have settled this debate by showing an uncanny resemblance between the superconductivity of magic graphene and that of high temperature superconductors. Magic graphene may hold the key to unlocking new mechanisms of superconductivity, including high temperature superconductivity.
    Ali Yazdani, the Class of 1909 Professor of Physics and Director of the Center for Complex Materials at Princeton University led the research. He and his team have studied many different types of superconductors over the years and have recently turned their attention to magic bilayer graphene.
    “Some have argued that magic bilayer graphene is actually an ordinary superconductor disguised in an extraordinary material,” said Yazdani, “but when we examined it microscopically it has many of the characteristics of high temperature cuprate superconductors. It is a déjà vu moment.”
    Superconductivity is one of nature’s most intriguing phenomena. It is a state in which electrons flow freely without any resistance. Electrons are subatomic particles that carry negative electric charges; they are vital to our way of life because they power our everyday electronics. In normal circumstances, electrons behave erratically, jumping and jostling against each other in a manner that is ultimately inefficient and wastes energy.
    But under superconductivity, electrons suddenly pair up and start to flow in unison, like a wave. In this state the electrons not only do not lose energy, but they also display many novel quantum properties. These properties have allowed for a number of practical applications, including magnets for MRIs and particle accelerators as well as in the making of quantum bits that are being used to build quantum computers. Superconductivity was first discovered at extremely low temperatures in elements such as aluminum and niobium. In recent years, it has been found close to room temperature under extraordinarily high pressure, and also at temperatures just above the boiling point of liquid nitrogen (77 kelvin) in ceramic compounds.
    But not all superconductors are created equal.