More stories

  • Superconductivity: New tricks for finding better materials

    Even after more than 30 years of research, high-temperature superconductivity remains one of the great unsolved mysteries of materials physics. The exact mechanism that allows certain materials to conduct electric current without any resistance, even at relatively high temperatures, is still not fully understood.
    Two years ago, a promising new class of superconductors was discovered: so-called layered nickelates. A research team at TU Wien has now succeeded for the first time in determining important parameters of these novel superconductors by comparing theory and experiment. As a result, a theoretical model is now available that can be used to understand the electronic mechanisms of high-temperature superconductivity in these materials.
    In search of high-temperature superconductors
    Many superconductors are known today, but most of them are only superconducting at extremely low temperatures, close to absolute zero. Materials that remain superconducting at higher temperatures are called “high-temperature superconductors,” even though these “high” temperatures (often below -200°C) are still extremely cold by human standards.
    Finding a material that remains superconducting at significantly higher temperatures would be a revolutionary discovery that would open the door to many new technologies. For a long time, the so-called cuprates, a class of materials containing copper atoms, were considered particularly exciting candidates. Now, however, another class of materials could turn out to be even more promising: nickelates, which have a structure similar to that of cuprates but contain nickel instead of copper.
    “There has been a lot of research on cuprates, and it has been possible to dramatically increase the critical temperature up to which the material remains superconducting. If similar progress can be made with the newly discovered nickelates, it would be a huge step forward,” says Prof. Jan Kuneš from the Institute of Solid State Physics at TU Wien.
    Hard-to-access parameters
    Theoretical models describing the behaviour of such superconductors already exist. The problem, however, is that in order to use these models, one must know certain material parameters that are difficult to determine. “The charge transfer energy plays a key role,” explains Jan Kuneš. “This value tells us how much energy you have to add to the system to transfer an electron from a nickel atom to an oxygen atom.”
    Unfortunately, this value cannot be measured directly, and theoretical calculations are extremely complicated and imprecise. Therefore, Atsushi Hariki, a member of Jan Kuneš’ research group, developed a method to determine this parameter indirectly: When the material is examined with X-rays, the results also depend on the charge transfer energy. “We calculated details of the X-ray spectrum that are particularly sensitive to this parameter and compared our results with measurements of different X-ray spectroscopy methods,” explains Jan Kuneš. “In this way, we can determine the appropriate value — and this value can now be inserted into the computational models used to describe the superconductivity of the material.”
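    The fitting strategy described here can be sketched in a few lines. The snippet below is only a schematic illustration of the idea (compute a spectrum as a function of the charge-transfer energy, compare it with a measurement, keep the best-matching value); the Lorentzian "simulator" and the synthetic "measurement" are placeholders, not the many-body X-ray spectroscopy calculations actually used by the TU Wien group.

    ```python
    import numpy as np

    # Toy stand-ins: a spectrum whose peak position depends on the charge-transfer
    # energy "delta", and a noisy "measured" spectrum generated from a hidden value.
    energy = np.linspace(0.0, 10.0, 500)      # photon-energy axis (arbitrary units)

    def simulated_spectrum(delta):
        """Placeholder for a calculated X-ray spectrum; the peak shifts with delta."""
        return 1.0 / ((energy - (2.0 + 0.8 * delta))**2 + 0.3**2)

    rng = np.random.default_rng(0)
    true_delta = 4.2                          # the "unknown" material parameter
    measured = simulated_spectrum(true_delta) + rng.normal(0.0, 0.05, energy.size)

    # Grid search: keep the delta whose simulated spectrum best matches the data.
    deltas = np.linspace(0.0, 8.0, 801)
    errors = [np.sum((simulated_spectrum(d) - measured)**2) for d in deltas]
    best = deltas[np.argmin(errors)]
    print(f"best-fit charge-transfer energy: {best:.2f}  (hidden value: {true_delta})")
    ```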
    Important prerequisite for the search for better nickelates
    Thus, for the first time, it has now been possible to explain the electronic structure of the material precisely and to set up a parameterised theoretical model for describing superconductivity in nickelates. “With this, we can now get to the bottom of the question of how the mechanics of the effect can be explained at the electronic level,” says Jan Kuneš. “Which orbitals play a decisive role? Which parameters matter in detail? That’s what you need to know if you want to find out how to improve this material further, so that one day you might be able to produce new nickelates whose superconductivity persists up to even significantly higher temperatures.”
    Story Source:
    Materials provided by Vienna University of Technology. Note: Content may be edited for style and length.

  • Experiments confirm a quantum material’s unique response to circularly polarized laser light

    When the COVID-19 pandemic shut down experiments at the Department of Energy’s SLAC National Accelerator Laboratory early last year, Shambhu Ghimire’s research group was forced to find another way to study an intriguing research target: quantum materials known as topological insulators, or TIs, which conduct electric current on their surfaces but not through their interiors.
    Denitsa Baykusheva, a Swiss National Science Foundation Fellow, had joined his group at the Stanford PULSE Institute two years earlier with the goal of producing high harmonic generation, or HHG, in these materials as a tool for probing their behavior. In HHG, laser light shining through a material shifts to higher energies and higher frequencies, called harmonics, much like pressing on a guitar string produces higher notes. If this could be done in TIs, which are promising building blocks for technologies like spintronics, quantum sensing and quantum computing, it would give scientists a new tool for investigating these and other quantum materials.
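    As a rough illustration of what "shifting to higher harmonics" means, the toy script below drives a cartoon nonlinear response with a single-frequency field and shows that the emitted spectrum contains multiples of the drive frequency. Real HHG in solids and topological insulators is a strong-field, non-perturbative effect, so this perturbative polynomial response is only a caricature of the process studied at PULSE.

    ```python
    import numpy as np

    w = 1.0                                    # drive (laser) frequency, arbitrary units
    t = np.linspace(0.0, 200.0 * np.pi, 2**14)
    E = np.cos(w * t)                          # driving field

    # Cartoon nonlinear response of the material: odd powers of the field
    # radiate at odd multiples of the drive frequency (cos^3 contains cos 3wt, etc.).
    P = E + 0.3 * E**3 + 0.05 * E**5

    spectrum = np.abs(np.fft.rfft(P * np.hanning(t.size)))**2
    omega = 2.0 * np.pi * np.fft.rfftfreq(t.size, d=t[1] - t[0])

    # Peaks appear at the drive frequency and its odd harmonics.
    for harmonic in (1, 3, 5):
        k = np.argmin(np.abs(omega - harmonic * w))
        print(f"power near {harmonic}w: {spectrum[k]:.3e}")
    ```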
    With the experiment shut down midway, she and her colleagues turned to theory and computer simulations to come up with a new recipe for generating HHG in topological insulators. The results suggested that circularly polarized light, which spirals along the direction of the laser beam, would produce clear, unique signals from both the conductive surfaces and the interior of the TI they were studying, bismuth selenide — and would in fact enhance the signal coming from the surfaces.
    When the lab reopened for experiments with COVID-19 safety precautions in place, Baykusheva set out to test that recipe for the first time. In a paper published today in Nano Letters, the research team reports that those tests went exactly as predicted, producing the first unique signature from the topological surface.
    “This material looks very different than any other material we’ve tried,” said Ghimire, who is a principal investigator at PULSE. “It’s really exciting being able to find a new class of material that has a very different optical response than anything else.”
    Over the past dozen years, Ghimire had done a series of experiments with PULSE Director David Reis showing that HHG can be produced in ways that were previously thought unlikely or even impossible: by beaming laser light into a crystal, a frozen argon gas or an atomically thin semiconductor material. Another study described how to use HHG to generate attosecond laser pulses, which can be used to observe and control the movements of electrons, by shining a laser through ordinary glass.

  • Quantum battles in attoscience: Following three debates

    The field of attoscience has been kickstarted by new advances in laser technology. Research began with studies of three particular processes. Firstly, ‘above-threshold ionization’ (ATI), in which atoms absorb more photons than are needed for ionization. Secondly, ‘high harmonic generation’ (HHG), in which a target illuminated by an intense laser pulse emits high-frequency harmonics as a nonlinear response. Finally, ‘laser-induced nonsequential double ionization’ (NSDI), in which the laser field induces correlated dynamics within systems of multiple electrons.
    Using powerful, ultrashort laser pulses, researchers can now study how these processes unfold on timescales of just 10⁻¹⁸ seconds. This gives opportunities to study phenomena such as the motions of electrons within atoms, the dynamics of charges within molecules, and oscillations of electric fields within laser pulses.
    Today, many theoretical approaches are used to study attosecond physics. Within this landscape, two broadly opposing viewpoints have emerged: the ‘analytical’ approach, in which systems are studied using suitable approximations of physical processes; and the ‘ab-initio’ approach, where systems are broken down into their elemental parts, then analysed using fundamental physics.
    Using ATI, HHG, and NSDI as case studies, the first of the Quantum Battles papers explores this tension through a dialogue between two hypothetical theorists, each representing viewpoints expressed by the workshop’s discussion panel. The study investigates three main questions, relating to the scope and nature of both approaches, their relative advantages and disadvantages, and their complementary roles in scientific discovery so far.
    Another source of tension within the attoscience community relates to quantum tunnelling — describing how quantum particles can travel directly through energy barriers. Here, a long-standing debate exists over whether tunnelling occurs instantaneously, or if it requires some time; and if so, how much.
    The second paper follows this debate through analysis of the panel’s viewpoints, as they discussed the physical observables of tunnelling experiments; theoretical approaches to assessing tunnelling time; and the nature of tunnelling itself. The study aims to explain why so many approaches reach differing conclusions, given the lack of any universally-agreed definition of tunnelling.
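    For context, the quantities at the centre of this debate can be written down compactly. The expressions below are standard textbook material (the opaque-barrier transmission probability and the Wigner phase time, one of several competing definitions of tunnelling time); they are given here only as background and are not taken from the Quantum Battles papers themselves.

    ```latex
    % Transmission probability through a rectangular barrier of height V_0 and
    % width L, for a particle of mass m and energy E < V_0 (opaque-barrier limit):
    T \approx e^{-2\kappa L}, \qquad \kappa = \frac{\sqrt{2m(V_0 - E)}}{\hbar}

    % The Wigner phase time, one proposed (and contested) tunnelling-time
    % definition, is the energy derivative of the transmission phase \phi(E):
    \tau_{\mathrm{W}} = \hbar \, \frac{\mathrm{d}\phi(E)}{\mathrm{d}E}
    ```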
    The wave-like properties of matter are a further key concept in quantum mechanics. On attosecond timescales, intense laser fields can be used to exploit interference between the matter waves of electrons. This allows researchers to create images with sub-atomic resolution, while maintaining the ability to capture dynamics occurring on ultra-short timescales.
    The final ‘battle’ paper explores several questions which are rarely asked about this technique. In particular, it explores the physical differences between the roles of matter waves in HHG, which can be used to extend imaging capabilities, and in ATI, which is used to generate packets of electron matter waves.
    The Quantum Battles workshop hosted a wide variety of lively, highly interactive debates between a diverse range of participants, from leading researchers to those just starting out in their careers. In many cases, the discussions clarified the points of tension that exist within the attoscience community. This format was seen as particularly innovative by the community and the general public, who could follow the discussions via dedicated social media platforms. One participant even referred to the Quantum Battles as a ‘breath of fresh air’.
    Quantum Battles promoted the view that while initial discoveries may stem from a specific perspective, scientific progress happens when representatives of many different viewpoints collaborate with each other. One immediate outcome is the “AttoFridays” online seminar series, which arose from the success of the workshop. With their fresh and open approach, Quantum Battles and AttoFridays will lead to more efficient and constructive discussions across institutional, scientific, and national borders.
    Story Source:
    Materials provided by Springer. Note: Content may be edited for style and length.

  • Novel advanced light design and fabrication process could revolutionize sensing technologies

    Vanderbilt and Penn State engineers have developed a novel approach to design and fabricate thin-film infrared light sources with near-arbitrary spectral output driven by heat, along with a machine learning methodology called inverse design that reduced the optimization time for these devices from weeks or months on a multi-core computer to a few minutes on a consumer-grade desktop.
    The ability to develop inexpensive, efficient, designer infrared light sources could revolutionize molecular sensing technologies. Additional applications include free-space communications, infrared beacons for search and rescue, molecular sensors for monitoring industrial gases, environmental pollutants and toxins.
    The research team’s approach, detailed today in Nature Materials, uses simple thin-film deposition, one of the most mature nano-fabrication techniques, aided by key advances in materials and machine learning.
    Standard thermal emitters, such as incandescent lightbulbs, generate broadband thermal radiation that restricts their use to simple applications. In contrast, lasers and light-emitting diodes offer the narrow frequency emission desired for many applications but are typically too inefficient and/or expensive. That has directed research toward wavelength-selective thermal emitters that provide the narrow bandwidth of a laser or LED, but with the simple design of a thermal emitter. However, to date most thermal emitters with user-defined output spectra have required patterned nanostructures fabricated with high-cost, low-throughput methods.
    The research team led by Joshua Caldwell, Vanderbilt associate professor of mechanical engineering, and Jon-Paul Maria, professor of materials science and engineering at Penn State, set out to conquer long-standing challenges and create a more efficient process. Their approach leverages the broad spectral tunability of the semiconductor cadmium oxide in concert with a one-dimensional photonic crystal fabricated with alternating layers of dielectrics referred to as a distributed Bragg reflector.
    The combination of these multiple layers of materials gives rise to a so-called “Tamm-polariton,” where the emission wavelength of the device is dictated by the interactions between these layers. Until now, such designs were limited to a single designed wavelength output. But creating multiple resonances at multiple frequencies with user-controlled wavelength, linewidth, and intensity is imperative for matching the absorption spectra of most molecules.
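    To make the "alternating layers of dielectrics" idea concrete, here is a minimal transfer-matrix sketch of a quarter-wave distributed Bragg reflector at normal incidence. The layer indices, thicknesses, pair count, and wavelengths are invented for illustration; this is not the cadmium-oxide Tamm-polariton stack reported in Nature Materials.

    ```python
    import numpy as np

    def layer_matrix(n, d, lam):
        """Characteristic matrix of one lossless dielectric layer at normal incidence."""
        delta = 2.0 * np.pi * n * d / lam
        return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                         [1j * n * np.sin(delta), np.cos(delta)]])

    def reflectance(stack, lam, n_in=1.0, n_sub=1.45):
        """Reflectance of a thin-film stack between air and a glass-like substrate."""
        M = np.eye(2, dtype=complex)
        for n, d in stack:                     # first layer is the one light meets first
            M = M @ layer_matrix(n, d, lam)
        B, C = M @ np.array([1.0, n_sub])
        r = (n_in * B - C) / (n_in * B + C)
        return abs(r) ** 2

    lam0 = 4.0                                 # hypothetical design wavelength (microns)
    n_hi, n_lo = 3.0, 1.5                      # made-up alternating refractive indices
    stack = [(n_hi, lam0 / (4 * n_hi)), (n_lo, lam0 / (4 * n_lo))] * 8   # quarter-wave pairs

    # Reflectance is highest near the design wavelength (the Bragg "stop band")
    # and drops off far from it.
    for lam in (2.0, 3.0, 4.0, 5.0, 6.0):
        print(f"R({lam:.1f} um) = {reflectance(stack, lam):.3f}")
    ```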

  • Physicists describe photons’ characteristics to protect future quantum computing

    Consumers need to be confident that transactions they make online are safe and secure. A main method to protect customer transactions and other information is through encryption, where vital information is encoded with a key using complex mathematical problems that are difficult even for computers to solve.
    But even that may have a weakness: Encrypted information could be decoded by future quantum computers that would try many keys simultaneously and rapidly find the right one.
    To prepare for this future possibility, researchers are working to develop codes that cannot be broken by quantum computers. These codes rely on distributing single photons — single particles of light — that share a quantum character solely among the parties that wish to communicate. The new quantum codes require these photons to have the same color, so they are impossible to distinguish from each other, and the resulting devices, networks, and systems form the backbone of a future “quantum internet.”
    Researchers at the University of Iowa have been studying the properties of photons emitted from solids and are now able to predict how sharp the color of each emitted photon can be. In a new study, the researchers describe theoretically how many of these indistinguishable photons can be sent simultaneously down a fiber-optical cable to establish secure communications, and how rapidly these quantum codes can send information.
    “Up to now, there has not been a well-founded quantitative description of the noise in the color of light emitted by these qubits, and the noise leading to loss of quantum coherence in the qubits themselves that’s essential for calculations,” says Michael Flatté, professor in the Department of Physics and Astronomy and the study’s corresponding author. “This work provides that.”
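    As background for the "same colour" requirement, the single-photon literature often summarises spectral purity with a standard two-level-emitter relation; the expressions below are textbook material, shown here only for orientation, not results quoted from the Iowa study.

    ```latex
    % Coherence time T_2 of the emitted photon: radiative decay (lifetime T_1,
    % rate \Gamma = 1/T_1) plus pure dephasing at rate \gamma^*:
    \frac{1}{T_2} = \frac{1}{2 T_1} + \gamma^*

    % Two-photon (Hong-Ou-Mandel) indistinguishability of successively emitted photons:
    I = \frac{T_2}{2 T_1} = \frac{\Gamma}{\Gamma + 2\gamma^*}
    ```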
    Story Source:
    Materials provided by University of Iowa. Original written by Richard Lewis. Note: Content may be edited for style and length.

  • New photonic chip for isolating light may be key to miniaturizing quantum devices

    Light offers an irreplaceable way to interact with our universe. It can travel across galactic distances and collide with our atmosphere, creating a shower of particles that tell a story of past astronomical events. Here on earth, controlling light lets us send data from one side of the planet to the other.
    Given its broad utility, it’s no surprise that light plays a critical role in enabling 21st century quantum information applications. For example, scientists use laser light to precisely control atoms, turning them into ultra-sensitive measures of time, acceleration, and even gravity. Currently, such early quantum technology is limited by size — state-of-the-art systems would not fit on a dining room table, let alone a chip. For practical use, scientists and engineers need to miniaturize quantum devices, which requires re-thinking certain components for harnessing light.
    Now IQUIST member Gaurav Bahl and his research group have designed a simple, compact photonic circuit that uses sound waves to rein in light. The new study, published in the October 21 issue of the journal Nature Photonics, demonstrates a powerful way to isolate light, that is, to control its directionality. The team’s measurements show that their approach to isolation currently outperforms all previous on-chip alternatives and is optimized for compatibility with atom-based sensors.
    “Atoms are the perfect references anywhere in nature and provide a basis for many quantum applications,” said Bahl, a professor in Mechanical Science and Engineering (MechSe) at the University of Illinois at Urbana-Champaign. “The lasers that we use to control atoms need isolators that block undesirable reflections. But so far the isolators that work well in large-scale experiments have proved tough to miniaturize.”
    Even in the best of circumstances, light is difficult to control — it will reflect, absorb, and refract when encountering a surface. A mirror sends light back where it came from, a shard of glass bends light while letting it through, and dark rocks absorb light and convert it to heat. Essentially, light will gladly scatter every which way off anything in its path. This unwieldy behavior is why even a smidgen of light is beneficial for seeing in the dark.
    Controlling light within large quantum devices is normally an arduous task that involves a vast sea of mirrors, lenses, fibers, and more. Miniaturization requires a different approach to many of these components. In the last several years, scientists and engineers have made significant advances in designing various light-controlling elements on microchips. They can fabricate waveguides, which are channels for transporting light, and can even change its color using certain materials. But forcing light, which is made from tiny blips called photons, to move in one direction while suppressing undesirable backwards reflections is tricky.
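    For readers unfamiliar with the jargon, the snippet below uses a toy two-port scattering matrix to show what an isolator is asked to do: pass light in one direction while blocking the reverse. The numbers are arbitrary and the figures of merit (insertion loss and isolation, in dB) are generic definitions, not measurements from the Nature Photonics paper.

    ```python
    import numpy as np

    # S[j, i] is the amplitude transmitted from port i to port j of a two-port device.
    reciprocal_waveguide = np.array([[0.0, 0.9],
                                     [0.9, 0.0]])   # forward and backward transmission equal
    ideal_isolator = np.array([[0.0, 0.0],
                               [0.9, 0.0]])         # passes port 1 -> 2, blocks port 2 -> 1

    def report(name, S):
        insertion_loss_db = -20.0 * np.log10(abs(S[1, 0]))     # loss in the wanted direction
        backward = abs(S[0, 1])
        isolation_db = np.inf if backward == 0 else -20.0 * np.log10(backward)
        print(f"{name}: insertion loss = {insertion_loss_db:.2f} dB, isolation = {isolation_db:.2f} dB")

    report("reciprocal waveguide", reciprocal_waveguide)
    report("ideal isolator", ideal_isolator)
    ```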

  • Two beams are better than one

    Han and Leia. George and Amal. Kermit and Miss Piggy. Gomez and Morticia. History’s greatest couples rely on communication to make them so strong their power cannot be denied.
    But that’s not just true for people (or Muppets), it’s also true for lasers.
    According to new research from the USC Viterbi School of Engineering, recently published in Nature Photonics, adding two lasers together as a sort of optical “it couple” promises to make wireless communications faster and more secure than ever before. But first, a little background. Most laser-based communication — think fiber optics, commonly used for things like high-speed internet — is transmitted in the form of a laser (optical) beam traveling through a cable. Optical communication is exceptionally fast but is limited by the fact that it must travel through physical cables. Bringing the high capacity of lasers to untethered and roving applications — such as airplanes, drones, submarines, and satellites — would be truly exciting and potentially game-changing.
    The USC Viterbi researchers have gotten us one step closer to that goal by focusing on something called Free Space Optical Communication (FSOC). This is no small feat, and it is a challenge researchers have been working on for some time. One major roadblock has been something called “atmospheric turbulence.”
    As a single optical laser beam carrying information travels through the air, it experiences natural turbulence, much like a plane does. Wind and temperature changes in the atmosphere around it cause the beam to become less stable. Our inability to control that turbulence is what has prevented FSOC from advancing to performance similar to that of radio and optical fiber systems, leaving us stuck with slower radio waves for most wireless communication.
    “While FSOC has been around a while, it has been a fundamental challenge to efficiently recover information from an optical beam that has been affected by atmospheric turbulence,” said Runzhou Zhang, the lead author and a Ph.D. student at USC Viterbi’s Optical Communications Laboratory in the Ming Hsieh Department of Electrical and Computer Engineering.
    The researchers made an advance to solving this problem by sending a second laser beam (called a “pilot” beam) traveling along with the first to act as a partner. Traveling as a couple, the two beams are sent through the same air, experience the same turbulence, and have the same distortion. If only one beam is sent, the receiver must calculate all the distortion the beam experienced along the way before it can decode the data. This severely limits the system’s performance.
    But, when the pilot beam travels alongside the original beam, the distortion is automatically removed. Like Kermit duetting “Rainbow Connection” with Miss Piggy, the information in that beam arrives at its destination clear, crisp and easy to understand. From an engineering perspective, this accomplishment is no small feat. “The problem with radio waves, our current best bet for most wireless communication, is that it is much slower in data rate and much less secure than optical communications,” said Alan Willner, team lead on the paper and USC Viterbi professor of electrical and computer engineering. “With our new approach, we are one step closer to mitigating turbulence in high-capacity optical links.”
    Perhaps most impressively, the researchers did not solve this problem with a new device or material. They simply looked at the physics and changed their perspective. “We used the underlying physics of a well-known device called a photo detector, usually used for detecting intensity of light, and realized it could be used in a new way to make an advance towards solving the turbulence problem for laser communication systems,” said Zhang.
    Think about it this way: When Kermit and Miss Piggy sing their song, both their voices get distorted through the air in a similar way. That makes sense; they’re standing right next to each other, and their sound is traveling through the same atmosphere. What this photo detector does is turn the distortion of Kermit’s voice into the opposite of the distortion for Miss Piggy’s voice. Now, when they are mixed back together, the distortion is automatically canceled in both voices and we hear the song clearly and crisply.
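    The cancellation described in this analogy can be mimicked numerically. In the toy sketch below, a phase-encoded data beam and a pilot beam pick up the same random "turbulence" phase; multiplying the data field by the conjugate of the pilot (the cross term that square-law detection of the mixed beams produces) removes the shared distortion. The modulation format, the numbers, and the bare-bones detector model are assumptions for illustration, not the USC Viterbi team's actual experimental setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    data_phase = rng.choice([0.0, np.pi / 2, np.pi, 3 * np.pi / 2], size=n)  # QPSK-like symbols
    turbulence = rng.normal(0.0, 2.0, size=n)          # common atmospheric phase distortion

    data_beam = np.exp(1j * (data_phase + turbulence))   # both beams travel through the
    pilot_beam = np.exp(1j * turbulence)                 # same air, so same distortion

    # Mixing on a square-law photodetector yields a cross term proportional to
    # data_beam * conj(pilot_beam); the shared turbulence phase cancels in it.
    mixed = data_beam * np.conj(pilot_beam)

    residual = np.angle(mixed * np.exp(-1j * data_phase))   # leftover phase error per symbol
    print(f"max residual phase error: {np.max(np.abs(residual)):.2e} rad")  # ~1e-15 (machine precision)
    ```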
    With this newly realized application of physics, the team plans to continue exploring how to make the performance even better. “We hope that our approach will one day enable higher-performance and secure wireless links,” said Willner. Such links may be used for anything from high-resolution imaging to high-performance computing.
    Story Source:
    Materials provided by University of Southern California. Original written by Ben Paul. Note: Content may be edited for style and length.

  • Machine learning can be fair and accurate

    Carnegie Mellon University researchers are challenging a long-held assumption that there is a trade-off between accuracy and fairness when using machine learning to make public policy decisions.
    As the use of machine learning has increased in areas such as criminal justice, hiring, health care delivery and social service interventions, concerns have grown over whether such applications introduce new or amplify existing inequities, especially among racial minorities and people with economic disadvantages. To guard against this bias, adjustments are made to the data, labels, model training, scoring systems and other aspects of the machine learning system. The underlying theoretical assumption is that these adjustments make the system less accurate.
    A CMU team aims to dispel that assumption in a new study, recently published in Nature Machine Intelligence. Rayid Ghani, a professor in the School of Computer Science’s Machine Learning Department (MLD) and the Heinz College of Information Systems and Public Policy; Kit Rodolfa, a research scientist in MLD; and Hemank Lamba, a post-doctoral researcher in SCS, tested that assumption in real-world applications and found the trade-off was negligible in practice across a range of policy domains.
    “You actually can get both. You don’t have to sacrifice accuracy to build systems that are fair and equitable,” Ghani said. “But it does require you to deliberately design systems to be fair and equitable. Off-the-shelf systems won’t work.”
    Ghani and Rodolfa focused on situations where in-demand resources are limited, and machine learning systems are used to help allocate those resources. The researchers looked at systems in four areas: prioritizing limited mental health care outreach based on a person’s risk of returning to jail to reduce reincarceration; predicting serious safety violations to better deploy a city’s limited housing inspectors; modeling the risk of students not graduating from high school in time to identify those most in need of additional support; and helping teachers reach crowdfunding goals for classroom needs.
    In each context, the researchers found that models optimized for accuracy — standard practice for machine learning — could effectively predict the outcomes of interest but exhibited considerable disparities in recommendations for interventions. However, when the researchers applied adjustments to the outputs of the models that targeted improving their fairness, they discovered that disparities based on race, age or income — depending on the situation — could be removed without a loss of accuracy.
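    The kind of output adjustment the researchers describe can be illustrated with a small synthetic example. The sketch below is not the CMU team's data or method; it simply contrasts an "accuracy-only" policy (rank everyone by a biased risk score and take the top of the list) with a policy that splits the same limited budget across groups, and in this toy setup the precision of the selections barely changes while the group composition does.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, budget = 20_000, 2_000

    group = rng.choice(["A", "B"], size=n)
    outcome = rng.random(n) < 0.30                       # true "needs intervention" label
    score = 0.5 * outcome + rng.normal(0.0, 0.35, n)     # noisy risk score
    score -= np.where(group == "B", 0.15, 0.0)           # systematic bias against group B's scores

    def top_k(mask, k):
        """Indices of the k highest-scoring people within `mask` (a limited budget)."""
        idx = np.flatnonzero(mask)
        return idx[np.argsort(score[idx])[-k:]]

    # Policy 1: accuracy-only, one ranked list regardless of group.
    plain = top_k(np.ones(n, dtype=bool), budget)

    # Policy 2: equity-adjusted, the same budget split in proportion to group size.
    k_a = int(budget * np.mean(group == "A"))
    adjusted = np.concatenate([top_k(group == "A", k_a), top_k(group == "B", budget - k_a)])

    for name, sel in [("accuracy-only", plain), ("equity-adjusted", adjusted)]:
        print(f"{name:15s} precision = {outcome[sel].mean():.3f}, "
              f"share going to group B = {np.mean(group[sel] == 'B'):.2f}")
    ```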
    Ghani and Rodolfa hope this research will start to change the minds of fellow researchers and policymakers as they consider the use of machine learning in decision making.
    “We want the artificial intelligence, computer science and machine learning communities to stop accepting this assumption of a trade-off between accuracy and fairness and to start intentionally designing systems that maximize both,” Rodolfa said. “We hope policymakers will embrace machine learning as a tool in their decision making to help them achieve equitable outcomes.”
    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Aaron Aupperlee. Note: Content may be edited for style and length.