More stories

  • Quantum matter breakthrough: Tuning density waves

    Scientists at EPFL have found a new way to create a crystalline structure called a “density wave” in an atomic gas. The findings can help us better understand the behavior of quantum matter, one of the most complex problems in physics.
    “Cold atomic gases were well known in the past for the ability to ‘program’ the interactions between atoms,” says Professor Jean-Philippe Brantut at EPFL. “Our experiment doubles this ability!” Working with the group of Professor Helmut Ritsch at the University of Innsbruck, they have made a breakthrough that can impact not only quantum research but quantum-based technologies in the future.
    Density waves
    Scientists have long been interested in understanding how materials self-organize into complex structures, such as crystals. In the often-arcane world of quantum physics, this sort of self-organization of particles is seen in ‘density waves’, where particles arrange themselves into a regular, repeating pattern or ‘order’; like a group of people wearing shirts of different colors standing in a line, arranged so that no two people wearing the same color stand next to each other.
    Density waves are observed in a variety of materials, including metals, insulators, and superconductors. However, studying them has been difficult, especially when this order (the patterns of particles in the wave) occurs with other types of organization such as superfluidity — a property that allows particles to flow without resistance.
    It’s worth noting that superfluidity is not just a theoretical curiosity; it is of immense interest for developing materials with unique properties, such as high-temperature superconductivity, which could lead to more efficient energy transfer and storage, or for building quantum computers.
    Tuning a Fermi gas with light
    To explore this interplay, Brantut and his colleagues created a “unitary Fermi gas”: a dilute gas of lithium atoms cooled to extremely low temperatures, in which the atoms collide with each other very often.
    The researchers then placed this gas in an optical cavity, a device used to confine light in a small space for an extended period of time. Optical cavities are made of two facing mirrors that reflect incoming light back and forth between them thousands of times, allowing light particles, photons, to build up inside the cavity.
    In the study, the researchers used the cavity to make the particles in the Fermi gas interact at long distance: a first atom emits a photon that bounces between the mirrors and is then reabsorbed by a second atom of the gas, regardless of how far apart the two atoms are. When enough photons are emitted and reabsorbed — easily tuned in the experiment — the atoms collectively organize into a density wave pattern.
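    For readers who want a slightly more concrete picture, the schematic expression below is a textbook-style form for cavity-mediated interactions after the light field has been adiabatically eliminated. It is only an illustrative sketch, not the Hamiltonian of the EPFL experiment, and the symbols (pump strength η, pump-cavity detuning Δc, cavity loss rate κ, cavity wavenumber k) are assumptions introduced here for the sketch.
```latex
% Illustrative only: a schematic cavity-mediated coupling of the kind used in
% many self-organization models, not the Hamiltonian of the EPFL experiment.
% After adiabatic elimination of the cavity field, atoms at positions x_i feel
% an effectively infinite-range interaction set by the cavity mode profile:
H_{\mathrm{eff}} \;\sim\; \frac{\hbar\,\eta^{2}\,\Delta_c}{\Delta_c^{2}+\kappa^{2}}
\left(\sum_{i}\cos(k\,x_i)\right)^{2}
```
    Because the coupling depends only on cos(k x_i), every atom is coupled to every other atom regardless of their separation, and beyond a threshold pump strength the gas lowers its energy by bunching onto the cos(k x) lattice, which is the density wave order described above.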
    “The combination of atoms colliding directly with each other in the Fermi gas, while simultaneously exchanging photons over long distance, is a new type of matter where the interactions are extreme,” says Brantut. “We hope what we will see there will improve our understanding of some of the most complex materials encountered in physics.”

  • ‘Segment-jumping’ Ridgecrest earthquakes explored in new study

    On the morning of July 4, 2019, a magnitude 6.4 earthquake struck the Searles Valley in California’s Mojave Desert, with impacts felt across Southern California. About 34 hours later on July 5, the nearby city of Ridgecrest was struck by a magnitude 7.1 earthquake, a jolt felt by millions across the state of California and throughout neighboring communities in Arizona, Nevada, and even Baja California, Mexico.
    Known as the Ridgecrest earthquakes — the biggest earthquakes to hit California in more than 20 years — these seismic events resulted in extensive structural damage, power outages, and injuries. The M6.4 event in Searles Valley was later deemed to be the foreshock to the M7.1 event in Ridgecrest, which is now considered to be the mainshock. Both earthquakes were followed by a multitude of aftershocks.
    Researchers were baffled by the sequence of seismic activity. Why did it take 34 hours for the foreshock to trigger the mainshock? How did these earthquakes “jump” from one segment of a geologic fault system to another? Can earthquakes “talk” to one another in a dynamic sense?
    To address these questions, a team of seismologists at Scripps Institution of Oceanography at UC San Diego and Ludwig Maximilian University of Munich (LMU) led a new study focused on the relationship between the two big earthquakes, which occurred along a multi-fault system. The team used a powerful supercomputer to run data-infused, physics-based models that identify the link between the earthquakes.
    Scripps Oceanography seismologist Alice Gabriel, who previously worked at LMU, led the study along with her former PhD student at LMU, Taufiq Taufiqurrahman, and several co-authors. Their findings were published May 24 in the journal Nature online, and will appear in the print edition June 8.
    “We used the largest computers that are available and perhaps the most advanced algorithms to try and understand this really puzzling sequence of earthquakes that happened in California in 2019,” said Gabriel, currently an associate professor at the Institute of Geophysics and Planetary Physics at Scripps Oceanography. “High-performance computing has allowed us to understand the driving factors of these large events, which can help inform seismic hazard assessment and preparedness.”
    Understanding the dynamics of multi-fault ruptures is important, said Gabriel, because these types of earthquakes are typically more powerful than those that occur on a single fault. For example, the Turkey-Syria earthquake doublet that occurred on Feb. 6, 2023, resulted in significant loss of life and widespread damage. This event was characterized by two separate earthquakes that occurred only nine hours apart, with both breaking across multiple faults.

    During the 2019 Ridgecrest earthquakes, which originated in the Eastern California Shear Zone along a strike-slip fault system, the two sides of each fault moved mainly in a horizontal direction, with no vertical motion. The earthquake sequence cascaded across interlaced and previously unknown “antithetic” faults, minor or secondary faults that move at high (close to 90 degrees) angles to the major fault. Within the seismological community, there remains an ongoing debate on which fault segments actively slipped, and what conditions promote the occurrence of cascading earthquakes.
    The new study presents the first multi-fault model that unifies seismograms, tectonic data, field mapping, satellite data, and other space-based geodetic datasets with earthquake physics, whereas previous models on this type of earthquake have been purely data-driven.
    “Through the lens of data-infused modeling, enhanced by the capabilities of supercomputing, we unravel the intricacies of multi-fault conjugate earthquakes, shedding light on the physics governing cascading rupture dynamics,” said Taufiqurrahman.
    Using the supercomputer SuperMUC-NG at the Leibniz Supercomputing Centre (LRZ) in Germany, the researchers revealed that the Searles Valley and Ridgecrest events were indeed connected. The earthquakes interacted across a statically strong yet dynamically weak fault system driven by complex fault geometries and low dynamic friction.
    The team’s 3-D rupture simulation illustrates how faults considered strong prior to an earthquake can become very weak as soon as fast earthquake movement begins, and explains the dynamics of how multiple faults can rupture together.
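    As a rough illustration of what “statically strong yet dynamically weak” can mean, the short sketch below evaluates a linear slip-weakening friction law, a common ingredient of dynamic rupture models. It is a generic textbook example, not the friction law used in the Nature study, and the parameter values are invented for illustration.
```python
# Illustrative linear slip-weakening friction law (a generic example, not the
# study's actual model): friction starts at a high static value and drops to a
# much lower dynamic value once the fault has slipped a critical distance d_c.

def slip_weakening_friction(slip_m, mu_static=0.6, mu_dynamic=0.1, d_c=0.5):
    """Return the friction coefficient after a given amount of slip (meters)."""
    if slip_m >= d_c:
        return mu_dynamic
    return mu_static - (mu_static - mu_dynamic) * slip_m / d_c

# Before rupture the fault is "strong" (mu ~ 0.6); after half a meter of fast
# slip it is "weak" (mu ~ 0.1), which is what lets a rupture keep running and
# jump onto neighboring fault segments.
for slip in (0.0, 0.1, 0.25, 0.5, 1.0):
    print(f"slip = {slip:4.2f} m  ->  friction = {slip_weakening_friction(slip):.2f}")
```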

    “When fault systems are rupturing, we see unexpected interactions. For example, earthquake cascades, which can jump from segment to segment, or one earthquake causing the next one to take an unusual path. The earthquake may become much larger than what we would’ve expected,” said Gabriel. “This is something that is challenging to build into seismic hazard assessments.”
    According to the authors, their models have the potential to have a “transformative impact” on the field of seismology by improving the assessment of seismic hazards in active multi-fault systems that are often underestimated.
    “Our findings suggest that similar kinds of models could incorporate more physics into seismic hazard assessment and preparedness,” said Gabriel. “With the help of supercomputers and physics, we have unraveled arguably the most detailed data set of a complex earthquake rupture pattern.”
    The study was supported by the European Union’s Horizon 2020 Research and Innovation Programme, Horizon Europe, the National Science Foundation, the German Research Foundation, and the Southern California Earthquake Center.
    In addition to Gabriel and Taufiqurrahman, the study was co-authored by Duo Li, Thomas Ulrich, Bo Li, and Sara Carena of Ludwig Maximilian University of Munich, Germany; Alessandro Verdecchia with McGill University in Montreal, Canada, and Ruhr-University Bochum in Germany; and Frantisek Gallovic of Charles University in Prague, Czech Republic.

  • Scientists find evidence for new superconducting state in Ising superconductor

    In a ground-breaking experiment, scientists from the University of Groningen, together with colleagues from the Dutch universities of Nijmegen and Twente and the Harbin Institute of Technology (China), have discovered the existence of a superconducting state that was first predicted in 2017. They present evidence for a special variant of the FFLO superconducting state on 24 May in the journal Nature. This discovery could have significant applications, particularly in the field of superconducting electronics.
    The lead author of the paper is Professor Justin Ye, who heads the Device Physics of Complex Materials group at the University of Groningen. Ye and his team have been working on the Ising superconducting state. This is a special state that can resist magnetic fields that generally destroy superconductivity, and that was described by the team in 2015. In 2019, they created a device comprising a double layer of molybdenum disulfide that could couple the Ising superconductivity states residing in the two layers. Interestingly, the device created by Ye and his team makes it possible to switch this protection on or off using an electric field, resulting in a superconducting transistor.
    Elusive
    The coupled Ising superconductor device sheds light on a long-standing challenge in the field of superconductivity. In 1964, four scientists (Fulde, Ferrell, Larkin, and Ovchinnikov) predicted a special superconducting state that could exist under conditions of low temperature and strong magnetic field, referred to as the FFLO state. In standard superconductivity, electrons travel in opposite directions as Cooper pairs. Since they travel at the same speed, these electrons have a total kinetic momentum of zero. However, in the FFLO state, there is a small speed difference between the electrons in the Cooper pairs, which means that there is a net kinetic momentum.
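    To make the idea of a pair with net kinetic momentum concrete, the expressions below contrast conventional zero-momentum pairing with finite-momentum (FFLO-type) pairing. This is the standard textbook way of writing the two cases and is offered here only as background; it is not reproduced from the Groningen paper.
```latex
% Background sketch (textbook notation, not taken from the paper).
% Conventional (BCS) pairing: electrons with opposite momenta pair up,
%   (k\uparrow,\,-k\downarrow) \;\Rightarrow\; \text{total pair momentum } q = 0,
% and the order parameter is uniform in space:
%   \Delta(\mathbf{r}) = \Delta_0 .
% FFLO-type pairing: each pair carries a finite momentum q,
%   (k+\tfrac{q}{2}\uparrow,\; -k+\tfrac{q}{2}\downarrow),
% so the order parameter is modulated in space, for example
%   \Delta(\mathbf{r}) = \Delta_0\, e^{i\mathbf{q}\cdot\mathbf{r}}
%   \quad\text{or}\quad
%   \Delta(\mathbf{r}) = \Delta_0 \cos(\mathbf{q}\cdot\mathbf{r}) .
```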
    ‘This state is very elusive and there are only a handful of articles claiming its existence in normal superconductors,’ says Ye. ‘However, none of these are conclusive.’ To create the FFLO state in a conventional superconductor, a strong magnetic field is needed, but the role played by the magnetic field requires careful tweaking. Simply put, of the two roles the magnetic field can play, we need to use only the Zeeman effect, which separates the electrons in Cooper pairs according to the direction of their spins (a magnetic moment), and not the orbital effect, the other role, which normally destroys superconductivity. ‘It is a delicate negotiation between superconductivity and the external magnetic field,’ explains Ye.
    Fingerprint
    Ising superconductivity, which Ye and his collaborators introduced and published in the journal Science in 2015, suppresses the Zeeman effect. ‘By filtering out the key ingredient that makes conventional FFLO possible, we provided ample space for the magnetic field to play its other role, namely the orbital effect,’ says Ye.
    ‘What we have demonstrated in our paper is a clear fingerprint of the orbital effect-driven FFLO state in our Ising superconductor,’ explains Ye. ‘This is an unconventional FFLO state, first described in theory in 2017.’ The FFLO state in conventional superconductors requires extremely low temperatures and a very strong magnetic field, which makes it difficult to create. However, in Ye’s Ising superconductor, the state is reached with a weaker magnetic field and at higher temperatures.
    Transistors
    In fact, Ye first observed signs of an FFLO state in his molybdenum disulfide superconducting device in 2019. ‘At that time, we could not prove this, because the samples were not good enough,’ says Ye. However, his PhD student Puhua Wan has since succeeded in producing samples of the material that fulfilled all the requirements to show that there is indeed a finite momentum in the Cooper pairs. ‘The actual experiments took half a year, but the analysis of the results added another year,’ says Ye. Wan is the first author of the Nature paper.
    This new superconducting state needs further investigation. Ye: ‘There is a lot to learn about it. For example, how does the kinetic momentum influence the physical parameters? Studying this state will provide new insights into superconductivity. And this may enable us to control this state in devices such as transistors. That is our next challenge.’

  • Breakthrough in computer chip energy efficiency could cut data center electricity use

    Researchers at Oregon State University and Baylor University have made a breakthrough toward reducing the energy consumption of the photonic chips used in data centers and supercomputers.
    The findings are important because a data center can consume up to 50 times more energy per square foot of floor space than a typical office building, according to the U.S. Department of Energy.
    A data center houses an organization’s information technology operations and equipment; it stores, processes and disseminates data and applications. Data centers account for roughly 2% of all electricity use in the United States, the DOE says.
    According to the U.S. International Trade Commission, the number of data centers has risen rapidly as data demand has soared. In the United States, home to many firms that produce and consume vast amounts of data including Facebook, Amazon, Microsoft and Google, there are more than 2,600 data centers.
    The advance by John Conley of the OSU College of Engineering, former Oregon State colleague Alan Wang, now of Baylor, and OSU graduate students Wei-Che Hsu, Ben Kupp and Nabila Nujhat involves a new, ultra-energy-efficient method to compensate for temperature variations that degrade photonic chips. Such chips “will form the high-speed communication backbone of future data centers and supercomputers,” Conley said.
    The circuitry in photonic chips uses photons — particles of light — rather than the electrons that course through conventional computer chips. Moving at the speed of light, photons enable the extremely rapid, energy-efficient transmission of data.
    The issue with photonic chips is that up until now, significant energy has been required to keep their temperature stable and performance high. The team led by Wang, however, has shown that it’s possible to reduce the energy needed for temperature control by a factor of more than 1 million.
    “Alan is an expert in photonic materials and devices and my area of expertise is atomic layer deposition and electronic devices,” Conley said. “We were able to make working prototypes that show temperature can be controlled via gate voltage, which means using virtually no electric current.”
    Presently, Wang said, the photonics industry relies exclusively on components known as “thermal heaters” to fine-tune the working wavelengths of high-speed, electro-optic devices and optimize their performance. These thermal heaters consume several milliwatts of electricity per device.
    “That might not sound like much considering that a typical LED lightbulb uses 6 to 10 watts,” Wang said. “However, multiply those several milliwatts by millions of devices and they add up quickly, so that approach faces challenges as systems scale up and become bigger and more powerful.”
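    To get a feel for the scaling argument Wang is making, the back-of-the-envelope calculation below multiplies a per-device heater power by a device count. The specific numbers (3 mW per heater, one million devices) are assumptions for the sketch, not figures from the study.
```python
# Back-of-the-envelope illustration of why per-device heater power matters at
# scale. The numbers below are assumptions for the sketch, not measured values.

heater_power_w = 3e-3        # assume ~3 mW of heater power per photonic device
num_devices = 1_000_000      # assume a system with one million such devices

total_power_w = heater_power_w * num_devices
print(f"Total heater power: {total_power_w / 1e3:.1f} kW")   # -> 3.0 kW

# Running continuously for a year, that becomes a nontrivial energy bill:
hours_per_year = 24 * 365
energy_kwh = total_power_w / 1e3 * hours_per_year
print(f"Energy per year: {energy_kwh:,.0f} kWh")             # -> ~26,280 kWh
```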
    “Our method is much more acceptable for the planet,” Conley added. “It will one day allow data centers to keep getting faster and more powerful while using less energy so that we can access ever more powerful applications driven by machine learning, such as ChatGPT, without feeling guilty.”

  • Calcium rechargeable battery with long cycle life

    A research group has developed a prototype calcium (Ca) metal rechargeable battery capable of 500 cycles of repeated charge-discharge — the benchmark for practical use.
    The breakthrough was reported in the journal Advanced Science on May 19, 2023.
    With the use of electric vehicles and grid-scale energy storage systems on the rise, the need to explore alternatives to lithium-ion batteries (LIBs) has never been greater. One such candidate is the Ca metal battery. As the fifth most abundant element in the Earth’s crust, calcium is widely available and inexpensive, and it has higher energy density potential than LIBs. Its properties are also thought to help accelerate ion transport and diffusion in electrolytes and cathode materials, giving it an edge over other LIB alternatives such as magnesium and zinc.
    But many hurdles remain in the way of Ca metal batteries’ commercial viability. The lack of an efficient electrolyte and the absence of cathode materials with sufficient Ca2+ storage capabilities have proved to be the main stumbling blocks.
    Back in 2021, some members of the current research group provided a solution to the former problem when they developed a new fluorine-free calcium (Ca) electrolyte based on a hydrogen (monocarborane) cluster. The electrolyte demonstrated markedly improved electrochemical performance, such as high conductivity and high electrochemical stability.
    “For our current research, we tested the long-term operation of a Ca metal battery with a copper sulfide (CuS) nanoparticle/carbon composite cathode and a hydride-based electrolyte,” says Kazuaki Kisu, assistant professor at Tohoku University’s Institute for Materials Research (IMR).
    CuS is also a naturally occurring mineral with favorable electrochemical properties. Its layered structure enables it to store a variety of cations, including lithium, sodium and magnesium. It has a large theoretical capacity of 560 mAh g⁻¹, two to three times higher than that of present cathode materials for lithium-ion batteries.
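    The 560 mAh g⁻¹ figure can be sanity-checked with Faraday’s law, assuming CuS undergoes a two-electron conversion reaction; the reaction stoichiometry is an assumption of this sketch, not a detail stated in the article.
```python
# Sanity check of the ~560 mAh/g theoretical capacity of CuS via Faraday's
# law, assuming two electrons transferred per CuS formula unit (an assumption
# of this sketch, not a detail taken from the article).

F = 96485.0                  # Faraday constant, C/mol
M_CUS = 63.55 + 32.06        # molar mass of CuS, g/mol
N_ELECTRONS = 2              # assumed electrons transferred per CuS

# (n * F) C/mol -> divide by 3.6 to get mAh/mol -> divide by molar mass for mAh/g
capacity_mah_per_g = N_ELECTRONS * F / 3.6 / M_CUS
print(f"Theoretical capacity of CuS: {capacity_mah_per_g:.0f} mAh/g")  # ~561
```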
    Through nanoparticulation and compositing with carbon materials, Kisu and his colleagues were able to create a cathode capable of storing large amounts of calcium ions. When employed with the hydride-type electrolyte, the cathode yielded a battery with highly stable cycling performance. The prototype battery maintained 92% capacity retention over 500 cycles, based on the capacity of the 10th cycle.
    The group is confident that their breakthrough will help advance research into cathode materials for Ca-based batteries. “Our study confirms the feasibility of Ca metal anodes for long-term operations, and we are hopeful the results will expedite the development of Ca metal batteries,” says Kisu.

  • Researchers build bee robot that can twist

    A robotic bee that can fly fully in all directions has been developed by Washington State University researchers.
    With four wings made out of carbon fiber and mylar, as well as four lightweight actuators, one controlling each wing, the Bee++ prototype is the first to fly stably in all directions. That includes the tricky twisting motion known as yaw, with the Bee++ fully achieving the six degrees of free movement that a typical flying insect displays.
    Led by Néstor O. Pérez-Arancibia, Flaherty associate professor in WSU’s School of Mechanical and Materials Engineering, the researchers report on their work in the journal IEEE Transactions on Robotics. Pérez-Arancibia will present the results at the IEEE International Conference on Robotics and Automation at the end of this month.
    Researchers have been trying to develop artificial flying insects for more than 30 years, said Pérez-Arancibia. They could someday be used for many applications, including for artificial pollination, search and rescue efforts in tight spaces, biological research, or environmental monitoring, including in hostile environments.
    But just getting the tiny robots to take off and land required development of controllers that act the way an insect brain does.
    “It’s a mixture of robotic design and control,” he said. “Control is highly mathematical, and you design a sort of artificial brain. Some people call it the hidden technology, but without those simple brains, nothing would work.”
    Researchers initially developed a two-winged robotic bee, but it was limited in its movement. In 2019, Pérez-Arancibia and two of his PhD students for the first time built a four-winged robot light enough to take off. To perform the two maneuvers known as pitching and rolling, the researchers make the front wings flap differently from the back wings (for pitching) and the right wings flap differently from the left wings (for rolling), creating torque that rotates the robot about its two main horizontal axes.

    But being able to control the complex yaw motion is tremendously important, he said. Without it, robots spin out of control, unable to focus on a point. Then they crash.
    “If you can’t control yaw, you’re super limited,” he said. “If you’re a bee, here is the flower, but if you can’t control the yaw, you are spinning all the time as you try to get there.”
    Having all degrees of movement is also critically important for evasive maneuvers or tracking objects.
    “The system is highly unstable, and the problem is super hard,” he said. “For many years, people had theoretical ideas about how to control yaw, but nobody could achieve it due to actuation limitations.”
    To allow their robot to twist in a controlled manner, the researchers took a cue from insects and moved the wings so that they flap in an angled plane. They also increased the number of times per second their robot can flap its wings, from 100 to 160 times per second.
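    As a purely hypothetical illustration of the kind of control mixing described above (not the WSU team's actual controller), the sketch below maps desired thrust, roll, pitch and yaw commands onto flapping-amplitude offsets for the four wings. The mixing matrix and its signs are assumptions chosen only to mirror the prose: front and back wings differ for pitch, right and left wings differ for roll, and diagonal pairs differ for yaw thanks to the angled stroke planes.
```python
import numpy as np

# Hypothetical command mixer for a four-wing flapping robot (front-left,
# front-right, back-left, back-right). Not the WSU controller; the matrix is
# an illustrative assumption that mirrors the article's description.
MIX = np.array([
    # thrust  roll  pitch   yaw
    [  1.0,  -1.0,  +1.0,  +1.0],   # front-left wing
    [  1.0,  +1.0,  +1.0,  -1.0],   # front-right wing
    [  1.0,  -1.0,  -1.0,  -1.0],   # back-left wing
    [  1.0,  +1.0,  -1.0,  +1.0],   # back-right wing
])

def wing_commands(thrust, roll, pitch, yaw):
    """Map body-level commands to flapping-amplitude offsets for the four wings."""
    return MIX @ np.array([thrust, roll, pitch, yaw])

# Example: hover thrust plus a small yaw command drives the diagonal wing
# pairs differently, producing the twisting motion.
print(wing_commands(thrust=1.0, roll=0.0, pitch=0.0, yaw=0.1))
```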
    “Part of the solution was the physical design of the robot, and we also invented a new design for the controller — the brain that tells the robot what to do,” he said.
    Weighing in at 95 mg with a 33-millimeter wingspan, the Bee++ is still bigger than real bees, which weigh around 10 milligrams. Unlike real insects, it can only fly autonomously for about five minutes at a time, so it is mostly tethered to a power source through a cable. The researchers are also working to develop other types of insect robots, including crawlers and water striders.
    Pérez-Arancibia’s former PhD students at the University of Southern California, Ryan M. Bena, Xiufeng Yang, and Ariel A. Calderón, co-authored the article. The work was funded by the National Science Foundation and DARPA. The WSU Foundation and the Palouse Club, through WSU’s Cougar Cage program, have also provided support.

  • Boost for the quantum internet

    A quarter of a century ago, theoretical physicists at the University of Innsbruck made the first proposal for transmitting quantum information over long distances via quantum repeaters, which would open the door to the construction of a worldwide quantum information network. Now, a new generation of Innsbruck researchers has built a quantum repeater node for the standard wavelength of telecommunication networks and transmitted quantum information over tens of kilometers.
    Quantum networks connect quantum processors or quantum sensors with each other. This allows tap-proof communication and high-performance distributed sensor networks. Between network nodes, quantum information is exchanged by photons that travel through optical waveguides. Over long distances, however, the likelihood of photons being lost increases dramatically. Because quantum information cannot simply be copied and amplified, 25 years ago Hans Briegel, Wolfgang Dür, Ignacio Cirac and Peter Zoller, then all at the University of Innsbruck, provided the blueprints for a quantum repeater. These repeaters feature light-matter entanglement sources and memories that create entanglement in independent network links, which are then connected by a so-called entanglement swap to distribute entanglement over long distances.
    Even transmission over 800 kilometers possible
    Quantum physicists led by Ben Lanyon from the Department of Experimental Physics at the University of Innsbruck have now succeeded in building the core parts of a quantum repeater: a fully functioning network node made of two single matter systems that enables both entanglement creation with a photon at the standard frequency of the telecommunications network and entanglement swapping operations. The repeater node consists of two calcium ions captured in an ion trap within an optical resonator, as well as single-photon conversion to the telecom wavelength. The scientists thus demonstrated the transfer of quantum information over a 50-kilometer-long optical fiber, with the quantum repeater placed exactly halfway between the starting and end points. The researchers were also able to calculate which improvements to this design would be necessary to make transmission over 800 kilometers possible, which would allow Innsbruck to be connected to Vienna.
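    The benefit of placing the repeater at the midpoint can be illustrated with a simple loss calculation. The sketch below assumes a typical telecom-fiber attenuation of about 0.2 dB/km and ignores every other imperfection (memory lifetimes, swap fidelity, detector efficiency), so it is only a rough caricature of the idea, not an analysis from the paper.
```python
# Rough illustration of why a midpoint repeater helps: photon survival in
# fiber falls off exponentially with distance. Assumes ~0.2 dB/km attenuation
# (typical for telecom fiber) and ignores all other losses.

ATTENUATION_DB_PER_KM = 0.2

def transmittance(length_km):
    """Probability that a photon survives a fiber of the given length."""
    return 10 ** (-ATTENUATION_DB_PER_KM * length_km / 10)

print(f"Direct 50 km link:    {transmittance(50):.3f}")   # ~0.100
print(f"Each 25 km half-link: {transmittance(25):.3f}")   # ~0.316

# With memories at the repeater node, the two half-links can be established
# independently and then joined by an entanglement swap, so the rate is set by
# the easier 25 km hops rather than by one photon surviving the full 50 km.
```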
    The current results were published in Physical Review Letters. Funding for the research was provided by a START award from the Austrian Science Fund FWF, the Austrian Academy of Sciences and the European Union, among others. Lanyon’s team is part of the Quantum Internet Alliance, an international project under the EU Quantum Flagship.

  • Effects of crypto mining on Texas power grid

    Cryptocurrency transactions may be costing more than just transaction fees. The electricity used for these transactions is more than what some countries, like Argentina and Australia, use in an entire year.
    Published estimates of the total global electricity usage for cryptocurrency assets such as Bitcoin are between 120 and 240 billion kilowatt-hours per year, according to the White House Office of Science and Technology Policy. The United States accounts for the largest share of that usage.
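    To put that range in more familiar units, the quick conversion below turns annual electricity use into an average continuous power draw. The 120 and 240 billion kWh figures come from the estimate quoted above; everything else is simple arithmetic.
```python
# Convert the quoted 120-240 billion kWh/year estimate into an average
# continuous power draw, just to put the numbers in more familiar units.

HOURS_PER_YEAR = 24 * 365   # 8,760

for annual_kwh in (120e9, 240e9):
    avg_power_gw = annual_kwh / HOURS_PER_YEAR / 1e6   # kW -> GW
    print(f"{annual_kwh / 1e9:.0f} billion kWh/yr  ~  {avg_power_gw:.1f} GW average draw")
# -> roughly 14 to 27 GW running around the clock, on the order of the average
#    demand of an entire mid-sized national grid.
```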
    Finance and business experts have debated the ramifications of cryptocurrency and mining, but little focus has been placed on the impact of these activities on the power grid and energy consumption until now.
    Dr. Le Xie, professor in the Department of Electrical and Computer Engineering at Texas A&M University and associate director of the Texas A&M Energy Institute, is at the center of this effort to understand how cryptocurrency mining impacts the power grid and how to use this information for further research, education and policymaking.
    Even as technology improves, allowing users to do more while using less energy, cryptocurrency mining is computationally intensive, and the measure of power on the blockchain network, or hash rate, is still rising.
    During the summer heatwave of 2022 in Texas, Xie and his collaborators found an 18% reduction in worldwide cryptocurrency mining. The decrease was linked to the stress on the Texas power grid, which led the Electric Reliability Council of Texas to issue a request for energy consumers to conserve energy.

    “There seems to be a very strong negative correlation between the mining demand and the systemwide total net demand,” Xie said. “When the grid is stressed, crypto miners are shutting down, which demonstrates a potential for demand flexibility.”
    For example, when the grid is under stress during a heat wave, homeowners run more air conditioning and, in turn, consume more power. Compared to these types of firm demand, cryptocurrency mining demand shows good potential for providing flexibility during times when peak energy usage in other areas is vital.
    Their findings are published in the March issue of the Institute of Electrical and Electronics Engineers Transactions on Energy Markets, Policy and Regulation and the June issue of Advances in Applied Energy.
    In these papers, Xie and his students provide data to allow a first step into studying these mining facilities’ carbon footprint and the impact on grid reliability and wholesale electricity prices. Ultimately, location matters, and many factors play a part in this complex discussion.
    “Increasing firm demand will invariably result in a decrease in grid reliability,” Xie said. “However, with crypto mining modeled as a flexible load that can be turned off during the stressed moments, it can be a positive contributor to the grid reliability.”
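    A minimal numerical sketch of what “modeled as a flexible load” can mean is given below: a hypothetical mining load runs only on whatever capacity is left over after firm demand is served, and is curtailed when the grid is stressed. The capacity, demand profile and mining load size are invented for illustration and are not ERCOT figures.
```python
# Toy illustration of crypto mining treated as a curtailable (flexible) load.
# All numbers are invented for the sketch and are not ERCOT data.

GRID_CAPACITY_MW = 1000        # hypothetical available generation
MINING_LOAD_MW = 150           # hypothetical flexible mining load

hourly_firm_demand_mw = [700, 820, 900, 980, 940, 860]   # e.g. a hot afternoon

for hour, firm in enumerate(hourly_firm_demand_mw):
    headroom = GRID_CAPACITY_MW - firm
    # Mining runs only with whatever headroom the firm demand leaves over.
    mining = min(MINING_LOAD_MW, max(headroom, 0))
    print(f"hour {hour}: firm demand {firm} MW, mining runs at {mining} MW")
```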
    Xie is the lead for the Blockchain and Energy Research Consortium at Texas A&M, which is a collaboration between a team of Texas A&M researchers and industry partners. Their mission is to provide an unbiased multidisciplinary resource to communicate recent developments in the intersection of blockchain and energy.
    Although cryptocurrency is still in its infancy, one thing is certain: its increasing energy usage will be a critical issue as this emerging industry continues to advance. With that in mind, Xie is continuing his research to find a solution that helps take advantage of blockchain-enabled technologies while ensuring sustainable grid operation.