More stories

  • Quantum scientists accurately measure power levels one trillion times lower than usual

    Scientists in Finland have developed a nanodevice that can measure the absolute power of microwave radiation down to the femtowatt level at ultra-low temperatures — a scale a trillion times lower than that routinely used in verifiable power measurements. The device has the potential to significantly advance microwave measurements in quantum technology.
    Measuring extremely low power
    Quantum science takes place mostly at ultra-low temperatures using devices called dilution refrigerators. The experiments also have to be done at tiny energy levels — down to the energy level of single photons or even less. Researchers have to measure these extremely low energy levels as accurately as possible, which means also accounting for heat — a persistent problem for quantum devices.
    To measure heat in quantum experiments, scientists use a special type of thermometer called a bolometer. An exceptionally accurate bolometer was recently developed at Aalto by a team led by Mikko Möttönen, associate professor of quantum technology at Aalto and VTT, but the device had more uncertainty than they had hoped for. Although it enabled them to observe the relative power level, they couldn’t determine the absolute amount of energy very accurately.
    In the new study, Möttönen’s team worked with researchers at the quantum-technology companies Bluefors and IQM, and VTT Technical Research Centre of Finland to improve the bolometer.
    ‘We added a heater to the bolometer, so we can apply a known heater current and measure the voltage. Since we know the precise amount of power we’re putting into the heater, we can calibrate the power of the input radiation against the heater power. The result is a self-calibrating bolometer working at low temperatures, which allows us to accurately measure absolute powers at cryogenic temperatures,’ Möttönen says.
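    As a rough numerical illustration of that substitution-calibration idea, the sketch below applies a known heater power P = I·V, fits the detector's response, and then converts an unknown readout back into absolute power. It assumes a linear detector response and uses made-up numbers; it is not the team's actual calibration procedure or data.

```python
# Schematic sketch of substitution calibration for a self-calibrating bolometer.
# Assumptions (not from the paper): the readout responds linearly to absorbed
# power, and the same response applies to heater power and microwave power.

def heater_power(current_amps, voltage_volts):
    """Joule power dissipated in the on-chip heater: P = I * V (watts)."""
    return current_amps * voltage_volts

def fit_responsivity(powers_watts, readouts):
    """Least-squares slope of readout vs. known heater power (readout units per watt)."""
    n = len(powers_watts)
    mean_p = sum(powers_watts) / n
    mean_r = sum(readouts) / n
    num = sum((p - mean_p) * (r - mean_r) for p, r in zip(powers_watts, readouts))
    den = sum((p - mean_p) ** 2 for p in powers_watts)
    return num / den

def absolute_power(readout, responsivity):
    """Convert a bolometer readout into absolute absorbed power (watts)."""
    return readout / responsivity

# Sweep the heater at femtowatt-scale powers, then read out an unknown signal.
heater_sweep = [heater_power(i * 1e-9, i * 1e-6) for i in (1, 2, 3, 4)]   # 1-16 fW
readouts = [2.5e14 * p for p in heater_sweep]                             # fake linear detector
responsivity = fit_responsivity(heater_sweep, readouts)
print(absolute_power(0.5, responsivity))   # ~2e-15 W, i.e. about 2 femtowatts
```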

    According to Russell Lake, director of quantum applications at Bluefors, the new bolometer is a significant step forward in measuring microwave power.
    ‘Commercial power sensors typically measure power at the scale of one milliwatt. This bolometer does that accurately and reliably at one femtowatt or below. That’s a trillion times less power than used in typical power calibrations.’
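    The "trillion times" figure follows directly from the unit prefixes:

```latex
\frac{1\ \mathrm{mW}}{1\ \mathrm{fW}} = \frac{10^{-3}\ \mathrm{W}}{10^{-15}\ \mathrm{W}} = 10^{12},
\qquad \text{i.e. one femtowatt is a trillion times less power than one milliwatt.}
```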
    Covering both deep and wide scales
    Möttönen explains that the new bolometer could improve the performance of quantum computers. ‘For accurate results, the measurement lines used to control qubits should be at very low temperatures, void of any thermal photons and excess radiation. Now with this bolometer, we can actually measure that radiation temperature without interference from the qubit circuitry,’ he says.
    The bolometer also covers a very broad range of frequencies.

    ‘The sensor is broadband, which means that it can measure the power absorbed at various frequencies. This is not a given in quantum technology, as sensors are usually limited to a very narrow band,’ says Jean-Philippe Girard, a scientist at Bluefors who also previously worked at Aalto on the device.
    The team says the bolometer provides a major boost to quantum technology fields.
    ‘Measuring microwaves happens in wireless communications, radar technology, and many other fields. They have their ways of performing accurate measurements, but there was no way to do the same when measuring very weak microwave signals for quantum technology. The bolometer is an advanced diagnostic instrument that has been missing from the quantum technology toolbox until now,’ Lake says.
    The work is a result of seamless collaboration between Aalto University and Bluefors, a perfect example of academia and industry complementing each other’s strengths. The device was developed at Aalto’s Quantum Computing and Devices (QCD) group, which is part of the Academy of Finland Centre of Excellence in Quantum Technology (QTF). The team used Micronova cleanrooms that belong to the national research infrastructure OtaNano. Since the first experiments at Aalto, Bluefors has also successfully tested these devices in its own industrial facilities.
    ‘That shows that this is not just a lucky break in a university lab, but something that both the industrial and the academic professionals working in quantum technology can benefit from,’ Möttönen says.

  • The metaverse can lead to better science

    In 2021, Facebook made “metaverse” the buzziest word on the web, rebranding itself as Meta and announcing a plan to build “a set of interconnected digital spaces that lets you do things you can’t do in the physical world.” Since then, the metaverse has been called many different things. Some say it is the “future of the internet.” Others call it “an amorphous concept that no one really wants.”
    For Diego Gómez-Zará, an assistant professor in the University of Notre Dame’s Department of Computer Science and Engineering, the metaverse is something else: a tool for better science.
    In “The Promise and Pitfalls of the Metaverse for Science,” published in Nature Human Behaviour, Gómez-Zará argues that scientists should take advantage of the metaverse for research while also guarding against the potential hazards that come with working in virtual reality.
    Virtual environments, real benefits
    Along with co-authors Peter Schiffer (Department of Applied Physics and Department of Physics, Yale University) and Dashun Wang (McCormick School of Engineering, Northwestern University), Gómez-Zará defines the metaverse as a virtual space where users can interact in a three-dimensional environment and take actions that affect the world outside.
    The researchers say the metaverse stands to benefit science in four main ways.

    First, it could remove barriers and make science more accessible. To understand these opportunities, Gómez-Zará says, we need not speculate about the distant future. Instead, we can point to ways researchers have already begun using virtual environments in their work.
    At the University College London School of Pharmacy, for example, scientists have made a digital replica of their lab that can be visited in virtual reality. This digital replica allows scientists at various points around the world to meet, collaborate and make decisions together about how to move a research project forward.
    Similarly, a virtual laboratory training program developed by the Centers for Disease Control and Prevention teaches young scientists in many different locations to identify the parts of a lab and even to conduct emergency procedures.
    This example shows a second benefit: improving teaching and learning.
    Gómez-Zará explains, “For someone training to become a surgeon, it is very hard to perform a procedure for the first time without any mistakes. And if you are working with a real patient, a mistake can be very harmful. Experiential learning in a virtual environment can help you try something and make mistakes along the way without harmful consequences, and the freedom from harmful consequences can improve research in other fields as well.”
    Gómez-Zará is also working with a team at Notre Dame’s Virtual Reality Lab to understand a third potential benefit, one related to the social side of science. The research team studies the effects of online environments on a team’s work processes. They find that virtual environments can help teams collaborate more effectively than videoconferencing.

    “Since the pandemic, we have all become comfortable videoconferencing,” says Gómez-Zará. “But that doesn’t mean getting on a video call is the most effective tool for every task. Especially for intense social activities like team building and innovation, virtual reality is a much closer replica of what we would have offline and could prove much more effective.”
    Gómez-Zará says the metaverse could also be used to create wholly new experimental environments.
    “If you can get data and images from somewhere, you can create a virtual replica of that place in virtual reality,” Gómez-Zará explains. For example, he says, we have images of Mars captured by satellites and robots. “These could be used to create a virtual reality version of the environment where scientists can experience what it is like there. Eventually they could even interact with the environment from a distance.”
    Potential pitfalls
    Gómez-Zará emphasizes that realizing the full benefits of the metaverse will also require us to avoid several pitfalls associated with it.
    There are still barriers to using virtual reality. Virtual reality goggles and related equipment, while becoming more affordable, still require a significant investment.
    This issue relates to a larger one: Who owns the metaverse? Currently, a few technology companies control the metaverse, but Gómez-Zará notes that there have been calls for agencies and others who support research to invest in building an open, public metaverse. In the meantime, he says, it is important for researchers to think through questions of ownership and privacy any time they work in the metaverse.
    His overall message, though, is a hopeful one. “We still tend to associate the metaverse with entertainment and casual socialization. This makes it all too easy to dismiss,” he says. “But look at how quickly we have all adapted to technologies we used rarely before the pandemic. It could be the same way with the metaverse. We need the research community exploring it. That is the best way to plan for the risks while also recognizing all of the possibilities.”

  • Scientists propose revolution in complex systems modelling with quantum technologies

    Scientists have made a significant advance with quantum technologies that could transform complex systems modelling, offering an accurate and effective approach that requires far less memory.
    Complex systems play a vital role in our daily lives, whether that be predicting traffic patterns, weather forecasts, or understanding financial markets. However, accurately predicting these behaviours and making informed decisions relies on storing and tracking vast information from events in the distant past — a process which presents huge challenges.
    Current models using artificial intelligence see their memory requirements increase by more than a hundredfold every two years and can often involve optimisation over billions — or even trillions — of parameters. Such immense amounts of information lead to a bottleneck where we must trade off memory cost against predictive accuracy.
    A collaborative team of researchers from The University of Manchester, the University of Science and Technology of China (USTC), the Centre for Quantum Technologies (CQT) at the National University of Singapore and Nanyang Technological University (NTU) propose that quantum technologies could provide a way to mitigate this trade-off.
    The team have successfully implemented quantum models that can simulate a family of complex processes with only a single qubit of memory — the basic unit of quantum information — offering substantially reduced memory requirements.
    Unlike classical models that rely on increasing memory capacity as more data from past events are added, these quantum models will only ever need one qubit of memory.
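    To make the contrast concrete, the toy calculation below counts how many distinct histories a classical predictor may need to keep track of as the relevant past grows, versus the single qubit of memory reported for the quantum models. The exponential classical scaling shown here is an illustrative worst case, not a figure from the paper.

```python
# Toy comparison of memory scaling (illustrative only, not the paper's model).
# A classical predictor that conditions on the last k binary observations may
# need to distinguish up to 2**k past histories; the quantum models described
# above store the relevant history in a single qubit.

def classical_history_states(k, alphabet_size=2):
    """Upper bound on distinct length-k histories a classical memory must track."""
    return alphabet_size ** k

for k in (1, 5, 10, 20, 30):
    print(f"past length {k:2d}: classical states <= {classical_history_states(k):>13,}   qubits = 1")
```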

    The development, published in the journal Nature Communications, represents a significant advancement in the application of quantum technologies in complex system modelling.
    Dr Thomas Elliott, project leader and Dame Kathleen Ollerenshaw Fellow at The University of Manchester, said: “Many proposals for quantum advantage focus on using quantum computers to calculate things faster. We take a complementary approach and instead look at how quantum computers can help us reduce the size of the memory we require for our calculations.
    “One of the benefits of this approach is that by using as few qubits as possible for the memory, we get closer to what is practical with near-future quantum technologies. Moreover, we can use any extra qubits we free up to help mitigate against errors in our quantum simulators.”
    The project builds on an earlier theoretical proposal by Dr Elliott and the Singapore team. To test the feasibility of the approach, they joined forces with USTC, who used a photon-based quantum simulator to implement the proposed quantum models.
    The team achieved higher accuracy than is possible with any classical simulator equipped with the same amount of memory. The approach can be adapted to simulate other complex processes with different behaviours.
    Dr Wu Kang-Da, post-doctoral researcher at USTC and joint first author of the research, said: “Quantum photonics represents one of the least error-prone architectures that has been proposed for quantum computing, particularly at smaller scales. Moreover, because we are configuring our quantum simulator to model a particular process, we are able to finely-tune our optical components and achieve smaller errors than typical of current universal quantum computers.”
    Dr Chengran Yang, Research Fellow at CQT and also joint first author of the research, added: “This is the first realisation of a quantum stochastic simulator where the propagation of information through the memory over time is conclusively demonstrated, together with proof of greater accuracy than possible with any classical simulator of the same memory size.”
    Beyond the immediate results, the scientists say that the research presents opportunities for further investigation, such as exploring the benefits of reduced heat dissipation in quantum modelling compared to classical models. Their work could also find potential applications in financial modelling, signal analysis and quantum-enhanced neural networks.
    Next steps include plans to explore these connections, and to scale their work to higher-dimensional quantum memories.

  • Medical ‘microrobots’ could one day treat bladder disease, other human illnesses

    A team of engineers at the University of Colorado Boulder has designed a new class of tiny, self-propelled robots that can zip through liquid at incredible speeds — and may one day even deliver prescription drugs to hard-to-reach places inside the human body.
    The researchers describe their mini healthcare providers in a paper published last month in the journal Small.
    “Imagine if microrobots could perform certain tasks in the body, such as non-invasive surgeries,” said Jin Lee, lead author of the study and a postdoctoral researcher in the Department of Chemical and Biological Engineering. “Instead of cutting into the patient, we can simply introduce the robots to the body through a pill or an injection, and they would perform the procedure themselves.”
    Lee and his colleagues aren’t there yet, but the new research is a big step forward for tiny robots.
    The group’s microrobots are really small. Each one measures only 20 micrometers wide, several times smaller than the width of a human hair. They’re also really fast, capable of traveling at speeds of about 3 millimeters per second, or roughly 9,000 times their own length per minute. That’s many times faster than a cheetah in relative terms.
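    The speed comparison checks out against the stated dimensions:

```latex
\frac{3\ \mathrm{mm/s}}{20\ \mu\mathrm{m}}
  = \frac{3000\ \mu\mathrm{m/s}}{20\ \mu\mathrm{m}}
  = 150\ \text{body lengths per second}
  \approx 9000\ \text{body lengths per minute}.
```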
    They have a lot of potential, too. In the new study, the group deployed fleets of these machines to transport doses of dexamethasone, a common steroid medication, to the bladders of lab mice. The results suggest that microrobots may be a useful tool for treating bladder diseases and other illnesses in people.

    “Microscale robots have garnered a lot of excitement in scientific circles, but what makes them interesting to us is that we can design them to perform useful tasks in the body,” said C. Wyatt Shields, a co-author of the new study and assistant professor of chemical and biological engineering.
    Fantastic Voyage
    If that sounds like something ripped from science fiction, that’s because it is. In the classic film Fantastic Voyage, a group of adventurers travels via a shrunken-down submarine into the body of a man in a coma.
    “The movie was released in 1966. Today, we are living in an era of micrometer- and nanometer-scale robots,” Lee said.
    He imagines that, just like in the movie, microrobots could swirl through a person’s bloodstream, seeking out targeted areas to treat for various ailments.

    The team makes its microrobots out of materials called biocompatible polymers using a technology similar to 3D printing. The machines look a bit like small rockets and come complete with three tiny fins. They also include a little something extra: Each of the robots carries a small bubble of trapped air, similar to what happens when you dunk a glass upside-down in water. If you expose the machines to an acoustic field, like the kind used in ultrasound, the bubbles will begin to vibrate wildly, pushing water away and shooting the robots forward.
    Other CU Boulder co-authors of the new study include Nick Bottenus, assistant professor of mechanical engineering; Ankur Gupta, assistant professor of chemical and biological engineering; and engineering graduate students Ritu Raj, Cooper Thome, Nicole Day and Payton Martinez.
    To take their microrobots for a test drive, the researchers set their sights on a common problem for humans: bladder disease.
    Bringing relief
    Interstitial cystitis, also known as painful bladder syndrome, affects millions of Americans and, as its name suggests, can cause severe pelvic pain. Treating the disease can be equally uncomfortable. Often, patients have to come into a clinic several times over a period of weeks where a doctor injects a harsh solution of dexamethasone into the bladder through a catheter.
    Lee believes that microrobots may be able to provide some relief.
    In laboratory experiments, the researchers fabricated schools of microrobots encapsulating high concentrations of dexamethasone. They then introduced thousands of those bots into the bladders of lab mice. The result was a real-life Fantastic Voyage: The microrobots dispersed through the organs before sticking onto the bladder walls, which would likely make them difficult to pee out.
    Once there, the machines slowly released their dexamethasone over the course of about two days. Such a steady flow of medicine could allow patients to receive more drugs over a longer span of time, Lee said, improving outcomes for patients.
    He added that the team has a lot of work to do before microrobots can travel through real human bodies. For a start, the group wants to make the machines fully biodegradable so that they would eventually dissolve in the body.
    “If we can make these particles work in the bladder,” Lee said, “then we can achieve a more sustained drug release, and maybe patients wouldn’t have to come into the clinic as often.”

  • New method predicts extreme events more accurately

    With the rise of extreme weather events, which are becoming more frequent in our warming climate, accurate predictions are becoming more critical for all of us, from farmers to city-dwellers to businesses around the world. To date, climate models have failed to accurately predict precipitation intensity, particularly extremes. While precipitation in nature is highly varied, with many extremes, climate models predict a smaller variance in precipitation, with a bias toward light rain.
    Missing piece in current algorithms: cloud organization
    Researchers have been working to develop algorithms that will improve prediction accuracy but, as Columbia Engineering climate scientists report, there has been a missing piece of information in traditional climate model parameterizations — a way to describe cloud structure and organization that is so fine-scale it is not captured on the computational grid being used. These organization measurements affect predictions of both precipitation intensity and its stochasticity, the variability of random fluctuations in precipitation intensity. Up to now, there has not been an effective, accurate way to measure cloud structure and quantify its impact.
    A new study from a team led by Pierre Gentine, director of the Learning the Earth with Artificial Intelligence and Physics (LEAP) Center, used global storm-resolving simulations and machine learning to create an algorithm that can deal separately with two different scales of cloud organization: those resolved by a climate model, and those that cannot be resolved as they are too small. This new approach addresses the missing piece of information in traditional climate model parameterizations and provides a way to predict precipitation intensity and variability more precisely.
    “Our findings are especially exciting because, for many years, the scientific community has debated whether to include cloud organization in climate models,” said Gentine, Maurice Ewing and J. Lamar Worzel Professor of Geophysics in the Departments of Earth and Environmental Engineering and Earth and Environmental Sciences and a member of the Data Science Institute. “Our work provides an answer to the debate and a novel solution for including organization, showing that including this information can significantly improve our prediction of precipitation intensity and variability.”
    Using AI to design neural network algorithm
    Sarah Shamekh, a PhD student working with Gentine, developed a neural network algorithm that learns the relevant information about the role of fine-scale cloud organization (unresolved scales) on precipitation. Because Shamekh did not define a metric or formula in advance, the model learns implicitly — on its own — how to measure the clustering of clouds, a metric of organization, and then uses this metric to improve the prediction of precipitation. Shamekh trained the algorithm on a high-resolution moisture field, encoding the degree of small-scale organization.
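    The sketch below shows one heavily simplified way such a model could be wired up: a small encoder pools a high-resolution moisture field into a single learned "organization" scalar, which is then concatenated with coarse-grained inputs to predict precipitation. The layer sizes, names, and the use of PyTorch are illustrative assumptions; the actual architecture is specified in the PNAS paper.

```python
# Schematic of an implicitly learned cloud-organization metric (illustrative only).
import torch
import torch.nn as nn

class OrgMetricEncoder(nn.Module):
    """Pools a fine-scale moisture field into one learned scalar per sample."""
    def __init__(self, n_fine):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_fine, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, fine_moisture):            # shape (batch, n_fine)
        return self.net(fine_moisture)           # shape (batch, 1)

class PrecipPredictor(nn.Module):
    """Combines coarse (resolved) inputs with the learned organization metric."""
    def __init__(self, n_coarse, n_fine):
        super().__init__()
        self.encoder = OrgMetricEncoder(n_fine)
        self.head = nn.Sequential(nn.Linear(n_coarse + 1, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, coarse_state, fine_moisture):
        org = self.encoder(fine_moisture)                          # implicit organization metric
        return self.head(torch.cat([coarse_state, org], dim=-1))   # precipitation estimate

model = PrecipPredictor(n_coarse=8, n_fine=256)
precip = model(torch.randn(4, 8), torch.randn(4, 256))   # (4, 1) predicted intensities
```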
    “We discovered that our organization metric explains precipitation variability almost entirely and could replace a stochastic parameterization in climate models,” said Shamekh, lead author of the study, published May 8, 2023, by PNAS. “Including this information significantly improved precipitation prediction at the scale relevant to climate models, accurately predicting precipitation extremes and spatial variability.”
    Machine-learning algorithm will improve future projections
    The researchers are now using their machine-learning approach, which implicitly learns the sub-grid cloud organization metric, in climate models. This should significantly improve the prediction of precipitation intensity and variability, including extreme precipitation events, and enable scientists to better project future changes in the water cycle and extreme weather patterns in a warming climate.
    Future work
    This research also opens up new avenues for investigation, such as exploring the possibility of precipitation creating memory in the climate system, where the atmosphere retains information about recent weather conditions that in turn influences atmospheric conditions later on. This new approach could have wide-ranging applications beyond precipitation modeling, including better modeling of ice sheets and the ocean surface.

  • Quantum matter breakthrough: Tuning density waves

    Scientists at EPFL have found a new way to create a crystalline structure called a “density wave” in an atomic gas. The findings can help us better understand the behavior of quantum matter, one of the most complex problems in physics.
    “Cold atomic gases were well known in the past for the ability to ‘program’ the interactions between atoms,” says Professor Jean-Philippe Brantut at EPFL. “Our experiment doubles this ability!” Working with the group of Professor Helmut Ritsch at the University of Innsbruck, they have made a breakthrough that can impact not only quantum research but quantum-based technologies in the future.
    Density waves
    Scientists have long been interested in understanding how materials self-organize into complex structures, such as crystals. In the often-arcane world of quantum physics, this sort of self-organization of particles is seen in ‘density waves’, where particles arrange themselves into a regular, repeating pattern or ‘order’, like a group of people in different colored shirts standing in a line so that no two people wearing the same color stand next to each other.
    Density waves are observed in a variety of materials, including metals, insulators, and superconductors. However, studying them has been difficult, especially when this order (the patterns of particles in the wave) occurs with other types of organization such as superfluidity — a property that allows particles to flow without resistance.
    It’s worth noting that superfluidity is not just a theoretical curiosity; it is of immense interest for developing materials with unique properties, such as high-temperature superconductivity, which could lead to more efficient energy transfer and storage, or for building quantum computers.
    Tuning a Fermi gas with light
    To explore this interplay, Brantut and his colleagues created a “unitary Fermi gas,” a thin gas of lithium atoms cooled to extremely low temperatures, in which the atoms collide with each other very often.
    The researchers then placed this gas in an optical cavity, a device used to confine light in a small space for an extended period of time. Optical cavities are made of two facing mirrors that reflect incoming light back and forth between them thousands of times, allowing light particles, photons, to build up inside the cavity.
    In the study, the researchers used the cavity to make the particles in the Fermi gas interact at long distance: a first atom emits a photon that bounces between the mirrors and is then reabsorbed by a second atom of the gas, regardless of how far it is from the first. When enough photons are emitted and reabsorbed — easily tuned in the experiment — the atoms collectively organize into a density wave pattern.
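    In simplified cavity-QED terms, once the photons are eliminated from the description, this photon exchange leaves behind an effective atom-atom coupling of the schematic form below; the expression is a generic textbook illustration, not the Hamiltonian used in the paper:

```latex
H_{\mathrm{eff}} \;\propto\; \frac{g^{2}}{\Delta} \sum_{i,j} \hat{O}_i\,\hat{O}_j ,
```

    where g is the atom-photon coupling, Δ the detuning from the cavity resonance, and Ô_i the operator through which atom i couples to the light. Because the sum runs over all pairs regardless of their separation, the cavity mediates the distance-independent interactions described above.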
    “The combination of atoms colliding directly with each other in the Fermi gas, while simultaneously exchanging photons over long distance, is a new type of matter where the interactions are extreme,” says Brantut. “We hope what we will see there will improve our understanding of some of the most complex materials encountered in physics.”

  • ‘Segment-jumping’ Ridgecrest earthquakes explored in new study

    On the morning of July 4, 2019, a magnitude 6.4 earthquake struck the Searles Valley in California’s Mojave Desert, with impacts felt across Southern California. About 34 hours later on July 5, the nearby city of Ridgecrest was struck by a magnitude 7.1 earthquake, a jolt felt by millions across the state of California and throughout neighboring communities in Arizona, Nevada, and even Baja California, Mexico.
    Known as the Ridgecrest earthquakes — the biggest earthquakes to hit California in more than 20 years — these seismic events resulted in extensive structural damage, power outages, and injuries. The M6.4 event in Searles Valley was later deemed to be the foreshock to the M7.1 event in Ridgecrest, which is now considered to be the mainshock. Both earthquakes were followed by a multitude of aftershocks.
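    For scale, the standard magnitude-energy relation (a general seismological rule of thumb, not a result of this study) puts the radiated energies of the two shocks roughly an order of magnitude apart:

```latex
\frac{E_{M7.1}}{E_{M6.4}} \approx 10^{\,1.5\,(7.1 - 6.4)} = 10^{1.05} \approx 11 .
```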
    Researchers were baffled by the sequence of seismic activity. Why did it take 34 hours for the foreshock to trigger the mainshock? How did these earthquakes “jump” from one segment of a geologic fault system to another? Can earthquakes “talk” to one another in a dynamic sense?
    To address these questions, a team of seismologists at Scripps Institution of Oceanography at UC San Diego and Ludwig Maximilian University of Munich (LMU) led a new study focused on the relationship between the two big earthquakes, which occurred along a multi-fault system. The team used a powerful supercomputer that incorporated data-infused and physics-based models to identify the link between the earthquakes.
    Scripps Oceanography seismologist Alice Gabriel, who previously worked at LMU, led the study along with her former PhD student at LMU, Taufiq Taufiqurrahman, and several co-authors. Their findings were published May 24 in the journal Nature online, and will appear in the print edition June 8.
    “We used the largest computers that are available and perhaps the most advanced algorithms to try and understand this really puzzling sequence of earthquakes that happened in California in 2019,” said Gabriel, currently an associate professor at the Institute of Geophysics and Planetary Physics at Scripps Oceanography. “High-performance computing has allowed us to understand the driving factors of these large events, which can help inform seismic hazard assessment and preparedness.”
    Understanding the dynamics of multi-fault ruptures is important, said Gabriel, because these types of earthquakes are typically more powerful than those that occur on a single fault. For example, the Turkey-Syria earthquake doublet that occurred on Feb. 6, 2023, resulted in significant loss of life and widespread damage. This event was characterized by two separate earthquakes that occurred only nine hours apart, with both breaking across multiple faults.

    During the 2019 Ridgecrest earthquakes, which originated in the Eastern California Shear Zone along a strike-slip fault system, the two sides of each fault moved mainly in a horizontal direction, with no vertical motion. The earthquake sequence cascaded across interlaced and previously unknown “antithetic” faults, minor or secondary faults that move at high (close to 90 degrees) angles to the major fault. Within the seismological community, there remains an ongoing debate on which fault segments actively slipped, and what conditions promote the occurrence of cascading earthquakes.
    The new study presents the first multi-fault model that unifies seismograms, tectonic data, field mapping, satellite data, and other space-based geodetic datasets with earthquake physics, whereas previous models on this type of earthquake have been purely data-driven.
    “Through the lens of data-infused modeling, enhanced by the capabilities of supercomputing, we unravel the intricacies of multi-fault conjugate earthquakes, shedding light on the physics governing cascading rupture dynamics,” said Taufiqurrahman.
    Using the supercomputer SuperMUC-NG at the Leibniz Supercomputing Centre (LRZ) in Germany, the researchers revealed that the Searles Valley and Ridgecrest events were indeed connected. The earthquakes interacted across a statically strong yet dynamically weak fault system driven by complex fault geometries and low dynamic friction.
    The team’s 3-D rupture simulation illustrates how faults considered strong prior to an earthquake can become very weak as soon as there is fast earthquake movement, and explains the dynamics of how multiple faults can rupture together.
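    One simple way to picture this "statically strong, dynamically weak" behavior is the textbook slip-weakening description of fault friction (shown here only as an illustration; the study uses its own, more detailed physics-based models): the shear stress a fault can sustain is the friction coefficient times the normal stress, and the coefficient collapses from its static value to a much lower dynamic value once rapid slip begins.

```latex
\tau = \mu\,\sigma_n , \qquad \mu:\ \mu_s \;\rightarrow\; \mu_d \ll \mu_s \ \text{during fast slip},
```

    so a fault that looks strong before the earthquake offers little resistance once rupture is under way.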

    “When fault systems are rupturing, we see unexpected interactions. For example, earthquake cascades, which can jump from segment to segment, or one earthquake causing the next one to take an unusual path. The earthquake may become much larger than what we would’ve expected,” said Gabriel. “This is something that is challenging to build into seismic hazard assessments.”
    According to the authors, their models have the potential to have a “transformative impact” on the field of seismology by improving the assessment of seismic hazards in active multi-fault systems that are often underestimated.
    “Our findings suggest that similar kinds of models could incorporate more physics into seismic hazard assessment and preparedness,” said Gabriel. “With the help of supercomputers and physics, we have unraveled arguably the most detailed data set of a complex earthquake rupture pattern.”
    The study was supported by the European Union’s Horizon 2020 Research and Innovation Programme, Horizon Europe, the National Science Foundation, the German Research Foundation, and the Southern California Earthquake Center.
    In addition to Gabriel and Taufiqurrahman, the study was co-authored by Duo Li, Thomas Ulrich, Bo Li, and Sara Carena of Ludwig Maximilian University of Munich, Germany; Alessandro Verdecchia with McGill University in Montreal, Canada, and Ruhr-University Bochum in Germany; and Frantisek Gallovic of Charles University in Prague, Czech Republic.

  • Scientists find evidence for new superconducting state in Ising superconductor

    In a ground-breaking experiment, scientists from the University of Groningen, together with colleagues from the Dutch universities of Nijmegen and Twente and the Harbin Institute of Technology (China), have discovered the existence of a superconductive state that was first predicted in 2017. They present evidence for a special variant of the FFLO superconductive state on 24 May in the journal Nature. This discovery could have significant applications, particularly in the field of superconducting electronics.
    The lead author of the paper is Professor Justin Ye, who heads the Device Physics of Complex Materials group at the University of Groningen. Ye and his team have been working on the Ising superconducting state. This is a special state that can resist magnetic fields that generally destroy superconductivity, and that was described by the team in 2015. In 2019, they created a device comprising a double layer of molybdenum disulfide that could couple the Ising superconductivity states residing in the two layers. Interestingly, the device created by Ye and his team makes it possible to switch this protection on or off using an electric field, resulting in a superconducting transistor.
    Elusive
    The coupled Ising superconductor device sheds light on a long-standing challenge in the field of superconductivity. In 1964, four scientists (Fulde, Ferrell, Larkin, and Ovchinnikov) predicted a special superconducting state that could exist under conditions of low temperature and strong magnetic field, referred to as the FFLO state. In standard superconductivity, electrons travel in opposite directions as Cooper pairs. Since they travel at the same speed, these electrons have a total kinetic momentum of zero. However, in the FFLO state, there is a small speed difference between the electrons in the Cooper pairs, which means that there is a net kinetic momentum.
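    Schematically, the distinction described above can be written as follows (a textbook illustration of the momentum bookkeeping, not notation from the paper):

```latex
\text{BCS: } (\mathbf{k}\uparrow,\,-\mathbf{k}\downarrow) \;\Rightarrow\; \mathbf{k} + (-\mathbf{k}) = 0,
\qquad
\text{FFLO: } (\mathbf{k}+\mathbf{q}\uparrow,\,-\mathbf{k}\downarrow) \;\Rightarrow\; \text{net pair momentum } \mathbf{q} \neq 0 .
```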
    ‘This state is very elusive and there are only a handful of articles claiming its existence in normal superconductors,’ says Ye. ‘However, none of these are conclusive.’ To create the FFLO state in a conventional superconductor, a strong magnetic field is needed, but the role played by the magnetic field needs careful tweaking. Simply put, of the two roles the magnetic field can play, only the Zeeman effect should be used: it separates the electrons in Cooper pairs based on the direction of their spins (a magnetic moment), whereas the orbital effect, the other role, normally destroys superconductivity. ‘It is a delicate negotiation between superconductivity and the external magnetic field,’ explains Ye.
    Fingerprint
    Ising superconductivity, which Ye and his collaborators introduced and published in the journal Science in 2015, suppresses the Zeeman effect. ‘By filtering out the key ingredient that makes conventional FFLO possible, we provided ample space for the magnetic field to play its other role, namely the orbital effect,’ says Ye.
    ‘What we have demonstrated in our paper is a clear fingerprint of the orbital effect-driven FFLO state in our Ising superconductor,’ explains Ye. ‘This is an unconventional FFLO state, first described in theory in 2017.’ The FFLO state in conventional superconductors requires extremely low temperatures and a very strong magnetic field, which makes it difficult to create. However, in Ye’s Ising superconductor, the state is reached with a weaker magnetic field and at higher temperatures.
    Transistors
    In fact, Ye first observed signs of an FFLO state in his molybdenum disulfide superconducting device in 2019. ‘At that time, we could not prove this, because the samples were not good enough,’ says Ye. However, his PhD student Puhua Wan has since succeeded in producing samples of the material that fulfilled all the requirements to show that there is indeed a finite momentum in the Cooper pairs. ‘The actual experiments took half a year, but the analysis of the results added another year,’ says Ye. Wan is the first author of the Nature paper.
    This new superconducting state needs further investigation. Ye: ‘There is a lot to learn about it. For example, how does the kinetic momentum influence the physical parameters? Studying this state will provide new insights into superconductivity. And this may enable us to control this state in devices such as transistors. That is our next challenge.’