More stories

  • in

    Why GPS fails in cities. And how it was brilliantly fixed

    Most of us rarely question the accuracy of the GPS dot that shows our location on a map.
    Yet when visiting a new city and using our phone to navigate, it can seem as if we are jumping from one spot to another, even though we are walking steadily along the same sidewalk.
    “Cities are brutal for satellite navigation,” explained Ardeshir Mohamadi.
    Mohamadi, a doctoral fellow at the Norwegian University of Science and Technology (NTNU), is researching how to make affordable GPS receivers (like those found in smartphones and fitness watches) much more precise without depending on expensive external correction services.
    High accuracy is especially vital for vehicles that drive themselves – autonomous or self-driving cars.
    Urban canyons
    Mohamadi and his team at NTNU have developed a new system that allows autonomous vehicles to navigate safely through dense city environments.

    “In cities, glass and concrete make satellite signals bounce back and forth. Tall buildings block the view, and what works perfectly on an open motorway is not so good when you enter a built-up area,” said Mohamadi.
    When GPS signals reflect off buildings, they take longer to reach the receiver. This delay throws off the calculation of distance to the satellites, which makes the reported position inaccurate.
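    As a rough, illustrative calculation (the figures below are not from the study): a reflected signal that travels an extra 30 metres arrives about 100 nanoseconds late, and because the receiver converts travel time into distance at the speed of light, that delay shows up directly as a 30-metre range error. A minimal sketch:

    ```python
    # Illustrative only: how a multipath detour turns into a range error.
    C = 299_792_458.0  # speed of light in m/s

    def multipath_range_error(extra_path_m: float) -> tuple[float, float]:
        """Return (delay in nanoseconds, apparent range error in metres) for a
        signal that bounces off a building and travels extra_path_m farther
        than the direct line of sight."""
        delay_s = extra_path_m / C
        range_error_m = delay_s * C  # the receiver mistakes the delay for distance
        return delay_s * 1e9, range_error_m

    delay_ns, error_m = multipath_range_error(30.0)  # hypothetical 30 m detour
    print(f"{delay_ns:.1f} ns late -> {error_m:.1f} m of extra pseudorange")
    ```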
    Such complex urban environments are known as ‘urban canyons’. It is similar to being at the bottom of a deep gorge, where signals reach you only after multiple reflections from the walls.
    “For autonomous vehicles, this makes the difference between confident, safe behavior and hesitant, unreliable driving. That is why we developed SmartNav, a type of positioning technology designed for ‘urban canyons’,” explained Mohamadi.
    Almost down to the centimetre
    Not only are the satellite signals disrupted down between the tall buildings, but even the signals that arrive intact do not have sufficient precision.

    In order to solve this problem, the researchers have combined several different technologies to correct the signal. The result is a computer program that can be integrated into the navigation system of autonomous vehicles.
    To achieve this, they received help from a new Google service, but before we go any further, it might be helpful to know how GPS works:
    GPS – the Global Positioning System – comprises a constellation of satellites orbiting the Earth. The satellites send out signals using radio waves, which are picked up by a GPS receiver. When the receiver gets these signals from at least four satellites, it is able to calculate its position.
    The signal consists of a message with a code indicating the satellite’s position and the exact time the signal was transmitted – like a text message from the satellite.
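    To make the time-of-flight idea concrete, here is a minimal, self-contained sketch of the position calculation: with pseudoranges from at least four satellites, the receiver solves for its three coordinates plus its own clock error. The satellite coordinates, the 85 m clock bias and the bare-bones Gauss-Newton solver are all illustrative assumptions, not part of the NTNU system described here.

    ```python
    import numpy as np

    def solve_position(sat_pos, pseudoranges, iters=10):
        """Gauss-Newton solve for receiver position (x, y, z) and clock bias (metres).
        sat_pos: (N, 3) satellite coordinates in metres, N >= 4.
        pseudoranges: (N,) measured ranges in metres (they include the clock bias)."""
        x = np.zeros(4)  # initial guess: origin, zero clock bias
        for _ in range(iters):
            geom = np.linalg.norm(sat_pos - x[:3], axis=1)
            predicted = geom + x[3]
            residual = pseudoranges - predicted
            # Jacobian: unit vectors from receiver towards satellites, plus a clock column
            J = np.hstack([-(sat_pos - x[:3]) / geom[:, None], np.ones((len(geom), 1))])
            x += np.linalg.lstsq(J, residual, rcond=None)[0]
        return x[:3], x[3]

    # Hypothetical satellite positions (m) and a made-up receiver position, for illustration
    sats = np.array([[15600e3, 7540e3, 20140e3],
                     [18760e3, 2750e3, 18610e3],
                     [17610e3, 14630e3, 13480e3],
                     [19170e3, 610e3, 18390e3]])
    true_pos = np.array([1113e3, 4556e3, 4344e3])
    ranges = np.linalg.norm(sats - true_pos, axis=1) + 85.0  # 85 m receiver clock bias
    pos, bias = solve_position(sats, ranges)
    print(pos, bias)
    ```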
    Replacing the code with the wave
    It is this code that often becomes incorrect when the signal bounces around between buildings in a city. The first solution the NTNU researchers studied was dropping the code altogether. Instead, information about the radio wave can be used.
    Is the wave on its way up or down in its oscillation when it reaches the receiver? This is called the carrier phase of the wave.
    “Using only the carrier phase can provide very high accuracy, but it takes time, which is not very practical when the receiver is moving,” said Mohamadi.
    The problem is that you have to stay still until the calculation is good enough – not just for a split second, but for several minutes.
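    A small sketch of why the carrier phase is both very precise and slow to exploit (the numbers are illustrative): the receiver can measure the fraction of a roughly 19 cm carrier wavelength very accurately, but it does not know how many whole wavelengths separate it from the satellite, and resolving that integer ambiguity is what forces the wait.

    ```python
    # Illustrative: carrier-phase range = (integer cycles + measured fraction) * wavelength.
    L1_WAVELENGTH = 0.1903  # metres, GPS L1 carrier (~1575.42 MHz)

    def carrier_phase_range(n_cycles: int, fractional_phase: float) -> float:
        """Range implied by a carrier-phase measurement once the unknown
        integer number of cycles (the 'ambiguity') has been resolved."""
        return (n_cycles + fractional_phase) * L1_WAVELENGTH

    # The fractional phase can be measured to a tiny part of a cycle, i.e. millimetres...
    precise = carrier_phase_range(105_263_158, 0.42)
    # ...but getting the integer wrong by a single cycle shifts the range by ~19 cm.
    off_by_one = carrier_phase_range(105_263_157, 0.42)
    print(f"{precise - off_by_one:.4f} m")  # ~0.19 m
    ```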
    However, there are other ways to improve a GPS signal. The user can use a service that corrects the signal using base stations, a technique known as RTK (Real-Time Kinematic).
    RTK works fine as long as the user is in the vicinity of one of these stations. This solution, however, is expensive and intended for professional users.
    An alternative approach is PPP-RTK (Precise Point Positioning – Real-Time Kinematic), which combines precise corrections with satellite signals. The European Galileo system now supports this by broadcasting its corrections free of charge.
    But there is even more help available.
    Google and the wrong-side-of-the-street problem
    While the researchers in Trondheim were working on finding better solutions, Google launched a new service for its Android customers.
    Imagine you are planning a holiday to, say, London. You open Google Maps on your tablet. You then enter the address of your hotel and you can immediately zoom in on the street environment, study the hotel’s façade and the height of the surrounding buildings.
    Google now has these types of 3D models of buildings in almost 4,000 cities around the world. The company is using these models to predict how satellite signals will be reflected between the buildings. This is how they aim to solve the problem of the map app showing you on the wrong side of the street, for example when you are trying to find your way back to your hotel.
    “They combine data from sensors, Wi-Fi, mobile networks and 3D building models to produce smooth position estimates that can withstand errors caused by reflections,” Mohamadi said.
    Precision you can rely on
    The researchers were now able to combine all these different correction systems with algorithms they had developed themselves. When they tested it in the streets of Trondheim, they achieved an accuracy that was better than ten centimeters 90 percent of the time.
    The researchers say this provides precision that can be relied upon in cities.
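    A minimal sketch of how a figure like “better than ten centimeters 90 percent of the time” can be computed from a test drive (the error values below are simulated, not the Trondheim data): log the horizontal error of every position fix against a reference trajectory and take the 90th percentile.

    ```python
    import numpy as np

    def percentile_accuracy(errors_m, percentile=90):
        """Return the error bound (metres) that the given share of fixes stays under."""
        return float(np.percentile(errors_m, percentile))

    # Hypothetical horizontal errors (m) logged during a test drive
    errors = np.abs(np.random.default_rng(0).normal(0.0, 0.06, size=5000))
    print(f"90% of fixes within {percentile_accuracy(errors):.3f} m")
    ```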
    The use of PPP-RTK will also make the technology accessible to the general public because it is a relatively affordable service.
    “PPP-RTK reduces the need for dense networks of local base stations and expensive subscriptions, enabling cheap, large-scale implementation on mass-market receivers,” concluded Mohamadi.

  • in

    Scientists suggest the brain may work best with 7 senses, not just 5

    Skoltech scientists have devised a mathematical model of memory. By analyzing its new model, the team came to surprising conclusions that could prove useful for robot design, artificial intelligence, and for better understanding of human memory. Published in Scientific Reports, the study suggests there may be an optimal number of senses — if so, those of us with five senses could use a couple more!
    “Our conclusion is of course highly speculative in application to human senses, although you never know: It could be that humans of the future would evolve a sense of radiation or magnetic field. But in any case, our findings may be of practical importance for robotics and the theory of artificial intelligence,” said study co-author Professor Nikolay Brilliantov of Skoltech AI. “It appears that when each concept retained in memory is characterized in terms of seven features — as opposed to, say, five or eight — the number of distinct objects held in memory is maximized.”
    In line with a well-established approach, which originated in the early 20th century, the team models the fundamental building blocks of memory: the memory “engrams.” An engram can be viewed as a sparse ensemble of neurons across multiple regions in the brain that fire together. The conceptual content of an engram is an ideal abstract object characterized with regard to multiple features. In the context of human memory, the features correspond to sensory inputs, so that the notion of a banana would match up with a visual image, a smell, the taste of a banana, and so on. This results in a five-dimensional object that exists and evolves in a five-dimensional space populated by all the other concepts retained in memory.
    The evolution of engrams refers to concepts becoming more focused or blurred with time, depending on how often the engrams get activated by a stimulus acting from the outer world via the senses, triggering the memory of the respective object. This models learning and forgetting as a result of interaction with the environment.
    “We have mathematically demonstrated that the engrams in the conceptual space tend to evolve toward a steady state, which means that after some transient period, a ‘mature’ distribution of engrams emerges, which then persists in time,” Brilliantov commented. “As we consider the ultimate capacity of a conceptual space of a given number of dimensions, we somewhat surprisingly find that the number of distinct engrams stored in memory in the steady state is the greatest for a concept space of seven dimensions. Hence the seven senses claim.”
    In other words, let the objects that exist out there in the world be described by a finite number of features corresponding to the dimensions of some conceptual space. Suppose that we want to maximize the capacity of the conceptual space expressed as the number of distinct concepts associated with these objects. The greater the capacity of the conceptual space, the deeper the overall understanding of the world. It turns out that the maximum is attained when the dimension of the conceptual space is seven. From this the researchers conclude that seven is the optimal number of senses.
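    Schematically, the optimisation is a sweep over candidate dimensions followed by an argmax. The sketch below uses a made-up stand-in for the capacity curve (the real curve comes from the paper’s engram model, which is not reproduced here); only the sweep-and-maximise logic is the point.

    ```python
    import math

    def toy_capacity(d: int) -> float:
        """Placeholder capacity-vs-dimension curve, NOT the paper's model.
        It rises and then falls with dimension, so it has a single peak."""
        return d ** 3 * math.exp(-d / 2.33)

    def optimal_dimension(capacity, d_max: int = 20) -> int:
        """Sweep candidate dimensions and return the one with the largest capacity."""
        return max(range(1, d_max + 1), key=capacity)

    print(optimal_dimension(toy_capacity))  # the arbitrary stand-in happens to peak at 7
    ```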
    According to the researchers, this number does not depend on the details of the model — the properties of the conceptual space and the stimuli providing the sense impressions. The number seven appears to be a robust and persistent feature of memory engrams as such. One caveat is that multiple engrams of differing sizes existing around a common center are deemed to represent similar concepts and are therefore treated as one when calculating memory capacity.
    The memory of humans and other living beings is an enigmatic phenomenon tied to the property of consciousness, among other things. Advancing the theoretical models of memory will be instrumental to gaining new insights into the human mind and recreating humanlike memory in AI agents.

  • in

    Scientists unlock the quantum magic hidden in diamonds

    Researchers at the Hebrew University of Jerusalem and the Humboldt University in Berlin have developed a way to capture nearly all the light emitted from tiny diamond defects known as color centers. By placing nanodiamonds into specially designed hybrid nanoantennas with extreme precision, the team achieved record photon collection at room temperature — a necessary step for quantum technologies such as quantum sensors and quantum-secured communications. The article was selected as a Featured Article in APL Quantum.
    Diamonds have long been prized for their sparkle, but researchers at the Hebrew University of Jerusalem, in collaboration with colleagues from the Humboldt University in Berlin, are showing that they can achieve an almost optimal “sparkle,” a key requirement for using diamonds in quantum technology as well. The team has come close to perfectly collecting the faintest of light signals, single photons, from tiny diamond defects known as nitrogen-vacancy (NV) centers, which are vital for developing next-generation quantum computers, sensors, and communication networks.
    NV centers are microscopic imperfections in the diamond structure that can act like quantum “light switches.” They emit single particles of light (photons) that carry quantum information. The problem, until now, has been that much of this light is lost in all directions, making it hard to capture and use.
    The Hebrew University team, together with their research partners from Berlin, solved this challenge by embedding nanodiamonds containing NV centers into specially designed hybrid nanoantennas. These antennas, built from layers of metal and dielectric materials in a precise bullseye pattern, guide the light in a well-defined direction instead of letting it scatter. Using ultra-precise positioning, the researchers placed the nanodiamonds exactly at the antenna center — within a few billionths of a meter.
    Featured in APL Quantum, the results are significant: the new system can collect up to 80% of the emitted photons at room temperature. This is a dramatic improvement compared to previous attempts, where only a small fraction of the light was usable.
    Prof. Rapaport explained, “Our approach brings us much closer to practical quantum devices. By making photon collection more efficient, we’re opening the door to technologies such as secure quantum communication and ultra-sensitive sensors.”
    Dr. Lubotzky added, “What excites us is that this works in a simple, chip-based design and at room temperature. That means it can be integrated into real-world systems much more easily than before.”
    The research demonstrates not just clever engineering, but also the potential of diamonds beyond jewelry. With quantum technologies racing toward real-world applications, this advance could help pave the way for faster, more reliable quantum networks.

  • in

    A strange quantum metal just rewrote the rules of electricity

    Quantum metals are metals where quantum effects — behaviors that normally only matter at atomic scales — become powerful enough to control the metal’s macroscopic electrical properties.
    Researchers in Japan have explained how electricity behaves in a special group of quantum metals called kagome metals. The study is the first to show how weak magnetic fields reverse tiny loop electrical currents inside these metals. This switching changes the material’s macroscopic electrical properties and reverses the direction in which current flows more easily, a property known as the diode effect.
    Notably, the research team found that quantum geometric effects amplify this switching by about 100 times. The study, published in Proceedings of the National Academy of Sciences, provides the theoretical foundation that could eventually lead to new electronic devices controlled by simple magnets.
    Scientists had observed this strange magnetic switching behavior in experiments since around 2020 but could not explain why it happened and why the effect was so strong. This study provides the first theoretical framework explaining both.
    When frustrated electrons cannot settle
    The name “kagome metal” comes from the Japanese word “kagome,” meaning “basket eyes” or “basket pattern,” which refers to a traditional bamboo weaving technique that creates interlocking triangular designs.
    These metals are special because their atoms are arranged in this unique basket-weave pattern that creates what scientists call “geometric frustration” — electrons cannot settle into simple, organized patterns and are forced into more complex quantum states that include the loop currents.

    When the loop currents inside these metals change direction, the electrical behavior of the metal changes. The research team showed that loop currents and wave-like electron patterns (charge density waves) work together to break fundamental symmetries in the electronic structure. They also discovered that quantum geometric effects — unique behaviors that only occur at the smallest scales of matter — significantly enhance the switching effect.
    “Every time we saw the magnetic switching, we knew something extraordinary was happening, but we couldn’t explain why,” Hiroshi Kontani, senior author and professor from the Graduate School of Science at Nagoya University, recalled.
    “Kagome metals have built-in amplifiers that make the quantum effects much stronger than they would be in ordinary metals. The combination of their crystal structure and electronic behavior allows them to break certain core rules of physics simultaneously, a phenomenon known as spontaneous symmetry breaking. This is extremely rare in nature and explains why the effect is so powerful.”
    The research method involved cooling the metals to extremely low temperatures of about -190°C. At this temperature, the kagome metal naturally develops quantum states where electrons form circulating currents and create wave-like patterns throughout the material. When scientists apply weak magnetic fields, they reverse the direction these currents spin, and as a result, the preferred direction of current flow in the metal changes.
    New materials meet new theory
    This breakthrough in quantum physics was not possible until recently because kagome metals were only discovered around 2020. While scientists quickly observed the mysterious electrical switching effect in experiments, they could not explain how it worked.
    The quantum interactions involved are very complex and require advanced understanding of how loop currents, quantum geometry, and magnetic fields work together — knowledge that has only developed in recent years. These effects are also very sensitive to impurities, strain, and external conditions, which makes them difficult to study.
    “This discovery happened because three things came together at just the right time: we finally had the new materials, the advanced theories to understand them, and the high-tech equipment to study them properly. None of these existed together until very recently, which is why no one could solve this puzzle before now,” Professor Kontani added.
    “The magnetic control of electrical properties in these metals could potentially enable new types of magnetic memory devices or ultra-sensitive sensors. Our study provides the fundamental understanding needed to begin developing the next generation of quantum-controlled technology,” he said.

  • in

    Physicists just built a quantum lie detector. It works

    Can you prove whether a large quantum system truly behaves according to the weird and wonderful rules of quantum mechanics — or if it just looks like it does? In a groundbreaking study, physicists from Leiden, Beijing and Hangzhou found the answer to this question.
    You could call it a ‘quantum lie detector’: the Bell test, devised by the famous physicist John Bell. The test shows whether a machine, such as a quantum computer, is truly using quantum effects or merely mimicking them.
    As quantum technologies become more mature, ever more stringent tests of quantumness become necessary. In this new study, the researchers took things to the next level, testing Bell correlations in systems with up to 73 qubits — the basic building blocks of a quantum computer.
    The study involved a global team: theoretical physicists Jordi Tura and Patrick Emonts and PhD candidate Mengyao Hu from Leiden University, together with colleagues from Tsinghua University (Beijing) and experimental physicists from Zhejiang University (Hangzhou).
    The world of quantum physics
    Quantum mechanics is the science that explains how the tiniest particles in the universe — like atoms and electrons — behave. It’s a world full of strange and counterintuitive ideas.
    One of those is quantum nonlocality, where particles appear to instantly affect each other, even when far apart. Although it sounds strange, it’s a real effect, and it won the Nobel Prize in Physics in 2022. This research is focused on proving the occurrence of nonlocal correlation, also known as Bell correlations.

    Clever experimenting
    It was an extremely ambitious plan, but the team’s well-optimized strategy made all the difference. Instead of trying to directly measure the complex Bell correlations, they focused on something quantum devices are already good at: minimizing energy.
    And it paid off. The team created a special quantum state using 73 qubits in a superconducting quantum processor and measured energies far below what would be possible in a classical system. The difference was striking — 48 standard deviations — making it almost impossible that the result was due to chance.
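    The “48 standard deviations” statement boils down to simple arithmetic: the gap between the classical energy bound and the measured mean energy, divided by the measurement’s standard error. The numbers below are invented purely to illustrate the calculation.

    ```python
    def sigma_separation(classical_bound: float, measured_mean: float, std_error: float) -> float:
        """How many standard errors the measured energy lies below the classical bound."""
        return (classical_bound - measured_mean) / std_error

    # Hypothetical values in arbitrary energy units, for illustration only
    print(f"{sigma_separation(classical_bound=-10.0, measured_mean=-12.4, std_error=0.05):.0f} sigma")
    ```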
    But the team didn’t stop there. They went on to certify a rare and more demanding type of nonlocality – known as genuine multipartite Bell correlations. In this kind of quantum correlation, all qubits in the system must be involved, making it much harder to generate — and even harder to verify. Remarkably, the researchers succeeded in preparing a whole series of low-energy states that passed this test for up to 24 qubits, confirming these special correlations efficiently.
    This result shows that quantum computers are not just getting bigger — they are also becoming better at displaying and proving truly quantum behaviour.
    Why this matters
    This study proves that it’s possible to certify deep quantum behaviour in large, complex systems — something never done at this scale before. It’s a big step toward making sure quantum computers are truly quantum.
    These insights are more than just theoretical. Understanding and controlling Bell correlations could improve quantum communication, make cryptography more secure, and help develop new quantum algorithms.

  • in

    Scientists accidentally create a tiny “rainbow chip” that could supercharge the internet

    A few years ago, researchers in Michal Lipson’s lab noticed something remarkable.
    They were working on a project to improve LiDAR, a technology that uses lightwaves to measure distance. The lab was designing high-power chips that could produce brighter beams of light.
    “As we sent more and more power through the chip, we noticed that it was creating what we call a frequency comb,” says Andres Gil-Molina, a former postdoctoral researcher in Lipson’s lab.
    A frequency comb is a special type of light that contains many colors lined up next to each other in an orderly pattern, kind of like a rainbow. Dozens of colors — or frequencies of light — shine brightly, while the gaps between them remain dark. When you look at a frequency comb on a spectrogram, these bright frequencies appear as spikes, or teeth on a comb. This offers the tremendous opportunity of sending dozens of streams of data simultaneously. Because the different colors of light don’t interfere with each other, each tooth acts as its own channel.
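    A small sketch of that structure (the centre frequency, tooth spacing and tooth count are illustrative, not taken from the paper): comb teeth sit at evenly spaced optical frequencies, and each tooth can be treated as an independent data channel.

    ```python
    # Illustrative comb: evenly spaced "teeth", each usable as a data channel.
    center_thz = 193.4       # near 1550 nm, a typical telecom band
    spacing_ghz = 100.0      # hypothetical tooth spacing
    num_teeth = 32           # hypothetical number of usable teeth

    teeth_thz = [center_thz + (i - num_teeth // 2) * spacing_ghz / 1000.0
                 for i in range(num_teeth)]
    for ch, f in enumerate(teeth_thz[:3]):
        print(f"channel {ch}: {f:.3f} THz")
    print(f"... {num_teeth} channels in total, one per comb tooth")
    ```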
    Today, creating a powerful frequency comb requires large and expensive lasers and amplifiers. In their new paper in Nature Photonics, Lipson, Eugene Higgins Professor of Electrical Engineering and professor of Applied Physics, and her collaborators show how to do the same thing on a single chip.
    “Data centers have created tremendous demand for powerful and efficient sources of light that contain many wavelengths,” says Gil-Molina, who is now a principal engineer at Xscape Photonics. “The technology we’ve developed takes a very powerful laser and turns it into dozens of clean, high-power channels on a chip. That means you can replace racks of individual lasers with one compact device, cutting cost, saving space, and opening the door to much faster, more energy-efficient systems.”
    “This research marks another milestone in our mission to advance silicon photonics,” Lipson said. “As this technology becomes increasingly central to critical infrastructure and our daily lives, this type of progress is essential to ensuring that data centers are as efficient as possible.”
    Cleaning up messy light

    The breakthrough started with a simple question: What’s the most powerful laser we can put on a chip?
    The team chose a type called a multimode laser diode, which is used widely in applications like medical devices and laser cutting tools. These lasers can produce enormous amounts of light, but the beam is “messy,” which makes it hard to use for precise applications.
    Integrating such a laser into a silicon photonics chip, where the light pathways are just a few microns — even hundreds of nanometers — wide, required careful engineering.
    “We used something called a locking mechanism to purify this powerful but very noisy source of light,” Gil-Molina says. The method relies on silicon photonics to reshape and clean up the laser’s output, producing a much cleaner, more stable beam, a property scientists call high coherence.
    Once the light is purified, the chip’s nonlinear optical properties take over, splitting that single powerful beam into dozens of evenly spaced colors, a defining feature of a frequency comb. The result is a compact, high-efficiency light source that combines the raw power of an industrial laser with the precision and stability needed for advanced communications and sensing.
    Why it matters now
    The timing for this breakthrough is no accident. With the explosive growth of artificial intelligence, the infrastructure inside data centers is straining to move information fast enough, for example, between processors and memory. State-of-the-art data centers are already using fiber optic links to transport data, but most of these still rely on single-wavelength lasers.

    Frequency combs change that. Instead of one beam carrying one data stream, dozens of beams can run in parallel through the same fiber. That’s the principle behind wavelength-division multiplexing (WDM), the technology that turned the internet into a global high-speed network in the late 1990s.
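    The arithmetic behind that parallelism is straightforward; the channel count and per-channel rate below are hypothetical, not figures from the paper.

    ```python
    def wdm_aggregate_gbps(channels: int, rate_per_channel_gbps: float) -> float:
        """Total throughput of a wavelength-division-multiplexed link."""
        return channels * rate_per_channel_gbps

    # Hypothetical example: 32 comb teeth, each carrying 100 Gb/s
    print(f"{wdm_aggregate_gbps(32, 100.0):,.0f} Gb/s on a single fibre")
    ```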
    By making high-power, multi-wavelength combs small enough to fit directly on a chip, Lipson’s team has made it possible to bring this capability into the most compact, cost-sensitive parts of modern computing systems. Beyond data centers, the same chips could enable portable spectrometers, ultra-precise optical clocks, compact quantum devices, and even advanced LiDAR systems.
    “This is about bringing lab-grade light sources into real-world devices,” says Gil-Molina. “If you can make them powerful, efficient, and small enough, you can put them almost anywhere.”

  • in

    These little robots literally walk on water

    Imagine a tiny robot, no bigger than a leaf, gliding across a pond’s surface like a water strider. One day, devices like this could track pollutants, collect water samples or scout flooded areas too risky for people.
    Baoxing Xu, professor of mechanical and aerospace engineering at the University of Virginia’s School of Engineering and Applied Science, is pioneering a way to build them. In a new study published in Science Advances, Xu’s research introduces HydroSpread, a first-of-its-kind fabrication method that has great potential to impact the growing field of soft robotics. This innovation allows scientists to make soft, floating devices directly on water, a technology that could be utilized in fields from health care to electronics to environmental monitoring.
    Until now, the thin, flexible films used in soft robotics had to be manufactured on rigid surfaces like glass and then peeled off and transferred to water, a delicate process that often caused films to tear. HydroSpread sidesteps this issue by letting liquid itself serve as the “workbench.” Droplets of liquid polymer naturally spread into ultrathin, uniform sheets on the water’s surface. With a finely tuned laser, Xu’s team can then carve these sheets into complex patterns — circles, strips, even the UVA logo — with remarkable precision.
    Using this approach, the researchers built two insect-like prototypes: HydroFlexor, which paddles across the surface using fin-like motions, and HydroBuckler, which “walks” forward on buckling legs, inspired by water striders.
    In the lab, the team powered these devices with an overhead infrared heater. As the films warmed, their layered structure bent or buckled, creating paddling or walking motions. By cycling the heat on and off, the devices could adjust their speed and even turn — proof that controlled, repeatable movement is possible. Future versions could be designed to respond to sunlight, magnetic fields or tiny embedded heaters, opening the door to autonomous soft robots that can move and adapt on their own.
    “Fabricating the film directly on liquid gives us an unprecedented level of integration and precision,” Xu said. “Instead of building on a rigid surface and then transferring the device, we let the liquid do the work to provide a perfectly smooth platform, reducing failure at every step.”
    The potential reaches beyond soft robots. By making it easier to form delicate films without damaging them, HydroSpread could open new possibilities for creating wearable medical sensors, flexible electronics and environmental monitors — tools that need to be thin, soft and durable in settings where traditional rigid materials don’t work.
    About the Researcher
    Baoxing Xu is a nationally recognized expert in mechanics, compliant structures and bioinspired engineering. His lab at UVA Engineering focuses on translating strategies from nature — such as the delicate mechanics of insect locomotion — into resilient, functional devices for human use.
    This work, supported by the National Science Foundation and 4-VA, was carried out within UVA’s Department of Mechanical and Aerospace Engineering. Graduate and undergraduate researchers in Xu’s group played a central role in the experiments, gaining hands-on experience with state-of-the-art fabrication and robotics techniques.

  • in

    Scientists finally found the “dark matter” of electronics

    In a world-first, researchers from the Femtosecond Spectroscopy Unit at the Okinawa Institute of Science and Technology (OIST) have directly observed the evolution of the elusive dark excitons in atomically thin materials, laying the foundation for new breakthroughs in both classical and quantum information technologies. Their findings have been published in Nature Communications. Professor Keshav Dani, head of the unit, highlights the significance: “Dark excitons have great potential as information carriers, because they are inherently less likely to interact with light, and hence less prone to degradation of their quantum properties. However, this invisibility also makes them very challenging to study and manipulate. Building on a previous breakthrough at OIST in 2020, we have opened a route to the creation, observation, and manipulation of dark excitons.”
    “In the general field of electronics, one manipulates electron charge to process information,” explains Xing Zhu, co-first author and PhD student in the unit. “In the field of spintronics, we exploit the spin of electrons to carry information. Going further, in valleytronics, the crystal structure of unique materials enables us to encode information into distinct momentum states of the electrons, known as valleys.” The ability to use the valley dimension of dark excitons to carry information positions them as promising candidates for quantum technologies. Dark excitons are by nature more resistant to environmental factors like thermal background than the current generation of qubits, potentially requiring less extreme cooling and making them less prone to decoherence, where the unique quantum state breaks down.
    Defining landscapes of energy with bright and dark excitons
    Over the past decade, progress has been made in the development of a class of atomically thin semiconducting materials known as TMDs (transition metal dichalcogenides). As with all semiconductors, atoms in TMDs are aligned in a crystal lattice, which confines electrons to a specific level (or band) of energy, such as the valence band. When exposed to light, the negatively charged electrons are excited to a higher energy state – the conduction band – leaving behind a positively charged hole in the valence band. The electrons and holes are bound together by electrostatic attraction, forming hydrogen-like quasiparticles called excitons. If certain quantum properties of the electron and hole match, i.e. they have the same spin configuration and they inhabit the same ‘valley’ in momentum space (the energy minima that electrons and holes can occupy in the atomic crystal structure), the two recombine within a picosecond (1 ps = 10⁻¹² seconds), emitting light in the process. These are ‘bright’ excitons.
    However, if the quantum properties of the electron and hole do not match up, the electron and hole are forbidden from recombining on their own and do not emit light. These are characterized as ‘dark’ excitons. “There are two ‘species’ of dark excitons,” explains Dr. David Bacon, co-first author, who is now at University College London, “momentum-dark and spin-dark, depending on where the properties of electron and hole are in conflict. The mismatch in properties not only prevents immediate recombination, allowing them to exist up to several nanoseconds (1 ns = 10⁻⁹ seconds, a much more useful timescale), but also makes dark excitons more isolated from environmental interactions.”
    “The unique atomic symmetry of TMDs means that when exposed to a state of light with a circular polarization, one can selectively create bright excitons only in a specific valley. This is the fundamental principle of valleytronics. However, bright excitons rapidly turn into numerous dark excitons that can potentially preserve the valley information. Which species of dark excitons are involved and to what degree they can sustain the valley information is unclear, but this is a key step in the pursuit of valleytronic applications,” explains Dr. Vivek Pareek, co-first author and OIST graduate who is now a Presidential Postdoctoral Fellow at the California Institute of Technology.
    Observing electrons at the femtosecond scale
    Using the world-leading TR-ARPES (time- and angle-resolved photoemission spectroscopy) setup at OIST, which includes a proprietary, table-top XUV (extreme ultraviolet) source, the team managed to track the characteristics of all excitons over time, following the creation of bright excitons in a specific valley of a TMD semiconductor, by simultaneously quantifying the momentum, spin state, and population levels of electrons and holes – properties that had never been quantified simultaneously before.
    Their findings show that within a picosecond, some bright excitons are scattered by phonons (quantized crystal lattice vibrations) into different momentum valleys, rendering them momentum-dark. Later, spin-dark excitons dominate, where electrons have flipped spin within the same valley, persisting on nanosecond scales.
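    A toy rate-equation sketch of that population flow (the rates are invented and the model is far cruder than the paper’s analysis): bright excitons either emit light or scatter into momentum-dark states within about a picosecond, and momentum-dark excitons feed longer-lived spin-dark states.

    ```python
    import numpy as np

    def exciton_populations(t_ps, k_rad=1.0, k_bright_to_mdark=2.0,
                            k_mdark_to_sdark=0.5, k_sdark_decay=0.001):
        """Toy three-level model: bright excitons either recombine radiatively or
        scatter into momentum-dark states, which then feed spin-dark states.
        All rates are in 1/ps and are illustrative, not fitted to the experiment."""
        dt = t_ps[1] - t_ps[0]
        bright, mdark, sdark = 1.0, 0.0, 0.0
        history = []
        for _ in t_ps:
            history.append((bright, mdark, sdark))
            d_bright = -(k_rad + k_bright_to_mdark) * bright
            d_mdark = k_bright_to_mdark * bright - k_mdark_to_sdark * mdark
            d_sdark = k_mdark_to_sdark * mdark - k_sdark_decay * sdark
            bright += d_bright * dt
            mdark += d_mdark * dt
            sdark += d_sdark * dt
        return np.array(history)

    t = np.linspace(0.0, 20.0, 2001)  # 20 ps window in 0.01 ps steps
    pops = exciton_populations(t)
    print("bright / momentum-dark / spin-dark at  1 ps:", np.round(pops[np.searchsorted(t, 1.0)], 3))
    print("bright / momentum-dark / spin-dark at 10 ps:", np.round(pops[np.searchsorted(t, 10.0)], 3))
    ```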
    With this, the team has overcome the fundamental challenge of how to access and track dark excitons, laying the foundation for dark valleytronics as a field. Dr. Julien Madéo of the unit summarizes: “Thanks to the sophisticated TR-ARPES setup at OIST, we have directly accessed and mapped how, and which, dark excitons keep long-lived valley information. Future developments to read out the dark excitons’ valley properties will unlock broad dark valleytronic applications across information systems.”