More stories

  • The hidden forces inside diamonds that could make tech 1,000x faster

    Understanding what happens inside a material when it is struck by ultrashort light pulses is one of the great challenges of condensed matter physics and modern photonics. A new study published in Nature Photonics and led by Politecnico di Milano reveals a previously neglected but essential ingredient: the contribution of virtual charges, charge carriers that exist only during the interaction with light but profoundly influence the material’s response.
    The research, conducted in partnership with the University of Tsukuba, the Max Planck Institute for the Structure and Dynamics of Matter, and the Institute of Photonics and Nanotechnology (Cnr-Ifn), investigated the behavior of monocrystalline diamonds subjected to light pulses lasting a few attoseconds (billionths of a billionth of a second), using an advanced technique called attosecond-scale transient reflection spectroscopy.
    By comparing experimental data with state-of-the-art numerical simulations, the researchers were able to isolate the effect of so-called virtual vertical transitions between the electronic bands of the material. This result changes our perspective on how light interacts with solids under extreme conditions, a response hitherto attributed solely to the motion of real charges.
    “Our work shows that virtual carrier excitations, which develop in a few billionths of a billionth of a second, are indispensable for correctly predicting the rapid optical response of solids,” said Matteo Lucchini, professor at the Department of Physics, senior author of the study, and associate at CNR-Ifn.
    “These results mark a key step in the development of ultra-fast technologies in electronics,” adds Rocío Borrego Varillas, researcher at CNR-IFN.
    The progress achieved offers new insights into the creation of ultra-fast optical devices, such as switches and modulators capable of operating at petahertz frequencies, a thousand times faster than current electronic devices. As this study demonstrates, that requires a deep understanding of the behavior of both real and virtual charges.
    Research was carried out at the Attosecond Research Center (ARC) of the Politecnico di Milano, in the framework of the European and national projects ERC AuDACE (Attosecond Dynamics in AdvanCed matErials) and MIUR FARE PHorTUNA (PHase Transition Ultrafast dyNAmics in Mott insulators).

  • Black hole discovery confirms Einstein and Hawking were right

    A decade ago, scientists first detected ripples in the fabric of space-time, called gravitational waves, from the collision of two black holes. Now, thanks to improved technology and a bit of luck, a newly detected black hole merger is providing the clearest evidence yet of how black holes work — and, in the process, offering long-sought confirmation of fundamental predictions by Albert Einstein and Stephen Hawking.
    The new measurements were made by the Laser Interferometer Gravitational-Wave Observatory (LIGO), with analyses led by astrophysicists Maximiliano Isi and Will Farr of the Flatiron Institute’s Center for Computational Astrophysics in New York City. The results reveal insights into the properties of black holes and the fundamental nature of space-time, hinting at how quantum physics and Einstein’s general relativity fit together.
    “This is the clearest view yet of the nature of black holes,” says Isi, who is also an assistant professor at Columbia University. “We’ve found some of the strongest evidence yet that astrophysical black holes are the black holes predicted from Albert Einstein’s theory of general relativity.”
    The results were reported in a paper published September 10 in Physical Review Letters by the LIGO-Virgo-KAGRA Collaboration.
    For massive stars, black holes are the final stage in their evolution. Black holes are so dense that even light cannot escape their gravity. When two black holes collide, the event distorts space itself, creating ripples in space-time that fan out across the universe, like sound waves ringing out from a struck bell.
    Those space-deforming ripples, called gravitational waves, can tell scientists a great deal about the objects that created them. Just as a large iron bell makes different sounds than a smaller aluminum bell, the “sound” a black hole merger makes is specific to the properties of the black holes involved.
    Scientists can detect gravitational waves with special instruments at observatories such as LIGO in the United States, Virgo in Italy and KAGRA in Japan. These instruments carefully measure how long it takes a laser to travel a given path. As gravitational waves stretch and compress space-time, the length of the instrument, and thus the light’s travel time, changes minutely. By measuring those tiny changes with great precision, scientists can use them to determine the black holes’ characteristics.
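    The scale of those tiny changes can be sketched with rough arithmetic. The strain and arm-length figures below are typical published values for LIGO, not numbers from this article:

```python
# Rough scale of a LIGO measurement: a passing gravitational wave with
# strain h stretches an interferometer arm of length L by dL = h * L.
h = 1e-21          # typical peak strain from a black hole merger (assumed)
L = 4_000.0        # LIGO arm length in meters
c = 299_792_458.0  # speed of light in m/s

dL = h * L          # change in arm length: ~4e-18 m, far smaller than a proton
dt = 2 * dL / c     # change in the round-trip light travel time, in seconds

print(f"arm length change: {dL:.1e} m")
print(f"travel-time change: {dt:.1e} s")
```

    The arm-length change works out to a small fraction of a proton's width, which is why the instruments must be so exquisitely precise.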

    The newly reported gravitational waves were found to be created by a merger that formed a black hole with the mass of 63 suns and spinning at 100 revolutions per second. The findings come 10 years after LIGO made the first black hole merger detection. Since that landmark discovery, improvements in equipment and techniques have enabled scientists to get a much clearer look at these space-shaking events.
    “The new pair of black holes are almost twins to the historic first detection in 2015,” Isi says. “But the instruments are much better, so we’re able to analyze the signal in ways that just weren’t possible 10 years ago.”
    With these new signals, Isi and his colleagues got a complete look at the collision from the moment the black holes first careened into each other until the final reverberations as the merged black hole settled into its new state, which happened only milliseconds after first contact.
    Previously, the final reverberations were difficult to capture, as by that point, the ringing of the black hole would be very faint. As a result, scientists couldn’t separate the ringing of the collision from that of the final black hole itself.
    In 2021, Isi led a study showcasing a cutting-edge method that he, Farr and others developed to isolate certain frequencies — or ‘tones’ — using data from the 2015 black hole merger. This method proved powerful, but the 2015 measurements weren’t clear enough to confirm key predictions about black holes. With the new, more precise measurements, though, Isi and his colleagues were more confident they had successfully isolated the milliseconds-long signal of the final, settled black hole. This enabled more unambiguous tests of the nature of black holes.
    “Ten milliseconds sounds really short, but our instruments are so much better now that this is enough time for us to really analyze the ringing of the final black hole,” Isi says. “With this new detection, we have an exquisitely detailed view of the signal both before and after the black hole merger.”
    The new observations allowed scientists to test a key conjecture dating back decades that black holes are fundamentally simple objects. In 1963, physicist Roy Kerr used Einstein’s general relativity to mathematically describe black holes with one equation. The equation showed that astrophysical black holes can be described by just two characteristics: spin and mass. With the new, higher-quality data, the scientists were able to measure the frequency and duration of the ringing of the merged black hole more precisely than ever before. This allowed them to see that, indeed, the merged black hole is a simple object, described by just its mass and spin.
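    A minimal sketch of what "just mass and spin" implies: the frequency of the remnant's ringing follows from those two numbers alone. This uses the widely cited Berti-Cardoso-Will fitting formula for the dominant quasinormal mode from the general-relativity literature; the mass and spin values are illustrative, not the paper's measurements.

```python
import math

# Dominant (l = m = 2) ringdown frequency of a Kerr black hole, via the
# Berti-Cardoso-Will fit: M*omega ~ 1.5251 - 1.1568 * (1 - chi)**0.1292.
# The tone is fully fixed by two numbers: mass M and dimensionless spin chi.
M_SUN_SECONDS = 4.925e-6  # one solar mass in geometric units (G*M_sun/c^3)

def ringdown_frequency_hz(mass_msun: float, chi: float) -> float:
    m_omega = 1.5251 - 1.1568 * (1.0 - chi) ** 0.1292  # dimensionless frequency
    return m_omega / (2.0 * math.pi * mass_msun * M_SUN_SECONDS)

# Illustrative values close to those reported for the merged black hole:
f = ringdown_frequency_hz(63.0, 0.68)
print(f"ringdown frequency: {f:.0f} Hz")  # a 'tone' in the audible range
```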

    The observations were also used to test a foundational idea proposed by Stephen Hawking called Hawking’s area theorem. It states that the size of a black hole’s event horizon — the boundary past which nothing, not even light, can escape — can only ever grow. Testing whether this theorem applies requires exceptional measurements of black holes before and after their merger. Following the first black hole merger detection in 2015, Hawking wondered if the merger signature could be used to confirm his theorem. At the time, no one thought it was possible.
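    The area theorem can be illustrated numerically with the standard Kerr horizon-area formula. The masses and spins below are illustrative, not the collaboration's measured values:

```python
import math

def kerr_horizon_area(mass: float, chi: float) -> float:
    """Event-horizon area of a Kerr black hole in geometric units (G = c = 1):
    A = 8 * pi * M**2 * (1 + sqrt(1 - chi**2)), with chi the dimensionless spin."""
    return 8.0 * math.pi * mass**2 * (1.0 + math.sqrt(1.0 - chi**2))

# Illustrative merger: two slowly spinning ~33-solar-mass black holes form a
# rapidly spinning 63-solar-mass remnant (the missing mass is radiated away
# as gravitational waves).
area_before = 2 * kerr_horizon_area(33.0, 0.1)
area_after = kerr_horizon_area(63.0, 0.68)

# Hawking's area theorem: the total horizon area can only ever grow.
print(area_after > area_before)
```

    Even though the remnant weighs less than the two progenitors combined, its horizon area still exceeds the sum of theirs, which is exactly what the theorem demands.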
    By 2019, a year after Hawking’s death, methods had improved enough that a first tentative confirmation came using techniques developed by Isi, Farr, and colleagues. With four times better resolution, the new data gives scientists much more confidence that Hawking’s theorem is correct.
    In confirming Hawking’s theorem, the results also hint at connections to the second law of thermodynamics. This law states that a property that measures a system’s disorder, known as entropy, must increase, or at least remain constant, over time. Understanding the thermodynamics of black holes could lead to advances in other areas of physics, including quantum gravity, which aims to merge general relativity with quantum physics.
    “It’s really profound that the size of a black hole’s event horizon behaves like entropy,” Isi says. “It has very deep theoretical implications and means that some aspects of black holes can be used to mathematically probe the true nature of space and time.”
    Many suspect that future black hole merger detections will only reveal more about the nature of these objects. In the next decade, detectors are expected to become 10 times more sensitive than today, allowing for more rigorous tests of black hole characteristics.
    “Listening to the tones emitted by these black holes is our best hope for learning about the properties of the extreme space-times they produce,” says Farr, who is also a professor at Stony Brook University. “And as we build more and better gravitational wave detectors, the precision will continue to improve.”
    “For so long this field has been purely mathematical and theoretical speculation,” Isi says. “But now we’re in a position of actually seeing these amazing processes in action, which highlights how much progress there’s been — and will continue to be — in this field.”

  • Quantum chips just proved they’re ready for the real world

    UNSW Sydney nano-tech startup Diraq has shown its quantum chips aren’t just lab-perfect prototypes – they also hold up in real-world production, maintaining the 99% accuracy needed to make quantum computers viable.
    Diraq, a pioneer of silicon-based quantum computing, achieved this feat by teaming up with European nanoelectronics institute Interuniversity Microelectronics Centre (imec). Together they demonstrated the chips worked just as reliably coming off a semiconductor chip fabrication line as they do in the experimental conditions of a research lab at UNSW.
    UNSW Engineering Professor Andrew Dzurak, who is the founder and CEO of Diraq, said up until now it hadn’t been proven that the processors’ lab-based fidelity – meaning accuracy in the quantum computing world – could be translated to a manufacturing setting.
    “Now it’s clear that Diraq’s chips are fully compatible with manufacturing processes that have been around for decades.”
    In a paper published on Sept. 24 in Nature, the teams report that Diraq-designed, imec-fabricated devices achieved over 99% fidelity in operations involving two quantum bits – or ‘qubits’. The result is a crucial step towards Diraq’s quantum processors achieving utility scale, the point at which a quantum computer’s commercial value exceeds its operational cost. This is the key metric set out in the Quantum Benchmarking Initiative, a program run by the United States’ Defense Advanced Research Projects Agency (DARPA) to gauge whether Diraq and 17 other companies can reach this goal.
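    A rough sense of why fidelity thresholds matter: without error correction, independent gate errors compound multiplicatively, so even 99% per-gate fidelity fades quickly over long circuits. A toy calculation (the gate counts are illustrative):

```python
# Without error correction, independent gate errors compound multiplicatively:
# the chance that an n-gate circuit sees no error is roughly fidelity ** n.
def survival_probability(fidelity: float, n_gates: int) -> float:
    return fidelity ** n_gates

for n in (10, 100, 1000):
    p = survival_probability(0.99, n)
    print(f"{n:>5} gates at 99% per-gate fidelity -> {p:.2%} chance of no error")
```

    This is why utility scale requires both per-gate fidelities above the fault-tolerance threshold and error correction spread across very many physical qubits.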
    Utility-scale quantum computers are expected to be able to solve problems that are out of reach of the most advanced high-performance computers available today. But breaching the utility-scale threshold requires storing and manipulating quantum information in millions of qubits to overcome the errors associated with the fragile quantum state.
    “Achieving utility scale in quantum computing hinges on finding a commercially viable way to produce high-fidelity quantum bits at scale,” said Prof. Dzurak.

    “Diraq’s collaboration with imec makes it clear that silicon-based quantum computers can be built by leveraging the mature semiconductor industry, which opens a cost-effective pathway to chips containing millions of qubits while still maximizing fidelity.”
    Silicon is emerging as the front-runner among materials being explored for quantum computers – it can pack millions of qubits onto a single chip and works seamlessly with today’s trillion-dollar microchip industry, making use of the methods that put billions of transistors onto modern computer chips.
    Diraq has previously shown that qubits fabricated in an academic laboratory can achieve high fidelity when performing two-qubit logic gates, the basic building block of future quantum computers. However, it was unclear whether this fidelity could be reproduced in qubits manufactured in a semiconductor foundry environment.
    “Our new findings demonstrate that Diraq’s silicon qubits can be fabricated using processes that are widely used in semiconductor foundries, meeting the threshold for fault tolerance in a way that is cost-effective and industry-compatible,” Prof. Dzurak said.
    Diraq and imec previously showed that qubits manufactured using CMOS processes – the same technology used to build everyday computer chips – could perform single-qubit operations with 99.9% accuracy. But more complex operations using two qubits that are critical to achieving utility scale had not yet been demonstrated.
    “This latest achievement clears the way for the development of a fully fault-tolerant, functional quantum computer that is more cost effective than any other qubit platform,” Prof. Dzurak said.

  • Mysterious “quantum echo” in superconductors could unlock new tech

    Scientists at the U.S. Department of Energy Ames National Laboratory and Iowa State University have discovered an unexpected “quantum echo” in a superconducting material. This discovery provides insight into quantum behaviors that could be used for next-generation quantum sensing and computing technologies.
    Superconductors are materials that carry electricity without resistance. Within these superconductors are collective vibrations known as “Higgs modes,” a quantum phenomenon in which the material’s electron potential fluctuates in a way analogous to the Higgs boson. Higgs modes appear when a material undergoes a superconducting phase transition.
    Observing these vibrations has been a long-time challenge for scientists because they exist for a very short time. They also have complex interactions with quasiparticles, which are electron-like excitations that emerge from the breakdown of superconductivity.
    However, using advanced terahertz (THz) spectroscopy techniques, the research team discovered a novel type of quantum echo, called the “Higgs echo,” in superconducting niobium materials used in quantum computing circuits.
    “Unlike conventional echoes observed in atoms or semiconductors, the Higgs echo arises from a complex interaction between the Higgs modes and quasiparticles, leading to unusual signals with distinct characteristics,” explained Jigang Wang, a scientist at Ames Lab and lead of the research team.
    According to Wang, the Higgs echo can remember and reveal hidden quantum pathways within the material. By using precisely timed pulses of THz radiation, his team was able to observe these echoes. Using these THz radiation pulses, they can also use the echoes to encode, store, and retrieve quantum information embedded within this superconducting material.
    This research demonstrates the ability to control and observe quantum coherence in superconductors and paves the way for potential new methods of quantum information storage and processing.
    “Understanding and controlling these unique quantum echoes brings us a step closer to practical quantum computing and advanced quantum sensing technologies,” said Wang.
    This project was partially supported through the Superconducting Quantum Materials and Systems Center (SQMS).

  • Could your smartphone detect mental health risks before you notice them?

    Data passively collected from cell phone sensors can identify behaviors associated with a host of mental health disorders, from agoraphobia to generalized anxiety disorder to narcissistic personality disorder. New findings show that the same data can identify behaviors associated with a wider array of mental disorder symptoms.
    Colin E. Vize, assistant professor in the Department of Psychology in Pitt’s Kenneth P. Dietrich School of Arts and Sciences, is co-PI on this research, which broadens the scope of how clinicians might one day use this data to treat their patients.
    The work was led by first author Whitney Ringwald (SOC WK ’18G, A&S ’21G), a professor at the University of Minnesota who completed her graduate training at Pitt. Also on the team were former Pitt Professor Aiden Wright, now at the University of Michigan, and Grant King, one of Wright’s graduate students.
    “This is an important step in the right direction,” Vize said, “but there is a lot of work to be done before we can potentially realize any of the clinical promises of using sensors on smartphones to help inform assessment and treatment.”
    In theory, an app that could make use of such data would give clinicians access to substantially more, and more reliable, data about their patients’ lives between visits.
    “We’re not always the best reporters, we often forget things,” Vize said of filling out self-assessments. “But with passive sensing, we might be able to collect data unobtrusively, as people are going about their daily lives, without having to ask a lot of questions.”
    As the first steps to realizing such a tool, researchers investigated whether they could infer if people were behaving in ways associated with certain mental health conditions. Previous research has connected passive sensor readings with behaviors that point to specific illnesses, including depression and post-traumatic stress disorder. This new work, published July 3 in the journal JAMA Network Open, expands upon that research, showing that it can be linked to symptoms that are not specific to any one mental health condition.

    This is important, Vize said, because many behaviors are associated with more than one disorder, and different people with the same disorder can look, act and feel very differently.
    “The disorder categories tend to not carve nature at its joints,” he said. “We can think more transdiagnostically, and that gives us a little more accurate picture of some of the symptoms that people are experiencing.”
    For this study, Vize and a team of researchers used a statistical analysis tool called Mplus to test whether sensor data correlated with mental health symptoms reported at baseline. Specifically, they examined a set of broad, evidence-based symptom dimensions: internalizing, detachment, disinhibition, antagonism, thought disorder and somatoform, or unexplained physical, symptoms.
    In addition to the six dimensions, they also looked at what has been called the p-factor. This is not a specific behavior or symptom; rather, it represents a shared feature that runs across all kinds of mental health symptoms.
    “You can think about it sort of like a Venn diagram,” Vize said. If all the symptoms associated with all mental health issues were circles, the p-factor is the space where they all overlap. It is not a behavior in and of itself. “It’s essentially what’s shared across all dimensions.”
    The researchers made use of the Intensive Longitudinal Investigation of Alternative Diagnostic Dimensions study (ILIADD), which was conducted in Pittsburgh in the spring of 2023. From ILIADD, they analyzed the data of 557 people who had filled out self-assessments and shared data from their cell phones, including (but not limited to):

    • GPS data that indicated how long people stayed home and the maximum distance they traveled from home
    • Time spent walking, running and stationary
    • How long their screens were on
    • How many calls they received and made
    • Battery status
    • Sleep time

    Using an app developed by researchers at the University of Oregon, the team was able to relate the sensor data to various mental health symptoms. Comparing the app’s findings to questionnaires filled out by participants, Vize and team determined that the six dimensions of mental health symptoms, which reflect symptoms represented among many disorders, did correlate to the sensor data.
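    The underlying analysis, correlating aggregated sensor features with symptom scores across participants, can be sketched roughly as follows. The study used Mplus; this Python analogue on synthetic data is purely illustrative, and the variable names are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 557  # participants, matching the study's sample size

# Synthetic stand-ins: a latent symptom dimension (e.g. internalizing) drives
# both a questionnaire score and a passively sensed feature like time at home.
latent = rng.normal(size=n)
symptom_score = latent + rng.normal(scale=1.0, size=n)
hours_at_home = 10 + 2 * latent + rng.normal(scale=2.0, size=n)

# The study's question, in miniature: do sensor features and symptom
# dimensions move together across people?
r = np.corrcoef(symptom_score, hours_at_home)[0, 1]
print(f"symptom-sensor correlation: r = {r:.2f}")
```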

    Interestingly, they also found sensor data correlated to the p-factor, a general marker of mental health problems. The implications of these findings are several-fold — ultimately, it may one day be possible to use this kind of technology to better understand symptoms in a patient whose presentation doesn’t fit the category of any single disorder.
    But for now, these data do not say anything about individuals’ mental health; they deal in averages. Mental health is complex, and behavior varies wildly. “These sensor analyses may more accurately describe some people than others,” he said.
    That’s one of the reasons Vize doesn’t see this kind of technology ever replacing a human clinician. “A lot of work in this area is focused on getting to the point where we can talk about, ‘How does this potentially enhance or supplement existing clinical care?’
    “Because I definitely don’t think it can replace treatment. It would be more of an additional tool in the clinician’s toolbox.”

  • This new camera sees the invisible in 3D without lenses

    Researchers have used the centuries-old idea of pinhole imaging to create a high-performance mid-infrared imaging system without lenses. The new camera can capture extremely clear pictures over a large range of distances and in low light, making it useful for situations that are challenging for traditional cameras.
    “Many useful signals are in the mid-infrared, such as heat and molecular fingerprints, but cameras working at these wavelengths are often noisy, expensive or require cooling,” said research team leader Heping Zeng from East China Normal University. “Moreover, traditional lens-based setups have a limited depth of field and need careful design to minimize optical distortions. We developed a high-sensitivity, lens-free approach that delivers a much larger depth of field and field of view than other systems.”
    In Optica, Optica Publishing Group’s journal for high-impact research, the researchers describe how they use light to form a tiny “optical pinhole” inside a nonlinear crystal, which also turns the infrared image into a visible one. Using this setup, they acquired clear mid-infrared images with a depth of field of over 35 cm and a field of view of more than 6 cm. They were also able to use the system to acquire 3D images.
    “This approach can enhance night-time safety, industrial quality control and environmental monitoring,” said research team member Kun Huang from East China Normal University. “And because it uses simpler optics and standard silicon sensors, it could eventually make infrared imaging systems more affordable, portable and energy efficient. It can even be applied with other spectral bands such as the far-infrared or terahertz wavelengths, where lenses are hard to make or perform poorly.”
    Pinhole imaging reimagined
    Pinhole imaging is one of the oldest image-making methods, first described by the Chinese philosopher Mozi in the 4th century BC. A traditional pinhole camera works by letting light pass through a tiny hole in a lightproof box, projecting an inverted image of the outside scene onto the opposite surface inside. Unlike lens-based imaging, pinhole imaging avoids distortion, has an infinite depth of field and works across a wide range of wavelengths.
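    Those properties all follow from similar-triangle geometry, which a few lines can illustrate (the dimensions are made up for illustration):

```python
def pinhole_image_height(object_height: float, object_dist: float,
                         screen_dist: float) -> float:
    """Similar triangles: image_height / screen_dist = object_height / object_dist.
    With no lens there is no focal plane, so this holds at every object
    distance -- the 'infinite depth of field' of a pinhole camera."""
    return object_height * screen_dist / object_dist

# A 10 cm object stays sharp whether it is 11, 19 or 35 cm away;
# only its magnification changes (screen placed 2 cm behind the pinhole).
for d in (11.0, 19.0, 35.0):
    h = pinhole_image_height(10.0, d, 2.0)
    print(f"object at {d:.0f} cm -> image {h:.2f} cm tall")
```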
    To bring these advantages to a modern infrared imaging system, the researchers used an intense laser to form an optical hole, or artificial aperture, inside a nonlinear crystal. Because of its special optical properties, the crystal converts the infrared image into visible light, so that a standard silicon camera can record it.

    The researchers say that the use of a specially designed crystal with a chirped-period structure, which can accept light rays from a broad range of directions, was key to achieving a large field of view. Also, the upconversion detection method naturally suppresses noise, which allows it to work even in very low light conditions.
    “Lensless nonlinear pinhole imaging is a practical way to achieve distortion-free, large-depth, wide-field-of-view mid-infrared imaging with high sensitivity,” said Huang. “The ultrashort synchronized laser pulses also provide a built-in ultrafast optical time gate that can be used for sensitive, time-of-flight depth imaging, even with very few photons.”
    After figuring out that an optical pinhole radius of about 0.20 mm produced sharp, well-defined details, the researchers used this aperture size to image targets that were 11 cm, 15 cm and 19 cm away. They achieved sharp imaging at the mid-infrared wavelength of 3.07 μm, across all the distances, confirming a large depth range. They were also able to keep images sharp for objects placed up to 35 cm away, demonstrating a large depth of field.
    3D imaging without lenses
    The investigators then used their setup for two types of 3D imaging. For 3D time-of-flight imaging, they imaged a matte ceramic rabbit by using synchronized ultrafast pulses as an optical gate and were able to reconstruct the 3D shape with micron-level axial precision. Even when the input was reduced to about 1.5 photons per pulse — simulating very low-light conditions — the method still produced 3D images after correlation-based denoising.
    They also performed two-snapshot depth imaging by taking two pictures of a stacked “ECNU” target at slightly different object distances and using those to calculate the true sizes and depths. With this method, they were able to measure the depth of the objects over a range of about 6 centimeters, without using complex pulsed timing techniques.
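    The inversion behind two-snapshot depth imaging follows from plain pinhole geometry; the formula and the example numbers below are illustrative, and the paper's exact procedure may differ:

```python
def depth_from_two_snapshots(s1: float, s2: float, shift: float,
                             screen_dist: float) -> tuple[float, float]:
    """Pinhole imaging gives image size s = S * v / u for true size S, object
    distance u and pinhole-to-sensor distance v.  Two snapshots, with the
    object moved a known distance 'shift' between them, pin down both unknowns:
        s1 / s2 = (u + shift) / u   =>   u = shift * s2 / (s1 - s2)
    """
    u = shift * s2 / (s1 - s2)
    true_size = s1 * u / screen_dist
    return u, true_size

# Example: a 5 cm object at 15 cm, sensor 2 cm behind the pinhole, second
# snapshot taken with the object 4 cm farther away.
s1 = 5.0 * 2.0 / 15.0
s2 = 5.0 * 2.0 / 19.0
u, size = depth_from_two_snapshots(s1, s2, 4.0, 2.0)
print(f"recovered distance: {u:.1f} cm, size: {size:.1f} cm")  # 15.0 cm, 5.0 cm
```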
    The researchers note that the mid-infrared nonlinear pinhole imaging system is still a proof-of-concept that requires a relatively complex and bulky laser setup. However, as new nonlinear materials and integrated light sources are developed, the technology should become far more compact and easier to deploy.
    They are now working to make the system faster, more sensitive and adaptable to different imaging scenarios. Their plans include boosting conversion efficiency, adding dynamic control to reshape the optical pinhole for different scenes, and extending the camera’s operation across a wider mid-infrared range.

  • The quantum internet just went live on Verizon’s network

    In a first-of-its-kind experiment, engineers at the University of Pennsylvania brought quantum networking out of the lab and onto commercial fiber-optic cables using the same Internet Protocol (IP) that powers today’s web. Reported in Science, the work shows that fragile quantum signals can run on the same infrastructure that carries everyday online traffic. The team tested their approach on Verizon’s campus fiber-optic network.
    The Penn team’s tiny “Q-chip” coordinates quantum and classical data and, crucially, speaks the same language as the modern web. That approach could pave the way for a future “quantum internet,” which scientists believe may one day be as transformative as the dawn of the online era.
    Quantum signals rely on pairs of “entangled” particles, so closely linked that changing one instantly affects the other. Harnessing that property could allow quantum computers to link up and pool their processing power, enabling advances like faster, more energy-efficient AI or designing new drugs and materials beyond the reach of today’s supercomputers.
    Penn’s work shows, for the first time on live commercial fiber, that a chip can not only send quantum signals but also automatically correct for noise, bundle quantum and classical data into standard internet-style packets, and route them using the same addressing system and management tools that connect everyday devices online.
    “By showing an integrated chip can manage quantum signals on a live commercial network like Verizon’s, and do so using the same protocols that run the classical internet, we’ve taken a key step toward larger-scale experiments and a practical quantum internet,” says Liang Feng, Professor in Materials Science and Engineering (MSE) and in Electrical and Systems Engineering (ESE), and the Science paper’s senior author.
    The Challenges of Scaling the Quantum Internet
    Erwin Schrödinger, who coined the term “quantum entanglement,” famously related the concept to a cat hidden in a box. If the lid is closed, and the box also contains radioactive material, the cat could be alive or dead. One way to interpret the situation is that the cat is both alive and dead. Only opening the box confirms the cat’s state.

    That paradox is roughly analogous to the unique nature of quantum particles. Once measured, they lose their unusual properties, which makes scaling a quantum network extremely difficult.
    “Normal networks measure data to guide it towards the ultimate destination,” says Robert Broberg, a doctoral student in ESE and coauthor of the paper. “With purely quantum networks, you can’t do that, because measuring the particles destroys the quantum state.”
    Coordinating Classical and Quantum Signals
    To get around this obstacle, the team developed the “Q-Chip” (short for “Quantum-Classical Hybrid Internet by Photonics”) to coordinate “classical” signals, made of regular streams of light, and quantum particles. “The classical signal travels just ahead of the quantum signal,” says Yichi Zhang, a doctoral student in MSE and the paper’s first author. “That allows us to measure the classical signal for routing, while leaving the quantum signal intact.”
    In essence, the new system works like a railway, pairing regular light locomotives with quantum cargo. “The classical ‘header’ acts like the train’s engine, while the quantum information rides behind in sealed containers,” says Zhang. “You can’t open the containers without destroying what’s inside, but the engine ensures the whole train gets where it needs to go.”
    Because the classical header can be measured, the entire system can follow the same “IP” or “Internet Protocol” that governs today’s internet traffic. “By embedding quantum information in the familiar IP framework, we showed that a quantum internet could literally speak the same language as the classical one,” says Zhang. “That compatibility is key to scaling using existing infrastructure.”
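    The train analogy maps naturally onto a packet abstraction: addressing lives in a readable classical header, while the payload is never inspected. A toy model of that separation (pure illustration, not the Q-chip's actual interface; real quantum payloads are photonic states, not data fields):

```python
from dataclasses import dataclass, field

@dataclass
class HybridPacket:
    # Classical header: freely readable, used for routing (like an IP header).
    src: str
    dst: str
    # Quantum payload: measuring it would collapse the state, so routers
    # must forward it untouched.
    payload: object = field(repr=False, default=None)
    measured: bool = False

    def measure_payload(self):
        self.measured = True  # any measurement destroys the quantum state
        return self.payload

def route(packet: HybridPacket, table: dict) -> str:
    # Routing decisions consult ONLY the classical header fields.
    return table[packet.dst]

pkt = HybridPacket(src="node-A", dst="node-B", payload="entangled photon pair")
port = route(pkt, {"node-B": "port-7"})
print(port, pkt.measured)  # routed without ever measuring the payload
```

    Routing reads only the header, mirroring how the system measures the classical "engine" signal while the quantum "cargo" rides through intact.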
    Adapting Quantum Technology to the Real World

    One of the greatest challenges to transmitting quantum particles on commercial infrastructure is the variability of real-world transmission lines. Unlike laboratory environments, which can maintain ideal conditions, commercial networks frequently encounter changes in temperature, thanks to weather, as well as vibrations from human activities like construction and transportation, not to mention seismic activity.
    To counteract this, the researchers developed an error-correction method that takes advantage of the fact that interference to the classical header will affect the quantum signal in a similar fashion. “Because we can measure the classical signal without damaging the quantum one,” says Feng, “we can infer what corrections need to be made to the quantum signal without ever measuring it, preserving the quantum state.”
    In testing, the system maintained transmission fidelities above 97%, showing that it could overcome the noise and instability that usually destroy quantum signals outside the lab. And because the chip is made of silicon and fabricated using established techniques, it could be mass produced, making the new approach easy to scale.
    “Our network has just one server and one node, connecting two buildings, with about a kilometer of fiber-optic cable installed by Verizon between them,” says Feng. “But all you need to do to expand the network is fabricate more chips and connect them to Philadelphia’s existing fiber-optic cables.”
    The Future of the Quantum Internet
    The main barrier to scaling quantum networks beyond a metro area is that quantum signals cannot yet be amplified without destroying their entanglement.
    Some teams have shown that "quantum keys," special codes for ultra-secure communication, can travel long distances over ordinary fiber. But those systems use weak coherent light to generate random numbers that cannot be copied, a technique that is highly effective for security applications yet not sufficient to link actual quantum processors.
    Overcoming this challenge will require new devices, but the Penn study provides an important early step: showing how a chip can run quantum signals over existing commercial fiber using internet-style packet routing, dynamic switching and on-chip error mitigation that work with the same protocols that manage today’s networks.
    “This feels like the early days of the classical internet in the 1990s, when universities first connected their networks,” says Broberg. “That opened the door to transformations no one could have predicted. A quantum internet has the same potential.”
    This study was conducted at the University of Pennsylvania School of Engineering and Applied Science and was supported by the Gordon and Betty Moore Foundation (GBMF12960 and DOI 10.37807), Office of Naval Research (N00014-23-1-2882), National Science Foundation (DMR-2323468), Olga and Alberico Pompa endowed professorship, and PSC-CUNY award (ENHC-54-93).
    Additional co-authors include Alan Zhu, Gushi Li and Jonathan Smith of the University of Pennsylvania, and Li Ge of the City University of New York.


    Scientists unveil breakthrough pixel that could put holograms on your smartphone

    New research from the University of St Andrews paves the way for holographic technology, with the potential to transform smart devices, communication, gaming and entertainment.
    In a study published recently in Light: Science & Applications, researchers from the School of Physics and Astronomy created a new optoelectronic device from the combined use of Holographic Metasurfaces (HMs) and Organic Light-Emitting Diodes (OLEDs).
    Until now, holograms have been created using lasers. The researchers found that using OLEDs and HMs gives a simpler and more compact approach that is potentially cheaper and easier to apply, overcoming the main barriers to hologram technology being used more widely.
    Organic light-emitting diodes are thin film devices widely used to make the colored pixels in mobile phone displays and some TVs. As a flat and surface-emitting light source, OLEDs are also used in emerging applications such as optical wireless communications, biophotonics and sensing, where the ability to integrate with other technologies makes them good candidates to realize miniaturized light-based platforms.
    A holographic metasurface is a thin, flat array of tiny structures called meta-atoms, each roughly a thousandth of the width of a strand of hair, designed to manipulate light's properties. Metasurfaces can make holograms, and their uses span diverse fields such as data storage, anti-counterfeiting, optical displays, high-numerical-aperture lenses (for example, in optical microscopy) and sensing.
    This, however, is the first time both have been used together to produce the basic building block of a holographic display.
    Researchers found that when each meta-atom is carefully shaped to control the properties of the beam of light that passes through it, it behaves as a pixel of the HM. As light travels through the HM, each pixel slightly modifies the light's properties.

    Thanks to these modifications, it is possible to create a pre-designed image on the other side, exploiting the principle of light interference, whereby light waves create complicated patterns when they interact with each other.
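A minimal numerical sketch of this interference principle: treat each metasurface pixel as imprinting a phase on the OLED light, and compute the far-field image as the discrete Fourier transform of those phased contributions. This toy far-field model is an assumption for illustration, not the design method used in the paper.

```python
import cmath
import math

def far_field(phases):
    """Far-field intensity pattern from a 1-D row of phase-only pixels."""
    n = len(phases)
    field = [cmath.exp(1j * p) for p in phases]  # unit-amplitude pixels
    out = []
    for k in range(n):  # each far-field direction (diffraction order)
        s = sum(f * cmath.exp(-2j * math.pi * k * m / n)
                for m, f in enumerate(field))
        out.append(abs(s) ** 2 / n)  # intensity from interference
    return out

# A linear phase ramp across the pixels steers all the light into a
# single bright spot (order k = 1): the simplest "pre-designed image."
n = 8
ramp = [2 * math.pi * m / n for m in range(n)]
intensity = far_field(ramp)
print(max(range(n), key=lambda k: intensity[k]))  # → 1
```

Changing the phase pattern reshapes the interference, so in principle any target image can be encoded in the pixel phases; real devices solve this inverse design problem with far more pixels and in two dimensions.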
    Professor Ifor Samuel, from the School of Physics and Astronomy, said: “We are excited to demonstrate this new direction for OLEDs. By combining OLEDs with metasurfaces, we also open a new way of generating holograms and shaping light.”
    Andrea Di Falco, professor in nano-photonics at the School of Physics and Astronomy, said: “Holographic metasurfaces are one of the most versatile material platforms to control light. With this work, we have removed one of the technological barriers that prevent the adoption of metamaterials in everyday applications. This breakthrough will enable a step change in the architecture of holographic displays for emerging applications, for example, in virtual and augmented reality.”
    Professor Graham Turnbull, from the School of Physics and Astronomy, said: “OLED displays normally need thousands of pixels to create a simple picture. This new approach allows a complete image to be projected from a single OLED pixel!”
    Until now, researchers could only make very simple shapes with OLEDs, which limited their usability in some applications. However, this breakthrough provides a path toward a miniaturized and highly integrated metasurface display.