More stories

  • These little robots literally walk on water

    Imagine a tiny robot, no bigger than a leaf, gliding across a pond’s surface like a water strider. One day, devices like this could track pollutants, collect water samples or scout flooded areas too risky for people.
    Baoxing Xu, professor of mechanical and aerospace engineering at the University of Virginia’s School of Engineering and Applied Science, is pioneering a way to build them. In a new study published in Science Advances, Xu’s research introduces HydroSpread, a first-of-its-kind fabrication method that has great potential to impact the growing field of soft robotics. This innovation allows scientists to make soft, floating devices directly on water, a technology that could be utilized in fields from health care to electronics to environmental monitoring.
    Until now, the thin, flexible films used in soft robotics had to be manufactured on rigid surfaces like glass and then peeled off and transferred to water, a delicate process that often caused films to tear. HydroSpread sidesteps this issue by letting liquid itself serve as the “workbench.” Droplets of liquid polymer naturally spread into ultrathin, uniform sheets on the water’s surface. With a finely tuned laser, Xu’s team can then carve these sheets into complex patterns — circles, strips, even the UVA logo — with remarkable precision.
    Using this approach, the researchers built two insect-like prototypes: HydroFlexor, which paddles across the surface using fin-like motions, and HydroBuckler, which “walks” forward on buckling legs, inspired by water striders. In the lab, the team powered these devices with an overhead infrared heater. As the films warmed, their layered structure bent or buckled, creating paddling or walking motions. By cycling the heat on and off, the devices could adjust their speed and even turn — proof that controlled, repeatable movement is possible. Future versions could be designed to respond to sunlight, magnetic fields or tiny embedded heaters, opening the door to autonomous soft robots that can move and adapt on their own.
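    A minimal sketch of this kind of duty-cycle actuation, written as a hypothetical control loop (the heater interface, timing values and paddle response below are illustrative assumptions, not details from the study):

        import time

        # Hypothetical interface to an overhead infrared heater; purely illustrative,
        # not the apparatus used in the experiments.
        class IRHeater:
            def on(self):
                print("heater on")   # film layers warm and bend -> power stroke

            def off(self):
                print("heater off")  # film cools and relaxes -> recovery stroke

        def paddle(heater, heat_s, cool_s, cycles):
            """Cycle heat on and off; shorter cycles give faster strokes. Heating the
            left and right fins unevenly (not modeled here) could produce turning."""
            for _ in range(cycles):
                heater.on()
                time.sleep(heat_s)
                heater.off()
                time.sleep(cool_s)

        h = IRHeater()
        paddle(h, heat_s=0.5, cool_s=0.5, cycles=3)   # faster paddling
        paddle(h, heat_s=1.5, cool_s=1.5, cycles=2)   # slower paddling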
    “Fabricating the film directly on liquid gives us an unprecedented level of integration and precision,” Xu said. “Instead of building on a rigid surface and then transferring the device, we let the liquid do the work to provide a perfectly smooth platform, reducing failure at every step.”
    The potential reaches beyond soft robots. By making it easier to form delicate films without damaging them, HydroSpread could open new possibilities for creating wearable medical sensors, flexible electronics and environmental monitors — tools that need to be thin, soft and durable in settings where traditional rigid materials don’t work.
    About the Researcher
    Baoxing Xu is a nationally recognized expert in mechanics, compliant structures and bioinspired engineering. His lab at UVA Engineering focuses on translating strategies from nature — such as the delicate mechanics of insect locomotion — into resilient, functional devices for human use.
    This work, supported by the National Science Foundation and 4-VA, was carried out within UVA’s Department of Mechanical and Aerospace Engineering. Graduate and undergraduate researchers in Xu’s group played a central role in the experiments, gaining hands-on experience with state-of-the-art fabrication and robotics techniques.

  • Scientists finally found the “dark matter” of electronics

    In a world-first, researchers from the Femtosecond Spectroscopy Unit at the Okinawa Institute of Science and Technology (OIST) have directly observed the evolution of the elusive dark excitons in atomically thin materials, laying the foundation for new breakthroughs in both classical and quantum information technologies. Their findings have been published in Nature Communications. Professor Keshav Dani, head of the unit, highlights the significance: “Dark excitons have great potential as information carriers, because they are inherently less likely to interact with light, and hence less prone to degradation of their quantum properties. However, this invisibility also makes them very challenging to study and manipulate. Building on a previous breakthrough at OIST in 2020, we have opened a route to the creation, observation, and manipulation of dark excitons.”
    “In the general field of electronics, one manipulates electron charge to process information,” explains Xing Zhu, co-first author and PhD student in the unit. “In the field of spintronics, we exploit the spin of electrons to carry information. Going further, in valleytronics, the crystal structure of unique materials enables us to encode information into distinct momentum states of the electrons, known as valleys.” The ability to use the valley dimension of dark excitons to carry information positions them as promising candidates for quantum technologies. Dark excitons are by nature more resistant to environmental factors like thermal background than the current generation of qubits, potentially requiring less extreme cooling and making them less prone to decoherence, where the unique quantum state breaks down.
    Defining landscapes of energy with bright and dark excitons
    Over the past decade, progress has been made in the development of a class of atomically thin semiconducting materials known as TMDs (transition metal dichalcogenides). As with all semiconductors, atoms in TMDs are aligned in a crystal lattice, which confines electrons to a specific level (or band) of energy, such as the valence band. When exposed to light, the negatively charged electrons are excited to a higher energy state – the conduction band – leaving behind a positively charged hole in the valence band. The electrons and holes are bound together by electrostatic attraction, forming hydrogen-like quasiparticles called excitons. If certain quantum properties of the electron and hole match, i.e., they have the same spin configuration and inhabit the same ‘valley’ in momentum space (the energy minima that electrons and holes can occupy in the atomic crystal structure), the two recombine within a picosecond (1 ps = 10⁻¹² second), emitting light in the process. These are ‘bright’ excitons.
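    The “hydrogen-like” analogy can be made quantitative with the textbook Wannier–Mott picture (a standard relation, not a result of this study): the exciton binding energy is a rescaled hydrogen Rydberg,

        E_b \approx \frac{\mu}{m_0}\,\frac{1}{\varepsilon_r^{2}} \times 13.6\ \text{eV},

    where \mu is the reduced mass of the electron–hole pair, m_0 the free-electron mass and \varepsilon_r the dielectric screening of the surroundings. In atomically thin TMDs, screening is weak, so the binding energy reaches hundreds of millielectronvolts – far larger than in typical bulk semiconductors – which is why excitons dominate their optical response.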
    However, if the quantum properties of the electron and hole do not match up, the electron and hole are forbidden from recombining on their own and do not emit light. These are characterized as ‘dark’ excitons. “There are two ‘species’ of dark excitons,” explains Dr. David Bacon, co-first author who is now at University College London, “momentum-dark and spin-dark, depending on where the properties of electron and hole are in conflict. The mismatch in properties not only prevents immediate recombination, allowing them to exist up to several nanoseconds (1 ns = 10⁻⁹ second – a much more useful timescale), but also makes dark excitons more isolated from environmental interactions.”
    “The unique atomic symmetry of TMDs means that when exposed to a state of light with a circular polarization, one can selectively create bright excitons only in a specific valley. This is the fundamental principle of valleytronics. However, bright excitons rapidly turn into numerous dark excitons that can potentially preserve the valley information. Which species of dark excitons are involved and to what degree they can sustain the valley information is unclear, but this is a key step in the pursuit of valleytronic applications,” explains Dr. Vivek Pareek, co-first author and OIST graduate who is now a Presidential Postdoctoral Fellow at the California Institute of Technology.
    Observing electrons at the femtosecond scale
    Using the world-leading TR-ARPES (time- and angle-resolved photoemission spectroscopy) setup at OIST, which includes a proprietary, table-top XUV (extreme ultraviolet) source, the team tracked the characteristics of all excitons over time, following the creation of bright excitons in a specific valley of a TMD semiconductor, by simultaneously quantifying the momentum, spin state and population levels of electrons and holes – properties that had never been quantified simultaneously before.
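    For context, photoemission measurements of this kind rest on two textbook relations (general ARPES physics, not specifics of the OIST instrument): the kinetic energy of an emitted electron and its in-plane crystal momentum follow

        E_{\text{kin}} = h\nu - \phi - |E_B|, \qquad \hbar k_{\parallel} = \sqrt{2 m_e E_{\text{kin}}}\,\sin\theta,

    where h\nu is the XUV photon energy, \phi the work function, E_B the electron’s binding energy and \theta the emission angle. Adding a pump–probe time delay (the “TR” in TR-ARPES) turns these snapshots into a movie of where electrons and holes sit in energy and momentum after excitation.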
    Their findings show that within a picosecond, some bright excitons are scattered by phonons (quantized crystal lattice vibrations) into different momentum valleys, rendering them momentum-dark. Later, spin-dark excitons dominate, where electrons have flipped spin within the same valley, persisting on nanosecond scales.
    With this, the team has overcome the fundamental challenge of how to access and track dark excitons, laying the foundation for dark valleytronics as a field. Dr. Julien Madéo of the unit summarizes: “Thanks to the sophisticated TR-ARPES setup at OIST, we have directly accessed and mapped how, and which, dark excitons keep long-lived valley information. Future developments to read out the dark excitons’ valley properties will unlock broad dark valleytronic applications across information systems.”

  • Scientists just recreated a wildfire that made its own weather

    On September 5, 2020, California’s Creek Fire grew so severe that it began producing its own weather system. The fire’s extreme heat produced an explosive thunderhead that spewed lightning strikes and further fanned the roaring flames, making containment elusive and endangering the lives of firefighters on the ground. These wildfire-born storms have become a growing part of fire seasons across the West, with lasting impacts on air quality, weather, and climate. Until now, scientists have struggled to replicate them in Earth system models, hindering our ability to predict their occurrence and understand their impacts on the global climate. Now, a new study provides a breakthrough by developing a novel wildfire-Earth system modeling framework.
    The research, published September 25th in Geophysical Research Letters, represents the first successful simulation of these wildfire-induced storms, known as pyrocumulonimbus clouds, within an Earth system model. Led by DRI scientist Ziming Ke, the study successfully reproduced the observed timing, height, and strength of the Creek Fire’s thunderhead – one of the largest known pyrocumulonimbus clouds seen in the U.S., according to NASA. The model also replicated multiple thunderstorms produced by the 2021 Dixie Fire, which occurred under very different conditions. Key to these results is accounting for the way cloud development is aided by moisture lofted into the higher reaches of the atmosphere by terrain and winds.
    “This work is a first-of-its-kind breakthrough in Earth system modeling,” Ke said. “It not only demonstrates how extreme wildfire events can be studied within Earth system models, but also establishes DRI’s growing capability in Earth system model development — a core strength that positions the institute to lead future advances in wildfire-climate science.”
    When a pyrocumulonimbus cloud forms, it injects smoke and moisture into the upper atmosphere at magnitudes comparable to those of small volcanic eruptions, impacting the way Earth’s atmosphere receives and reflects sunlight. These fire aerosols can persist for months or longer, altering stratospheric composition. When transported to polar regions, they affect Antarctic ozone dynamics, modify clouds and albedo, and accelerate ice and snow melt, reshaping polar climate feedbacks. Scientists estimate that tens to hundreds of these storms occur globally each year, and that the trend of increasingly severe wildfires will only grow their numbers. Until now, failing to incorporate these storms into Earth system models has hindered our ability to understand this natural disturbance’s impact on global climate.
    The research team also included scientists from Lawrence Livermore National Laboratory, U.C. Irvine, and Pacific Northwest National Laboratory. Their breakthrough leveraged the Department of Energy’s (DOE) Energy Exascale Earth System Model (E3SM) to successfully capture the complex interplay between wildfires and the atmosphere.
    “Our team developed a novel wildfire-Earth system modeling framework that integrates high-resolution wildfire emissions, a one-dimensional plume-rise model, and fire-induced water vapor transport into DOE’s cutting-edge Earth system model,” Ke said. “This breakthrough advances high-resolution modeling of extreme hazards to improve national resilience and preparedness, and provides the framework for future exploration of these storms at regional and global scales within Earth system models.”
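    To give a feel for what a one-dimensional plume-rise component does, the sketch below uses the classic Briggs buoyant-plume estimate; it is a generic textbook formula with invented numbers, not the plume model or the values used in the study.

        # Briggs transitional plume rise for a buoyant release in near-neutral air:
        #   dh(x) = 1.6 * F**(1/3) * x**(2/3) / u
        # F: buoyancy flux (m^4/s^3), u: mean wind speed (m/s), x: downwind distance (m).
        # The values of F, u and x below are purely illustrative.
        def briggs_plume_rise(F, u, x):
            return 1.6 * F ** (1.0 / 3.0) * x ** (2.0 / 3.0) / u

        F = 1.0e5   # an intense, fire-like buoyancy flux (assumed)
        u = 5.0     # mean wind speed in m/s (assumed)
        for x in (1_000.0, 5_000.0, 10_000.0):
            print(f"x = {x:7.0f} m  ->  plume rise ~ {briggs_plume_rise(F, u, x):6.0f} m")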

  • DOLPHIN AI uncovers hundreds of invisible cancer markers

    McGill University researchers have developed an artificial intelligence tool that can detect previously invisible disease markers inside single cells.
    In a study published in Nature Communications, the researchers demonstrate how the tool, called DOLPHIN, could one day be used by doctors to catch diseases earlier and guide treatment options.
    “This tool has the potential to help doctors match patients with the therapies most likely to work for them, reducing trial-and-error in treatment,” said senior author Jun Ding, assistant professor in McGill’s Department of Medicine and a junior scientist at the Research Institute of the McGill University Health Centre.
    Zooming in on genetic building blocks
    Disease markers are often subtle changes in RNA expression that can indicate when a disease is present, how severe it may become or how it might respond to treatment.
    Conventional gene-level methods of analysis collapse these markers into a single count per gene, masking critical variation and capturing only the tip of the iceberg, said the researchers.
    Now, advances in artificial intelligence have made it possible to capture the fine-grained complexity of single-cell data. DOLPHIN moves beyond the gene level, zooming in to see how genes are spliced together from smaller pieces called exons to provide a clearer view of cell states.

    “Genes are not just one block, they’re like Lego sets made of many smaller pieces,” said first author Kailu Song, a PhD student in McGill’s Quantitative Life Sciences program. “By looking at how those pieces are connected, our tool reveals important disease markers that have long been overlooked.”
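    A toy example makes the distinction concrete. The snippet below is purely illustrative – hypothetical reads for a made-up gene, not DOLPHIN’s actual pipeline – but it shows what is lost when everything is collapsed into a single per-gene count and what exon- and junction-level counting retains.

        from collections import Counter

        # Hypothetical aligned reads for gene "G1": each read reports the exon(s) it
        # covers; a read like "E1-E3" spans a splice junction joining exon 1 to exon 3.
        reads = ["E1", "E1", "E2", "E1-E3", "E1-E3", "E3", "E1-E2"]

        gene_level = len(reads)        # conventional analysis: one number per gene
        exon_level = Counter()
        junction_level = Counter()
        for r in reads:
            if "-" in r:
                junction_level[r] += 1            # records how exons are joined
                for exon in r.split("-"):
                    exon_level[exon] += 1
            else:
                exon_level[r] += 1

        print("gene-level count:", gene_level)           # 7 -- splicing variation hidden
        print("exon-level counts:", dict(exon_level))    # which pieces are used
        print("junction counts:", dict(junction_level))  # e.g. E1-E3 skips exon E2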
    In one test case, DOLPHIN analyzed single-cell data from pancreatic cancer patients and found more than 800 disease markers missed by conventional tools. It was able to distinguish patients with high-risk, aggressive cancers from those with less severe cases, information that would help doctors choose the right treatment path.
    A step toward ‘virtual cells’
    More broadly, the breakthrough lays the foundation for achieving the long-term goal of building digital models of human cells. DOLPHIN generates richer single-cell profiles than conventional methods, enabling virtual simulations of how cells behave and respond to drugs before moving to lab or clinical trials, saving time and money.
    The researchers’ next step will be to expand the tool’s reach from a few datasets to millions of cells, paving the way for more accurate virtual cell models in the future.
    About the study
    “DOLPHIN advances single-cell transcriptomics beyond gene level by leveraging exon and junction reads” by Kailu Song and Jun Ding et al., was published in Nature Communications.
    This research was supported by the Meakins-Christie Chair in Respiratory Research, the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada and the Fonds de recherche du Québec.

  • Princeton’s AI reveals what fusion sensors can’t see

    Imagine watching a favorite movie when suddenly the sound stops. The data representing the audio is missing. All that’s left are images. What if artificial intelligence (AI) could analyze each frame of the video and provide the audio automatically based on the pictures, reading lips and noting each time a foot hits the ground?
    That’s the general concept behind a new AI that fills in missing data about plasma, the fuel of fusion, according to Azarakhsh Jalalvand of Princeton University. Jalalvand is the lead author on a paper about the AI, known as Diag2Diag, that was recently published in Nature Communications. “We have found a way to take the data from a bunch of sensors in a system and generate a synthetic version of the data for a different kind of sensor in that system,” he said. The synthetic data aligns with real-world data and is more detailed than what an actual sensor could provide. This could increase the robustness of control while reducing the complexity and cost of future fusion systems. “Diag2Diag could also have applications in other systems such as spacecraft and robotic surgery by enhancing detail and recovering data from failing or degraded sensors, ensuring reliability in critical environments.”
    The research is the result of an international collaboration between scientists at Princeton University, the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL), Chung-Ang University, Columbia University and Seoul National University. All of the sensor data used in the research to develop the AI was gathered from experiments at the DIII-D National Fusion Facility, a DOE user facility.
    The new AI enhances the way scientists can monitor and control the plasma inside a fusion system and could help keep future commercial fusion systems a reliable source of electricity. “Fusion devices today are all experimental laboratory machines, so if something happens to a sensor, the worst thing that can happen is that we lose time before we can restart the experiment. But if we are thinking about fusion as a source of energy, it needs to work 24/7, without interruption,” Jalalvand said.
    AI could lead to compact, economical fusion systems
    The name Diag2Diag originates from the word “diagnostic,” which refers to the technique used to analyze a plasma and includes sensors that measure the plasma. Diagnostics take measurements at regular intervals, often as fast as a fraction of a second apart. But some don’t measure the plasma often enough to detect particularly fast-evolving plasma instabilities: sudden changes in the plasma that can make it hard to produce power reliably.
    There are many diagnostics in a fusion system that measure different characteristics of the plasma. Thomson scattering, for example, is a diagnostic technique used in doughnut-shaped fusion systems called tokamaks. The Thomson scattering diagnostic measures the temperature of negatively charged particles known as electrons, as well as the density: the number of electrons packed into a unit of space. It takes measurements quickly but not fast enough to provide details that plasma physicists need to keep the plasma stable and at peak performance.
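    The underlying idea – learning to predict a slow, sparse diagnostic from fast, always-on ones – can be sketched with an ordinary regression. This is a deliberately simplified stand-in: Diag2Diag itself is a neural network trained on DIII-D data, and the channel counts and numbers below are invented.

        import numpy as np

        rng = np.random.default_rng(0)

        # Invented training set: rows are the instants when the slow diagnostic fired.
        # X holds readings from 8 fast sensor channels at those instants; y holds the
        # slow diagnostic's value (say, an electron temperature) at the same instants.
        n, n_fast = 200, 8
        X = rng.normal(size=(n, n_fast))
        w_true = rng.normal(size=n_fast)
        y = X @ w_true + 0.05 * rng.normal(size=n)

        # Fit a ridge-regularized linear map from the fast sensors to the slow one.
        lam = 1e-2
        w = np.linalg.solve(X.T @ X + lam * np.eye(n_fast), X.T @ y)

        # Between the slow diagnostic's measurements the fast sensors keep sampling,
        # so the fitted map can fill in synthetic values at those in-between times.
        X_between = rng.normal(size=(5, n_fast))
        print("synthetic slow-sensor values:", X_between @ w)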

    “Diag2Diag is kind of giving your diagnostics a boost without spending hardware money,” said Egemen Kolemen, principal investigator of the research who is jointly appointed at PPPL and Princeton University’s Andlinger Center for Energy and the Environment and the Department of Mechanical and Aerospace Engineering.
    This is particularly important for Thomson scattering because the other diagnostics can’t take measurements at the edge of the plasma, which is also known as the pedestal. It is the most important part of the plasma to monitor, but it’s very hard to measure. Carefully monitoring the pedestal helps scientists enhance plasma performance so they can learn the best ways to get the most energy out of the fusion reaction efficiently.
    For fusion energy to be a major part of the U.S. power system, it must be both economical and reliable. PPPL Staff Research Scientist SangKyeun Kim, who was part of the Diag2Diag research team, said the AI moves the U.S. toward those goals. “Today’s experimental tokamaks have a lot of diagnostics, but future commercial systems will likely need to have far fewer,” Kim said. “This will help make fusion reactors more compact by minimizing components not directly involved in producing energy.” Fewer diagnostics also free up valuable space inside the machine, and simplifying the system makes it more robust and reliable, with fewer chances for error. Plus, it lowers maintenance costs.
    PPPL: A leader in AI approaches to stabilizing fusion plasma
    The research team also found that the AI data supports a leading theory about how one method for stopping plasma disruptions works. Fusion scientists around the world are working on ways to control edge-localized modes (ELMs), which are powerful energy bursts in fusion reactors that can severely damage the reactor’s inner walls. One promising method to stop ELMs involves applying resonant magnetic perturbations (RMPs): small changes made to the magnetic fields used to hold a plasma inside a tokamak. PPPL is a leader in ELM-suppression research, with recent papers on AI and traditional approaches to stopping these problematic disruptions. One theory suggests that RMPs create “magnetic islands” at the plasma’s edge. These islands cause the plasma’s temperature and density to flatten, meaning the measurements become more uniform across the edge of the plasma.
    “Due to the limitation of the Thomson diagnostic, we cannot normally observe this flattening,” said PPPL Principal Research Scientist Qiming Hu, who also worked on the project. “Diag2Diag provided much more details on how this happens and how it evolves.”
    While magnetic islands can lead to ELMs, a growing body of research suggests they can also be fine-tuned using RMPs to improve plasma stability. Diag2Diag generated data that provided new evidence of this simultaneous flattening of both temperature and density in the pedestal region of the plasma. This strongly supports the magnetic island theory for ELM suppression. Understanding this mechanism is crucial for the development of commercial fusion reactors.
    The scientists are already pursuing plans to expand the scope of Diag2Diag. Kolemen noted that several researchers have already expressed interest in trying the AI. “Diag2Diag could be applied to other fusion diagnostics and is broadly applicable to other fields where diagnostic data is missing or limited,” he said.
    This research was supported by DOE under awards DE-FC02-04ER54698, DE-SC0022270, DE-SC0022272, DE-SC0024527, DE-SC0020413, DE-SC0015480 and DE-SC0024626, as well as the National Research Foundation of Korea award RS-2024-00346024 funded by the Korean government (MSIT). The authors also received financial support from the Princeton Laboratory for Artificial Intelligence under award 2025-97.

  • The hidden forces inside diamonds that could make tech 1,000x faster

    Understanding what happens inside a material when it is hit by ultrashort light pulses is one of the great challenges of matter physics and modern photonics. A new study published in Nature Photonics and led by Politecnico di Milano reveals a hitherto neglected but essential aspect: the contribution of virtual charges, charge carriers that exist only during the interaction with light but that profoundly influence the material’s response.
    The research, conducted in partnership with the University of Tsukuba, the Max Planck Institute for the Structure and Dynamics of Matter, and the Institute of Photonics and Nanotechnology (Cnr-Ifn) investigated the behavior of monocrystalline diamonds subjected to light pulses lasting a few attoseconds (billionths of a billionth of a second), using an advanced technique called attosecond-scale transient reflection spectroscopy.
    By comparing experimental data with state-of-the-art numerical simulations, the researchers were able to isolate the effect of so-called virtual vertical transitions between the electronic bands of the material. This outcome changes the perspective on how light interacts with solids, a response that even under extreme conditions had hitherto been attributed only to the movement of actual charges.
    “Our work shows that virtual carrier excitations, which develop in a few billionths of a billionth of a second, are indispensable to correctly predict the rapid optical response in solids,” said Matteo Lucchini, professor at the Department of Physics, senior author of the study, and associate at CNR-Ifn.
    “These results mark a key step in the development of ultra-fast technologies in electronics,” adds Rocío Borrego Varillas, researcher at CNR-IFN.
    The progress achieved offers new insights into the creation of ultra-fast optical devices, such as switches and modulators capable of operating at petahertz frequencies, a thousand times faster than current electronic devices. This requires a deep understanding of the behavior of both actual and virtual charges, as demonstrated by this study.
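    The timescales involved fit together with simple unit arithmetic (not a result of the paper): a field oscillating at one petahertz has a period of

        T = \frac{1}{f} = \frac{1}{10^{15}\ \text{Hz}} = 10^{-15}\ \text{s} = 1\ \text{fs} = 1000\ \text{as},

    so resolving how charges respond within a single optical cycle calls for probes on the attosecond scale used in this experiment.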
    Research was carried out at the Attosecond Research Center (ARC) of the Politecnico di Milano, in the framework of the European and national projects ERC AuDACE (Attosecond Dynamics in AdvanCed matErials) and MIUR FARE PHorTUNA (PHase Transition Ultrafast dyNAmics in Mott insulators).

  • Black hole discovery confirms Einstein and Hawking were right

    A decade ago, scientists first detected ripples in the fabric of space-time, called gravitational waves, from the collision of two black holes. Now, thanks to improved technology and a bit of luck, a newly detected black hole merger is providing the clearest evidence yet of how black holes work — and, in the process, offering long-sought confirmation of fundamental predictions by Albert Einstein and Stephen Hawking.
    The new measurements were made by the Laser Interferometer Gravitational-Wave Observatory (LIGO), with analyses led by astrophysicists Maximiliano Isi and Will Farr of the Flatiron Institute’s Center for Computational Astrophysics in New York City. The results reveal insights into the properties of black holes and the fundamental nature of space-time, hinting at how quantum physics and Einstein’s general relativity fit together.
    “This is the clearest view yet of the nature of black holes,” says Isi, who is also an assistant professor at Columbia University. “We’ve found some of the strongest evidence yet that astrophysical black holes are the black holes predicted from Albert Einstein’s theory of general relativity.”
    The results were reported in a paper published September 10 in Physical Review Letters by the LIGO-Virgo-KAGRA Collaboration.
    For massive stars, black holes are the final stage in their evolution. Black holes are so dense that even light cannot escape their gravity. When two black holes collide, the event distorts space itself, creating ripples in space-time that fan out across the universe, like sound waves ringing out from a struck bell.
    Those space-deforming ripples, called gravitational waves, can tell scientists a great deal about the objects that created them. Just as a large iron bell makes different sounds than a smaller aluminum bell, the “sound” a black hole merger makes is specific to the properties of the black holes involved.
    Scientists can detect gravitational waves with special instruments at observatories such as LIGO in the United States, Virgo in Italy and KAGRA in Japan. These instruments carefully measure how long it takes a laser to travel a given path. As gravitational waves stretch and compress space-time, the length of the instrument, and thus the light’s travel time, changes minutely. By measuring those tiny changes with great precision, scientists can use them to determine the black holes’ characteristics.
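    The scale of “minutely” can be put in numbers using standard LIGO figures (typical values, not specific to this event): the detectors measure a dimensionless strain

        h = \frac{\Delta L}{L} \sim 10^{-21},

    so over LIGO’s 4-kilometre arms the change in arm length is about 4 × 10⁻¹⁸ metres, orders of magnitude smaller than the diameter of a proton (roughly 10⁻¹⁵ metres).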

    The newly reported gravitational waves were found to be created by a merger that formed a black hole with the mass of 63 suns and spinning at 100 revolutions per second. The findings come 10 years after LIGO made the first black hole merger detection. Since that landmark discovery, improvements in equipment and techniques have enabled scientists to get a much clearer look at these space-shaking events.
    “The new pair of black holes are almost twins to the historic first detection in 2015,” Isi says. “But the instruments are much better, so we’re able to analyze the signal in ways that just weren’t possible 10 years ago.”
    With these new signals, Isi and his colleagues got a complete look at the collision from the moment the black holes first careened into each other until the final reverberations as the merged black hole settled into its new state, which happened only milliseconds after first contact.
    Previously, the final reverberations were difficult to capture, as by that point, the ringing of the black hole would be very faint. As a result, scientists couldn’t separate the ringing of the collision from that of the final black hole itself.
    In 2021, Isi led a study showcasing a cutting-edge method that he, Farr and others developed to isolate certain frequencies — or ‘tones’ — using data from the 2015 black hole merger. This method proved powerful, but the 2015 measurements weren’t clear enough to confirm key predictions about black holes. With the new, more precise measurements, though, Isi and his colleagues were more confident they had successfully isolated the milliseconds-long signal of the final, settled black hole. This enabled more unambiguous tests of the nature of black holes.
    “Ten milliseconds sounds really short, but our instruments are so much better now that this is enough time for us to really analyze the ringing of the final black hole,” Isi says. “With this new detection, we have an exquisitely detailed view of the signal both before and after the black hole merger.”
    The new observations allowed scientists to test a key conjecture dating back decades that black holes are fundamentally simple objects. In 1963, physicist Roy Kerr used Einstein’s general relativity to mathematically describe black holes with one equation. The equation showed that astrophysical black holes can be described by just two characteristics: spin and mass. With the new, higher-quality data, the scientists were able to measure the frequency and duration of the ringing of the merged black hole more precisely than ever before. This allowed them to see that, indeed, the merged black hole is a simple object, described by just its mass and spin.
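    That simplicity can be written down explicitly. For a Kerr black hole of mass M and dimensionless spin χ, standard general relativity (not a new result of this paper) gives the event-horizon area as

        A = 8\pi \left(\frac{GM}{c^{2}}\right)^{2} \left(1 + \sqrt{1 - \chi^{2}}\right),

    so once the ringdown frequencies and damping times fix M and χ, the geometry of the final black hole is fully determined.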

    The observations were also used to test a foundational idea proposed by Stephen Hawking called Hawking’s area theorem. It states that the size of a black hole’s event horizon — the line past which nothing, not even light, can return — can only ever grow. Testing whether this theorem applies requires exceptional measurements of black holes before and after their merger. Following the first black hole merger detection in 2015, Hawking wondered if the merger signature could be used to confirm his theorem. At the time, no one thought it was possible.
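    Written out, the theorem compares horizon areas before and after the merger: if the two initial black holes have areas A₁ and A₂ and the remnant has area A_f, then

        A_f \geq A_1 + A_2,

    even though the final mass is less than the sum of the initial masses, because some mass-energy is carried away as gravitational waves. Checking the inequality therefore demands good mass and spin estimates both before and after the collision, which is exactly what the cleaner ringdown measurement supplies.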
    By 2019, a year after Hawking’s death, methods had improved enough that a first tentative confirmation came using techniques developed by Isi, Farr, and colleagues. With four times better resolution, the new data gives scientists much more confidence that Hawking’s theorem is correct.
    In confirming Hawking’s theorem, the results also hint at connections to the second law of thermodynamics. This law states that a property that measures a system’s disorder, known as entropy, must increase, or at least remain constant, over time. Understanding the thermodynamics of black holes could lead to advances in other areas of physics, including quantum gravity, which aims to merge general relativity with quantum physics.
    “It’s really profound that the size of a black hole’s event horizon behaves like entropy,” Isi says. “It has very deep theoretical implications and means that some aspects of black holes can be used to mathematically probe the true nature of space and time.”
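    The connection Isi describes is usually expressed through the Bekenstein–Hawking entropy, a standard result quoted here for context:

        S = \frac{k_B c^{3}}{4 G \hbar}\, A,

    so a horizon area that can only grow mirrors an entropy that can only grow, tying the area theorem directly to the second law of thermodynamics.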
    Many suspect that future black hole merger detections will only reveal more about the nature of these objects. In the next decade, detectors are expected to become 10 times more sensitive than today, allowing for more rigorous tests of black hole characteristics.
    “Listening to the tones emitted by these black holes is our best hope for learning about the properties of the extreme space-times they produce,” says Farr, who is also a professor at Stony Brook University. “And as we build more and better gravitational wave detectors, the precision will continue to improve.”
    “For so long this field has been pure mathematical and theoretical speculation,” Isi says. “But now we’re in a position of actually seeing these amazing processes in action, which highlights how much progress there’s been — and will continue to be — in this field.”

  • Quantum chips just proved they’re ready for the real world

    UNSW Sydney nano-tech startup Diraq has shown its quantum chips aren’t just lab-perfect prototypes – they also hold up in real-world production, maintaining the 99% accuracy needed to make quantum computers viable.
    Diraq, a pioneer of silicon-based quantum computing, achieved this feat by teaming up with European nanoelectronics institute Interuniversity Microelectronics Centre (imec). Together they demonstrated the chips worked just as reliably coming off a semiconductor chip fabrication line as they do in the experimental conditions of a research lab at UNSW.
    UNSW Engineering Professor Andrew Dzurak, who is the founder and CEO of Diraq, said up until now it hadn’t been proven that the processors’ lab-based fidelity – meaning accuracy in the quantum computing world – could be translated to a manufacturing setting.
    “Now it’s clear that Diraq’s chips are fully compatible with manufacturing processes that have been around for decades.”
    In a paper published on Sept. 24 in Nature, the teams report that Diraq-designed, imec-fabricated devices achieved over 99% fidelity in operations involving two quantum bits – or ‘qubits’. The result is a crucial step towards Diraq’s quantum processors achieving utility scale, the point at which a quantum computer’s commercial value exceeds its operational cost. This is the key metric set out in the Quantum Benchmarking Initiative, a program run by the United States’ Defense Advanced Research Projects Agency (DARPA) to gauge whether Diraq and 17 other companies can reach this goal.
    Utility-scale quantum computers are expected to be able to solve problems that are out of reach of the most advanced high-performance computers available today. But breaching the utility-scale threshold requires storing and manipulating quantum information in millions of qubits to overcome the errors associated with the fragile quantum state.
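    A back-of-the-envelope calculation shows why both fidelity and error correction matter at scale (illustrative arithmetic only, not figures from the paper): if each two-qubit gate succeeds with probability F, an uncorrected circuit of N such gates succeeds with probability of roughly F^N.

        # Probability that an uncorrected circuit of N two-qubit gates runs without
        # error, assuming each gate independently has fidelity F. Real machines rely
        # on quantum error correction, which is why utility-scale designs call for
        # millions of physical qubits.
        for F in (0.99, 0.999):
            for N in (100, 1_000, 10_000):
                print(f"F = {F:5.3f}, N = {N:6d} gates -> success ~ {F**N:.3g}")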
    “Achieving utility scale in quantum computing hinges on finding a commercially viable way to produce high-fidelity quantum bits at scale,” said Prof. Dzurak.

    “Diraq’s collaboration with imec makes it clear that silicon-based quantum computers can be built by leveraging the mature semiconductor industry, which opens a cost-effective pathway to chips containing millions of qubits while still maximizing fidelity.”
    Silicon is emerging as the front-runner among materials being explored for quantum computers – it can pack millions of qubits onto a single chip and works seamlessly with today’s trillion-dollar microchip industry, making use of the methods that put billions of transistors onto modern computer chips.
    Diraq has previously shown that qubits fabricated in an academic laboratory can achieve high fidelity when performing two-qubit logic gates, the basic building block of future quantum computers. However, it was unclear whether this fidelity could be reproduced in qubits manufactured in a semiconductor foundry environment.
    “Our new findings demonstrate that Diraq’s silicon qubits can be fabricated using processes that are widely used in semiconductor foundries, meeting the threshold for fault tolerance in a way that is cost-effective and industry-compatible,” Prof. Dzurak said.
    Diraq and imec previously showed that qubits manufactured using CMOS processes – the same technology used to build everyday computer chips – could perform single-qubit operations with 99.9% accuracy. But more complex operations using two qubits that are critical to achieving utility scale had not yet been demonstrated.
    “This latest achievement clears the way for the development of a fully fault-tolerant, functional quantum computer that is more cost effective than any other qubit platform,” Prof. Dzurak said.