More stories

  • AI Reveals Milky Way’s Black Hole Spins Near Top Speed

    An international team of astronomers has used artificial intelligence (AI), training a neural network on millions of synthetic simulations, to tease out new cosmic curiosities about black holes, revealing that the one at the center of our Milky Way is spinning at nearly top speed.
    These large ensembles of simulations were generated by throughput computing capabilities provided by the Center for High Throughput Computing (CHTC), a joint entity of the Morgridge Institute for Research and the University of Wisconsin-Madison. The astronomers published their results and methodology today in three papers in the journal Astronomy & Astrophysics.
    High-throughput computing, celebrating its 40th anniversary this year, was pioneered by Wisconsin computer scientist Miron Livny. It’s a novel form of distributed computing that automates computing tasks across a network of thousands of computers, essentially turning a single massive computing challenge into a supercharged fleet of smaller ones. This computing innovation is helping fuel big-data discovery across hundreds of scientific projects worldwide, including the search for cosmic neutrinos, subatomic particles and gravitational waves, as well as efforts to unravel antibiotic resistance.
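    The core idea is easy to sketch in code: express one huge computation as many small, independent jobs and let them run wherever capacity is free. The toy Python example below is only a single-machine analogy of that pattern, not CHTC's actual infrastructure; the function `run_simulation` and its parameters are hypothetical stand-ins for one synthetic-data simulation.

```python
# Toy illustration of the high-throughput pattern: split one large parameter sweep
# into many small, independent jobs and farm them out to available workers.
from concurrent.futures import ProcessPoolExecutor
import math

def run_simulation(params):
    """Stand-in for one expensive synthetic-data simulation; returns a summary number."""
    spin, inclination = params
    return math.sin(spin) * math.cos(inclination)

if __name__ == "__main__":
    # A "massive" workload expressed as many small, independent tasks.
    jobs = [(s / 100, i / 100) for s in range(100) for i in range(100)]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_simulation, jobs, chunksize=100))
    print(f"Completed {len(results)} independent jobs")
```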
    In 2019, the Event Horizon Telescope (EHT) Collaboration released the first image of a supermassive black hole at the center of the galaxy M87. In 2022, they presented the image of the black hole at the center of our Milky Way, Sagittarius A*. However, the data behind the images still contained a wealth of hard-to-crack information. An international team of researchers trained a neural network to extract as much information as possible from the data.
    From a handful to millions
    Previous studies by the EHT Collaboration used only a handful of realistic synthetic data files. Funded by the National Science Foundation (NSF) as part of the Partnership to Advance Throughput Computing (PATh) project, the Madison-based CHTC enabled the astronomers to feed millions of such data files into a so-called Bayesian neural network, which can quantify uncertainties. This allowed the researchers to make a much better comparison between the EHT data and the models.
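    For readers unfamiliar with the term, a Bayesian neural network returns a distribution over predictions rather than a single number, which is what allows uncertainties to be quantified. The sketch below shows one common lightweight approximation (Monte Carlo dropout) in PyTorch; it is a generic illustration with arbitrary layer sizes, not the EHT collaboration's Zingularity framework.

```python
# Minimal Monte Carlo dropout sketch: keep dropout active at prediction time and
# repeat the forward pass to obtain a predictive mean and an uncertainty estimate.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(128, 128), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(128, 1),            # e.g. predicts one physical parameter (hypothetical)
)

x = torch.randn(1, 64)            # stand-in for one preprocessed data vector
model.train()                     # keeps dropout stochastic during inference
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(200)])

print("prediction:", samples.mean().item(), "+/-", samples.std().item())
```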
    Thanks to the neural network, the researchers now suspect that the black hole at the center of the Milky Way is spinning at almost top speed, with its rotation axis pointing toward Earth. In addition, the emission near the black hole is mainly caused by extremely hot electrons in the surrounding accretion disk and not by a so-called jet. Also, the magnetic fields in the accretion disk appear to behave differently from what standard theories of such disks predict.

    “That we are defying the prevailing theory is of course exciting,” says lead researcher Michael Janssen, of Radboud University Nijmegen, the Netherlands. “However, I see our AI and machine learning approach primarily as a first step. Next, we will improve and extend the associated models and simulations.”
    Impressive scaling
    “The ability to scale up to the millions of synthetic data files required to train the model is an impressive achievement,” adds Chi-kwan Chan, an Associate Astronomer of Steward Observatory at the University of Arizona and a longtime PATh collaborator. “It requires dependable workflow automation and effective workload distribution across storage resources and processing capacity.”
    “We are pleased to see EHT leveraging our throughput computing capabilities to bring the power of AI to their science,” says Professor Anthony Gitter, a Morgridge Investigator and a PATh Co-PI. “Like in the case of other science domains, CHTC’s capabilities allowed EHT researchers to assemble the quantity and quality of AI-ready data needed to train effective models that facilitate scientific discovery.”
    The NSF-funded Open Science Pool, operated by PATh, offers computing capacity contributed by more than 80 institutions across the United States. The Event Horizon black hole project performed more than 12 million computing jobs in the past three years.
    “A workload that consists of millions of simulations is a perfect match for our throughput-oriented capabilities that were developed and refined over four decades,” says Livny, director of the CHTC and lead investigator of PATh. “We love to collaborate with researchers who have workloads that challenge the scalability of our services.”
    Scientific papers referenced

    Deep learning inference with the Event Horizon Telescope I. Calibration improvements and a comprehensive synthetic data library. By: M. Janssen et al. In: Astronomy & Astrophysics, 6 June 2025.
    Deep learning inference with the Event Horizon Telescope II. The Zingularity framework for Bayesian artificial neural networks. By: M. Janssen et al. In: Astronomy & Astrophysics, 6 June 2025.
    Deep learning inference with the Event Horizon Telescope III. Zingularity results from the 2017 observations and predictions for future array expansions. By: M. Janssen et al. In: Astronomy & Astrophysics, 6 June 2025.

  • Passive cooling breakthrough could slash data center energy use

    Engineers at the University of California San Diego have developed a new cooling technology that could significantly improve the energy efficiency of data centers and high-powered electronics. The technology features a specially engineered fiber membrane that passively removes heat through evaporation. It offers a promising alternative to traditional cooling systems like fans, heat sinks and liquid pumps. It could also reduce the water use associated with many current cooling systems.
    The advance is detailed in a paper published on June 13 in the journal Joule.
    As artificial intelligence (AI) and cloud computing continue to expand, the demand for data processing — and the heat it generates — is skyrocketing. Currently, cooling accounts for up to 40% of a data center’s total energy use. If trends continue, global energy use for cooling could more than double by 2030.
    The new evaporative cooling technology could help curb that trend. It uses a low-cost fiber membrane with a network of tiny, interconnected pores that draw cooling liquid across its surface using capillary action. As the liquid evaporates, it efficiently removes heat from the electronics underneath — no extra energy required. The membrane sits on top of microchannels above the electronics, pulling in liquid that flows through the channels and efficiently dissipating heat.
    “Compared to traditional air or liquid cooling, evaporation can dissipate higher heat flux while using less energy,” said Renkun Chen, professor in the Department of Mechanical and Aerospace Engineering at the UC San Diego Jacobs School of Engineering, who co-led the project with professors Shengqiang Cai and Abhishek Saha, both from the same department. Mechanical and aerospace engineering Ph.D. student Tianshi Feng and postdoctoral researcher Yu Pei, both members of Chen’s research group, are co-first authors on the study.
    Many applications currently rely on evaporation for cooling. Heat pipes in laptops and evaporators in air conditioners are some examples, explained Chen. But applying it effectively to high-power electronics has been a challenge. Previous attempts using porous membranes — which have high surface areas that are ideal for evaporation — have been unsuccessful because their pores were either so small that they clogged or so large that they triggered unwanted boiling. “Here, we use porous fiber membranes with interconnected pores of the right size,” said Chen. This design achieves efficient evaporation without those downsides.
    When tested across variable heat fluxes, the membrane achieved record-breaking performance. It managed heat fluxes exceeding 800 watts per square centimeter — one of the highest levels ever recorded for this kind of cooling system. It also proved stable over multiple hours of operation.
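    For a sense of scale, a back-of-the-envelope estimate shows how much liquid must evaporate to carry away such a heat flux. The calculation below assumes water as the working fluid, which the article does not specify, so it should be read as an order-of-magnitude sketch only.

```python
# Rough estimate: evaporation rate needed to remove 800 W/cm^2, assuming water.
q_flux = 800 * 1e4          # 800 W/cm^2 expressed in W/m^2
h_fg = 2.26e6               # latent heat of vaporization of water, J/kg (~100 C)

m_dot = q_flux / h_fg       # kg of liquid evaporated per m^2 per second
print(f"required evaporation rate: {m_dot:.1f} kg/(m^2*s)")
# ~1 kg of water is ~1 liter, so this is roughly the hourly volume per square meter:
print(f"equivalent to ~{m_dot * 3600:.0f} liters per m^2 per hour")
```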

    “This success showcases the potential of reimagining materials for entirely new applications,” said Chen. “These fiber membranes were originally designed for filtration, and no one had previously explored their use in evaporation. We recognized that their unique structural characteristics — interconnected pores and just the right pore size — could make them ideal for efficient evaporative cooling. What surprised us was that, with the right mechanical reinforcement, they not only withstood the high heat flux, they performed extremely well under it.”
    While the current results are promising, Chen says the technology is still operating well below its theoretical limit. The team is now working to refine the membrane and optimize performance. Next steps include integrating it into prototypes of cold plates, which are flat components that attach to chips like CPUs and GPUs to dissipate heat. The team is also launching a startup company to commercialize the technology.
    This research was supported by the National Science Foundation (grants CMMI-1762560 and DMR-2005181). The work was performed in part at the San Diego Nanotechnology Infrastructure (SDNI) at UC San Diego, a member of the National Nanotechnology Coordinated Infrastructure, which is supported by the National Science Foundation (grant ECCS-2025752).
    Disclosures: A patent related to this work was filed by the Regents of the University of California (PCT Application No. PCT/US24/46923). The authors declare that they have no other competing interests.

  • This quantum sensor tracks 3D movement without GPS

    In a new study, physicists at the University of Colorado Boulder have used a cloud of atoms chilled to just above absolute zero to simultaneously measure acceleration in three dimensions — a feat that many scientists didn’t think was possible.
    The device, a new type of atom “interferometer,” could one day help people navigate submarines, spacecraft, cars and other vehicles more precisely.
    “Traditional atom interferometers can only measure acceleration in a single dimension, but we live within a three-dimensional world,” said Kendall Mehling, a co-author of the new study and a graduate student in the Department of Physics at CU Boulder. “To know where I’m going, and to know where I’ve been, I need to track my acceleration in all three dimensions.”
    The researchers published their paper, titled “Vector atom accelerometry in an optical lattice,” this month in the journal Science Advances. The team included Mehling; Catie LeDesma, a postdoctoral researcher in physics; and Murray Holland, professor of physics and fellow of JILA, a joint research institute between CU Boulder and the National Institute of Standards and Technology (NIST).
    In 2023, NASA awarded the CU Boulder researchers a $5.5 million grant through the agency’s Quantum Pathways Institute to continue developing the sensor technology.
    The new device is a marvel of engineering: Holland and his colleagues employ six lasers as thin as a human hair to pin a cloud of tens of thousands of rubidium atoms in place. Then, with help from artificial intelligence, they manipulate those lasers in complex patterns — allowing the team to measure the behavior of the atoms as they react to small accelerations, like pressing the gas pedal down in your car.
    Today, most vehicles track acceleration using GPS and traditional, or “classical,” electronic devices known as accelerometers. The team’s quantum device has a long way to go before it can compete with these tools. But the researchers see a lot of promise for navigation technology based on atoms.

    “If you leave a classical sensor out in different environments for years, it will age and decay,” Mehling said. “The springs in your clock will change and warp. Atoms don’t age.”
    Fingerprints of motion
    Interferometers, in some form or another, have been around for centuries — and they’ve been used to do everything from transporting information over optical fibers to searching for gravitational waves, or ripples in the fabric of the universe.
    The general idea involves splitting things apart and bringing them back together, not unlike unzipping, then zipping back up a jacket.
    In laser interferometry, for example, scientists first shine a laser, then split it into two identical beams that travel over two separate paths. Eventually, they bring the beams back together. If the beams have experienced diverging effects along their journeys, such as gravity acting in different ways, they may not mesh perfectly when they recombine. Put differently, the zipper might get stuck. Researchers can make measurements based on how the two beams, once identical, now interfere with each other — hence the name.
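    The “zipper” picture maps onto a simple formula: for two equal-amplitude beams, the recombined intensity depends only on the phase difference accumulated along the two paths. A minimal numerical illustration of this idealized two-beam interference:

```python
# Idealized two-beam interference: recombined intensity vs. accumulated phase difference.
import numpy as np

phase_difference = np.linspace(0, 4 * np.pi, 9)       # radians picked up along the two paths
intensity = 4 * np.cos(phase_difference / 2) ** 2     # |1 + e^{i*dphi}|^2 for unit amplitudes

for dphi, inten in zip(phase_difference, intensity):
    print(f"delta_phi = {dphi:5.2f} rad  ->  relative intensity = {inten:4.2f}")
```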
    In the current study, the team achieved the same feat, but with atoms instead of light.

    Here’s how it works: The device currently fits on a bench about the size of an air hockey table. First, the researchers cool a collection of rubidium atoms down to temperatures just a few billionths of a degree above absolute zero.
    In that frigid realm, the atoms form a mysterious quantum state of matter known as a Bose-Einstein Condensate (BEC). Carl Wieman, then a physicist at CU Boulder, and Eric Cornell of JILA won a Nobel Prize in 2001 for creating the first BEC.
    Next, the team uses laser light to jiggle the atoms, splitting them apart. In this case, that doesn’t mean that groups of atoms are separating. Instead, each individual atom exists in a ghostly quantum state called a superposition, in which it can effectively be in two places at once.
    When the atoms split and separate, those ghosts travel away from each other following two different paths. (In the current experiment, the researchers didn’t actually move the device itself but used lasers to push on the atoms, causing acceleration).
    “Our Bose-Einstein Condensate is a matter-wave pond made of atoms, and we throw stones made of little packets of light into the pond, sending ripples both left and right,” Holland said. “Once the ripples have spread out, we reflect them and bring them back together where they interfere.”
    When the atoms snap back together, they form a unique pattern, just like the two beams of laser light zipping together but more complex. The result resembles a thumbprint on glass.
    “We can decode that fingerprint and extract the acceleration that the atoms experienced,” Holland said.
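    In the simplest textbook scheme (a Mach–Zehnder-type light-pulse interferometer), the decoded phase shift relates to acceleration as Δφ = k_eff · a · T², where k_eff is the effective wave number of the light that splits the atoms and T is the time between pulses. The optical-lattice protocol used here is more elaborate, so the sketch below, with hypothetical numbers, is only meant to show how a phase reading turns into an acceleration.

```python
# Illustrative (textbook Mach-Zehnder) relation between phase shift and acceleration.
# The actual optical-lattice scheme is more involved; all values here are hypothetical.
import math

wavelength = 780e-9                     # rubidium D2 line, meters
k_eff = 2 * (2 * math.pi / wavelength)  # two-photon effective wave number, rad/m
T = 1e-3                                # time between light pulses, seconds (assumed)

measured_phase = 0.05                   # radians, hypothetical interferometer readout
a = measured_phase / (k_eff * T**2)     # inferred acceleration, m/s^2
print(f"inferred acceleration: {a:.2e} m/s^2 ({a / 9.81:.2e} g)")
```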
    Planning with computers
    The group spent almost three years building the device to achieve this feat.
    “For what it is, the current experimental device is incredibly compact. Even though we have 18 laser beams passing through the vacuum system that contains our atom cloud, the entire experiment is small enough that we could deploy in the field one day,” LeDesma said.
    One of the secrets to that success comes down to an artificial intelligence technique called machine learning. Holland explained that splitting and recombining the rubidium atoms requires adjusting the lasers through a complex, multi-step process. To streamline the process, the group trained a computer program that can plan out those moves in advance.
    So far, the device can only measure accelerations several thousand times smaller than the force of Earth’s gravity. Currently available technologies can do a lot better.
    But the group is continuing to improve its engineering and hopes to increase the performance of its quantum device many times over in the coming years. Still, the technology is a testament to just how useful atoms can be.
    “We’re not exactly sure of all the possible ramifications of this research, because it opens up a door,” Holland said.

  • Atom-thin tech replaces silicon in the world’s first 2D computer

    UNIVERSITY PARK, Pa. — Silicon is king in the semiconductor technology that underpins smartphones, computers, electric vehicles and more, but its crown may be slipping, according to a team led by researchers at Penn State. In a world first, they used two-dimensional (2D) materials, which are only an atom thick and, unlike silicon, retain their properties at that scale, to develop a computer capable of simple operations.
    The development, published today (June 11) in Nature, represents a major leap toward the realization of thinner, faster and more energy-efficient electronics, the researchers said. They created a complementary metal-oxide semiconductor (CMOS) computer — technology at the heart of nearly every modern electronic device — without relying on silicon. Instead, they used two different 2D materials to develop both types of transistors needed to control the electric current flow in CMOS computers: molybdenum disulfide for n-type transistors and tungsten diselenide for p-type transistors.
    “Silicon has driven remarkable advances in electronics for decades by enabling continuous miniaturization of field-effect transistors (FETs),” said Saptarshi Das, the Ackley Professor of Engineering and professor of engineering science and mechanics at Penn State, who led the research. FETs control current flow using an electric field, which is produced when a voltage is applied. “However, as silicon devices shrink, their performance begins to degrade. Two-dimensional materials, by contrast, maintain their exceptional electronic properties at atomic thickness, offering a promising path forward.”
    Das explained that CMOS technology requires both n-type and p-type semiconductors working together to achieve high performance at low power consumption — a key challenge that has stymied efforts to move beyond silicon. Although previous studies demonstrated small circuits based on 2D materials, scaling to complex, functional computers had remained elusive, Das said.
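    The complementary principle Das describes can be captured with a toy switch-level model: the p-type device pulls the output high when its input is low, the n-type device pulls it low when the input is high, and ideally only one of them conducts at a time, which is why static CMOS draws so little power. The Python sketch below is a schematic illustration, not a device-level simulation of the fabricated transistors.

```python
# Switch-level toy model of a CMOS inverter and a NAND gate built from
# complementary n-type (e.g. MoS2) and p-type (e.g. WSe2) transistors.

def cmos_inverter(a: int) -> int:
    pmos_on = (a == 0)        # p-type conducts when its gate is low
    nmos_on = (a == 1)        # n-type conducts when its gate is high
    return 1 if pmos_on and not nmos_on else 0

def cmos_nand(a: int, b: int) -> int:
    # Two p-type devices in parallel pull the output up; two n-type devices in series pull it down.
    pull_up = (a == 0) or (b == 0)
    pull_down = (a == 1) and (b == 1)
    return 1 if pull_up and not pull_down else 0

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  NOT(a)={cmos_inverter(a)}  NAND(a,b)={cmos_nand(a, b)}")
```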
    “That’s the key advancement of our work,” Das said. “We have demonstrated, for the first time, a CMOS computer built entirely from 2D materials, combining large area grown molybdenum disulfide and tungsten diselenide transistors.”
    The team used metal-organic chemical vapor deposition (MOCVD) — a fabrication process that involves vaporizing ingredients, forcing a chemical reaction and depositing the products onto a substrate — to grow large sheets of molybdenum disulfide and tungsten diselenide and fabricate over 1,000 of each type of transistor. By carefully tuning the device fabrication and post-processing steps, they were able to adjust the threshold voltages of both n- and p-type transistors, enabling the construction of fully functional CMOS logic circuits.
    “Our 2D CMOS computer operates at low-supply voltages with minimal power consumption and can perform simple logic operations at frequencies up to 25 kilohertz,” said first author Subir Ghosh, a doctoral student pursuing a degree in engineering science and mechanics under Das’s mentorship.

    Ghosh noted that the operating frequency is low compared to conventional silicon CMOS circuits, but their computer — known as a one instruction set computer — can still perform simple logic operations.
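    A one instruction set computer gets by with a single instruction type and still achieves general logic. The article does not say which instruction the Penn State machine implements, so the sketch below uses the classic “subleq” (subtract and branch if less than or equal to zero) variant purely as an illustration of the concept.

```python
# Toy subleq interpreter: the single instruction "mem[b] -= mem[a]; jump to c if result <= 0".
# Illustrative only; the instruction used by the 2D-material computer is not specified here.

def run_subleq(mem, pc=0, max_steps=1000):
    for _ in range(max_steps):
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        if a < 0:                      # convention: a negative address halts the machine
            return mem
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Program: clear cell 9 (it subtracts cell 9 from itself), then halt.
memory = [9, 9, 3, -1, 0, 0, 0, 0, 0, 42]
print(run_subleq(memory)[9])           # -> 0
```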
    “We also developed a computational model, calibrated using experimental data and incorporating variations between devices, to project the performance of our 2D CMOS computer and benchmark it against state-of-the-art silicon technology,” Ghosh said. “Although there remains scope for further optimization, this work marks a significant milestone in harnessing 2D materials to advance the field of electronics.”
    Das agreed, explaining that more work is needed to further develop the 2D CMOS computer approach for broad use, but also emphasizing that the field is moving quickly when compared to the development of silicon technology.
    “Silicon technology has been under development for about 80 years, but research into 2D materials is relatively recent, only really arising around 2010,” Das said. “We expect that the development of 2D material computers is going to be a gradual process, too, but this is a leap forward compared to the trajectory of silicon.”
    Ghosh and Das credited the 2D Crystal Consortium Materials Innovation Platform (2DCC-MIP) at Penn State with providing the facilities and tools needed to demonstrate their approach. Das is also affiliated with the Materials Research Institute, the 2DCC-MIP and the Departments of Electrical Engineering and of Materials Science and Engineering, all at Penn State. Other contributors from the Penn State Department of Engineering Science and Mechanics include graduate students Yikai Zheng, Najam U. Sakib, Harikrishnan Ravichandran, Yongwen Sun, Andrew L. Pannone, Muhtasim Ul Karim Sadaf and Samriddha Ray; and Yang Yang, assistant professor. Yang is also affiliated with the Materials Research Institute and the Ken and Mary Alice Lindquist Department of Nuclear Engineering at Penn State. Joan Redwing, director of the 2DCC-MIP and distinguished professor of materials science and engineering and of electrical engineering, and Chen Chen, assistant research professor, also co-authored the paper. Other contributors include Musaib Rafiq and Subham Sahay, Indian Institute of Technology; and Mrinmoy Goswami, Jadavpur University.
    The U.S. National Science Foundation, the Army Research Office and the Office of Naval Research supported this work in part.

  • Scientists just took a big step toward the quantum internet

    A Danish-German research collaboration with participation from the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) aims to develop new quantum light sources and technology for scalable quantum networks based on the rare-earth element erbium. The project EQUAL (Erbium-based silicon quantum light sources) is funded by the Innovation Fund Denmark with 40 million Danish crowns (about 5.3 million euros). It started in May 2025 and will run for five years.
    Quantum technology enables unbreakable encryption and entirely new types of computers, which in the future are expected to be connected through optical quantum networks. However, this requires quantum light sources that do not exist today. The new project aims to change that.
    “It is a really difficult task, but we have also set a really strong team. One of the toughest goals is to integrate quantum light sources with quantum memories. This seemed unrealistic just a few years ago, but now we see a path forward,” says the project coordinator Søren Stobbe, professor at the Technical University of Denmark (DTU).
    The technological vision is based on combining nanophotonic chips from DTU with unique technologies in materials, nanoelectromechanics, nanolithography, and quantum systems. There are many different types of quantum light sources today, but either they do not work with quantum memories, or they are incompatible with optical fibers.
    There is actually only one viable option: the element erbium. However, erbium interacts too weakly with light. The interaction needs to be significantly enhanced, and this is now possible thanks to new nanophotonic technology developed at DTU. But the project requires not only advanced nanophotonics, but also quantum technology, integrated photonics with extremely low power consumption, and new nanofabrication methods – all of which hold great potential.
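    The enhancement at stake is usually described by the Purcell factor, F_P ≈ (3/4π²)(λ/n)³ Q/V: a small mode volume V and a high quality factor Q speed up the emitter’s radiative decay. The numbers in the sketch below are hypothetical and chosen only to show why nanophotonic cavities help a weakly emitting species like erbium.

```python
# Rough Purcell-factor estimate for an erbium emitter in a nanophotonic cavity.
# All parameter values are hypothetical, chosen only to illustrate the scaling.
import math

wavelength = 1.536e-6               # erbium telecom-band emission, meters
n = 3.48                            # refractive index of silicon near 1.5 um
Q = 5e4                             # cavity quality factor (assumed)
V = 0.5 * (wavelength / n) ** 3     # mode volume: half a cubic wavelength in the material (assumed)

F_p = (3 / (4 * math.pi ** 2)) * (wavelength / n) ** 3 * Q / V
print(f"Purcell enhancement factor: ~{F_p:.0f}x faster emission")
```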
    HZDR will help develop new sources of quantum light using silicon, the very same material found in everyday electronics. These light sources will work at the same wavelengths used in fiber-optic communication, making them ideal for future quantum technologies like secure communication and powerful computing. “We intend to use advanced ion beam techniques to implant erbium atoms into tiny silicon structures and study how using ultra-pure silicon can improve their performance. This research will lay the foundation for building quantum devices that can be integrated into today’s technology,” explains Dr. Yonder Berencén, the project’s principal investigator from the Institute of Ion Beam Physics and Materials Research at HZDR.
    The EQUAL team has access to further technological input from partnering institutions: quantum networks from Humboldt University in Berlin, nanotechnology from Beamfox Technologies ApS, and integrated photonics from Lizard Photonics ApS.

  • AI sees through chaos—and reaches the edge of what physics allows

    No image is infinitely sharp. For 150 years, it has been known that no matter how ingeniously you build a microscope or a camera, there are always fundamental resolution limits that cannot be exceeded in principle. The position of a particle can never be measured with infinite precision; a certain amount of blurring is unavoidable. This limit does not result from technical weaknesses, but from the physical properties of light and the transmission of information itself.
    Researchers at TU Wien (Vienna), the University of Glasgow and the University of Grenoble therefore posed the question: where is the absolute limit of precision that is possible with optical methods, and how can this limit be approached as closely as possible? And indeed, the international team succeeded in pinning down the ultimate limit on the theoretically achievable precision and in developing AI algorithms for neural networks that come very close to this limit after appropriate training. This strategy is now set to be employed in imaging procedures, such as those used in medicine.
    An absolute limit to precision
    “Let’s imagine we are looking at a small object behind an irregular, cloudy pane of glass,” says Prof Stefan Rotter from the Institute of Theoretical Physics at TU Wien. “We don’t just see an image of the object, but a complicated light pattern consisting of many lighter and darker patches of light. The question now is: how precisely can we estimate where the object actually is based on this image — and where is the absolute limit of this precision?”
    Such scenarios are important in biophysics or medical imaging, for example. When light is scattered by biological tissue, information about deeper tissue structures appears to be lost. But how much of this information can be recovered in principle? The question is not merely technical; physics itself sets fundamental limits here.
    The answer to this question is provided by a theoretical measure: the so-called Fisher information. This measure describes how much information an optical signal contains about an unknown parameter — such as the object position. If the Fisher information is low, precise determination is no longer possible, no matter how sophisticatedly the signal is analysed. Based on this Fisher information concept, the team was able to calculate an upper limit for the theoretically achievable precision in different experimental scenarios.
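    Concretely, the Cramér–Rao bound states that no unbiased estimator can have a variance below 1/F, where F is the Fisher information. The toy calculation below treats the simplest possible case, photons blurred by a Gaussian point-spread function of width σ, for which F = 1/σ² per photon and N photons give a precision limit of σ/√N; the real analysis for multiply scattered light is far more involved, and the numbers here are assumptions.

```python
# Toy Cramér-Rao bound: localizing an object from N photons blurred by a Gaussian PSF.
# For a Gaussian likelihood of width sigma, the Fisher information per photon is 1/sigma^2.
import math

sigma = 2.0e-6        # blur width of the optical system, meters (assumed)
N = 10_000            # number of detected photons (assumed)

fisher_per_photon = 1.0 / sigma ** 2
bound = 1.0 / math.sqrt(N * fisher_per_photon)   # best possible std. dev. of the position estimate
print(f"precision limit: {bound * 1e9:.1f} nm for {N} photons")
```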
    Neural networks learn from chaotic light patterns
    While the team at TU Wien was providing theoretical input, a corresponding experiment was designed and implemented by Dorian Bouchet from the University of Grenoble (France) together with Ilya Starshynov and Daniele Faccio from the University of Glasgow (UK). In this experiment, a laser beam was directed at a small, reflective object located behind a turbid liquid, so that the recorded images only showed highly distorted light patterns. The measurement conditions varied depending on the turbidity — and with them the difficulty of extracting precise position information from the signal.

    “To the human eye, these images look like random patterns,” says Maximilian Weimar (TU Wien), one of the authors of the study. “But if we feed many such images — each with a known object position — into a neural network, the network can learn which patterns are associated with which positions.” After sufficient training, the network was able to determine the object position very precisely, even with new, unknown patterns.
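    The training recipe itself is conceptually simple: collect many distorted patterns with known object positions, then fit a regressor that maps pattern to position. The Python sketch below fakes “speckle-like” patterns with a fixed random transformation and fits a small scikit-learn network; it is a schematic stand-in for the actual experiment and network architecture, with all sizes chosen arbitrarily.

```python
# Schematic stand-in for the experiment: learn object position from scrambled light patterns.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_pixels = 256

# Fixed random "scattering medium": each pixel responds to position x via a random phase.
k = rng.uniform(1.0, 20.0, n_pixels)
phi = rng.uniform(0.0, 2 * np.pi, n_pixels)

def speckle_pattern(x):
    return np.cos(k * x + phi) ** 2 + 0.05 * rng.normal(size=n_pixels)

positions = rng.uniform(-1.0, 1.0, 3000)
patterns = np.array([speckle_pattern(x) for x in positions])

model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
model.fit(patterns[:2500], positions[:2500])

test_error = np.abs(model.predict(patterns[2500:]) - positions[2500:]).mean()
print(f"mean position error on unseen patterns: {test_error:.3f}")
```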
    Almost at the physical limit
    Particularly noteworthy: the precision of the prediction was only minimally worse than the theoretically achievable maximum, calculated using Fisher information. “This means that our AI-supported algorithm is not only effective, but almost optimal,” says Stefan Rotter. “It achieves almost exactly the precision that is permitted by the laws of physics.”
    This realisation has far-reaching consequences: With the help of intelligent algorithms, optical measurement methods could be significantly improved in a wide range of areas — from medical diagnostics to materials research and quantum technology. In future projects, the research team wants to work with partners from applied physics and medicine to investigate how these AI-supported methods can be used in specific systems.

  • Sharper than lightning: Oxford’s one-in-6.7-million quantum breakthrough

    Physicists at the University of Oxford have set a new global benchmark for the accuracy of controlling a single quantum bit, achieving the lowest-ever error rate for a quantum logic operation — just 0.000015%, or one error in 6.7 million operations. This record-breaking result represents nearly an order of magnitude improvement over the previous benchmark, set by the same research group a decade ago.
    To put the result in perspective: a person is more likely to be struck by lightning in a given year (a 1 in 1.2 million chance) than one of Oxford’s quantum logic gates is to make a mistake.
    The findings, published in Physical Review Letters, are a major advance towards having robust and useful quantum computers.
    “As far as we are aware, this is the most accurate qubit operation ever recorded anywhere in the world,” said Professor David Lucas, co-author on the paper, from the University of Oxford’s Department of Physics. “It is an important step toward building practical quantum computers that can tackle real-world problems.”
    To perform useful calculations on a quantum computer, millions of operations will need to be run across many qubits. This means that if the error rate is too high, the final result of the calculation will be meaningless. Although error correction can be used to fix mistakes, this comes at the cost of requiring many more qubits. By reducing the error, the new method reduces the number of qubits required and consequently the cost and size of the quantum computer itself.
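    To see why the single-gate error rate matters so much, consider the probability that a long computation finishes without a single error: with error probability p per operation, N operations all succeed with probability (1 − p)^N. The comparison below uses the published error rates; the million-operation circuit size is a hypothetical example.

```python
# Probability that a run of N single-qubit gates completes without any error.
p_old = 1 / 1_000_000       # previous record error rate (2014)
p_new = 0.000015 / 100      # new record: 0.000015%, roughly 1 error in 6.7 million operations

N = 1_000_000               # hypothetical number of operations in one computation
for label, p in [("2014 record", p_old), ("new record", p_new)]:
    success = (1 - p) ** N
    print(f"{label}: error rate {p:.2e}, P(no error in {N:,} ops) = {success:.1%}")
```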
    Co-lead author Molly Smith (Graduate Student, Department of Physics, University of Oxford), said: “By drastically reducing the chance of error, this work significantly reduces the infrastructure required for error correction, opening the way for future quantum computers to be smaller, faster, and more efficient. Precise control of qubits will also be useful for other quantum technologies such as clocks and quantum sensors.”
    This unprecedented level of precision was achieved using a trapped calcium ion as the qubit (quantum bit). Trapped ions are a natural choice for storing quantum information due to their long lifetime and robustness. Unlike the conventional approach, which uses lasers, the Oxford team controlled the quantum state of the calcium ion using electronic (microwave) signals.

    This method offers greater stability than laser control and also has other benefits for building a practical quantum computer. For instance, electronic control is much cheaper and more robust than lasers, and easier to integrate in ion trapping chips. Furthermore, the experiment was conducted at room temperature and without magnetic shielding, thus simplifying the technical requirements for a working quantum computer.
    The previous best single-qubit error rate, achieved by the same Oxford team in 2014, was 1 in 1 million. The group’s expertise led to the launch of the spinout company Oxford Ionics in 2019, which has become an established leader in high-performance trapped-ion qubit platforms.
    Whilst this record-breaking result marks a major milestone, the research team caution that it is part of a larger challenge. Quantum computing requires both single- and two-qubit gates to function together. Currently, two-qubit gates still have significantly higher error rates — around 1 in 2000 in the best demonstrations to date — so reducing these will be crucial to building fully fault-tolerant quantum machines.
    The experiments were carried out at the University of Oxford’s Department of Physics by Molly Smith, Aaron Leu, Dr Mario Gely and Professor David Lucas, together with a visiting researcher, Dr Koichiro Miyanishi, from the University of Osaka’s Centre for Quantum Information and Quantum Biology.
    The Oxford scientists are part of the UK Quantum Computing and Simulation (QCS) Hub, part of the ongoing UK National Quantum Technologies Programme.

  • Photonic quantum chips are making AI smarter and greener

    One of the hottest current research topics is the combination of two recent technological breakthroughs: machine learning and quantum computing. An experimental study shows that even small-scale quantum computers can boost the performance of machine learning algorithms. This was demonstrated on a photonic quantum processor by an international team of researchers at the University of Vienna. The work, recently published in Nature Photonics, points to promising new applications for optical quantum computers.
    Recent scientific breakthroughs have reshaped the development of future technologies. On the one hand, machine learning and artificial intelligence have already revolutionized our lives from everyday tasks to scientific research. On the other hand, quantum computing has emerged as a new paradigm of computation.
    From the combination of these two promising fields, a new research line has opened up: quantum machine learning. This field aims to find potential enhancements in the speed, efficiency or accuracy of algorithms when they run on quantum platforms. It remains an open challenge, however, to achieve such an advantage on current quantum computers.
    This is where an international team of researchers took the next step, designing a novel experiment carried out by scientists from the University of Vienna. The setup features a quantum photonic circuit built at the Politecnico di Milano (Italy), which runs a machine learning algorithm first proposed by researchers working at Quantinuum (United Kingdom). The goal was to classify data points using a photonic quantum computer and to single out the contribution of quantum effects, in order to understand the advantage with respect to classical computers. The experiment showed that even small quantum processors can perform better than conventional algorithms. “We found that for specific tasks our algorithm commits fewer errors than its classical counterpart,” explains Philip Walther from the University of Vienna, lead of the project. “This implies that existing quantum computers can show good performance without necessarily going beyond state-of-the-art technology,” adds Zhenghao Yin, first author of the publication in Nature Photonics.
    Another interesting aspect of the new research is that photonic platforms can consume less energy than standard computers. “This could prove crucial in the future, given that machine learning algorithms are becoming infeasible due to their soaring energy demands,” emphasizes co-author Iris Agresti.
    The researchers’ result has an impact both on quantum computation, since it identifies tasks that benefit from quantum effects, and on standard computing. Indeed, new algorithms inspired by quantum architectures could be designed to reach better performance and reduce energy consumption.