More stories

  • Atom-thin tech replaces silicon in the world’s first 2D computer

    UNIVERSITY PARK, Pa. — Silicon is king in the semiconductor technology that underpins smartphones, computers, electric vehicles and more, but its crown may be slipping, according to a team led by researchers at Penn State. In a world first, they used two-dimensional (2D) materials, which are only an atom thick and, unlike silicon, retain their properties at that scale, to develop a computer capable of simple operations.
    The development, published today (June 11) in Nature, represents a major leap toward the realization of thinner, faster and more energy-efficient electronics, the researchers said. They created a complementary metal-oxide semiconductor (CMOS) computer — technology at the heart of nearly every modern electronic device — without relying on silicon. Instead, they used two different 2D materials to develop both types of transistors needed to control the electric current flow in CMOS computers: molybdenum disulfide for n-type transistors and tungsten diselenide for p-type transistors.
    “Silicon has driven remarkable advances in electronics for decades by enabling continuous miniaturization of field-effect transistors (FETs),” said Saptarshi Das, the Ackley Professor of Engineering and professor of engineering science and mechanics at Penn State, who led the research. FETs control current flow using an electric field, which is produced when a voltage is applied. “However, as silicon devices shrink, their performance begins to degrade. Two-dimensional materials, by contrast, maintain their exceptional electronic properties at atomic thickness, offering a promising path forward.”
    Das explained that CMOS technology requires both n-type and p-type semiconductors working together to achieve high performance at low power consumption — a key challenge that has stymied efforts to move beyond silicon. Although previous studies demonstrated small circuits based on 2D materials, scaling to complex, functional computers had remained elusive, Das said.
    “That’s the key advancement of our work,” Das said. “We have demonstrated, for the first time, a CMOS computer built entirely from 2D materials, combining large area grown molybdenum disulfide and tungsten diselenide transistors.”
    The team used metal-organic chemical vapor deposition (MOCVD) — a fabrication process that involves vaporizing ingredients, forcing a chemical reaction and depositing the products onto a substrate — to grow large sheets of molybdenum disulfide and tungsten diselenide and fabricate over 1,000 of each type of transistor. By carefully tuning the device fabrication and post-processing steps, they were able to adjust the threshold voltages of both n- and p-type transistors, enabling the construction of fully functional CMOS logic circuits.
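    The complementary arrangement can be illustrated with a simple behavioral model. The sketch below illustrates CMOS logic in general, not the team's circuit design: each transistor is treated as an ideal switch, with n-type devices conducting when the gate is high and p-type devices conducting when it is low, so exactly one network connects the output at any time and almost no static current flows.

        # Behavioral sketch of CMOS logic built from complementary switches.
        # Idealized illustration only: n-FETs (MoS2-like) conduct when the gate
        # is logic-high, p-FETs (WSe2-like) conduct when it is logic-low.

        def n_fet(gate):
            return gate == 1      # ideal n-type switch: on when gate is high

        def p_fet(gate):
            return gate == 0      # ideal p-type switch: on when gate is low

        def inverter(a):
            pull_up, pull_down = p_fet(a), n_fet(a)
            assert pull_up != pull_down   # never both on: negligible static power
            return 1 if pull_up else 0

        def nand(a, b):
            # Pull-up: two p-FETs in parallel; pull-down: two n-FETs in series.
            pull_up = p_fet(a) or p_fet(b)
            pull_down = n_fet(a) and n_fet(b)
            assert pull_up != pull_down
            return 1 if pull_up else 0

        for a in (0, 1):
            for b in (0, 1):
                print(a, b, "NOT a =", inverter(a), "a NAND b =", nand(a, b))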
    “Our 2D CMOS computer operates at low-supply voltages with minimal power consumption and can perform simple logic operations at frequencies up to 25 kilohertz,” said first author Subir Ghosh, a doctoral student pursuing a degree in engineering science and mechanics under Das’s mentorship.

    Ghosh noted that the operating frequency is low compared to conventional silicon CMOS circuits, but their computer — known as a one instruction set computer — can still perform simple logic operations.
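    A one instruction set computer performs every computation by repeating a single operation. The article does not say which instruction the Penn State machine uses, so the sketch below illustrates the concept with SUBLEQ (subtract and branch if the result is less than or equal to zero), a textbook example of such an architecture.

        # Minimal SUBLEQ emulator: a textbook one instruction set computer (OISC).
        # Each instruction is three addresses (a, b, c):
        #   mem[b] -= mem[a]; if the result is <= 0, jump to c, else continue.

        def run_subleq(mem, pc=0, max_steps=1000):
            for _ in range(max_steps):
                a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
                if a < 0:                     # conventional halt marker
                    break
                mem[b] -= mem[a]
                pc = c if mem[b] <= 0 else pc + 3
            return mem

        # Toy program: clear memory cell 9 by subtracting it from itself, then halt.
        program = [9, 9, 3, -1, 0, 0, 0, 0, 0, 42]
        print(run_subleq(program)[9])         # prints 0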
    “We also developed a computational model, calibrated using experimental data and incorporating variations between devices, to project the performance of our 2D CMOS computer and benchmark it against state-of-the-art silicon technology,” Ghosh said. “Although there remains scope for further optimization, this work marks a significant milestone in harnessing 2D materials to advance the field of electronics.”
    Das agreed, explaining that more work is needed to further develop the 2D CMOS computer approach for broad use, but also emphasizing that the field is moving quickly when compared to the development of silicon technology.
    “Silicon technology has been under development for about 80 years, but research into 2D materials is relatively recent, only really arising around 2010,” Das said. “We expect that the development of 2D material computers is going to be a gradual process, too, but this is a leap forward compared to the trajectory of silicon.”
    Ghosh and Das credited the 2D Crystal Consortium Materials Innovation Platform (2DCC-MIP) at Penn State with providing the facilities and tools needed to demonstrate their approach. Das is also affiliated with the Materials Research Institute, the 2DCC-MIP and the Departments of Electrical Engineering and of Materials Science and Engineering, all at Penn State. Other contributors from the Penn State Department of Engineering Science and Mechanics include graduate students Yikai Zheng, Najam U. Sakib, Harikrishnan Ravichandran, Yongwen Sun, Andrew L. Pannone, Muhtasim Ul Karim Sadaf and Samriddha Ray; and Yang Yang, assistant professor. Yang is also affiliated with the Materials Research Institute and the Ken and Mary Alice Lindquist Department of Nuclear Engineering at Penn State. Joan Redwing, director of the 2DCC-MIP and distinguished professor of materials science and engineering and of electrical engineering, and Chen Chen, assistant research professor, also co-authored the paper. Other contributors include Musaib Rafiq and Subham Sahay, Indian Institute of Technology; and Mrinmoy Goswami, Jadavpur University.
    The U.S. National Science Foundation, the Army Research Office and the Office of Naval Research supported this work in part.

  • Scientists just took a big step toward the quantum internet

    A Danish-German research collaboration with participation from the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) aims to develop new quantum light sources and technology for scalable quantum networks based on the rare-earth element erbium. The project EQUAL (Erbium-based silicon quantum light sources) is funded by the Innovation Fund Denmark with 40 million Danish kroner (about 5.3 million euros). It started in May 2025 and will run for five years.
    Quantum technology enables unbreakable encryption and entirely new types of computers, which in the future are expected to be connected through optical quantum networks. However, this requires quantum light sources that do not exist today. The new project aims to change that.
    “It is a really difficult task, but we have also assembled a really strong team. One of the toughest goals is to integrate quantum light sources with quantum memories. This seemed unrealistic just a few years ago, but now we see a path forward,” says the project coordinator Søren Stobbe, professor at the Technical University of Denmark (DTU).
    The technological vision is based on combining nanophotonic chips from DTU with unique technologies in materials, nanoelectromechanics, nanolithography, and quantum systems. There are many different types of quantum light sources today, but either they do not work with quantum memories, or they are incompatible with optical fibers.
    There is actually only one viable option: the element erbium. However, erbium interacts too weakly with light. The interaction needs to be significantly enhanced, and this is now possible thanks to new nanophotonic technology developed at DTU. But the project requires not only advanced nanophotonics, but also quantum technology, integrated photonics with extremely low power consumption, and new nanofabrication methods – all of which hold great potential.
    HZDR will help develop new sources of quantum light using silicon, the very same material found in everyday electronics. These light sources will work at the same wavelengths used in fiber-optic communication, making them ideal for future quantum technologies like secure communication and powerful computing. “We intend to use advanced ion beam techniques to implant erbium atoms into tiny silicon structures and study how using ultra-pure silicon can improve their performance. This research will lay the foundation for building quantum devices that can be integrated into today’s technology,” explains Dr. Yonder Berencén, the project’s principal investigator from the Institute of Ion Beam Physics and Materials Research at HZDR.
    The EQUAL team has access to further technological input from partnering institutions: quantum networks from Humboldt University in Berlin, nanotechnology from Beamfox Technologies ApS, and integrated photonics from Lizard Photonics ApS.

  • AI sees through chaos—and reaches the edge of what physics allows

    No image is infinitely sharp. For 150 years, it has been known that no matter how ingeniously you build a microscope or a camera, there are always fundamental resolution limits that cannot be exceeded in principle. The position of a particle can never be measured with infinite precision; a certain amount of blurring is unavoidable. This limit does not result from technical weaknesses, but from the physical properties of light and the transmission of information itself.
    Researchers at TU Wien (Vienna), the University of Glasgow and the University of Grenoble therefore posed the question: where is the absolute limit of precision that is possible with optical methods, and how closely can this limit be approached? The international team succeeded in specifying the ultimate limit on the theoretically achievable precision and in developing AI algorithms based on neural networks that, after appropriate training, come very close to this limit. This strategy is now set to be employed in imaging procedures, such as those used in medicine.
    An absolute limit to precision
    “Let’s imagine we are looking at a small object behind an irregular, cloudy pane of glass,” says Prof Stefan Rotter from the Institute of Theoretical Physics at TU Wien. “We don’t just see an image of the object, but a complicated light pattern consisting of many lighter and darker patches of light. The question now is: how precisely can we estimate where the object actually is based on this image — and where is the absolute limit of this precision?”
    Such scenarios are important in biophysics and medical imaging, for example. When light is scattered by biological tissue, it seems to lose information about deeper tissue structures. But how much of this information can be recovered in principle? This question is not merely technical: physics itself sets fundamental limits here.
    The answer to this question is provided by a theoretical measure: the so-called Fisher information. This measure describes how much information an optical signal contains about an unknown parameter — such as the object position. If the Fisher information is low, precise determination is no longer possible, no matter how sophisticatedly the signal is analysed. Based on this Fisher information concept, the team was able to calculate an upper limit for the theoretically achievable precision in different experimental scenarios.
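    The underlying statement is the Cramér–Rao bound, sketched here in LaTeX notation for an unbiased estimator \hat{x} of the object position x given the measured light pattern y:

        I(x) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial x} \ln p(y \mid x)\right)^{2}\right],
        \qquad
        \operatorname{Var}(\hat{x}) \;\ge\; \frac{1}{I(x)}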
    Neural networks learn from chaotic light patterns
    While the team at TU Wien was providing theoretical input, a corresponding experiment was designed and implemented by Dorian Bouchet from the University of Grenoble (F) together with Ilya Starshynov and Daniele Faccio from the University of Glasgow (UK). In this experiment, a laser beam was directed at a small, reflective object located behind a turbid liquid, so that the recorded images only showed highly distorted light patterns. The measurement conditions varied depending on the turbidity — and therefore also the difficulty of obtaining precise position information from the signal.

    “To the human eye, these images look like random patterns,” says Maximilian Weimar (TU Wien), one of the authors of the study. “But if we feed many such images — each with a known object position — into a neural network, the network can learn which patterns are associated with which positions.” After sufficient training, the network was able to determine the object position very precisely, even with new, unknown patterns.
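    A minimal sketch of that training idea on synthetic data is shown below. It is not the team's experimental pipeline or network architecture: a fixed random matrix stands in for the turbid medium, and a small off-the-shelf regressor learns to map scrambled intensity patterns back to positions.

        # Sketch: learn object positions from scrambled intensity patterns
        # (synthetic stand-in for the experiment, not the actual pipeline).
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        n_pixels, n_samples = 64, 2000
        scatter = rng.normal(size=(n_pixels, n_pixels))   # plays the turbid medium

        def pattern(pos):
            source = np.zeros(n_pixels)
            source[pos] = 1.0                              # point-like object
            field = scatter @ source
            return field**2 + 0.05 * rng.normal(size=n_pixels)   # noisy intensity

        positions = rng.integers(0, n_pixels, size=n_samples)
        images = np.array([pattern(p) for p in positions])

        model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500)
        model.fit(images[:1500], positions[:1500])
        err = np.abs(model.predict(images[1500:]) - positions[1500:]).mean()
        print("mean position error (pixels):", err)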
    Almost at the physical limit
    Particularly noteworthy: the precision of the prediction was only minimally worse than the theoretically achievable maximum, calculated using Fisher information. “This means that our AI-supported algorithm is not only effective, but almost optimal,” says Stefan Rotter. “It achieves almost exactly the precision that is permitted by the laws of physics.”
    This realisation has far-reaching consequences: with the help of intelligent algorithms, optical measurement methods could be significantly improved in a wide range of areas, from medical diagnostics to materials research and quantum technology. In future projects, the research team wants to work with partners from applied physics and medicine to investigate how these AI-supported methods can be used in specific systems.

  • Sharper than lightning: Oxford’s one-in-6.7-million quantum breakthrough

    Physicists at the University of Oxford have set a new global benchmark for the accuracy of controlling a single quantum bit, achieving the lowest-ever error rate for a quantum logic operation — just 0.000015%, or one error in 6.7 million operations. This record-breaking result represents nearly an order of magnitude improvement over the previous benchmark, set by the same research group a decade ago.
    To put the result in perspective: a person is more likely to be struck by lightning in a given year (a 1 in 1.2 million chance) than one of Oxford’s quantum logic gates is to make an error.
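    A quick arithmetic check of the quoted figures (a sketch, not taken from the paper):

        # Relate the quoted error rate to the "1 in 6.7 million" and lightning figures.
        error_rate = 0.000015 / 100          # 0.000015 percent as a fraction
        print(1 / error_rate)                # ~6.7 million operations per error
        print((1 / 1.2e6) / error_rate)      # lightning risk is ~5.6x the gate error rate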
    The findings, published in Physical Review Letters, are a major advance towards having robust and useful quantum computers.
    “As far as we are aware, this is the most accurate qubit operation ever recorded anywhere in the world,” said Professor David Lucas, co-author on the paper, from the University of Oxford’s Department of Physics. “It is an important step toward building practical quantum computers that can tackle real-world problems.”
    To perform useful calculations on a quantum computer, millions of operations will need to be run across many qubits. This means that if the error rate is too high, the final result of the calculation will be meaningless. Although error correction can be used to fix mistakes, this comes at the cost of requiring many more qubits. By reducing the error, the new method reduces the number of qubits required and consequently the cost and size of the quantum computer itself.
    Co-lead author Molly Smith (Graduate Student, Department of Physics, University of Oxford), said: “By drastically reducing the chance of error, this work significantly reduces the infrastructure required for error correction, opening the way for future quantum computers to be smaller, faster, and more efficient. Precise control of qubits will also be useful for other quantum technologies such as clocks and quantum sensors.”
    This unprecedented level of precision was achieved using a trapped calcium ion as the qubit (quantum bit). Trapped ions are a natural choice for storing quantum information because of their long lifetimes and robustness. Unlike the conventional approach, which uses lasers, the Oxford team controlled the quantum state of the calcium ion using electronic (microwave) signals.

    This method offers greater stability than laser control and has other benefits for building a practical quantum computer. For instance, electronic control is much cheaper and more robust than lasers, and easier to integrate into ion-trapping chips. Furthermore, the experiment was conducted at room temperature and without magnetic shielding, simplifying the technical requirements for a working quantum computer.
    The previous best single-qubit error rate, set by the same Oxford team in 2014, was 1 in 1 million. The group’s expertise led to the launch of the spinout company Oxford Ionics in 2019, which has become an established leader in high-performance trapped-ion qubit platforms.
    Whilst this record-breaking result marks a major milestone, the research team caution that it is part of a larger challenge. Quantum computing requires both single- and two-qubit gates to function together. Currently, two-qubit gates still have significantly higher error rates — around 1 in 2000 in the best demonstrations to date — so reducing these will be crucial to building fully fault-tolerant quantum machines.
    The experiments were carried out at the University of Oxford’s Department of Physics by Molly Smith, Aaron Leu, Dr Mario Gely and Professor David Lucas, together with a visiting researcher, Dr Koichiro Miyanishi, from the University of Osaka’s Centre for Quantum Information and Quantum Biology.
    The Oxford scientists are part of the UK Quantum Computing and Simulation (QCS) Hub, which was part of the ongoing UK National Quantum Technologies Programme.

  • Photonic quantum chips are making AI smarter and greener

    One of the hottest current research topics is the combination of two recent technological breakthroughs: machine learning and quantum computing. An experimental study shows that even small-scale quantum computers can already boost the performance of machine learning algorithms. This was demonstrated on a photonic quantum processor by an international team of researchers at the University of Vienna. The work, recently published in Nature Photonics, shows promising new applications for optical quantum computers.
    Recent scientific breakthroughs have reshaped the development of future technologies. On the one hand, machine learning and artificial intelligence have already revolutionized our lives from everyday tasks to scientific research. On the other hand, quantum computing has emerged as a new paradigm of computation.
    From the combination of these two promising fields, a new research line has opened up: quantum machine learning. This field aims to find enhancements in the speed, efficiency or accuracy of algorithms when they run on quantum platforms. It remains an open challenge, however, to achieve such an advantage on quantum computers built with current technology.
    This is where an international team of researchers took the next step, with a novel experiment designed and carried out by scientists from the University of Vienna. The set-up features a quantum photonic circuit built at the Politecnico di Milano (Italy), which runs a machine learning algorithm first proposed by researchers at Quantinuum (United Kingdom). The goal was to classify data points using a photonic quantum computer and to single out the contribution of quantum effects, in order to understand the advantage over classical computers. The experiment showed that even small quantum processors can perform better than conventional algorithms. “We found that for specific tasks our algorithm commits fewer errors than its classical counterpart,” explains Philip Walther from the University of Vienna, who led the project. “This implies that existing quantum computers can show good performances without necessarily going beyond state-of-the-art technology,” adds Zhenghao Yin, first author of the publication in Nature Photonics.
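    The article does not spell out the algorithm in detail. A common pattern in photonic quantum machine learning is kernel-based classification, in which the quantum processor estimates the similarity between pairs of data points and a classical classifier uses those similarities; the sketch below illustrates that pattern with a hypothetical stand-in kernel function in place of the photonic measurement.

        # Illustrative kernel-based classification (not the published algorithm).
        # `photonic_kernel` is a hypothetical stand-in for the similarity a
        # photonic processor would estimate between two encoded data points.
        import numpy as np
        from sklearn.svm import SVC

        def photonic_kernel(x, y):
            # Simple cosine feature-map overlap; on hardware this number would
            # be estimated from photon statistics.
            return np.cos(np.pi * (x - y)).prod()

        def gram(A, B):
            return np.array([[photonic_kernel(a, b) for b in B] for a in A])

        rng = np.random.default_rng(1)
        X = rng.uniform(0, 1, size=(80, 2))
        y = (X.sum(axis=1) > 1).astype(int)               # toy labels

        clf = SVC(kernel="precomputed").fit(gram(X[:60], X[:60]), y[:60])
        pred = clf.predict(gram(X[60:], X[:60]))
        print("test accuracy:", (pred == y[60:]).mean())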
    Another interesting aspect of the new research is that photonic platforms can consume less energy than standard computers. “This could prove crucial in the future, given that machine learning algorithms are becoming infeasible due to their excessive energy demands,” emphasizes co-author Iris Agresti.
    The researchers’ result has an impact both on quantum computation, since it identifies tasks that benefit from quantum effects, and on standard computing. Indeed, new algorithms inspired by quantum architectures could be designed, achieving better performance and reducing energy consumption.

  • How outdated phones can power smart cities and save the seas

    Each year, more than 1.2 billion smartphones are produced globally. The production of electronic devices is not only energy-intensive but also consumes valuable natural resources. Additionally, the manufacturing and delivery processes release a significant amount of CO2 into the atmosphere. Meanwhile, devices are aging faster than ever — users replace their still-functional phones on average every 2 to 3 years. At best, old devices are recycled; at worst, they end up in landfills.
    Although the most sustainable solution would be to change consumer behavior and consider more carefully whether every new model truly requires replacing the old one, this is easier said than done. Rapid technological development quickly renders older devices obsolete. Therefore, alternative solutions are needed — such as extending the lifespan of devices by giving them an entirely new purpose.
    This is precisely the approach tested by researchers Huber Flores, Ulrich Norbisrath, and Zhigang Yin from the University of Tartu’s Institute of Computer Science, along with Perseverance Ngoy from the Institute of Technology and their international colleagues. “Innovation often begins not with something new, but with a new way of thinking about the old, re-imagining its role in shaping the future,” explained Huber Flores, Associate Professor of Pervasive Computing. They demonstrated that old smartphones can be successfully repurposed into tiny data centers capable of efficiently processing and storing data. They also found that building such a data center is remarkably inexpensive — around 8 euros per device.
    These tiny data centers have a wide range of applications. For example, they could be used in urban environments like bus stops to collect real-time data on the number of passengers, which could then be used to optimize public transportation networks.
    In the project’s first stage, the researchers removed the phones’ batteries and replaced them with external power sources to reduce the risk of chemical leakage into the environment. Then, four phones were connected together, fitted with 3D-printed casings and holders, and turned into a working prototype ready to be re-used, fostering sustainable practices for old electronics.
    The prototype was then successfully tested underwater, where it participated in marine life monitoring by helping to count different sea species. Normally, these kinds of tasks require a scuba diver to record video and bring it to the surface for analysis. But with the prototype, the whole process was done automatically underwater.
    The team’s results show that outdated technology doesn’t have to end up as waste. With minimal resources, these devices can be given a new purpose, contributing to the development of more environmentally friendly and sustainable digital solutions.
    “Sustainability is not just about preserving the future — it’s about reimagining the present, where yesterday’s devices become tomorrow’s opportunities,” commented Ulrich Norbisrath, Associate Professor of Software Engineering.

  • This “robot bird” flies at 45 mph through forests—with no GPS or light

    Unlike birds, which navigate unknown environments with remarkable speed and agility, drones typically rely on external guidance or pre-mapped routes. However, a groundbreaking development by Professor Fu Zhang and researchers from the Department of Mechanical Engineering in the Faculty of Engineering at the University of Hong Kong (HKU) has enabled drones and micro air vehicles (MAVs) to emulate the flight capabilities of birds more closely than ever before.
    The team has developed the Safety-Assured High-Speed Aerial Robot (SUPER), capable of flying at speeds exceeding 20 meters per second and avoiding obstacles as thin as 2.5 millimeters – such as power lines or twigs – relying solely on onboard sensors and computing power. With a compact design featuring a wheelbase of just 280 mm and a takeoff weight of 1.5 kg, SUPER demonstrates exceptional agility, navigating dense forests at night and skillfully avoiding thin wires.
    Professor Zhang describes this invention as a game-changer in the field of drone technology, “Picture a ‘Robot Bird’ swiftly maneuvering through the forest, effortlessly dodging branches and obstacles at high speeds. This is a significant step forward in autonomous flight technology. Our system allows MAVs to navigate complex environments at high speeds with a level of safety previously unattainable. It’s like giving the drone the reflexes of a bird, enabling it to dodge obstacles in real-time while racing toward its goal.”
    The breakthrough lies in the sophisticated integration of hardware and software. SUPER utilizes a lightweight 3D light detection and ranging (LIDAR) sensor capable of detecting obstacles up to 70 meters away with pinpoint accuracy. This is paired with an advanced planning framework that generates two trajectories during flight: one optimizing speed by venturing into unknown spaces and another prioritizing safety by remaining within known, obstacle-free zones.
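    The two-trajectory idea can be sketched as a simple replanning loop, shown below. This is an illustration of the concept on a toy one-dimensional world, not the SUPER planner itself: the vehicle follows a fast trajectory that may enter unknown space, but always keeps a backup confined to known free space and switches to it when the sensor reveals a blockage.

        # Conceptual sketch of two-trajectory planning (not the SUPER implementation).
        def fast_path(pos, goal):
            return list(range(pos + 1, goal + 1))        # aggressive run toward the goal

        def backup_path(pos, known_free):
            return [p for p in (pos, pos - 1) if p in known_free]   # hover or retreat

        def step(pos, goal, known_free, obstacles):
            nxt = fast_path(pos, goal)[0]
            if nxt in obstacles:                         # sensor reveals a blockage ahead
                return backup_path(pos, known_free)[0], "backup"
            return nxt, "fast"

        pos, goal = 0, 5
        known_free, obstacles = {0, 1, 2, 3}, {4}        # toy 1-D occupancy knowledge
        while pos != goal:
            pos, mode = step(pos, goal, known_free, obstacles)
            known_free.add(pos)
            print(pos, mode)
            if mode == "backup":
                break                                    # stop and wait for a new plan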
    By processing LIDAR data directly as point clouds, the system significantly reduces computation time, enabling rapid decision-making even at high velocities. The technology has been tested in various real-life applications, such as the autonomous exploration of ancient sites, and has demonstrated seamless navigation in both indoor and outdoor environments.
    “The ability to avoid thin obstacles and navigate tight spaces opens up new possibilities for applications like search and rescue, where every second counts. SUPER’s robustness in various lighting conditions, including nighttime, makes it a reliable tool for round-the-clock operations,” said Mr Yunfan Ren, the lead author of the research paper.
    The research team envisions a wide range of applications for this innovative technology, including autonomous delivery, power line inspection, forest monitoring, autonomous exploration, and mapping. In search and rescue missions, MAVs equipped with SUPER technology could swiftly navigate disaster zones – such as collapsed buildings or dense forests – day and night, locating survivors or assessing hazards more efficiently than current drones. Moreover, in disaster relief scenarios, they could deliver crucial supplies to remote and inaccessible areas.

  • Scientists built a transistor that could leave silicon in the dust

    Hailed as one of the greatest inventions of the 20th century, transistors are integral components of modern electronics that amplify or switch electrical signals. As electronics become smaller, it is becoming increasingly difficult to continue scaling down silicon-based transistors. Has the development of our electronics hit a wall?
    Now, a research team led by the Institute of Industrial Science, The University of Tokyo, has sought a solution. As detailed in their new paper, presented at the 2025 Symposium on VLSI Technology and Circuits, the team ditched silicon and instead created a transistor made from gallium-doped indium oxide (InGaOx). This material can be structured as a crystalline oxide whose orderly crystal lattice is well suited for electron mobility.
    “We also wanted our crystalline oxide transistor to feature a ‘gate-all-around’ structure, whereby the gate, which turns the current on or off, surrounds the channel where the current flows,” explains Anlan Chen, lead author of the study. “By wrapping the gate entirely around the channel, we can enhance efficiency and scalability compared with traditional gates.”
    With these goals in mind, the team got to work. The researchers knew that they would need to introduce impurities to the indium oxide by ‘doping’ it with gallium. This would make the material react with electricity in a more favorable way.
    “Indium oxide contains oxygen-vacancy defects, which facilitate carrier scattering and thus lower device stability,” says Masaharu Kobayashi, senior author. “We doped indium oxide with gallium to suppress oxygen vacancies and in turn improve transistor reliability.”
    The team used atomic-layer deposition to coat the channel region of a gate-all-around transistor with a thin film of InGaOx, one atomic layer at a time. After deposition, the film was heated to transform it into the crystalline structure needed for electron mobility. This process ultimately enabled the fabrication of a gate-all-around ‘metal oxide-based field-effect transistor’ (MOSFET).
    “Our gate-all-around MOSFET, containing a gallium-doped indium oxide layer, achieves high mobility of 44.5 cm²/V·s,” explains Dr Chen. “Crucially, the device demonstrates promising reliability by operating stably under applied stress for nearly three hours. In fact, our MOSFET outperformed similar devices that have previously been reported.”
    The efforts shown by the team have provided the field with a new transistor design that considers the importance of both materials and structure. The research is a step towards the development of reliable, high-density electronic components suited for applications with high computational demand, such as big data and artificial intelligence. These tiny transistors promise to help next-gen technology run smoothly, making a big difference to our everyday lives.
    The article “A Gate-All-Around Nanosheet Oxide Semiconductor Transistor by Selective Crystallization of InGaOx for Performance and Reliability Enhancement” was presented at the 2025 Symposium on VLSI Technology and Circuits.