More stories

  • Thinking AI models emit 50x more CO2—and often for nothing

    No matter which questions we ask an AI, the model will come up with an answer. To produce this information – regardless of whether that answer is correct or not – the model uses tokens. Tokens are words or parts of words that are converted into a string of numbers that can be processed by the LLM.
    This conversion, as well as other computing processes, produces CO2 emissions. Many users, however, are unaware of the substantial carbon footprint associated with these technologies. Now, researchers in Germany have measured and compared the CO2 emissions of different, already trained, LLMs using a set of standardized questions.
    “The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions,” said Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences and first author of the Frontiers in Communication study. “We found that reasoning-enabled models produced up to 50 times more CO2 emissions than concise response models.”
    ‘Thinking’ AI causes most emissions
    The researchers evaluated 14 LLMs ranging from seven to 72 billion parameters on 1,000 benchmark questions across diverse subjects. Parameters determine how LLMs learn and process information.
    Reasoning models, on average, created 543.5 ‘thinking’ tokens per question, whereas concise models required just 37.7 tokens per question. Thinking tokens are additional tokens that reasoning LLMs generate before producing an answer. A higher token footprint always means higher CO2 emissions. It doesn’t, however, necessarily mean the resulting answers are more correct, as elaborate detail is not always essential for correctness.
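    As a rough, back-of-the-envelope illustration of how the token footprint translates into emissions, the sketch below scales the study's average token counts by an assumed per-token energy cost and grid carbon intensity; those two figures are illustrative assumptions, not values from the paper.

    ```python
    # Back-of-the-envelope: emissions scale with the number of tokens generated.
    # The per-token energy and grid carbon intensity below are assumptions for
    # illustration only, not figures from the study.

    ENERGY_PER_TOKEN_KWH = 2e-6       # assumed energy per generated token (kWh)
    GRID_INTENSITY_G_PER_KWH = 480.0  # assumed grid carbon intensity (g CO2e per kWh)

    def emissions_per_question(tokens: float) -> float:
        """Estimated grams of CO2 equivalent to answer one question."""
        return tokens * ENERGY_PER_TOKEN_KWH * GRID_INTENSITY_G_PER_KWH

    reasoning = emissions_per_question(543.5)  # average 'thinking' tokens per question (study)
    concise = emissions_per_question(37.7)     # average tokens per question, concise models (study)

    print(f"reasoning: {reasoning:.4f} g CO2e/question")
    print(f"concise:   {concise:.4f} g CO2e/question")
    print(f"ratio:     {reasoning / concise:.1f}x")  # ~14x from thinking tokens alone;
                                                     # answer tokens widen the gap further
    ```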
    The most accurate model was the reasoning-enabled Cogito model with 70 billion parameters, reaching 84.9% accuracy. The model produced three times more CO2 emissions than similar-sized models that generated concise answers. “Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies,” said Dauner. “None of the models that kept emissions below 500 grams of CO2 equivalent achieved higher than 80% accuracy on answering the 1,000 questions correctly.” CO2 equivalent is the unit used to measure the climate impact of various greenhouse gases.

    Subject matter also resulted in significantly different levels of CO2 emissions. Questions that required lengthy reasoning processes, for example abstract algebra or philosophy, led to up to six times higher emissions than more straightforward subjects, like high school history.
    Practicing thoughtful use
    The researchers said they hope their work will cause people to make more informed decisions about their own AI use. “Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,” Dauner pointed out.
    Choice of model, for instance, can make a significant difference in CO2 emissions. For example, having DeepSeek R1 (70 billion parameters) answer 600,000 questions would create CO2 emissions equal to a round-trip flight from London to New York. Meanwhile, Qwen 2.5 (72 billion parameters) can answer more than three times as many questions (about 1.9 million) with similar accuracy rates while generating the same emissions.
    The researchers said that their results may be affected by the choice of hardware used in the study, by emission factors that vary regionally with local energy grid mixes, and by the particular models examined. These factors may limit the generalizability of the results.
    “If users know the exact CO2 cost of their AI-generated outputs, such as casually turning themselves into an action figure, they might be more selective and thoughtful about when and how they use these technologies,” Dauner concluded.

  • The AI that writes climate-friendly cement recipes in seconds

    The cement industry produces around eight percent of global CO2 emissions – more than the entire aviation sector worldwide. Researchers at the Paul Scherrer Institute PSI have developed an AI-based model that helps to accelerate the discovery of new cement formulations that could yield the same material quality with a better carbon footprint.
    The rotary kilns in cement plants are heated to a scorching 1,400 degrees Celsius to burn ground limestone down to clinker, the raw material for ready-to-use cement. Unsurprisingly, such temperatures typically can’t be achieved with electricity alone. They are the result of energy-intensive combustion processes that emit large amounts of carbon dioxide (CO2). What may be surprising, however, is that the combustion process accounts for far less than half of these emissions. The majority comes from the raw materials needed to produce clinker and cement: CO2 that is chemically bound in the limestone is released during its transformation in the high-temperature kilns.
    One promising strategy for reducing emissions is to modify the cement recipe itself – replacing some of the clinker with alternative cementitious materials. That is exactly what an interdisciplinary team in the Laboratory for Waste Management in PSI’s Center for Nuclear Engineering and Sciences has been investigating. Instead of relying solely on time-consuming experiments or complex simulations, the researchers developed a modelling approach based on machine learning. “This allows us to simulate and optimise cement formulations so that they emit significantly less CO2 while maintaining the same high level of mechanical performance,” explains mathematician Romana Boiger, first author of the study. “Instead of testing thousands of variations in the lab, we can use our model to generate practical recipe suggestions within seconds – it’s like having a digital cookbook for climate-friendly cement.”
    With their novel approach, the researchers were able to selectively single out those cement formulations that could meet the desired criteria. “The range of possibilities for the material composition – which ultimately determines the final properties – is extraordinarily vast,” says Nikolaos Prasianakis, head of the Transport Mechanisms Research Group at PSI, who was the initiator and co-author of the study. “Our method allows us to significantly accelerate the development cycle by selecting promising candidates for further experimental investigation.” The results of the study were published in the journal Materials and Structures.
    The right recipe
    Already today, industrial by-products such as slag from iron production and fly ash from coal-fired power plants are being used to partially replace clinker in cement formulations and thus reduce CO2 emissions. However, the global demand for cement is so enormous that these materials alone cannot meet the need. “What we need is the right combination of materials that are available in large quantities and from which high-quality, reliable cement can be produced,” says John Provis, head of the Cement Systems Research Group at PSI and co-author of the study.

    Finding such combinations, however, is challenging: “Cement is basically a mineral binding agent – in concrete, we use cement, water, and gravel to artificially create minerals that hold the entire material together,” Provis explains. “You could say we’re doing geology in fast motion.” This geology – or rather, the set of physical processes behind it – is enormously complex, and modelling it on a computer is correspondingly computationally intensive and expensive. That is why the research team is relying on artificial intelligence.
    AI as computational accelerator
    Artificial neural networks are computer models that are trained, using existing data, to speed up complex calculations. During training, the network is fed a known data set and learns from it by adjusting the relative strength or “weighting” of its internal connections so that it can quickly and reliably predict similar relationships. This weighting serves as a kind of shortcut – a faster alternative to otherwise computationally intensive physical modelling.
    The researchers at PSI also made use of such a neural network. They themselves generated the data required for training: “With the help of the open-source thermodynamic modelling software GEMS, developed at PSI, we calculated – for various cement formulations – which minerals form during hardening and which geochemical processes take place,” explains Nikolaos Prasianakis. By combining these results with experimental data and mechanical models, the researchers were able to derive a reliable indicator for mechanical properties – and thus for the material quality of the cement. For each component used, they also applied a corresponding CO2 factor, a specific emission value that made it possible to determine the total CO2 emissions. “That was a very complex and computationally intensive modelling exercise,” the scientist says.
    But it was worth the effort – with the data generated in this way, the AI model was able to learn. “Instead of seconds or minutes, the trained neural network can now calculate mechanical properties for an arbitrary cement recipe in milliseconds – that is, around a thousand times faster than with traditional modelling,” Boiger explains.
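    As a minimal sketch of the surrogate idea (not the PSI model itself), the example below trains a small neural network on simulator-generated examples to map a four-component cement composition to a strength indicator and a CO2 estimate; the toy "simulator", the chosen components and all coefficients are placeholder assumptions.

    ```python
    # Minimal sketch of a surrogate model: learn (composition -> strength indicator, CO2)
    # from simulator-generated examples, then query it near-instantly. The "simulator"
    # and all numbers below are toy placeholders, not the GEMS-based PSI pipeline.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    def toy_simulator(x):
        """Stand-in for the expensive thermodynamic/mechanical modelling.
        x: mass fractions of (clinker, slag, fly ash, limestone) summing to 1."""
        clinker, slag, fly_ash, limestone = x
        strength = 60 * clinker + 35 * slag + 25 * fly_ash + 10 * limestone  # toy indicator
        co2 = 800 * clinker + 50 * slag + 20 * fly_ash + 250 * limestone     # toy kg CO2/t
        return strength, co2

    # Training data: random compositions on the simplex, labelled by the simulator.
    X = rng.dirichlet(np.ones(4), size=2000)
    Y = np.array([toy_simulator(x) for x in X])

    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    surrogate.fit(X, Y)

    # The trained network now answers "what would the simulator say?" in milliseconds.
    print(surrogate.predict([[0.6, 0.2, 0.15, 0.05]]))  # -> approx [strength, co2]
    ```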
    From output to input
    How can this AI now be used to find optimal cement formulations – with the lowest possible CO2 emissions and high material quality? One possibility would be to try out various formulations, use the AI model to calculate their properties, and then select the best variants. A more efficient approach, however, is to reverse the process. Instead of trying out all options, ask the question the other way around: Which cement composition meets the desired specifications regarding CO2 balance and material quality?

    Both the mechanical properties and the CO2 emissions depend directly on the recipe. “Viewed mathematically, both variables are functions of the composition – if this changes, the respective properties also change,” the mathematician explains. To determine an optimal recipe, the researchers formulate the problem as a mathematical optimisation task: They are looking for a composition that simultaneously maximises mechanical properties and minimises CO2 emissions. “Basically, we are looking for a maximum and a minimum – from this we can directly deduce the desired formulation,” the mathematician says.
    To find the solution, the team integrated an additional AI technique into the workflow: so-called genetic algorithms, computer-assisted methods inspired by natural selection. This enabled them to selectively identify formulations that ideally combine the two target variables.
    The advantage of this “reverse approach”: you no longer have to blindly test countless recipes and then evaluate their resulting properties; instead, you can search directly for formulations that meet the desired criteria – in this case, maximum mechanical performance with minimum CO2 emissions.
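    The sketch below illustrates this reverse search in its simplest form: a genetic algorithm evolves candidate compositions toward high predicted strength and low CO2, using a scalarised objective and the same toy stand-in for the simulator as in the previous sketch; the operators, weights and data are illustrative choices, not the study's formulation.

    ```python
    # Sketch of the "reverse" search: a simple genetic algorithm evolves compositions
    # toward high strength and low CO2. The scalarised objective, operators and weights
    # are illustrative choices, not the study's formulation.
    import numpy as np

    rng = np.random.default_rng(1)

    def toy_simulator(x):  # same toy stand-in as in the previous sketch
        clinker, slag, fly_ash, limestone = x
        return (60 * clinker + 35 * slag + 25 * fly_ash + 10 * limestone,
                800 * clinker + 50 * slag + 20 * fly_ash + 250 * limestone)

    def score(x):
        strength, co2 = toy_simulator(x)   # a trained surrogate could be queried here instead
        return strength - 0.05 * co2       # trade-off weight chosen arbitrarily

    def normalize(x):
        x = np.clip(x, 1e-9, None)
        return x / x.sum()                 # keep mass fractions non-negative, summing to 1

    population = rng.dirichlet(np.ones(4), size=50)
    for _ in range(200):
        ranked = population[np.argsort([score(x) for x in population])]
        parents = ranked[-10:]                                    # keep the 10 best
        children = [normalize((parents[rng.integers(10)] + parents[rng.integers(10)]) / 2
                              + rng.normal(0, 0.02, 4))           # crossover + mutation
                    for _ in range(40)]
        population = np.vstack([parents, children])

    best = max(population, key=score)
    print("candidate (clinker, slag, fly ash, limestone):", np.round(best, 3))
    ```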
    Interdisciplinary approach with great potential
    Among the cement formulations identified by the researchers, there are already some promising candidates. “Some of these formulations have real potential,” says John Provis, “not only in terms of CO2 reduction and quality, but also in terms of practical feasibility in production.” To complete the development cycle, however, the recipes must first be tested in the laboratory. “We’re not going to build a tower with them right away without testing them first,” Nikolaos Prasianakis says with a smile.
    The study primarily serves as a proof of concept – that is, as evidence that promising formulations can be identified purely by mathematical calculation. “We can extend our AI modelling tool as required and integrate additional aspects, such as the production or availability of raw materials, or where the building material is to be used – for example, in a marine environment, where cement and concrete behave differently, or even in the desert,” says Romana Boiger. Nikolaos Prasianakis is already looking ahead: “This is just the beginning. The time savings offered by such a general workflow are enormous – making it a very promising approach for all sorts of material and system designs.”
    Without the interdisciplinary background of the researchers, the project would never have come to fruition: “We needed cement chemists, thermodynamics experts, AI specialists – and a team that could bring all of this together,” Prasianakis says. “Added to this was the important exchange with other research institutions such as EMPA within the framework of the SCENE project.” SCENE (the Swiss Centre of Excellence on Net Zero Emissions) is an interdisciplinary research programme that aims to develop scientifically sound solutions for drastically reducing greenhouse gas emissions in industry and the energy supply. The study was carried out as part of this project.

  • From shortage to supremacy: How Sandia and the CHIPS Act aim to reboot US chip power

    Sandia National Laboratories has joined a new partnership aimed at helping the United States regain its leadership in semiconductor manufacturing.
    While the U.S. was considered a powerhouse in chip production in the 1990s, fabricating more than 35% of the world’s semiconductors, that share has since dropped to 12%. Today, the U.S. manufactures none of the world’s most advanced chips, which power technologies like smartphones (owned by 71% of the world’s population) as well as self-driving cars, quantum computers, and artificial intelligence-powered devices and programs.
    Sandia hopes to help change that. It recently became the first national lab to join the U.S. National Semiconductor Technology Center. The NSTC was established under the CHIPS and Science Act to accelerate innovation and address some of the country’s most pressing technology challenges.
    “We have pioneered the way for other labs to join,” said Mary Monson, Sandia’s senior manager of Technology Partnerships and Business Development. “The CHIPS Act has brought the band back together, you could say. By including the national labs, U.S. companies, and academia, it’s really a force multiplier.”
    Sandia has a long history of contributing to the semiconductor industry through research and development partnerships, its Microsystems Engineering, Science and Applications facility known as MESA, and its advanced cleanrooms for developing next-generation technologies. Through its NSTC partnerships, Sandia hopes to strengthen U.S. semiconductor manufacturing and research and development, enhance national security production, and foster the innovation of new technologies that set the nation apart globally.
    “The big goal is to strengthen capabilities. Industry is moving fast, so we are keeping abreast of everything happening and incorporating what will help us deliver more efficiently on our national security mission. It’s about looking at innovative ways of partnering and expediting the process,” Monson said.
    The urgency of the effort is evident. The pandemic provided a perfect example, as car lots were left bare and manufacturers sat idle, waiting for chips to be produced to build new vehicles.

    “An average car contains 1,400 chips and electric vehicles use more than 3,000,” said Rick McCormick, Sandia’s senior scientist for semiconductor technology strategy. McCormick is helping lead Sandia’s new role. “Other nations around the globe are investing more than $300 billion to be leaders in semiconductor manufacturing. The U.S. CHIPS Act is our way of ‘keeping up with the Joneses.’ One goal is for the U.S. to have more than 25% of the global capacity for state-of-the-art chips by 2032.”
    Sandia is positioned to play a key role in creating the chips of the future.
    “More than $12 billion in research and development spending is planned under CHIPS, including a $3 billion program to create an ecosystem for packaging assemblies of chiplets,” McCormick said. “These chiplets communicate at low energy and high speed as if they were a large expensive chip.”
    Modern commercial AI processors use this approach, and Sandia’s resources and partnerships can help expand access to small companies and national security applications. MESA already fabricates high-reliability chiplet assembly products for the stockpile and nonproliferation applications.
    McCormick said Sandia could also play a major role in training the workforce of the future. The government has invested billions of dollars in new factories, all of which will need to be staffed by STEM-trained workers.
    “There is a potential crisis looming,” McCormick said. “The Semiconductor Industry Association anticipates that the U.S. will need 60,000 to 70,000 more workers, so we need to help engage the STEM workforce. That effort will also help Sandia bolster its staffing pipeline.”
    As part of its membership, Sandia will offer access to some of its facilities to other NSTC members, fostering collaboration and partnerships. Tech transfer is a core part of Sandia’s missions, and this initiative will build on that by helping private partners increase their stake in the industry while enabling Sandia to build on its own mission.
    “We will be helping develop suppliers and strengthen our capabilities,” Monson said. “We are a government resource for semiconductor knowledge. We are in this evolving landscape and have a front row seat to what it will look like over the next 20 years. We are helping support technology and strengthening our national security capabilities and mission delivery.”

  • Robots that feel heat, pain, and pressure? This new “skin” makes it possible

    Scientists have developed a low-cost, durable, highly sensitive robotic ‘skin’ that can be added to robotic hands like a glove, enabling robots to detect information about their surroundings in a way that’s similar to humans.
    The researchers, from the University of Cambridge and University College London (UCL), developed the flexible, conductive skin, which is easy to fabricate and can be melted down and formed into a wide range of complex shapes. The technology senses and processes a range of physical inputs, allowing robots to interact with the physical world in a more meaningful way.
    Unlike other solutions for robotic touch, which typically work via sensors embedded in small areas and require different sensors to detect different types of touch, the entirety of the electronic skin developed by the Cambridge and UCL researchers is a sensor, bringing it closer to our own sensory system: our skin.
    Although the robotic skin is not as sensitive as human skin, it can detect signals from over 860,000 tiny pathways in the material, enabling it to recognise different types of touch and pressure – like the tap of a finger, a hot or cold surface, damage caused by cutting or stabbing, or multiple points being touched at once – in a single material.
    The researchers used a combination of physical tests and machine learning techniques to help the robotic skin ‘learn’ which of these pathways matter most, so it can sense different types of contact more efficiently.
    In addition to potential future applications for humanoid robots or human prosthetics where a sense of touch is vital, the researchers say the robotic skin could be useful in industries as varied as the automotive sector or disaster relief. The results are reported in the journal Science Robotics.
    Electronic skins work by converting physical information – like pressure or temperature – into electronic signals. In most cases, different types of sensors are needed for different types of touch – one type of sensor to detect pressure, another for temperature, and so on – which are then embedded into soft, flexible materials. However, the signals from these different sensors can interfere with each other, and the materials are easily damaged.

    “Having different sensors for different types of touch leads to materials that are complex to make,” said lead author Dr David Hardman from Cambridge’s Department of Engineering. “We wanted to develop a solution that can detect multiple types of touch at once, but in a single material.”
    “At the same time, we need something that’s cheap and durable, so that it’s suitable for widespread use,” said co-author Dr Thomas George Thuruthel from UCL.
    Their solution uses one type of sensor that reacts differently to different types of touch, known as multi-modal sensing. While it’s challenging to separate out the cause of each signal, multi-modal sensing materials are easier to make and more robust.
    The researchers melted down a soft, stretchy and electrically conductive gelatine-based hydrogel, and cast it into the shape of a human hand. They tested a range of different electrode configurations to determine which gave them the most useful information about different types of touch. From just 32 electrodes placed at the wrist, they were able to collect over 1.7 million pieces of information over the whole hand, thanks to the tiny pathways in the conductive material.
    The skin was then tested on different types of touch: the researchers blasted it with a heat gun, pressed it with their fingers and a robotic arm, gently touched it with their fingers, and even cut it open with a scalpel. The team then used the data gathered during these tests to train a machine learning model so the hand would recognize what the different types of touch meant.
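    A minimal sketch of that final step, with synthetic data standing in for real electrode readings: signals from many pathways, labelled by touch type, are fed to an off-the-shelf classifier, and its feature importances hint at which pathways carry the most information. The pathway count, signal model and classifier choice are assumptions for illustration.

    ```python
    # Minimal sketch of the classification step: per-pathway readings labelled with a
    # touch type are fed to an off-the-shelf classifier. The data here is synthetic;
    # the real system learns from measurements on the hydrogel hand itself.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    TOUCH_TYPES = ["light tap", "firm press", "heat", "cut"]
    N_PATHWAYS = 200  # stand-in for the far larger number of conductive pathways

    def sample(kind):
        """Synthetic reading: each touch type perturbs a different block of pathways."""
        x = rng.normal(0, 0.1, N_PATHWAYS)
        x[kind * 50:(kind + 1) * 50] += rng.normal(1.0, 0.3, 50)
        return x

    X = np.array([sample(i % 4) for i in range(800)])
    y = np.array([TOUCH_TYPES[i % 4] for i in range(800)])

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))

    # Feature importances hint at which pathways carry the most information,
    # mirroring the study's goal of finding the pathways that matter most.
    print("most informative pathways:", np.argsort(clf.feature_importances_)[-5:])
    ```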
    “We’re able to squeeze a lot of information from these materials – they can take thousands of measurements very quickly,” said Hardman, who is a postdoctoral researcher in the lab of co-author Professor Fumiya Iida. “They’re measuring lots of different things at once, over a large surface area.”
    “We’re not quite at the level where the robotic skin is as good as human skin, but we think it’s better than anything else out there at the moment,” said Thuruthel. “Our method is flexible and easier to build than traditional sensors, and we’re able to calibrate it using human touch for a range of tasks.”
    In future, the researchers are hoping to improve the durability of the electronic skin, and to carry out further tests on real-world robotic tasks.
    The research was supported by Samsung Global Research Outreach Program, the Royal Society, and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI). Fumiya Iida is a Fellow of Corpus Christi College, Cambridge.

  • AI Reveals Milky Way’s Black Hole Spins Near Top Speed

    An international team of astronomers has used artificial intelligence (AI), training a neural network on millions of synthetic simulations, to tease out new cosmic curiosities about black holes, revealing that the one at the center of our Milky Way is spinning at nearly top speed.
    These large ensembles of simulations were generated by throughput computing capabilities provided by the Center for High Throughput Computing (CHTC), a joint entity of the Morgridge Institute for Research and the University of Wisconsin-Madison. The astronomers published their results and methodology today in three papers in the journal Astronomy & Astrophysics.
    High-throughput computing, celebrating its 40th anniversary this year, was pioneered by Wisconsin computer scientist Miron Livny. It’s a novel form of distributed computing that automates computing tasks across a network of thousands of computers, essentially turning a single massive computing challenge into a supercharged fleet of smaller ones. This computing innovation is helping fuel big-data discovery across hundreds of scientific projects worldwide, including searches for cosmic neutrinos, subatomic particles and gravitational waves, as well as efforts to unravel antibiotic resistance.
    In 2019, the Event Horizon Telescope (EHT) Collaboration released the first image of a supermassive black hole at the center of the galaxy M87. In 2022, they presented the image of the black hole at the center of our Milky Way, Sagittarius A*. However, the data behind the images still contained a wealth of hard-to-crack information. An international team of researchers trained a neural network to extract as much information as possible from the data.
    From a handful to millions
    Previous studies by the EHT Collaboration used only a handful of realistic synthetic data files. Funded by the National Science Foundation (NSF) as part of the Partnership to Advance Throughput Computing (PATh) project, the Madison-based CHTC enabled the astronomers to feed millions of such data files into a so-called Bayesian neural network, which can quantify uncertainties. This allowed the researchers to make a much better comparison between the EHT data and the models.
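    The sketch below illustrates the general idea of a network that reports an uncertainty alongside its prediction, here approximated with Monte Carlo dropout, a common stand-in for a Bayesian neural network; it is not the team's Zingularity framework, and the toy mapping from synthetic observables to a 'spin' parameter is invented for illustration.

    ```python
    # Illustration of uncertainty-aware prediction with Monte Carlo dropout, a common
    # approximation to a Bayesian neural network. Generic sketch: not the EHT team's
    # Zingularity framework; the data and architecture are invented placeholders.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    W = torch.randn(1, 16).abs()  # fixed toy "forward model": spin -> 16 observables

    def make_data(n):
        spin = torch.rand(n, 1)                     # hidden parameter in [0, 1]
        obs = spin * W + 0.05 * torch.randn(n, 16)  # noisy synthetic observables
        return obs, spin

    X, y = make_data(4096)

    model = nn.Sequential(
        nn.Linear(16, 64), nn.ReLU(), nn.Dropout(0.2),
        nn.Linear(64, 64), nn.ReLU(), nn.Dropout(0.2),
        nn.Linear(64, 1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(500):                            # plain full-batch training
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y)
        loss.backward()
        opt.step()

    # Keep dropout active at inference and average many stochastic forward passes:
    # the spread of the predictions serves as a rough uncertainty estimate.
    model.train()
    x_new, true_spin = make_data(1)
    preds = torch.stack([model(x_new) for _ in range(100)])
    print(f"spin estimate: {preds.mean():.3f} +/- {preds.std():.3f} (true {true_spin.item():.3f})")
    ```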
    Thanks to the neural network, the researchers now suspect that the black hole at the center of the Milky Way is spinning at almost top speed. Its rotation axis points to the Earth. In addition, the emission near the black hole is mainly caused by extremely hot electrons in the surrounding accretion disk and not by a so-called jet. Also, the magnetic fields in the accretion disk appear to behave differently from the usual theories of such disks.

    “That we are defying the prevailing theory is of course exciting,” says lead researcher Michael Janssen, of Radboud University Nijmegen, the Netherlands. “However, I see our AI and machine learning approach primarily as a first step. Next, we will improve and extend the associated models and simulations.”
    Impressive scaling
    “The ability to scale up to the millions of synthetic data files required to train the model is an impressive achievement,” adds Chi-kwan Chan, an Associate Astronomer of Steward Observatory at the University of Arizona and a longtime PATh collaborator. “It requires dependable workflow automation, and effective workload distribution across storage resources and processing capacity.”
    “We are pleased to see EHT leveraging our throughput computing capabilities to bring the power of AI to their science,” says Professor Anthony Gitter, a Morgridge Investigator and a PATh Co-PI. “Like in the case of other science domains, CHTC’s capabilities allowed EHT researchers to assemble the quantity and quality of AI-ready data needed to train effective models that facilitate scientific discovery.”
    The NSF-funded Open Science Pool, operated by PATh, offers computing capacity contributed by more than 80 institutions across the United States. The Event Horizon black hole project performed more than 12 million computing jobs in the past three years.
    “A workload that consists of millions of simulations is a perfect match for our throughput-oriented capabilities that were developed and refined over four decades,” says Livny, director of the CHTC and lead investigator of PATh. “We love to collaborate with researchers who have workloads that challenge the scalability of our services.”
    Scientific papers referenced

    Deep learning inference with the Event Horizon Telescope I. Calibration improvements and a comprehensive synthetic data library. By: M. Janssen et al. In: Astronomy & Astrophysics, 6 June 2025.
    Deep learning inference with the Event Horizon Telescope II. The Zingularity framework for Bayesian artificial neural networks. By: M. Janssen et al. In: Astronomy & Astrophysics, 6 June 2025.
    Deep learning inference with the Event Horizon Telescope III. Zingularity results from the 2017 observations and predictions for future array expansions. By: M. Janssen et al. In: Astronomy & Astrophysics, 6 June 2025.

  • Passive cooling breakthrough could slash data center energy use

    Engineers at the University of California San Diego have developed a new cooling technology that could significantly improve the energy efficiency of data centers and high-powered electronics. The technology features a specially engineered fiber membrane that passively removes heat through evaporation. It offers a promising alternative to traditional cooling systems like fans, heat sinks and liquid pumps. It could also reduce the water use associated with many current cooling systems.
    The advance is detailed in a paper published on June 13 in the journal Joule.
    As artificial intelligence (AI) and cloud computing continue to expand, the demand for data processing — and the heat it generates — is skyrocketing. Currently, cooling accounts for up to 40% of a data center’s total energy use. If trends continue, global energy use for cooling could more than double by 2030.
    The new evaporative cooling technology could help curb that trend. It uses a low-cost fiber membrane with a network of tiny, interconnected pores that draw cooling liquid across its surface using capillary action. As the liquid evaporates, it efficiently removes heat from the electronics underneath — no extra energy required. The membrane sits on top of microchannels above the electronics, pulling in liquid that flows through the channels and efficiently dissipating heat.
    “Compared to traditional air or liquid cooling, evaporation can dissipate higher heat flux while using less energy,” said Renkun Chen, professor in the Department of Mechanical and Aerospace Engineering at the UC San Diego Jacobs School of Engineering, who co-led the project with professors Shengqiang Cai and Abhishek Saha, both from the same department. Mechanical and aerospace engineering Ph.D. student Tianshi Feng and postdoctoral researcher Yu Pei, both members of Chen’s research group, are co-first authors on the study.
    Many applications currently rely on evaporation for cooling. Heat pipes in laptops and evaporators in air conditioners are some examples, explained Chen. But applying it effectively to high-power electronics has been a challenge. Previous attempts using porous membranes — which have high surface areas that are ideal for evaporation — have been unsuccessful because their pores were either so small that they would clog or so large that they would trigger unwanted boiling. “Here, we use porous fiber membranes with interconnected pores of the right size,” said Chen. This design achieves efficient evaporation without those downsides.
    When tested across variable heat fluxes, the membrane achieved record-breaking performance. It managed heat fluxes exceeding 800 watts of heat per square centimeter — one of the highest levels ever recorded for this kind of cooling system. It also proved stable over multiple hours of operation.
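    For a sense of scale, the back-of-the-envelope calculation below estimates how much liquid a flux of 800 watts per square centimetre would evaporate if the coolant were water; the choice of coolant and its latent heat are assumptions for illustration, not details taken from the paper.

    ```python
    # Back-of-the-envelope: the evaporation rate implied by 800 W/cm^2 if the coolant
    # were water. The coolant identity and latent heat value are assumptions; the
    # latent heat used is the standard textbook figure for water near 100 C.
    HEAT_FLUX_W_PER_M2 = 800 * 1e4      # 800 W/cm^2 expressed in W/m^2
    LATENT_HEAT_WATER = 2.26e6          # J/kg, latent heat of vaporization

    mass_flux = HEAT_FLUX_W_PER_M2 / LATENT_HEAT_WATER   # kg evaporated per m^2 per second
    print(f"{mass_flux:.2f} kg/(m^2*s)")                 # ~3.5 kg per square metre per second
    print(f"{mass_flux * 0.1:.2f} g/(cm^2*s)")           # ~0.35 g per square centimetre per second
    ```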

    “This success showcases the potential of reimagining materials for entirely new applications,” said Chen. “These fiber membranes were originally designed for filtration, and no one had previously explored their use in evaporation. We recognized that their unique structural characteristics — interconnected pores and just the right pore size — could make them ideal for efficient evaporative cooling. What surprised us was that, with the right mechanical reinforcement, they not only withstood the high heat flux, they performed extremely well under it.”
    While the current results are promising, Chen says the technology is still operating well below its theoretical limit. The team is now working to refine the membrane and optimize performance. Next steps include integrating it into prototypes of cold plates, which are flat components that attach to chips like CPUs and GPUs to dissipate heat. The team is also launching a startup company to commercialize the technology.
    This research was supported by the National Science Foundation (grants CMMI-1762560 and DMR-2005181). The work was performed in part at the San Diego Nanotechnology Infrastructure (SDNI) at UC San Diego, a member of the National Nanotechnology Coordinated Infrastructure, which is supported by the National Science Foundation (grant ECCS-2025752).
    Disclosures: A patent related to this work was filed by the Regents of the University of California (PCT Application No. PCT/US24/46923). The authors declare that they have no other competing interests.

  • This quantum sensor tracks 3D movement without GPS

    In a new study, physicists at the University of Colorado Boulder have used a cloud of atoms chilled down to incredibly cold temperatures to simultaneously measure acceleration in three dimensions — a feat that many scientists didn’t think was possible.
    The device, a new type of atom “interferometer,” could one day help people navigate submarines, spacecraft, cars and other vehicles more precisely.
    “Traditional atom interferometers can only measure acceleration in a single dimension, but we live within a three-dimensional world,” said Kendall Mehling, a co-author of the new study and a graduate student in the Department of Physics at CU Boulder. “To know where I’m going, and to know where I’ve been, I need to track my acceleration in all three dimensions.”
    The researchers published their paper, titled “Vector atom accelerometry in an optical lattice,” this month in the journal Science Advances. The team included Mehling; Catie LeDesma, a postdoctoral researcher in physics; and Murray Holland, professor of physics and fellow of JILA, a joint research institute between CU Boulder and the National Institute of Standards and Technology (NIST).
    In 2023, NASA awarded the CU Boulder researchers a $5.5 million grant through the agency’s Quantum Pathways Institute to continue developing the sensor technology.
    The new device is a marvel of engineering: Holland and his colleagues employ six lasers as thin as a human hair to pin a cloud of tens of thousands of rubidium atoms in place. Then, with help from artificial intelligence, they manipulate those lasers in complex patterns — allowing the team to measure the behavior of the atoms as they react to small accelerations, like pressing the gas pedal down in your car.
    Today, most vehicles track acceleration using GPS and traditional, or “classical,” electronic devices known as accelerometers. The team’s quantum device has a long way to go before it can compete with these tools. But the researchers see a lot of promise for navigation technology based on atoms.

    “If you leave a classical sensor out in different environments for years, it will age and decay,” Mehling said. “The springs in your clock will change and warp. Atoms don’t age.”
    Fingerprints of motion
    Interferometers, in some form or another, have been around for centuries — and they’ve been used to do everything from transporting information over optical fibers to searching for gravitational waves, or ripples in the fabric of the universe.
    The general idea involves splitting things apart and bringing them back together, not unlike unzipping, then zipping back up a jacket.
    In laser interferometry, for example, scientists first shine a laser, then split its light into two identical beams that travel over two separate paths. Eventually, they bring the beams back together. If the lasers have experienced diverging effects along their journeys, such as gravity acting in different ways, they may not mesh perfectly when they recombine. Put differently, the zipper might get stuck. Researchers can make measurements based on how the two beams, once identical, now interfere with each other – hence the name.
    In the current study, the team achieved the same feat, but with atoms instead of light.

    Here’s how it works: The device currently fits on a bench about the size of an air hockey table. First, the researchers cool a collection of rubidium atoms down to temperatures just a few billionths of a degree above absolute zero.
    In that frigid realm, the atoms form a mysterious quantum state of matter known as a Bose-Einstein Condensate (BEC). Carl Wieman, then a physicist at CU Boulder, and Eric Cornell of JILA won a Nobel Prize in 2001 for creating the first BEC.
    Next, the team uses laser light to jiggle the atoms, splitting them apart. In this case, that doesn’t mean that groups of atoms are separating. Instead, each individual atom exists in a ghostly quantum state called a superposition, in which it can be in two places at the same time.
    When the atoms split and separate, those ghosts travel away from each other following two different paths. (In the current experiment, the researchers didn’t actually move the device itself but used lasers to push on the atoms, causing acceleration).
    “Our Bose-Einstein Condensate is a matter-wave pond made of atoms, and we throw stones made of little packets of light into the pond, sending ripples both left and right,” Holland said. “Once the ripples have spread out, we reflect them and bring them back together where they interfere.”
    When the atoms snap back together, they form a unique pattern, just like the two beams of laser light zipping together but more complex. The result resembles a thumbprint on glass.
    “We can decode that fingerprint and extract the acceleration that the atoms experienced,” Holland said.
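    For a conventional single-axis, light-pulse atom interferometer, the textbook relation between the interference phase and acceleration is delta_phi = k_eff * a * T^2; the sketch below simply inverts it. The lattice-based, three-axis scheme in this study differs in detail, and the pulse spacing used here is an assumed, illustrative value.

    ```python
    # Textbook relation for a light-pulse atom interferometer: the interference phase
    # grows as delta_phi = k_eff * a * T**2. The lattice-based, three-axis scheme in
    # the study differs in detail; the numbers below are illustrative assumptions.
    import math

    WAVELENGTH = 780e-9                      # rubidium D2 line, metres
    K_EFF = 2 * (2 * math.pi / WAVELENGTH)   # effective two-photon wavevector, rad/m
    T = 10e-3                                # assumed time between pulses, seconds

    def acceleration_from_phase(delta_phi):
        """Invert delta_phi = k_eff * a * T**2 for the acceleration a (m/s^2)."""
        return delta_phi / (K_EFF * T ** 2)

    # A one-milliradian phase shift corresponds to a tiny acceleration:
    print(f"{acceleration_from_phase(1e-3):.2e} m/s^2")   # roughly 6e-7 m/s^2
    ```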
    Planning with computers
    The group spent almost three years building the device to achieve this feat.
    “For what it is, the current experimental device is incredibly compact. Even though we have 18 laser beams passing through the vacuum system that contains our atom cloud, the entire experiment is small enough that we could deploy in the field one day,” LeDesma said.
    One of the secrets to that success comes down to an artificial intelligence technique called machine learning. Holland explained that splitting and recombining the rubidium atoms requires adjusting the lasers through a complex, multi-step process. To streamline the process, the group trained a computer program that can plan out those moves in advance.
    So far, the device can only measure accelerations several thousand times smaller than the force of Earth’s gravity. Currently available technologies can do a lot better.
    But the group is continuing to improve its engineering and hopes to increase the performance of its quantum device many times over in the coming years. Still, the technology is a testament to just how useful atoms can be.
    “We’re not exactly sure of all the possible ramifications of this research, because it opens up a door,” Holland said.

  • Atom-thin tech replaces silicon in the world’s first 2D computer

    UNIVERSITY PARK, Pa. — Silicon is king in the semiconductor technology that underpins smartphones, computers, electric vehicles and more, but its crown may be slipping, according to a team led by researchers at Penn State. In a world first, they used two-dimensional (2D) materials, which, unlike silicon, are only an atom thick and retain their properties at that scale, to develop a computer capable of simple operations.
    The development, published today (June 11) in Nature, represents a major leap toward the realization of thinner, faster and more energy-efficient electronics, the researchers said. They created a complementary metal-oxide semiconductor (CMOS) computer — technology at the heart of nearly every modern electronic device — without relying on silicon. Instead, they used two different 2D materials to develop both types of transistors needed to control the electric current flow in CMOS computers: molybdenum disulfide for n-type transistors and tungsten diselenide for p-type transistors.
    “Silicon has driven remarkable advances in electronics for decades by enabling continuous miniaturization of field-effect transistors (FETs),” said Saptarshi Das, the Ackley Professor of Engineering and professor of engineering science and mechanics at Penn State, who led the research. FETs control current flow using an electric field, which is produced when a voltage is applied. “However, as silicon devices shrink, their performance begins to degrade. Two-dimensional materials, by contrast, maintain their exceptional electronic properties at atomic thickness, offering a promising path forward.”
    Das explained that CMOS technology requires both n-type and p-type semiconductors working together to achieve high performance at low power consumption — a key challenge that has stymied efforts to move beyond silicon. Although previous studies demonstrated small circuits based on 2D materials, scaling to complex, functional computers had remained elusive, Das said.
    “That’s the key advancement of our work,” Das said. “We have demonstrated, for the first time, a CMOS computer built entirely from 2D materials, combining large area grown molybdenum disulfide and tungsten diselenide transistors.”
    The team used metal-organic chemical vapor deposition (MOCVD) — a fabrication process that involves vaporizing ingredients, forcing a chemical reaction and depositing the products onto a substrate — to grow large sheets of molybdenum disulfide and tungsten diselenide and fabricate over 1,000 of each type of transistor. By carefully tuning the device fabrication and post-processing steps, they were able to adjust the threshold voltages of both n- and p-type transistors, enabling the construction of fully functional CMOS logic circuits.
    “Our 2D CMOS computer operates at low-supply voltages with minimal power consumption and can perform simple logic operations at frequencies up to 25 kilohertz,” said first author Subir Ghosh, a doctoral student pursuing a degree in engineering science and mechanics under Das’s mentorship.

    Ghosh noted that the operating frequency is low compared to conventional silicon CMOS circuits, but their computer — known as a one instruction set computer — can still perform simple logic operations.
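    To illustrate what a one instruction set computer is, the sketch below implements SUBLEQ (subtract and branch if the result is less than or equal to zero), the classic single-instruction architecture; the article does not specify which instruction the Penn State machine implements, so this is purely illustrative.

    ```python
    # A one instruction set computer has a single instruction expressive enough to
    # compute anything. SUBLEQ is the classic example; whether the 2D-material
    # machine uses this particular instruction is not stated in the article.

    def subleq(mem, pc=0):
        """Run a SUBLEQ program. Each instruction is a triple of addresses (a, b, c):
        mem[b] -= mem[a]; if the result is <= 0, jump to c, otherwise advance.
        A negative jump target halts the machine."""
        while pc >= 0:
            a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
            mem[b] -= mem[a]
            pc = c if mem[b] <= 0 else pc + 3
        return mem

    # Program: add the values stored at addresses 9 and 10, leaving the sum at address 10.
    program = [9, 11, 3,    # Z -= A   (Z starts at 0, so Z becomes -A)
               11, 10, 6,   # B -= Z   (B becomes B + A)
               11, 11, -1,  # Z -= Z   (clear the temporary and halt)
               7, 5, 0]     # data: A = 7, B = 5, temporary Z = 0

    print(subleq(program)[10])  # -> 12
    ```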
    “We also developed a computational model, calibrated using experimental data and incorporating variations between devices, to project the performance of our 2D CMOS computer and benchmark it against state-of-the-art silicon technology,” Ghosh said. “Although there remains scope for further optimization, this work marks a significant milestone in harnessing 2D materials to advance the field of electronics.”
    Das agreed, explaining that more work is needed to further develop the 2D CMOS computer approach for broad use, but also emphasizing that the field is moving quickly when compared to the development of silicon technology.
    “Silicon technology has been under development for about 80 years, but research into 2D materials is relatively recent, only really arising around 2010,” Das said. “We expect that the development of 2D material computers is going to be a gradual process, too, but this is a leap forward compared to the trajectory of silicon.”
    Ghosh and Das credited the 2D Crystal Consortium Materials Innovation Platform (2DCC-MIP) at Penn State with providing the facilities and tools needed to demonstrate their approach. Das is also affiliated with the Materials Research Institute, the 2DCC-MIP and the Departments of Electrical Engineering and of Materials Science and Engineering, all at Penn State. Other contributors from the Penn State Department of Engineering Science and Mechanics include graduate students Yikai Zheng, Najam U. Sakib, Harikrishnan Ravichandran, Yongwen Sun, Andrew L. Pannone, Muhtasim Ul Karim Sadaf and Samriddha Ray; and Yang Yang, assistant professor. Yang is also affiliated with the Materials Research Institute and the Ken and Mary Alice Lindquist Department of Nuclear Engineering at Penn State. Joan Redwing, director of the 2DCC-MIP and distinguished professor of materials science and engineering and of electrical engineering, and Chen Chen, assistant research professor, also co-authored the paper. Other contributors include Musaib Rafiq and Subham Sahay, Indian Institute of Technology; and Mrinmoy Goswami, Jadavpur University.
    The U.S. National Science Foundation, the Army Research Office and the Office of Naval Research supported this work in part.