More stories

  • Quantum breakthrough: ‘Magic states’ now easier, faster, and way less noisy

    For decades, quantum computers that perform calculations millions of times faster than conventional computers have remained a tantalizing yet distant goal. However, a new breakthrough in quantum physics may have just sped up the timeline.
    In an article published in PRX Quantum, researchers from the Graduate School of Engineering Science and the Center for Quantum Information and Quantum Biology at The University of Osaka devised a method that can be used to prepare high-fidelity “magic states” for use in quantum computers with dramatically less overhead and unprecedented accuracy.
    Quantum computers harness the fantastic properties of quantum mechanics, such as entanglement and superposition, to perform calculations much more efficiently than classical computers can. Such machines could catalyze innovations in fields as diverse as engineering, finance, and biotechnology. But before this can happen, there is a significant obstacle that must be overcome.
    “Quantum systems have always been extremely susceptible to noise,” says lead researcher Tomohiro Itogawa. “Even the slightest perturbation in temperature or a single wayward photon from an external source can easily ruin a quantum computer setup, making it useless. Noise is absolutely the number one enemy of quantum computers.”
    Thus, scientists have become very interested in building so-called fault-tolerant quantum computers, which are robust enough to continue computing accurately even when subject to noise. Magic state distillation, in which a single high-fidelity quantum state is prepared from many noisy ones, is a popular method for creating such systems. But there is a catch.
    “The distillation of magic states is traditionally a very computationally expensive process because it requires many qubits,” explains senior author Keisuke Fujii. “We wanted to explore whether there was any way of expediting the preparation of the high-fidelity states necessary for quantum computation.”
    Following this line of inquiry, the team created a “level-zero” version of magic state distillation, in which a fault-tolerant circuit is developed at the physical qubit or “zeroth” level, as opposed to higher, more abstract levels. In addition to requiring far fewer qubits, the new method reduced spatial and temporal overhead by roughly a factor of several dozen compared with the traditional version in numerical simulations.
    Itogawa and Fujii are optimistic that the era of quantum computing is not as far off as we imagine. Whether one calls it magic or physics, this technique marks an important step toward the development of larger-scale quantum computers that can withstand noise.

  • MIT’s tiny 5G receiver could make smart devices last longer and work anywhere

    MIT researchers have designed a compact, low-power receiver for 5G-compatible smart devices that is about 30 times more resilient to a certain type of interference than some traditional wireless receivers.
    The low-cost receiver would be ideal for battery-powered internet of things (IoT) devices like environmental sensors, smart thermostats, or other devices that need to run continuously for a long time, such as health wearables, smart cameras, or industrial monitoring sensors.
    The researchers’ chip uses a passive filtering mechanism that consumes less than a milliwatt of static power while protecting both the input and output of the receiver’s amplifier from unwanted wireless signals that could jam the device.
    Key to the new approach is a novel arrangement of precharged, stacked capacitors, which are connected by a network of tiny switches. These minuscule switches need much less power to be turned on and off than those typically used in IoT receivers.
    The receiver’s capacitor network and amplifier are carefully arranged to leverage a phenomenon in amplification that allows the chip to use much smaller capacitors than would typically be necessary.
    “This receiver could help expand the capabilities of IoT gadgets. Smart devices like health monitors or industrial sensors could become smaller and have longer battery lives. They would also be more reliable in crowded radio environments, such as factory floors or smart city networks,” says Soroush Araei, an electrical engineering and computer science (EECS) graduate student at MIT and lead author of a paper on the receiver.
    He is joined on the paper by Mohammad Barzgari, a postdoc in the MIT Research Laboratory of Electronics (RLE); Haibo Yang, an EECS graduate student; and senior author Negar Reiskarimian, the X-Window Consortium Career Development Assistant Professor in EECS at MIT and a member of the Microsystems Technology Laboratories and RLE. The research was recently presented at the IEEE Radio Frequency Integrated Circuits Symposium.

    A new standard
    A receiver acts as the intermediary between an IoT device and its environment. Its job is to detect and amplify a wireless signal, filter out any interference, and then convert it into digital data for processing.
    Traditionally, IoT receivers operate on fixed frequencies and suppress interference using a single narrow-band filter, which is simple and inexpensive.
    But the new technical specifications of the 5G mobile network enable reduced-capability devices that are more affordable and energy-efficient. This opens a range of IoT applications to the faster data speeds and increased network capability of 5G. These next-generation IoT devices need receivers that can tune across a wide range of frequencies while still being cost-effective and low-power.
    “This is extremely challenging because now we need to not only think about the power and cost of the receiver, but also flexibility to address numerous interferers that exist in the environment,” Araei says.
    To reduce the size, cost, and power consumption of an IoT device, engineers can’t rely on the bulky, off-chip filters that are typically used in devices that operate on a wide frequency range.

    One solution is to use a network of on-chip capacitors that can filter out unwanted signals. But these capacitor networks are prone to a special type of signal noise known as harmonic interference.
    In prior work, the MIT researchers developed a novel switch-capacitor network that targets these harmonic signals as early as possible in the receiver chain, filtering out unwanted signals before they are amplified and converted into digital bits for processing.
    Shrinking the circuit
    Here, they extended that approach by using the novel switch-capacitor network as the feedback path in an amplifier with negative gain. This configuration leverages the Miller effect, a phenomenon that enables small capacitors to behave like much larger ones.
    “This trick lets us meet the filtering requirement for narrow-band IoT without physically large components, which drastically shrinks the size of the circuit,” Araei says.
    Their receiver has an active area of less than 0.05 square millimeters.
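    The capacitor-size saving described above follows from a simple relation: a capacitance C placed in the feedback path of an inverting amplifier with gain -A appears at the amplifier's input as C(1 + A). A minimal sketch of that relation, using purely illustrative numbers rather than the actual chip's values:

```python
# Miller effect: a feedback capacitor across an inverting amplifier
# looks (1 + A) times larger when seen from the amplifier's input.
def miller_input_capacitance(c_feedback: float, gain: float) -> float:
    """Effective input capacitance of a capacitor c_feedback placed in
    the feedback path of an inverting amplifier with voltage gain -gain."""
    return c_feedback * (1.0 + gain)

# Illustrative values: a 1 pF on-chip capacitor across a gain-of-20
# inverting amplifier presents about 21 pF at the input.
c_eff = miller_input_capacitance(1e-12, 20.0)
print(f"{c_eff * 1e12:.0f} pF")  # -> 21 pF
```

    This is why a filter that nominally needs tens of picofarads can be built from picofarad-scale on-chip capacitors, which is what shrinks the active area.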
    One challenge the researchers had to overcome was determining how to apply enough voltage to drive the switches while keeping the overall power supply of the chip at only 0.6 volts.
    In the presence of interfering signals, such tiny switches can turn on and off in error, especially if the voltage required for switching is extremely low.
    To address this, the researchers came up with a novel solution, using a special circuit technique called bootstrap clocking. This method boosts the control voltage just enough to ensure the switches operate reliably while using less power and fewer components than traditional clock boosting methods.
    Taken together, these innovations enable the new receiver to consume less than a milliwatt of power while blocking about 30 times more harmonic interference than traditional IoT receivers.
    “Our chip also is very quiet, in terms of not polluting the airwaves. This comes from the fact that our switches are very small, so the amount of signal that can leak out of the antenna is also very small,” Araei adds.
    Because their receiver is smaller than traditional devices and relies on switches and precharged capacitors instead of more complex electronics, it could be more cost-effective to fabricate. In addition, since the receiver design can cover a wide range of signal frequencies, it could be implemented on a variety of current and future IoT devices.
    Now that they have developed this prototype, the researchers want to enable the receiver to operate without a dedicated power supply, perhaps by harvesting Wi-Fi or Bluetooth signals from the environment to power the chip.
    This research is supported, in part, by the National Science Foundation.

  • Scientists create ‘universal translator’ for quantum tech

    UBC researchers are proposing a solution to a key hurdle in quantum networking: a device that can “translate” microwave to optical signals and vice versa.
    The technology could serve as a universal translator for quantum computers — enabling them to talk to each other over long distances and converting up to 95 per cent of a signal with virtually no noise. And it all fits on a silicon chip, the same material found in everyday computers.
    “It’s like finding a translator that gets nearly every word right, keeps the message intact and adds no background chatter,” says study author Mohammad Khalifa, who conducted the research during his PhD at UBC’s faculty of applied science and the UBC Blusson Quantum Matter Institute.
    “Most importantly, this device preserves the quantum connections between distant particles and works in both directions. Without that, you’d just have expensive individual computers. With it, you get a true quantum network.”
    How it works
    Quantum computers process information using microwave signals. But to send that information across cities or continents, it needs to be converted into optical signals that travel through fibre optic cables. These signals are so fragile, even tiny disturbances during translation can destroy them.
    That’s a problem for entanglement, the phenomenon quantum computers rely on, where two particles remain connected regardless of distance. Einstein called it “spooky action at a distance.” Losing that connection means losing the quantum advantage. The UBC device, described in npj Quantum Information, could enable long-distance quantum communication while preserving these entangled links.

    The silicon solution
    The team’s model is a microwave-optical photon converter that can be fabricated on a silicon wafer. The breakthrough lies in tiny engineered flaws: magnetic defects intentionally embedded in silicon to control its properties. When microwave and optical signals are precisely tuned, electrons in these defects convert one signal to the other without absorbing energy, avoiding the instability that plagues other transformation methods.
    The device also runs efficiently at extremely low power — just millionths of a watt. The authors outlined a practical design that uses superconducting components, materials that conduct electricity perfectly, alongside this specially engineered silicon.
    What’s next
    While the work is still theoretical, it marks an important step in quantum networking.
    “We’re not getting a quantum internet tomorrow — but this clears a major roadblock,” says the study’s senior author Dr. Joseph Salfi, an assistant professor in the department of electrical and computer engineering and principal investigator at UBC Blusson QMI.
    “Currently, reliably sending quantum information between cities remains challenging. Our approach could change that: silicon-based converters could be built using existing chip fabrication technology and easily integrated into today’s communication infrastructure.”
    Eventually, quantum networks could enable virtually unbreakable online security, GPS that works indoors, and the power to tackle problems beyond today’s reach, such as designing new medicines or predicting weather with dramatically improved accuracy.

  • U.S. seal populations have rebounded — and so have their conflicts with humans

    Aaron Tremper is the editorial assistant for Science News Explores. He has a B.A. in English (with minors in creative writing and film production) from SUNY New Paltz and an M.A. in Journalism from the Craig Newmark Graduate School of Journalism’s Science and Health Reporting program. A former intern at Audubon magazine and Atlanta’s NPR station, WABE 90.1 FM, he has reported a wide range of science stories for radio, print, and digital media. His favorite reporting adventure? Tagging along with researchers studying bottlenose dolphins off of New York City and Long Island, NY.

  • AI at light speed: How glass fibers could replace silicon brains

    Imagine a computer that does not rely only on electronics but uses light to perform tasks faster and more efficiently. A collaboration between two research teams, from Tampere University in Finland and Université Marie et Louis Pasteur in France, has now demonstrated a novel way of processing information using light and optical fibers, opening up the possibility of building ultra-fast computers.
    The study, performed by postdoctoral researchers Dr. Mathilde Hary from Tampere University and Dr. Andrei Ermolaev from the Université Marie et Louis Pasteur, Besançon, demonstrated how laser light inside thin glass fibers can mimic the way artificial intelligence (AI) processes information. Their work investigated a particular class of computing architecture known as an Extreme Learning Machine, an approach inspired by neural networks.
    “Instead of using conventional electronics and algorithms, computation is achieved by taking advantage of the nonlinear interaction between intense light pulses and the glass,” Hary and Ermolaev explain.
    Traditional electronics is approaching its limits in terms of bandwidth, data throughput, and power consumption. AI models are growing larger and more energy-hungry, and electronics can process data only up to a certain speed. Optical fibers, on the other hand, can transform input signals at speeds thousands of times faster and amplify tiny differences via extreme nonlinear interactions to make them discernible.
    Towards efficient computing
    In their recent work, the researchers used femtosecond laser pulses (a billion times shorter than a camera flash) and an optical fiber confining light in an area smaller than a fraction of a human hair to demonstrate the working principle of an optical ELM system. The pulses are short enough to contain a large number of different wavelengths, or colors. By sending those into the fiber with a relative delay encoded according to an image, the team showed that the spectrum of wavelengths at the output of the fiber, transformed by the nonlinear interaction of light and glass, contains sufficient information to classify handwritten digits (like those used in the popular MNIST AI benchmark). According to the researchers, the best systems reached an accuracy of over 91%, close to state-of-the-art digital methods, in under one picosecond.
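    The Extreme Learning Machine idea can be sketched in a few lines: a fixed, untrained nonlinear transformation (played below by a random tanh layer, and in the experiment by nonlinear pulse propagation in the fiber) followed by a simple trained linear readout. This is a toy stand-in on synthetic data, not the optical system or the MNIST benchmark:

```python
import numpy as np

# Extreme Learning Machine in miniature: a fixed random nonlinear layer
# (never trained -- the role played by the fiber in the experiment)
# followed by a linear readout fitted in closed form.
rng = np.random.default_rng(0)

# Toy task: two noisy 2-D clusters, a stand-in for the digit images.
n = 200
X = np.vstack([rng.normal(-1, 0.4, (n, 2)), rng.normal(1, 0.4, (n, 2))])
y = np.hstack([np.zeros(n), np.ones(n)])

# 1) Fixed random projection into a 100-dimensional hidden space.
W = rng.normal(size=(2, 100))
b = rng.normal(size=100)
H = np.tanh(X @ W + b)

# 2) Train only the readout: ridge regression, solved directly.
lam = 1e-3
beta = np.linalg.solve(H.T @ H + lam * np.eye(100), H.T @ y)

# Classify by thresholding the readout at 0.5.
acc = np.mean((H @ beta > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

    Only the readout is ever trained; the nonlinear stage just has to mix the inputs richly enough, which is why a passive piece of glass can stand in for it.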
    What is remarkable is that the best results did not occur at the maximum level of nonlinear interaction or complexity, but rather emerged from a delicate balance between fiber length, dispersion (the difference in propagation speed between different wavelengths), and power levels.

    “Performance is not simply a matter of pushing more power through the fiber. It depends on how precisely the light is initially structured, in other words how information is encoded, and how it interacts with the fiber properties,” says Hary.
    By harnessing the potential of light, this research could pave the way towards new ways of computing while exploring routes towards more efficient architectures.
    “Our models show how dispersion, nonlinearity and even quantum noise influence performance, providing critical knowledge for designing the next generation of hybrid optical-electronic AI systems,” continues Ermolaev.
    Advancing optical nonlinearity through collaborative research in AI and photonics
    Both research teams are internationally recognized for their expertise in nonlinear light-matter interactions. Their collaboration brings together theoretical understanding and state-of-the-art experimental capabilities to harness optical nonlinearity for various applications.
    “This work demonstrates how fundamental research in nonlinear fiber optics can drive new approaches to computation. By merging physics and machine learning, we are opening new paths toward ultrafast and energy-efficient AI hardware,” say Professors Goëry Genty from Tampere University and John Dudley and Daniel Brunner from the Université Marie et Louis Pasteur, who led the teams.
    The research combines nonlinear fiber optics and applied AI to explore new types of computing. In the future, the teams aim to build on-chip optical systems that can operate in real time and outside the lab. Potential applications range from real-time signal processing to environmental monitoring and high-speed AI inference.
    The project is funded by the Research Council of Finland, the French National Research Agency and the European Research Council.

  • Thinking AI models emit 50x more CO2—and often for nothing

    No matter which questions we ask an AI, the model will come up with an answer. To produce this information – regardless of whether that answer is correct or not – the model uses tokens. Tokens are words or parts of words that are converted into a string of numbers that can be processed by the LLM.
    This conversion, as well as other computing processes, produce CO2 emissions. Many users, however, are unaware of the substantial carbon footprint associated with these technologies. Now, researchers in Germany measured and compared CO2 emissions of different, already trained, LLMs using a set of standardized questions.
    “The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions,” said first author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences and first author of the Frontiers in Communication study. “We found that reasoning-enabled models produced up to 50 times more CO2 emissions than concise response models.”
    ‘Thinking’ AI causes most emissions
    The researchers evaluated 14 LLMs ranging from seven to 72 billion parameters on 1,000 benchmark questions across diverse subjects. Parameters determine how LLMs learn and process information.
    Reasoning models created, on average, 543.5 ‘thinking’ tokens per question, whereas concise models required just 37.7 tokens per question. Thinking tokens are additional tokens that reasoning LLMs generate before producing an answer. A higher token footprint always means higher CO2 emissions. It doesn’t, however, necessarily mean the resulting answers are more correct, as elaborate detail is not always essential for correctness.
    The most accurate model was the reasoning-enabled Cogito model with 70 billion parameters, reaching 84.9% accuracy. The model produced three times more CO2 emissions than similar-sized models that generated concise answers. “Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies,” said Dauner. “None of the models that kept emissions below 500 grams of CO2 equivalent achieved higher than 80% accuracy on answering the 1,000 questions correctly.” CO2 equivalent is the unit used to measure the climate impact of various greenhouse gases.

    Subject matter also resulted in significantly different levels of CO2 emissions. Questions that required lengthy reasoning processes, for example abstract algebra or philosophy, led to up to six times higher emissions than more straightforward subjects, like high school history.
    Practicing thoughtful use
    The researchers said they hope their work will cause people to make more informed decisions about their own AI use. “Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,” Dauner pointed out.
    Choice of model, for instance, can make a significant difference in CO2 emissions. For example, having DeepSeek R1 (70 billion parameters) answer 600,000 questions would create CO2 emissions equal to a round-trip flight from London to New York. Meanwhile, Qwen 2.5 (72 billion parameters) can answer more than three times as many questions (about 1.9 million) with similar accuracy rates while generating the same emissions.
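    These comparisons reduce to simple ratios of the figures quoted above; the absolute emission factor per token is not given in the article, so only relative numbers can be checked:

```python
# Back-of-the-envelope checks on the figures quoted in the article.
# Absolute CO2 per token is not reported, so we compare ratios only.

reasoning_tokens = 543.5  # avg 'thinking' tokens per question (reasoning models)
concise_tokens = 37.7     # avg tokens per question (concise models)
token_ratio = reasoning_tokens / concise_tokens
print(f"reasoning models emit ~{token_ratio:.1f}x more tokens per question")

# Model choice: for the same emissions DeepSeek R1 spends on 600,000
# questions, Qwen 2.5 answers about 1.9 million -- "more than three times".
questions_ratio = 1_900_000 / 600_000
print(f"~{questions_ratio:.1f}x more questions for the same CO2")
```

    The roughly 14-fold token overhead of reasoning models is where most of the reported 50-fold emission gap originates, since each extra token costs additional computation.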
    The researchers said that their results may be impacted by the choice of hardware used in the study, an emission factor that may vary regionally depending on local energy grid mixes, and the examined models. These factors may limit the generalizability of the results.
    “If users know the exact CO2 cost of their AI-generated outputs, such as casually turning themselves into an action figure, they might be more selective and thoughtful about when and how they use these technologies,” Dauner concluded.

  • The AI that writes climate-friendly cement recipes in seconds

    The cement industry produces around eight percent of global CO2 emissions – more than the entire aviation sector worldwide. Researchers at the Paul Scherrer Institute PSI have developed an AI-based model that helps to accelerate the discovery of new cement formulations that could yield the same material quality with a better carbon footprint.
    The rotary kilns in cement plants are heated to a scorching 1,400 degrees Celsius to burn ground limestone down to clinker, the raw material for ready-to-use cement. Unsurprisingly, such temperatures typically can’t be achieved with electricity alone. They are the result of energy-intensive combustion processes that emit large amounts of carbon dioxide (CO2). What may be surprising, however, is that the combustion process accounts for far less than half of these emissions. The majority comes from the raw materials needed to produce clinker and cement: CO2 that is chemically bound in the limestone is released during its transformation in the high-temperature kilns.
    One promising strategy for reducing emissions is to modify the cement recipe itself – replacing some of the clinker with alternative cementitious materials. That is exactly what an interdisciplinary team in the Laboratory for Waste Management in PSI’s Center for Nuclear Engineering and Sciences has been investigating. Instead of relying solely on time-consuming experiments or complex simulations, the researchers developed a modelling approach based on machine learning. “This allows us to simulate and optimise cement formulations so that they emit significantly less CO2 while maintaining the same high level of mechanical performance,” explains mathematician Romana Boiger, first author of the study. “Instead of testing thousands of variations in the lab, we can use our model to generate practical recipe suggestions within seconds – it’s like having a digital cookbook for climate-friendly cement.”
    With their novel approach, the researchers were able to selectively filter out those cement formulations that could meet the desired criteria. “The range of possibilities for the material composition – which ultimately determines the final properties – is extraordinarily vast,” says Nikolaos Prasianakis, head of the Transport Mechanisms Research Group at PSI, who was the initiator and co-author of the study. “Our method allows us to significantly accelerate the development cycle by selecting promising candidates for further experimental investigation.” The results of the study were published in the journal Materials and Structures.
    The right recipe
    Industrial by-products such as slag from iron production and fly ash from coal-fired power plants are already being used to partially replace clinker in cement formulations and thus reduce CO2 emissions. However, the global demand for cement is so enormous that these materials alone cannot meet the need. “What we need is the right combination of materials that are available in large quantities and from which high-quality, reliable cement can be produced,” says John Provis, head of the Cement Systems Research Group at PSI and co-author of the study.

    Finding such combinations, however, is challenging: “Cement is basically a mineral binding agent – in concrete, we use cement, water, and gravel to artificially create minerals that hold the entire material together,” Provis explains. “You could say we’re doing geology in fast motion.” This geology – or rather, the set of physical processes behind it – is enormously complex, and modelling it on a computer is correspondingly computationally intensive and expensive. That is why the research team is relying on artificial intelligence.
    AI as computational accelerator
    Artificial neural networks are computer models that are trained, using existing data, to speed up complex calculations. During training, the network is fed a known data set and learns from it by adjusting the relative strength or “weighting” of its internal connections so that it can quickly and reliably predict similar relationships. This weighting serves as a kind of shortcut – a faster alternative to otherwise computationally intensive physical modelling.
    The researchers at PSI also made use of such a neural network. They themselves generated the data required for training: “With the help of the open-source thermodynamic modelling software GEMS, developed at PSI, we calculated – for various cement formulations – which minerals form during hardening and which geochemical processes take place,” explains Nikolaos Prasianakis. By combining these results with experimental data and mechanical models, the researchers were able to derive a reliable indicator for mechanical properties – and thus for the material quality of the cement. For each component used, they also applied a corresponding CO2 factor, a specific emission value that made it possible to determine the total CO2 emissions. “That was a very complex and computationally intensive modelling exercise,” the scientist says.
    But it was worth the effort – with the data generated in this way, the AI model was able to learn. “Instead of seconds or minutes, the trained neural network can now calculate mechanical properties for an arbitrary cement recipe in milliseconds – that is, around a thousand times faster than with traditional modelling,” Boiger explains.
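    The surrogate-model workflow described above (sample an expensive physical model once, fit a fast approximation, then query only the approximation) can be sketched with a stand-in one-variable “simulator”; GEMS and the real cement chemistry are not involved here:

```python
import numpy as np

# Surrogate-model workflow in miniature: sample a slow 'simulator' once,
# fit a cheap model to the samples, then answer new queries from the fit.
# The function below is an illustrative stand-in, not GEMS or real
# cement chemistry.
def expensive_simulator(x):
    """Pretend physical model: material quality as a function of one
    composition variable x in [0, 1]."""
    return np.sin(3 * x) + 0.5 * x**2

# 1) Generate training data with the slow model (done once, offline).
x_train = np.linspace(0.0, 1.0, 50)
y_train = expensive_simulator(x_train)

# 2) Fit a cheap surrogate -- here a cubic polynomial by least squares;
#    PSI used a neural network for the same role.
coeffs = np.polyfit(x_train, y_train, deg=3)

# 3) New queries go to the surrogate, not the simulator.
x_new = 0.37
y_fast = np.polyval(coeffs, x_new)
y_true = expensive_simulator(x_new)
print(f"surrogate: {y_fast:.3f}  simulator: {y_true:.3f}")
```

    The speedup comes entirely from step 3: once the fit exists, each query is a handful of arithmetic operations instead of a full physical computation.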
    From output to input
    How can this AI now be used to find optimal cement formulations – with the lowest possible CO2 emissions and high material quality? One possibility would be to try out various formulations, use the AI model to calculate their properties, and then select the best variants. A more efficient approach, however, is to reverse the process. Instead of trying out all options, ask the question the other way around: Which cement composition meets the desired specifications regarding CO2 balance and material quality?

    Both the mechanical properties and the CO2 emissions depend directly on the recipe. “Viewed mathematically, both variables are functions of the composition – if this changes, the respective properties also change,” the mathematician explains. To determine an optimal recipe, the researchers formulate the problem as a mathematical optimisation task: They are looking for a composition that simultaneously maximises mechanical properties and minimises CO2 emissions. “Basically, we are looking for a maximum and a minimum – from this we can directly deduce the desired formulation,” the mathematician says.
    To find the solution, the team integrated an additional AI technique into the workflow: genetic algorithms, computer-assisted methods inspired by natural selection. This enabled them to selectively identify formulations that ideally combine the two target variables.
    The advantage of this “reverse approach”: You no longer have to blindly test countless recipes and then evaluate their resulting properties; instead you can specifically search for those that meet specific desired criteria – in this case, maximum mechanical properties with minimum CO2 emissions.
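    A genetic algorithm for this kind of reverse search can be sketched on a toy one-variable “recipe” problem, where quality rises with clinker content but so do emissions; all functions and numbers here are illustrative stand-ins, not PSI’s models:

```python
import random

# Toy genetic algorithm: find the clinker fraction x in [0, 1] that
# maximises quality(x) - weight * co2(x). Both objective functions are
# illustrative stand-ins for the trained surrogate models.
def quality(x):   # strength rises with clinker content, saturating
    return x ** 0.5

def co2(x):       # emissions rise with clinker content
    return x

def fitness(x, weight=0.6):
    return quality(x) - weight * co2(x)

random.seed(1)
pop = [random.random() for _ in range(30)]  # random initial 'recipes'
for _ in range(60):
    # Selection: keep the better half of the population.
    pop.sort(key=fitness, reverse=True)
    parents = pop[:15]
    # Crossover + mutation: children near the average of two parents.
    children = []
    for _ in range(15):
        a, b = random.sample(parents, 2)
        child = (a + b) / 2 + random.gauss(0, 0.02)
        children.append(min(1.0, max(0.0, child)))
    pop = parents + children

best = max(pop, key=fitness)
print(f"best clinker fraction: {best:.3f}")
```

    With elitist selection and small mutations, the population converges toward the composition that best trades off the two objectives; for these stand-in functions the optimum is at x = (0.5/0.6)² ≈ 0.69, found here without ever enumerating all recipes.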
    Interdisciplinary approach with great potential
    Among the cement formulations identified by the researchers, there are already some promising candidates. “Some of these formulations have real potential,” says John Provis, “not only in terms of CO2 reduction and quality, but also in terms of practical feasibility in production.” To complete the development cycle, however, the recipes must first be tested in the laboratory. “We’re not going to build a tower with them right away without testing them first,” Nikolaos Prasianakis says with a smile.
    The study primarily serves as a proof of concept – that is, as evidence that promising formulations can be identified purely by mathematical calculation. “We can extend our AI modelling tool as required and integrate additional aspects, such as the production or availability of raw materials, or where the building material is to be used – for example, in a marine environment, where cement and concrete behave differently, or even in the desert,” says Romana Boiger. Nikolaos Prasianakis is already looking ahead: “This is just the beginning. The time savings offered by such a general workflow are enormous – making it a very promising approach for all sorts of material and system designs.”
    Without the interdisciplinary background of the researchers, the project would never have come to fruition: “We needed cement chemists, thermodynamics experts, AI specialists – and a team that could bring all of this together,” Prasianakis says. “Added to this was the important exchange with other research institutions such as EMPA within the framework of the SCENE project.” SCENE (the Swiss Centre of Excellence on Net Zero Emissions) is an interdisciplinary research programme that aims to develop scientifically sound solutions for drastically reducing greenhouse gas emissions in industry and the energy supply. The study was carried out as part of this project.

  • From shortage to supremacy: How Sandia and the CHIPS Act aim to reboot US chip power

    Sandia National Laboratories has joined a new partnership aimed at helping the United States regain its leadership in semiconductor manufacturing.
    While the U.S. was considered a powerhouse in chip production in the 1990s, fabricating more than 35% of the world’s semiconductors, that share has since dropped to 12%. Today, the U.S. manufactures none of the world’s most advanced chips, which power technologies like smartphones, owned by 71% of the world’s population, as well as self-driving cars, quantum computers, and artificial intelligence-powered devices and programs.
    Sandia hopes to help change that. It recently became the first national lab to join the U.S. National Semiconductor Technology Center. The NSTC was established under the CHIPS and Science Act to accelerate innovation and address some of the country’s most pressing technology challenges.
    “We have pioneered the way for other labs to join,” said Mary Monson, Sandia’s senior manager of Technology Partnerships and Business Development. “The CHIPS Act has brought the band back together, you could say. By including the national labs, U.S. companies, and academia, it’s really a force multiplier.”
    Sandia has a long history of contributing to the semiconductor industry through research and development partnerships, its Microsystems Engineering, Science and Applications facility known as MESA, and its advanced cleanrooms for developing next-generation technologies. Through its NSTC partnerships, Sandia hopes to strengthen U.S. semiconductor manufacturing and research and development, enhance national security production, and foster the innovation of new technologies that sets the nation apart globally.
    “The big goal is to strengthen capabilities. Industry is moving fast, so we are keeping abreast of everything happening and incorporating what will help us deliver more efficiently on our national security mission. It’s about looking at innovative ways of partnering and expediting the process,” Monson said.
    The urgency of the effort is evident. The pandemic provided a perfect example, as car lots were left bare and manufacturers sat idle, waiting for chips to be produced to build new vehicles.

    “An average car contains 1,400 chips and electric vehicles use more than 3,000,” said Rick McCormick, Sandia’s senior scientist for semiconductor technology strategy. McCormick is helping lead Sandia’s new role. “Other nations around the globe are investing more than $300 billion to be leaders in semiconductor manufacturing. The U.S. CHIPS Act is our way of ‘keeping up with the Joneses.’ One goal is for the U.S. to have more than 25% of the global capacity for state-of-the-art chips by 2032.”
    Sandia is positioned to play a key role in creating the chips of the future.
    “More than $12 billion in research and development spending is planned under CHIPS, including a $3 billion program to create an ecosystem for packaging assemblies of chiplets,” McCormick said. “These chiplets communicate at low energy and high speed as if they were a large expensive chip.”
    Modern commercial AI processors use this approach, and Sandia’s resources and partnerships can help expand access to small companies and national security applications. MESA already fabricates high-reliability chiplet assembly products for the stockpile and nonproliferation applications.
    McCormick said Sandia could also play a major role in training the workforce of the future. The government has invested billions of dollars in new factories, all of which need to be staffed by STEM students.
    “There is a potential crisis looming,” McCormick said. “The Semiconductor Industry Association anticipates that the U.S. will need 60,000 to 70,000 more workers, so we need to help engage the STEM workforce. That effort will also help Sandia bolster its staffing pipeline.”
    As part of its membership, Sandia will offer access to some of its facilities to other NSTC members, fostering collaboration and partnerships. Tech transfer is a core part of Sandia’s missions, and this initiative will build on that by helping private partners increase their stake in the industry while enabling Sandia to build on its own mission.
    “We will be helping develop suppliers and strengthen our capabilities,” Monson said. “We are a government resource for semiconductor knowledge. We are in this evolving landscape and have a front row seat to what it will look like over the next 20 years. We are helping support technology and strengthening our national security capabilities and mission delivery.”