More stories

  • MIT’s tiny 5G receiver could make smart devices last longer and work anywhere

    MIT researchers have designed a compact, low-power receiver for 5G-compatible smart devices that is about 30 times more resilient to a certain type of interference than some traditional wireless receivers.
    The low-cost receiver would be ideal for battery-powered internet of things (IoT) devices like environmental sensors, smart thermostats, or other devices that need to run continuously for a long time, such as health wearables, smart cameras, or industrial monitoring sensors.
    The researchers’ chip uses a passive filtering mechanism that consumes less than a milliwatt of static power while protecting both the input and output of the receiver’s amplifier from unwanted wireless signals that could jam the device.
    Key to the new approach is a novel arrangement of precharged, stacked capacitors, which are connected by a network of tiny switches. These minuscule switches need much less power to be turned on and off than those typically used in IoT receivers.
    The receiver’s capacitor network and amplifier are carefully arranged to leverage a phenomenon in amplification that allows the chip to use much smaller capacitors than would typically be necessary.
    “This receiver could help expand the capabilities of IoT gadgets. Smart devices like health monitors or industrial sensors could become smaller and have longer battery lives. They would also be more reliable in crowded radio environments, such as factory floors or smart city networks,” says Soroush Araei, an electrical engineering and computer science (EECS) graduate student at MIT and lead author of a paper on the receiver.
    He is joined on the paper by Mohammad Barzgari, a postdoc in the MIT Research Laboratory of Electronics (RLE); Haibo Yang, an EECS graduate student; and senior author Negar Reiskarimian, the X-Window Consortium Career Development Assistant Professor in EECS at MIT and a member of the Microsystems Technology Laboratories and RLE. The research was recently presented at the IEEE Radio Frequency Integrated Circuits Symposium.

    A new standard
    A receiver acts as the intermediary between an IoT device and its environment. Its job is to detect and amplify a wireless signal, filter out any interference, and then convert it into digital data for processing.
    Traditionally, IoT receivers operate on fixed frequencies and suppress interference using a single narrow-band filter, which is simple and inexpensive.
    But the new technical specifications of the 5G mobile network enable reduced-capability devices that are more affordable and energy-efficient. This opens a range of IoT applications to the faster data speeds and increased network capability of 5G. These next-generation IoT devices need receivers that can tune across a wide range of frequencies while still being cost-effective and low-power.
    “This is extremely challenging because now we need to not only think about the power and cost of the receiver, but also flexibility to address numerous interferers that exist in the environment,” Araei says.
    To reduce the size, cost, and power consumption of an IoT device, engineers can’t rely on the bulky, off-chip filters that are typically used in devices that operate on a wide frequency range.

    One solution is to use a network of on-chip capacitors that can filter out unwanted signals. But these capacitor networks are prone to a special type of signal noise known as harmonic interference.
    In prior work, the MIT researchers developed a novel switch-capacitor network that targets these harmonic signals as early as possible in the receiver chain, filtering out unwanted signals before they are amplified and converted into digital bits for processing.
    Shrinking the circuit
    Here, they extended that approach by using the novel switch-capacitor network as the feedback path in an amplifier with negative gain. This configuration leverages the Miller effect, a phenomenon that enables small capacitors to behave like much larger ones.
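    To make this concrete, here is a minimal back-of-envelope sketch of the Miller effect; the capacitance and gain values are hypothetical placeholders, not the chip’s actual parameters. A feedback capacitor C_f across an inverting amplifier with gain −A looks roughly (1 + A) times larger when seen from the amplifier’s input, which is why a physically small on-chip capacitor can do the filtering work of a much larger one.
    ```python
    # Hypothetical illustration of the Miller effect: a small feedback capacitor
    # across an inverting amplifier appears (1 + A) times larger at its input.
    def miller_input_capacitance(c_feedback_pf: float, gain_magnitude: float) -> float:
        """Effective input capacitance (pF) of a feedback capacitor C_f placed
        across an inverting amplifier with voltage gain -A."""
        return c_feedback_pf * (1.0 + gain_magnitude)

    # Example numbers (made up for illustration only).
    c_f = 2.0    # pF of physical on-chip capacitance
    gain = 20.0  # magnitude of the amplifier's inverting gain
    print(miller_input_capacitance(c_f, gain))  # -> 42.0 pF "seen" at the input
    ```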
    “This trick lets us meet the filtering requirement for narrow-band IoT without physically large components, which drastically shrinks the size of the circuit,” Araei says.
    Their receiver has an active area of less than 0.05 square millimeters.
    One challenge the researchers had to overcome was determining how to apply enough voltage to drive the switches while keeping the overall power supply of the chip at only 0.6 volts.
    In the presence of interfering signals, such tiny switches can turn on and off in error, especially if the voltage required for switching is extremely low.
    To address this, the researchers came up with a novel solution, using a special circuit technique called bootstrap clocking. This method boosts the control voltage just enough to ensure the switches operate reliably while using less power and fewer components than traditional clock boosting methods.
    Taken together, these innovations enable the new receiver to consume less than a milliwatt of power while blocking about 30 times more harmonic interference than traditional IoT receivers.
    “Our chip also is very quiet, in terms of not polluting the airwaves. This comes from the fact that our switches are very small, so the amount of signal that can leak out of the antenna is also very small,” Araei adds.
    Because their receiver is smaller than traditional devices and relies on switches and precharged capacitors instead of more complex electronics, it could be more cost-effective to fabricate. In addition, since the receiver design can cover a wide range of signal frequencies, it could be implemented on a variety of current and future IoT devices.
    Now that they have developed this prototype, the researchers want to enable the receiver to operate without a dedicated power supply, perhaps by harvesting Wi-Fi or Bluetooth signals from the environment to power the chip.
    This research is supported, in part, by the National Science Foundation.

  • Scientists create ‘universal translator’ for quantum tech

    UBC researchers are proposing a solution to a key hurdle in quantum networking: a device that can “translate” microwave to optical signals and vice versa.
    The technology could serve as a universal translator for quantum computers — enabling them to talk to each other over long distances and converting up to 95 per cent of a signal with virtually no noise. And it all fits on a silicon chip, the same material found in everyday computers.
    “It’s like finding a translator that gets nearly every word right, keeps the message intact and adds no background chatter,” says study author Mohammad Khalifa, who conducted the research during his PhD at UBC’s faculty of applied science and the UBC Blusson Quantum Matter Institute.
    “Most importantly, this device preserves the quantum connections between distant particles and works in both directions. Without that, you’d just have expensive individual computers. With it, you get a true quantum network.”
    How it works
    Quantum computers process information using microwave signals. But to send that information across cities or continents, it needs to be converted into optical signals that travel through fibre optic cables. These signals are so fragile, even tiny disturbances during translation can destroy them.
    That’s a problem for entanglement, the phenomenon quantum computers rely on, where two particles remain connected regardless of distance. Einstein called it “spooky action at a distance.” Losing that connection means losing the quantum advantage. The UBC device, described in npj Quantum Information, could enable long-distance quantum communication while preserving these entangled links.

    The silicon solution
    The team’s model is a microwave-optical photon converter that can be fabricated on a silicon wafer. The breakthrough lies in tiny engineered flaws, magnetic defects intentionally embedded in silicon to control its properties. When microwave and optical signals are precisely tuned, electrons in these defects convert one signal to the other without absorbing energy, avoiding the instability that plagues other transformation methods.
    The device also runs efficiently at extremely low power — just millionths of a watt. The authors outlined a practical design that uses superconducting components, materials that conduct electricity perfectly, alongside this specially engineered silicon.
    What’s next
    While the work is still theoretical, it marks an important step in quantum networking.
    “We’re not getting a quantum internet tomorrow — but this clears a major roadblock,” says the study’s senior author Dr. Joseph Salfi, an assistant professor in the department of electrical and computer engineering and principal investigator at UBC Blusson QMI.
    “Currently, reliably sending quantum information between cities remains challenging. Our approach could change that: silicon-based converters could be built using existing chip fabrication technology and easily integrated into today’s communication infrastructure.”
    Eventually, quantum networks could enable virtually unbreakable online security, GPS that works indoors, and the power to tackle problems beyond today’s reach such as designing new medicines or predicting weather with dramatically improved accuracy.

  • AI at light speed: How glass fibers could replace silicon brains

    Imagine a computer that does not rely only on electronics but uses light to perform tasks faster and more efficiently. A collaboration between two research teams, from Tampere University in Finland and Université Marie et Louis Pasteur in France, has now demonstrated a novel way of processing information using light and optical fibers, opening up the possibility of building ultra-fast computers.
    The study, performed by postdoctoral researchers Dr. Mathilde Hary from Tampere University and Dr. Andrei Ermolaev from the Université Marie et Louis Pasteur, Besançon, demonstrated how laser light inside thin glass fibers can mimic the way artificial intelligence (AI) processes information. Their work investigated a particular class of computing architecture known as an Extreme Learning Machine, an approach inspired by neural networks.
    “Instead of using conventional electronics and algorithms, computation is achieved by taking advantage of the nonlinear interaction between intense light pulses and the glass,” Hary and Ermolaev explain.
    Traditional electronics is approaching its limits in terms of bandwidth, data throughput, and power consumption. AI models are growing larger and more energy-hungry, and electronics can process data only up to a certain speed. Optical fibers, on the other hand, can transform input signals at speeds thousands of times faster and amplify tiny differences via extreme nonlinear interactions to make them discernible.
    Towards efficient computing
    In their recent work, the researchers used femtosecond laser pulses (a billion times shorter than a camera flash) and an optical fiber confining light in an area smaller than a fraction of a human hair’s width to demonstrate the working principle of an optical ELM system. The pulses are short enough to contain a large number of different wavelengths, or colors. By sending these into the fiber with relative delays encoded according to an image, the researchers showed that the spectrum of wavelengths at the output of the fiber, transformed by the nonlinear interaction of light and glass, contains sufficient information to classify handwritten digits (like those used in the popular MNIST AI benchmark). According to the researchers, the best systems reached an accuracy of over 91%, close to state-of-the-art digital methods, in under one picosecond.
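    As a rough software analogue of this working principle (not a model of the optical setup itself), an extreme learning machine applies a fixed, random nonlinear transformation to its input – here standing in for the fiber’s nonlinear mixing of wavelengths – and trains only a simple linear readout on the transformed features. Below is a minimal sketch with NumPy, using synthetic data in place of the MNIST images.
    ```python
    # Minimal extreme-learning-machine sketch: a fixed random nonlinear projection
    # (a software stand-in for the fiber's nonlinear light-glass interaction)
    # followed by a linear readout trained with ridge regression. Synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_inputs, n_hidden, n_classes = 1000, 64, 512, 10

    X = rng.normal(size=(n_samples, n_inputs))      # stand-in for encoded images
    y = rng.integers(0, n_classes, size=n_samples)  # stand-in for digit labels
    Y = np.eye(n_classes)[y]                        # one-hot targets

    # Fixed random projection: never trained; only the readout below is.
    W_in = rng.normal(size=(n_inputs, n_hidden))
    H = np.tanh(X @ W_in)                           # nonlinear feature expansion

    # Closed-form ridge-regression readout.
    lam = 1e-2
    W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)

    pred = (H @ W_out).argmax(axis=1)
    print("training accuracy:", (pred == y).mean())
    ```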
    What is remarkable is that the best results did not come from the maximum level of nonlinear interaction or complexity, but rather from a delicate balance between fiber length, dispersion (the difference in propagation speed between wavelengths), and power levels.

    “Performance is not simply a matter of pushing more power through the fiber. It depends on how precisely the light is initially structured, in other words how the information is encoded, and how it interacts with the fiber properties,” says Hary.
    By harnessing the potential of light, this research could pave the way towards new ways of computing while exploring routes towards more efficient architectures.
    “Our models show how dispersion, nonlinearity and even quantum noise influence performance, providing critical knowledge for designing the next generation of hybrid optical-electronic AI systems,” continues Ermolaev.
    Advancing optical nonlinearity through collaborative research in AI and photonics
    Both research teams are internationally recognized for their expertise in nonlinear light-matter interactions. Their collaboration brings together theoretical understanding and state-of-the-art experimental capabilities to harness optical nonlinearity for various applications.
    “This work demonstrates how fundamental research in nonlinear fiber optics can drive new approaches to computation. By merging physics and machine learning, we are opening new paths toward ultrafast and energy-efficient AI hardware,” say Professors Goëry Genty from Tampere University and John Dudley and Daniel Brunner from the Université Marie et Louis Pasteur, who led the teams.
    The research combines nonlinear fiber optics and applied AI to explore new types of computing. In the future, the teams aim to build on-chip optical systems that can operate in real time and outside the lab. Potential applications range from real-time signal processing to environmental monitoring and high-speed AI inference.
    The project is funded by the Research Council of Finland, the French National Research Agency and the European Research Council.

  • Thinking AI models emit 50x more CO2—and often for nothing

    No matter which questions we ask an AI, the model will come up with an answer. To produce this information – regardless of whether the answer is correct or not – the model uses tokens. Tokens are words or parts of words that are converted into a string of numbers that can be processed by the LLM.
    This conversion, as well as other computing processes, produce CO2 emissions. Many users, however, are unaware of the substantial carbon footprint associated with these technologies. Now, researchers in Germany measured and compared CO2 emissions of different, already trained, LLMs using a set of standardized questions.
    “The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions,” said Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences and first author of the Frontiers in Communication study. “We found that reasoning-enabled models produced up to 50 times more CO2 emissions than concise response models.”
    ‘Thinking’ AI causes most emissions
    The researchers evaluated 14 LLMs ranging from seven to 72 billion parameters on 1,000 benchmark questions across diverse subjects. Parameters determine how LLMs learn and process information.
    Reasoning models, on average, created 543.5 ‘thinking’ tokens per question, whereas concise models required just 37.7 tokens per question. Thinking tokens are additional tokens that reasoning LLMs generate before producing an answer. A higher token footprint always means higher CO2 emissions. It doesn’t, however, necessarily mean the resulting answers are more correct, as elaborate detail is not always essential for correctness.
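    As a rough illustration of the arithmetic, the sketch below turns the reported average token counts into emissions using a per-token CO2 factor that is a made-up placeholder; real values depend on the model, the hardware and the local energy mix.
    ```python
    # Back-of-envelope comparison based on the reported average token counts.
    # The grams-of-CO2-per-token factor is a hypothetical placeholder, not a
    # measured value; per-token cost also differs between models and hardware.
    reasoning_tokens_per_q = 543.5
    concise_tokens_per_q = 37.7
    co2_g_per_token = 0.01  # assumed placeholder

    for name, tokens in [("reasoning", reasoning_tokens_per_q),
                         ("concise", concise_tokens_per_q)]:
        print(f"{name}: ~{tokens * co2_g_per_token:.2f} g CO2e per question")

    # Ratio of token footprints (and hence, to first order, of emissions):
    print("token ratio:", round(reasoning_tokens_per_q / concise_tokens_per_q, 1))
    ```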
    The most accurate model was the reasoning-enabled Cogito model with 70 billion parameters, reaching 84.9% accuracy. The model produced three times more CO2 emissions than similarly sized models that generated concise answers. “Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies,” said Dauner. “None of the models that kept emissions below 500 grams of CO2 equivalent achieved higher than 80% accuracy in answering the 1,000 questions correctly.” CO2 equivalent is the unit used to measure the climate impact of various greenhouse gases.

    Subject matter also resulted in significantly different levels of CO2 emissions. Questions that required lengthy reasoning processes, for example abstract algebra or philosophy, led to up to six times higher emissions than more straightforward subjects, like high school history.
    Practicing thoughtful use
    The researchers said they hope their work will cause people to make more informed decisions about their own AI use. “Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,” Dauner pointed out.
    Choice of model, for instance, can make a significant difference in CO2 emissions. For example, having DeepSeek R1 (70 billion parameters) answer 600,000 questions would create CO2 emissions equal to a round-trip flight from London to New York. Meanwhile, Qwen 2.5 (72 billion parameters) can answer more than three times as many questions (about 1.9 million) with similar accuracy rates while generating the same emissions.
    The researchers said that their results may be impacted by the choice of hardware used in the study, an emission factor that may vary regionally depending on local energy grid mixes, and the examined models. These factors may limit the generalizability of the results.
    “If users know the exact CO2 cost of their AI-generated outputs, such as casually turning themselves into an action figure, they might be more selective and thoughtful about when and how they use these technologies,” Dauner concluded.

  • The AI that writes climate-friendly cement recipes in seconds

    The cement industry produces around eight percent of global CO2 emissions – more than the entire aviation sector worldwide. Researchers at the Paul Scherrer Institute PSI have developed an AI-based model that helps to accelerate the discovery of new cement formulations that could yield the same material quality with a better carbon footprint.
    The rotary kilns in cement plants are heated to a scorching 1,400 degrees Celsius to burn ground limestone down to clinker, the raw material for ready-to-use cement. Unsurprisingly, such temperatures typically can’t be achieved with electricity alone. They are the result of energy-intensive combustion processes that emit large amounts of carbon dioxide (CO2). What may be surprising, however, is that the combustion process accounts for far less than half of these emissions. The majority is contained in the raw materials needed to produce clinker and cement: CO2 that is chemically bound in the limestone is released during its transformation in the high-temperature kilns.
    One promising strategy for reducing emissions is to modify the cement recipe itself – replacing some of the clinker with alternative cementitious materials. That is exactly what an interdisciplinary team in the Laboratory for Waste Management in PSI’s Center for Nuclear Engineering and Sciences has been investigating. Instead of relying solely on time-consuming experiments or complex simulations, the researchers developed a modelling approach based on machine learning. “This allows us to simulate and optimise cement formulations so that they emit significantly less CO2 while maintaining the same high level of mechanical performance,” explains mathematician Romana Boiger, first author of the study. “Instead of testing thousands of variations in the lab, we can use our model to generate practical recipe suggestions within seconds – it’s like having a digital cookbook for climate-friendly cement.”
    With their novel approach, the researchers were able to selectively filter out those cement formulations that could meet the desired criteria. “The range of possibilities for the material composition – which ultimately determines the final properties – is extraordinarily vast,” says Nikolaos Prasianakis, head of the Transport Mechanisms Research Group at PSI, who was the initiator and co-author of the study. “Our method allows us to significantly accelerate the development cycle by selecting promising candidates for further experimental investigation.” The results of the study were published in the journal Materials and Structures.
    The right recipe
    Industrial by-products such as slag from iron production and fly ash from coal-fired power plants are already being used to partially replace clinker in cement formulations and thus reduce CO2 emissions. However, the global demand for cement is so enormous that these materials alone cannot meet the need. “What we need is the right combination of materials that are available in large quantities and from which high-quality, reliable cement can be produced,” says John Provis, head of the Cement Systems Research Group at PSI and co-author of the study.

    Finding such combinations, however, is challenging: “Cement is basically a mineral binding agent – in concrete, we use cement, water, and gravel to artificially create minerals that hold the entire material together,” Provis explains. “You could say we’re doing geology in fast motion.” This geology – or rather, the set of physical processes behind it – is enormously complex, and modelling it on a computer is correspondingly computationally intensive and expensive. That is why the research team is relying on artificial intelligence.
    AI as computational accelerator
    Artificial neural networks are computer models that are trained, using existing data, to speed up complex calculations. During training, the network is fed a known data set and learns from it by adjusting the relative strength or “weighting” of its internal connections so that it can quickly and reliably predict similar relationships. This weighting serves as a kind of shortcut – a faster alternative to otherwise computationally intensive physical modelling.
    The researchers at PSI also made use of such a neural network. They themselves generated the data required for training: “With the help of the open-source thermodynamic modelling software GEMS, developed at PSI, we calculated – for various cement formulations – which minerals form during hardening and which geochemical processes take place,” explains Nikolaos Prasianakis. By combining these results with experimental data and mechanical models, the researchers were able to derive a reliable indicator for mechanical properties – and thus for the material quality of the cement. For each component used, they also applied a corresponding CO2 factor, a specific emission value that made it possible to determine the total CO2 emissions. “That was a very complex and computationally intensive modelling exercise,” the scientist says.
    But it was worth the effort – with the data generated in this way, the AI model was able to learn. “Instead of seconds or minutes, the trained neural network can now calculate mechanical properties for an arbitrary cement recipe in milliseconds – that is, around a thousand times faster than with traditional modelling,” Boiger explains.
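    The surrogate idea can be sketched in a few lines: train a small regression network on simulated composition-to-property pairs, then query it almost instantly for new recipes. The example below is a hedged illustration using scikit-learn and entirely synthetic data; it is not the GEMS-derived dataset or the model used at PSI.
    ```python
    # Toy surrogate: learn a mapping from cement composition to a strength
    # indicator and a CO2 score, then evaluate new recipes almost instantly.
    # All data below are synthetic stand-ins, not the GEMS/PSI training set.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    n_recipes, n_components = 2000, 6
    X = rng.dirichlet(np.ones(n_components), size=n_recipes)  # mix fractions sum to 1

    # Invented "ground truth": strength and CO2 as smooth functions of composition.
    strength = X @ np.array([1.0, 0.8, 0.6, 0.4, 0.3, 0.2]) + 0.05 * np.sin(10 * X[:, 0])
    co2 = X @ np.array([0.9, 0.5, 0.2, 0.15, 0.1, 0.05])
    Y = np.column_stack([strength, co2])

    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    surrogate.fit(X, Y)

    new_recipe = rng.dirichlet(np.ones(n_components))
    pred_strength, pred_co2 = surrogate.predict(new_recipe.reshape(1, -1))[0]
    print(f"predicted strength indicator: {pred_strength:.3f}, CO2 score: {pred_co2:.3f}")
    ```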
    From output to input
    How can this AI now be used to find optimal cement formulations – with the lowest possible CO2 emissions and high material quality? One possibility would be to try out various formulations, use the AI model to calculate their properties, and then select the best variants. A more efficient approach, however, is to reverse the process. Instead of trying out all options, ask the question the other way around: Which cement composition meets the desired specifications regarding CO2 balance and material quality?

    Both the mechanical properties and the CO2 emissions depend directly on the recipe. “Viewed mathematically, both variables are functions of the composition – if this changes, the respective properties also change,” the mathematician explains. To determine an optimal recipe, the researchers formulate the problem as a mathematical optimisation task: They are looking for a composition that simultaneously maximises mechanical properties and minimises CO2 emissions. “Basically, we are looking for a maximum and a minimum – from this we can directly deduce the desired formulation,” the mathematician says.
    To find the solution, the team integrated an additional AI technique into the workflow: genetic algorithms, computer-assisted methods inspired by natural selection. This enabled them to selectively identify formulations that ideally combine the two target variables.
    The advantage of this “reverse approach” is that you no longer have to blindly test countless recipes and then evaluate their resulting properties; instead, you can search directly for those that meet the desired criteria – in this case, maximum mechanical properties with minimum CO2 emissions.
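    A minimal sketch of this reverse search: a simple genetic algorithm evolves candidate compositions toward high predicted strength and low predicted CO2. The property predictor below is a stand-in for a trained surrogate (in practice the neural network described above would play that role), and the weighting of the two objectives is an assumption made purely for illustration.
    ```python
    # Toy genetic algorithm for the "reverse" search: evolve compositions that
    # score well on predicted strength and CO2. Illustrative sketch only, not
    # the optimisation workflow used in the study.
    import numpy as np

    rng = np.random.default_rng(2)
    n_components, pop_size, n_generations = 6, 80, 60

    def predict_properties(pop):
        """Stand-in surrogate: returns [strength, co2] for each candidate recipe."""
        strength = pop @ np.array([1.0, 0.8, 0.6, 0.4, 0.3, 0.2])
        co2 = pop @ np.array([0.9, 0.5, 0.2, 0.15, 0.1, 0.05])
        return np.column_stack([strength, co2])

    def fitness(pop):
        props = predict_properties(pop)
        return props[:, 0] - 2.0 * props[:, 1]       # assumed strength-vs-CO2 weighting

    def normalise(pop):
        pop = np.clip(pop, 1e-6, None)
        return pop / pop.sum(axis=1, keepdims=True)  # mix fractions must sum to 1

    population = rng.dirichlet(np.ones(n_components), size=pop_size)
    for _ in range(n_generations):
        parents = population[np.argsort(fitness(population))[-pop_size // 2:]]   # selection
        a = parents[rng.integers(0, len(parents), size=pop_size)]
        b = parents[rng.integers(0, len(parents), size=pop_size)]
        mask = rng.random((pop_size, n_components)) < 0.5
        children = np.where(mask, a, b)                                           # crossover
        population = normalise(children + rng.normal(scale=0.02, size=children.shape))  # mutation

    best = population[np.argmax(fitness(population))]
    print("best candidate composition:", np.round(best, 3))
    ```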
    Interdisciplinary approach with great potential
    Among the cement formulations identified by the researchers, there are already some promising candidates. “Some of these formulations have real potential,” says John Provis, “not only in terms of CO2 reduction and quality, but also in terms of practical feasibility in production.” To complete the development cycle, however, the recipes must first be tested in the laboratory. “We’re not going to build a tower with them right away without testing them first,” Nikolaos Prasianakis says with a smile.
    The study primarily serves as a proof of concept – that is, as evidence that promising formulations can be identified purely by mathematical calculation. “We can extend our AI modelling tool as required and integrate additional aspects, such as the production or availability of raw materials, or where the building material is to be used – for example, in a marine environment, where cement and concrete behave differently, or even in the desert,” says Romana Boiger. Nikolaos Prasianakis is already looking ahead: “This is just the beginning. The time savings offered by such a general workflow are enormous – making it a very promising approach for all sorts of material and system designs.”
    Without the interdisciplinary background of the researchers, the project would never have come to fruition: “We needed cement chemists, thermodynamics experts, AI specialists – and a team that could bring all of this together,” Prasianakis says. “Added to this was the important exchange with other research institutions such as EMPA within the framework of the SCENE project.” SCENE (the Swiss Centre of Excellence on Net Zero Emissions) is an interdisciplinary research programme that aims to develop scientifically sound solutions for drastically reducing greenhouse gas emissions in industry and the energy supply. The study was carried out as part of this project.

  • From shortage to supremacy: How Sandia and the CHIPS Act aim to reboot US chip power

    Sandia National Laboratories has joined a new partnership aimed at helping the United States regain its leadership in semiconductor manufacturing.
    While the U.S. was considered a powerhouse in chip production in the 1990s, fabricating more than 35% of the world’s semiconductors, that share has since dropped to 12%. Today, the U.S. manufactures none of the world’s most advanced chips, which power technologies like smartphones (owned by 71% of the world’s population) as well as self-driving cars, quantum computers, and artificial intelligence-powered devices and programs.
    Sandia hopes to help change that. It recently became the first national lab to join the U.S. National Semiconductor Technology Center. The NSTC was established under the CHIPS and Science Act to accelerate innovation and address some of the country’s most pressing technology challenges.
    “We have pioneered the way for other labs to join,” said Mary Monson, Sandia’s senior manager of Technology Partnerships and Business Development. “The CHIPS Act has brought the band back together, you could say. By including the national labs, U.S. companies, and academia, it’s really a force multiplier.”
    Sandia has a long history of contributing to the semiconductor industry through research and development partnerships, its Microsystems Engineering, Science and Applications facility known as MESA, and its advanced cleanrooms for developing next-generation technologies. Through its NSTC partnerships, Sandia hopes to strengthen U.S. semiconductor manufacturing and research and development, enhance national security production, and foster the innovation of new technologies that set the nation apart globally.
    “The big goal is to strengthen capabilities. Industry is moving fast, so we are keeping abreast of everything happening and incorporating what will help us deliver more efficiently on our national security mission. It’s about looking at innovative ways of partnering and expediting the process,” Monson said.
    The urgency of the effort is evident. The pandemic provided a perfect example, as car lots were left bare and manufacturers sat idle, waiting for chips to be produced to build new vehicles.

    “An average car contains 1,400 chips and electric vehicles use more than 3,000,” said Rick McCormick, Sandia’s senior scientist for semiconductor technology strategy. McCormick is helping lead Sandia’s new role. “Other nations around the globe are investing more than $300 billion to be leaders in semiconductor manufacturing. The U.S. CHIPS Act is our way of ‘keeping up with the Joneses.’ One goal is for the U.S. to have more than 25% of the global capacity for state-of-the-art chips by 2032.”
    Sandia is positioned to play a key role in creating the chips of the future.
    “More than $12 billion in research and development spending is planned under CHIPS, including a $3 billion program to create an ecosystem for packaging assemblies of chiplets,” McCormick said. “These chiplets communicate at low energy and high speed as if they were a large expensive chip.”
    Modern commercial AI processors use this approach, and Sandia’s resources and partnerships can help expand access to small companies and national security applications. MESA already fabricates high-reliability chiplet assembly products for the stockpile and nonproliferation applications.
    McCormick said Sandia could also play a major role in training the workforce of the future. The government has invested billions of dollars in new factories, all of which need to be staffed by STEM students.
    “There is a potential crisis looming,” McCormick said. “The Semiconductor Industry Association anticipates that the U.S. will need 60,000 to 70,000 more workers, so we need to help engage the STEM workforce. That effort will also help Sandia bolster its staffing pipeline.”
    As part of its membership, Sandia will offer access to some of its facilities to other NSTC members, fostering collaboration and partnerships. Tech transfer is a core part of Sandia’s missions, and this initiative will build on that by helping private partners increase their stake in the industry while enabling Sandia to build on its own mission.
    “We will be helping develop suppliers and strengthen our capabilities,” Monson said. “We are a government resource for semiconductor knowledge. We are in this evolving landscape and have a front row seat to what it will look like over the next 20 years. We are helping support technology and strengthening our national security capabilities and mission delivery.”

  • Robots that feel heat, pain, and pressure? This new “skin” makes it possible

    Scientists have developed a low-cost, durable, highly sensitive robotic ‘skin’ that can be added to robotic hands like a glove, enabling robots to detect information about their surroundings in a way that’s similar to humans.
    The researchers, from the University of Cambridge and University College London (UCL), developed the flexible, conductive skin, which is easy to fabricate and can be melted down and formed into a wide range of complex shapes. The technology senses and processes a range of physical inputs, allowing robots to interact with the physical world in a more meaningful way.
    Unlike other solutions for robotic touch, which typically work via sensors embedded in small areas and require different sensors to detect different types of touch, the entirety of the electronic skin developed by the Cambridge and UCL researchers is a sensor, bringing it closer to our own sensory system: our skin.
    Although the robotic skin is not as sensitive as human skin, it can detect signals from over 860,000 tiny pathways in the material, enabling it to recognise different types of touch and pressure – like the tap of a finger, a hot or cold surface, damage caused by cutting or stabbing, or multiple points being touched at once – in a single material.
    The researchers used a combination of physical tests and machine learning techniques to help the robotic skin ‘learn’ which of these pathways matter most, so it can sense different types of contact more efficiently.
    In addition to potential future applications for humanoid robots or human prosthetics where a sense of touch is vital, the researchers say the robotic skin could be useful in industries as varied as the automotive sector or disaster relief. The results are reported in the journal Science Robotics.
    Electronic skins work by converting physical information – like pressure or temperature – into electronic signals. In most cases, different types of sensors are needed for different types of touch – one type of sensor to detect pressure, another for temperature, and so on – which are then embedded into soft, flexible materials. However, the signals from these different sensors can interfere with each other, and the materials are easily damaged.

    “Having different sensors for different types of touch leads to materials that are complex to make,” said lead author Dr David Hardman from Cambridge’s Department of Engineering. “We wanted to develop a solution that can detect multiple types of touch at once, but in a single material.”
    “At the same time, we need something that’s cheap and durable, so that it’s suitable for widespread use,” said co-author Dr Thomas George Thuruthel from UCL.
    Their solution uses one type of sensor that reacts differently to different types of touch, known as multi-modal sensing. While it’s challenging to separate out the cause of each signal, multi-modal sensing materials are easier to make and more robust.
    The researchers melted down a soft, stretchy and electrically conductive gelatine-based hydrogel, and cast it into the shape of a human hand. They tested a range of different electrode configurations to determine which gave them the most useful information about different types of touch. From just 32 electrodes placed at the wrist, they were able to collect over 1.7 million pieces of information over the whole hand, thanks to the tiny pathways in the conductive material.
    The skin was then tested on different types of touch: the researchers blasted it with a heat gun, pressed it with their fingers and a robotic arm, gently touched it with their fingers, and even cut it open with a scalpel. The team then used the data gathered during these tests to train a machine learning model so the hand would recognise what the different types of touch meant.
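    The final step – learning what the different signals mean – can be sketched as a standard supervised classification problem. The example below uses synthetic electrode readings and an off-the-shelf logistic-regression classifier; the data shapes, labels and model choice are illustrative assumptions, not the pipeline reported in Science Robotics.
    ```python
    # Toy version of the "learn what each touch means" step: classify touch type
    # from a vector of electrode readings. Synthetic data; shapes and model are
    # assumptions for illustration only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    touch_types = ["finger_tap", "hot_surface", "cold_surface", "cut", "multi_point"]
    n_readings, n_channels = 2000, 64  # e.g. measurements collected from the electrodes

    # Each touch type gets its own (invented) signature plus measurement noise.
    signatures = rng.normal(size=(len(touch_types), n_channels))
    labels = rng.integers(0, len(touch_types), size=n_readings)
    X = signatures[labels] + 0.5 * rng.normal(size=(n_readings, n_channels))

    X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    print("example prediction:", touch_types[clf.predict(X_test[:1])[0]])
    ```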
    “We’re able to squeeze a lot of information from these materials – they can take thousands of measurements very quickly,” said Hardman, who is a postdoctoral researcher in the lab of co-author Professor Fumiya Iida. “They’re measuring lots of different things at once, over a large surface area.”
    “We’re not quite at the level where the robotic skin is as good as human skin, but we think it’s better than anything else out there at the moment,” said Thuruthel. “Our method is flexible and easier to build than traditional sensors, and we’re able to calibrate it using human touch for a range of tasks.”
    In future, the researchers are hoping to improve the durability of the electronic skin, and to carry out further tests on real-world robotic tasks.
    The research was supported by the Samsung Global Research Outreach Program, the Royal Society, and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI). Fumiya Iida is a Fellow of Corpus Christi College, Cambridge.

  • AI Reveals Milky Way’s Black Hole Spins Near Top Speed

    An international team of astronomers has used artificial intelligence (AI), training a neural network on millions of synthetic simulations, to tease out new cosmic curiosities about black holes, revealing that the one at the center of our Milky Way is spinning at nearly top speed.
    These large ensembles of simulations were generated by throughput computing capabilities provided by the Center for High Throughput Computing (CHTC), a joint entity of the Morgridge Institute for Research and the University of Wisconsin-Madison. The astronomers published their results and methodology today in three papers in the journal Astronomy & Astrophysics.
    High-throughput computing, celebrating its 40th anniversary this year, was pioneered by Wisconsin computer scientist Miron Livny. It’s a novel form of distributed computing that automates computing tasks across a network of thousands of computers, essentially turning a single massive computing challenge into a supercharged fleet of smaller ones. This computing innovation is helping fuel big-data discovery across hundreds of scientific projects worldwide, including the search for cosmic neutrinos, subatomic particles, and gravitational waves, as well as efforts to unravel antibiotic resistance.
    In 2019, the Event Horizon Telescope (EHT) Collaboration released the first image of a supermassive black hole at the center of the galaxy M87. In 2022, they presented the image of the black hole at the center of our Milky Way, Sagittarius A*. However, the data behind the images still contained a wealth of hard-to-crack information. An international team of researchers trained a neural network to extract as much information as possible from the data.
    From a handful to millions
    Previous studies by the EHT Collaboration used only a handful of realistic synthetic data files. Funded by the National Science Foundation (NSF) as part of the Partnership to Advance Throughput Computing (PATh) project, the Madison-based CHTC enabled the astronomers to feed millions of such data files into a so-called Bayesian neural network, which can quantify uncertainties. This allowed the researchers to make a much better comparison between the EHT data and the models.
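    To illustrate the general idea of a neural network that also reports uncertainty, the toy example below uses one common approximation, Monte Carlo dropout: it trains a small regressor on synthetic “observable to spin” pairs and keeps dropout active at prediction time, reading the spread of repeated forward passes as an uncertainty estimate. This is only a hedged sketch of the concept, not the EHT collaboration’s Zingularity framework or its Bayesian neural network.
    ```python
    # Toy Monte Carlo dropout regressor: synthetic data, uncertainty from the
    # spread of stochastic forward passes. Not the EHT Zingularity pipeline.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    n_samples, n_features = 5000, 16
    X = torch.rand(n_samples, n_features)                 # fake "observables"
    true_spin = X[:, :4].mean(dim=1).clamp(0, 1)          # invented relation
    y = (true_spin + 0.02 * torch.randn(n_samples)).unsqueeze(1)

    model = nn.Sequential(
        nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(0.2),
        nn.Linear(64, 64), nn.ReLU(), nn.Dropout(0.2),
        nn.Linear(64, 1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for _ in range(500):                                   # quick full-batch training
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()

    model.train()                                          # keep dropout active
    with torch.no_grad():
        samples = torch.stack([model(X[:1]) for _ in range(200)])
    print(f"spin estimate: {samples.mean().item():.3f} +/- {samples.std().item():.3f}")
    ```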
    Thanks to the neural network, the researchers now suspect that the black hole at the center of the Milky Way is spinning at almost top speed. Its rotation axis points to the Earth. In addition, the emission near the black hole is mainly caused by extremely hot electrons in the surrounding accretion disk and not by a so-called jet. Also, the magnetic fields in the accretion disk appear to behave differently from the usual theories of such disks.

    “That we are defying the prevailing theory is of course exciting,” says lead researcher Michael Janssen, of Radboud University Nijmegen, the Netherlands. “However, I see our AI and machine learning approach primarily as a first step. Next, we will improve and extend the associated models and simulations.”
    Impressive scaling
    “The ability to scale up to the millions of synthetic data files required to train the model is an impressive achievement,” adds Chi-kwan Chan, an Associate Astronomer of Steward Observatory at the University of Arizona and a longtime PATh collaborator. “It requires dependable workflow automation, and effective workload distribution across storage resources and processing capacity.”
    “We are pleased to see EHT leveraging our throughput computing capabilities to bring the power of AI to their science,” says Professor Anthony Gitter, a Morgridge Investigator and a PATh Co-PI. “Like in the case of other science domains, CHTC’s capabilities allowed EHT researchers to assemble the quantity and quality of AI-ready data needed to train effective models that facilitate scientific discovery.”
    The NSF-funded Open Science Pool, operated by PATh, offers computing capacity contributed by more than 80 institutions across the United States. The Event Horizon black hole project performed more than 12 million computing jobs in the past three years.
    “A workload that consists of millions of simulations is a perfect match for our throughput-oriented capabilities that were developed and refined over four decades,” says Livny, director of the CHTC and lead investigator of PATh. “We love to collaborate with researchers who have workloads that challenge the scalability of our services.”
    Scientific papers referenced

    Deep learning inference with the Event Horizon Telescope I. Calibration improvements and a comprehensive synthetic data library. By: M. Janssen et al. In: Astronomy & Astrophysics, 6 June 2025.
    Deep learning inference with the Event Horizon Telescope II. The Zingularity framework for Bayesian artificial neural networks. By: M. Janssen et al. In: Astronomy & Astrophysics, 6 June 2025.
    Deep learning inference with the Event Horizon Telescope III. Zingularity results from the 2017 observations and predictions for future array expansions. By: M. Janssen et al. In: Astronomy & Astrophysics, 6 June 2025.