More stories

  • Toxic chemicals can be detected with new AI method

    Swedish researchers at Chalmers University of Technology and the University of Gothenburg have developed an AI method that improves the identification of toxic chemicals — based solely on knowledge of the molecular structure. The method can contribute to better control and understanding of the ever-growing number of chemicals used in society, and can also help reduce the number of animal tests.
    The use of chemicals in society is extensive, and they occur in everything from household products to industrial processes. Many chemicals reach our waterways and ecosystems, where they may cause negative effects on humans and other organisms. One example is PFAS, a group of problematic substances that have recently been found in concerning concentrations in both groundwater and drinking water. PFAS have been used, for example, in firefighting foam and in many consumer products.
    Negative effects on humans and the environment arise despite extensive chemical regulations, which often require time-consuming animal testing to demonstrate that chemicals can be considered safe. In the EU alone, more than two million animals are used annually to comply with various regulations. At the same time, new chemicals are developed at a rapid pace, and it is a major challenge to determine which of these need to be restricted due to their toxicity to humans or the environment.
    Valuable help in the development of chemicals
    The new method developed by the Swedish researchers utilises artificial intelligence for rapid and cost-effective assessment of chemical toxicity. It can therefore be used to identify toxic substances at an early phase and help reduce the need for animal testing.
    “Our method is able to predict whether a substance is toxic or not based on its chemical structure. It has been developed and refined by analysing large datasets from laboratory tests performed in the past. The method has thereby been trained to make accurate assessments for previously untested chemicals,” says Mikael Gustavsson, researcher at the Department of Mathematical Sciences at Chalmers University of Technology, and at the Department of Biology and Environmental Sciences at the University of Gothenburg.
    “There are currently more than 100,000 chemicals on the market, but only a small fraction of these has a well-described toxicity towards humans or the environment. Assessing the toxicity of all these chemicals using conventional methods, including animal testing, is not practically possible. Here, we see that our method can offer a new alternative,” says Erik Kristiansson, professor at the Department of Mathematical Sciences at Chalmers and at the University of Gothenburg.

    The researchers believe that the method can be very useful within environmental research, as well as for authorities and companies that use or develop new chemicals. They have therefore made it open and publicly available.
    Broader and more accurate than today’s computational tools
    Computational tools for finding toxic chemicals already exist, but so far their applicability domains have been too narrow, or their accuracy too low, to replace laboratory tests to any great extent. In their study, the researchers compared the new method with three other commonly used computational tools and found that it is both more accurate and more generally applicable.
    “The type of AI we use is based on advanced deep learning methods,” says Erik Kristiansson. “Our results show that AI-based methods are already on par with conventional computational approaches, and as the amount of available data continues to increase, we expect AI methods to improve further. Thus, we believe that AI has the potential to markedly improve computational assessment of chemical toxicity.”
    The researchers predict that AI systems will be able to replace laboratory tests to an increasingly greater extent.
    “This would mean that the number of animal experiments could be reduced, as well as the economic costs when developing new chemicals. The possibility to rapidly prescreen large and diverse bodies of data can therefore aid the development of new and safer chemicals and help find substitutes for toxic substances that are currently in use. We thus believe that AI-based methods will help reduce the negative impacts of chemical pollution on humans and on ecosystem services,” says Erik Kristiansson.

    More about: the new AI method
    The method is based on transformers, a deep learning architecture originally developed for natural language processing. ChatGPT (short for Generative Pre-trained Transformer) is one well-known application.
    The model has recently also proved highly efficient at capturing information from chemical structures. Transformers can identify properties in the structure of molecules that cause toxicity, in a more sophisticated way than has been previously possible.
    Using this information, the toxicity of the molecule can then be predicted by a deep neural network. Neural networks and transformers belong to the type of AI that continuously improves itself by using training data — in this case, large amounts of data from previous laboratory tests of the effects of thousands of different chemicals on various animals and plants.
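    As a rough illustration of that pipeline, the sketch below runs one self-attention layer over the characters of a SMILES string and pools the result into a single logistic "toxicity" probability. All weights are random stand-ins for the parameters the real model learns from laboratory data, and the vocabulary and layer sizes are invented for this example, so the score itself is meaningless; only the shape of the computation mirrors the approach described.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

VOCAB = list("CNOSFclBr()=#123456[]@+-Hn")  # crude SMILES character set (illustrative)
D = 16  # embedding width, chosen arbitrarily

# Random, untrained weights: stand-ins for parameters the real model
# would learn from past laboratory toxicity tests.
EMB = rng.normal(size=(len(VOCAB), D))
Wq, Wk, Wv = (rng.normal(size=(D, D)) for _ in range(3))
w_out = rng.normal(size=D)

def toxicity_score(smiles: str) -> float:
    """One self-attention layer over SMILES characters, mean-pooled into a
    single logistic 'toxic' probability. Untrained, so the number is
    meaningless; the point is the pipeline shape, not the prediction."""
    idx = [VOCAB.index(ch) for ch in smiles if ch in VOCAB]
    x = EMB[idx]                           # (tokens, D) token embeddings
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(D))   # token-token attention weights
    h = (attn @ v).mean(axis=0)            # pooled molecule embedding
    return float(1.0 / (1.0 + np.exp(-h @ w_out)))
```

    In a trained model, the embeddings and attention weights would be fitted so that structural motifs associated with toxicity drive the final probability.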

  • Unveiling a polarized world — in a single shot

    Think of all the information we get based on how an object interacts with wavelengths of light — a.k.a. color. Color can tell us if food is safe to eat or if a piece of metal is hot. Color is an important diagnostic tool in medicine, helping practitioners diagnose diseased tissue, inflammation, or problems in blood flow.
    Companies have invested heavily to improve color in digital imaging, but wavelength is just one property of light. Polarization — how the electric field oscillates as light propagates — is also rich with information, but polarization imaging remains mostly confined to table-top laboratory settings, relying on traditional optics such as waveplates and polarizers on bulky rotational mounts.
    Now, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a compact, single-shot polarization imaging system that can provide a complete picture of polarization. By using just two thin metasurfaces, the imaging system could unlock the vast potential of polarization imaging for a range of existing and new applications, including biomedical imaging, augmented and virtual reality systems and smartphones.
    The research is published in Nature Photonics.
    “This system, which is free of any moving parts or bulk polarization optics, will empower applications in real-time medical imaging, material characterization, machine vision, target detection, and other important areas,” said Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering at SEAS and senior author of the paper.
    In previous research, Capasso and his team developed a first-of-its-kind compact polarization camera to capture so-called Stokes images, images of the polarization signature reflecting off an object — without controlling the incident illumination.
    “Just as the shade or even the color of an object can appear different depending on the color of the incident illumination, the polarization signature of an object depends on the polarization profile of the illumination,” said Aun Zaidi, a recent PhD graduate from Capasso’s group and first author of the paper. “In contrast to conventional polarization imaging, ‘active’ polarization imaging, known as Mueller matrix imaging, can capture the most complete polarization response of an object by controlling the incident polarization.”
    Currently, Mueller matrix imaging requires a complex optical set-up with multiple rotating plates and polarizers that sequentially capture a series of images which are combined to realize a matrix representation of the image.
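    The principle behind combining those sequential measurements can be sketched numerically: probe an optical element with four linearly independent input Stokes vectors, record the four output vectors, and invert to recover the 4×4 Mueller matrix. The polarizer matrix and probe states below are textbook values, not the paper's actual setup.

```python
import numpy as np

# Mueller matrix of an ideal horizontal linear polarizer (textbook value)
M_true = 0.5 * np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
])

# Columns: four independent incident Stokes vectors --
# unpolarized, horizontal, +45 degree linear, right-circular
S_in = np.array([
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
], dtype=float)

S_out = M_true @ S_in                      # the sequentially "measured" outputs
M_recovered = S_out @ np.linalg.inv(S_in)  # combine them into the Mueller matrix
```

    The metasurface system in this work collapses that whole measurement sequence into one shot, but the algebra being solved is the same.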

    The simplified system developed by Capasso and his team uses two extremely thin metasurfaces — one to illuminate an object and the other to capture and analyze the light on the other side.
    The first metasurface generates what’s known as polarized structured light, in which the polarization is designed to vary spatially in a unique pattern. When this polarized light reflects off or transmits through the object being illuminated, the polarization profile of the beam changes. That change is captured and analyzed by the second metasurface to construct the final image — in a single shot.
    The technique allows for real-time advanced imaging, which is important for applications such as endoscopic surgery, facial recognition in smartphones, and eye tracking in AR/VR systems. It could also be combined with powerful machine learning algorithms for applications in medical diagnostics, material classification and pharmaceuticals.
    “We have brought together two seemingly separate fields of structured light and polarized imaging to design a single system that captures the most complete polarization information. Our use of nanoengineered metasurfaces, which replace many components that would traditionally be required in a system such as this, greatly simplifies its design,” said Zaidi.
    “Our single-shot and compact system provides a viable pathway for the widespread adoption of this type of imaging to empower applications requiring advanced imaging,” said Capasso.
    The Harvard Office of Technology Development has protected the intellectual property associated with this project out of Prof. Capasso’s lab and licensed the technology to Metalenz for further development.
    The research was co-authored by Noah Rubin, Maryna Meretska, Lisa Li, Ahmed Dorrah and Joon-Suh Park. It was supported by the Air Force Office of Scientific Research under award number FA9550-21-1-0312, the Office of Naval Research (ONR) under award number N00014-20-1-2450, the National Aeronautics and Space Administration (NASA) under award numbers 80NSSC21K0799 and 80NSSC20K0318, and the National Science Foundation under award number ECCS-2025158.

  • This highly reflective black paint makes objects more visible to autonomous cars

    Driving at night might be a scary challenge for a new driver, but with hours of practice it soon becomes second nature. For self-driving cars, however, practice may not be enough because the lidar sensors that often act as these vehicles’ “eyes” have difficulty detecting dark-colored objects. Research published in ACS Applied Materials & Interfaces describes a highly reflective black paint that could help these cars see dark objects and make autonomous driving safer.
    Lidar, short for light detection and ranging, is a system used in a variety of applications, including geologic mapping and self-driving vehicles. The system works like echolocation, but instead of emitting sound waves, lidar emits tiny pulses of near-infrared light. The light pulses bounce off objects and back to the sensor, allowing the system to map the 3D environment it’s in. But lidar falls short when objects absorb more of that near-infrared light than they reflect, which can occur on black-painted surfaces. Lidar can’t detect these dark objects on its own, so one common solution is to have the system rely on other sensors or software to fill in the information gaps. However, this solution could still lead to accidents in some situations. Rather than reinventing the lidar sensors, though, Chang-Min Yoon and colleagues wanted to make dark objects easier to detect with existing technology by developing a specially formulated, highly reflective black paint.
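    The ranging principle reduces to a time-of-flight calculation, and the detection problem to whether the echo clears the sensor's noise floor. The sketch below uses made-up power, reflectivity and threshold values purely to illustrate why a strongly absorbing surface can vanish from the point cloud.

```python
C = 299_792_458.0  # speed of light, m/s

def lidar_range_m(round_trip_s: float) -> float:
    """Distance to a target from a pulse's out-and-back travel time."""
    return C * round_trip_s / 2.0

def echo_detected(emitted_w: float, reflectivity: float, floor_w: float) -> bool:
    """Toy detection test: the return registers only if the reflected power
    exceeds the sensor's noise floor. (Real link budgets also account for
    range falloff, optics losses and ambient light.)"""
    return emitted_w * reflectivity >= floor_w
```

    A pulse returning after 200 nanoseconds corresponds to a target about 30 meters away; with the illustrative numbers above, a surface reflecting only a few percent of the near-infrared light never crosses the detection threshold.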
    To produce the new paint, the team first formed a thin layer of titanium dioxide (TiO2) on small fragments of glass. Then the glass was etched away with hydrofluoric acid, leaving behind a hollow layer of white, highly reflective TiO2. This was reduced with sodium borohydride to produce a black material that maintained its reflective qualities. Mixed with varnish, the material could be applied as a paint. The team next tested the new paint with two types of commercially available lidar sensors: a mirror-based sensor and a 360-degree rotating sensor. For comparison, a traditional carbon black-based paint was also evaluated. Both sensors easily recognized the specially formulated, TiO2-based paint but did not readily detect the traditional paint. The researchers say their highly reflective material could help improve safety on the roads by making dark objects more visible to autonomous vehicles already equipped with existing lidar technology.
    The authors acknowledge funding from the Korea Ministry of SMEs and Startups and the National Research Foundation of Korea.

  • Artificial intelligence enhances monitoring of threatened marbled murrelet

    Artificial intelligence analysis of data gathered by acoustic recording devices is a promising new tool for monitoring the marbled murrelet and other secretive, hard-to-study species, research by Oregon State University and the U.S. Forest Service has shown.
    The threatened marbled murrelet is an iconic Pacific Northwest seabird that’s closely related to puffins and murres, but unlike those birds, murrelets raise their young as far as 60 miles inland in mature and old-growth forests.
    “There are very few species like it,” said co-author Matt Betts of the OSU College of Forestry. “And there’s no other bird that feeds in the ocean and travels such long distances to inland nest sites. This behavior is super unusual and it makes studying this bird really challenging.”
    A research team led by Adam Duarte of the U.S. Forest Service’s Pacific Northwest Research Station used data from acoustic recorders, originally placed to assist in monitoring northern spotted owl populations, at thousands of locations in federally managed forests in the Oregon Coast Range and Washington’s Olympic Peninsula.
    Researchers developed a machine learning algorithm known as a convolutional neural network to mine the recordings for murrelet calls.
    Findings, published in Ecological Indicators, were tested against known murrelet population data and determined to be correct at a rate exceeding 90%, meaning the recorders and AI are able to provide an accurate look at how much murrelets are calling in a given area.
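    A convolutional network learns its own filters from data, but the core operation it performs on a recording, sliding a small pattern over a spectrogram and scoring the match at each offset, can be imitated with plain normalized cross-correlation. Everything below (the spectrogram size, the diagonal "upsweep" template, the threshold) is invented for illustration; it is not the study's actual detector.

```python
import numpy as np

def detect_call(spectrogram, template, threshold=0.95):
    """Slide a call template over a time-frequency image and return the
    offsets where the Pearson correlation with the local patch exceeds
    the threshold -- a hand-rolled stand-in for one learned CNN filter."""
    th, tw = template.shape
    H, W = spectrogram.shape
    t = (template - template.mean()) / (template.std() + 1e-12)
    hits = []
    for i in range(H - th + 1):
        for j in range(W - tw + 1):
            patch = spectrogram[i:i + th, j:j + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-12)
            if float((p * t).mean()) >= threshold:
                hits.append((i, j))
    return hits
```

    A real CNN stacks many such learned filters and pools their responses, which is what lets it separate murrelet calls from the other sounds of a forest soundscape.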
    “Next, we’re testing whether murrelet sounds can actually predict reproduction and occupancy in the species, but that is still a few steps off,” Betts said.

    The dove-sized marbled murrelet spends most of its time in coastal waters eating krill, other invertebrates and forage fish such as herring, anchovies, smelt and capelin. Murrelets can only produce one offspring per year, if the nest is successful, and their young require forage fish for proper growth and development.
    The birds typically lay their single egg high in a tree on a horizontal limb at least 4 inches in diameter. Steller’s jays, crows and ravens are the main predators of murrelet nests.
    Along the West Coast, marbled murrelets are found regularly from Santa Cruz, California, to the Aleutian Islands. The species is listed as threatened under the U.S. Endangered Species Act in Washington, Oregon and California.
    “The greatest number of detections in our study typically occurred where late-successional forest dominates, and nearer to ocean habitats,” Duarte said.
    Late-successional refers to mature and old-growth forests.
    “Our results offer considerable promise for species distribution modeling and long-term population monitoring for rare species,” Duarte said. “Monitoring that’s far less labor intensive than nest searching via telemetry, ground-based nest searches or traditional audio/visual techniques.”
    Matthew Weldy of the College of Forestry, Zachary Ruff of the OSU College of Agricultural Sciences and Jonathon Valente, a former Oregon State postdoctoral researcher now at the U.S. Geological Survey, joined Betts and Duarte in the study, along with Damon Lesmeister and Julianna Jenkins of the Forest Service.
    Funding was provided by the Forest Service, the Bureau of Land Management and the National Park Service.

  • Science has an AI problem: This group says they can fix it

    AI holds the potential to help doctors find early markers of disease and policymakers to avoid decisions that lead to war. But a growing body of evidence has revealed deep flaws in how machine learning is used in science, a problem that has swept through dozens of fields and implicated thousands of erroneous papers.
    Now an interdisciplinary team of 19 researchers, led by Princeton University computer scientists Arvind Narayanan and Sayash Kapoor, has published guidelines for the responsible use of machine learning in science.
    “When we graduate from traditional statistical methods to machine learning methods, there are a vastly greater number of ways to shoot oneself in the foot,” said Narayanan, director of Princeton’s Center for Information Technology Policy and a professor of computer science. “If we don’t have an intervention to improve our scientific standards and reporting standards when it comes to machine learning-based science, we risk not just one discipline but many different scientific disciplines rediscovering these crises one after another.”
    The authors say their work is an effort to stamp out this smoldering crisis of credibility that threatens to engulf nearly every corner of the research enterprise. A paper detailing their guidelines appeared May 1 in the journal Science Advances.
    Because machine learning has been adopted across virtually every scientific discipline, with no universal standards safeguarding the integrity of those methods, Narayanan said the current crisis, which he calls the reproducibility crisis, could become far more serious than the replication crisis that emerged in social psychology more than a decade ago.
    The good news is that a simple set of best practices can help resolve this newer crisis before it gets out of hand, according to the authors, who come from computer science, mathematics, social science and health research.
    “This is a systematic problem with systematic solutions,” said Kapoor, a graduate student who works with Narayanan and who organized the effort to produce the new consensus-based checklist.

    The checklist focuses on ensuring the integrity of research that uses machine learning. Science depends on the ability to independently reproduce results and validate claims. Otherwise, new work cannot be reliably built atop old work, and the entire enterprise collapses. While other researchers have developed checklists that apply to discipline-specific problems, notably in medicine, the new guidelines start with the underlying methods and apply them to any quantitative discipline.
    One of the main takeaways is transparency. The checklist calls on researchers to provide detailed descriptions of each machine learning model, including the code, the data used to train and test the model, the hardware specifications used to produce the results, the experimental design, the project’s goals and any limitations of the study’s findings. The standards are flexible enough to accommodate a wide range of nuance, including private datasets and complex hardware configurations, according to the authors.
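    In code-review terms, the checklist behaves like a simple audit over a paper's reporting. The field names below are distilled from the items this article lists, not taken from the authors' actual instrument:

```python
# Transparency items paraphrased from the article; names are illustrative only.
REQUIRED_FIELDS = (
    "code", "training_data", "test_data", "hardware",
    "experimental_design", "goals", "limitations",
)

def missing_items(report: dict) -> list:
    """Return the checklist items a paper's reporting leaves empty."""
    return [f for f in REQUIRED_FIELDS if not report.get(f)]
```

    A reviewer or journal could run such an audit mechanically; the hard part the guidelines address is agreeing on what counts as an adequate description for each item.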
    While the increased rigor of these new standards might slow the publication of any given study, the authors believe wide adoption of these standards would increase the overall rate of discovery and innovation, potentially by a lot.
    “What we ultimately care about is the pace of scientific progress,” said sociologist Emily Cantrell, one of the lead authors, who is pursuing her Ph.D. at Princeton. “By making sure the papers that get published are of high quality and that they’re a solid base for future papers to build on, that potentially then speeds up the pace of scientific progress. Focusing on scientific progress itself and not just getting papers out the door is really where our emphasis should be.”
    Kapoor concurred. The errors hurt. “At the collective level, it’s just a major time sink,” he said. That time costs money. And that money, once wasted, could have catastrophic downstream effects, limiting the kinds of science that attract funding and investment, tanking ventures that are inadvertently built on faulty science, and discouraging countless young researchers.
    In working toward a consensus about what should be included in the guidelines, the authors said they aimed to strike a balance: simple enough to be widely adopted, comprehensive enough to catch as many common mistakes as possible.
    They say researchers could adopt the standards to improve their own work; peer reviewers could use the checklist to assess papers; and journals could adopt the standards as a requirement for publication.
    “The scientific literature, especially in applied machine learning research, is full of avoidable errors,” Narayanan said. “And we want to help people. We want to keep honest people honest.”

  • Physicists build new device that is foundation for quantum computing

    Scientists led by the University of Massachusetts Amherst have adapted a device called a microwave circulator for use in quantum computers, allowing them for the first time to precisely tune the exact degree of nonreciprocity between a qubit, the fundamental unit of quantum computing, and a microwave-resonant cavity. The ability to precisely tune the degree of nonreciprocity is an important tool to have in quantum information processing. In doing so, the team, including collaborators from the University of Chicago, derived a general and widely applicable theory that simplifies and expands upon older understandings of nonreciprocity so that future work on similar topics can take advantage of the team’s model, even when using different components and platforms. The research was published recently in Science Advances.
    Quantum computing differs fundamentally from the bit-based computing we all do every day. A bit is a piece of information typically expressed as a 0 or a 1. Bits are the basis for all the software, websites and emails that make up our electronic world.
    By contrast, quantum computing relies on “quantum bits,” or “qubits,” which are like regular bits except that they are represented by the “quantum superposition” of two states of a quantum object. Matter in a quantum state behaves very differently, which means that qubits aren’t relegated to being only 0s or 1s — they can be both at the same time in a way that sounds like magic, but which is well defined by the laws of quantum mechanics. This property of quantum superposition leads to the increased power capabilities of quantum computers.
    Furthermore, a property called “nonreciprocity” can create additional avenues for quantum computing to leverage the potential of the quantum world.
    “Imagine a conversation between two people,” says Sean van Geldern, graduate student in physics at UMass Amherst and one of the paper’s authors. “Total reciprocity is when each of the people in that conversation is sharing an equal amount of information. Nonreciprocity is when one person is sharing a little bit less than the other.”
    “This is desirable in quantum computing,” says senior author Chen Wang, assistant professor of physics at UMass Amherst, “because there are many computing scenarios where you want to give plenty of access to data without giving anyone the power to alter or degrade that data.”
    To control nonreciprocity, lead author Ying-Ying Wang, graduate student in physics at UMass Amherst, and her co-authors ran a series of simulations to determine the design and properties their circulator would need to have in order for them to vary its nonreciprocity. They then built their circulator and ran a host of experiments not just to prove their concept, but to understand exactly how their device enabled nonreciprocity. In the course of doing so, they were able to revise their model, which contained 16 parameters detailing how to build their specific device, to a simpler and more general model of only six parameters. This revised, more general model is much more useful than the initial, more specific one, because it is widely applicable to a range of future research efforts.

    The “integrated nonreciprocal device” that the team built looks like a “Y.” At the center of the “Y” is the circulator, which is like a traffic roundabout for the microwave signals mediating the quantum interactions. One of the legs is the cavity port, a resonant superconducting cavity hosting an electromagnetic field. Another leg of the “Y” holds the qubit, printed on a sapphire chip. The final leg is the output port.
    “If we vary the superconducting electromagnetic field by bombarding it with photons,” says Ying-Ying Wang, “we see that the qubit reacts in a predictable and controllable way, which means that we can adjust exactly how much reciprocity we want. And the simplified model that we produced describes our system in such a way that the external parameters can be calculated to tune an exact degree of nonreciprocity.”
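    One common way to quantify nonreciprocity in a device like this is to compare forward and backward transmission through its scattering matrix. The sketch below uses generic two-port S-matrices with made-up transmission amplitudes, not parameters from the paper:

```python
import numpy as np

def nonreciprocity_db(S):
    """Contrast between forward (port 1 -> 2) and backward (port 2 -> 1)
    transmission of a two-port scattering matrix, in dB.
    0 dB means the device is fully reciprocal."""
    return 20.0 * np.log10(np.abs(S[1, 0]) / np.abs(S[0, 1]))

S_reciprocal = np.array([[0.0, 0.7], [0.7, 0.0]])   # symmetric transmission
S_circulator = np.array([[0.0, 0.07], [0.7, 0.0]])  # 10x weaker backward path
```

    Tuning the degree of nonreciprocity, as the team does with their circulator, amounts to moving a device continuously between these two regimes.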
    “This is the first demonstration of embedding nonreciprocity into a quantum computing device,” says Chen Wang, “and it opens the door to engineering more sophisticated quantum computing hardware.”
    Funding for this research was provided by the U.S. Department of Energy, the Army Research Office, Simons Foundation, Air Force Office of Scientific Research, the U.S. National Science Foundation, and the Laboratory for Physical Sciences Qubit Collaboratory.

  • Researchers unlock potential of 2D magnetic devices for future computing

    Imagine a future where computers can learn and make decisions in ways that mimic human thinking, but at a speed and efficiency that are orders of magnitude greater than the current capability of computers.
    A research team at the University of Wyoming created an innovative method to control tiny magnetic states within ultrathin, two-dimensional (2D) van der Waals magnets — a process akin to how flipping a light switch controls a bulb.
    “Our discovery could lead to advanced memory devices that store more data and consume less power or enable the development of entirely new types of computers that can quickly solve problems that are currently intractable,” says Jifa Tian, an assistant professor in the UW Department of Physics and Astronomy and interim director of UW’s Center for Quantum Information Science and Engineering.
    Tian was corresponding author of a paper, titled “Tunneling current-controlled spin states in few-layer van der Waals magnets,” that was published today (May 1) in Nature Communications, an open access, multidisciplinary journal dedicated to publishing high-quality research in all areas of the biological, health, physical, chemical, Earth, social, mathematical, applied and engineering sciences.
    Van der Waals materials are made up of strongly bonded 2D layers that are bound in the third dimension through weaker van der Waals forces. For example, graphite is a van der Waals material that is broadly used in industry in electrodes, lubricants, fibers, heat exchangers and batteries. The weakness of the van der Waals forces between layers allows researchers to use Scotch tape to peel the layers down to atomic thickness.
    The team developed a device known as a magnetic tunnel junction, which uses chromium triiodide — a 2D insulating magnet only a few atoms thick — sandwiched between two layers of graphene. By sending a tiny electric current — called a tunneling current — through this sandwich, the orientation of the magnetic domains (around 100 nanometers in size) within the individual chromium triiodide layers can be controlled, Tian says.
    Specifically, “this tunneling current not only can control the switching direction between two stable spin states, but also induces and manipulates switching between metastable spin states, called stochastic switching,” says ZhuangEn Fu, a graduate student in Tian’s research lab and now a postdoctoral fellow at the University of Maryland.

    “This breakthrough is not just intriguing; it’s highly practical. It consumes three orders of magnitude less energy than traditional methods, akin to swapping an old lightbulb for an LED, making it a potential game-changer for future technology,” Tian says. “Our research could lead to the development of novel computing devices that are faster, smaller and more energy-efficient and powerful than ever before. It marks a significant advancement in magnetism at the 2D limit and sets the stage for new, powerful computing platforms, such as probabilistic computers.”
    Traditional computers use bits to store information as 0’s and 1’s. This binary code is the foundation of all classic computing processes. Quantum computers use quantum bits that can represent both “0” and “1” at the same time, increasing processing power exponentially.
    “In our work, we’ve developed what you might think of as a probabilistic bit, which can switch between ‘0’ and ‘1’ (two spin states) based on the tunneling current controlled probabilities,” Tian says. “These bits are based on the unique properties of ultrathin 2D magnets and can be linked together in a way that is similar to neurons in the brain to form a new kind of computer, known as a probabilistic computer.
    “What makes these new computers potentially revolutionary is their ability to handle tasks that are incredibly challenging for traditional and even quantum computers, such as certain types of complex machine learning tasks and data processing problems,” Tian continues. “They are naturally tolerant to errors, simple in design and take up less space, which could lead to more efficient and powerful computing technologies.”
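    A probabilistic bit of that kind can be modeled as a coin whose bias is set by a control input, here assumed (purely for illustration) to act through a sigmoid, loosely mirroring how the tunneling current biases the stochastic switching between the two spin states:

```python
import numpy as np

def pbit_reads(control, n=10_000, seed=0):
    """Sample a probabilistic bit n times: each read is 0 or 1, and the
    probability of reading 1 is a sigmoid of the control input
    (an illustrative stand-in for the tunneling-current bias)."""
    rng = np.random.default_rng(seed)
    p_one = 1.0 / (1.0 + np.exp(-control))
    return (rng.random(n) < p_one).astype(int)
```

    At zero bias the bit reads 0 and 1 with roughly equal frequency; a strong positive or negative bias pins it near one state, which is the knob a probabilistic computer exploits.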
    Hua Chen, an associate professor of physics at Colorado State University, and Allan MacDonald, a professor of physics at the University of Texas-Austin, collaborated to develop a theoretical model that elucidates how tunneling currents influence spin states in the 2D magnetic tunnel junctions. Other contributors were from Penn State University, Northeastern University and the National Institute for Materials Science in Namiki, Tsukuba, Japan.
    The study was funded through grants from the U.S. Department of Energy; Wyoming NASA EPSCoR (Established Program to Stimulate Competitive Research); the National Science Foundation; and the World Premier International Research Center Initiative and the Ministry of Education, Culture, Sports, Science and Technology, both in Japan.

  • New computer algorithm supercharges climate models and could lead to better predictions of future climate change

    Earth System Models — complex computer models which describe Earth processes and how they interact — are critical for predicting future climate change. By simulating the response of our land, oceans and atmosphere to manmade greenhouse gas emissions, these models form the foundation for predictions of future extreme weather and climate event scenarios, including those issued by the UN Intergovernmental Panel on Climate Change (IPCC).
    However, climate modellers have long faced a major problem. Because Earth System Models integrate many complicated processes, a simulation cannot be run immediately; the model must first reach a stable equilibrium representative of real-world conditions before the industrial revolution. Without this initial settling period — referred to as the “spin-up” phase — the model can “drift,” simulating changes that may be erroneously attributed to manmade factors.
    Unfortunately, this process is extremely slow as it requires running the model for many thousands of model years which, for IPCC simulations, can take as much as two years on some of the world’s most powerful supercomputers.
    However, a study in Science Advances by a University of Oxford scientist funded by the Agile Initiative describes a new computer algorithm which can be applied to Earth System Models to drastically reduce spin-up time. During tests on models used in IPCC simulations, the algorithm was on average 10 times faster at spinning up the model than currently-used approaches, reducing the time taken to achieve equilibrium from many months to under a week.
    Study author Samar Khatiwala, Professor of Earth Sciences at the University of Oxford’s Department of Earth Sciences, who devised the algorithm, said: ‘Minimising model drift at a much lower cost in time and energy is obviously critical for climate change simulations, but perhaps the greatest value of this research may ultimately be to policy makers who need to know how reliable climate projections are.’
    Currently, the lengthy spin-up time of many IPCC models prevents climate researchers from running their model at a higher resolution and defining uncertainty through carrying out repeat simulations. By drastically reducing the spin-up time, the new algorithm will enable researchers to investigate how subtle changes to the model parameters can alter the output — which is critical for defining the uncertainty of future emission scenarios.
    Professor Khatiwala’s new algorithm employs a mathematical approach known as sequence acceleration, which has its roots in the work of the mathematician Euler. In the 1960s, D. G. Anderson applied the idea to speed up the solution of Schrödinger’s equation, which predicts how matter behaves at the microscopic level. So important is this problem that more than half the world’s supercomputing power is currently devoted to solving it, and ‘Anderson Acceleration’, as it is now known, is one of the most commonly used algorithms for it.

    Professor Khatiwala realised that Anderson Acceleration might also be able to reduce model spin-up time since both problems are of an iterative nature: an output is generated and then fed back into the model many times over. By retaining previous outputs and combining them into a single input using Anderson’s scheme, the final solution is achieved much more quickly.
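    The idea can be demonstrated on any fixed-point problem x = g(x). The sketch below implements a textbook form of Anderson Acceleration (not Professor Khatiwala's actual code) and compares it against plain iteration on the classic x = cos(x) problem:

```python
import numpy as np

def plain_fixed_point(g, x0, tol=1e-10, max_iter=500):
    """Ordinary spin-up: feed the output straight back in until it settles."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x_new = g(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

def anderson_fixed_point(g, x0, m=5, tol=1e-10, max_iter=500):
    """Textbook Anderson Acceleration: combine the last m iterates via a
    small least-squares problem instead of reusing only the latest one."""
    xs = [np.asarray(x0, dtype=float)]
    fs = [g(xs[0]) - xs[0]]               # residuals f_k = g(x_k) - x_k
    for k in range(1, max_iter + 1):
        mk = min(m, len(fs) - 1)
        if mk == 0:
            x_new = xs[-1] + fs[-1]       # first step: plain iteration
        else:
            dX = np.column_stack([xs[i + 1] - xs[i]
                                  for i in range(len(xs) - mk - 1, len(xs) - 1)])
            dF = np.column_stack([fs[i + 1] - fs[i]
                                  for i in range(len(fs) - mk - 1, len(fs) - 1)])
            gamma = np.linalg.lstsq(dF, fs[-1], rcond=None)[0]
            x_new = xs[-1] + fs[-1] - (dX + dF) @ gamma
        f_new = g(x_new) - x_new
        if np.linalg.norm(f_new) < tol:
            return x_new, k
        xs.append(x_new)
        fs.append(f_new)
    return xs[-1], max_iter
```

    On this toy problem the accelerated scheme reaches equilibrium in a handful of iterations where plain iteration needs dozens; an Earth System Model plays the role of g, with each evaluation costing days of supercomputer time, which is where the order-of-magnitude saving comes from.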
    Not only does this make the spin-up process much faster and less computationally expensive, but the concept can be applied to the huge variety of different models that are used to investigate, and inform policy on, issues ranging from ocean acidification to biodiversity loss. With research groups around the world beginning to spin-up their models for the next IPCC report, due in 2029, Professor Khatiwala is working with a number of them, including the UK Met Office, to trial his approach and software in their models.
    Professor Helene Hewitt OBE, Co-chair for the Coupled Model Intercomparison Project (CMIP) Panel, which will inform the next IPCC report, commented: ‘Policymakers rely on climate projections to inform negotiations as the world tries to meet the Paris Agreement. This work is a step towards reducing the time it takes to produce those critical climate projections.’
    Professor Colin Jones, Head of the NERC/Met Office sponsored UK Earth system modelling, commented on the findings: ‘Spin-up has always been prohibitively expensive in terms of computational cost and time. The new approaches developed by Professor Khatiwala promise to break this logjam and deliver a quantum leap in the efficiency of spinning up such complex models and, as a consequence, greatly increase our ability to deliver timely, robust estimates of global climate change.’