More stories

  • Study identifies distinct brain organization patterns in women and men

    A new study by Stanford Medicine investigators unveils an artificial intelligence model that was more than 90% successful at determining whether scans of brain activity came from a woman or a man.
    The findings, to be published Feb. 19 in the Proceedings of the National Academy of Sciences, help resolve a long-term controversy about whether reliable sex differences exist in the human brain and suggest that understanding these differences may be critical to addressing neuropsychiatric conditions that affect women and men differently.
    “A key motivation for this study is that sex plays a crucial role in human brain development, in aging, and in the manifestation of psychiatric and neurological disorders,” said Vinod Menon, PhD, professor of psychiatry and behavioral sciences and director of the Stanford Cognitive and Systems Neuroscience Laboratory. “Identifying consistent and replicable sex differences in the healthy adult brain is a critical step toward a deeper understanding of sex-specific vulnerabilities in psychiatric and neurological disorders.”
    Menon is the study’s senior author. The lead authors are senior research scientist Srikanth Ryali, PhD, and academic staff researcher Yuan Zhang, PhD.
    “Hotspots” that most helped the model distinguish male brains from female ones include the default mode network, a brain system that helps us process self-referential information, and the striatum and limbic network, which are involved in learning and how we respond to rewards.
    The investigators noted that this work does not weigh in on whether sex-related differences arise early in life or may be driven by hormonal differences or by the different societal circumstances that men and women are more likely to encounter.
    Uncovering brain differences
    The extent to which a person’s sex affects how their brain is organized and operates has long been a point of dispute among scientists. While we know the sex chromosomes we are born with help determine the cocktail of hormones our brains are exposed to — particularly during early development, puberty and aging — researchers have long struggled to connect sex to concrete differences in the human brain. Brain structures tend to look much the same in men and women, and previous research examining how brain regions work together has also largely failed to turn up consistent brain indicators of sex.

    In their current study, Menon and his team took advantage of recent advances in artificial intelligence, as well as access to multiple large datasets, to pursue a more powerful analysis than has previously been employed. First, they created a deep neural network model, which learns to classify brain imaging data: As the researchers showed brain scans to the model and told it that it was looking at a male or female brain, the model started to “notice” what subtle patterns could help it tell the difference.
    This model demonstrated superior performance compared with those in previous studies, in part because it used a deep neural network that analyzes dynamic MRI scans. This approach captures the intricate interplay among different brain regions. When the researchers tested the model on around 1,500 brain scans, it could almost always tell if the scan came from a woman or a man.
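    In code, the general recipe looks roughly like the toy sketch below: show a network labelled scans and let training surface the discriminating patterns. The architecture, data shapes and variable names here are illustrative assumptions, not the Stanford team’s actual spatiotemporal model.

    ```python
    # Toy sketch of the general idea: train a deep network to classify windowed
    # fMRI time series by sex. This is NOT the study's model; the architecture,
    # data shapes and names are illustrative assumptions only.
    import torch
    import torch.nn as nn

    class SexClassifier(nn.Module):
        def __init__(self, n_regions=246, hidden=64):
            super().__init__()
            # 1D convolutions over time, treating brain regions as channels
            self.encoder = nn.Sequential(
                nn.Conv1d(n_regions, hidden, kernel_size=7, padding=3),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.head = nn.Linear(hidden, 2)  # two classes: female / male

        def forward(self, x):                 # x: (batch, n_regions, n_timepoints)
            z = self.encoder(x).squeeze(-1)
            return self.head(z)               # class logits

    model = SexClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # One training step on random placeholder data standing in for real scans
    scans = torch.randn(8, 246, 200)          # (batch, regions, timepoints)
    labels = torch.randint(0, 2, (8,))        # 0 = female, 1 = male
    loss = loss_fn(model(scans), labels)
    loss.backward()
    optimizer.step()
    ```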
    The model’s success suggests that detectable sex differences do exist in the brain but just haven’t been picked up reliably before. The fact that it worked so well across different datasets, including brain scans from multiple sites in the U.S. and Europe, makes the findings especially convincing, as it controls for many of the confounds that can plague studies of this kind.
    “This is a very strong piece of evidence that sex is a robust determinant of human brain organization,” Menon said.
    Making predictions
    Until recently, a model like the one Menon’s team employed would help researchers sort brains into different groups but wouldn’t provide information about how the sorting happened. Today, however, researchers have access to a tool called “explainable AI,” which can sift through vast amounts of data to explain how a model’s decisions are made.

    Using explainable AI, Menon and his team identified the brain networks that were most important to the model’s judgment of whether a brain scan came from a man or a woman. They found the model was most often looking to the default mode network, striatum, and the limbic network to make the call.
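    One common form of explainable AI is gradient-based attribution, sketched below with a stand-in classifier; it illustrates the general idea of tracing a decision back to input regions, not necessarily the specific method the researchers used.

    ```python
    # Sketch of gradient-based attribution: which input brain regions most
    # influence the classifier's decision? Stand-in model and toy data only;
    # this illustrates the general explainable-AI idea, not the study's method.
    import torch
    import torch.nn as nn

    model = nn.Sequential(                      # stand-in classifier
        nn.Conv1d(246, 64, kernel_size=7, padding=3),
        nn.ReLU(),
        nn.AdaptiveAvgPool1d(1),
        nn.Flatten(),
        nn.Linear(64, 2),
    )

    scan = torch.randn(1, 246, 200, requires_grad=True)   # one toy scan
    logits = model(scan)
    predicted_class = logits.argmax(dim=1).item()
    logits[0, predicted_class].backward()       # gradients w.r.t. the input

    # Average absolute gradient over time gives a per-region importance score
    region_importance = scan.grad.abs().mean(dim=2).squeeze(0)   # shape: (246,)
    top_regions = torch.topk(region_importance, k=10).indices
    print("Most influential regions (indices):", top_regions.tolist())
    ```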
    The team then wondered if they could create another model that could predict how well participants would do on certain cognitive tasks based on functional brain features that differ between women and men. They developed sex-specific models of cognitive abilities: One model effectively predicted cognitive performance in men but not women, and another in women but not men. The findings indicate that functional brain characteristics varying between sexes have significant behavioral implications.
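    That strategy amounts to fitting one predictive model per sex and checking how well each transfers to the other group; the toy sketch below uses ridge regression and random placeholder data purely for illustration.

    ```python
    # Toy sketch of sex-specific prediction: fit one model per sex and compare
    # within-sex fit to cross-sex transfer. Random placeholder data; ridge
    # regression stands in for whatever model the researchers actually used.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    features = rng.normal(size=(400, 50))     # functional brain features (toy)
    cognition = rng.normal(size=400)          # cognitive task scores (toy)
    is_male = rng.integers(0, 2, size=400).astype(bool)

    male_model = Ridge().fit(features[is_male], cognition[is_male])
    female_model = Ridge().fit(features[~is_male], cognition[~is_male])

    # Higher R^2 means better prediction; the study reports that within-sex
    # models predicted well while cross-sex transfer did not.
    print("male model on men:    ", male_model.score(features[is_male], cognition[is_male]))
    print("male model on women:  ", male_model.score(features[~is_male], cognition[~is_male]))
    print("female model on women:", female_model.score(features[~is_male], cognition[~is_male]))
    print("female model on men:  ", female_model.score(features[is_male], cognition[is_male]))
    ```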
    “These models worked really well because we successfully separated brain patterns between sexes,” Menon said. “That tells me that overlooking sex differences in brain organization could lead us to miss key factors underlying neuropsychiatric disorders.”
    While the team applied their deep neural network model to questions about sex differences, Menon says the model can be applied to answer questions regarding how just about any aspect of brain connectivity might relate to any kind of cognitive ability or behavior. He and his team plan to make their model publicly available for any researcher to use.
    “Our AI models have very broad applicability,” Menon said. “A researcher could use our models to look for brain differences linked to learning impairments or social functioning differences, for instance — aspects we are keen to understand better to aid individuals in adapting to and surmounting these challenges.”
    The research was sponsored by the National Institutes of Health (grants MH084164, EB022907, MH121069, K25HD074652 and AG072114), the Transdisciplinary Initiative, the Uytengsu-Hamilton 22q11 Programs, the Stanford Maternal and Child Health Research Institute, and the NARSAD Young Investigator Award.

  • Online digital data and AI for monitoring biodiversity

    The seemingly random information that people post online could be used to generate insights about biodiversity and its conservation.
    “I think it’s quite amazing that images and comments that people post online can be used to infer changes in biodiversity,” says Dr. Andrea Soriano-Redondo, the lead author of a new article published in the journal PLOS Biology and a researcher at the Helsinki Lab of Interdisciplinary Conservation Science at the University of Helsinki.
    Scientists from the University of Helsinki, together with colleagues from other universities and institutions around the world, propose a strategy for integrating online digital data from media platforms to complement biodiversity monitoring efforts and help address the global biodiversity crisis in light of the Kunming-Montreal Global Biodiversity Framework.
    “Online digital data, such as social media data, can be used to strengthen existing assessments of the status and trends of biodiversity, the pressures upon it, and the conservation solutions being implemented, as well as to generate novel insights about human-nature interactions,” says Dr. Andrea Soriano-Redondo.
    “The most common sources of online biodiversity data include web pages, news media, social media, image- and video-sharing platforms, and digital books and encyclopedias. These data, for example geolocated distribution data, can be filtered and processed by researchers to target specific research questions and are increasingly being used to explore ecological processes and to investigate the distribution, spatiotemporal trends, phenology, ecological interactions, or behavior of species or assemblages and their drivers of change,” she continues.
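    In practice, much of that filtering can be as simple as subsetting geolocated, timestamped records for a species of interest, as in the toy sketch below; the columns and values are hypothetical placeholders, not data from any real platform.

    ```python
    # Illustrative sketch only: filter geolocated, timestamped online records
    # (e.g. tagged photo posts) for one species and count observations per year.
    # The DataFrame columns and values are hypothetical placeholders.
    import pandas as pd

    posts = pd.DataFrame({
        "species":   ["Ciconia ciconia", "Ciconia ciconia", "Vulpes vulpes"],
        "latitude":  [60.17, 52.52, 48.86],
        "longitude": [24.94, 13.40, 2.35],
        "timestamp": pd.to_datetime(["2022-05-03", "2023-04-18", "2023-06-01"]),
    })

    storks = posts[posts["species"] == "Ciconia ciconia"]
    records_per_year = storks.groupby(storks["timestamp"].dt.year).size()
    print(records_per_year)   # a crude spatiotemporal trend signal for one species
    ```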
    Data generated through the framework in near real-time could be continuously integrated with other independently collected biodiversity datasets and used for real-time applications.
    “Data relevant to assessment of species extinction or ecosystem collapse risk, for example, could be mobilized into the workflows for generating the IUCN Red List of Threatened Species and Red List of Ecosystems,” says Dr. Thomas Brooks, chief scientist of the International Union for Conservation of Nature and a co-author of the article.

    “Other data on sites of global significance for the persistence of biodiversity could be served to the appropriate national coordination groups to strengthen their efforts in identifying Key Biodiversity Areas,” he continues.
    Data on the illegal wildlife trade could also be integrated with the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) Trade Database or the Trade Records Analysis of Flora and Fauna in Commerce (TRAFFIC) open-source wildlife seizure and incident data.
    Online digital data can also be used to explore human-nature interactions from multiple perspectives.
    “We have successfully used social media data to identify instances of illegal wildlife trade. There is great potential to use these data to provide novel insights into human-nature interactions and how they shape, both positively and negatively, biodiversity conservation,” says Professor Enrico Di Minin, senior co-author of the article, from the University of Helsinki.
    “The necessary technology to implement the work is available, but it will require harnessing expertise from multiple sectors and academic disciplines, as well as the collaboration of digital media companies. Most importantly, we need to ensure full access to the data so as to maximize its potential to help address the global biodiversity crisis and other sustainability challenges,” he continues.

  • New chip opens door to AI computing at light speed

    Penn Engineers have developed a new chip that uses light waves, rather than electricity, to perform the complex math essential to training AI. The chip has the potential to radically accelerate the processing speed of computers while also reducing their energy consumption.
    The silicon-photonic (SiPh) chip’s design is the first to bring together Benjamin Franklin Medal Laureate and H. Nedwill Ramsey Professor Nader Engheta’s pioneering research in manipulating materials at the nanoscale to perform mathematical computations using light — the fastest possible means of communication — with the SiPh platform, which uses silicon, the cheap, abundant element used to mass-produce computer chips.
    The interaction of light waves with matter represents one possible avenue for developing computers that supersede the limitations of today’s chips, which are essentially based on the same principles as chips from the earliest days of the computing revolution in the 1960s.
    In a paper in Nature Photonics, Engheta’s group, together with that of Firooz Aflatouni, Associate Professor in Electrical and Systems Engineering, describes the development of the new chip. “We decided to join forces,” says Engheta, leveraging the fact that Aflatouni’s research group has pioneered nanoscale silicon devices.
    Their goal was to develop a platform for performing what is known as vector-matrix multiplication, a core mathematical operation in the development and function of neural networks, the computer architecture that powers today’s AI tools.
    Instead of using a silicon wafer of uniform height, explains Engheta, “you make the silicon thinner, say 150 nanometers,” but only in specific regions. Those variations in height — without the addition of any other materials — provide a means of controlling the propagation of light through the chip, since the variations in height can be distributed to cause light to scatter in specific patterns, allowing the chip to perform mathematical calculations at the speed of light.
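    The operation the chip carries out with light is ordinary vector-matrix multiplication, the workhorse of every neural-network layer; the minimal NumPy sketch below shows the electronic equivalent with toy numbers.

    ```python
    # Vector-matrix multiplication, the core operation of a neural-network layer,
    # shown here electronically with toy numbers; the photonic chip performs the
    # same computation with light scattered by the patterned silicon.
    import numpy as np

    x = np.array([0.2, -1.3, 0.7])       # input activations (a vector)
    W = np.array([[0.5, -0.1, 0.3],
                  [0.8,  0.4, -0.6]])    # layer weights (a matrix)

    y = W @ x                            # vector-matrix multiplication
    print(y)                             # pre-activations of the next layer
    ```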
    Due to the constraints imposed by the commercial foundry that produced the chips, Aflatouni says, this design is already ready for commercial applications, and could potentially be adapted for use in graphics processing units (GPUs), the demand for which has skyrocketed with the widespread interest in developing new AI systems. “They can adopt the Silicon Photonics platform as an add-on,” says Aflatouni, “and then you could speed up training and classification.”
    In addition to faster speed and less energy consumption, Engheta and Aflatouni’s chip has privacy advantages: because many computations can happen simultaneously, there will be no need to store sensitive information in a computer’s working memory, rendering a future computer powered by such technology virtually unhackable. “No one can hack into a non-existing memory to access your information,” says Aflatouni.
    This study was conducted at the University of Pennsylvania School of Engineering and Applied Science and supported in part by a grant from the U.S. Air Force Office of Scientific Research’s (AFOSR) Multidisciplinary University Research Initiative (MURI) to Engheta (FA9550-21-1-0312) and a grant from the U.S. Office of Naval Research (ONR) to Aflatouni (N00014-19-1-2248).
    Other co-authors include Vahid Nikkhah, Ali Pirmoradi, Farshid Ashtiani and Brian Edwards of Penn Engineering.

  • A new design for quantum computers

    Creating a quantum computer powerful enough to tackle problems we cannot solve with current computers remains a big challenge for quantum physicists. A well-functioning quantum simulator — a specific type of quantum computer — could lead to new discoveries about how the world works at the smallest scales. Quantum scientist Natalia Chepiga from Delft University of Technology has developed a guide on how to upgrade these machines so that they can simulate even more complex quantum systems. The study is now published in Physical Review Letters.
    “Creating useful quantum computers and quantum simulators is one of the most important and debated topics in quantum science today, with the potential to revolutionise society,” says researcher Natalia Chepiga. Quantum simulators are a type of quantum computer, Chepiga explains: “Quantum simulators are meant to address open problems of quantum physics to further push our understanding of nature. Quantum computers will have wide applications in various areas of social life, for example in finances, encryption and data storage.”
    Steering wheel
    “A key ingredient of a useful quantum simulator is a possibility to control or manipulate it,” says Chepiga. “Imagine a car without a steering wheel. It can only go forward but cannot turn. Is it useful? Only if you need to go in one particular direction, otherwise the answer will be ‘no!’. If we want to create a quantum computer that will be able to discover new physics phenomena in the near future, we need to build a ‘steering wheel’ to tune into what seems interesting. In my paper I propose a protocol that creates a fully controllable quantum simulator.”
    Recipe
    The protocol is a recipe — a set of ingredients that a quantum simulator should have to be tunable. In the conventional setup of a quantum simulator, rubidium (Rb) or cesium (Cs) atoms are targeted by a single laser. As a result, these atoms absorb the laser light and become more energetic; they become excited. “I show that if we were to use two lasers with different frequencies or colours, thereby exciting these atoms to different states, we could tune the quantum simulators to many different settings,” Chepiga explains.
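    As a rough numerical illustration of why a second laser adds control knobs, the toy model below treats a single atom as three energy levels, with each laser coupling the ground state to a different excited state; the Rabi frequencies and detunings are arbitrary example values, not parameters from the paper.

    ```python
    # Toy three-level model of one atom driven by two lasers (basis |g>, |e1>, |e2>),
    # written in the rotating frame. One laser gives two knobs (omega1, delta1);
    # a second laser adds two more. Values are arbitrary illustrative numbers.
    import numpy as np

    def single_atom_hamiltonian(omega1, delta1, omega2, delta2):
        return np.array([
            [0.0,        omega1 / 2, omega2 / 2],
            [omega1 / 2, -delta1,    0.0       ],
            [omega2 / 2, 0.0,        -delta2   ],
        ])

    H_one_laser = single_atom_hamiltonian(1.0, 0.5, 0.0, 0.0)
    H_two_lasers = single_atom_hamiltonian(1.0, 0.5, 0.8, -0.3)
    print(np.linalg.eigvalsh(H_two_lasers))   # energy levels shift as the knobs are tuned
    ```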
    The protocol offers an additional dimension of what can be simulated. “Imagine that you have only seen a cube as a sketch on a flat piece of paper, but now you get a real 3D cube that you can touch, rotate and explore in different ways,” Chepiga continues. “Theoretically we can add even more dimensions by bringing in more lasers.”
    Simulating many particles
    “The collective behaviour of a quantum system with many particles is extremely challenging to simulate,” Chepiga explains. “Beyond a few dozen particles, modelling with our usual computer or a supercomputer has to rely on approximations.” When the interactions of more particles, temperature and motion are taken into account, there are simply too many calculations for the computer to perform.
    Quantum simulators are composed of quantum particles, which means that the components are entangled. “Entanglement is some sort of mutual information that quantum particles share between themselves. It is an intrinsic property of the simulator and therefore allows us to overcome this computational bottleneck.”

  • 1,000 atomic qubits and rising

    Making quantum systems more scalable is one of the key requirements for the further development of quantum computers because the advantages they offer become increasingly evident as the systems are scaled up. Researchers at TU Darmstadt have recently taken a decisive step towards achieving this goal.
    Quantum processors based on two-dimensional arrays of optical tweezers, which are created using focussed laser beams, are one of the most promising technologies for developing quantum computing and simulation that will enable highly beneficial applications in the future. A diverse range of applications from drug development through to optimising traffic flows will benefit from this technology.
    These processors have been able to hold several hundred single-atom quantum systems up to now, with each atom representing one quantum bit, or qubit, the basic unit of quantum information. In order to make further advances, it is necessary to increase the number of qubits in the processors. This has now been achieved by a team headed by Professor Gerhard Birkl from the “Atoms — Photons — Quanta” research group in the Department of Physics at TU Darmstadt.
    In a research article first published at the beginning of October 2023 on the arXiv preprint server and now also published, following scientific peer review, in the journal Optica, the team reports on the world’s first successful experiment to realise a quantum-processing architecture that contains more than 1,000 atomic qubits in a single plane.
    “We are extremely pleased that we were the first to break the mark of 1,000 individually controllable atomic qubits because so many other outstanding competitors are hot on our heels,” says Birkl about their results.
    The researchers were able to demonstrate in their experiments that their approach of combining the latest quantum-optical methods with advanced micro-optical technology has enabled them to significantly increase the current limits on the accessible number of qubits.
    This was achieved by introducing the novel method of “quantum bit supercharging,” which allowed the team to overcome the restrictions that the limited performance of the lasers had imposed on the number of usable qubits. A total of 1,305 single-atom qubits were loaded into a quantum array with 3,000 trap sites and reassembled into defect-free target structures of up to 441 qubits. By using several laser sources in parallel, this concept has broken through technological boundaries that had previously been perceived as almost insurmountable.
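    The reassembly step can be pictured with the highly simplified sketch below: atoms load stochastically into a large array of trap sites, and spare atoms are then relocated to fill a smaller, defect-free target block. The real tweezer-sorting procedure is far more sophisticated; only the site counts here mirror the reported figures.

    ```python
    # Highly simplified sketch of the reassembly idea: stochastic loading of a
    # 3,000-site trap array, then moving spare atoms into a defect-free 21 x 21
    # target block. Real tweezer sorting is far more involved than this.
    import random

    random.seed(1)
    n_sites, load_probability = 3000, 0.45
    loaded = [site for site in range(n_sites) if random.random() < load_probability]

    target = set(range(441))                   # desired defect-free block of sites
    occupied = set(loaded)
    empty_targets = sorted(target - occupied)  # holes inside the target block
    spare_atoms = [site for site in loaded if site not in target]

    # Move one spare atom into each empty target site (real sorting optimises paths)
    for src, dst in zip(spare_atoms, empty_targets):
        occupied.discard(src)
        occupied.add(dst)

    print(f"{len(loaded)} atoms loaded; target block defect-free: {target <= occupied}")
    ```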
    For many applications, 1,000 qubits is regarded as the threshold at which the efficiency gains promised by quantum computers can first be demonstrated, and researchers around the world have been working intensively to be the first to cross it. The newly published work shows that, for atomic qubits, this breakthrough was first achieved worldwide by the research group headed by Professor Birkl. The publication also describes how further increases in the number of laser sources will enable qubit numbers of 10,000 and more in just a few years.

  • Study shows background checks don’t always check out

    Employers making hiring decisions, landlords considering possible tenants and schools approving field trip chaperones all widely use commercial background checks. But a new multi-institutional study co-authored by a University of Maryland researcher shows that background checks themselves can’t be trusted.
    Assistant Professor Robert Stewart of the Department of Criminology and Criminal Justice and Associate Professor Sarah Lageson of Rutgers University suspected that the loosely regulated entities that businesses and landlords rely on to run background checks produce faulty reports, and their research bore out this hunch. The results were published last week in Criminology.
    “There’s a common, taken-for-granted assumption that background checks are an accurate reflection of a person’s criminal record, but our findings show that’s not necessarily the case,” Stewart said. “My co-author and I found that there are lots of inaccuracies and mistakes in background checks caused, in part, by imperfect data aggregation techniques that rely on names and birth dates rather than unique identifiers like fingerprints.”
    The erroneous results of a background check can “go both ways,” Stewart said: They can miss convictions that a potential employer would want to know about, or they can falsely assign a conviction to an innocent person through transposed numbers in a birth date, incorrect spelling of a name or simply the existence of common aliases.
    Stewart and Lageson’s study is based on an examination of official state rap sheets — records of all arrests, criminal charges and case dispositions recorded in the state, linked to each subject’s name and fingerprints — for 101 study participants in New Jersey. The researchers then ordered background checks from a regulated service provider, the same type of company that an employer, a landlord or a school system might use. They also looked up background checks on the same study participants from unregulated data providers, such as popular “people search” websites.
    “We find that both types of background checks have numerous ‘false positive’ results, reporting charges that our study participants did not have, as well as ‘false negatives,’ not reporting charges that our study participants did have,” Stewart said.
    More than half of study participants had at least one false-positive error on their regulated and unregulated background checks. About 90% of participants had at least one false-negative error.

    Stewart and Lageson identified a number of problems with private-sector criminal records: mismatched data that create false negatives, missing case dispositions that create incomplete and misleading criminal records, and incorrect data that create false positives.
    For both the commercial and public-use background check services, the driving force behind these errors is likely the erroneous use of matching algorithms.
    “These companies and platforms are linking records together based on names, aliases and birth dates rather than fingerprints, which is what the police use to match people to records,” Stewart said. “So these companies end up lumping people together who are not the same person.”
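    A toy example makes that failure mode concrete: linking records by name and birth date can merge two different people, whereas a unique identifier keeps them apart. All names and records below are fictional.

    ```python
    # Fictional records illustrating the matching problem: name-plus-birth-date
    # linkage merges two different people, while a unique (fingerprint-style)
    # identifier keeps their records separate.
    records = [
        {"person_id": "FP-001", "name": "J. Smith", "dob": "1980-04-02", "charge": "drug possession"},
        {"person_id": "FP-002", "name": "J. Smith", "dob": "1980-04-02", "charge": "attempted murder"},
    ]

    def merge_by_name_dob(records):
        """Commercial-style matching on name + date of birth (error-prone)."""
        merged = {}
        for r in records:
            merged.setdefault((r["name"], r["dob"]), []).append(r["charge"])
        return merged

    def merge_by_unique_id(records):
        """Matching on a unique identifier keeps distinct people distinct."""
        merged = {}
        for r in records:
            merged.setdefault(r["person_id"], []).append(r["charge"])
        return merged

    print(merge_by_name_dob(records))    # one 'person' wrongly carries both charges
    print(merge_by_unique_id(records))   # two separate records, as they should be
    ```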
    Through interviews with study participants, Stewart and Lageson explored the consequences of the errors, including limited access to employment and housing, as well as the difficulty of correcting them.
    For example, one participant who had a pair of drug convictions decades ago had been mistakenly linked to much more serious crimes, including attempted murder.
    “The problem was, he had at one point used an alias, and another man with a very extensive record had used a similar alias, and all his charges were linked to our participant,” Stewart said. “As a result, this other man’s record followed our participant for decades and helped to explain why he always had trouble securing a decent job.”
    The researchers interviewed participants who described how errors in their background checks limited their access to education.

    “We’re talking about a violation of the basic principles of fairness in our society and in the legal system,” Lageson said. “Unfortunately, people have little legal recourse when facing these issues. It’s clear this is an area ripe for policy reform.”
    While commercial background check providers are ostensibly regulated by the Fair Credit Reporting Act and other guidelines, Stewart and Lageson’s research has demonstrated that considerable errors persist.
    Stewart said that public awareness of the potentially erroneous and incomplete results of background checks will be key to addressing this systemic social problem.
    “Other countries are handling background checks in different ways, ways that may take more time, but there are better models out there,” Stewart said. “It may be better for background checks to be done through the state, or the FBI, or through other ways that use biometric data. It’s important for people to realize that there’s a lot at stake.”

  • ‘Scientists’ warning’ on climate and technology

    Throughout human history, technologies have been used to make people’s lives richer and more comfortable, but they have also contributed to a global crisis threatening Earth’s climate, ecosystems and even our own survival. Researchers at the University of California, Irvine, the University of Kansas and Oregon State University have suggested that industrial civilization’s best way forward may entail embracing further technological advancements but doing so with greater awareness of their potential drawbacks.
    In a paper titled “Scientists’ Warning on Technology,” published recently in the Journal of Cleaner Production, the researchers, including Bill Tomlinson, UCI professor of informatics, stress that innovations, particularly in the fields of clean energy and artificial intelligence, will come with risks but may be the most effective way to ensure a sustainable future.
    “Since prehistoric times, technologies have been created to solve problems and benefit people; think of the improvements that have been made in agriculture, manufacturing and transportation,” Tomlinson said. “But these developments have had a dual nature. While addressing the human need for food, farming has led to environmental degradation, and our factories and vehicles have caused a massive buildup of atmospheric carbon dioxide, which is causing climate change.”
    Co-author Andrew W. Torrance, the Paul E. Wilson Distinguished Professor of Law at the University of Kansas, said: “Technology is often offered as a panacea for environmental crises. It is not. Nevertheless, it will play a crucial role in any solution. That is why the role of technology must be taken seriously, rigorously measured, modeled and understood — and then interpreted in light of population and affluence.”
    He added, “I am extremely optimistic about the beneficial role technology could play in helping humanity find its sustainable niche in the biosphere, but [I’m also] stone-cold sober that other, less hopeful outcomes remain possible.”
    The scientists’ warning concept dates to the early 1990s, when the Union of Concerned Scientists published a letter exhorting people to change their habits regarding stewardship of Earth and its resources “if vast human misery is to be avoided and our global home on this planet is not to be irretrievably mutilated.” A second warning, in 2017, was signed by more than 15,000 scholars in different scientific fields. Since then, dozens of additional admonitions have been published, with over 50 currently in preparation.
    “The scientists’ warnings weave a compelling narrative of humanity at a crossroads, urging us to acknowledge the fragility of our biosphere and embrace a collective responsibility for safeguarding our future through proper, science-based actions,” said co-author William Ripple, Oregon State University Distinguished Professor of ecology, who led the project to write the article.

    The Journal of Cleaner Production warning outlines two main methods for reducing, mitigating or eliminating fossil fuel use. The first is infrastructural substitution, replacing coal- and natural gas-fired power plants with renewable resources such as wind and solar, and abandoning internal combustion engines in favor of electric motors. This shift would also involve widespread adoption of electric appliances in homes and swapping out gas furnaces and water heaters for heat pumps.
    A second method to steer humanity away from fossil fuel burning centers on a concept known as “undesign,” the intentional negation of technology and consideration of alternatives that do not rely on labor-saving human inventions.
    “People are often resistant to change, though, especially in contexts where they have come to depend strongly on particular goods and services,” Tomlinson said. “Embracing undesign will require people to be guided to new cultural narratives that are not so reliant on heavily impactful systems.”
    In addition to clean energy technologies, the warning’s authors look to artificial intelligence as a way to point human civilization toward a more sustainable tomorrow. They mention how AI is being used currently to connect wildlife habitats, monitor methane emissions and optimize supply chains. Tomlinson and his colleagues said AI presents far less energy-intensive alternatives to laborious tasks like writing and illustration and is becoming adept at writing computer code, which could come in handy in managing the “complexities of 8 billion-plus people cohabiting on Earth,” according to the paper.
    But Tomlinson noted that AI is not without risks, such as the possibility of runaway energy consumption, perpetuating biases in human societies and AI systems becoming independent and powerful enough that they pose a real danger to humanity.
    “It’s important that humans deploy new technologies to replace those that are environmentally harmful,” he said. “But we need to remain vigilant for potential future harm and attempt to mitigate that as much as possible.
    “In our scientists’ warning, we identify an array of potential future risks from both electrification and AI. We believe that these risks are substantially outweighed by these technologies’ potential benefits in addressing the pressing environmental crises that humanity is currently facing.”
    This project received funding from the National Science Foundation.

  • Artificial intelligence: Aim policies at ‘hardware’ to ensure AI safety, say experts

    A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.
    Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.
    Researchers argue that AI chips and datacentres offer more effective targets for scrutiny and AI safety governance, as these assets have to be physically possessed, whereas the other elements of the “AI triad” — data and algorithms — can, in theory, be endlessly duplicated and disseminated.
    The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.
    The report, published 14 February, is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.
    “Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI.
    “Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

    “AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.
    “Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”
    The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.
    Government efforts across the world over the past year — including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute — have begun to focus on compute when considering AI governance.
    Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute.
    The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”
    Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.

    For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.
    The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling.”
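    As a purely hypothetical sketch, one entry in such a registry might record little more than a per-chip identifier and the parties to each transfer, from which current holdings could be tallied; the field names and values below are invented for illustration, not drawn from the report.

    ```python
    # Hypothetical sketch of an AI-chip transfer registry entry and a simple
    # tally of holdings per buyer. Field names and values are invented.
    from dataclasses import dataclass
    from collections import Counter

    @dataclass
    class ChipTransfer:
        chip_id: str    # unique identifier attached to each chip
        seller: str
        buyer: str
        date: str

    ledger = [
        ChipTransfer("AI-CHIP-0001", "FabCo", "CloudOne", "2024-01-15"),
        ChipTransfer("AI-CHIP-0002", "FabCo", "CloudOne", "2024-01-15"),
        ChipTransfer("AI-CHIP-0003", "FabCo", "LabTwo",   "2024-02-01"),
    ]

    holdings = Counter(t.buyer for t in ledger)   # registered chips per holder
    print(holdings)                               # Counter({'CloudOne': 2, 'LabTwo': 1})
    ```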
    “Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips.”
    Other suggestions to increase visibility — and accountability — include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.
    “Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”
    These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
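    The multi-party idea can be reduced to a simple k-of-n approval check, as in the toy sketch below; real proposals would rely on cryptographic threshold schemes rather than a plain counter, and the party names here are hypothetical.

    ```python
    # Toy k-of-n consent check: a risky training run unlocks only when enough
    # designated parties approve. Real schemes would use threshold cryptography.
    def compute_unlocked(approvals: dict, threshold: int) -> bool:
        """Return True if at least `threshold` parties have signed off."""
        return sum(approvals.values()) >= threshold

    approvals = {"regulator": True, "cloud_provider": True, "auditor": False}
    print(compute_unlocked(approvals, threshold=2))   # True: 2 of 3 parties approved
    ```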
    AI risk mitigation policies might see compute prioritised for research most likely to benefit society — from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.
    The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.
    They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.
    Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”
    The report is titled Computing Power and the Governance of Artificial Intelligence.