More stories

  • Study shows background checks don’t always check out

    Employers making hiring decisions, landlords considering possible tenants and schools approving field trip chaperones all widely use commercial background checks. But a new multi-institutional study co-authored by a University of Maryland researcher shows that background checks themselves can’t be trusted.
    Assistant Professor Robert Stewart of the Department of Criminology and Criminal Justice and Associate Professor Sarah Lageson of Rutgers University suspected that the loosely regulated entities that businesses and landlords rely on to run background checks produce faulty reports, and their research bore out this hunch. The results were published last week in Criminology.
    “There’s a common, taken-for-granted assumption that background checks are an accurate reflection of a person’s criminal record, but our findings show that’s not necessarily the case,” Stewart said. “My co-author and I found that there are lots of inaccuracies and mistakes in background checks caused, in part, by imperfect data aggregation techniques that rely on names and birth dates rather than unique identifiers like fingerprints.”
    The erroneous results of a background check can “go both ways,” Stewart said: They can miss convictions that a potential employer would want to know about, or they can falsely assign a conviction to an innocent person through transposed numbers in a birth date, incorrect spelling of a name or simply the existence of common aliases.
    Stewart and Lageson’s study is based on an examination of official state rap sheets for 101 study participants in New Jersey; these rap sheets contain all arrests, criminal charges, and case dispositions recorded in the state, linked to the record subject’s name and fingerprints. The researchers then ordered background checks from a regulated service provider — the same type of company that an employer, a landlord, or a school system might use. They also looked up background checks on the same study participants from unregulated data providers, such as popular “people search” websites.
    “We find that both types of background checks have numerous ‘false positive’ results, reporting charges that our study participants did not have, as well as ‘false negatives,’ not reporting charges that our study participants did have,” Stewart said.
    More than half of study participants had at least one false-positive error on their regulated and unregulated background checks. About 90% of participants had at least one false-negative error.

    Stewart and Lageson identified a number of problems with private-sector criminal records: mismatched data that create false negatives, missing case dispositions that create incomplete and misleading criminal records, and incorrect data that create false positives.
    For both the commercial and public-use background check services, the driving force behind these errors is likely the algorithms used to link records together.
    “These companies and platforms are linking records together based on names, aliases and birth dates rather than fingerprints, which is what the police use to match people to records,” Stewart said. “So these companies end up lumping people together who are not the same person.”
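    The failure mode Stewart describes is easy to sketch in code. The following is a hypothetical, deliberately simplified matcher of our own (not the study’s methodology or any vendor’s actual system) that links court records to a person using only a name or alias and a birth date; a shared alias is enough to merge two different people’s histories into one report.

```python
from dataclasses import dataclass

@dataclass
class CourtRecord:
    name: str    # name or alias as recorded by the court
    dob: str     # birth date, "YYYY-MM-DD"
    charge: str

def naive_match(record: CourtRecord, subject_name: str, subject_dob: str) -> bool:
    """Hypothetical vendor-style matcher: links on name/alias and birth date only,
    with no fingerprint or other unique identifier."""
    return record.name.lower() == subject_name.lower() and record.dob == subject_dob

records = [
    CourtRecord("J. Rivera", "1975-03-12", "drug possession"),   # the subject's own old case
    CourtRecord("J. Rivera", "1975-03-12", "attempted murder"),  # a different person using the same alias
]

report = [r.charge for r in records if naive_match(r, "J. Rivera", "1975-03-12")]
print(report)  # both charges attach to one person: a false positive on the background check
```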
    Through interviews with study participants, Stewart and Lageson explored the consequences of the errors, including limited access to employment and housing, as well as the difficulty of correcting them.
    For example, one participant who had a pair of drug convictions decades ago had been mistakenly linked to much more serious crimes, including attempted murder.
    “The problem was, he had at one point used an alias, and another man with a very extensive record had used a similar alias, and all his charges were linked to our participant,” Stewart said. “As a result, this other man’s record followed our participant for decades and helped to explain why he always had trouble securing a decent job.”
    The researchers interviewed participants who described how errors in their background checks limited their access to education.

    “We’re talking about a violation of the basic principles of fairness in our society and in the legal system,” Lageson said. “Unfortunately, people have little legal recourse when facing these issues. It’s clear this is an area ripe for policy reform.”
    While commercial background check providers are ostensibly regulated by the Fair Credit Reporting Act and other guidelines, Stewart and Lageson’s research demonstrates that considerable errors persist.
    Stewart said that public awareness of the potentially erroneous and incomplete results of background checks will be key to addressing this systemic social problem.
    “Other countries are handling background checks in different ways, ways that may take more time, but there are better models out there,” Stewart said. “It may be better for background checks to be done through the state, or the FBI, or through other ways that use biometric data. It’s important for people to realize that there’s a lot at stake.”

  • ‘Scientists’ warning’ on climate and technology

    Throughout human history, technologies have been used to make people’s lives richer and more comfortable, but they have also contributed to a global crisis threatening Earth’s climate, ecosystems and even our own survival. Researchers at the University of California, Irvine, the University of Kansas and Oregon State University have suggested that industrial civilization’s best way forward may entail embracing further technological advancements, but doing so with greater awareness of their potential drawbacks.
    In a paper titled “Scientists’ Warning on Technology,” published recently in the Journal of Cleaner Production, the researchers, including Bill Tomlinson, UCI professor of informatics, stress that innovations, particularly in the fields of clean energy and artificial intelligence, will come with risks but may be the most effective way to ensure a sustainable future.
    “Since prehistoric times, technologies have been created to solve problems and benefit people; think of the improvements that have been made in agriculture, manufacturing and transportation,” Tomlinson said. “But these developments have had a dual nature. While addressing the human need for food, farming has led to environmental degradation, and our factories and vehicles have caused a massive buildup of atmospheric carbon dioxide, which is causing climate change.”
    Co-author Andrew W. Torrance, the Paul E. Wilson Distinguished Professor of Law at the University of Kansas, said: “Technology is often offered as a panacea for environmental crises. It is not. Nevertheless, it will play a crucial role in any solution. That is why the role of technology must be taken seriously, rigorously measured, modeled and understood — and then interpreted in light of population and affluence.”
    He added, “I am extremely optimistic about the beneficial role technology could play in helping humanity find its sustainable niche in the biosphere, but [I’m also] stone-cold sober that other, less hopeful outcomes remain possible.”
    The scientists’ warning concept dates to the early 1990s, when the Union of Concerned Scientists published a letter exhorting people to change their habits regarding stewardship of Earth and its resources “if vast human misery is to be avoided and our global home on this planet is not to be irretrievably mutilated.” A second warning, in 2017, was signed by more than 15,000 scholars in different scientific fields. Since then, dozens of additional admonitions have been published, with over 50 currently in preparation.
    “The scientists’ warnings weave a compelling narrative of humanity at a crossroads, urging us to acknowledge the fragility of our biosphere and embrace a collective responsibility for safeguarding our future through proper, science-based actions,” said co-author William Ripple, Oregon State University Distinguished Professor of ecology, who led the project to write the article.

    The Journal of Cleaner Production warning outlines two main methods for reducing, mitigating or eliminating fossil fuel use. The first is infrastructural substitution, replacing coal- and natural gas-fired power plants with renewable resources such as wind and solar, and abandoning internal combustion engines in favor of electric motors. This shift would also involve widespread adoption of electric appliances in homes and swapping out gas furnaces and water heaters for heat pumps.
    A second method to steer humanity away from fossil fuel burning centers on a concept known as “undesign,” the intentional negation of technology and consideration of alternatives that do not rely on labor-saving human inventions.
    “People are often resistant to change, though, especially in contexts where they have come to depend strongly on particular goods and services,” Tomlinson said. “Embracing undesign will require people to be guided to new cultural narratives that are not so reliant on heavily impactful systems.”
    In addition to clean energy technologies, the warning’s authors look to artificial intelligence as a way to point human civilization toward a more sustainable tomorrow. They mention how AI is being used currently to connect wildlife habitats, monitor methane emissions and optimize supply chains. Tomlinson and his colleagues said AI presents far less energy-intensive alternatives to laborious tasks like writing and illustration and is becoming adept at writing computer code, which could come in handy in managing the “complexities of 8 billion-plus people cohabiting on Earth,” according to the paper.
    But Tomlinson noted that AI is not without risks, such as the possibility of runaway energy consumption, perpetuating biases in human societies and AI systems becoming independent and powerful enough that they pose a real danger to humanity.
    “It’s important that humans deploy new technologies to replace those that are environmentally harmful,” he said. “But we need to remain vigilant for potential future harm and attempt to mitigate that as much as possible.
    “In our scientists’ warning, we identify an array of potential future risks from both electrification and AI. We believe that these outcomes are substantially less problematic than these technologies’ potential benefits from addressing the pressing environmental crises that humanity is currently facing.”
    This project received funding from the National Science Foundation.

  • Artificial intelligence: Aim policies at ‘hardware’ to ensure AI safety, say experts

    A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.
    Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.
    Researchers argue that AI chips and datacentres offer more effective targets for scrutiny and AI safety governance, as these assets have to be physically possessed, whereas the other elements of the “AI triad” — data and algorithms — can, in theory, be endlessly duplicated and disseminated.
    The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.
    The report, published 14 February, is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.
    “Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI.
    “Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

    “AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.
    “Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”
    The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.
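    As a back-of-the-envelope check of our own (not a calculation that appears in the report), 350 million is roughly 2^28.4, so the figure corresponds to about 28 doublings, which is what a doubling time of a little under six months gives over 13 years:

```latex
3.5 \times 10^{8} \approx 2^{28.4},
\qquad
\frac{13\ \text{years} \times 12\ \text{months/year}}{28.4\ \text{doublings}} \approx 5.5\ \text{months per doubling}.
```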
    Government efforts across the world over the past year — including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute — have begun to focus on compute when considering AI governance.
    Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute.
    The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”
    Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.

    For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.
    The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling.”
    “Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips.”
    Other suggestions to increase visibility — and accountability — include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.
    “Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”
    These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
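    The multiparty-consent idea can be illustrated with a small sketch. This is a hypothetical toy of ours rather than a mechanism specified in the report, and a real deployment would rest on cryptography (for example, threshold signatures on a signed training-run request) rather than a simple membership check.

```python
from typing import Set

# Hypothetical parties and threshold; the report does not prescribe who would hold the keys.
PARTIES = {"operator", "regulator", "independent_auditor", "host_country"}
REQUIRED_APPROVALS = 3

def may_unlock_compute(approvals: Set[str]) -> bool:
    """Allow a risky training run only if enough distinct, recognised parties consent."""
    return len(approvals & PARTIES) >= REQUIRED_APPROVALS

print(may_unlock_compute({"operator", "independent_auditor"}))               # False: only 2 of 3 required
print(may_unlock_compute({"operator", "independent_auditor", "regulator"}))  # True
```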
    AI risk mitigation policies might see compute prioritised for research most likely to benefit society — from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.
    The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.
    They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.
    Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”
    The report is titled “Computing Power and the Governance of Artificial Intelligence.”

  • Altermagnetism experimentally demonstrated

    Ferromagnetism and antiferromagnetism have long been known to scientists as two classes of magnetic order of materials. Back in 2019, researchers at Johannes Gutenberg University Mainz (JGU) postulated a third class of magnetism, called altermagnetism. This altermagnetism has been the subject of heated debate among experts ever since, with some expressing doubts about its existence. Recently, a team of experimental researchers led by Professor Hans-Joachim Elmers at JGU was able to measure for the first time at DESY (Deutsches Elektronen-Synchrotron) an effect that is considered to be a signature of altermagnetism, thus providing evidence for the existence of this third type of magnetism. The research results were published in Science Advances.
    Altermagnetism — a new magnetic phase
    While ferromagnets, which we all know from refrigerator magnets, have all their magnetic moments aligned in the same direction, antiferromagnets have alternating magnetic moments. Thus, at the macroscopic level, the magnetic moments of antiferromagnets cancel each other out, so there is no external magnetic field — which would cause refrigerator magnets made of this material to simply fall off the refrigerator door. Altermagnets also have alternating magnetic moments, but the crystal environments carrying those moments are oriented differently with respect to one another. “Altermagnets combine the advantages of ferromagnets and antiferromagnets. Their neighboring magnetic moments are always antiparallel to each other, as in antiferromagnets, so there is no macroscopic magnetic effect, but, at the same time, they exhibit a spin-polarized current — just like ferromagnets,” explained Professor Hans-Joachim Elmers, head of the Magnetism group at JGU’s Institute of Physics.
    Moving in the same direction with uniform spin
    Electric currents usually generate magnetic fields. However, if one considers an altermagnet as a whole, integrating the spin polarization in the electronic bands in all directions, it becomes apparent that the magnetic field must be zero despite the spin-polarized current. If, on the other hand, attention is restricted to those electrons that move in a particular direction, the conclusion is that they must have a uniform spin. “This alignment phenomenon has nothing to do with spatial arrangements or where the electrons are located, but only with the direction of the electron velocity,” Elmers added. Since velocity (v) times mass (M) equals momentum (P), physicists use the term “momentum space” in this context. This effect was predicted in the past by theoretical groups at JGU led by Professor Jairo Sinova and Dr. Libor Šmejkal.
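    A schematic toy Hamiltonian of the kind used to illustrate d-wave altermagnets such as ruthenium dioxide (our illustration, not the band structure fitted in the paper; the precise form depends on the choice of crystal axes) makes this direction dependence explicit:

```latex
H(\mathbf{k}) \;=\; \frac{\hbar^{2}k^{2}}{2m}\,\sigma_{0}
\;+\; \beta\,\bigl(k_{x}^{2}-k_{y}^{2}\bigr)\,\sigma_{z} .
```

    The splitting term has opposite signs along the k_x and k_y directions and averages to zero over all directions in momentum space, so the net magnetization vanishes even though electrons travelling along a given direction carry a uniform spin.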
    Proof obtained using momentum electron microscopy
    “Our team was the first to experimentally verify the effect,” said Elmers. The researchers used a specially adapted momentum microscope. For their experiment, the team exposed a thin layer of ruthenium dioxide to X-rays. The resulting excitation of the electrons was sufficient for their emission from the ruthenium dioxide layer and their detection. From the emitted electrons, the researchers were able to determine the velocity distribution of the electrons in the ruthenium dioxide. And using circularly polarized X-rays, they were even able to infer the spin directions.
    For their momentum microscope, the researchers changed the focal plane that is normally used for observation in standard electron microscopes. Instead of a magnified image of the surface of the ruthenium dioxide film, their detector showed a representation of momentum space. “Differing momenta appear at different positions on the detector. Put more simply, the different directions in which the electrons move in a layer are represented by corresponding dots on the detector,” said Elmers.
    Altermagnetism may also be relevant to spintronics. This would involve using the magnetic moment of electrons instead of their charge in dynamic random access memory. As a result, storage capacity could be significantly increased. “Our results could be the solution to what is a major challenge in the field of spintronics,” suggested Elmers. “Exploiting the potential of altermagnets would make it easier to read stored information based on the spin polarization in the electronic bands.”

  • Do AI-driven chemistry labs actually work? New metrics promise answers

    The fields of chemistry and materials science are seeing a surge of interest in “self-driving labs,” which make use of artificial intelligence (AI) and automated systems to expedite research and discovery. Researchers are now proposing a suite of definitions and performance metrics that will allow researchers, non-experts, and future users to better understand both what these new technologies are doing and how each technology is performing in comparison to other self-driving labs.
    Self-driving labs hold tremendous promise for accelerating the discovery of new molecules, materials and manufacturing processes, with applications ranging from electronic devices to pharmaceuticals. While the technologies are still fairly new, some have been shown to reduce the time needed to identify new materials from months or years to days.
    “Self-driving labs are garnering a great deal of attention right now, but there are a lot of outstanding questions regarding these technologies,” says Milad Abolhasani, corresponding author of a paper on the new metrics and an associate professor of chemical and biomolecular engineering at North Carolina State University. “This technology is described as being ‘autonomous,’ but different research teams are defining ‘autonomous’ differently. By the same token, different research teams are reporting different elements of their work in different ways. This makes it difficult to compare these technologies to each other, and comparison is important if we want to be able to learn from each other and push the field forward.
    “What does Self-Driving Lab A do really well? How could we use that to improve the performance of Self-Driving Lab B? We’re proposing a set of shared definitions and performance metrics, which we hope will be adopted by everyone working in this space. The end goal will be to allow all of us to learn from each other and advance these powerful research acceleration technologies.
    “For example, we seem to be seeing some challenges in self-driving labs related to the performance, precision and robustness of some autonomous systems,” Abolhasani says. “This raises questions about how useful these technologies can be. If we have standardized metrics and reporting of results, we can identify these challenges and better understand how to address them.”
    At the core of the new proposal is a clear definition of self-driving labs and seven proposed performance metrics, which researchers would include in any published work related to their self-driving labs:

      • Degree of autonomy: how much guidance does a system need from users?
      • Operational lifetime: how long can the system operate without intervention from users?
      • Throughput: how long does it take the system to run a single experiment?
      • Experimental precision: how reproducible are the system’s results?
      • Material usage: what’s the total amount of materials used by a system for each experiment?
      • Accessible parameter space: to what extent can the system account for all of the variables in each experiment?
      • Optimization efficiency

    “Optimization efficiency is one of the most important of these metrics, but it’s also one of the most complex — it doesn’t lend itself to a concise definition,” Abolhasani says. “Essentially, we want researchers to quantitatively analyze the performance of their self-driving lab and its experiment-selection algorithm by benchmarking it against a baseline — for example, random sampling.
    “Ultimately, we think having a standardized approach to reporting on self-driving labs will help to ensure that this field is producing trustworthy, reproducible results that make the most of AI programs that capitalize on the large, high-quality data sets produced by self-driving labs,” Abolhasani says.
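    To make the random-sampling benchmark concrete, here is a toy sketch of our own (not the metric definition from the paper): count how many experiments each strategy needs, on average, before a simulated response first reaches a target value, and report the ratio as an acceleration factor.

```python
import random

def experiments_to_target(select_next, response, target, budget=500):
    """Number of experiments a selection strategy needs before the response first reaches the target."""
    history = []
    for n in range(1, budget + 1):
        x = select_next(history)
        y = response(x)
        history.append((x, y))
        if y >= target:
            return n
    return budget

def response(x):
    # Toy response surface with its optimum at x = 0.7 (stands in for a real experiment).
    return 1.0 - (x - 0.7) ** 2

def random_sampling(history):
    return random.random()

def greedy_local_search(history, step=0.1):
    # Very simple optimizer: perturb the best conditions observed so far.
    if not history:
        return random.random()
    best_x, _ = max(history, key=lambda p: p[1])
    return min(1.0, max(0.0, best_x + random.uniform(-step, step)))

def mean_cost(strategy, repeats=200, target=0.9999):
    return sum(experiments_to_target(strategy, response, target) for _ in range(repeats)) / repeats

baseline, optimized = mean_cost(random_sampling), mean_cost(greedy_local_search)
print(f"random sampling: {baseline:.0f} experiments on average")
print(f"greedy search:   {optimized:.0f} experiments on average")
print(f"acceleration factor: {baseline / optimized:.1f}")
```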
    The work was done with support from the Dreyfus Program for Machine Learning in the Chemical Sciences and Engineering, under award number ML-21-064; the University of North Carolina Research Opportunities Initiative program; and the National Science Foundation, under grants 1940959 and 2208406.

  • A new optical metamaterial makes true one-way glass possible

    A new approach has allowed researchers at Aalto University to create a kind of metamaterial that has so far been beyond the reach of existing technologies. Unlike natural materials, metamaterials and metasurfaces can be tailored to have specific electromagnetic properties, which means scientists can create materials with features desirable for industrial applications.
    The new metamaterial takes advantage of the nonreciprocal magnetoelectric (NME) effect. The NME effect implies a link between specific properties of the material (its magnetization and polarization) and the different field components of light or other electromagnetic waves. The NME effect is negligible in natural materials, but scientists have been trying to enhance it using metamaterials and metasurfaces because of the technological potential this would unlock.
    “So far, the NME effect has not led to realistic industrial applications. Most of the proposed approaches would only work for microwaves and not visible light, and they also couldn’t be fabricated with available technology,” says Shadi Safaei Jazi, a doctoral researcher at Aalto. The team designed an optical NME metamaterial that can be created with existing technology, using conventional materials and nanofabrication techniques.
    The new material opens up applications that would otherwise need a strong external magnetic field to work — for example, creating truly one-way glass. Glass that’s currently sold as ‘one-way’ is just semi-transparent, letting light through in both directions. When the brightness is different between the two sides (for example, inside and outside a window), it acts like one-way glass. But an NME-based one-way glass wouldn’t need a difference in brightness because light could only go through it in one direction.
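    Stated compactly (schematic notation of ours, not taken from the study), conventional tinted “one-way” glass is reciprocal and only appears one-way because of the brightness contrast, whereas an NME-based window would be genuinely nonreciprocal:

```latex
\text{tinted glass:}\quad T_{1\to 2} = T_{2\to 1} < 1,
\qquad
\text{NME one-way glass:}\quad T_{1\to 2} \gg T_{2\to 1},
```

    where T_{1→2} denotes the optical transmittance from side 1 of the window to side 2.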
    “Just imagine having a window with that glass in your house, office, or car. Regardless of the brightness outside, people wouldn’t be able to see anything inside, while you would enjoy a perfect view from your window,” says Safaei. If the technology succeeds, this one-way glass could also make solar cells more efficient by blocking the thermal emissions that existing cells radiate back toward the sun, which reduces the amount of energy they capture.

  • Fundamental equation for superconducting quantum bits revised

    Quantum bits can be described more precisely with the help of newly discovered harmonics, as a team of 30 researchers reports in Nature Physics.
    Physicists from Forschungszentrum Jülich and the Karlsruhe Institute of Technology have uncovered that Josephson tunnel junctions — the fundamental building blocks of superconducting quantum computers — are more complex than previously thought. Just like overtones in a musical instrument, harmonics are superimposed on the fundamental mode. As a consequence, corrections may lead to quantum bits that are 2 to 7 times more stable. The researchers support their findings with experimental evidence from multiple laboratories across the globe, including the University of Cologne, Ecole Normale Supérieure in Paris, and IBM Quantum in New York.
    It all started in 2019, when Dennis Willsch and Dennis Rieger — two PhD students from FZJ and KIT at the time and joint first authors of the paper — were having a hard time understanding their experiments using the standard model for Josephson tunnel junctions. This model had won Brian Josephson the Nobel Prize in Physics in 1973. Excited to get to the bottom of this, the team led by Ioan Pop scrutinized further data from the Ecole Normale Supérieure in Paris and a 27-qubit device at IBM Quantum in New York, as well as data from previously published experiments. Independently, researchers from the University of Cologne were observing similar deviations of their data from the standard model.
    “Fortunately, Gianluigi Catelani, who was involved in both projects and realized the overlap, brought the research teams together!” recalls Dennis Willsch from FZ Jülich. “The timing was perfect,” adds Chris Dickel from the University of Cologne, “since, at that time, we were exploring quite different consequences of the same underlying problem.”
    Josephson tunnel junctions consist of two superconductors with a thin insulating barrier in-between and, for decades, these circuit elements have been described with a simple sinusoidal model.
    However, as the researchers demonstrate, this “standard model” fails to fully describe the Josephson junctions that are used to build quantum bits. Instead, an extended model including higher harmonics is required to describe the tunneling current between the two superconductors. The principle can also be found in the field of music. When the string of an instrument is struck, the fundamental frequency is overlaid by several harmonic overtones.
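    In terms of the current-phase relation, the contrast can be written schematically (the sinusoidal form is the textbook standard model; the sum over harmonics follows the qualitative description above rather than the paper’s fitted coefficients):

```latex
I_{\text{standard}}(\varphi) \;=\; I_{c}\,\sin\varphi
\qquad\longrightarrow\qquad
I_{\text{extended}}(\varphi) \;=\; \sum_{n \ge 1} I_{n}\,\sin(n\varphi),
```

    where φ is the superconducting phase difference across the junction and the n ≥ 2 terms are the Josephson harmonics, the analogue of a string’s overtones.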
    “It’s exciting that the measurements in the community have reached the level of accuracy at which we can resolve these small corrections to a model which has been considered sufficient for more than 15 years,” Dennis Rieger remarks.
    When the four coordinating professors — Ioan Pop from KIT and Gianluigi Catelani, Kristel Michielsen and David DiVincenzo from FZJ — realized the impact of the findings, they brought together a large collaboration of experimentalists, theoreticians, and materials scientists to join their efforts in presenting a compelling case for the Josephson harmonics model. In the Nature Physics publication, the researchers explore the origin and consequences of Josephson harmonics. “As an immediate consequence, we believe that Josephson harmonics will help in engineering better and more reliable quantum bits by reducing errors up to an order of magnitude, which brings us one step closer towards the dream of a fully universal superconducting quantum computer,” the two first authors conclude.

  • Altermagnetism proves its place on the magnetic family tree

    There is now a new addition to the magnetic family: thanks to experiments at the Swiss Light Source SLS, researchers have proved the existence of altermagnetism. The experimental discovery of this new branch of magnetism is reported in Nature and signifies new fundamental physics, with major implications for spintronics.
    Magnetism is a lot more than just things that stick to the fridge. This understanding came with the discovery of antiferromagnets nearly a century ago. Since then, the family of magnetic materials has been divided into two fundamental phases: the ferromagnetic branch known for several millennia and the antiferromagnetic branch. The experimental proof of a third branch of magnetism, termed altermagnetism, was made at the Swiss Light Source SLS, by an international collaboration led by the Czech Academy of Sciences together with Paul Scherrer Institute PSI.
    The fundamental magnetic phases are defined by the specific spontaneous arrangements of magnetic moments — or electron spins — and of atoms that carry the moments in crystals. Ferromagnets are the type of magnets that stick to the fridge: here spins point in the same direction, giving macroscopic magnetism. In antiferromagnetic materials, spins point in alternating directions, with the result that the materials possess no macroscopic net magnetisation — and thus don’t stick to the fridge. Although other types of magnetism, such as diamagnetism and paramagnetism, have been categorised, these describe specific responses to externally applied magnetic fields rather than spontaneous magnetic orderings in materials.
    Altermagnets have a special combination of the arrangement of spins and crystal symmetries. The spins alternate, as in antiferromagnets, resulting in no net magnetisation. Yet, rather than simply cancelling out, the symmetries give an electronic band structure with strong spin polarization that flips in direction as you pass through the material’s energy bands — hence the name altermagnets. This results in highly useful properties more reminiscent of ferromagnets, as well as some completely new properties.
    A new and useful sibling
    This third magnetic sibling offers distinct advantages for the developing field of next-generation magnetic memory technology, known as spintronics. Whereas electronics makes use only of the charge of the electrons, spintronics also exploits the spin-state of electrons to carry information.
    Although spintronics has for some years promised to revolutionise IT, it’s still in its infancy. Typically, ferromagnets have been used for such devices, as they offer certain highly desirable strong spin-dependent physical phenomena. Yet the macroscopic net magnetisation that is useful in so many other applications poses practical limitations on the scalability of these devices as it causes crosstalk between bits — the information carrying elements in data storage.

    More recently, antiferromagnets have been investigated for spintronics, as they benefit from having no net magnetisation and thus offer ultra-scalability and energy efficiency. However, the strong spin-dependent effects that are so useful in ferromagnets are lacking, again hindering their practical applicability.
    Here enter altermagnets with the best of both: zero net magnetisation together with the coveted strong spin-dependent phenomena typically found in ferromagnets — merits that were regarded as principally incompatible.
    “That’s the magic about altermagnets,” says Tomáš Jungwirth from the Institute of Physics of the Czech Academy of Sciences, principal investigator of the study. “Something that people believed was impossible until recent theoretical predictions is in fact possible.”
    The search is on
    Murmurings that a new type of magnetism was lurking began not long ago: In 2019, Jungwirth together with theoretical colleagues at the Czech Academy of Sciences and University of Mainz identified a class of magnetic materials with a spin structure that did not fit within the classic descriptions of ferromagnetism or antiferromagnetism.
    In 2022, the theorists published their predictions of the existence of altermagnetism. They uncovered more than two hundred altermagnetic candidates in materials ranging from insulators and semiconductors to metals and superconductors. Many of these materials are well known and were extensively explored in the past without their altermagnetic nature being noticed. Due to the huge research and application opportunities that altermagnetism poses, these predictions caused great excitement within the community. The search was on.

    X-rays provide the proof
    Obtaining direct experimental proof of altermagnetism’s existence required demonstrating the unique spin symmetry characteristics predicted in altermagnets. The proof came using spin- and angle-resolved photoemission spectroscopy at the SIS (COPHEE endstation) and ADRESS beamlines of the SLS. This technique enabled the team to visualise a tell-tale feature in the electronic structure of a suspected altermagnet: the splitting of electronic bands corresponding to different spin states, known as the lifting of Kramers spin degeneracy.
    The discovery was made in crystals of manganese telluride, a well-known simple two-element material. Traditionally, the material has been regarded as a classic antiferromagnet because the magnetic moments on neighbouring manganese atoms point in opposite directions, generating a vanishing net magnetisation.
    However, antiferromagnets should not exhibit lifted Kramers spin degeneracy by the magnetic order, whereas ferromagnets or altermagnets should. When the scientists saw the lifting of Kramers spin degeneracy, accompanied by the vanishing net magnetisation, they knew they were looking at an altermagnet.
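    Schematically (our shorthand for the textbook symmetry argument, not notation from the paper), the signature the team was looking for can be summarised as:

```latex
\text{conventional antiferromagnet:}\quad E_{\uparrow}(\mathbf{k}) = E_{\downarrow}(\mathbf{k})\ \text{throughout the Brillouin zone;}
\qquad
\text{altermagnet:}\quad E_{\uparrow}(\mathbf{k}) \neq E_{\downarrow}(\mathbf{k})\ \text{with a splitting whose sign alternates between crystal directions.}
```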
    “Thanks to the high precision and sensitivity of our measurements, we could detect the characteristic alternating splitting of the energy levels corresponding to opposite spin states and thus demonstrate that manganese telluride is neither a conventional antiferromagnet nor a conventional ferromagnet but belongs to the new altermagnetic branch of magnetic materials,” says Juraj Krempasky, beamline scientist in the Beamline Optics Group at PSI and first author of the study.
    The beamlines that enabled this discovery are now disassembled, awaiting the SLS 2.0 upgrade. After twenty years of successful science, the COPHEE endstation will be completely integrated into the new ‘QUEST’ beamline. “It was with the last photons of light at COPHEE that we made these experiments. That they gave such an important scientific breakthrough is very emotional for us,” adds Krempasky.
    The researchers believe that this new fundamental discovery in magnetism will enrich our understanding of condensed-matter physics, with impact across diverse areas of research and technology. As well as its advantages to the developing field of spintronics, it also offers a promising platform for exploring unconventional superconductivity, through new insights into superconducting states that can arise in different magnetic materials.
    “Altermagnetism is actually not something hugely complicated. It is something entirely fundamental that was in front of our eyes for decades without noticing it,” says Jungwirth. “And it is not something that exists only in a few obscure materials. It exists in many crystals that people simply had in their drawers. In that sense, now that we have brought it to light, many people around the world will be able to work on it, giving the potential for a broad impact.”