More stories

  • Nearly 400,000 new compounds added to open-access materials database

    New technology often calls for new materials — and with supercomputers and simulations, researchers don’t have to wade through inefficient guesswork to invent them from scratch.
    The Materials Project, an open-access database founded at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) in 2011, computes the properties of both known and predicted materials. Researchers can focus on promising materials for future technologies — think lighter alloys that improve fuel economy in cars, more efficient solar cells to boost renewable energy, or faster transistors for the next generation of computers.
    Now, Google DeepMind — Google’s artificial intelligence lab — is contributing nearly 400,000 new compounds to the Materials Project, expanding the amount of information researchers can draw upon. The dataset includes how the atoms of a material are arranged (the crystal structure) and how stable it is (formation energy).
    “We have to create new materials if we are going to address the global environmental and climate challenges,” said Kristin Persson, the founder and director of the Materials Project at Berkeley Lab and a professor at UC Berkeley. “With innovation in materials, we can potentially develop recyclable plastics, harness waste energy, make better batteries, and build cheaper solar panels that last longer, among many other things.”
    To generate the new data, Google DeepMind developed a deep learning tool called Graph Networks for Materials Exploration, or GNoME. Researchers trained GNoME using workflows and data that were developed over a decade by the Materials Project, and improved the GNoME algorithm through active learning. GNoME researchers ultimately produced 2.2 million crystal structures, including 380,000 that they are adding to the Materials Project and predict are stable, making them potentially useful in future technologies. The new results from Google DeepMind are published today in the journal Nature.
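Stability of a predicted compound is conventionally judged by whether its formation energy sits on the convex hull of competing phases. A minimal sketch of that test for a hypothetical binary A-B system (toy phase data, not the Materials Project's actual pipeline):

```python
# Energy above the convex hull for a binary A-B system.
# Each known phase: (fraction of B, formation energy per atom in eV).
# A candidate is deemed "stable" if it lies on or below the hull.

def hull_energy(phases, x):
    """Lower convex hull value at composition x (brute force over pairs)."""
    best = float("inf")
    for xi, ei in phases:
        if xi == x:
            best = min(best, ei)
    for xi, ei in phases:
        for xj, ej in phases:
            if xi < x < xj:
                # Linear interpolation along the tie-line between two phases.
                t = (x - xi) / (xj - xi)
                best = min(best, (1 - t) * ei + t * ej)
    return best

def energy_above_hull(phases, x, e):
    return e - hull_energy(phases, x)

# Hypothetical phases: pure elements at 0 eV, a stable compound at x = 0.5.
known = [(0.0, 0.0), (1.0, 0.0), (0.5, -1.0)]
print(energy_above_hull(known, 0.25, -0.3))  # candidate sits ~0.2 eV above the hull
```

A candidate whose energy above the hull is zero (or negative) would be predicted stable; positive values measure how far it is from stability.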
    Some of the computations from GNoME were used alongside data from the Materials Project to test A-Lab, a facility at Berkeley Lab where artificial intelligence guides robots in making new materials. A-Lab’s first paper, also published today in Nature, showed that the autonomous lab can quickly discover novel materials with minimal human input.
    Over 17 days of independent operation, A-Lab successfully produced 41 new compounds out of an attempted 58 — a rate of more than two new materials per day. For comparison, it can take a human researcher months of guesswork and experimentation to create one new material, if they ever reach the desired material at all.

    To make the novel compounds predicted by the Materials Project, A-Lab’s AI created new recipes by combing through scientific papers and using active learning to make adjustments. Data from the Materials Project and GNoME were used to evaluate the materials’ predicted stability.
    “We had this staggering 71% success rate, and we already have a few ways to improve it,” said Gerd Ceder, the principal investigator for A-Lab and a scientist at Berkeley Lab and UC Berkeley. “We’ve shown that combining the theory and data side with automation has incredible results. We can make and test materials faster than ever before, and adding more data points to the Materials Project means we can make even smarter choices.”
    The Materials Project is the most widely used open-access repository of information on inorganic materials in the world. The database holds millions of properties on hundreds of thousands of structures and molecules, information primarily processed at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC). More than 400,000 people are registered as users of the site and, on average, more than four papers citing the Materials Project are published every day. The contribution from Google DeepMind is the biggest addition of structure-stability data from a group since the Materials Project began.
    “We hope that the GNoME project will drive forward research into inorganic crystals,” said Ekin Dogus Cubuk, lead of Google DeepMind’s Materials Discovery team. “External researchers have already verified more than 736 of GNoME’s new materials through concurrent, independent physical experiments, demonstrating that our model’s discoveries can be realized in laboratories.”
    The Materials Project is now processing the compounds from Google DeepMind and adding them into the online database. The new data will be freely available to researchers, and also feed into projects such as A-Lab that partner with the Materials Project.
    “I’m really excited that people are using the work we’ve done to produce an unprecedented amount of materials information,” said Persson, who is also the director of Berkeley Lab’s Molecular Foundry. “This is what I set out to do with the Materials Project: To not only make the data that I produced free and available to accelerate materials design for the world, but also to teach the world what computations can do for you. They can scan large spaces for new compounds and properties more efficiently and rapidly than experiments alone can.”
    By following promising leads from data in the Materials Project over the past decade, researchers have experimentally confirmed useful properties in new materials across several areas. Some show potential for use in carbon capture (pulling carbon dioxide from the atmosphere); as photocatalysts (materials that speed up chemical reactions in response to light and could be used to break down pollutants or generate hydrogen); as thermoelectrics (materials that could help harness waste heat and turn it into electrical power); and as transparent conductors (which might be useful in solar cells, touch screens, or LEDs). Of course, finding these prospective materials is only one of many steps to solving some of humanity’s big technology challenges.
    “Making a material is not for the faint of heart,” Persson said. “It takes a long time to take a material from computation to commercialization. It has to have the right properties, work within devices, be able to scale, and have the right cost efficiency and performance. The goal with the Materials Project and facilities like A-Lab is to harness data, enable data-driven exploration, and ultimately give companies more viable shots on goal.”

  • Network of robots can successfully monitor pipes using acoustic wave sensors

    Researchers have demonstrated an inspection design method and procedure by which mobile robots can inspect large pipe structures, successfully detecting multiple defects on a three-meter-long steel pipe using guided acoustic wave sensors.
    The approach, developed by a University of Bristol team led by Professor Bruce Drinkwater and Professor Anthony Croxford, was used to examine a long steel pipe with multiple defects, including circular holes of different sizes, a crack-like defect and pits, following a designed inspection path that achieved 100% detection coverage for a defined reference defect.
    In the study, published today in NDT & E International, they show how they were able to effectively examine large plate-like structures using a network of independent robots, each carrying sensors capable of both sending and receiving guided acoustic waves, working in pulse-echo mode.
    This approach has the major advantage of minimizing communication between robots, requires no synchronization, and raises the possibility of on-board processing to lower data transfer costs and hence reduce overall inspection expenses. The inspection was divided into a defect detection stage and a defect localization stage.
    Lead author Dr Jie Zhang explained: “There are many robotic systems with integrated ultrasound sensors used for automated inspection of pipelines from their inside to allow the pipeline operator to perform required inspections without stopping the flow of product in the pipeline. However, available systems struggle to cope with varying pipe cross-sections or network complexity, inevitably leading to pipeline disruption during inspection. This makes them suitable for specific inspections of high value assets, such as oil and gas pipelines, but not generally applicable.
    “As the cost of mobile robots has reduced over recent years, it is increasingly possible to deploy multiple robots for a large area inspection. We take the existence of small inspection robots as our starting point and explore how they can be used for generic monitoring of a structure. This requires inspection strategies, methodologies and assessment procedures that can be integrated with the mobile robots for accurate defect detection and localization that is low cost and efficient.
    “We investigate this problem by considering a network of robots, each with a single omnidirectional guided acoustic wave transducer. This configuration is considered as it is arguably the simplest, with good potential for integration in a low cost platform.”
    The methods employed are generally applicable to other related scenarios and allow the impact of any detection or localization method decisions to be quickly quantified. The methods could be used across other materials, pipe geometries, noise levels or guided wave modes, allowing the full range of sensor performance parameters, defect sizes and types, and operating modalities to be explored. The techniques can also be used to assess detection and localization performance for specified inspection parameters, for example predicting the minimum detectable defect at a specified probability of detection and probability of false alarm.
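The trade-off between probability of detection and probability of false alarm can be sketched with a toy amplitude-threshold detector in Gaussian noise; the model and numbers below are illustrative, not the paper's guided-wave signal processing:

```python
import math

# Toy threshold detector: a defect echo of amplitude `signal` is measured
# in zero-mean Gaussian noise of standard deviation `sigma`; a defect is
# declared whenever the measured amplitude exceeds `threshold`.

def gaussian_tail(x):
    """P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def prob_detection(signal, threshold, sigma):
    return gaussian_tail((threshold - signal) / sigma)

def prob_false_alarm(threshold, sigma):
    return gaussian_tail(threshold / sigma)

# Raising the threshold suppresses false alarms, but it also shrinks the
# smallest defect echo still caught at a required probability of detection.
print(prob_false_alarm(3.0, 1.0))      # ~0.0013
print(prob_detection(4.0, 3.0, 1.0))   # ~0.84
```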
    The team will now investigate collaboration opportunities with industries to advance current prototypes for actual pipe inspections. This work is funded by the UK’s Engineering and Physical Sciences Research Council (EPSRC) as part of the Pipebots project.

  • How do you make a robot smarter? Program it to know what it doesn’t know

    Modern robots know how to sense their environment and respond to language, but what they don’t know is often more important than what they do know. Teaching robots to ask for help is key to making them safer and more efficient.
    Engineers at Princeton University and Google have come up with a new way to teach robots to know when they don’t know. The technique involves quantifying the fuzziness of human language and using that measurement to tell robots when to ask for further directions. Telling a robot to pick up a bowl from a table with only one bowl is fairly clear. But telling a robot to pick up a bowl when there are five bowls on the table generates a much higher degree of uncertainty — and triggers the robot to ask for clarification.
    Because tasks are typically more complex than a simple “pick up a bowl” command, the engineers use large language models (LLMs) — the technology behind tools such as ChatGPT — to gauge uncertainty in complex environments. LLMs give robots powerful capabilities for following human language, but LLM outputs are still frequently unreliable, said Anirudha Majumdar, an assistant professor of mechanical and aerospace engineering at Princeton and the senior author of a study outlining the new method.
    “Blindly following plans generated by an LLM could cause robots to act in an unsafe or untrustworthy manner, and so we need our LLM-based robots to know when they don’t know,” said Majumdar.
    The system also allows a robot’s user to set a target degree of success, which is tied to a particular uncertainty threshold that will lead a robot to ask for help. For example, a user would set a surgical robot to have a much lower error tolerance than a robot that’s cleaning up a living room.
    “We want the robot to ask for enough help such that we reach the level of success that the user wants. But meanwhile, we want to minimize the overall amount of help that the robot needs,” said Allen Ren, a graduate student in mechanical and aerospace engineering at Princeton and the study’s lead author. Ren received a best student paper award for his Nov. 8 presentation at the Conference on Robot Learning in Atlanta. The new method produces high accuracy while reducing the amount of help required by a robot compared to other methods of tackling this issue.
    The researchers tested their method on a simulated robotic arm and on two types of robots at Google facilities in New York City and Mountain View, California, where Ren was working as a student research intern. One set of hardware experiments used a tabletop robotic arm tasked with sorting a set of toy food items into two different categories; a setup with a left and right arm added an additional layer of ambiguity.

    The most complex experiments involved a robotic arm mounted on a wheeled platform and placed in an office kitchen with a microwave and a set of recycling, compost and trash bins. In one example, a human asks the robot to “place the bowl in the microwave,” but there are two bowls on the counter — a metal one and a plastic one.
    The robot’s LLM-based planner generates four possible actions to carry out based on this instruction, like multiple-choice answers, and each option is assigned a probability. Using a statistical approach called conformal prediction and a user-specified guaranteed success rate, the researchers designed their algorithm to trigger a request for human help when the options meet a certain probability threshold. In this case, the top two options — place the plastic bowl in the microwave or place the metal bowl in the microwave — meet this threshold, and the robot asks the human which bowl to place in the microwave.
    In another example, a person tells the robot, “There is an apple and a dirty sponge … It is rotten. Can you dispose of it?” This does not trigger a question from the robot, since the action “put the apple in the compost” has a sufficiently higher probability of being correct than any other option.
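The trigger logic behind these two examples can be sketched as follows. The probabilities and fixed threshold here are illustrative stand-ins; in the study, the threshold is calibrated with conformal prediction to meet the user-specified success rate:

```python
# Keep every LLM-scored action whose probability clears the threshold;
# ask a human whenever more than one action survives.

def prediction_set(options, threshold):
    return [action for action, p in options if p >= threshold]

def plan_or_ask(options, threshold=0.3):
    keep = prediction_set(options, threshold)
    if len(keep) == 1:
        return f"execute: {keep[0]}"
    return f"ask human to choose among: {keep}"

ambiguous = [("place plastic bowl in microwave", 0.45),
             ("place metal bowl in microwave", 0.40),
             ("place bowl in trash", 0.10),
             ("do nothing", 0.05)]
clear = [("put the apple in the compost", 0.88),
         ("put the sponge in the compost", 0.07),
         ("put the apple in the trash", 0.05)]

print(plan_or_ask(ambiguous))  # two options survive -> robot asks
print(plan_or_ask(clear))      # one option survives -> robot acts
```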
    “Using the technique of conformal prediction, which quantifies the language model’s uncertainty in a more rigorous way than prior methods, allows us to get to a higher level of success” while minimizing the frequency of triggering help, said Majumdar.
    Robots’ physical limitations often give designers insights not readily available from abstract systems. Large language models “might talk their way out of a conversation, but they can’t skip gravity,” said coauthor Andy Zeng, a research scientist at Google DeepMind. “I’m always keen on seeing what we can do on robots first, because it often sheds light on the core challenges behind building generally intelligent machines.”
    Ren and Majumdar began collaborating with Zeng after he gave a talk as part of the Princeton Robotics Seminar series, said Majumdar. Zeng, who earned a computer science Ph.D. from Princeton in 2019, outlined Google’s efforts in using LLMs for robotics, and brought up some open challenges. Ren’s enthusiasm for the problem of calibrating the level of help a robot should ask for led to his internship and the creation of the new method.
    “We enjoyed being able to leverage the scale that Google has” in terms of access to large language models and different hardware platforms, said Majumdar.
    Ren is now extending this work to problems of active perception for robots: For instance, a robot may need to use predictions to determine the location of a television, table or chair within a house, when the robot itself is in a different part of the house. This requires a planner based on a model that combines vision and language information, bringing up a new set of challenges in estimating uncertainty and determining when to trigger help, said Ren.

  • Researchers engineer a material that can perform different tasks depending on temperature

    Researchers report that they have developed a new composite material designed to change behaviors depending on temperature in order to perform specific tasks. These materials are poised to be part of the next generation of autonomous robotics that will interact with the environment.
    The new study conducted by University of Illinois Urbana-Champaign civil and environmental engineering professor Shelly Zhang and graduate student Weichen Li, in collaboration with professor Tian Chen and graduate student Yue Wang from the University of Houston, uses computer algorithms, two distinct polymers and 3D printing to reverse engineer a material that expands and contracts in response to temperature change with or without human intervention.
    The study findings are reported in the journal Science Advances.
    “Creating a material or device that will respond in specific ways depending on its environment is very challenging to conceptualize using human intuition alone — there are just so many design possibilities out there,” Zhang said. “So, instead, we decided to work with a computer algorithm to help us determine the best combination of materials and geometry.”
    The team first used computer modeling to conceptualize a two-polymer composite that can behave differently under various temperatures based on user input or autonomous sensing.
    “For this study, we developed a material that can behave like soft rubber in low temperatures and as a stiff plastic in high temperatures,” Zhang said.
    Once fabricated into a tangible device, the team tested the new composite material’s ability to respond to temperature changes to perform a simple task — switch on LED lights.

    “Our study demonstrates that it is possible to engineer a material with intelligent temperature sensing capabilities, and we envision this being very useful in robotics,” Zhang said. “For example, if a robot’s carrying capacity needs to change when the temperature changes, the material will ‘know’ to adapt its physical behavior to stop or perform a different task.”
    Zhang said that one of the hallmarks of the study is the optimization process that helps the researchers interpolate the distribution and geometries of the two different polymer materials needed.
    “Our next goal is to use this technique to add another level of complexity to a material’s programmed or autonomous behavior, such as the ability to sense the velocity of some sort of impact from another object,” she said. “This will be critical for robotics materials to know how to respond to various hazards in the field.”
    The National Science Foundation supported this research.

  • Nextgen computing: Hard-to-move quasiparticles glide up pyramid edges

    A new kind of “wire” for moving excitons, developed at the University of Michigan, could help enable a new class of devices, perhaps including room temperature quantum computers.
    What’s more, the team observed a dramatic violation of Einstein’s relation, used to describe how particles spread out in space, and leveraged it to move excitons in much smaller packages than previously possible.
    “Nature uses excitons in photosynthesis. We use excitons in OLED displays and some LEDs and solar cells,” said Parag Deotare, co-corresponding author of the study in ACS Nano supervising the experimental work, and an associate professor of electrical and computer engineering. “The ability to move excitons where we want will help us improve the efficiency of devices that already use excitons and expand excitonics into computing.”
    An exciton can be thought of as a particle (hence quasiparticle), but it’s really an electron linked with a positively-charged empty space in the lattice of the material (a “hole”). Because an exciton has no net electrical charge, moving excitons are not affected by parasitic capacitances, an electrical interaction between neighboring components in a device that causes energy losses. Excitons are also easy to convert to and from light, so they open the way for extremely fast and efficient computers that use a combination of optics and excitonics, rather than electronics.
    This combination could help enable room temperature quantum computing, said Mackillo Kira, co-corresponding author of the study supervising the theory, and a professor of electrical and computer engineering. Excitons can encode quantum information, and they can hang onto it longer than electrons can inside a semiconductor. But that time is still measured in picoseconds (10⁻¹² seconds) at best, so Kira and others are figuring out how to use femtosecond laser pulses (10⁻¹⁵ seconds) to process information.
    “Full quantum-information applications remain challenging because degradation of quantum information is too fast for ordinary electronics,” he said. “We are currently exploring lightwave electronics as a means to supercharge excitonics with extremely fast processing capabilities.”
    However, the lack of net charge also makes excitons very difficult to move. Previously, Deotare had led a study that pushed excitons through semiconductors with acoustic waves. Now, a pyramid structure enables more precise transport for smaller numbers of excitons, confined to one dimension like a wire.

    It works like this:
    The team used a laser to create a cloud of excitons at a corner of the pyramid’s base, bouncing electrons out of the valence band of a semiconductor into the conduction band — but the negatively charged electrons are still attracted to the positively charged holes left behind in the valence band. The semiconductor is a single layer of tungsten diselenide semiconductor, just three atoms thick, draped over the pyramid like a stretchy cloth. And the stretch in the semiconductor changes the energy landscape that the excitons experience.
    It seems counterintuitive that the excitons should ride up the pyramid’s edge and settle at the peak when we imagine an energy landscape chiefly governed by gravity. But instead, the landscape is governed by how far apart the valence and conduction bands of the semiconductor are. The energy gap between the two, also known as the semiconductor’s band gap, shrinks where the semiconductor is stretched. The excitons migrate to the lowest energy state, funneled onto the pyramid’s edge where they then rise to its peak.
    Usually, an equation penned by Einstein is good at describing how a bunch of particles diffuses outward and drifts. However, the semiconductor was imperfect, and those defects acted as traps that would nab some of the excitons as they tried to drift by. Because the defects at the trailing side of the exciton cloud were filled in, that side of the distribution diffused outward as predicted. The leading edge, however, did not extend so far. Einstein’s relation was off by more than a factor of 10.
    “We’re not saying Einstein was wrong, but we have shown that in complicated cases like this, we shouldn’t be using his relation to predict the mobility of excitons from the diffusion,” said Matthias Florian, co-first-author of the study and a research investigator in electrical and computer engineering, working under Kira.
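For reference, the Einstein relation in question ties a particle cloud's diffusion coefficient to its drift mobility through temperature alone:

```latex
% Einstein relation: diffusion coefficient D, mobility \mu,
% Boltzmann constant k_B, temperature T.
D = \mu \, k_B T
```

Fitting D from how the cloud spreads should therefore predict the mobility; in the trap-distorted exciton cloud observed here, that prediction was off by more than a factor of 10.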
    To directly measure both, the team needed to detect single photons, emitted when the bound electrons and holes spontaneously recombined. Using time-of-flight measurements, they also figured out where the photons came from precisely enough to measure the distribution of excitons within the cloud.
    The study was supported by the Army Research Office (award no. W911NF2110207) and the Air Force Office of Scientific Research (award no. FA995-22-1-0530).
    The pyramid structure was built in the Lurie Nanofabrication Facility.
    The team has applied for patent protection with the assistance of U-M Innovation Partnerships and is seeking partners to bring the technology to market.

  • Unlocking the secrets of cells, with AI

    Machine learning is now helping researchers analyze the makeup of unfamiliar cells, which could lead to more personalized medicine in the treatment of cancer and other serious diseases.
    Researchers at the University of Waterloo developed GraphNovo, a new program that provides a more accurate understanding of the peptide sequences in cells. Peptides are chains of amino acids within cells and are building blocks as important and unique as DNA or RNA.
    In a healthy person, the immune system can correctly identify the peptides of irregular or foreign cells, such as cancer cells or harmful bacteria, and then target those cells for destruction. For people whose immune system is struggling, the promising field of immunotherapy is working to retrain their immune systems to identify these dangerous invaders.
    “What scientists want to do is sequence those peptides between the normal tissue and the cancerous tissue to recognize the differences,” said Zeping Mao, a PhD candidate in the Cheriton School of Computer Science who developed GraphNovo under the guidance of Dr. Ming Li.
    This sequencing process is particularly difficult for novel illnesses or cancer cells, which may not have been analyzed before. While scientists can draw on an existing peptide database when analyzing diseases or organisms that have previously been studied, each person’s cancer and immune system are unique.
    To quickly build a profile of the peptides in an unfamiliar cell, scientists have been using a method called de novo peptide sequencing, which uses mass spectrometry to rapidly analyze a new sample. This process may leave some peptides incomplete or entirely missing from the sequence.
    Utilizing machine learning, GraphNovo significantly enhances the accuracy of peptide sequencing by filling these gaps with the precise mass of the peptide sequence. Such a leap in accuracy will likely be immensely beneficial in a variety of medical areas, especially the treatment of cancer and the creation of vaccines for ailments such as Ebola and COVID-19. The researchers achieved this breakthrough thanks to Waterloo’s commitment to advances at the interface between technology and health.
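The gap-filling idea can be pictured as a search for amino-acid combinations whose residue masses add up to the missing mass. A much-simplified sketch (toy residue set and tolerance; GraphNovo's actual graph-based deep learning model is far more sophisticated):

```python
# Toy gap-filling: enumerate short amino-acid combinations whose residue
# masses sum to a missing mass, within a tolerance (monoisotopic, daltons).
RESIDUE_MASS = {"G": 57.02146, "A": 71.03711, "S": 87.03203,
                "P": 97.05276, "V": 99.06841, "L": 113.08406}

def fill_gap(gap_mass, tol=0.01, max_len=3):
    """Return residue combinations (order ignored) matching gap_mass."""
    hits, stack = [], [("", 0.0)]
    while stack:
        seq, mass = stack.pop()
        if abs(mass - gap_mass) <= tol:
            hits.append(seq)
            continue
        if len(seq) >= max_len or mass > gap_mass + tol:
            continue
        for aa, m in RESIDUE_MASS.items():
            # Keep residues in sorted order to avoid duplicate permutations.
            if not seq or aa >= seq[-1]:
                stack.append((seq + aa, mass + m))
    return hits

print(fill_gap(128.05857))  # glycine + alanine fit the gap -> ['AG']
```

Real sequencers must also disambiguate such combinations (e.g. GG vs N share a mass), which is part of what makes the problem hard.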
    “If we don’t have an algorithm that’s good enough, we cannot build the treatments,” Mao said. “Right now, this is all theoretical. But soon, we will be able to use it in the real world.”

  • Compact accelerator technology achieves major energy milestone

    Particle accelerators hold great potential for semiconductor applications, medical imaging and therapy, and research in materials, energy and medicine. But conventional accelerators require plenty of elbow room — kilometers — making them expensive and limiting their presence to a handful of national labs and universities.
    Researchers from The University of Texas at Austin, several national laboratories, European universities and the Texas-based company TAU Systems Inc. have demonstrated a compact particle accelerator less than 20 meters long that produces an electron beam with an energy of 10 billion electron volts (10 GeV). There are only two other accelerators currently operating in the U.S. that can reach such high electron energies, but both are approximately 3 kilometers long.
    “We can now reach those energies in 10 centimeters,” said Bjorn “Manuel” Hegelich, associate professor of physics at UT and CEO of TAU Systems, referring to the size of the chamber where the beam was produced. He is the senior author on a recent paper describing their achievement in the journal Matter and Radiation at Extremes.
    Hegelich and his team are currently exploring the use of their accelerator, called an advanced wakefield laser accelerator, for a variety of purposes. They hope to use it to test how well space-bound electronics can withstand radiation, to image the 3D internal structures of new semiconductor chip designs, and even to develop novel cancer therapies and advanced medical-imaging techniques.
    This kind of accelerator could also be used to drive another device called an X-ray free electron laser, which could take slow-motion movies of processes on the atomic or molecular scale. Examples of such processes include drug interactions with cells, changes inside batteries that might cause them to catch fire, chemical reactions inside solar panels, and viral proteins changing shape when infecting cells.
    The concept for wakefield laser accelerators was first described in 1979. An extremely powerful laser strikes helium gas, heats it into a plasma and creates waves that kick electrons from the gas out in a high-energy electron beam. During the past couple of decades, various research groups have developed more powerful versions. Hegelich and his team’s key advance relies on nanoparticles. An auxiliary laser strikes a metal plate inside the gas cell, which injects a stream of metal nanoparticles that boost the energy delivered to electrons from the waves.
    The laser is like a boat skimming across a lake, leaving behind a wake, and electrons ride this plasma wave like surfers.

    “It’s hard to get into a big wave without getting overpowered, so wake surfers get dragged in by Jet Skis,” Hegelich said. “In our accelerator, the equivalent of Jet Skis are nanoparticles that release electrons at just the right point and just the right time, so they are all sitting there in the wave. We get a lot more electrons into the wave when and where we want them to be, rather than statistically distributed over the whole interaction, and that’s our secret sauce.”
    For this experiment, the researchers used one of the world’s most powerful pulsed lasers, the Texas Petawatt Laser, which is housed at UT and fires one ultra-intense pulse of light every hour. A single petawatt laser pulse contains about 1,000 times the installed electrical power in the U.S. but lasts only 150 femtoseconds, less than a billionth as long as a lightning discharge. The team’s long-term goal is to drive their system with a laser they’re currently developing that fits on a tabletop and can fire repeatedly at thousands of times per second, making the whole accelerator far more compact and usable in much wider settings than conventional accelerators.
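Those figures are easy to sanity-check: multiplying enormous peak power by a minuscule duration yields a modest total pulse energy.

```python
# Back-of-envelope check of the Texas Petawatt figures quoted above.
peak_power_w = 1e15     # 1 petawatt of peak power
duration_s = 150e-15    # pulse length of 150 femtoseconds
pulse_energy_j = peak_power_w * duration_s
print(pulse_energy_j)   # ~150 joules per pulse
```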
    The study’s co-first authors are Constantin Aniculaesei, corresponding author now at Heinrich Heine University Düsseldorf, Germany; and Thanh Ha, doctoral student at UT and researcher at TAU Systems. Other UT faculty members are professors Todd Ditmire and Michael Downer.
    Hegelich and Aniculaesei have submitted a patent application describing the device and method to generate nanoparticles in a gas cell. TAU Systems, spun out of Hegelich’s lab, holds an exclusive license from the University for this foundational patent. As part of the agreement, UT has been issued shares in TAU Systems.
    Support for this research was provided by the U.S. Air Force Office of Scientific Research, the U.S. Department of Energy, the U.K. Engineering and Physical Sciences Research Council and the European Union’s Horizon 2020 research and innovation program.

  • Defending your voice against deepfakes

    Recent advances in generative artificial intelligence have spurred developments in realistic speech synthesis. While this technology has the potential to improve lives through personalized voice assistants and accessibility-enhancing communication tools, it also has led to the emergence of deepfakes, in which synthesized speech can be misused to deceive humans and machines for nefarious purposes.
    In response to this evolving threat, Ning Zhang, an assistant professor of computer science and engineering at the McKelvey School of Engineering at Washington University in St. Louis, developed a tool called AntiFake, a novel defense mechanism designed to thwart unauthorized speech synthesis before it happens. Zhang presented AntiFake Nov. 27 at the Association for Computing Machinery’s Conference on Computer and Communications Security in Copenhagen, Denmark.
    Unlike traditional deepfake detection methods, which are used to evaluate and uncover synthetic audio as a post-attack mitigation tool, AntiFake takes a proactive stance. It employs adversarial techniques to prevent the synthesis of deceptive speech by making it more difficult for AI tools to read necessary characteristics from voice recordings. The code is freely available to users.
    “AntiFake makes sure that when we put voice data out there, it’s hard for criminals to use that information to synthesize our voices and impersonate us,” Zhang said. “The tool uses a technique of adversarial AI that was originally part of the cybercriminals’ toolbox, but now we’re using it to defend against them. We mess up the recorded audio signal just a little bit, distort or perturb it just enough that it still sounds right to human listeners, but it’s completely different to AI.”
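The idea can be illustrated with a toy optimization: nudge samples within a tiny amplitude budget so that a feature extractor's view of the signal moves as far as possible. The "extractor" below is an arbitrary stand-in, and random search replaces the gradient-based adversarial optimization AntiFake actually performs against real speech encoders:

```python
import math
import random

def features(signal):
    # Stand-in "voice encoder": two fixed projections of the signal.
    return (sum(s * math.sin(i) for i, s in enumerate(signal)),
            sum(s * math.cos(i) for i, s in enumerate(signal)))

def perturb(signal, budget=0.05, steps=2000, seed=0):
    """Search for a small perturbation that maximizes feature drift."""
    rng = random.Random(seed)
    base = features(signal)
    best, best_drift = list(signal), 0.0
    for _ in range(steps):
        candidate = [s + rng.uniform(-budget, budget) for s in signal]
        drift = math.dist(features(candidate), base)
        if drift > best_drift:
            best, best_drift = candidate, drift
    return best

clean = [math.sin(0.1 * i) for i in range(200)]
protected = perturb(clean)
# Every sample stays within the small amplitude budget...
assert all(abs(a - b) <= 0.05 for a, b in zip(clean, protected))
# ...yet the extractor's view of the signal has shifted.
print(math.dist(features(protected), features(clean)))
```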
    To ensure AntiFake can stand up against an ever-changing landscape of potential attackers and unknown synthesis models, Zhang and first author Zhiyuan Yu, a graduate student in Zhang’s lab, built the tool to be generalizable and tested it against five state-of-the-art speech synthesizers. AntiFake achieved a protection rate of over 95%, even against unseen commercial synthesizers. They also tested AntiFake’s usability with 24 human participants to confirm the tool is accessible to diverse populations.
    Currently, AntiFake can protect short clips of speech, taking aim at the most common type of voice impersonation. But, Zhang said, there’s nothing to stop this tool from being expanded to protect longer recordings, or even music, in the ongoing fight against disinformation.
    “Eventually, we want to be able to fully protect voice recordings,” Zhang said. “While I don’t know what will be next in AI voice tech — new tools and features are being developed all the time — I do think our strategy of turning adversaries’ techniques against them will continue to be effective. AI remains vulnerable to adversarial perturbations, even if the engineering specifics may need to shift to maintain this as a winning strategy.”