More stories

  • Using AI to improve diagnosis of rare genetic disorders

    Diagnosing rare Mendelian disorders is a labor-intensive task, even for experienced geneticists. Investigators at Baylor College of Medicine are trying to make the process more efficient using artificial intelligence. The team developed a machine learning system called AI-MARRVEL (AIM) to help prioritize potentially causative variants for Mendelian disorders. The study is published today in NEJM AI.
    Researchers from the Baylor Genetics clinical diagnostic laboratory noted that AIM’s module can contribute to predictions independent of clinical knowledge of the gene of interest, helping to advance the discovery of novel disease mechanisms. “The diagnostic rate for rare genetic disorders is only about 30%, and on average, it is six years from the time of symptom onset to diagnosis. There is an urgent need for new approaches to enhance the speed and accuracy of diagnosis,” said co-corresponding author Dr. Pengfei Liu, associate professor of molecular and human genetics and associate clinical director at Baylor Genetics.
    AIM is trained using a public database of known variants and genetic analysis called Model organism Aggregated Resources for Rare Variant ExpLoration (MARRVEL) previously developed by the Baylor team. The MARRVEL database includes more than 3.5 million variants from thousands of diagnosed cases. Researchers provide AIM with patients’ exome sequence data and symptoms, and AIM provides a ranking of the most likely gene candidates causing the rare disease.
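    The workflow described above is essentially a ranking problem: the model takes a patient’s variants and symptoms and returns genes ordered by how likely they are to be causative. The sketch below illustrates only the shape of that computation; the class, function names, features, and weights are hypothetical and are not AIM’s actual implementation, which learns its scoring from the MARRVEL-derived training data rather than using hand-set weights.

    ```python
    # Hypothetical sketch of a variant-prioritization interface like the one described;
    # the names, features, and weights are illustrative, not AIM's actual API.
    from dataclasses import dataclass

    @dataclass
    class Variant:
        gene: str
        genotype: str           # e.g. "het" or "hom"
        population_freq: float  # allele frequency in reference populations
        deleteriousness: float  # in-silico pathogenicity score in [0, 1]

    def rank_candidate_genes(variants, patient_hpo_terms, phenotype_match):
        """Score each variant by combining rarity, predicted deleteriousness and
        how well the gene's known phenotypes match the patient's symptoms, then
        return genes ordered from most to least likely causative."""
        scores = {}
        for v in variants:
            rarity = 1.0 - min(v.population_freq * 1000, 1.0)    # rarer => higher score
            match = phenotype_match(v.gene, patient_hpo_terms)    # similarity in [0, 1]
            score = 0.4 * rarity + 0.3 * v.deleteriousness + 0.3 * match
            scores[v.gene] = max(scores.get(v.gene, 0.0), score)
        return sorted(scores, key=scores.get, reverse=True)
    ```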
    Researchers compared AIM’s results to other algorithms used in recent benchmark papers. They tested the models using three data cohorts with established diagnoses from Baylor Genetics, the National Institutes of Health-funded Undiagnosed Diseases Network (UDN) and the Deciphering Developmental Disorders (DDD) project. AIM consistently ranked diagnosed genes as the No. 1 candidate in twice as many cases as all other benchmark methods using these real-world data sets.
    “We trained AIM to mimic the way humans make decisions, and the machine can do it much faster, more efficiently and at a lower cost. This method has effectively doubled the rate of accurate diagnosis,” said co-corresponding author Dr. Zhandong Liu, associate professor of pediatrics — neurology at Baylor and investigator at the Jan and Dan Duncan Neurological Research Institute (NRI) at Texas Children’s Hospital.
    AIM also offers new hope for rare disease cases that have remained unsolved for years. Hundreds of novel disease-causing variants that may be key to solving these cold cases are reported every year; however, determining which cases warrant reanalysis is challenging because of the high volume of cases. The researchers tested AIM’s clinical exome reanalysis on a dataset of UDN and DDD cases and found that it was able to correctly identify 57% of diagnosable cases.
    “We can make the reanalysis process much more efficient by using AIM to identify a high-confidence set of potentially solvable cases and pushing those cases for manual review,” Zhandong Liu said. “We anticipate that this tool can recover an unprecedented number of cases that were not previously thought to be diagnosable.”
    Researchers also tested AIM’s potential for discovery of novel gene candidates that have not been linked to a disease. AIM correctly predicted two newly reported disease genes as top candidates in two UDN cases.

    “AIM is a major step forward in using AI to diagnose rare diseases. It narrows the differential genetic diagnoses down to a few genes and has the potential to guide the discovery of previously unknown disorders,” said co-corresponding author Dr. Hugo Bellen, Distinguished Service Professor in molecular and human genetics at Baylor and chair in neurogenetics at the Duncan NRI.
    “When combined with the deep expertise of our certified clinical lab directors, highly curated datasets and scalable automated technology, we are seeing the impact of augmented intelligence to provide comprehensive genetic insights at scale, even for the most vulnerable patient populations and complex conditions,” said senior author Dr. Fan Xia, associate professor of molecular and human genetics at Baylor and vice president of clinical genomics at Baylor Genetics. “By applying real-world training data from a Baylor Genetics cohort without any inclusion criteria, AIM has shown superior accuracy. Baylor Genetics is aiming to develop the next generation of diagnostic intelligence and bring this to clinical practice.”
    Other authors of this work include Dongxue Mao, Chaozhong Liu, Linhua Wang, Rami Al-Ouran, Cole Deisseroth, Sasidhar Pasupuleti, Seon Young Kim, Lucian Li, Jill A. Rosenfeld, Linyan Meng, Lindsay C. Burrage, Michael Wangler, Shinya Yamamoto, Michael Santana, Victor Perez, Priyank Shukla, Christine Eng, Brendan Lee and Bo Yuan. They are affiliated with one or more of the following institutions: Baylor College of Medicine, Jan and Dan Duncan Neurological Research Institute at Texas Children’s Hospital, Al Hussein Technical University, Baylor Genetics and the Human Genome Sequencing Center at Baylor.
    This work was supported by the Chan Zuckerberg Initiative and the National Institute of Neurological Disorders and Stroke (3U2CNS132415).

  • Artificial intelligence helps scientists engineer plants to fight climate change

    The Intergovernmental Panel on Climate Change (IPCC) declared that removing carbon from the atmosphere is now essential to fighting climate change and limiting global temperature rise. To support these efforts, Salk scientists are harnessing plants’ natural ability to draw carbon dioxide out of the air by optimizing their root systems to store more carbon for a longer period of time.
    To design these climate-saving plants, scientists in Salk’s Harnessing Plants Initiative are using a sophisticated new research tool called SLEAP — an easy-to-use artificial intelligence (AI) software that tracks multiple features of root growth. Created by Salk Fellow Talmo Pereira, SLEAP was initially designed to track animal movement in the lab. Now, Pereira has teamed up with plant scientist and Salk colleague Professor Wolfgang Busch to apply SLEAP to plants.
    In a study published in Plant Phenomics on April 12, 2024, Busch and Pereira debut a new protocol for using SLEAP to analyze plant root phenotypes — how deep and wide they grow, how massive their root systems become, and other physical qualities that, prior to SLEAP, were tedious to measure. The application of SLEAP to plants has already enabled researchers to establish the most extensive catalog of plant root system phenotypes to date.
    What’s more, tracking these physical root system characteristics helps scientists find genes affiliated with those characteristics, as well as whether multiple root characteristics are determined by the same genes or independently. This allows the Salk team to determine what genes are most beneficial to their plant designs.
    “This collaboration is truly a testament to what makes Salk science so special and impactful,” says Pereira. “We’re not just ‘borrowing’ from different disciplines — we’re really putting them on equal footing in order to create something greater than the sum of its parts.”
    Prior to using SLEAP, tracking the physical characteristics of both plants and animals required a lot of labor that slowed the scientific process. If researchers wanted to analyze an image of a plant, they would need to manually flag the parts of the image that were and weren’t plant — frame-by-frame, part-by-part, pixel-by-pixel. Only then could older AI models be applied to process the image and gather data about the plant’s structure.
    What sets SLEAP apart is its unique use of both computer vision (the ability for computers to understand images) and deep learning (an AI approach for training a computer to learn and work like the human brain). This combination allows researchers to process images without moving pixel-by-pixel, instead skipping this intermediate labor-intensive step to jump straight from image input to defined plant features.

    “We created a robust protocol validated in multiple plant types that cuts down on analysis time and human error, while emphasizing accessibility and ease-of-use — and it required no changes to the actual SLEAP software,” says first author Elizabeth Berrigan, a bioinformatics analyst in Busch’s lab.
    Without modifying the baseline technology of SLEAP, the researchers developed a downloadable toolkit for SLEAP called sleap-roots (available as open-source software). With sleap-roots, SLEAP can process biological traits of root systems like depth, mass, and angle of growth. The Salk team tested the sleap-roots package in a variety of plants, including crop plants like soybeans, rice, and canola, as well as the model plant species Arabidopsis thaliana — a flowering weed in the mustard family. Across the variety of plants trialed, they found the novel SLEAP-based method outperformed existing practices by annotating 1.5 times faster, training the AI model 10 times faster, and predicting plant structure on new data 10 times faster, all with the same or better accuracy than before.
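    To give a feel for the kind of trait extraction sleap-roots performs downstream of SLEAP’s landmark predictions, the sketch below derives depth, width, length, and growth-angle traits from a set of predicted root points. The function name and data layout are illustrative assumptions, not the package’s actual API.

    ```python
    import numpy as np

    def root_traits(points):
        """Given landmark predictions for one root as an (n, 2) array of (x, y)
        image coordinates ordered from base to tip, derive simple traits analogous
        to those described in the study. Illustrative only; not the sleap-roots API."""
        points = np.asarray(points, dtype=float)
        depth = points[:, 1].max() - points[:, 1].min()    # vertical extent (pixels)
        width = points[:, 0].max() - points[:, 0].min()    # horizontal spread
        segment_lengths = np.linalg.norm(np.diff(points, axis=0), axis=1)
        total_length = segment_lengths.sum()                # proxy for root size/mass
        tip_vector = points[-1] - points[0]
        # Angle from vertical: 0 degrees means the root grew straight down
        # (image y-coordinates increase downward).
        growth_angle = np.degrees(np.arctan2(abs(tip_vector[0]), tip_vector[1]))
        return {"depth": depth, "width": width,
                "length": total_length, "growth_angle_deg": growth_angle}
    ```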
    Combined with massive genome sequencing efforts that are elucidating genotype data in large numbers of crop varieties, these phenotypic data, such as a plant’s root system growing especially deep in soil, can be used to identify the genes responsible for creating that especially deep root system.
    This step — connecting phenotype and genotype — is crucial in Salk’s mission to create plants that hold on to more carbon and for longer, as those plants will need root systems designed to be deeper and more robust. Implementing this accurate and efficient software will allow the Harnessing Plants Initiative to connect desirable phenotypes to targetable genes with groundbreaking ease and speed.
    “We have already been able to create the most extensive catalogue of plant root system phenotypes to date, which is really accelerating our research to create carbon-capturing plants that fight climate change,” says Busch, the Hess Chair in Plant Science at Salk. “SLEAP has been so easy to apply and use, thanks to Talmo’s professional software design, and it’s going to be an indispensable tool in my lab moving forward.”
    Accessibility and reproducibility were at the forefront of Pereira’s mind when creating both SLEAP and sleap-roots. Because the software and sleap-roots toolkit are free to use, the researchers are excited to see how sleap-roots will be used around the world. Already, they have begun discussions with NASA scientists hoping to utilize the tool not only to help guide carbon-sequestering plants on Earth, but also to study plants in space.
    At Salk, the collaborative team is not yet ready to disband — they are already embarking on a new challenge of analyzing 3D data with SLEAP. Efforts to refine, expand, and share SLEAP and sleap-roots will continue for years to come, but its use in Salk’s Harnessing Plants Initiative is already accelerating plant designs and helping the Institute make an impact on climate change.
    Other authors include Lin Wang, Hannah Carrillo, Kimberly Echegoyen, Mikayla Kappes, Jorge Torres, Angel Ai-Perreira, Erica McCoy, Emily Shane, Charles Copeland, Lauren Ragel, Charidimos Georgousakis, Sanghwa Lee, Dawn Reynolds, Avery Talgo, Juan Gonzalez, Ling Zhang, Ashish Rajurkar, Michel Ruiz, Erin Daniels, Liezl Maree, and Shree Pariyar of Salk.
    The work was supported by the Bezos Earth Fund, the Hess Corporation, the TED Audacious Project, and the National Institutes of Health (RF1MH132653).

  • Scientists tune the entanglement structure in an array of qubits

    Entanglement is a form of correlation between quantum objects, such as particles at the atomic scale. This uniquely quantum phenomenon cannot be explained by the laws of classical physics, yet it is one of the properties that explains the macroscopic behavior of quantum systems.
    Because entanglement is central to the way quantum systems work, understanding it better could give scientists a deeper sense of how information is stored and processed efficiently in such systems.
    Qubits, or quantum bits, are the building blocks of a quantum computer. However, it is extremely difficult to make specific entangled states in many-qubit systems, let alone investigate them. There are also a variety of entangled states, and telling them apart can be challenging.
    Now, MIT researchers have demonstrated a technique to efficiently generate entanglement among an array of superconducting qubits that exhibit a specific type of behavior.
    Over the past several years, researchers in the Engineering Quantum Systems (EQuS) group have developed techniques using microwave technology to precisely control a quantum processor composed of superconducting circuits. In addition to these control techniques, the methods introduced in this work enable the processor to efficiently generate highly entangled states and shift those states from one type of entanglement to another — including between types that are more likely to support quantum speed-up and those that are not.
    “Here, we are demonstrating that we can utilize the emerging quantum processors as a tool to further our understanding of physics. While everything we did in this experiment was on a scale which can still be simulated on a classical computer, we have a good roadmap for scaling this technology and methodology beyond the reach of classical computing,” says Amir H. Karamlou ’18, MEng ’18, PhD ’23, the lead author of the paper.
    The senior author is William D. Oliver, the Henry Ellis Warren Professor of Electrical Engineering and Computer Science and of Physics, director of the Center for Quantum Engineering, leader of the EQuS group, and associate director of the Research Laboratory of Electronics. Karamlou and Oliver are joined by Research Scientist Jeff Grover, postdoc Ilan Rosen, and others in the departments of Electrical Engineering and Computer Science and of Physics at MIT, at MIT Lincoln Laboratory, and at Wellesley College and the University of Maryland. The research appears in Nature.

    Assessing entanglement
    In a large quantum system comprising many interconnected qubits, one can think about entanglement as the amount of quantum information shared between a given subsystem of qubits and the rest of the larger system.
    The entanglement within a quantum system can be categorized as area-law or volume-law, based on how this shared information scales with the geometry of subsystems. In volume-law entanglement, the amount of entanglement between a subsystem of qubits and the rest of the system grows proportionally with the total size of the subsystem.
    On the other hand, area-law entanglement depends on how many shared connections exist between a subsystem of qubits and the larger system. As the subsystem expands, the amount of entanglement only grows along the boundary between the subsystem and the larger system.
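    In standard notation, writing $S(A)$ for the entanglement entropy between a subsystem $A$ and the rest of the system, the two behaviors described above can be stated compactly:

    \[
      S(A) \propto |\partial A| \quad \text{(area law)}
      \qquad \text{versus} \qquad
      S(A) \propto |A| \quad \text{(volume law)},
    \]

    where $|A|$ counts the qubits in the subsystem and $|\partial A|$ counts the qubits along its boundary with the rest of the system.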
    In theory, the formation of volume-law entanglement is related to what makes quantum computing so powerful.
    “While we have not yet fully abstracted the role that entanglement plays in quantum algorithms, we do know that generating volume-law entanglement is a key ingredient to realizing a quantum advantage,” says Oliver.

    However, volume-law entanglement is also more complex than area-law entanglement and practically prohibitive at scale to simulate using a classical computer.
    “As you increase the complexity of your quantum system, it becomes increasingly difficult to simulate it with conventional computers. If I am trying to fully keep track of a system with 80 qubits, for instance, then I would need to store more information than what we have stored throughout the history of humanity,” Karamlou says.
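    Karamlou’s 80-qubit figure follows from simple bookkeeping: a general n-qubit state is described by 2^n complex amplitudes, so the memory needed to store it exactly grows exponentially with the number of qubits. A quick back-of-the-envelope check, assuming 16 bytes per double-precision complex amplitude:

    ```python
    n_qubits = 80
    amplitudes = 2 ** n_qubits        # complex amplitudes in a general 80-qubit state
    bytes_needed = amplitudes * 16    # 16 bytes per double-precision complex number
    print(f"{amplitudes:.3e} amplitudes -> {bytes_needed / 1e24:.1f} yottabytes")
    # prints: 1.209e+24 amplitudes -> 19.3 yottabytes
    ```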
    The researchers created a quantum processor and control protocol that enable them to efficiently generate and probe both types of entanglement.
    Their processor comprises superconducting circuits, which are used to engineer artificial atoms. The artificial atoms are utilized as qubits, which can be controlled and read out with high accuracy using microwave signals.
    The device used for this experiment contained 16 qubits, arranged in a two-dimensional grid. The researchers carefully tuned the processor so all 16 qubits have the same transition frequency. Then, they applied an additional microwave drive to all of the qubits simultaneously.
    If this microwave drive has the same frequency as the qubits, it generates quantum states that exhibit volume-law entanglement. However, as the microwave frequency increases or decreases, the qubits exhibit less volume-law entanglement, eventually crossing over to entangled states that increasingly follow an area-law scaling.
    Careful control
    “Our experiment is a tour de force of the capabilities of superconducting quantum processors. In one experiment, we operated the processor both as an analog simulation device, enabling us to efficiently prepare states with different entanglement structures, and as a digital computing device, needed to measure the ensuing entanglement scaling,” says Rosen.
    To enable that control, the team put years of work into carefully building up the infrastructure around the quantum processor.
    By demonstrating the crossover from volume-law to area-law entanglement, the researchers experimentally confirmed what theoretical studies had predicted. More importantly, this method can be used to determine whether the entanglement in a generic quantum processor is area-law or volume-law.
    In the future, scientists could utilize this technique to study the thermodynamic behavior of complex quantum systems, which is too complex to be studied using current analytical methods and practically prohibitive to simulate on even the world’s most powerful supercomputers.
    “The experiments we did in this work can be used to characterize or benchmark larger-scale quantum systems, and we may also learn something more about the nature of entanglement in these many-body systems,” says Karamlou.
    Additional co-authors of the study are Sarah E. Muschinske, Cora N. Barrett, Agustin Di Paolo, Leon Ding, Patrick M. Harrington, Max Hays, Rabindra Das, David K. Kim, Bethany M. Niedzielski, Meghan Schuldt, Kyle Serniak, Mollie E. Schwartz, Jonilyn L. Yoder, Simon Gustavsson, and Yariv Yanay.
    This research is funded, in part, by the U.S. Department of Energy, the U.S. Defense Advanced Research Projects Agency, the U.S. Army Research Office, the National Science Foundation, the STC Center for Integrated Quantum Materials, the Wellesley College Samuel and Hilda Levitt Fellowship, NASA, and the Oak Ridge Institute for Science and Education.

  • Artificial intelligence can develop treatments to prevent ‘superbugs’

    Cleveland Clinic researchers developed an artificial intelligence (AI) model that can determine the best combination and timeline to use when prescribing drugs to treat a bacterial infection, based solely on how quickly the bacteria grow given certain perturbations. A team led by Jacob Scott, MD, PhD, and his lab in the Theory Division of Translational Hematology and Oncology, recently published their findings in PNAS.
    Antibiotics are credited with increasing the average US lifespan by almost ten years. Treatment lowered fatality rates for health issues we now consider minor — like some cuts and injuries. But antibiotics aren’t working as well as they used to, in part because of widespread use.
    “Health agencies worldwide agree that we’re entering a post-antibiotic era,” explains Dr. Scott. “If we don’t change how we go after bacteria, more people will die from antibiotic-resistant infections than from cancer by 2050.”
    Bacteria replicate quickly, producing mutant offspring. Overusing antibiotics gives bacteria a chance to practice making mutations that resist treatment. Over time, the antibiotics kill all the susceptible bacteria, leaving behind only the stronger mutants that the antibiotics can’t kill.
    One strategy physicians are using to modernize the way we treat bacterial infections is antibiotic cycling. Healthcare providers rotate between different antibiotics over specific time periods. Changing between different drugs gives bacteria less time to evolve resistance to any one class of antibiotic. Cycling can even make bacteria more susceptible to other antibiotics.
    “Drug cycling shows a lot of promise in effectively treating diseases,” says study first author and medical student Davis Weaver, PhD. “The problem is that we don’t know the best way to do it. Nothing’s standardized between hospitals for which antibiotic to give, for how long and in what order.”
    Study co-author Jeff Maltas, PhD, a postdoctoral fellow at Cleveland Clinic, uses computer models to predict how a bacterium’s resistance to one antibiotic will make it weaker to another. He teamed up with Dr. Weaver to see if data-driven models could predict drug cycling regimens that minimize antibiotic resistance and maximize antibiotic susceptibility, despite the random nature of how bacteria evolve.

    Dr. Weaver led the charge to apply reinforcement learning to the drug cycling model, which teaches a computer to learn from its mistakes and successes to determine the best strategy to complete a task. This study is among the first to apply reinforcement learning to antibiotic cycling regimens, Drs. Weaver and Maltas say.
    “Reinforcement learning is an ideal approach because you just need to know how quickly the bacteria are growing, which is relatively easy to determine,” explains Dr. Weaver. “There’s also room for human variations and errors. You don’t need to measure the growth rates perfectly down to the exact millisecond every time.”
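    The reinforcement-learning setup described here can be sketched in miniature: the state summarizes the bacterial population’s resistance profile, the action is which drug to give next, and the reward penalizes fast bacterial growth. The toy tabular Q-learning loop below, with an invented three-drug growth model, illustrates that framing only; it is not the study’s actual model or code.

    ```python
    import random

    DRUGS = ["drug_A", "drug_B", "drug_C"]

    def growth_rate(resistance, drug):
        """Toy fitness model: the more resistant the population is to the drug
        in use, the faster it grows (lower growth is better for the patient)."""
        return resistance[drug]

    def learn_cycling_policy(episodes=2000, steps=20, alpha=0.1, gamma=0.9, eps=0.1):
        # Q-table keyed by (most-resisted drug, next drug); a crude stand-in
        # for the richer state representation a real model would use.
        Q = {}
        for _ in range(episodes):
            resistance = {d: random.uniform(0.1, 0.3) for d in DRUGS}
            for _ in range(steps):
                state = max(resistance, key=resistance.get)
                action = (random.choice(DRUGS) if random.random() < eps
                          else max(DRUGS, key=lambda d: Q.get((state, d), 0.0)))
                # Using a drug selects for resistance to it, while resistance
                # to the unused drugs decays slightly (collateral sensitivity).
                for d in DRUGS:
                    delta = 0.05 if d == action else -0.02
                    resistance[d] = min(max(resistance[d] + delta, 0.0), 1.0)
                reward = -growth_rate(resistance, action)
                next_state = max(resistance, key=resistance.get)
                best_next = max(Q.get((next_state, d), 0.0) for d in DRUGS)
                Q[(state, action)] = ((1 - alpha) * Q.get((state, action), 0.0)
                                      + alpha * (reward + gamma * best_next))
        return Q
    ```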
    The research team’s AI was able to figure out the most efficient antibiotic cycling plans to treat multiple strains of E. coli and prevent drug resistance. The study shows that AI can support complex decision-making like calculating antibiotic treatment schedules, Dr. Maltas says.
    Dr. Weaver explains that in addition to managing an individual patient’s infection, the team’s AI model can inform how hospitals treat infections across the board. He and his research team are also working to expand their work beyond bacterial infections into other deadly diseases.
    “This idea isn’t limited to bacteria; it can be applied to anything that can evolve treatment resistance,” he says. “In the future we believe these types of AI can be used to manage drug-resistant cancers, too.”

  • New study reveals how AI can enhance flexibility, efficiency for customer service centers

    Whenever you call a customer service contact center, the team on the other end of the line typically has three goals: to reduce their response time, solve your problem and do it within the shortest service time possible.
    However, resolving your problem might entail a significant time investment, potentially clashing with an overarching business objective to keep service duration to a minimum. These conflicting priorities can be commonplace for customer service contact centers, which often rely on the latest technology to meet customers’ needs.
    To pursue those conflicting demands, these organizations practice what’s referred to as ambidexterity, and there are three different modes to achieve it: structural separation, behavioral integration and sequential alternation. So, what role might artificial intelligence (AI) systems play in improving how these organizations move from one ambidexterity mode to another to accomplish their tasks?
    New research involving the School of Management at Binghamton University, State University of New York explored that question. Using data from different contact center sites, researchers examined the impact of AI systems on a customer service organization’s ability to shift across ambidexterity modes.
    The key takeaway: it’s a delicate balancing act; AI is a valuable asset, so long as it’s used properly, though these organizations shouldn’t rely on it exclusively to guide their strategies.
    Associate Professor Sumantra Sarkar, who helped conduct the research, said the study’s goal was to understand better how organizations today might use AI to guide their transition from one ambidexterity mode to another because certain structures or approaches might be more beneficial from one month to the next.
    “Customer service organizations often balance exploring the latest technology with exploiting it to boost efficiency and, therefore, save money,” Sarkar said. “This dichotomy is what ambidexterity is all about, exploring new technology to gain new insights and exploiting it to gain efficiency.”
    As part of the three-year study, researchers examined the practices of five contact center sites: two global banks, one national bank in a developing country, a telecommunication Fortune 500 company in South Asia and a global infrastructure vendor in telecommunications hardware.

    While many customer service organizations have spent recent years investing in AI, assuming that not doing so could lead to customer dissatisfaction, the researchers found these organizations haven’t used AI to its full potential. They have primarily used it for self-service applications.
    Some of the AI-assisted tasks researchers tracked at those sites included: using AI systems to automatically open applications, send emails and transfer information from one system to another; approving or disapproving loan applications; and providing personalized service based on a customer’s data and contact history.
    Researchers determined that while it’s beneficial for customer service companies to be open to harnessing the benefits and navigating any challenges of AI systems as a guide to their business strategies, they should not do so at the expense of supporting quality professional development and ongoing learning opportunities for their staff.
    Sarkar said that to fully utilize AI’s benefits, those leading customer service organizations need to examine every customer touchpoint and identify opportunities to enhance the customer experience while boosting the operation’s efficiency.
    As a result, Sarkar said newcomers in this technology-savvy industry should learn how companies with 20 or 30 years of experience have already adapted to changes in technology, especially AI, before forming their own business strategies.
    “Any business is a balancing game because what you decide to do at the start of the year based on a forecast has to be revised over and over again,” Sarkar said. “Since there’s that added tension within customer service organizations of whether they want to be more efficient or explore new areas, they have to work even harder at striking that balance. Using AI in the right way effectively helps them accomplish that.”

  • Laser technology offers breakthrough in detecting illegal ivory

    A new way of quickly distinguishing between illegal elephant ivory and legal mammoth tusk ivory could prove critical to fighting the illegal ivory trade. A laser-based approach developed by scientists at the Universities of Bristol and Lancaster could be used by customs officials worldwide to help stop illegal ivory from being traded under the guise of legal ivory. Results from the study are published in PLOS ONE today [24 April].
    Despite the Convention on International Trade in Endangered Species (CITES) ban on ivory, poaching associated with the illegal ivory trade continues to cause elephants to suffer and is estimated to cause an eight per cent loss in the world’s elephant population every year. The 2016 African Elephant Database survey estimated a total of 410,000 elephants remaining in Africa, a decrease of approximately 90,000 elephants from the previous 2013 report.
    While trading/procuring elephant ivory is illegal, it is not illegal to sell ivory from extinct species, such as preserved mammoth tusk ivory. This legal source of ivory is now part of an increasing and lucrative ‘mammoth hunter’ industry. It also poses a time-consuming enforcement problem for customs teams, as ivory from these two types of tusk is broadly similar, making one difficult to distinguish from the other, especially once specimens have been worked or carved.
    In this new study, scientists from Bristol’s School of Anatomy and Lancaster Medical School sought to establish whether Raman spectroscopy, which is already used in the study of bone and mineral chemistry, could be modified to accurately detect differences in the chemistry of mammoth and elephant ivory. The non-destructive technology, which involves shining a high-energy light at an ivory specimen, can detect small biochemical differences in the tusks from elephants and mammoths.
    Researchers scanned samples of mammoth and elephant tusks from London’s Natural History Museum using the laser based method, Raman spectroscopy. Results from the experiment found the technology provided accurate, quick and non-destructive species identification.
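    The analysis implied here is a standard spectral-classification pipeline: reduce each Raman spectrum to a handful of components, then train a simple classifier to separate elephant from mammoth ivory. The scikit-learn sketch below shows one plausible version under those assumptions; the study’s actual preprocessing and statistical model are not detailed in this article.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    def build_classifier():
        # spectra: rows are Raman spectra (intensity per wavenumber);
        # labels: 0 = elephant ivory, 1 = mammoth ivory, from reference tusks.
        return make_pipeline(
            StandardScaler(),              # normalize intensity scale per wavenumber
            PCA(n_components=10),          # compress correlated spectral bands
            LinearDiscriminantAnalysis(),  # simple, interpretable class boundary
        )

    def estimate_accuracy(spectra: np.ndarray, labels: np.ndarray) -> float:
        """Estimate species-identification accuracy with 5-fold cross-validation."""
        return cross_val_score(build_classifier(), spectra, labels, cv=5).mean()
    ```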
    Dr Rebecca Shepherd, formerly of Lancaster Medical School and now at the University of Bristol’s School of Anatomy, explained: “The gold standard methods of identification recommended by the United Nations Office on Drugs and Crime for assessing the legality of ivory are predominantly expensive, destructive and time-consuming techniques.
    “Raman spectroscopy can provide results quickly (a single scan takes only a few minutes), and is easier to use than current methods, making it easier to determine between illegal elephant ivory and legal mammoth tusk ivory. Increased surveillance and monitoring of samples passing through customs worldwide using Raman spectroscopy could act as a deterrent to those poaching endangered and critically endangered species of elephant.”
    Dr Jemma Kerns of Lancaster Medical School, added: “The combined approach of a non-destructive laser-based method of Raman spectroscopy with advanced data analysis holds a lot of promise for the identification of unknown samples of ivory, which is especially important, given the increase in available mammoth tusks and the need for timely identification.”

    Alice Roberts, Professor of Public Engagement in Science, from the University of Birmingham and one of the study’s co-authors, said: “There’s a real problem when it comes to stamping down on the illegal trade in elephant ivory, because trading in ancient mammoth ivory is legal. The complete tusks of elephants and mammoths look very different, but if the ivory is cut into small pieces, it can be practically impossible to tell apart elephant ivory from well-preserved mammoth ivory. I was really pleased to be part of this project exploring a new technique for telling apart elephant and mammoth ivory. This is great science, and should help the enforcers — giving them a valuable and relatively inexpensive tool to help them spot illegal ivory.”
    Professor Adrian Lister, one of the study’s co-authors from the Natural History Museum, added: “Stopping the trade in elephant ivory has been compromised by illegal ivory objects being described or disguised as mammoth ivory (for which trade is legal). A quick and reliable method for distinguishing the two has long been a goal, as other methods (such as radiocarbon dating and DNA analysis) are time-consuming and expensive. The demonstration that the two can be separated by Raman spectroscopy is therefore a significant step forward; also, this method (unlike the others) does not require any sampling, leaving the ivory object intact.”
    Professor Charlotte Deane, Executive Chair of EPSRC, said: “By offering a quick and simple alternative to current methods, the use of Raman spectroscopy could play an important role in tackling the illegal ivory trade.
    “The researchers’ work illustrates how the development and adoption of innovative new techniques can help us to address problems of global significance.”
    The study was funded by the Engineering and Physical Sciences Research Council (EPSRC) and involved researchers from the Universities of Lancaster and Birmingham and the Natural History Museum.
    Although the percentage decline in Asian elephants as a result of illegal poaching is lower, as females do not have tusks, there has been a 50% decline over the last three generations of Asian elephants.

  • Why can’t robots outrun animals?

    Robotics engineers have worked for decades and invested many millions of research dollars in attempts to create a robot that can walk or run as well as an animal. And yet, it remains the case that many animals are capable of feats that would be impossible for robots that exist today.
    “A wildebeest can migrate for thousands of kilometres over rough terrain, a mountain goat can climb up a literal cliff, finding footholds that don’t even seem to be there, and cockroaches can lose a leg and not slow down,” says Dr. Max Donelan, Professor in Simon Fraser University’s Department of Biomedical Physiology and Kinesiology. “We have no robots capable of anything like this endurance, agility and robustness.”
    To understand why, and quantify how, robots lag behind animals, an interdisciplinary team of scientists and engineers from leading research universities completed a detailed study of various aspects of running robots, comparing them with their equivalents in animals, for a paper published in Science Robotics. The paper finds that, by the metrics engineers use, biological components performed surprisingly poorly compared to fabricated parts. Where animals excel, though, is in their integration and control of those components.
    Alongside Donelan, the team comprised Drs. Sam Burden, Associate Professor in the Department of Electrical & Computer Engineering at the University of Washington; Tom Libby, Senior Research Engineer, SRI International; Kaushik Jayaram, Assistant Professor in the Paul M Rady Department of Mechanical Engineering at the University of Colorado Boulder; and Simon Sponberg, Dunn Family Associate Professor of Physics and Biological Sciences at the Georgia Institute of Technology.
    The researchers each studied one of five different “subsystems” that combine to create a running robot — Power, Frame, Actuation, Sensing, and Control — and compared them with their biological equivalents. Previously, it was commonly accepted that animals’ outperformance of robots must be due to the superiority of biological components.
    “The way things turned out is that, with only minor exceptions, the engineering subsystems outperform the biological equivalents — and sometimes radically outperformed them,” says Libby. “But also what’s very, very clear is that, if you compare animals to robots at the whole system level, in terms of movement, animals are amazing. And robots have yet to catch up.”
    More optimistically for the field of robotics, the researchers noted that, if you compare the relatively short time that robotics has had to develop its technology with the countless generations of animals that have evolved over many millions of years, the progress has actually been remarkably quick.
    “It will move faster, because evolution is undirected,” says Burden. “Whereas we can very much correct how we design robots and learn something in one robot and download it into every other robot, biology doesn’t have that option. So there are ways that we can move much more quickly when we engineer robots than we can through evolution — but evolution has a massive head start.”
    More than simply an engineering challenge, effective running robots offer countless potential uses. Whether solving ‘last mile’ delivery challenges in a world designed for humans that is often difficult to navigate for wheeled robots, carrying out searches in dangerous environments or handling hazardous materials, there are many potential applications for the technology.
    The researchers hope that this study will help direct future development in robot technology, with an emphasis not on building a better piece of hardware, but on understanding how to better integrate and control existing hardware. Donelan concludes, “As engineering learns integration principles from biology, running robots will become as efficient, agile, and robust as their biological counterparts.”

  • On the trail of deepfakes, researchers identify ‘fingerprints’ of AI-generated video

    In February, OpenAI released videos created by its generative artificial intelligence program Sora. The strikingly realistic content, produced via simple text prompts, is the latest breakthrough for companies demonstrating the capabilities of AI technology. It also raised concerns about generative AI’s potential to enable the creation of misleading and deceptive content on a massive scale. According to new research from Drexel University, current methods for detecting manipulated digital media will not be effective against AI-generated video; but a machine-learning approach could be the key to unmasking these synthetic creations.
    In a paper accepted for presentation at the IEEE Computer Vision and Pattern Recognition Conference in June, researchers from the Multimedia and Information Security Lab (MISL) in Drexel’s College of Engineering explained that while existing synthetic image detection technology has failed thus far at spotting AI-generated video, they’ve had success with a machine learning algorithm that can be trained to extract and recognize digital “fingerprints” of many different video generators, such as Stable Video Diffusion, Video-Crafter and Cog-Video. Additionally, they have shown that this algorithm can learn to detect new AI generators after studying just a few examples of their videos.
    “It’s more than a bit unnerving that this video technology could be released before there is a good system for detecting fakes created by bad actors,” said Matthew Stamm, PhD, an associate professor in Drexel’s College of Engineering and director of the MISL. “Responsible companies will do their best to embed identifiers and watermarks, but once the technology is publicly available, people who want to use it for deception will find a way. That’s why we’re working to stay ahead of them by developing the technology to identify synthetic videos from patterns and traits that are endemic to the media.”
    Deepfake Detectives
    Stamm’s lab has been active in efforts to flag digitally manipulated images and videos for more than a decade, but the group has been particularly busy in the last year, as editing technology is being used to spread political misinformation.
    Until recently, these manipulations have been the product of photo and video editing programs that add, remove or shift pixels; or slow, speed up or clip out video frames. Each of these edits leaves a unique digital breadcrumb trail and Stamm’s lab has developed a suite of tools calibrated to find and follow them.
    The lab’s tools use a sophisticated machine learning program called a constrained neural network. This algorithm can learn, in ways similar to the human brain, what is “normal” and what is “unusual” at the sub-pixel level of images and videos, rather than searching for specific predetermined identifiers of manipulation from the outset. This makes the program adept at both identifying deepfakes from known sources, as well as spotting those created by a previously unknown program.

    The neural network is typically trained on hundreds or thousands of examples to get a very good feel for the difference between unedited media and something that has been manipulated — this can be anything from variation between adjacent pixels, to the order of spacing of frames in a video, to the size and compression of the files themselves.
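    In this group’s published work on constrained CNNs, the “constrained” part is the first convolutional layer, whose filters are forced to behave as prediction-error filters so the network attends to pixel-level residuals rather than scene content. The PyTorch sketch below shows that constraint in simplified form; it is an illustration of the idea, not the lab’s exact MISLnet code.

    ```python
    import torch
    import torch.nn as nn

    class ConstrainedConv2d(nn.Conv2d):
        """First-layer convolution constrained to act as a prediction-error filter:
        before each forward pass, every kernel's center weight is fixed to -1 and
        its remaining weights are rescaled to sum to 1, so the layer responds to
        how well each pixel is predicted by its neighbors rather than to image
        content. Simplified sketch of the constrained-CNN idea, not the lab's code."""

        def constrain(self):
            with torch.no_grad():
                w = self.weight                    # shape (out_ch, in_ch, k, k)
                k = w.shape[-1]
                c = k // 2
                w[:, :, c, c] = 0.0
                # Rescale so the neighbor weights sum to 1 (assumes a nonzero sum).
                w /= w.sum(dim=(2, 3), keepdim=True)
                w[:, :, c, c] = -1.0               # center weight fixed to -1

        def forward(self, x):
            self.constrain()
            return super().forward(x)

    # Example: a constrained first layer for 3-channel image patches.
    first_layer = ConstrainedConv2d(3, 6, kernel_size=5, padding=2)
    ```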
    A New Challenge
    “When you make an image, the physical and algorithmic processing in your camera introduces relationships between various pixel values that are very different than the pixel values if you photoshop or AI-generate an image,” Stamm said. “But recently we’ve seen text-to-video generators, like Sora, that can make some pretty impressive videos. And those pose a completely new challenge because they have not been produced by a camera or photoshopped.”
    Last year, a campaign ad circulating in support of Florida Gov. Ron DeSantis that appeared to show former President Donald Trump embracing and kissing Anthony Fauci was the first to use generative-AI technology. This means the video was not edited or spliced together from others; rather, it was created whole-cloth by an AI program.
    And if there is no editing, Stamm notes, then the standard clues do not exist — which poses a unique problem for detection.
    “Until now, forensic detection programs have been effective against edited videos by simply treating them as a series of images and applying the same detection process,” Stamm said. “But with AI-generated video, there is no evidence of image manipulation frame-to-frame, so for a detection program to be effective it will need to be able to identify new traces left behind by the way generative-AI programs construct their videos.”
    In the study, the team tested 11 publicly available synthetic image detectors. Each of these programs was highly effective — at least 90% accuracy — at identifying manipulated images. But their performance dropped by 20-30% when faced with discerning videos created by the publicly available AI generators Luma, VideoCrafter-v1, CogVideo and Stable Diffusion Video.

    “These results clearly show that synthetic image detectors experience substantial difficulty detecting synthetic videos,” they wrote. “This finding holds consistent across multiple different detector architectures, as well as when detectors are pretrained by others or retrained using our dataset.”
    A Trusted Approach
    The team speculated that convolutional neural network-based detectors, like its MISLnet algorithm, could be successful against synthetic video because the program is designed to constantly shift its learning as it encounters new examples. By doing this, it’s possible to recognize new forensic traces as they evolve. Over the last several years, the team has demonstrated MISLnet’s acuity at spotting images that had been manipulated using new editing programs, including AI tools — so testing it against synthetic video was a natural step.
    “We’ve used CNN algorithms to detect manipulated images and video and audio deepfakes with reliable success,” said Tai D. Nguyen, a doctoral student in MISL, who was a coauthor of the paper. “Due to their ability to adapt with small amounts of new information we thought they could be an effective solution for identifying AI-generated synthetic videos as well.”
    For the test, the group trained eight CNN detectors, including MISLnet, with the same dataset used to train the image detectors, which included real videos and AI-generated videos produced by the four publicly available programs. Then they tested the program against a set of videos that included a number created by generative AI programs that are not yet publicly available: Sora, Pika and VideoCrafter-v2.
    By analyzing a small portion — a patch — from a single frame from each video, the CNN detectors were able to learn what a synthetic video looks like at a granular level and apply that knowledge to the new set of videos. Each program was more than 93% effective at identifying the synthetic videos, with MISLnet performing the best, at 98.3%.
    The programs were slightly more effective when conducting an analysis of the entire video, by pulling out a random sampling of a few dozen patches from various frames of the video and using those as a mini training set to learn the characteristics of the new video. Using a set of 80 patches, the programs were between 95-98% accurate.
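    In simplified form, the whole-video analysis described above amounts to sampling fixed-size patches from several frames, scoring each with the trained detector, and aggregating the scores into a single decision. The sketch below assumes a generic detector callable that returns the probability a patch is synthetic; it illustrates the procedure rather than reproducing the lab’s code.

    ```python
    import random
    import numpy as np

    def sample_patches(frames, patch_size=128, n_patches=80):
        """Draw random square patches from randomly chosen frames.
        `frames` is a list of HxWx3 arrays (decoded video frames)."""
        patches = []
        for _ in range(n_patches):
            frame = random.choice(frames)
            h, w = frame.shape[:2]
            y = random.randint(0, h - patch_size)
            x = random.randint(0, w - patch_size)
            patches.append(frame[y:y + patch_size, x:x + patch_size])
        return patches

    def video_is_synthetic(frames, detector, threshold=0.5):
        """Aggregate per-patch synthetic-probability scores into one whole-video call."""
        scores = [detector(patch) for patch in sample_patches(frames)]
        return float(np.mean(scores)) > threshold
    ```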
    With a bit of additional training, the programs were also more than 90% accurate at identifying the program that was used to create the videos, which the team suggests is because of the unique, proprietary approach each program uses to produce a video.
    “Videos are generated using a wide variety of strategies and generator architectures,” the researchers wrote. “Since each technique imparts significant traces, this makes it much easier for networks to accurately discriminate between each generator.”
    A Quick Study
    While the programs struggled when faced with the challenge of detecting a completely new generator without previously being exposed to at least a small amount of video from it, with a small amount of fine-tuning MISLnet could quickly learn to make the identification at 98% accuracy. This strategy, called “few-shot learning,” is an important capability because new AI technology is being created every day, so detection programs must be agile enough to adapt with minimal training.
    “We’ve already seen AI-generated video being used to create misinformation,” Stamm said. “As these programs become more ubiquitous and easier to use, we can reasonably expect to be inundated with synthetic videos. While detection programs shouldn’t be the only line of defense against misinformation — information literacy efforts are key — having the technological ability to verify the authenticity of digital media is certainly an important step.”
    Further information: https://ductai199x.github.io/beyond-deepfake-images/