More stories

    A faster, more reliable method for simulating the plasmas used to make computer chips

    Plasma — the electrically charged fourth state of matter — is at the heart of many important industrial processes, including those used to make computer chips and coat materials. Simulating those plasmas can be challenging, however, because millions of math operations must be performed for thousands of points in the simulation, many times per second. Even with the world’s fastest supercomputers, scientists have struggled to create a kinetic simulation — which considers individual particles — that is detailed and fast enough to help them improve those manufacturing processes.
    Now, a new method offers improved stability and efficiency for kinetic simulations of what’s known as inductively coupled plasmas. The method was implemented in a code developed as part of a public-private partnership between the U.S. Department of Energy’s Princeton Plasma Physics Laboratory (PPPL) and chip equipment maker Applied Materials Inc., which is already using the tool. Researchers from the University of Alberta, PPPL and Los Alamos National Laboratory contributed to the project.
    Detailed simulations of these plasmas are important to gain a better understanding of how plasma forms and evolves for various manufacturing processes. The more realistic the simulation, the more accurate the distribution functions it provides. These measures show, for example, the probability that a particle is at a particular location moving at a particular speed. Ultimately, understanding these details could lead to realizations about how to use the plasma in a more refined way to etch patterns onto silicon for even faster chips or memory with greater storage, for example.
    “This is a big step forward in our capabilities,” said Igor Kaganovich, a principal research physicist at PPPL and co-author of a journal article published in Physics of Plasmas that details the simulation findings.
    Making the code reliable
    The initial version of the code was developed using an old method that proved unreliable. Dmytro Sydorenko, a research associate at the University of Alberta and first author of the paper, said that significant modifications of the method were made to make the code much more stable. “We changed the equations, so the simulation immediately became very reliable and there were no crashes anymore,” he said. “So now we have a usable tool for the simulation of inductively coupled plasmas in two spatial dimensions.”
    The code was improved, in part, by changing the way one of the electric fields was calculated. An electric field is like an invisible force field that surrounds electric charges and currents. It exerts forces on particles. In an inductively coupled plasma, a wire coil carrying an electric current generates a changing magnetic field, which, in turn, generates an electric field that heats the plasma. It is this field, known as the solenoidal electric field, that the team focused its efforts on.

    The code calculates electromagnetic fields based on procedures developed by Salomon Janhunen from Los Alamos National Laboratory. These procedures were optimized by PPPL’s Jin Chen, who acted as a bridge between physics, mathematics and computer science aspects of the challenge. “For a complicated problem, the improvement is significant,” Chen said.
    The simulation is known as a particle-in-cell code because it tracks individual particles (or small groups of particles clumped together as so-called macroparticles) while they move in space from one grid cell to another. This approach works particularly well for the plasmas used in industrial devices where the gas pressure is low. A fluid approach doesn’t work for such plasmas because it uses average values instead of tracking individual particles.
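    The particle-in-cell bookkeeping described above can be sketched in a few lines. The following is a deliberately minimal 1D electrostatic toy, not the PPPL/Applied Materials code (which is a 2D electromagnetic simulation); every constant and weighting choice here is an illustrative assumption. It shows the core loop: deposit macroparticle charge onto grid cells, solve for the field on the grid, then gather the field back to push the particles from cell to cell.

```python
import numpy as np

# Minimal 1D electrostatic particle-in-cell toy (illustrative only).
NG, NP, L, DT = 64, 10_000, 1.0, 0.05   # grid cells, macroparticles, box, step
dx = L / NG
rng = np.random.default_rng(0)

x = rng.uniform(0, L, NP)               # macroparticle positions
v = 0.1 * np.sin(2 * np.pi * x / L)     # small velocity perturbation

def step(x, v):
    # 1) Deposit macroparticle charge onto the grid (nearest-cell weighting).
    cells = (x / dx).astype(int) % NG
    rho = np.bincount(cells, minlength=NG) / NP * NG - 1.0  # neutralizing background
    # 2) Solve Poisson's equation on the periodic grid with an FFT.
    k = 2 * np.pi * np.fft.fftfreq(NG, d=dx)
    k[0] = 1.0                          # avoid division by zero at the mean mode
    phi_hat = np.fft.fft(rho) / k**2
    phi_hat[0] = 0.0
    E = -np.real(np.fft.ifft(1j * k * phi_hat))
    # 3) Gather the field at each particle and push it to its new cell.
    v = v - E[cells] * DT
    x = (x + v * DT) % L
    return x, v

for _ in range(100):
    x, v = step(x, v)
```

    A fluid code would instead evolve averaged density and velocity fields on the grid alone, which is why it misses the kinetic detail available at low gas pressure.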
    Obeying the law of conservation of energy
    “This new simulation allows us to model larger plasmas quickly while accurately conserving energy, helping to ensure the results reflect real physical processes rather than numerical artifacts,” said Kaganovich.
    In the real world, energy doesn’t randomly appear or disappear. It follows the law of conservation of energy. But a small mistake in a computer simulation can accumulate with each step. Because each simulation might involve thousands or even millions of steps, a small error throws off the results significantly. Making sure energy is conserved helps keep the simulation faithful to a real plasma.
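    The compounding arithmetic makes the point concrete (the numbers here are hypothetical, chosen only to illustrate the scale of the effect):

```python
# A one-part-per-million energy error at every step grows into a large
# drift over a million-step run (illustrative numbers, not from the paper).
per_step_error = 1e-6
steps = 1_000_000
drift = (1 + per_step_error) ** steps
print(f"energy off by a factor of {drift:.2f}")  # ~2.72x after a million steps
```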
    PPPL’s Stéphane Ethier also worked on the new simulation code. The work was supported by a Cooperative Research and Development Agreement between Applied Materials Inc. and PPPL, under contract number DE-AC02-09CH11466.

    AI is here to stay, let students embrace the technology, experts urge

    A new study from UBC Okanagan says students appear to be using generative artificial intelligence (GenAI) responsibly, and as a way to speed up tasks, not just boost their grades.
    Dr. Meaghan MacNutt, who teaches professional ethics in the UBCO School of Health and Exercise Sciences (HES), recently published a study in Advances in Physiology Education. The paper — titled Reflective writing assignments in the era of GenAI: student behaviour and attitudes suggest utility, not futility — contradicts common concerns about student use of AI.
    Students in three different courses, almost 400 participants, anonymously completed a survey about their use of AI on at least five reflective writing assignments. All three courses used an identical AI policy and students had the option to use the tool for their writing.
    “GenAI tools like ChatGPT allow users to interface with large language models. They offer incredible promise to enhance student learning, however, they are also susceptible to misuse in completion of writing assignments,” says Dr. MacNutt. “This potential has raised concerns about GenAI as a serious threat to academic integrity and to the learning that occurs when students draft and revise their own written work.”
    While UBC offers guidance to students and faculty about the risks and benefits of using GenAI, policies regarding its use in courses are at the discretion of individual instructors.
    Dr. MacNutt, who completed the study with doctoral student and HES lecturer Tori Stranges, notes that discipline-specific factors contribute to the perception that many HES courses are particularly challenging, and that many students strive for excellence, often at the expense of their mental wellbeing.
    So, how often were the students using AI and what was motivating their use?

    While only about one-third of the students used AI, the majority of users, 81 per cent, reported their GenAI use was inspired by at least one of the following factors: speed and ease in completing the assignment, a desire for high grades and a desire to learn. About 15 per cent of the students said they were motivated by all three factors, with more than 50 per cent using it to save time on the assignment.
    Dr. MacNutt notes that most students used AI to initiate the paper or revise sections. Only 0.3 per cent of assignments were mostly written by GenAI.
    “There is a lot of speculation when it comes to student use of AI,” she says. “However, students in our study reported that GenAI use was motivated more by learning than by grades, and they are using GenAI tools selectively and in ways they believe are ethical and supportive of their learning. This was somewhat unexpected due to the common perception that undergraduate students have become increasingly focused on grades at the expense of learning.”
    The study does raise some cautions, she warns. GenAI can be a useful tool for students learning English or people with reading and writing disabilities. But there is also the potential that if paid versions are better, students who can afford to use a more effective platform might have an advantage over others — creating further classroom inequities.
    MacNutt says continued research in this area will provide a better understanding of student behaviour and attitudes as GenAI technologies continue to advance. She also suggests that, as AI use becomes more frequent, institutions and educators adopt an approach that embodies “collaboration with” rather than “surveillance of” students.
    “Our findings contradict common concerns about widespread student misuse and overuse of GenAI at the expense of academic integrity and learning,” says Dr. MacNutt. “But as we move forward with our policies, or how we’re teaching students how to use it, we have to keep in mind that students are coming from really different places. And they have different ways of benefiting or being harmed by these technologies.”

    Breakthrough AI model could transform how we prepare for natural disasters

    As climate-related disasters grow more intense and frequent, an international team of researchers has introduced Aurora — a groundbreaking AI model designed to deliver faster, more accurate, and more affordable forecasts for air quality, ocean waves, and extreme weather events. Trained on over a million hours of data, the model could, according to the researchers, revolutionize the way we prepare for natural disasters and respond to climate change.
    From deadly floods in Europe to intensifying tropical cyclones around the world, the climate crisis has made timely and precise forecasting more essential than ever. Yet traditional forecasting methods rely on highly complex numerical models developed over decades, requiring powerful supercomputers and large teams of experts. According to its developers, Aurora offers a powerful and efficient alternative using artificial intelligence.
    Machine learning at the core
    ‘Aurora uses state-of-the-art machine learning techniques to deliver superior forecasts for key environmental systems — air quality, weather, ocean waves, and tropical cyclones,’ explains Max Welling, machine learning expert at the University of Amsterdam and one of the researchers behind the model. Unlike conventional methods, Aurora requires far less computational power, making high-quality forecasting more accessible and scalable — especially in regions that lack expensive infrastructure.
    Trained on a million hours of earth data
    Aurora is built on a 1.3 billion parameter foundation model, trained on more than one million hours of Earth system data. It has been fine-tuned to excel in a range of forecasting tasks:
    • Air quality: outperforms traditional models in 74% of cases
    • Ocean waves: exceeds numerical simulations on 86% of targets
    • Tropical cyclones: beats seven operational forecasting centres in 100% of tests
    • High-resolution weather: surpasses leading models in 92% of scenarios, especially during extreme events
    Forecasting that’s fast, accurate, and inclusive
    As climate volatility increases, rapid and reliable forecasts are crucial for disaster preparedness, emergency response, and climate adaptation. The researchers believe Aurora can help by making advanced forecasting more accessible.

    ‘Development cycles that once took years can now be completed in just weeks by small engineering teams,’ notes AI researcher Ana Lucic, also of the University of Amsterdam. ‘This could be especially valuable for countries in the Global South, smaller weather services, and research groups focused on localised climate risks.’ ‘Importantly, this acceleration builds on decades of foundational research and the vast datasets made available through traditional forecasting methods,’ Welling adds.
    Aurora is available freely online for anyone to use. If someone wants to fine-tune it for a specific task, they will need to provide data for that task. ‘But the “initial” training is done, we don’t need these vast datasets anymore, all the information from them is baked into Aurora already’, Lucic explains.
    A future-proof forecasting tool
    Although current research focuses on the four applications mentioned above, the researchers say Aurora is flexible and can be used for a wide range of future scenarios. These could include forecasting flood risks, wildfire spread, seasonal weather trends, agricultural yields, and renewable energy output. ‘Its ability to process diverse data types makes it a powerful and future-ready tool’, states Welling.
    As the world faces more extreme weather — from heatwaves to hurricanes — innovative models like Aurora could shift the global approach from reactive crisis response to proactive climate resilience, the study concludes.

    Could AI understand emotions better than we do?

    Is artificial intelligence (AI) capable of suggesting appropriate behaviour in emotionally charged situations? A team from the University of Geneva (UNIGE) and the University of Bern (UniBE) put six generative AIs — including ChatGPT — to the test using emotional intelligence (EI) assessments typically designed for humans. The outcome: these AIs outperformed the average human and were even able to generate new tests in record time. These findings open up new possibilities for AI in education, coaching, and conflict management. The study is published in Communications Psychology.
    Large Language Models (LLMs) are artificial intelligence (AI) systems capable of processing, interpreting and generating human language. The ChatGPT generative AI, for example, is based on this type of model. LLMs can answer questions and solve complex problems. But can they also suggest emotionally intelligent behaviour?
    Emotionally charged scenarios
    To find out, a team from UniBE, Institute of Psychology, and UNIGE’s Swiss Center for Affective Sciences (CISA) subjected six LLMs (ChatGPT-4, ChatGPT-o1, Gemini 1.5 Flash, Copilot 365, Claude 3.5 Haiku and DeepSeek V3) to emotional intelligence tests. “We chose five tests commonly used in both research and corporate settings. They involved emotionally charged scenarios designed to assess the ability to understand, regulate, and manage emotions,” says Katja Schlegel, lecturer and principal investigator at the Division of Personality Psychology, Differential Psychology, and Assessment at the Institute of Psychology at UniBE, and lead author of the study.
    For example: One of Michael’s colleagues has stolen his idea and is being unfairly congratulated. What would be Michael’s most effective reaction?
    a) Argue with the colleague involved
    b) Talk to his superior about the situation

    c) Silently resent his colleague
    d) Steal an idea back
    Here, option b) was considered the most appropriate.
    In parallel, the same five tests were administered to human participants. “In the end, the LLMs achieved significantly higher scores — 82% correct answers versus 56% for humans. This suggests that these AIs not only understand emotions, but also grasp what it means to behave with emotional intelligence,” explains Marcello Mortillaro, senior scientist at the UNIGE’s Swiss Center for Affective Sciences (CISA), who was involved in the research.
    New tests in record time
    In a second stage, the scientists asked ChatGPT-4 to create new emotional intelligence tests, with new scenarios. These automatically generated tests were then taken by over 400 participants. “They proved to be as reliable, clear and realistic as the original tests, which had taken years to develop,” explains Katja Schlegel. “LLMs are therefore not only capable of finding the best answer among the various available options, but also of generating new scenarios adapted to a desired context. This reinforces the idea that LLMs, such as ChatGPT, have emotional knowledge and can reason about emotions,” adds Marcello Mortillaro.
    These results pave the way for AI to be used in contexts thought to be reserved for humans, such as education, coaching or conflict management, provided it is used and supervised by experts.

    3D printers leave hidden ‘fingerprints’ that reveal part origins

    A new artificial intelligence system pinpoints the origin of 3D printed parts down to the specific machine that made them. The technology could allow manufacturers to monitor their suppliers and manage their supply chains, detecting early problems and verifying that suppliers are following agreed upon processes.
    A team of researchers led by Bill King, a professor of mechanical science and engineering at the University of Illinois Urbana-Champaign, has discovered that parts made by additive manufacturing, also known as 3D printing, carry a unique signature from the specific machine that fabricated them. This inspired the development of an AI system which detects the signature, or “fingerprint,” from a photograph of the part and identifies its origin.
    “We are still amazed that this works: we can print the same part design on two identical machines (same model, same process settings, same material) and each machine leaves a unique fingerprint that the AI model can trace back to the machine,” King said. “It’s possible to determine exactly where and how something was made. You don’t have to take your supplier’s word on anything.”
    The results of this study were recently published in the Nature partner journal Advanced Manufacturing.
    The technology has major implications for supplier management and quality control, according to King. When a manufacturer contracts a supplier to produce parts for a product, the supplier typically agrees to adhere to a specific set of machines, processes, and factory procedures and not to make any changes without permission. However, this provision is difficult to enforce. Suppliers often make changes without notice, from the fabrication process to the materials used. They are normally benign, but they can also cause major issues in the final product.
    “Modern supply chains are based on trust,” King said. “There’s due diligence in the form of audits and site tours at the start of the relationship. But, for most companies, it’s not feasible to continuously monitor their suppliers. Changes to the manufacturing process can go unnoticed for a long time, and you don’t find out until a bad batch of products is made. Everyone who works in manufacturing has a story about a supplier that changed something without permission and caused a serious problem.”
    While studying the repeatability of 3D printers, King’s research group noticed that the tolerances of part dimensions were correlated with individual machines. This inspired the researchers to examine photographs of the parts. It turned out that it is possible to determine the specific machine that made the part, the fabrication process, and the materials used — the production “fingerprint.”
    “These manufacturing fingerprints have been hiding in plain sight,” King said. “There are thousands of 3D printers in the world, and tens of millions of 3D printed parts used in airplanes, automobiles, medical devices, consumer products, and a host of other applications. Each one of these parts has a unique signature that can be detected using AI.”

    King’s research group developed an AI model to identify production fingerprints from photographs taken with smartphone cameras. The AI model was developed on a large data set, comprising photographs of 9,192 parts made on 21 machines from six companies and with four different fabrication processes. When calibrating their model, the researchers found that a fingerprint could be obtained with 98% accuracy from just 1 square millimeter of the part’s surface.
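    The attribution idea can be sketched as nearest-centroid matching on made-up “texture features.” This is a hedged illustration, not the team’s model (which is a deep AI system over smartphone photographs); the feature model and all constants below are invented, except the roughly-ten-part training figure, which comes from the study.

```python
import numpy as np

# Sketch: each printer's surface texture is modeled as samples around a
# machine-specific mean feature vector (all numbers are assumptions).
rng = np.random.default_rng(0)
N_MACHINES, N_TRAIN, N_FEAT = 3, 10, 8  # study reports ~10 parts can suffice

machine_means = rng.normal(0.0, 1.0, size=(N_MACHINES, N_FEAT))

def sample_patches(machine, n):
    """Simulate feature vectors for surface patches printed on one machine."""
    return machine_means[machine] + rng.normal(0.0, 0.3, size=(n, N_FEAT))

# "Enroll" each machine by averaging the features of a few reference parts.
centroids = np.stack([sample_patches(m, N_TRAIN).mean(axis=0)
                      for m in range(N_MACHINES)])

def identify(patch):
    """Attribute a patch to the machine with the nearest centroid."""
    return int(np.argmin(np.linalg.norm(centroids - patch, axis=1)))

# Check attribution accuracy on fresh patches from each machine.
trials = [(m, p) for m in range(N_MACHINES) for p in sample_patches(m, 50)]
accuracy = np.mean([identify(p) == m for m, p in trials])
print(f"attribution accuracy: {accuracy:.2f}")
```

    The point of the sketch is the enrollment workflow: a handful of reference parts per machine is enough to build a signature that later deliveries can be checked against.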
    “Our results suggest that the AI model can make accurate predictions when trained with as few as 10 parts,” King said. “Using just a few samples from a supplier, it’s possible to verify everything that they deliver after.”
    King believes that this technology has the potential to overhaul supply chain management. With it, manufacturers can detect problems at early stages of production, and they save the time and resources needed to pinpoint the origins of errors. The technology could also be used to track the origins of illicit goods.
    Miles Bimrose, Davis McGregor, Charlie Wood and Sameh Tawfick also contributed to this work.

    AI is good at weather forecasting. Can it predict freak weather events?

    Increasingly powerful AI models can make short-term weather forecasts with surprising accuracy. But neural networks only predict based on patterns from the past — what happens when the weather does something that’s unprecedented in recorded history? A new study led by scientists from the University of Chicago, in collaboration with New York University and the University of California Santa Cruz, is testing the limits of AI-powered weather prediction. In research published May 21 in Proceedings of the National Academy of Sciences, they found that neural networks cannot forecast weather events beyond the scope of existing training data — which might leave out events like 200-year floods, unprecedented heat waves or massive hurricanes.
    This limitation is particularly important as researchers incorporate neural networks into operational weather forecasting, early warning systems, and long-term risk assessments, the authors said. But they also said there are ways to address the problem by integrating more math and physics into the AI tools.
    “AI weather models are one of the biggest achievements in AI in science. What we found is that they are remarkable, but not magical,” said Pedram Hassanzadeh, an associate professor of geophysical sciences at UChicago and a corresponding author on the study. “We’ve only had these models for a few years, so there’s a lot of room for innovation.”
    Gray swan events
    Weather forecasting AIs work in a similar way to other neural networks that many people now interact with, such as ChatGPT.
    Essentially, the model is “trained” by feeding it a large amount of text or images and asking it to look for patterns. Then, when a user presents the model with a question, it looks back at what it’s previously seen and uses that to predict an answer.
    In the case of weather forecasts, scientists train neural networks by feeding them decades’ worth of weather data. Then a user can input data about the current weather conditions and ask the model to predict the weather for the next several days.
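    The train-then-roll-out idea can be sketched with a toy linear system standing in for the atmosphere. Everything here is an illustrative assumption (a four-dimensional state, a least-squares “training” step) rather than any operational model, but the workflow is the same: fit a one-step map from historical data, then forecast several days ahead by applying it repeatedly.

```python
import numpy as np

# Toy stand-in for the atmosphere: hidden stable linear dynamics plus noise.
rng = np.random.default_rng(0)
DIM, N_STEPS = 4, 5000
A_true = 0.9 * np.linalg.qr(rng.normal(size=(DIM, DIM)))[0]

states = [rng.normal(size=DIM)]
for _ in range(N_STEPS):
    states.append(A_true @ states[-1] + 0.01 * rng.normal(size=DIM))
X, Y = np.array(states[:-1]), np.array(states[1:])

# "Training": fit tomorrow's state as a function of today's (least squares
# here; real weather AIs use deep networks, but the principle is the same).
coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)
A_learned = coeffs.T

def forecast(state, days):
    """Roll the learned one-step map forward several 'days'."""
    for _ in range(days):
        state = A_learned @ state
    return state

print(np.round(forecast(states[-1], 3), 3))  # 3-day-ahead prediction
```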

    The AI models are very good at this. Generally, they can achieve the same accuracy as a top-of-the-line, supercomputer-based weather model that uses 10,000 to 100,000 times more time and energy, Hassanzadeh said.
    “These models do really, really well for day-to-day weather,” he said. “But what if next week there’s a freak weather event?”
    The concern is that the neural network is only working off the weather data we currently have, which goes back about 40 years. But that’s not the full range of possible weather.
    “The floods caused by Hurricane Harvey in 2017 were considered a once-in-a-2,000-year event, for example,” Hassanzadeh said. “They can happen.”
    Scientists sometimes refer to these events as “gray swan” events. They’re not quite all the way to a black swan event — something like the asteroid that killed the dinosaurs — but they are locally devastating.
    The team decided to test the limits of the AI models using hurricanes as an example. They trained a neural network using decades of weather data, but removed all the hurricanes stronger than a Category 2. Then they fed it an atmospheric condition that leads to a Category 5 hurricane in a few days. Could the model extrapolate to predict the strength of the hurricane?

    The answer was no.
    “It always underestimated the event. The model knows something is coming, but it always predicts it’ll only be a Category 2 hurricane,” said Yongqiang Sun, research scientist at UChicago and the other corresponding author on the study.
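    The effect can be reproduced with a toy pattern-matching forecaster. This is a k-nearest-neighbour stand-in, not the study’s neural network, and the “condition versus category” relationship below is contrived purely to show why a model trained on capped data cannot predict beyond the cap.

```python
import numpy as np

# "Intensity" grows with a pre-storm condition x, but every training label is
# capped at Category 2, mimicking training data with the strongest storms removed.
rng = np.random.default_rng(1)
x_train = rng.uniform(0.0, 1.0, 200)    # conditions seen in training
y_train = np.minimum(5 * x_train, 2.0)  # true intensity 5*x, capped at Cat 2

def predict(x, k=5):
    """Average the outcomes of the k most similar past conditions --
    pure pattern matching, with no physics to extrapolate from."""
    idx = np.argsort(np.abs(x_train - x))[:k]
    return y_train[idx].mean()

# A condition that should spin up a Category 5 storm (x = 1.0 -> intensity 5):
print(round(predict(1.0), 2))  # stuck at 2.0: nothing stronger was ever seen
```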
    This kind of error, known as a false negative, is a big deal in weather forecasting. If a forecast tells you a storm will be a Category 5 hurricane and it only turns out to be a Category 2, that means people evacuated who may not have needed to, which is not ideal. But if a forecast underestimates a hurricane that turns out to be a Category 5, the consequences would be far worse.
    Hurricane warnings and why physics matters
    The big difference between neural networks and traditional weather models is that traditional models “understand” physics. Scientists design them to incorporate our understanding of the math and physics that govern atmospheric dynamics, jet streams and other phenomena.
    The neural networks aren’t doing any of that. Like ChatGPT, which is essentially a predictive text machine, they simply look at weather patterns and suggest what comes next, based on what has happened in the past.
    No major service is currently using only AI models for forecasting. But as their use expands, this tendency will need to be factored in, Hassanzadeh said.
    Researchers, from meteorologists to economists, are beginning to use AI for long-term risk assessments. For example, they might ask an AI to generate many examples of weather patterns, so that we can see the most extreme events that might happen in each region in the future. But if an AI cannot predict anything stronger than what it has seen before, its usefulness for this critical task would be limited.
    However, the team found the model could predict stronger hurricanes if there was any precedent, even elsewhere in the world, in its training data. For example, if the researchers deleted all the evidence of Atlantic hurricanes but left in Pacific hurricanes, the model could extrapolate to predict Atlantic hurricanes.
    “This was a surprising and encouraging finding: it means that the models can forecast an event that was unprecedented in one region but occurred once in a while in another region,” Hassanzadeh said.
    Merging approaches
    The solution, the researchers suggested, is to begin incorporating mathematical tools and the principles of atmospheric physics into AI-based models.
    “The hope is that if AI models can really learn atmospheric dynamics, they will be able to figure out how to forecast gray swans,” Hassanzadeh said.
    How to do this is a hot area of research. One promising approach the team is pursuing is called active learning — where AI helps guide traditional physics-based weather models to create more examples of extreme events, which can then be used to improve the AI’s training.
    “Longer simulated or observed datasets aren’t going to work. We need to think about smarter ways to generate data,” said Jonathan Weare, professor at the Courant Institute of Mathematical Sciences at New York University and study co-author. “In this case, that means answering the question ‘where should I place my training data to achieve better performance on extremes?’ Fortunately, we think AI weather models themselves, when paired with the right mathematical tools, can help answer this question.”
    University of Chicago Prof. Dorian Abbot and computational scientist Mohsen Zand were also co-authors on the study, as well as Ashesh Chattopadhyay of the University of California Santa Cruz.
    The study used resources maintained by the University of Chicago Research Computing Center.

    Infrared contact lenses allow people to see in the dark, even with their eyes closed

    Neuroscientists and materials scientists have created contact lenses that enable infrared vision in both humans and mice by converting infrared light into visible light. Unlike infrared night vision goggles, the contact lenses, described in the Cell Press journal Cell on May 22, do not require a power source — and they enable the wearer to perceive multiple infrared wavelengths. Because they’re transparent, users can see both infrared and visible light simultaneously, though infrared vision was enhanced when participants had their eyes closed.
    “Our research opens up the potential for non-invasive wearable devices to give people super-vision,” says senior author Tian Xue, a neuroscientist at the University of Science and Technology of China. “There are many potential applications right away for this material. For example, flickering infrared light could be used to transmit information in security, rescue, encryption or anti-counterfeiting settings.”
    The contact lens technology uses nanoparticles that absorb infrared light and convert it into wavelengths that are visible to mammalian eyes (e.g., electromagnetic radiation in the 400-700 nm range). The nanoparticles specifically enable detection of “near-infrared light,” which is infrared light in the 800-1600 nm range, just beyond what humans can already see. The team previously showed that these nanoparticles enable infrared vision in mice when injected into the retina, but they wanted to design a less invasive option.
    To create the contact lenses, the team combined the nanoparticles with flexible, non-toxic polymers that are used in standard soft contact lenses. After showing that the contact lenses were non-toxic, they tested their function in both humans and mice.
    They found that contact lens-wearing mice displayed behaviors suggesting that they could see infrared wavelengths. For example, when the mice were given the choice of a dark box and an infrared-illuminated box, contact-wearing mice chose the dark box whereas contact-less mice showed no preference. The mice also showed physiological signals of infrared vision: the pupils of contact-wearing mice constricted in the presence of infrared light, and brain imaging revealed that infrared light caused their visual processing centers to light up.
    In humans, the infrared contact lenses enabled participants to accurately detect flashing morse code-like signals and to perceive the direction of incoming infrared light. “It’s totally clear cut: without the contact lenses, the subject cannot see anything, but when they put them on, they can clearly see the flickering of the infrared light,” said Xue. “We also found that when the subject closes their eyes, they’re even better able to receive this flickering information, because near-infrared light penetrates the eyelid more effectively than visible light, so there is less interference from visible light.”
    An additional tweak to the contact lenses allows users to differentiate between different spectra of infrared light by engineering the nanoparticles to color-code different infrared wavelengths. For example, infrared wavelengths of 980 nm were converted to blue light, wavelengths of 808 nm were converted to green light, and wavelengths of 1,532 nm were converted to red light. In addition to enabling wearers to perceive more detail within the infrared spectrum, these color-coding nanoparticles could be modified to help color blind people see wavelengths that they would otherwise be unable to detect.
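    The reported color coding amounts to a simple wavelength-to-color lookup. Only the three band/color pairs below come from the article; the matching tolerance and the helper function are invented for illustration.

```python
# Nanometre band of incoming infrared -> visible color it is converted to
# (pairs from the article; tolerance is an illustrative assumption).
IR_TO_VISIBLE = {980: "blue", 808: "green", 1532: "red"}

def perceived_color(ir_wavelength_nm, tolerance=30):
    """Return the visible color a wearer would see for an IR source, or None
    if the wavelength falls outside the engineered nanoparticle bands."""
    for band, color in IR_TO_VISIBLE.items():
        if abs(ir_wavelength_nm - band) <= tolerance:
            return color
    return None

print(perceived_color(980), perceived_color(808), perceived_color(1532))
```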

    “By converting red visible light into something like green visible light, this technology could make the invisible visible for color blind people,” says Xue.
    Because the contact lenses have limited ability to capture fine details (due to their close proximity to the retina, which causes the converted light particles to scatter), the team also developed a wearable glass system using the same nanoparticle technology, which enabled participants to perceive higher-resolution infrared information.
    Currently, the contact lenses are only able to detect infrared radiation projected from an LED light source, but the researchers are working to increase the nanoparticles’ sensitivity so that they can detect lower levels of infrared light.
    “In the future, by working together with materials scientists and optical experts, we hope to make a contact lens with more precise spatial resolution and higher sensitivity,” says Xue.


    ‘Fast-fail’ AI blood test could steer patients with pancreatic cancer away from ineffective therapies

    An artificial intelligence technique for detecting DNA fragments shed by tumors and circulating in a patient’s blood, developed by Johns Hopkins Kimmel Cancer Center investigators, could help clinicians more quickly determine whether pancreatic cancer therapies are working.
    After testing the method, called ARTEMIS-DELFI, in blood samples from patients participating in two large clinical trials of pancreatic cancer treatments, researchers found that it could be used to identify therapeutic responses. ARTEMIS-DELFI and WGMAF, another method the investigators developed to study mutations, were better predictors of outcome than imaging or other existing clinical and molecular markers two months after treatment initiation. However, ARTEMIS-DELFI proved the superior test because it was simpler and potentially more broadly applicable.
    A description of the work was published May 21 in Science Advances. It was partly supported by grants from the National Institutes of Health.
    Time is of the essence when treating patients with pancreatic cancer, explains senior study author Victor E. Velculescu, M.D., Ph.D., co-director of the cancer genetics and epigenetics program at the cancer center. Many patients with pancreatic cancer receive a diagnosis at a late stage, when cancer may progress rapidly.
    “Providing patients with more potential treatment options is especially vital as a growing number of experimental therapies for pancreatic cancer have become available,” Velculescu says. “We want to know as quickly as we can if the therapy is helping the patient or not. If it is not working, we want to be able to switch to another therapy.”
    Currently, clinicians use imaging tools to monitor cancer treatment response and tumor progression. However, these tools may not produce timely results, and they are less accurate for patients receiving immunotherapies, whose scans can be more complicated to interpret. In the study, Velculescu and his colleagues tested two alternative approaches to monitoring treatment response in patients participating in the phase 2 CheckPAC trial of immunotherapy for pancreatic cancer.
    One approach, called WGMAF (tumor-informed plasma whole-genome sequencing), analyzed DNA from tumor biopsies as well as cell-free DNA in blood samples to detect a treatment response. The other, called ARTEMIS-DELFI (tumor-independent genome-wide cfDNA fragmentation profiles and repeat landscapes), used machine learning, a form of artificial intelligence, to scan millions of cell-free DNA fragments only in the patient’s blood samples. Both approaches were able to detect which patients were benefiting from the therapies. However, not all patients had tumor samples, and many patients’ tumor samples had only a small fraction of cancer cells compared to the overall tissue, which also contained normal pancreatic and other cells, thereby confounding the WGMAF test.

    The ARTEMIS-DELFI approach worked with more patients and was simpler logistically, Velculescu says. The team then validated that ARTEMIS-DELFI was an effective treatment response monitoring tool in a second clinical trial called the PACTO trial. The study confirmed that ARTEMIS-DELFI could identify which patients were responding as soon as four weeks after therapy started.
    “The ‘fast-fail’ ARTEMIS-DELFI approach may be particularly useful in pancreatic cancer where changing therapies quickly could be helpful in patients who do not respond to the initial therapy,” says lead study author Carolyn Hruban, who was a graduate student at Johns Hopkins during the study and is now a postdoctoral researcher at the Dana-Farber Cancer Institute. “It’s simpler, likely less expensive, and more broadly applicable than using tumor samples.”
    The next step for the team will be prospective studies that test whether the information provided by ARTEMIS-DELFI helps clinicians more efficiently find an effective therapy and improve patient outcomes. A similar approach could also be used to monitor other cancers. Earlier this year, members of the team published a study in Nature Communications showing that a variation of the cell-free fragmentation monitoring approach called DELFI-TF was helpful in assessing colon cancer therapy response.
    “Our cell-free DNA fragmentation analyses provide a real-time assessment of a patient’s therapy response that can be used to personalize care and improve patient outcomes,” Velculescu says.
    Other co-authors include Daniel C. Bruhm, Shashikant Koul, Akshaya V. Annapragada, Nicholas A. Vulpescu, Sarah Short, Kavya Boyapati, Alessandro Leal, Stephen Cristiano, Vilmos Adleff, Robert B. Scharpf, Zachariah H. Foda, and Jillian Phallen of Johns Hopkins; Inna M. Chen, Susann Theile, and Julia S. Johannsen of Copenhagen University Hospital Herlev and Gentofte, and the University of Copenhagen; and Bahar Alipanahi and Zachary L. Skidmore of Delfi Diagnostics.
    The study was supported by the Dr. Miriam and Sheldon G. Adelson Medical Research Foundation, the SU2C Lung Cancer Interception Dream Team Grant, the Stand Up to Cancer-Dutch Cancer Society International Translational Cancer Research Dream Team Grant, the Gray Foundation, the Honorable Tina Brozman Foundation, the Commonwealth Foundation, the Cole Foundation, a research grant from Delfi Diagnostics, and National Institutes of Health grants CA121113, 1T32GM136577, CA006973, CA233259, CA062924 and CA271896.
    Annapragada, Scharpf, and Velculescu are inventors on a patent submitted by Johns Hopkins University for genome-wide repeat and cell-free DNA in cancer (US patent application number 63/532,642). Annapragada, Bruhm, Adleff, Foda, Phallen and Scharpf are inventors on patent applications submitted by the university on related technology and licensed to Delfi Diagnostics. Phallen, Adleff, and Scharpf are founders of Delfi Diagnostics. Adleff and Scharpf are consultants for the company and Skidmore and Alipanahi are employees of the company. Velculescu is a founder of Delfi Diagnostics, member of its Board of Directors, and owns stock in the company. Johns Hopkins University owns equity in the company as well. Velculescu is an inventor on patent applications submitted by The Johns Hopkins University related to cancer genomic analyses and cell-free DNA that have been licensed to one or more entities, including Delfi Diagnostics, LabCorp, Qiagen, Sysmex, Agios, Genzyme, Esoterix, Ventana and ManaT Bio that result in royalties to the inventors and the University. These relationships are managed by Johns Hopkins in accordance with its conflict-of-interest policies.