More stories

  • AI harnesses tumor genetics to predict treatment response

    In a groundbreaking study published on January 18, 2024, in Cancer Discovery, scientists at the University of California San Diego School of Medicine leveraged a machine learning algorithm to tackle one of the biggest challenges facing cancer researchers: predicting when cancer will resist chemotherapy.
    All cells, including cancer cells, rely on complex molecular machinery to replicate DNA as part of normal cell division. Most chemotherapies work by disrupting this DNA replication machinery in rapidly dividing tumor cells. While scientists recognize that a tumor’s genetic composition heavily influences its specific drug response, the vast multitude of mutations found within tumors has made prediction of drug resistance a challenging prospect.
    The new algorithm overcomes this barrier by exploring how numerous genetic mutations collectively influence a tumor’s reaction to drugs that impede DNA replication. The researchers tested their model on cervical cancer tumors, successfully forecasting responses to cisplatin, one of the most common chemotherapy drugs. The model identified the tumors at greatest risk of treatment resistance and also pinpointed much of the underlying molecular machinery driving that resistance.
    “Clinicians were previously aware of a few individual mutations that are associated with treatment resistance, but these isolated mutations tended to lack significant predictive value. The reason is that a much larger number of mutations can shape a tumor’s treatment response than previously appreciated,” Trey Ideker, PhD, professor in the Department of Medicine at UC San Diego School of Medicine, explained. “Artificial intelligence bridges that gap in our understanding, enabling us to analyze a complex array of thousands of mutations at once.”
    One of the challenges in understanding how tumors respond to drugs is the inherent complexity of DNA replication — a mechanism targeted by numerous cancer drugs.
    “Hundreds of proteins work together in complex arrangements to replicate DNA,” Ideker noted. “Mutations in any one part of this system can change how the entire tumor responds to chemotherapy.”
    The researchers focused on the standard set of 718 genes commonly used in clinical genetic testing for cancer classification, using mutations within these genes as the initial input for their machine learning model. After training it with publicly accessible drug response data, the model pinpointed 41 molecular assemblies — groups of collaborating proteins — where genetic alterations influence drug efficacy.
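    The release describes this workflow only at a high level, but the general pattern of rolling gene-level mutations up into protein-assembly-level features and then fitting an interpretable classifier can be sketched with standard tools. The example below is a toy illustration using scikit-learn with random data and a hypothetical gene-to-assembly membership map; it is not the authors’ model or code.

    ```python
    # Toy sketch: aggregate gene-level mutations into assembly-level features,
    # then fit an interpretable classifier of drug response. All data are random
    # placeholders; the membership map is hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_tumors, n_genes, n_assemblies = 200, 718, 41
    mutations = rng.integers(0, 2, size=(n_tumors, n_genes))       # 1 = gene mutated in tumor
    membership = rng.integers(0, 2, size=(n_genes, n_assemblies))  # 1 = gene belongs to assembly

    # Assembly-level feature: number of mutated genes in each assembly, per tumor.
    assembly_burden = mutations @ membership
    response = rng.integers(0, 2, size=n_tumors)                   # 1 = tumor responds to cisplatin

    model = LogisticRegression(max_iter=1000).fit(assembly_burden, response)

    # Interpretability: assemblies whose mutation burden pushes predictions toward
    # non-response are candidate drivers of resistance (in this toy setup).
    resistance_ranking = np.argsort(model.coef_[0])[:5]
    print("Assemblies most associated with resistance:", resistance_ranking)
    ```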

    “Cancer is a network-based disease driven by many interconnected components, but previous machine learning models for predicting treatment resistance don’t always reflect this,” said Ideker. “Rather than focusing on a single gene or protein, our model evaluates the broader biochemical networks vital for cancer survival.”
    After training their model, the researchers put it to the test in cervical cancer, in which roughly 35% of tumors persist after treatment. The model was able to accurately identify tumors that were susceptible to therapy, which were associated with improved patient outcomes. The model also effectively pinpointed tumors likely to resist treatment.
    Beyond forecasting treatment responses, the model helped shed light on its decision-making process by identifying the protein assemblies driving treatment resistance in cervical cancer. The researchers emphasize that this aspect of the model — the ability to interpret its reasoning — is key both to the model’s success and to building trustworthy AI systems.
    “Unraveling an AI model’s decision-making process is crucial, sometimes as important as the prediction itself,” said Ideker. “Our model’s transparency is one of its strengths, first because it builds trust in the model, and second because each of these molecular assemblies we’ve identified becomes a potential new target for chemotherapy. We’re optimistic that our model will have broad applications in not only enhancing current cancer treatment, but also in pioneering new ones.” More

  • Online reviews: Filter the fraud, but don’t tell us how

    When you try a new restaurant or book a hotel, do you consider the online reviews? Do you submit online reviews yourself? Do you pay attention if they are filtered and moderated? Does that impact your own online review submissions?
    A research team comprising T. Ravichandran, Ph.D., professor in Rensselaer Polytechnic Institute’s Lally School of Management; Jason Kuruzovich, Ph.D., associate professor in the Lally School of Management; and Lianlian Jiang, Ph.D., assistant professor in the Bauer College of Business at the University of Houston, examined these questions in recently published research. In a world where businesses thrive or die by online reviews, it is important to consider the implications of a platform’s review moderation policies, the transparency of those policies, and how that affects the reviews that are submitted.
    “In 2010, Yelp debuted a video to help users understand how its review filter works and why it was necessary,” said Jiang. “Then, Yelp added a section to display filtered reviews. Previously, Yelp did not disclose information about its review filter. This change presented the perfect opportunity to examine the effect of policy transparency on submitted reviews.”
    Ravichandran and team used a difference-in-differences (DID) approach, comparing reviews of over 1,000 restaurants on Yelp to reviews of those same restaurants on TripAdvisor, which did not change its practices and was not transparent about its review filter. They found that the number of reviews submitted to Yelp decreased, and the reviews that were submitted were more negative and shorter than those on TripAdvisor. Also, the more positive a review, the shorter it was.
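    For readers unfamiliar with the method, a difference-in-differences estimate comes from a simple panel regression in which the coefficient on the interaction of a “treated” indicator (Yelp) and a “post-policy” indicator (after the 2010 disclosure) captures the effect. The sketch below uses the statsmodels library with a hypothetical file and column names; it illustrates the DID setup, not the authors’ exact specification.

    ```python
    # Minimal difference-in-differences (DID) sketch; data layout and column names
    # are hypothetical (one row per restaurant-platform-month review aggregate).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("reviews_panel.csv")  # hypothetical panel of review aggregates

    # treated = 1 for Yelp listings, 0 for the same restaurants on TripAdvisor
    # post    = 1 for observations after Yelp disclosed its review filter, else 0
    # The coefficient on treated:post is the DID estimate of the transparency policy's effect.
    model = smf.ols("review_count ~ treated + post + treated:post", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["restaurant_id"]}
    )
    print(model.summary())
    ```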
    “Platforms are pressured to have content guidelines and take measures to prevent fraud and ensure that reviews are legitimate and helpful,” said Ravichandran. “However, most platforms are not transparent about their policies, leading consumers to suspect that reviews are manipulated to increase profit under the guise of filtering fraudulent content.”
    Platforms use sophisticated software to flag and filter reviews. Once a review is flagged, it is filtered out and not displayed, and it is not factored into the overall rating for a business.
    “Whether or not to be transparent about review filters is a critical decision for platforms with many considerations,” said Kuruzovich.
    Users may put in less time and effort into their reviews if they suspect that they have a significant chance of being filtered, or they may do the opposite to make their reviews less likely to be filtered. Since most fake reviews are overly positive, users may assume that positive reviews are most likely to be filtered and act accordingly. However, with a transparent policy, those who submit fake reviews may be incentivized to change their ways.
    “Review moderation transparency comes at a cost for platforms,” said Ravichandran. “Users reduce their contribution investment, or the amount of time and effort that they put into their reviews. This, in turn, affects the quality and characteristics of reviews. Although transparency helps to position a platform as unbiased toward advertisers, the resultant decrease in the number of reviews submitted impacts the platform’s usefulness to consumers.”
    “This research informs businesses on best practices and consumer behavior in the digital world,” said Chanaka Edirisinghe, Ph.D., acting dean of the Lally School of Management. “Online reviews pose great opportunity for firms, but also raise complex questions. Platforms must earn the trust of users without sacrificing engagement.” More

  • Study identifies new findings on implant positioning and stability during robotic-assisted knee revision surgery

    An innovative study at Marshall University published in Arthroplasty Today explores the use of robotic-assisted joint replacement in revision knee scenarios, comparing the pre- and post-revision implant positions in a series of revision total knee arthroplasties (TKA) using a state-of-the-art robotic arm system.
    In this retrospective study, the orthopaedic team at the Marshall University Joan C. Edwards School of Medicine and Marshall Health performed 25 revision knee replacements with a robotic-assisted computer system. The procedure involved placing new implants at the end of the thighbone and top of the shinbone with the computer’s aid to ensure the knee was stable and balanced throughout the range of motion. Researchers then carefully compared the initial positions of the primary implants with the final planned positions of the robotic revision implants for each patient, assessing the differences in millimeters and degrees.
    The analysis found that exceedingly small changes in implant position significantly influence the function of the knee replacement. Robotic assistance during revision surgery has the potential to measure these slight differences. In addition, the computer system can help the surgeon predict what size implant to use as well as help to balance the knee for stability.
    “Robotic-assisted surgery has the potential to change the way surgeons think about revision knee replacement,” said Matthew Bullock, D.O., associate professor of orthopaedic surgery and co-author on the study. “The precision offered by robotic-assisted surgery not only enhances the surgical process but also holds promise for improved patient outcomes. Besides infection, knee replacements usually fail because they become loose from the bone or because they are unbalanced, leading to pain and instability. When this happens, patients can have difficulty with activities of daily living such as walking long distances or negotiating stairs.”
    The study underscores the importance of aligning the prosthesis during revision surgery. The research also suggests potential advantages, including appropriately sized implants that can impact the ligament tension, which is crucial for functional knee revisions.
    “These findings open new doors in the realm of revision knee arthroplasty,” said Alexander Caughran, M.D., assistant professor of orthopaedic surgery and co-author on the study. “We continue to collect more data for future studies on patient outcomes after robotic revision knee replacement. We anticipate that further research and technological advancements in the realm of artificial intelligence will continue to shape the landscape of orthopaedic surgery.”
    In addition to Bullock and Caughran, co-authors from Marshall University include Micah MacAskill, M.D., resident physician; Richard Peluso, M.D., resident physician; Jonathan Lash, M.D., resident physician; and Timothy Hewett, Ph.D., professor. More

  • Chemists create a 2D heavy fermion

    Researchers at Columbia University have successfully synthesized the first 2D heavy fermion material. They introduce the new material, a layered intermetallic crystal composed of cerium, silicon, and iodine (CeSiI), in a research article published today in Nature.
    Heavy fermion compounds are a class of materials with electrons that are up to 1000x heavier than usual. In these materials, electrons get tangled up with magnetic spins that slow them down and increase their effective mass. Such interactions are thought to play important roles in a number of enigmatic quantum phenomena, including superconductivity, the movement of electrical current with zero resistance.
    Researchers have been exploring heavy fermions for decades, but in the form of bulky, 3D crystals. The new material synthesized by PhD student Victoria Posey in the lab of Columbia chemist Xavier Roy will allow researchers to drop a dimension.
    “We’ve laid a new foundation to explore fundamental physics and to probe unique quantum phases,” said Posey.
    One of the latest materials to come out of the Roy lab, CeSiI is a van der Waals crystal that can be peeled into layers that are just a few atoms thick. That makes it easier to manipulate and combine with other materials than a bulk crystal, and it may host quantum properties that emerge only in 2D. “It’s amazing that Posey and the Roy lab could make a heavy fermion so small and thin,” said senior author Abhay Pasupathy, a physicist at Columbia and Brookhaven National Laboratory. “Just like we saw with the recent Nobel Prize to quantum dots, you can do many interesting things when you shrink dimensions.”
    With its middle sheet of silicon sandwiched between magnetic cerium atoms, Posey and her colleagues suspected that CeSiI, first described in a paper in 1998, might have some interesting electronic properties. Its first stop (after Posey figured out how to prepare the extremely air-sensitive crystal for transport) was a Scanning Tunneling Microscope (STM) in Abhay Pasupathy’s physics lab at Columbia. With the STM, they observed a particular spectrum shape characteristic of heavy fermions. Posey then synthesized a non-magnetic equivalent to CeSiI and weighed the electrons of both materials via their heat capacities. CeSiI’s were heavier. “By comparing the two — one with magnetic spins and one without — we can confirm we’ve created a heavy fermion,” said Posey.
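    “Weighing” electrons with heat capacity rests on a textbook relation rather than anything specific to this paper: at low temperature the electronic specific heat grows linearly with temperature, and its coefficient tracks the density of states at the Fermi level, which scales with the carriers’ effective mass, so heavier electrons give a larger linear term.

    ```latex
    % Sommerfeld relation (standard free-electron result, not the paper's own analysis):
    C_{\mathrm{el}} = \gamma T, \qquad
    \gamma = \frac{\pi^{2} k_{B}^{2}}{3}\, g(E_{F}) \;\propto\; m^{*}
    ```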
    Samples then made their way across campus and the country for additional analyses, including to Pasupathy’s lab at Brookhaven National Laboratory for photoemission spectroscopy; to Philip Kim’s lab at Harvard for electron transport measurements; and to the National High Magnetic Field Laboratory in Florida to study its magnetic properties. Along the way, theorists Andrew Millis at Columbia and Angel Rubio at Max Planck helped explain the teams’ observations.
    From here, Columbia’s researchers will do what they do best with 2D materials: stack, strain, poke, and prod them to see what unique quantum behaviors can be coaxed out of them. Pasupathy plans to add CeSiI to his arsenal of materials in the search for quantum criticality, the point where a material shifts from one unique phase to another. At the crossover, interesting phenomena like superconductivity may await.
    “Manipulating CeSiI at the 2D limit will let us explore new pathways to achieve quantum criticality,” said Michael Ziebel, a postdoc in the Roy group and co-corresponding author, “and this can guide us in the design of new materials.”
    Back in the chemistry department, Posey, who has perfected the air-free synthesis techniques needed, is systematically replacing the atoms in the crystal — for example, swapping silicon for other metals, like aluminum or gallium — to create related heavy fermions with their own unique properties to study. “We initially thought CeSiI was a one-off,” said Roy. “But this project has blossomed into a new kind of chemistry in my group.” More

  • Higher measurement accuracy opens new window to the quantum world

    A team at HZB has developed a new measurement method that, for the first time, accurately detects tiny temperature differences in the range of 100 microkelvin in the thermal Hall effect. Previously, these temperature differences could not be measured quantitatively due to thermal noise. Using the well-known terbium titanate as an example, the team demonstrated that the method delivers highly reliable results. The thermal Hall effect provides information about coherent multi-particle states in quantum materials, based on their interaction with lattice vibrations (phonons).
    The laws of quantum physics apply to all materials. However, in so-called quantum materials, these laws give rise to particularly unusual properties. For example, magnetic fields or changes in temperature can cause excitations, collective states or quasiparticles that are accompanied by phase transitions to exotic states. This can be utilised in a variety of ways, provided it can be understood, managed and controlled: For example, in future information technologies that can store or process data with minimal energy requirements.
    The thermal Hall effect (THE) plays a key role in identifying exotic states in condensed matter. The effect is based on tiny transverse temperature differences that occur when a thermal current is passed through a sample and a perpendicular magnetic field is applied. In particular, quantitative measurement of the thermal Hall effect allows researchers to separate the exotic excitations from conventional behaviour. The thermal Hall effect is observed in a variety of materials, including spin liquids, spin ice, parent phases of high-temperature superconductors and materials with strongly polar properties. However, the temperature differences that occur perpendicular to the temperature gradient in the sample are extremely small: in typical millimetre-sized samples, they are in the range of microkelvins to millikelvins. Until now, it has been difficult to detect these temperature differences experimentally because the heat introduced by the measurement electronics and sensors masks the effect.
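    In textbook terms (this is the standard measurement geometry, not the specific HZB analysis), a heat current driven along the sample in a perpendicular magnetic field produces a small transverse temperature gradient, and the thermal Hall conductivity follows from the ratio of the transverse to the longitudinal gradient when the Hall signal is small:

    ```latex
    % Heat current along x, magnetic field along z, transverse response along y.
    j_{q,x} = -\kappa_{xx}\,\partial_x T, \qquad
    \kappa_{xy} \approx \kappa_{xx}\,\frac{\partial_y T}{\partial_x T}
    \quad \text{(valid for } \kappa_{xy} \ll \kappa_{xx}\text{)}
    ```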
    A novel sample holder
    The team led by PD Dr Klaus Habicht has now carried out pioneering work. Together with specialists from the HZB sample environment, they have developed a novel sample rod with a modular structure that can be inserted into various cryomagnets. The sample head measures the thermal Hall effect using capacitive thermometry. This takes advantage of the temperature dependence of the capacitance of specially manufactured miniature capacitors. With this setup, the experts have succeeded in significantly reducing heat transfer through sensors and electronics, and in attenuating interference signals and noise with several innovations. To validate the measurement method, they analysed a sample of terbium titanate, whose thermal conductivity in different crystal directions under a magnetic field is well known. The measured data were in excellent agreement with the literature.
    Further improvement of the measurement method
    “The ability to resolve temperature differences in the sub-millikelvin range fascinates me greatly and is a key to studying quantum materials in more detail,” says first author Dr Danny Kojda. “We have now jointly developed a sophisticated experimental design, clear measurement protocols and precise analysis procedures that allow high-resolution and reproducible measurements.” Department head Klaus Habicht adds: “Our work also provides information on how to further improve the resolution in future instruments designed for low sample temperatures. I would like to thank everyone involved, especially the sample environment team. I hope that the experimental setup will be firmly integrated into the HZB infrastructure and that the proposed upgrades will be implemented.”
    Outlook: Topological properties of phonons
    Habicht’s group will now use measurements of the thermal Hall effect to investigate the topological properties of lattice vibrations or phonons in quantum materials. “The microscopic mechanisms and the physics of the scattering processes for the thermal Hall effect in ionic crystals are far from being fully understood. The exciting question is why electrically neutral quasiparticles in non-magnetic insulators are nevertheless deflected in the magnetic field,” says Habicht. With the new instrument, the team has now created the prerequisites to answer this question. More

  • Ultrafast laser pulses could lessen data storage energy needs

    A discovery from an experiment with magnets and lasers could be a boon to energy-efficient data storage.
    “We wanted to study the physics of light-magnet interaction,” said Rahul Jangid, who led the data analysis for the project while earning his Ph.D. in materials science and engineering at UC Davis under associate professor Roopali Kukreja. “What happens when you hit a magnetic domain with very short pulses of laser light?”
    Domains are regions within a magnet whose magnetization can be flipped between north and south orientations. This property is used for data storage, for example in computer hard drives.
    Jangid and his colleagues found that when a magnet is hit with a pulsed laser, the domain walls in the ferromagnetic layers move at a speed of approximately 66 km/s, which is about 100 times faster than the speed limit previously thought.
    Domain walls moving at this speed could drastically affect the way data is stored and processed, offering a means of faster, more stable memory and reducing energy consumption in spintronic devices such as hard disk drives that use the spin of electrons within magnetic metallic multilayers to store, process or transmit information.
    “No one thought it was possible to move these walls that fast because they should hit their limit,” said Jangid. “It sounds absolutely bananas, but it’s true.”
    It’s “bananas,” because of the Walker breakdown phenomenon, which says that domain walls can only be driven up to a certain velocity before they effectively break down and stop moving. This research, however, gives evidence that the domain walls can be driven at previously unknown velocities using lasers.

    While most personal devices like laptops and cell phones use faster flash drives, data centers use cheaper, slower hard disk drives. However, each time a bit of information is processed, or flipped, the drive generates a magnetic field by driving current through a coil of wire, wasting a lot of energy as heat. If, rather, a drive could use laser pulses on the magnetic layers, the device would operate at a lower voltage and bit flips would take significantly less energy to process.
    Current projections indicate that by 2030, information and communications technology will account for 21% of the world’s energy demand, exacerbating climate change. This finding, which was highlighted in a paper by Jangid and co-authors titled “Extreme Domain Wall Speeds under Ultrafast Optical Excitation” in the journal Physical Review Letters on Dec. 19, comes at a time when finding energy-efficient technologies is paramount.
    When laser meets magnet
    To conduct the experiment, Jangid and his collaborators, including researchers from the National Institute of Standards and Technology; UC San Diego; the University of Colorado, Colorado Springs; and Stockholm University, used the Free Electron Laser Radiation for Multidisciplinary Investigations, or FERMI, facility, a free-electron laser source based in Trieste, Italy.
    “Free electron lasers are insane facilities,” Jangid said. “It’s a 2-mile-long vacuum tube, and you take a small number of electrons, accelerate them up to the speed of light, and at the end wiggle them to create X-rays so bright that if you’re not careful, your sample could be vaporized. Think of it like taking all the sunlight falling on the Earth and focusing it all on a penny — that’s how much photon flux we have at free electron lasers.”
    At FERMI, the group utilized X-rays to measure what occurs when a nano-scale magnet with multiple layers of cobalt, iron and nickel is excited by femtosecond pulses. A femtosecond is 10⁻¹⁵ seconds, or one-millionth of one-billionth of a second.

    “There are more femtoseconds in one second than there are days in the age of the universe,” Jangid said. “These are extremely small, extremely fast measurements that are difficult to wrap your head around.”
    Jangid, who was analyzing the data, saw that it was these ultrafast laser pulses exciting the ferromagnetic layers that caused the domain walls to move. Based on how fast those domain walls were moving, the study posits that these ultrafast laser pulses can switch a stored bit of information approximately 1,000 times faster than the magnetic field or spin current-based methods being used now.
    The future of ultrafast phenomena
    The technology is far from being practically applied, as current lasers consume a lot of power. However, a process similar to the way compact discs, or CDs, use lasers to store information and CD players use lasers to play it back could potentially work in the future, Jangid said.
    The next steps include further exploring the physics of mechanisms that enable ultrafast domain wall velocities higher than the previously known limits, as well as imaging the domain wall motion.
    This research will continue at UC Davis under Kukreja. Jangid is now pursuing similar research at the National Synchrotron Light Source II at Brookhaven National Laboratory.
    “There are so many aspects of ultrafast phenomenon that we are just starting to understand,” Jangid said. “I’m eager to tackle the open questions that could unlock transformative advancements in low power spintronics, data storage, and information processing.” More

  • Tiny AI-based bio-loggers revealing the interesting bits of a bird’s day

    Have you ever wondered what wild animals do all day? Documentaries offer a glimpse into their lives, but animals under a watchful eye rarely do anything interesting, and the true essence of their behaviors remains elusive. Now, researchers from Japan have developed a camera that allows us to capture these behaviors.
    In a study recently published in PNAS Nexus, researchers from Osaka University have created a small sensor-based data logger (called a bio-logger) that automatically detects and records video of infrequent behaviors in wild seabirds without supervision by researchers.
    Infrequent behaviors, such as diving into the water for food, can lead to new insights or even new directions in research. But observing enough of these behaviors to infer any results is difficult, especially when these behaviors take place in an environment that is not hospitable to humans, such as the open ocean. As a result, the detailed behaviors of these animals remain largely unknown.
    “Video cameras attached to the animal are an excellent way to observe behavior,” says Kei Tanigaki, lead author of the study. However, video cameras are very power hungry, and this leads to a trade-off. “Either the video only records until the battery runs out, in which case you might miss the rare behavior, or you use a larger, heavier battery, which is not suitable for the animal.”
    To avoid having to make this choice for the wild seabirds under study, the team uses low-power sensors, such as accelerometers, to determine when an unusual behavior is taking place. The camera is then turned on, the behavior is recorded, and the camera powers off until the next time. This bio-logger is the first to use artificial intelligence to do this task.
    “We use a method called an isolation forest,” says Takuya Maekawa, senior author. “This method detects outlier events well, but like many other artificial intelligence algorithms, it is computationally complex. This means, like the video cameras, it is power hungry.” For the bio-loggers, the researchers needed a light-weight algorithm, so they trained the original isolation forest on their data and then used it as a “teacher” to train a smaller “student” outlier detector installed on the bio-logger.
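    The release does not include code, but the teacher-student idea can be sketched with off-the-shelf tools. The example below uses scikit-learn with random placeholder features and a hypothetical decision threshold; the actual bio-logger runs its own lightweight implementation on embedded hardware.

    ```python
    # Teacher-student outlier detection sketch; features and threshold are hypothetical.
    import numpy as np
    from sklearn.ensemble import IsolationForest
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(5000, 8))  # placeholder features from low-power sensor windows

    # "Teacher": a full isolation forest, too heavy to run on the bio-logger itself.
    teacher = IsolationForest(n_estimators=200, random_state=0).fit(X_train)
    teacher_scores = teacher.score_samples(X_train)  # lower score = more anomalous

    # "Student": a small tree that mimics the teacher's scores and is cheap enough
    # to evaluate on an embedded processor.
    student = DecisionTreeRegressor(max_depth=5).fit(X_train, teacher_scores)

    def camera_should_record(window_features, threshold=-0.55):
        """Turn the camera on when the student flags the sensor window as an outlier."""
        return student.predict(window_features.reshape(1, -1))[0] < threshold
    ```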
    The final bio-logger is 23 g, which is less than 5% of the body weight of the Streaked Shearwater birds under study. Eighteen bio-loggers were deployed, yielding a total of 205 hours of low-power sensor data and 76 five-minute videos. The researchers were able to collect enough data to reveal novel aspects of head-shaking and foraging behaviors of the birds.
    This approach, which overcomes the battery-life limitation of most bio-loggers, will help us understand the behaviors of wildlife that venture into human-inhabited areas. It will also enable animals in extreme environments inaccessible to humans to be observed. This means that many other rare behaviors — from sweet-potato washing by Japanese monkeys to penguins feeding on jellyfish — can now be studied in the future. More

  • New AI makes better permafrost maps

    New insights from artificial intelligence about permafrost coverage in the Arctic may soon give policy makers and land managers the high-resolution view they need to predict climate-change-driven threats to infrastructure such as oil pipelines, roads and national security facilities.
    “The Arctic is warming four times faster than the rest of the globe, and permafrost is a component of the Arctic that’s changing really rapidly,” said Evan Thaler, a Chick Keller Postdoctoral Fellow at Los Alamos National Laboratory. Thaler is corresponding author of a paper published in the journal Earth and Space Science on an innovative application of AI to permafrost data.
    “Current models don’t give the resolution needed to understand how permafrost thaw is changing the environment and affecting infrastructure,” Thaler said. “Our model creates high-resolution maps telling us where permafrost is now and where it is likely to change in the future.”
    The AI models also identify the landscape and ecological features driving the predictions, such as vegetative greenness, landscape slope angle and the duration of snow cover.
    AI versus field data
    Thaler was part of a team with fellow Los Alamos researchers Joel Rowland, Jon Schwenk and Katrina Bennett, plus collaborators from Lawrence Berkeley National Laboratory, that used a form of AI called supervised machine learning. The work tested the accuracy of three different AI approaches against field data collected by Los Alamos researchers from three watersheds with patchy permafrost on the Seward Peninsula in Alaska.
    Permafrost, or ground that stays below freezing for two years or more, covers about one-sixth of the exposed land in the Northern Hemisphere, Thaler said. Thawing permafrost is already disrupting roads, oil pipelines and other facilities built over it and carries a range of environmental hazards as well.

    As air temperatures warm under climate change, the thawing ground releases water. It flows to lower terrain, rivers, lakes and the ocean, causing land-surface subsidence, transporting minerals, altering the direction of groundwater, changing soil chemistry and releasing carbon to the atmosphere.
    Useful results
    The resolution of the most widely used current pan-arctic model for permafrost is about one-third of a square mile, far too coarse to predict how changing permafrost will undermine a road or pipeline, for instance. The new Los Alamos AI model determines surface permafrost coverage to a resolution of just under 100 square feet, smaller than a typical parking space and far more practical for assessing risk at a specific location.
    Using their AI model trained on data from three sites on the Seward Peninsula, the team generated a map showing large areas without any permafrost around the Seward sites, matching the field data with 83% accuracy. Using the pan-arctic model for comparison, the team generated a map of the same sites with only 50% accuracy.
    “It’s the highest accuracy pan-arctic product to date, but it obviously isn’t good enough for site-specific predictions,” Thaler said. “The pan-arctic product predicts 100% of that site is permafrost, but our model predicts only 68%, which we know is closer to the real percentage based on field data.”
    Feeding the AI models
    This initial study proved the concept of the Los Alamos model on the Seward data, delivering acceptable accuracy for terrain similar to the location where the field data was collected. To measure each model’s transferability, the team also trained each model on data from one site and then ran it on data from a second site with different terrain that the model had not been trained on. None of the models transferred well; the maps they produced did not match the actual findings at the second site.

    Thaler said the team will do additional work on the AI algorithms to improve the model’s transferability to other areas across the Arctic. “We want to be able to train on one data set and then apply the model to a place it hasn’t seen before. We just need more data from more diverse landscapes to train the models, and we hope to collect that data soon,” he said.
    Part of the study involved comparing the accuracy of three different AI approaches — extremely randomized trees, support vector machines and an artificial neural network — to see which model came closest to matching the “ground truth” data gathered in field observations at the Seward Peninsula. Part of that data was used to train the AI models. Each model then generated a map based on unseen data predicting the extent of near-surface permafrost.
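    The comparison itself follows a standard supervised-learning workflow. The sketch below shows how the three model families might be evaluated against held-out field observations using scikit-learn; the features, data and hyperparameters are placeholders, not the values used by the Los Alamos team.

    ```python
    # Sketch of comparing the three model families on held-out "ground truth" labels.
    # All data are random placeholders for terrain/ecology predictors and field observations.
    import numpy as np
    from sklearn.ensemble import ExtraTreesClassifier
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 6))     # e.g. greenness, slope angle, snow-cover duration, ...
    y = rng.integers(0, 2, size=2000)  # 1 = near-surface permafrost observed in the field

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "extremely randomized trees": ExtraTreesClassifier(n_estimators=300, random_state=0),
        "support vector machine": SVC(kernel="rbf"),
        "artificial neural network": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
    }

    for name, clf in models.items():
        clf.fit(X_train, y_train)
        print(f"{name}: held-out accuracy = {accuracy_score(y_test, clf.predict(X_test)):.2f}")
    ```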
    While the Los Alamos research demonstrated a marked improvement over the best — and widely used — pan-arctic model, the results from the team’s three AI models were mixed, with the support vector machines showing the most promise for transferability. More