More stories

  • Pain relief without pills? VR nature scenes trigger the brain’s healing switch

    Immersion in virtual reality (VR) nature scenes helped relieve symptoms often seen in people living with long-term pain, with those who felt more present experiencing the strongest effects.
    A new study led by the University of Exeter, published in the journal Pain, tested the impact of immersive 360-degree nature films delivered using VR compared with 2D video images in reducing experience of pain, finding VR almost twice as effective.
    Long-term (chronic) pain typically lasts more than three months and is particularly difficult to treat. The researchers simulated this type of pain in healthy participants, finding that nature VR had an effect similar to that of painkillers, which endured for at least five minutes after the VR experience had ended.
    Dr Sam Hughes, Senior Lecturer in Pain Neuroscience at the University of Exeter, led the study. He said: “We’ve seen a growing body of evidence showing that exposure to nature can help reduce short-term, everyday pain, but there has been less research into how this might work for people living with chronic or longer-term pain. Also, not everyone is able to get out for walks in nature, particularly those living with long-term health conditions like chronic pain. Our study is the first to look at the effect of prolonged exposure to a virtual reality nature scene on symptoms seen during long-term pain sensitivity. Our results suggest that immersive nature experiences can reduce the development of this pain sensitivity through an enhanced sense of presence and through harnessing the brain’s in-built pain suppression systems.”
    The study, which was funded by the Academy of Medical Sciences, involved 29 healthy participants who were shown two types of nature scene after having pain delivered to the forearm using electric shocks. On the first visit, the researchers measured the changes in pain that occurred over a 50-minute period following the electric shocks, showing how the healthy participants developed sensitivity to sharp pricking stimuli in the absence of any nature scenes. The results showed that the participants developed a type of sensitivity that closely resembles that seen in people living with nerve pain — which occurs due to changes in how pain signals are processed in the brain and spinal cord.
    On the second visit, the researchers immersed the same participants in a 45-minute virtual reality 360-degree experience of the waterfalls of Oregon to see how this could change the development of pain sensitivity. The scene was specially chosen to maximize therapeutic effects.
    On a separate visit, the same participants explored the same scene, but on a 2D screen.

    In each case, participants completed questionnaires on their experience of pain after watching the scenes, on how present they felt in each experience, and on the extent to which they found the nature scenes restorative.
    On a separate visit, participants underwent MRI brain scans at the University of Exeter’s Mireille Gillings Neuroimaging Centre. Researchers administered a cold gel to elicit a type of ongoing pain and then scanned participants to study how their brains responded.
    The researchers found that the immersive VR experience significantly reduced the development and spread of pain sensitivity to pricking stimuli, and these pain-reducing effects were still present even at the end of the 45-minute experience.
    The more present a person felt during the VR experience, the stronger this pain-relieving effect was. The fMRI brain scans also revealed that people with stronger connectivity in brain regions involved in modulating pain responses experienced less pain. The results suggest that nature scenes delivered using VR can help to change how pain signals are transmitted in the brain and spinal cord during long-term pain conditions.
    Dr Sonia Medina, of the University of Exeter Medical School and one of the authors on the study, said: “We think VR has a particularly strong effect on reducing experience of pain because it’s so immersive. It really created that feeling of being present in nature – and we found the pain-reducing effect was greatest in people for whom that perception was strongest. We hope our study leads to more research to investigate further how exposure to nature affects our pain responses, so we could one day see nature scenes incorporated into ways of reducing pain for people in settings like care homes or hospitals.”
    The paper is titled ‘Immersion in nature through virtual reality attenuates the development and spread of mechanical secondary hyperalgesia: a role for insulo-thalamic effective connectivity’ and is published in the journal Pain.

  • This spectrometer is smaller than a pixel, and it sees what we can’t

    Researchers have successfully demonstrated a spectrometer that is orders of magnitude smaller than current technologies and can accurately measure wavelengths of light from ultraviolet to the near-infrared. The technology makes it possible to create hand-held spectroscopy devices and holds promise for the development of devices that incorporate an array of the new sensors to serve as next-generation imaging spectrometers.
    “Spectrometers are critical tools for helping us understand the chemical and physical properties of various materials based on how light changes when it interacts with those materials,” says Brendan O’Connor, corresponding author of a paper on the work and a professor of mechanical and aerospace engineering at North Carolina State University. “They are used in applications that range from manufacturing to biomedical diagnostics. However, the smallest spectrometers on the market are still fairly bulky.
    “We’ve created a spectrometer that operates quickly, at low voltage, and that is sensitive to a wide spectrum of light,” O’Connor says. “Our demonstration prototype is only a few square millimeters in size – it could fit on your phone. You could make it as small as a pixel, if you wanted to.”
    The technology makes use of a tiny photodetector capable of sensing wavelengths of light after the light interacts with a target material. By applying different voltages to the photodetector, you can manipulate which wavelengths of light the photodetector is most sensitive to.
    “If you rapidly apply a range of voltages to the photodetector, and measure all of the wavelengths of light being captured at each voltage, you have enough data that a simple computational program can recreate an accurate signature of the light that is passing through or reflecting off of the target material,” O’Connor says. “The range of voltages is less than one volt, and the entire process can take place in less than a millisecond.”
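    As a rough illustration of that computational step, the voltage sweep can be treated as a linear inverse problem: each bias voltage gives the detector a different spectral responsivity, and the recorded photocurrents are inverted to recover the incident spectrum. The responsivity numbers below are invented for illustration; the device's actual reconstruction method and calibration data are described in the paper.

```python
# Toy sketch of voltage-sweep spectral reconstruction: photocurrents measured
# at several bias voltages are inverted against a (hypothetical) responsivity
# matrix to recover the spectrum. Not the paper's actual algorithm.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical responsivities (A/W) of the photodetector to three spectral
# bands (UV, visible, near-IR) at three bias voltages below 1 V.
R = [
    [0.9, 0.3, 0.1],   # bias 0.1 V: most sensitive to UV
    [0.3, 0.8, 0.2],   # bias 0.4 V: most sensitive to visible
    [0.1, 0.2, 0.7],   # bias 0.7 V: most sensitive to near-IR
]

true_spectrum = [2.0, 5.0, 1.0]  # unknown light, per band (arbitrary units)
currents = [sum(r * s for r, s in zip(row, true_spectrum)) for row in R]

recovered = solve(R, currents)
print([round(v, 6) for v in recovered])  # recovers the true spectrum
```

    In practice the responsivity matrix comes from calibration, many more voltages and wavelength bins are used, and the inversion is regularized; the sketch only shows why a sub-millisecond sweep carries enough information to reconstruct a spectrum.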
    Previous attempts to create miniaturized spectrometers have relied on complex optics, required high voltages, or lacked sensitivity across such a broad range of wavelengths.
    In proof-of-concept testing, the researchers found their pixel-sized spectrometer was as accurate as a conventional spectrometer and had sensitivity comparable to commercial photodetection devices.
    “In the long term, our goal is to bring spectrometers to the consumer market,” O’Connor says. “The size and energy demand of the technology make it feasible to incorporate into a smartphone, and we think this makes some exciting applications possible. From a research standpoint, this also paves the way for improved access to imaging spectroscopy, microscopic spectroscopy, and other applications that would be useful in the lab.”
    The paper, “Single pixel spectrometer based on a bias-tunable tandem organic photodetector,” is published in the journal Device. First author of the paper is Harry Schrickx, a former Ph.D. student at NC State. The paper was co-authored by Abdullah Al Shafe, a former Ph.D. student at NC State; Caleb Moore, a former undergraduate at NC State; Yusen Pei, a Ph.D. student at NC State; Franky So, the Walter and Ida Freeman Distinguished Professor of Materials Science and Engineering at NC State; and Michael Kudenov, the John and Catherine Amein Family Distinguished Professor of Electrical and Computer Engineering at NC State.
    The work was done with support from the National Science Foundation under grants 1809753 and 2324190, and from the Office of Naval Research under grant N000142412101.

  • Scientists just cracked the cryptographic code behind quantum supremacy

    Experts say quantum computing is the future of computers. Unlike conventional computers, quantum computers leverage the properties of quantum physics such as superposition and interference, theoretically outperforming current equipment to an exponential degree.
    When a quantum computer is able to solve a problem infeasible for current technologies, this is called the quantum advantage. However, this edge is not guaranteed for all calculations, raising fundamental questions regarding the conditions under which such an advantage exists. While previous studies have proposed various sufficient conditions for quantum advantage, the necessity of these conditions has remained unclear.
    Motivated by this uncertainty, a team of researchers at Kyoto University has endeavored to understand the necessary and sufficient conditions for quantum advantage, using an approach combining techniques from quantum computing and cryptography, the science of coding information securely.
    Specifically, the team focused on interactive protocols called inefficient-verifier proofs of quantumness, which allow a verifier without a quantum computer to interact with a quantum prover and verify that it indeed possesses quantum computational power. In their study, the team demonstrated that the existence of these proofs depends on the existence of a certain cryptographic primitive called a one-way puzzle.
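    A one-way puzzle is a pair of algorithms: a sampler that outputs a puzzle together with a valid answer, and a verifier that checks answers; security requires that recovering a valid answer from the puzzle alone is hard. The hash-based toy below illustrates only the interface — the primitive in the Kyoto study is a quantum one, and this classical stand-in is purely illustrative.

```python
import hashlib
import secrets

# Toy illustration of the one-way-puzzle interface (sample, verify).
# A real one-way puzzle in this setting is a *quantum* cryptographic
# primitive; this classical hash-based stand-in only shows the shape
# of the definition, not the object studied in the paper.

def sample():
    """Return (puzzle, answer): easy to generate together."""
    answer = secrets.token_bytes(16)
    puzzle = hashlib.sha256(answer).digest()
    return puzzle, answer

def verify(puzzle, answer):
    """Efficiently check a candidate answer against a puzzle."""
    return hashlib.sha256(answer).digest() == puzzle

puzzle, answer = sample()
print(verify(puzzle, answer))                        # honest answer verifies
print(verify(puzzle, bytes(b ^ 1 for b in answer)))  # tampered answer fails
```

    The study's result ties the existence of such puzzles (in their quantum form) to the existence of inefficient-verifier proofs of quantumness, which is what lets the team characterize quantum advantage via cryptographic security.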
    By integrating these methods, the team introduced a novel framework uniting the seemingly unrelated concepts of quantum advantage and cryptographic security. As a result, the team was able to completely characterize quantum advantage for the first time.
    “We were able to identify the necessary and sufficient conditions for quantum advantage by proving an equivalence between the existence of quantum advantage and the security of certain quantum cryptographic primitives,” says corresponding author Yuki Shirakawa.
    The results imply that when quantum advantage does not exist, then the security of almost all cryptographic primitives — previously believed to be secure — is broken. Importantly, these primitives are not limited to quantum cryptography but also include widely-used conventional cryptographic primitives as well as post-quantum ones that are rapidly evolving.
    The established equivalence between quantum computing and cryptography also provides a stronger cryptographic foundation for future experimental demonstrations of quantum advantage, as well as for ongoing theoretical investigations in the field.
    “Quantum advantage is a highly expected and actively studied concept, but it is still not fully understood. Our study represents a significant step toward a deeper understanding of this property,” says Shirakawa.
    The team expects that future research will extend this characterization to other types of quantum advantage and lead to a more general theoretical framework.

  • The real-life Kryptonite found in Serbia—and why it could power the future

    Jadarite has been likened to Superman’s ‘kryptonite’ based on their similar chemical compositions. It was discovered in the Jadar Valley of Serbia and officially recognized as a new mineral in 2006. Whilst lacking any actual superpowers, jadarite has great potential as an important resource of lithium and boron.
    Kryptonite’s twin on Earth
    Described as ‘Earth’s kryptonite twin’, jadarite is a rare and fascinating mineral that quickly caught the attention of scientists and Superman fans alike.
    The mineral was discovered by exploration geologists from Rio Tinto in 2004 in the Jadar Valley of Serbia. Its chemical composition closely matches that of the fictional kryptonite straight out of the comic books — with a few differences. Where kryptonite glows green and weakens superheroes, jadarite offers immense potential for Earth’s energy transition away from fossil fuels.
    A new mineral on the scene
    Jadarite was identified by Rio Tinto geologists during exploration drilling and didn’t match any known mineral at the time. After analysis by the Natural History Museum in London and the National Research Council of Canada, it was officially recognised as a new mineral in 2006.
    Jadarite is a “sodium lithium boron silicate hydroxide” mineral, coincidentally the same scientific name written on a case containing kryptonite stolen by Lex Luthor from a museum in the film Superman Returns.

    While the film version of kryptonite contains fluorine and glows an eerie green, the real mineral has the chemical formula LiNaSiB₃O₇(OH) and is a much less supernatural dull white — though it does fluoresce a pinkish-orange under UV light.
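    Taking that formula at face value, a quick back-of-envelope mass-fraction calculation (standard atomic masses, element counts read straight off LiNaSiB₃O₇(OH)) gives a rough sense of why the mineral matters as a lithium and boron resource:

```python
# Back-of-envelope lithium and boron content of jadarite, LiNaSiB3O7(OH).
# Standard atomic masses in g/mol; counts from the formula: O7 + OH gives O8, H1.
masses = {"Li": 6.94, "Na": 22.99, "Si": 28.09, "B": 10.81, "O": 16.00, "H": 1.008}
counts = {"Li": 1, "Na": 1, "Si": 1, "B": 3, "O": 8, "H": 1}

formula_mass = sum(masses[e] * n for e, n in counts.items())
li_wt = 100 * masses["Li"] * counts["Li"] / formula_mass
b_wt = 100 * masses["B"] * counts["B"] / formula_mass
print(f"formula mass: about {formula_mass:.1f} g/mol")
print(f"Li: about {li_wt:.1f} wt%, B: about {b_wt:.1f} wt%")
```

    A few percent lithium by weight may sound modest, but it is comparable to established lithium ore minerals, which is why the sheer size of the Jadar deposit makes it significant.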
    Super in its own right
    Michael Page, a scientist with Australia’s Nuclear Science and Technology Organisation (ANSTO), said that the mineral is ‘super’ in its own right.
    “While lacking any supernatural powers the real jadarite has great potential as an important source of lithium and boron,” Michael said.
    “In fact, the Jadar deposit where it was first discovered is considered one of the largest lithium deposits in the world, making it a potential game-changer for the global green energy transition.”
    ANSTO, alongside Geoscience Australia and CSIRO, is one of the three supporting agencies of the Australian Critical Minerals R&D Hub, which is hosted by CSIRO. One of the Hub’s key missions is to better connect Australia’s R&D ecosystem, including Australian industry, to enable access to and utilisation of critical minerals to strengthen Australia’s value chain domestically and across the globe.
    The work that ANSTO does has a significant focus on how these critical minerals, such as jadarite, can be utilised to support Australian industry in a commercial capacity.
    “At ANSTO, we work with industry to develop process solutions for many critical elements including lithium, and the challenges posed by a new type of mineral resource are very exciting,” Michael said.
    ANSTO has produced battery grade lithium chemicals from many different mineral deposits, such as spodumene, lepidolite and even jadarite, ensuring that Australian miners receive the support they need to meet the challenges of the energy transition.

  • Trapped by moon dust: The physics error that fooled NASA for years

    When a multimillion-dollar extraterrestrial vehicle gets stuck in soft sand or gravel — as did the Mars rover Spirit in 2009 — Earth-based engineers take over like a virtual tow truck, issuing a series of commands that move its wheels or reverse its course in a delicate, time-consuming effort to free it and continue its exploratory mission.
    While Spirit remained permanently stuck, in the future, better terrain testing right here on terra firma could help avert these celestial crises.
    Using computer simulations, University of Wisconsin-Madison mechanical engineers have uncovered a flaw in how rovers are tested on Earth. That error leads to overly optimistic conclusions about how rovers will behave once they’re deployed on extraterrestrial missions.
    An important element in preparing for these missions is an accurate understanding of how a rover will traverse extraterrestrial surfaces in low gravity to prevent it from getting stuck in soft terrain or rocky areas.
    On the moon, the gravitational pull is six times weaker than on Earth. For decades, researchers testing rovers have accounted for that difference in gravity by creating a prototype that is a sixth of the mass of the actual rover. They test these lightweight rovers in deserts, observing how they move across sand to gain insights into how they would perform on the moon.
    It turns out, however, that this standard testing approach overlooked a seemingly inconsequential detail: the pull of Earth’s gravity on the desert sand.
    Through simulation, Dan Negrut, a professor of mechanical engineering at UW-Madison, and his collaborators determined that Earth’s gravity pulls down on sand much more strongly than the gravity on Mars or the moon does. On Earth, sand is more rigid and supportive — reducing the likelihood it will shift under a vehicle’s wheels. But the moon’s surface is “fluffier” and therefore shifts more easily — meaning rovers have less traction, which can hinder their mobility.
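    The mismatch can be sketched with a deliberately simple pressure-sinkage model in which the bearing pressure a soil supports at a given depth scales with the soil's weight density ρg. All numbers below (soil density, stiffness factor, contact area, rover mass) are invented for illustration and are not from the study's Chrono simulations:

```python
# Minimal sketch of why 1/6-mass Earth tests can overestimate lunar mobility.
# Assumes a soil whose bearing pressure grows with depth as p(z) = k * rho * g * z
# (a Bekker-like, purely frictional soil). Illustrative numbers only.

g_earth = 9.81          # m/s^2
g_moon = g_earth / 6.0
rho = 1500.0            # sand bulk density, kg/m^3 (invented)
k = 50.0                # dimensionless soil stiffness factor (invented)
wheel_area = 0.05       # wheel contact patch, m^2 (invented)
rover_mass = 300.0      # flight rover mass per wheel-equivalent, kg (invented)

def sinkage(wheel_load_n, g_soil):
    """Depth z at which soil bearing pressure balances wheel ground pressure."""
    pressure = wheel_load_n / wheel_area
    return pressure / (k * rho * g_soil)

# Standard test: a 1/6-mass prototype on Earth carries the same wheel load
# as the full rover on the Moon -- but it sits on Earth-gravity sand.
load = (rover_mass / 6.0) * g_earth      # equals rover_mass * g_moon
z_earth_test = sinkage(load, g_earth)
z_moon = sinkage(load, g_moon)

print(f"sinkage in Earth test: {z_earth_test * 100:.1f} cm")
print(f"sinkage on the Moon:   {z_moon * 100:.1f} cm "
      f"({z_moon / z_earth_test:.0f}x deeper)")
```

    In this toy model the wheel loads match, but the lunar sand is six times weaker, so the same wheel sinks six times deeper than the Earth test suggests — the qualitative point the UW-Madison simulations quantify with far more realistic soil physics.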

    “In retrospect, the idea is simple: We need to consider not only the gravitational pull on the rover but also the effect of gravity on the sand to get a better picture of how the rover will perform on the moon,” Negrut says. “Our findings underscore the value of using physics-based simulation to analyze rover mobility on granular soil.”
    The team recently detailed its findings in the Journal of Field Robotics.
    The researchers’ discovery resulted from their work on a NASA-funded project to simulate the VIPER rover, which had been planned for a lunar mission. The team leveraged Project Chrono, an open-source physics simulation engine developed at UW-Madison in collaboration with scientists from Italy. This software allows researchers to quickly and accurately model complex mechanical systems — like full-size rovers operating on “squishy” sand or soil surfaces.
    While simulating the VIPER rover, they noticed discrepancies between the Earth-based test results and their simulations of the rover’s mobility on the moon. Digging deeper with Chrono simulations revealed the testing flaw.
    The benefits of this research also extend well beyond NASA and space travel. For applications on Earth, Chrono has been used by hundreds of organizations to better understand complex mechanical systems — from precision mechanical watches to U.S. Army trucks and tanks operating in off-road conditions.
    “It’s rewarding that our research is highly relevant in helping to solve many real-world engineering challenges,” Negrut says. “I’m proud of what we’ve accomplished. It’s very difficult as a university lab to put out industrial-strength software that is used by NASA.”
    Chrono is free and publicly available for unfettered use worldwide, but the UW-Madison team puts in significant ongoing work to develop and maintain the software and provide user support.

    “It’s very unusual in academia to produce a software product at this level,” Negrut says. “There are certain types of applications relevant to NASA and planetary exploration where our simulator can solve problems that no other tool can solve, including simulators from huge tech companies, and that’s exciting.”
    Since Chrono is open source, Negrut and his team are focused on continually innovating and enhancing the software to stay relevant.
    “All our ideas are in the public domain and the competition can adopt them quickly, which drives us to keep moving forward,” he says. “We have been fortunate over the last decade to receive support from the National Science Foundation, U.S. Army Research Office and NASA. This funding has really made a difference, since we do not charge anyone for the use of our software.”
    Co-authors on the paper include Wei Hu of Shanghai Jiao Tong University, Pei Li of UW-Madison, Arno Rogg and Alexander Schepelmann of NASA, Samuel Chandler of ProtoInnovations, LLC, and Ken Kamrin of MIT.
    This work was supported by NASA STTR (80NSSC20C0252), the National Science Foundation (OAC2209791) and the U.S. Army Research Office (W911NF1910431 and W911NF1810476).

  • Harvard’s ultra-thin chip could revolutionize quantum computing

    New research shows that metasurfaces could be used as strong linear quantum optical networks. This approach could eliminate the need for waveguides and other conventional optical components. Graph theory is helpful for designing the functionalities of quantum optical networks into a single metasurface.

    In the race toward practical quantum computers and networks, photons — fundamental particles of light — hold intriguing possibilities as fast carriers of information at room temperature. Photons are typically controlled and coaxed into quantum states via waveguides on extended microchips, or through bulky devices built from lenses, mirrors, and beam splitters. The photons become entangled – enabling them to encode and process quantum information in parallel – through complex networks of these optical components. But such systems are notoriously difficult to scale up due to the large number and imperfections of the parts required to do any meaningful computation or networking.
    Could all those optical components be collapsed into a single, flat, ultra-thin array of subwavelength elements that control light in the exact same way, but with far fewer fabricated parts?
    Optics researchers in the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) did just that. The research team led by Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering, created specially designed metasurfaces — flat devices etched with nanoscale light-manipulating patterns — to act as ultra-thin upgrades for quantum-optical chips and setups.
    The research was published in Science and funded by the Air Force Office of Scientific Research (AFOSR).
    Capasso and his team showed that a metasurface can create complex, entangled states of photons to carry out quantum operations – like those done with larger optical devices with many different components.
    “We’re introducing a major technological advantage when it comes to solving the scalability problem,” said graduate student and first author Kerolos M.A. Yousef. “Now we can miniaturize an entire optical setup into a single metasurface that is very stable and robust.”
    Metasurfaces: Robust and scalable quantum photonics processors

    Their results hint at the possibility of paradigm-shifting optical quantum devices based not on conventional, difficult-to-scale components like waveguides and beam splitters, or even extended optical microchips, but instead on error-resistant metasurfaces that offer a host of advantages: designs that don’t require intricate alignments, robustness to perturbations, cost-effectiveness, simplicity of fabrication, and low optical loss. Broadly speaking, the work embodies metasurface-based quantum optics which, beyond carving a path toward room-temperature quantum computers and networks, could also benefit quantum sensing or offer “lab-on-a-chip” capabilities for fundamental science.
    Designing a single metasurface that can finely control properties like brightness, phase, and polarization presented unique challenges because of the mathematical complexity that arises once the number of photons and therefore the number of qubits begins to increase. Every additional photon introduces many new interference pathways, which in a conventional setup would require a rapidly growing number of beam splitters and output ports.
    Graph theory for metasurface design
    To bring order to the complexity, the researchers leaned on a branch of mathematics called graph theory, which uses points and lines to represent connections and relationships. By representing entangled photon states as many connected lines and points, they were able to visually determine how photons interfere with each other, and to predict their effects in experiments. Graph theory is also used in certain types of quantum computing and quantum error correction but is not typically considered in the context of metasurfaces, including their design and operation.
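    In this graph picture, vertices stand for photon output modes and edges for possible pair correlations, and multiphoton detection events correspond to perfect matchings of the graph — pairings that cover every vertex exactly once. A brute-force count for a tiny example, offered as a simplified classical sketch rather than the paper's design procedure, shows the combinatorics involved:

```python
from itertools import combinations

# Toy illustration of the graph picture of quantum optics: vertices are
# photon output modes, edges are pair correlations, and multiphoton events
# correspond to perfect matchings. Simplified classical sketch only.

def perfect_matchings(vertices, edges):
    """Enumerate all perfect matchings of an undirected graph."""
    if not vertices:
        return [[]]
    v = vertices[0]
    matchings = []
    for e in edges:
        if v in e:                       # match v along edge e
            rest = [w for w in vertices if w not in e]
            rest_edges = [f for f in edges if f[0] in rest and f[1] in rest]
            for m in perfect_matchings(rest, rest_edges):
                matchings.append([e] + m)
    return matchings

# Complete graph on 4 modes: every pair of modes is correlated.
verts = [0, 1, 2, 3]
k4 = [tuple(c) for c in combinations(verts, 2)]
ms = perfect_matchings(verts, k4)
print(len(ms), "perfect matchings:", ms)  # K4 has exactly 3
```

    Each additional pair of photons multiplies the number of matchings — the "many new interference pathways" the article describes — which is why encoding the whole network in one metasurface, rather than in a growing pile of beam splitters, is attractive.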
    The resulting paper was a collaboration with the lab of Marko Loncar, whose team specializes in quantum optics and integrated photonics and provided needed expertise and equipment.
    “I’m excited about this approach, because it could efficiently scale optical quantum computers and networks — which has long been their biggest challenge compared to other platforms like superconductors or atoms,” said research scientist Neal Sinclair. “It also offers fresh insight into the understanding, design, and application of metasurfaces, especially for generating and controlling quantum light. With the graph approach, in a way, metasurface design and the optical quantum state become two sides of the same coin.”
    The research received support from federal sources including the AFOSR under award No. FA9550-21-1-0312. The work was performed at the Harvard University Center for Nanoscale Systems.

  • AI turns immune cells into precision cancer killers—in just weeks

    Precision cancer treatment on a larger scale is moving closer after researchers developed an AI platform that can tailor protein components and arm the patient’s immune cells to fight cancer. The new method, published in the scientific journal Science, demonstrates for the first time that it is possible to design proteins on a computer that redirect immune cells to target cancer cells through pMHC molecules.
    This dramatically shortens the process of finding effective molecules for cancer treatment from years to a few weeks.
    “We are essentially creating a new set of eyes for the immune system. Current methods for individual cancer treatment are based on finding so-called T-cell receptors in the immune system of a patient or donor that can be used for treatment. This is a very time-consuming and challenging process. Our platform designs molecular keys to target cancer cells using the AI platform, and it does so at incredible speed, so that a new lead molecule can be ready within 4-6 weeks,” says Associate Professor at the Technical University of Denmark (DTU) and last author of the study Timothy P. Jenkins.
    Targeted missiles against cancer
    The AI platform, developed by a team from DTU and the American Scripps Research Institute, aims to solve a major challenge in cancer immunotherapy by demonstrating how scientists can generate targeted treatments for tumor cells while avoiding damage to healthy tissue.
    Normally, T cells identify cancer cells by recognizing specific protein fragments, known as peptides, presented on the cell surface by molecules called pMHCs. Turning this knowledge into therapy is a slow and difficult process, often because variation in the body’s own T-cell receptors makes it challenging to create a personalized treatment.
    Boosting the body’s immune system
    In the study, the researchers tested the strength of the AI platform on a well-known cancer target, NY-ESO-1, which is found in a wide range of cancers. The team succeeded in designing a minibinder that bound tightly to the NY-ESO-1 pMHC molecules. When the designed protein was inserted into T cells, it created a unique new cell product named ‘IMPAC-T’ cells by the researchers, which effectively guided the T cells to kill cancer cells in laboratory experiments.

    “It was incredibly exciting to take these minibinders, which were created entirely on a computer, and see them work so effectively in the laboratory,” says postdoc Kristoffer Haurum Johansen, co-author of the study and researcher at DTU.
    The researchers also applied the pipeline to design binders for a cancer target identified in a metastatic melanoma patient, successfully generating binders for this target as well. This documented that the method can also be used for tailored immunotherapy against novel cancer targets.
    Screening of treatments
    A crucial step in the researchers’ innovation was the development of a ‘virtual safety check’. The team used AI to screen their designed minibinders and assess them in relation to pMHC molecules found on healthy cells. This method enabled them to filter out minibinders that could cause dangerous side effects before any experiments were carried out.
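    The screening step can be pictured as a filter over candidate designs: keep a minibinder only if its predicted affinity for the cancer pMHC is high while its predicted affinity for every healthy-tissue pMHC stays below a tolerance. Everything below — binder names, scores, thresholds — is hypothetical; the study's actual AI screen is far more involved:

```python
# Hypothetical sketch of the 'virtual safety check' idea: discard designed
# minibinders whose predicted binding to any healthy-cell pMHC is too strong.
# Scores, thresholds, and binder names are invented for illustration.

candidates = {
    "binder_A": {"target": 9.1, "off_target": [1.2, 0.8, 2.0]},
    "binder_B": {"target": 8.7, "off_target": [6.5, 1.1, 0.9]},  # cross-reactive
    "binder_C": {"target": 9.5, "off_target": [0.4, 0.6, 1.5]},
}

TARGET_MIN = 8.0       # required predicted affinity for the cancer pMHC
OFF_TARGET_MAX = 3.0   # tolerated predicted affinity for any healthy pMHC

def passes_safety_check(scores):
    """Keep a design only if it binds the target and spares healthy pMHCs."""
    return (scores["target"] >= TARGET_MIN
            and max(scores["off_target"]) <= OFF_TARGET_MAX)

safe = [name for name, s in candidates.items() if passes_safety_check(s)]
print("advance to the lab:", safe)  # binder_B is filtered out
```

    The point of running such a filter before any wet-lab work, as the article notes, is that cross-reactive designs are eliminated in silico instead of being discovered as side effects later.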
    “Precision in cancer treatment is crucial. By predicting and ruling out cross-reactions already in the design phase, we were able to reduce the risk associated with the designed proteins and increase the likelihood of designing a safe and effective therapy,” says DTU professor and co-author of the study Sine Reker Hadrup.
    Five years to treatment
    Timothy Patrick Jenkins expects that it will take up to five years before the new method is ready for initial clinical trials in humans. Once the method is ready, the treatment process will resemble current cancer treatments using genetically modified T cells, known as CAR-T cells, which are currently used to treat lymphoma and leukemia. Patients will first have blood drawn at the hospital, similar to a routine blood test. Their immune cells will then be extracted from this blood sample and modified in the laboratory to carry the AI-designed minibinders. These enhanced immune cells are then returned to the patient, where they act like targeted missiles, precisely finding and eliminating cancer cells in the body.

  • Google’s deepfake hunter sees what you can’t—even in videos without faces

    In an era where manipulated videos can spread disinformation, bully people, and incite harm, UC Riverside researchers have created a powerful new system to expose these fakes.
    Amit Roy-Chowdhury, a professor of electrical and computer engineering, and doctoral candidate Rohit Kundu, both from UCR’s Marlan and Rosemary Bourns College of Engineering, teamed up with Google scientists to develop an artificial intelligence model that detects video tampering — even when manipulations go far beyond face swaps and altered speech. (Roy-Chowdhury is also the co-director of the UC Riverside Artificial Intelligence Research and Education (RAISE) Institute, a new interdisciplinary research center at UCR.)
    Their new system, called the Universal Network for Identifying Tampered and synthEtic videos (UNITE), detects forgeries by examining not just faces but full video frames, including backgrounds and motion patterns. This analysis makes it one of the first tools capable of identifying synthetic or doctored videos that do not rely on facial content.
    “Deepfakes have evolved,” Kundu said. “They’re not just about face swaps anymore. People are now creating entirely fake videos — from faces to backgrounds — using powerful generative models. Our system is built to catch all of that.”
    UNITE’s development comes as text-to-video and image-to-video generation have become widely available online. These AI platforms enable virtually anyone to fabricate highly convincing videos, posing serious risks to individuals, institutions, and democracy itself.
    “It’s scary how accessible these tools have become,” Kundu said. “Anyone with moderate skills can bypass safety filters and generate realistic videos of public figures saying things they never said.”
    Kundu explained that earlier deepfake detectors focused almost entirely on face cues.

    “If there’s no face in the frame, many detectors simply don’t work,” he said. “But disinformation can come in many forms. Altering a scene’s background can distort the truth just as easily.”
    To address this, UNITE uses a transformer-based deep learning model to analyze video clips. It detects subtle spatial and temporal inconsistencies — cues often missed by previous systems. The model draws on a foundational AI framework known as SigLIP, which extracts features not bound to a specific person or object. A novel training method, dubbed “attention-diversity loss,” prompts the system to monitor multiple visual regions in each frame, preventing it from focusing solely on faces.
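    One way to picture an attention-diversity penalty, as a framework-free sketch rather than UNITE's actual formulation: treat each attention head's map over frame regions as a distribution and penalize the average pairwise overlap between heads, so they cannot all collapse onto the same region, such as the face.

```python
import math

# Framework-free sketch of an attention-diversity penalty: several attention
# "heads" each produce a distribution over spatial regions of a frame, and
# the penalty is the mean pairwise cosine similarity between those maps
# (lower = more diverse). Values invented; UNITE's loss is in the paper.

def cosine(p, q):
    dot = sum(a * b for a, b in zip(p, q))
    return dot / (math.sqrt(sum(a * a for a in p)) *
                  math.sqrt(sum(b * b for b in q)))

def diversity_loss(attn_maps):
    """Mean pairwise cosine similarity across attention heads."""
    n = len(attn_maps)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cosine(attn_maps[i], attn_maps[j]) for i, j in pairs) / len(pairs)

# Four regions per frame: [face, background, left edge, right edge].
collapsed = [[0.9, 0.05, 0.03, 0.02]] * 3           # every head fixates on the face
diverse = [[0.9, 0.05, 0.03, 0.02],
           [0.05, 0.9, 0.03, 0.02],
           [0.05, 0.03, 0.02, 0.9]]                 # heads cover distinct regions

print(f"collapsed heads: {diversity_loss(collapsed):.3f}")  # 1.000
print(f"diverse heads:   {diversity_loss(diverse):.3f}")    # much lower
```

    Minimizing such a penalty during training pushes the heads to attend to different parts of the frame, which is the behavior the article credits with letting UNITE catch background-only manipulations.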
    The result is a universal detector capable of flagging a range of forgeries — from simple facial swaps to complex, fully synthetic videos generated without any real footage.
    “It’s one model that handles all these scenarios,” Kundu said. “That’s what makes it universal.”
    The researchers presented their findings at the high-ranking 2025 Conference on Computer Vision and Pattern Recognition (CVPR) in Nashville, Tenn. Titled “Towards a Universal Synthetic Video Detector: From Face or Background Manipulations to Fully AI-Generated Content,” their paper, led by Kundu, outlines UNITE’s architecture and training methodology. Co-authors include Google researchers Hao Xiong, Vishal Mohanty, and Athula Balachandra. Co-sponsored by the IEEE Computer Society and the Computer Vision Foundation, CVPR is among the highest-impact scientific publication venues in the world.
    The collaboration with Google, where Kundu interned, provided access to expansive datasets and computing resources needed to train the model on a broad range of synthetic content, including videos generated from text or still images — formats that often stump existing detectors.
    Though still in development, UNITE could soon play a vital role in defending against video disinformation. Potential users include social media platforms, fact-checkers, and newsrooms working to prevent manipulated videos from going viral.
    “People deserve to know whether what they’re seeing is real,” Kundu said. “And as AI gets better at faking reality, we have to get better at revealing the truth.”