More stories

  • Scientists discover a way to simulate the Universe on a laptop

    As astronomers gather more data than ever before, studying the cosmos has become an increasingly complex task. A new tool is changing that. Researchers have developed a way to analyze enormous cosmic data sets using only a laptop and a few hours of processing time.
    Leading this effort is Dr. Marco Bonici, a postdoctoral researcher at the Waterloo Centre for Astrophysics at the University of Waterloo. Bonici and an international team created Effort.jl, short for EFfective Field theORy surrogate. This tool uses advanced numerical techniques and smart data-preprocessing methods to deliver exceptional computational performance while maintaining the accuracy required in cosmology. The team designed it as a powerful emulator for the Effective Field Theory of Large-Scale Structure (EFTofLSS), allowing researchers to process vast datasets more efficiently than ever before.
    Turning Frustration Into Innovation
    The idea for Effort.jl emerged from Bonici’s experience running time-consuming computer models. Each time he adjusted even a single parameter, it could take days of extra computation to see the results. That challenge inspired him to build a faster, more flexible solution that could handle such adjustments in hours rather than days.
    “Using Effort.jl, we can run through complex data sets on models like EFTofLSS, which have previously needed a lot of time and computer power,” Bonici explained. “With projects like DESI and Euclid expanding our knowledge of the universe and creating even larger astronomical datasets to explore, Effort.jl allows researchers to analyze data faster, inexpensively and multiple times while making small changes based on nuances in the data.”
    Smarter Simulations for a Faster Universe
    Effort.jl belongs to a class of tools known as emulators. These are trained computational shortcuts that replicate the behavior of large, resource-intensive simulations but run dramatically faster. By using emulators, scientists can explore many possible cosmic scenarios in a fraction of the time and apply advanced techniques such as gradient-based sampling to study intricate physical models with greater efficiency.
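    To make the emulator idea concrete, here is a minimal Python sketch (Effort.jl itself is a Julia package, and its architecture is not reproduced here): a slow model is evaluated on a small training grid, a cheap surrogate is fitted to those runs, and the surrogate can then be evaluated and differentiated almost instantly. The slow_model function and the polynomial surrogate are stand-ins invented for illustration.

    ```python
    # Toy illustration of the emulator idea: replace a slow model with a cheap
    # trained surrogate. This is NOT Effort.jl (a Julia package emulating
    # EFTofLSS predictions); the function and fit below are invented examples.
    import numpy as np

    def slow_model(theta):
        """Stand-in for an expensive simulation: a smooth function of a parameter."""
        return np.sin(3 * theta) + 0.5 * theta**2

    # 1. Run the slow model on a modest grid of parameter values (the training set).
    train_theta = np.linspace(-2, 2, 50)
    train_out = slow_model(train_theta)

    # 2. "Train" a surrogate. A simple polynomial fit stands in for the
    #    neural-network emulators used in practice.
    coeffs = np.polyfit(train_theta, train_out, deg=12)
    emulator = np.poly1d(coeffs)

    # 3. The surrogate can now be evaluated (and differentiated) almost for free,
    #    which is what makes gradient-based samplers practical.
    test_theta = np.linspace(-2, 2, 1000)
    max_err = np.max(np.abs(emulator(test_theta) - slow_model(test_theta)))
    grad = emulator.deriv()          # analytic derivative of the surrogate
    print(f"max emulation error: {max_err:.2e}")
    print(f"d(emulator)/d(theta) at 0.5: {grad(0.5):.3f}")
    ```

    In practice the surrogate is typically a neural network trained on full model outputs, and its differentiability is what makes the gradient-based sampling mentioned above practical.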

    “We were able to validate the predictions coming out of Effort.jl by aligning them with those coming out of EFTofLSS,” Bonici said. “The margin of error was small and showed us that the calculations coming out of Effort.jl are strong. Effort.jl can also handle observational quirks like distortions in data and can be customized very easily to the needs of the researcher.”
    Human Expertise Still Matters
    Despite its impressive capabilities, Effort.jl is not a substitute for scientific understanding. Cosmologists still play a vital role in setting parameters, interpreting results, and applying physical insight to ensure meaningful conclusions. The combination of expert knowledge and computational power is what makes the system so effective.
    Looking ahead, Effort.jl is expected to take on even larger cosmological datasets and work alongside other analytical tools. Researchers also see potential for its methods in areas beyond astrophysics, including weather and climate modeling.
    The paper, “Effort.jl: a fast and differentiable emulator for the Effective Field Theory of the Large Scale Structure of the Universe,” was published in the Journal of Cosmology and Astroparticle Physics.

  • A revolutionary DNA search engine is speeding up genetic discovery

    Rare genetic diseases can now be detected in patients, and tumor-specific mutations identified — a milestone made possible by DNA sequencing, which transformed biomedical research decades ago. In recent years, the introduction of new sequencing technologies (next-generation sequencing) has driven a wave of breakthroughs. During 2020 and 2021, for instance, these methods enabled the rapid decoding and worldwide monitoring of the SARS-CoV-2 genome.
    At the same time, an increasing number of researchers are making their sequencing results publicly accessible. This has led to an explosion of data, stored in major databases such as the American SRA (Sequence Read Archive) and the European ENA (European Nucleotide Archive). Together, these archives now hold about 100 petabytes of information — roughly equivalent to the total amount of text found across the entire internet, with a single petabyte equaling one million gigabytes.
    Until now, biomedical scientists needed enormous computing resources to search through these vast genetic repositories and compare them with their own data, making comprehensive searches nearly impossible. Researchers at ETH Zurich have now developed a way to overcome that limitation.
    Full-text search instead of downloading entire data sets
    The team created a tool called MetaGraph, which dramatically streamlines and accelerates the process. Instead of downloading entire datasets, MetaGraph enables direct searches within the raw DNA or RNA data — much like using an internet search engine. Scientists simply enter a genetic sequence of interest into a search field and, within seconds or minutes depending on the query, can see where that sequence appears in global databases.
    “It’s a kind of Google for DNA,” explains Professor Gunnar Rätsch, a data scientist in ETH Zurich’s Department of Computer Science. Previously, researchers could only search for descriptive metadata and then had to download the full datasets to access raw sequences. That approach was slow, incomplete, and expensive.
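    To make the indexing idea concrete, the toy Python sketch below builds a k-mer index over a miniature “archive” and answers containment queries. It illustrates the general principle only, not MetaGraph’s compressed, annotated graph data structures; the sample names and sequences are invented.

    ```python
    # Toy k-mer index illustrating full-text search over raw sequences.
    # MetaGraph itself uses compressed annotated de Bruijn graphs, not a
    # Python dictionary; this is a simplification for illustration only.
    from collections import defaultdict

    K = 5  # k-mer length (real tools use larger k, e.g. 31)

    # A miniature "archive": sample name -> raw sequence (made-up data)
    archive = {
        "sample_A": "ACGTACGTGGTTACGTTAGC",
        "sample_B": "TTGGACGTACGAAACCCGGG",
        "sample_C": "ACGTTAGCACGTTAGCACGT",
    }

    # Build the index once: k-mer -> set of samples containing it
    index = defaultdict(set)
    for name, seq in archive.items():
        for i in range(len(seq) - K + 1):
            index[seq[i:i + K]].add(name)

    def search(query):
        """Return samples that contain every k-mer of the query."""
        kmers = [query[i:i + K] for i in range(len(query) - K + 1)]
        if not kmers:
            return set()
        hits = index.get(kmers[0], set()).copy()
        for km in kmers[1:]:
            hits &= index.get(km, set())
        return hits

    print(search("ACGTTAGC"))   # -> {'sample_A', 'sample_C'} (set order may vary)
    ```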
    According to the study authors, MetaGraph is also remarkably cost-efficient. Representing all publicly available biological sequences would require only a few computer hard drives, and large queries would cost no more than about 0.74 dollars per megabase.

    Because the new DNA search engine is both fast and accurate, it could significantly accelerate research — particularly in identifying emerging pathogens or analyzing genetic factors linked to antibiotic resistance. The system may even help locate beneficial viruses that destroy harmful bacteria (bacteriophages) hidden within these massive databases.
    Compression by a factor of 300
    In their study published on October 8 in Nature, the ETH team demonstrated how MetaGraph works. The tool organizes and compresses genetic data using advanced mathematical graphs that structure information more efficiently, similar to how spreadsheet software arranges values. “Mathematically speaking, it is a huge matrix with millions of columns and trillions of rows,” Rätsch explains.
    Creating indexes to make large datasets searchable is a familiar concept in computer science, but the ETH approach stands out for how it connects raw data with metadata while achieving an extraordinary compression rate of about 300 times. This reduction works much like summarizing a book — it removes redundancies while preserving the essential narrative and relationships, retaining all relevant information in a much smaller form.
    “We are pushing the limits of what is possible in order to keep the data sets as compact as possible without losing necessary information,” says Dr. André Kahles, who, like Rätsch, is a member of the Biomedical Informatics Group at ETH Zurich. Unlike other DNA search tools currently under development, the ETH researchers’ approach is scalable: the larger the amount of data queried, the less additional computing power the tool requires.
    Half of the data is already available now
    First introduced in 2020, MetaGraph has been steadily refined. The tool is now publicly accessible for searches (https://metagraph.ethz.ch/search) and already indexes millions of DNA, RNA, and protein sequences from viruses, bacteria, fungi, plants, animals, and humans. Currently, nearly half of all available global sequence datasets are included, with the remainder expected to follow by the end of the year. Since MetaGraph is open source, it could also attract interest from pharmaceutical companies managing large volumes of internal research data.
    Kahles even believes it is possible that the DNA search engine will one day be used by private individuals: “In the early days, even Google didn’t know exactly what a search engine was good for. If the rapid development in DNA sequencing continues, it may become commonplace to identify your balcony plants more precisely.”

  • Breakthrough optical processor lets AI compute at the speed of light

    Modern artificial intelligence (AI) systems, from robotic surgery to high-frequency trading, rely on processing streams of raw data in real time. Extracting important features quickly is critical, but conventional digital processors are hitting physical limits. Traditional electronics can no longer reduce latency or increase throughput enough to keep up with today’s data-heavy applications.
    Turning to Light for Faster Computing
    Researchers are now looking to light as a solution. Optical computing — using light instead of electricity to handle complex calculations — offers a way to dramatically boost speed and efficiency. One promising approach involves optical diffraction operators, thin plate-like structures that perform mathematical operations as light passes through them. These systems can process many signals at once with low energy use. However, maintaining the stable, coherent light needed for such computations at speeds above 10 GHz has proven extremely difficult.
    To overcome this challenge, a team led by Professor Hongwei Chen at Tsinghua University in China developed a groundbreaking device known as the Optical Feature Extraction Engine, or OFE2. Their work, published in Advanced Photonics Nexus, demonstrates a new way to perform high-speed optical feature extraction suitable for multiple real-world applications.
    How OFE2 Prepares and Processes Data
    A key advance in OFE2 is its innovative data preparation module. Supplying fast, parallel optical signals to the core optical components without losing phase stability is one of the toughest problems in the field. Fiber-based systems often introduce unwanted phase fluctuations when splitting and delaying light. The Tsinghua team solved this by designing a fully integrated on-chip system with adjustable power splitters and precise delay lines. This setup converts serial data into several synchronized optical channels. In addition, an integrated phase array allows OFE2 to be easily reconfigured for different computational tasks.
    Once prepared, the optical signals pass through a diffraction operator that performs the feature extraction. This process is similar to a matrix-vector multiplication, where light waves interact to create focused “bright spots” at specific output points. By fine-tuning the phase of the input light, these spots can be directed toward chosen output ports, enabling OFE2 to capture subtle variations in the input data over time.
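    The principle can be sketched in a few lines: treat the prepared optical channels as a complex input vector, the diffraction operator as a fixed matrix, and the detectors as reading out intensities. In the toy example below the operator is simply a discrete Fourier matrix (an assumption for illustration, not OFE2’s actual on-chip operator), and a linear phase ramp on the inputs steers the output “bright spot” to a chosen port.

    ```python
    # Minimal sketch of optical diffraction computing as a complex
    # matrix-vector product. The 4x4 unitary DFT operator is chosen purely
    # for illustration; OFE2's real diffraction operator is not modeled here.
    import numpy as np

    N = 4
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)  # "diffraction" matrix

    amplitudes = np.ones(N)                   # equal input power in each channel
    phases = 2 * np.pi * n * 1 / N            # linear phase ramp from the phase array
    x = amplitudes * np.exp(1j * phases)      # prepared optical input vector

    y = W @ x                                 # light passing through the operator
    intensity = np.abs(y) ** 2                # detectors measure intensity per port

    print(np.round(intensity, 3))             # energy concentrates in one output port
    ```

    Changing the slope of the phase ramp (for example, phases = 2 * np.pi * n * 2 / N) moves the bright spot to a different output port, which is the steering behavior described above.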

    Record-Breaking Optical Performance
    Operating at an impressive 12.5 GHz, OFE2 achieves a single matrix-vector multiplication in just 250.5 picoseconds — the fastest known result for this type of optical computation. “We firmly believe this work provides a significant benchmark for advancing integrated optical diffraction computing to exceed a 10 GHz rate in real-world applications,” says Chen.
    The research team tested OFE2 across multiple domains. In image processing, it successfully extracted edge features from visual data, creating paired “relief and engraving” maps that improved image classification and increased accuracy in tasks such as identifying organs in CT scans. Systems using OFE2 required fewer electronic parameters than standard AI models, proving that optical preprocessing can make hybrid AI networks both faster and more efficient.
    The team also applied OFE2 to digital trading, where it processed live market data to generate profitable buy and sell actions. After being trained with optimized strategies, OFE2 converted incoming price signals directly into trading decisions, achieving consistent returns. Because these calculations happen at the speed of light, traders could act on opportunities with almost no delay.
    Lighting the Way Toward the Future of AI
    Together, these achievements signal a major shift in computing. By moving the most demanding parts of AI processing from power-hungry electronic chips to lightning-fast photonic systems, technologies like OFE2 could usher in a new era of real-time, low-energy AI. “The advancements presented in our study push integrated diffraction operators to a higher rate, providing support for compute-intensive services in areas such as image recognition, assisted healthcare, and digital finance. We look forward to collaborating with partners who have data-intensive computational needs,” concludes Chen.

  • AI restores James Webb telescope’s crystal-clear vision

    Two PhD students from Sydney have helped restore the sharp vision of the world’s most powerful space observatory without ever leaving the ground. Louis Desdoigts, now a postdoctoral researcher at Leiden University in the Netherlands, and his colleague Max Charles celebrated their achievement with tattoos of the instrument they repaired inked on their arms — an enduring reminder of their contribution to space science.
    A Groundbreaking Software Fix
    Researchers at the University of Sydney developed an innovative software solution that corrected blurriness in images captured by NASA’s multi-billion-dollar James Webb Space Telescope (JWST). Their breakthrough restored the full precision of one of the telescope’s key instruments, achieving what would once have required a costly astronaut repair mission.
    This success builds on the JWST’s only Australian-designed component, the Aperture Masking Interferometer (AMI). Created by Professor Peter Tuthill from the University of Sydney’s School of Physics and the Sydney Institute for Astronomy, the AMI allows astronomers to capture ultra-high-resolution images of stars and exoplanets. It works by combining light from different sections of the telescope’s main mirror, a process known as interferometry. When the JWST began its scientific operations, researchers noticed that AMI’s performance was being affected by faint electronic distortions in its infrared camera detector. These distortions caused subtle image fuzziness, reminiscent of the Hubble Space Telescope’s well-known early optical flaw that had to be corrected through astronaut spacewalks.
    Solving a Space Problem from Earth
    Instead of attempting a physical repair, PhD students Louis Desdoigts and Max Charles, working with Professor Tuthill and Associate Professor Ben Pope (at Macquarie University), devised a purely software-based calibration technique to fix the distortion from Earth.
    Their system, called AMIGO (Aperture Masking Interferometry Generative Observations), uses advanced simulations and neural networks to replicate how the telescope’s optics and electronics function in space. By pinpointing an issue where electric charge slightly spreads to neighboring pixels — a phenomenon called the brighter-fatter effect — the team designed algorithms that digitally corrected the images, fully restoring AMI’s performance.
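    As a rough illustration of the kind of detector effect involved (not the AMIGO pipeline itself), the sketch below models charge leaking into neighboring pixels as a small convolution kernel and then applies a naive regularized inverse filter. The kernel values, noise level, and correction method are all assumptions made for the example; AMIGO fits a far fuller instrument model with neural networks.

    ```python
    # Toy forward model of charge spreading between pixels and a naive
    # correction. Purely illustrative; not the AMIGO algorithm.
    import numpy as np

    rng = np.random.default_rng(0)

    # True scene: two point sources on a 32x32 detector
    true_img = np.zeros((32, 32))
    true_img[8, 8] = 1000.0
    true_img[20, 22] = 400.0

    # Assumed charge-spreading kernel: a little flux leaks into the 4 neighbors
    kernel = np.array([[0.00, 0.02, 0.00],
                       [0.02, 0.92, 0.02],
                       [0.00, 0.02, 0.00]])

    # Transfer function of the kernel (circular convolution is fine for this toy)
    kpad = np.zeros_like(true_img)
    kpad[:3, :3] = kernel
    kpad = np.roll(kpad, (-1, -1), axis=(0, 1))   # put the kernel center at (0, 0)
    H = np.fft.fft2(kpad)

    # Forward model: spread the charge, add a little read noise
    observed = np.real(np.fft.ifft2(np.fft.fft2(true_img) * H))
    observed += rng.normal(0.0, 0.5, true_img.shape)

    # Naive correction: regularized inverse filter (real pipelines do much more)
    corrected = np.real(np.fft.ifft2(
        np.fft.fft2(observed) * np.conj(H) / (np.abs(H) ** 2 + 1e-3)))

    print("brightest pixel before correction:", float(observed.max()))
    print("brightest pixel after correction: ", float(corrected.max()))
    ```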

    “Instead of sending astronauts to bolt on new parts, they managed to fix things with code,” Professor Tuthill said. “It’s a brilliant example of how Australian innovation can make a global impact in space science.”
    Sharper Views of the Universe
    The results have been striking. With AMIGO in use, the James Webb Space Telescope has delivered its clearest images yet, capturing faint celestial objects in unprecedented detail. This includes direct images of a dim exoplanet and a red-brown dwarf orbiting the nearby star HD 206893, about 133 light years from Earth.
    A related study led by Max Charles further demonstrated AMI’s renewed precision. Using the improved calibration, the telescope produced sharp images of a black hole jet, the fiery surface of Jupiter’s moon Io, and the dust-filled stellar winds of WR 137 — showing that JWST can now probe deeper and clearer than before.
    “This work brings JWST’s vision into even sharper focus,” Dr. Desdoigts said. “It’s incredibly rewarding to see a software solution extend the telescope’s scientific reach — and to know it was possible without ever leaving the lab.”
    Dr. Desdoigts has now landed a prestigious postdoctoral research position at Leiden University in the Netherlands.
    Both studies have been published on the preprint server arXiv. Dr. Desdoigts’ paper has been peer-reviewed and will shortly appear in the Publications of the Astronomical Society of Australia. The announcement was timed to coincide with the latest round of James Webb Space Telescope General Observer, Survey and Archival Research programs.
    Associate Professor Benjamin Pope, who presented the findings at SXSW Sydney, said the research team was keen to get the new code into the hands of researchers working on JWST as soon as possible.

  • Living computers powered by mushrooms

    Fungal networks could one day replace the tiny metal components that process and store computer data, according to new research.
    Mushrooms are known for their toughness and unusual biological properties, qualities that make them attractive for bioelectronics. This emerging field blends biology and technology to design innovative, sustainable materials for future computing systems.
    Turning Mushrooms Into Living Memory Devices
    Researchers at The Ohio State University recently discovered that edible fungi, such as shiitake mushrooms, can be cultivated and guided to function as organic memristors. These components act like memory cells that retain information about previous electrical states.
    Their experiments showed that mushroom-based devices could reproduce the same kind of memory behavior seen in semiconductor chips. They may also enable the creation of other eco-friendly, brain-like computing tools that cost less to produce.
    “Being able to develop microchips that mimic actual neural activity means you don’t need a lot of power for standby or when the machine isn’t being used,” said John LaRocco, lead author of the study and a research scientist in psychiatry at Ohio State’s College of Medicine. “That’s something that can be a huge potential computational and economic advantage.”
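    For readers unfamiliar with memristors, the toy Python model below shows the defining behavior: the device’s resistance depends on the history of the current that has flowed through it, so a later low-voltage read-out “remembers” earlier write pulses. It is a generic idealized memristor with made-up constants, not a model of the fungal devices in the study.

    ```python
    # Idealized memristor toy model (linear state drift), included only to
    # illustrate "memory resistance"; it does not model the fungal devices.
    import numpy as np

    R_ON, R_OFF = 100.0, 16000.0   # limiting resistances (ohms), illustrative values
    w = 0.5                        # internal state in [0, 1]
    mu = 500.0                     # illustrative state change per unit charge

    def step(v, dt=1e-3):
        """Apply voltage v for dt seconds; update the state and return the current."""
        global w
        R = R_ON * w + R_OFF * (1 - w)            # resistance depends on the state
        i = v / R
        w = np.clip(w + mu * i * dt, 0.0, 1.0)    # state drifts with charge: "memory"
        return i

    # Positive pulses lower the resistance; a later small-voltage read-out
    # "remembers" that the write happened.
    for _ in range(2000):
        step(+5.0)
    current_after_write = step(0.1)

    for _ in range(2000):
        step(-5.0)
    current_after_erase = step(0.1)

    print(f"read current after +5 V pulses: {current_after_write*1e3:.3f} mA")
    print(f"read current after -5 V pulses: {current_after_erase*1e3:.3f} mA")
    ```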
    The Promise of Fungal Electronics
    LaRocco noted that fungal electronics are not a brand-new idea, but they are becoming increasingly practical for sustainable computing. Because fungal materials are biodegradable and inexpensive to produce, they can help reduce electronic waste. In contrast, conventional semiconductors often require rare minerals and large amounts of energy to manufacture and operate.

    “Mycelium as a computing substrate has been explored before in less intuitive setups, but our work tries to push one of these memristive systems to its limits,” he said.
    The team’s findings were published in PLOS One.
    How Scientists Tested Mushroom Memory
    To test the fungi’s capabilities, the researchers grew samples of shiitake and button mushrooms. Once matured, the mushrooms were dehydrated to preserve them, attached to custom electronic circuits, and exposed to controlled electric currents at different voltages and frequencies.
    “We would connect electrical wires and probes at different points on the mushrooms because distinct parts of it have different electrical properties,” said LaRocco. “Depending on the voltage and connectivity, we were seeing different performances.”
    Surprising Results from Mushroom Circuits
    After two months of testing, the researchers found that their mushroom-based memristor could switch between electrical states up to 5,850 times per second with about 90% accuracy. Although performance decreased at higher electrical frequencies, the team noticed that connecting multiple mushrooms together helped restore stability — much like neural connections in the human brain.

    Qudsia Tahmina, co-author of the study and an associate professor of electrical and computer engineering at Ohio State, said the results highlight how easily mushrooms can be adapted for computing. “Society has become increasingly aware of the need to protect our environment and ensure that we preserve it for future generations,” said Tahmina. “So that could be one of the driving factors behind new bio-friendly ideas like these.”
    The flexibility mushrooms offer also suggests possibilities for scaling up fungal computing, said Tahmina. For instance, larger mushroom systems may be useful in edge computing and aerospace exploration, while smaller ones could enhance the performance of autonomous systems and wearable devices.
    Looking Ahead: The Future of Fungal Computing
    Although organic memristors are still in their early stages, scientists aim to refine cultivation methods and shrink device sizes in future work. Achieving smaller, more efficient fungal components will be key to making them viable alternatives to traditional microchips.
    “Everything you’d need to start exploring fungi and computing could be as small as a compost heap and some homemade electronics, or as big as a culturing factory with pre-made templates,” said LaRocco. “All of them are viable with the resources we have in front of us now.”
    Other Ohio State contributors to the study include Ruben Petreaca, John Simonis, and Justin Hill. The research was supported by the Honda Research Institute.

  • The math says life shouldn’t exist, but somehow it does

    A groundbreaking study is taking a fresh look at one of science’s oldest questions: how did life arise from nonliving material on early Earth? Researcher Robert G. Endres of Imperial College London has created a new mathematical framework suggesting that the spontaneous appearance of life may have been far less likely than many scientists once believed.
    The Improbable Odds of Life Emerging Naturally
    The research examines how extraordinarily difficult it would be for organized biological information to form under plausible prebiotic conditions. Endres illustrates this by comparing it to trying to write a coherent article for a leading science website by tossing random letters onto a page. As complexity increases, the probability of success quickly drops to near zero.
    To explore the issue, Endres applied principles from information theory and algorithmic complexity to estimate what it would take for the first simple cell, known as a protocell, to assemble itself from basic chemical ingredients. This approach revealed that the odds of such a process happening naturally are astonishingly low.
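    A back-of-the-envelope calculation (ours, far cruder than the paper’s information-theoretic treatment) conveys the scale of the problem: the probability of randomly assembling even one specific short protein sequence is already vanishingly small.

    ```python
    # Back-of-the-envelope illustration of the combinatorial barrier (our toy
    # numbers, not the paper's formal estimate): probability of drawing one
    # specific 100-residue sequence from the 20 standard amino acids at random.
    from math import log10

    length = 100                  # residues in a short protein
    alphabet = 20                 # standard amino acids
    log_p = -length * log10(alphabet)
    print(f"P(specific sequence) ~ 10^{log_p:.0f}")        # about 10^-130

    # Even with astronomically many random trials the odds stay negligible:
    trials_log10 = 50             # a generous 10^50 trials
    print(f"P(at least one hit) ~ 10^{log_p + trials_log10:.0f}")  # about 10^-80
    ```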
    Why Chance Alone May Not Be Enough
    The findings suggest that random chemical reactions and natural processes may not fully explain how life appeared within the limited time available on early Earth. Because systems naturally tend toward disorder, building the intricate molecular organization required for life would have been a major challenge.
    Although this doesn’t mean that life’s origin was impossible, Endres argues that current scientific models may be missing key elements. He emphasizes that identifying the physical principles behind life’s emergence from nonliving matter remains one of the greatest unsolved problems in biological physics.

    Considering a Speculative Alternative
    The study also briefly considers directed panspermia, a controversial idea proposed by Francis Crick and Leslie Orgel. This hypothesis suggests that life could have been intentionally introduced to Earth by advanced extraterrestrial civilizations. While Endres acknowledges the idea as logically possible, he notes that it runs counter to Occam’s razor, the principle that favors simpler explanations.
    Rather than ruling out natural origins, the research provides a way to quantify how difficult the process may have been. It points to the potential need for new physical laws or mechanisms that could help overcome the immense informational and organizational barriers to life. The work represents an important move toward a more mathematically grounded understanding of how living systems might arise.
    A Continuing Mystery
    This study is a reminder that some of the most profound questions in science remain unanswered. By merging mathematics with biology, researchers are beginning to uncover new layers of insight into one of humanity’s oldest mysteries: how existence itself began.
    Adapted from an article originally published on Universe Today.

  • Stanford’s tiny eye chip helps the blind see again

    A tiny wireless chip placed at the back of the eye, combined with a pair of advanced smart glasses, has partially restored vision to people suffering from an advanced form of age-related macular degeneration. In a clinical study led by Stanford Medicine and international collaborators, 27 of the 32 participants regained the ability to read within a year of receiving the implant.
    With the help of digital features such as adjustable zoom and enhanced contrast, some participants achieved visual sharpness comparable to 20/42 vision.
    The study’s findings were published on Oct. 20 in the New England Journal of Medicine.
    A Milestone in Restoring Functional Vision
    The implant, named PRIMA and developed at Stanford Medicine, is the first prosthetic eye device to restore usable vision to individuals with otherwise untreatable vision loss. The technology enables patients to recognize shapes and patterns, a level of vision known as form vision.
    “All previous attempts to provide vision with prosthetic devices resulted in basically light sensitivity, not really form vision,” said Daniel Palanker, PhD, a professor of ophthalmology and a co-senior author of the paper. “We are the first to provide form vision.”
    The research was co-led by José-Alain Sahel, MD, professor of ophthalmology at the University of Pittsburgh School of Medicine, with Frank Holz, MD, of the University of Bonn in Germany, serving as lead author.

    How the PRIMA System Works
    The system includes two main parts: a small camera attached to a pair of glasses and a wireless chip implanted in the retina. The camera captures visual information and projects it through infrared light to the implant, which converts it into electrical signals. These signals substitute for the damaged photoreceptors that normally detect light and send visual data to the brain.
    The PRIMA project represents decades of scientific effort, involving numerous prototypes, animal testing, and an initial human trial.
    Palanker first conceived the idea two decades ago while working with ophthalmic lasers to treat eye disorders. “I realized we should use the fact that the eye is transparent and deliver information by light,” he said.
    “The device we imagined in 2005 now works in patients remarkably well.”
    Replacing Lost Photoreceptors
    Participants in the latest trial had an advanced stage of age-related macular degeneration known as geographic atrophy, which progressively destroys central vision. This condition affects over 5 million people worldwide and is the leading cause of irreversible blindness among older adults.

    In macular degeneration, the light-sensitive photoreceptor cells in the central retina deteriorate, leaving only limited peripheral vision. However, many of the retinal neurons that process visual information remain intact, and PRIMA capitalizes on these surviving structures.
    The implant, measuring just 2 by 2 millimeters, is placed in the area of the retina where photoreceptors have been lost. Unlike natural photoreceptors that respond to visible light, the chip detects infrared light emitted from the glasses.
    “The projection is done by infrared because we want to make sure it’s invisible to the remaining photoreceptors outside the implant,” Palanker said.
    Combining Natural and Artificial Vision
    This design allows patients to use both their natural peripheral vision and the new prosthetic central vision simultaneously, improving their ability to orient themselves and move around.
    “The fact that they see simultaneously prosthetic and peripheral vision is important because they can merge and use vision to its fullest,” Palanker said.
    Since the implant is photovoltaic — relying solely on light to generate electrical current — it operates wirelessly and can be safely placed beneath the retina. Earlier versions of artificial eye devices required external power sources and cables that extended outside the eye.
    Reading Again
    The new trial included 38 patients older than 60 who had geographic atrophy due to age-related macular degeneration and worse than 20/320 vision in at least one eye.
    Four to five weeks after implantation of the chip in one eye, patients began using the glasses. Though some patients could make out patterns immediately, all patients’ visual acuity improved over months of training.
    “It may take several months of training to reach top performance — which is similar to what cochlear implants require to master prosthetic hearing,” Palanker said.
    Of the 32 patients who completed the one-year trial, 27 could read and 26 demonstrated clinically meaningful improvement in visual acuity, which was defined as the ability to read at least two additional lines on a standard eye chart. On average, participants’ visual acuity improved by 5 lines; one improved by 12 lines.
    The participants used the prosthesis in their daily lives to read books, food labels and subway signs. The glasses allowed them to adjust contrast and brightness and magnify up to 12 times. Two-thirds reported medium to high user satisfaction with the device.
    Nineteen participants experienced side effects, including ocular hypertension (high pressure in the eye), tears in the peripheral retina and subretinal hemorrhage (blood collecting under the retina). None were life-threatening, and almost all resolved within two months.
    Future Visions
    For now, the PRIMA device provides only black-and-white vision, with no shades in between, but Palanker is developing software that will soon enable the full range of grayscale.
    “Number one on the patients’ wish list is reading, but number two, very close behind, is face recognition,” he said. “And face recognition requires grayscale.”
    He is also engineering chips that will offer higher resolution vision. Resolution is limited by the size of pixels on the chip. Currently, the pixels are 100 microns wide, with 378 pixels on each chip. The new version, already tested in rats, may have pixels as small as 20 microns wide, with 10,000 pixels on each chip.
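    Those figures are consistent with simple geometry, as the quick check below shows: a 2 by 2 millimeter chip tiled with 100-micron pixels has room for roughly 20 x 20 = 400 sites (the article quotes 378 for the actual chip), while 20-micron pixels would allow about 100 x 100 = 10,000.

    ```python
    # Quick consistency check of the pixel counts quoted above.
    chip_mm = 2.0                       # chip is 2 mm x 2 mm
    for pixel_um in (100, 20):
        per_side = (chip_mm * 1000) / pixel_um
        print(f"{pixel_um} um pixels: ~{per_side:.0f} x {per_side:.0f} "
              f"= {per_side**2:.0f} sites")
    # 100 um -> ~20 x 20 = 400 (378 quoted for the actual chip layout)
    # 20  um -> ~100 x 100 = 10,000 (matches the stated next-generation count)
    ```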
    Palanker also wants to test the device for other types of blindness caused by lost photoreceptors.
    “This is the first version of the chip, and resolution is relatively low,” he said. “The next generation of the chip, with smaller pixels, will have better resolution and be paired with sleeker-looking glasses.”
    A chip with 20-micron pixels could give a patient 20/80 vision, Palanker said. “But with electronic zoom, they could get close to 20/20.”
    Researchers from the University of Bonn, Germany; Hôpital Fondation A. de Rothschild, France; Moorfields Eye Hospital and University College London; Ludwigshafen Academic Teaching Hospital; University of Rome Tor Vergata; Medical Center Schleswig-Holstein, University of Lübeck; L’Hôpital Universitaire de la Croix-Rousse and Université Claude Bernard Lyon 1; Azienda Ospedaliera San Giovanni Addolorata; Centre Monticelli Paradis and L’Université d’Aix-Marseille; Intercommunal Hospital of Créteil and Henri Mondor Hospital; Knappschaft Hospital Saar; Nantes University; University Eye Hospital Tübingen; University of Münster Medical Center; Bordeaux University Hospital; Hôpital National des 15-20; Erasmus University Medical Center; University of Ulm; Science Corp.; University of California, San Francisco; University of Washington; University of Pittsburgh School of Medicine; and Sorbonne Université contributed to the study.
    The study was supported by funding from Science Corp., the National Institute for Health and Care Research, Moorfields Eye Hospital National Health Service Foundation Trust, and University College London Institute of Ophthalmology.

  • AI turns X-rays into time machines for arthritis care

    A new artificial intelligence system developed by researchers at the University of Surrey can forecast what a patient’s knee X-ray might look like one year in the future. This breakthrough could reshape how millions of people living with osteoarthritis understand and manage their condition.
    The research, presented at the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2025), describes a powerful AI model capable of generating realistic “future” X-rays along with a personalized risk score that estimates disease progression. Together, these outputs give doctors and patients a visual roadmap of how osteoarthritis may evolve over time.
    A Major Step Forward in Predicting Osteoarthritis Progression
    Osteoarthritis, a degenerative joint disorder that affects more than 500 million people globally, is the leading cause of disability among older adults. The Surrey system was trained on nearly 50,000 knee X-rays from about 5,000 patients, making it one of the largest datasets of its kind. It can predict disease progression roughly nine times faster than similar AI tools and operates with greater efficiency and accuracy. Researchers believe this combination of speed and precision could help integrate the technology into clinical practice more quickly.
    David Butler, the study’s lead author from the University of Surrey’s Centre for Vision, Speech and Signal Processing (CVSSP) and the Institute for People-Centred AI, explained:
    “We’re used to medical AI tools that give a number or a prediction, but not much explanation. Our system not only predicts the likelihood of your knee getting worse — it actually shows you a realistic image of what that future knee could look like. Seeing the two X-rays side by side — one from today and one for next year — is a powerful motivator. It helps doctors act sooner and gives patients a clearer picture of why sticking to their treatment plan or making lifestyle changes really matters. We think this can be a turning point in how we communicate risk and improve osteoarthritic knee care and other related conditions.”
    How the System Visualizes Change
    At the core of the new system is an advanced generative model known as a diffusion model. It creates a “future” version of a patient’s X-ray and identifies 16 key points in the joint to highlight areas being tracked for potential changes. This feature enhances transparency by showing clinicians exactly which parts of the knee the AI is monitoring, helping build confidence and understanding in its predictions.
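    For readers curious what a diffusion model does in practice, the schematic loop below shows the generic DDPM-style sampling procedure: start from noise and repeatedly denoise, conditioning each step on the current X-ray. The noise schedule, image size, and stub noise predictor are placeholders; this is not the Surrey model or its training code.

    ```python
    # Schematic DDPM-style conditional sampling loop (generic, with a stub
    # noise predictor standing in for a trained network; NOT the Surrey model).
    import numpy as np

    rng = np.random.default_rng(0)
    T = 50                                        # number of diffusion steps
    betas = np.linspace(1e-4, 0.02, T)            # noise schedule (assumed)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    H = W = 32                                    # toy "X-ray" resolution
    current_xray = rng.normal(size=(H, W))        # placeholder conditioning image

    def predict_noise(x_t, t, condition):
        """Stub for the trained denoiser eps_theta(x_t, t, condition)."""
        # A real model would be a neural network; zeros keep the loop runnable.
        return np.zeros_like(x_t)

    # Start from pure noise and denoise step by step, conditioned on today's X-ray.
    x = rng.normal(size=(H, W))
    for t in reversed(range(T)):
        eps = predict_noise(x, t, current_xray)
        coef = (1 - alphas[t]) / np.sqrt(1 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        noise = rng.normal(size=(H, W)) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise      # x_{t-1}

    future_xray = x                               # the generated "one year later" image
    print(future_xray.shape)
    ```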

    The Surrey team believes their approach could be adapted for other chronic diseases. Similar AI tools might one day predict lung damage in smokers or track the progression of heart disease, providing the same kind of visual insights and early warning that this system offers for osteoarthritis. Researchers are now seeking collaborations to bring the technology into hospitals and everyday healthcare use.
    Greater Transparency and Early Intervention
    Gustavo Carneiro, Professor of AI and Machine Learning at Surrey’s Centre for Vision, Speech and Signal Processing (CVSSP), said:
    “Earlier AI systems could estimate the risk of osteoarthritis progression, but they were often slow, opaque and limited to numbers rather than clear images. Our approach takes a big step forward by generating realistic future X-rays quickly and by pinpointing the areas of the joint most likely to change. That extra visibility helps clinicians identify high-risk patients sooner and personalize their care in ways that were not previously practical.”