More stories

  • Artificial neurons that behave like real brain cells

    Scientists at the USC Viterbi School of Engineering and the School of Advanced Computing have created artificial neurons that reproduce the intricate electrochemical behavior of real brain cells. The discovery, published in Nature Electronics, marks a major milestone in neuromorphic computing, a field that designs hardware modeled after the human brain. This advancement could shrink chip sizes by orders of magnitude, cut energy use dramatically, and push artificial intelligence closer to achieving artificial general intelligence.
    Unlike digital processors or earlier neuromorphic chips that only simulate brain activity through mathematical models, these new neurons physically reproduce how real neurons operate. Just as natural brain activity is triggered by chemical signals, these artificial versions use actual chemical interactions to start computational processes. This means they are not just symbolic representations but tangible recreations of biological function.
    A New Class of Brain-Like Hardware
    The research, led by Professor Joshua Yang of USC’s Department of Electrical and Computer Engineering, builds on his earlier pioneering work on artificial synapses more than a decade ago. The team’s new approach centers on a device called a “diffusive memristor.” Their findings describe how these components could lead to a new generation of chips that both complement and enhance traditional silicon-based electronics. While silicon systems rely on electrons to perform computations, Yang’s diffusive memristors use the motion of atoms instead, creating a process that more closely resembles how biological neurons transmit information. The result could be smaller, more efficient chips that process information the way the brain does and potentially pave the way toward artificial general intelligence (AGI).
    In the brain, both electrical and chemical signals drive communication between nerve cells. When an electrical impulse reaches the end of a neuron at a junction called a synapse, it converts into a chemical signal to transmit information to the next neuron. Once received, that signal is converted back into an electrical impulse that continues through the neuron. Yang and his colleagues have replicated this complex process in their devices with striking accuracy. A major advantage of their design is that each artificial neuron fits within the footprint of a single transistor, whereas older designs required tens or even hundreds.
    In biological neurons, charged particles known as ions help create the electrical impulses that enable activity in the nervous system. The human brain relies on ions such as potassium, sodium, and calcium to make this happen.
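    To make these dynamics concrete, below is a minimal leaky integrate-and-fire simulation in Python, a textbook simplification of the spiking behavior described above. It is purely illustrative: the USC device realizes such dynamics physically through silver-ion motion rather than by solving equations, and all parameter values here are arbitrary.

    ```python
    # Leaky integrate-and-fire neuron (arbitrary units): the membrane voltage
    # leaks toward rest, integrates its input, and fires when it crosses a
    # threshold, after which it resets.
    dt, tau = 0.1, 10.0                 # time step and membrane time constant (ms)
    v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0

    v, spikes = v_rest, []
    for step in range(1000):            # simulate 100 ms
        i_in = 1.2 if 200 <= step < 800 else 0.0   # drive on from 20 ms to 80 ms
        v += dt / tau * (-(v - v_rest) + i_in)     # leaky integration
        if v >= v_thresh:                          # threshold crossing: a spike
            spikes.append(step * dt)
            v = v_reset

    print(f"{len(spikes)} spikes, first at t = {spikes[0]:.1f} ms")
    ```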
    Using Silver Ions to Recreate Brain Dynamics
    In the new study, Yang — who also directs the USC Center of Excellence on Neuromorphic Computing — used silver ions embedded in oxide materials to generate electrical pulses that mimic natural brain functions. These include fundamental processes like learning, movement, and planning.

    “Even though it’s not exactly the same ions in our artificial synapses and neurons, the physics governing the ion motion and the dynamics are very similar,” says Yang.
    Yang explains, “Silver is easy to diffuse and gives us the dynamics we need to emulate the biosystem so that we can achieve the function of the neurons, with a very simple structure.” The device is called a “diffusive memristor” because of the ion motion and dynamic diffusion that the silver provides.
    He adds that the team chose to use ion dynamics for building artificial intelligent systems “because that is what happens in the human brain, for a good reason,” and because the human brain is “the winner in evolution, the most efficient intelligent engine.”
    “It’s more efficient,” says Yang.
    Why Efficiency Matters in AI Hardware
    Yang emphasizes that the issue with modern computing isn’t lack of power but inefficiency. “It’s not that our chips or computers are not powerful enough for whatever they are doing. It’s that they aren’t efficient enough. They use too much energy,” he explains. This is especially important given how much energy today’s large-scale artificial intelligence systems consume to process massive datasets.

    Yang goes on to explain that unlike the brain, “Our existing computing systems were never intended to process massive amounts of data or to learn from just a few examples on their own. One way to boost both energy and learning efficiency is to build artificial systems that operate according to principles observed in the brain.”
    If pure speed is the goal, the electrons that power modern computing are best for fast operations. But, he explains, “Ions are a better medium than electrons for embodying principles of the brain. Because electrons are lightweight and volatile, computing with them enables software-based learning rather than hardware-based learning, which is fundamentally different from how the brain operates.”
    In contrast, he says, “The brain learns by moving ions across membranes, achieving energy-efficient and adaptive learning directly in hardware, or more precisely, in what people may call ‘wetware’.”
    For example, a young child can learn to recognize handwritten digits after seeing only a few examples of each, whereas a computer typically needs thousands to achieve the same task. Yet, the human brain accomplishes this remarkable learning while consuming only about 20 watts of power, compared to the megawatts required by today’s supercomputers.
    Potential Impact and Next Steps
    Yang and his team see this technology as a major step toward replicating natural intelligence. However, he acknowledges that the silver used in these experiments is not yet compatible with standard semiconductor manufacturing processes. Future work will explore other ionic materials that can achieve similar effects.
    The diffusive memristors are efficient in both energy and size. A typical smartphone may contain around ten chips, each with billions of transistors switching on and off to perform calculations.
    “Instead [with this innovation], we just use a footprint of one transistor for each neuron. We are designing the building blocks that eventually lead us to reduce the chip size by orders of magnitude, reduce the energy consumption by orders of magnitude, so it can be sustainable to perform AI in the future, with similar level of intelligence without burning energy that we cannot sustain,” says Yang.
    With capable and compact building blocks, artificial synapses and neurons, now demonstrated, the next step is to integrate large numbers of them and test how closely the brain’s efficiency and capabilities can be replicated. “Even more exciting,” says Yang, “is the prospect that such brain-faithful systems could help us uncover new insights into how the brain itself works.”

  • Breakthrough links magnetism and electricity for faster tech

    Engineers at the University of Delaware have uncovered a new way to connect magnetic and electric forces in computing, a finding that could pave the way for computers that run dramatically faster while consuming far less energy.
    Tiny Magnetic Waves Generate Electric Signals
    In a study published in Proceedings of the National Academy of Sciences, researchers from the university’s Center for Hybrid, Active and Responsive Materials (CHARM), a National Science Foundation-funded Materials Research Science and Engineering Center, report that magnons — tiny magnetic waves that move through solid materials — are capable of generating measurable electric signals.
    This discovery suggests that future computer chips could merge magnetic and electric systems directly, removing the need for the constant energy exchange that limits the performance of today’s devices.
    How Magnons Transmit Information
    Traditional electronics rely on the flow of charged electrons, which lose energy as heat when moving through circuits. In contrast, magnons carry information through the synchronized “spin” of electrons, creating wave-like patterns across a material. According to theoretical models developed by the UD team, when these magnetic waves travel through antiferromagnetic materials, they can induce electric polarization, effectively creating a measurable voltage.
    Toward Ultrafast, Energy-Efficient Computing
    Antiferromagnetic magnons can move at terahertz frequencies — around a thousand times faster than magnetic waves in conventional materials. This exceptional speed points to a promising path for ultrafast, low-power computing. The researchers are now working to verify their theoretical predictions through experiments and to investigate how magnons interact with light, which could lead to even more efficient ways of controlling them.

    Advancing Quantum Material Research
    This work contributes to CHARM’s larger goal of developing hybrid quantum materials for cutting-edge technologies. The center’s researchers study how different types of materials — such as magnetic, electronic, and quantum systems — can be combined and controlled to create next-generation technologies. CHARM’s goal is to design smart materials that respond to their environments and enable breakthroughs in computing, energy, and communication.
    The study’s co-authors are Federico Garcia-Gaitan, Yafei Ren, M. Benjamin Jungfleisch, John Q. Xiao, Branislav K. Nikolić, Joshua Zide, and Garnett W. Bryant (NIST/University of Maryland). Funding was provided by the National Science Foundation under award DMR-2011824.

  • Quantum light breakthrough could transform technology

    High-order harmonic generation (HHG) is a process that transforms light into much higher frequencies, allowing scientists to explore areas of the electromagnetic spectrum that are otherwise difficult to reach. However, generating terahertz (THz) frequencies using HHG has remained a major obstacle because most materials are too symmetrical to support this conversion.
    Graphene has long been a promising candidate for HHG research, but its perfect symmetry restricts it to producing only odd harmonics — frequencies that are odd multiples of the original light source. Even harmonics, which are essential for expanding practical uses of this technology, have been much harder to achieve.
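    The symmetry argument can be made concrete with a small numerical sketch: a driving wave passed through a symmetric (odd) nonlinear response produces only odd harmonics, while adding an asymmetric term lets even harmonics appear. The toy response functions below illustrate the general principle only, not the study's materials.

    ```python
    # Illustrative only: a symmetric nonlinear response (odd function of the
    # field) yields odd harmonics alone; an asymmetric x^2 term adds even ones.
    import numpy as np

    fs = 1000                                  # samples per time unit
    t = np.arange(0, 10, 1 / fs)               # 10 periods of the drive
    drive = np.cos(2 * np.pi * 1.0 * t)        # fundamental at f = 1 (arb. units)

    responses = {
        "symmetric (x + 0.3x^3)": drive + 0.3 * drive**3,
        "asymmetric (+ 0.3x^2)": drive + 0.3 * drive**3 + 0.3 * drive**2,
    }

    for name, y in responses.items():
        spectrum = np.abs(np.fft.rfft(y))
        freqs = np.fft.rfftfreq(len(y), 1 / fs)
        peaks = sorted(set(round(f, 1) for f in freqs[spectrum > 0.01 * spectrum.max()]))
        print(f"{name}: harmonics at {peaks}")
    # symmetric -> [1.0, 3.0]; asymmetric -> [0.0, 1.0, 2.0, 3.0]
    ```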
    Quantum Materials Break the Barrier
    In a recent study published in Light: Science & Applications, a research group led by Prof. Miriam Serena Vitiello has achieved a major advance in optical science. By working with exotic quantum materials, the team successfully extended HHG into new and previously unreachable parts of the electromagnetic spectrum.
    Their work centers on topological insulators (TIs), a special class of materials that behave as electrical insulators inside but conduct electricity along their surfaces. These materials exhibit unusual quantum behavior due to strong spin-orbit coupling and time-reversal symmetry. Although scientists had predicted that TIs could support advanced forms of harmonic generation, no one had yet demonstrated it experimentally — until now.
    Amplifying Light With Quantum Nanostructures
    The researchers designed specialized nanostructures called split-ring resonators and integrated them with thin layers of Bi₂Se₃ and van der Waals heterostructures made from (InₓBi₁₋ₓ)₂Se₃. These resonators significantly intensified the incoming light, allowing the team to observe HHG at both even and odd THz frequencies, an exceptional accomplishment.

    They recorded frequency up-conversion at 6.4 THz (an even harmonic) and 9.7 THz (an odd harmonic), uncovering how both the symmetrical interior and the asymmetrical surface of the topological materials contribute to light generation. This result represents one of the first clear demonstrations of how topological effects can shape harmonic behavior in the THz range.
    Toward Next-Generation Terahertz Technology
    This experimental achievement not only validates long-standing theoretical predictions but also establishes a new foundation for developing compact terahertz light sources, sensors, and ultrafast optoelectronic components. It gives researchers a new way to study the complex interplay between symmetry, quantum states, and light-matter interactions at the nanoscale.
    As industries continue to demand smaller, faster, and more efficient devices, such progress highlights the growing potential of quantum materials to drive real-world innovation. The discovery also points toward the creation of compact, tunable terahertz light sources powered by optical methods — an advance that could reshape technologies in high-speed communications, medical imaging, and quantum computing.

  • Too much screen time may be hurting kids’ hearts

    More time using electronic devices or watching TV among children and young adults was linked with higher cardiometabolic disease risk, including high blood pressure, high cholesterol and insulin resistance, based on data from more than 1,000 participants in Denmark. The association between screen time and cardiometabolic risks was strongest in youth who slept fewer hours, suggesting that screen use may harm health by “stealing” time from sleep, researchers said. The findings, they added, underscore the importance of addressing screen habits among young people as a potential way to protect long-term heart and metabolic health.
    Screen time tied to early heart and metabolic risks
    Children and teens who spend many hours on TVs, phones, tablets, computers or gaming systems appear to face higher chances of cardiometabolic problems, such as elevated blood pressure, unfavorable cholesterol levels and insulin resistance. The findings are reported in the Journal of the American Heart Association, an open-access, peer-reviewed journal of the American Heart Association.
    A 2023 scientific statement from the American Heart Association reported that “cardiometabolic risk is accruing at younger and younger ages,” and that only 29% of U.S. youth ages 2 to 19 had favorable cardiometabolic health in 2013-2018 National Health and Nutrition Examination Survey data.
    Danish cohorts show a consistent pattern
    An evaluation of more than 1,000 participants from two Danish studies found a clear connection: more recreational screen time was significantly associated with greater cardiovascular and overall cardiometabolic risk among children and adolescents.
    “Limiting discretionary screen time in childhood and adolescence may protect long-term heart and metabolic health,” said study lead author David Horner, M.D., Ph.D., a researcher at the Copenhagen Prospective Studies on Asthma in Childhood (COPSAC) at the University of Copenhagen in Denmark. “Our study provides evidence that this connection starts early and highlights the importance of having balanced daily routines.”
    What researchers measured

    The team analyzed two COPSAC groups: one of 10-year-olds from the cohort begun in 2010 and one of 18-year-olds from the cohort begun in 2000. They examined how leisure screen use related to cardiometabolic risk factors. Screen time included watching TV and movies, gaming and time on phones, tablets or computers for fun.
    To capture overall risk, researchers created a composite cardiometabolic score based on multiple components of metabolic syndrome, including waist size, blood pressure, high-density lipoprotein or HDL “good” cholesterol, triglycerides and blood sugar levels. They adjusted for sex and age. The score reflects each participant’s risk relative to the study average (in standard deviations): 0 indicates average risk, and 1 indicates one standard deviation above average.
    Each hour adds up
    The analysis showed that every additional hour of recreational screen time was linked with an increase of about 0.08 standard deviations in the cardiometabolic score for the 10-year-olds and 0.13 standard deviations for the 18-year-olds. “This means a child with three extra hours of screen time a day would have roughly a quarter to half a standard-deviation higher risk than their peers,” Horner said.
    “It’s a small change per hour, but when screen time accumulates to three, five or even six hours a day, as we saw in many adolescents, that adds up,” he said. “Multiply that across a whole population of children, and you’re looking at a meaningful shift in early cardiometabolic risk that could carry into adulthood.”
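    The quoted range follows directly from the per-hour effect sizes; as a quick check of the arithmetic (effect sizes as reported above):

    ```python
    # Per-hour increases in the composite cardiometabolic score (in standard
    # deviations), as reported for each cohort, scaled to three extra hours.
    per_hour_sd = {"age 10": 0.08, "age 18": 0.13}
    extra_hours = 3

    for group, beta in per_hour_sd.items():
        print(f"{group}: +{extra_hours * beta:.2f} SD for {extra_hours} extra hours/day")
    # age 10: +0.24 SD; age 18: +0.39 SD, i.e. roughly a quarter to half an SD
    ```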
    Sleep appears to intensify the risk
    Short sleep and later bedtimes strengthened the relationship between screen time and cardiometabolic risk. Youth who slept less showed notably higher risk linked to the same amount of screen exposure.

    “In childhood, sleep duration not only moderated this relationship but also partially explained it: about 12% of the association between screen time and cardiometabolic risk was mediated through shorter sleep duration,” Horner said. “These findings suggest that insufficient sleep may not only magnify the impact of screen time but could be a key pathway linking screen habits to early metabolic changes.”
    Metabolic “fingerprint” linked to screen use
    In a machine learning analysis, investigators identified a distinctive pattern of blood metabolites that appeared to correlate with screen time.
    “We were able to detect a set of blood-metabolite changes, a ‘screen-time fingerprint,’ validating the potential biological impact of the screen time behavior,” he said. “Using the same metabolomics data, we also assessed whether screen time was linked to predicted cardiovascular risk in adulthood, finding a positive trend in childhood and a significant association in adolescence. This suggests that screen-related metabolic changes may carry early signals of long-term heart health risk.
    “Recognizing and discussing screen habits during pediatric appointments could become part of broader lifestyle counseling, much like diet or physical activity,” he said. “These results also open the door to using metabolomic signatures as early objective markers of lifestyle risk.”
    Practical guidance from experts
    Amanda Marma Perak, M.D., M.S.C.I., FAHA, chair of the American Heart Association’s Young Hearts Cardiovascular Disease Prevention Committee, who was not involved in this research, said focusing on sleep is a great starting point to change screen time patterns.
    “If cutting back on screen time feels difficult, start by moving screen time earlier and focusing on getting into bed earlier and for longer,” said Perak, an assistant professor of pediatrics and preventive medicine at Northwestern University Feinberg School of Medicine in Chicago.
    Adults can also set an example, she said. “All of us use screens, so it’s important to guide kids, teens and young adults to healthy screen use in a way that grows with them. As a parent, you can model healthy screen use — when to put it away, how to use it, how to avoid multitasking. And as kids get a little older, be more explicit, narrating why you put away your devices during dinner or other times together.
    “Make sure they know how to entertain and soothe themselves without a screen and can handle being bored! Boredom breeds brilliance and creativity, so don’t be bothered when your kids complain they’re bored. Loneliness and discomfort will happen throughout life, so those are opportunities to support and mentor your kids in healthy ways to respond that don’t involve scrolling.”
    Important caveats and next questions
    Because this work is observational, it reveals associations rather than direct cause and effect. In addition, screen use for the 10-year-olds and 18-year-olds was reported by parents through questionnaires, which may not perfectly reflect actual time spent on screens.
    Horner noted that future studies could test whether reducing screen exposure in the hours before bedtime, when screen light may disrupt circadian rhythms and delay sleep onset, helps lower cardiometabolic risk.
    Study details, background and design
    The two prospective research groups at COPSAC in Denmark consisted of mother-child pairs, with analysis of data collected at planned clinical visits and study assessments from the birth of the children through age 10 in the 2010 study group and age 18 in the 2000 study group. Through questionnaires, parents of the children in the 10-year-old group and of the 18-year-olds detailed the number of hours the young participants spent watching TV or movies, gaming on a console/TV and using phones, tablets or computers for leisure. For the 2010 group, the number of hours of screen time was available for 657 children at age 6 and 630 children at age 10. Average screen time was two hours per day at age 6 and 3.2 hours per day at age 10, a significant increase over time. For the 2000 group of 18-year-olds, screen time was available for 364 individuals. Screen time at 18 years was significantly higher, at an average of 6.1 hours per day. Sleep was measured by sensors over a 14-day period.

  • Scientists discover a way to simulate the Universe on a laptop

    As astronomers gather more data than ever before, studying the cosmos has become an increasingly complex task. A new innovation is changing that reality. Researchers have now developed a way to analyze enormous cosmic data sets using only a laptop and a few hours of processing time.
    Leading this effort is Dr. Marco Bonici, a postdoctoral researcher at the Waterloo Centre for Astrophysics at the University of Waterloo. Bonici and an international team created Effort.jl, short for EFfective Field theORy surrogate. This tool uses advanced numerical techniques and smart data-preprocessing methods to deliver exceptional computational performance while maintaining the accuracy required in cosmology. The team designed it as a powerful emulator for the Effective Field Theory of Large-Scale Structure (EFTofLSS), allowing researchers to process vast datasets more efficiently than ever before.
    Turning Frustration Into Innovation
    The idea for Effort.jl emerged from Bonici’s experience running time-consuming computer models. Each time he adjusted even a single parameter, it could take days of extra computation to see the results. That challenge inspired him to build a faster, more flexible solution that could handle such adjustments in hours rather than days.
    “Using Effort.jl, we can run through complex data sets on models like EFTofLSS, which have previously needed a lot of time and computer power,” Bonici explained. “With projects like DESI and Euclid expanding our knowledge of the universe and creating even larger astronomical datasets to explore, Effort.jl allows researchers to analyze data faster, inexpensively and multiple times while making small changes based on nuances in the data.”
    Smarter Simulations for a Faster Universe
    Effort.jl belongs to a class of tools known as emulators. These are trained computational shortcuts that replicate the behavior of large, resource-intensive simulations but run dramatically faster. By using emulators, scientists can explore many possible cosmic scenarios in a fraction of the time and apply advanced techniques such as gradient-based sampling to study intricate physical models with greater efficiency.
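    As a rough sketch of the emulator idea (illustrative only; Effort.jl itself is written in Julia and uses trained neural networks with careful data preprocessing), the Python snippet below fits a cheap surrogate to a handful of runs of a stand-in "expensive" model, then queries it densely at negligible cost:

    ```python
    # Illustrative surrogate/emulator sketch: train on a few expensive runs,
    # then evaluate the cheap stand-in everywhere else.
    import numpy as np

    def expensive_model(theta):
        # Stand-in for a slow cosmology pipeline; any smooth function works here.
        return np.sin(3 * theta) + 0.5 * theta**2

    # 1) Run the expensive model at a handful of training points.
    train_theta = np.linspace(-1, 1, 15)
    train_out = expensive_model(train_theta)

    # 2) Fit a cheap surrogate (a polynomial here; Effort.jl trains neural
    #    networks, which additionally provide gradients for sampling).
    surrogate = np.poly1d(np.polyfit(train_theta, train_out, deg=8))

    # 3) Query the surrogate densely; this is the fast part.
    test_theta = np.linspace(-1, 1, 100_000)
    max_err = np.max(np.abs(surrogate(test_theta) - expensive_model(test_theta)))
    print(f"max emulation error: {max_err:.2e}")
    ```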

    “We were able to validate the predictions coming out of Effort.jl by aligning them with those coming out of EFTofLSS,” Bonici said. “The margin of error was small and showed us that the calculations coming out of Effort.jl are strong. Effort.jl can also handle observational quirks like distortions in data and can be customized very easily to the needs of the researcher.”
    Human Expertise Still Matters
    Despite its impressive capabilities, Effort.jl is not a substitute for scientific understanding. Cosmologists still play a vital role in setting parameters, interpreting results, and applying physical insight to ensure meaningful conclusions. The combination of expert knowledge and computational power is what makes the system so effective.
    Looking ahead, Effort.jl is expected to take on even larger cosmological datasets and work alongside other analytical tools. Researchers also see potential for its methods in areas beyond astrophysics, including weather and climate modeling.
    The paper, “Effort.jl: a fast and differentiable emulator for the Effective Field Theory of the Large Scale Structure of the Universe,” was published in the Journal of Cosmology and Astroparticle Physics.

  • A revolutionary DNA search engine is speeding up genetic discovery

    Rare genetic diseases can now be detected in patients, and tumor-specific mutations identified — a milestone made possible by DNA sequencing, which transformed biomedical research decades ago. In recent years, the introduction of new sequencing technologies (next-generation sequencing) has driven a wave of breakthroughs. During 2020 and 2021, for instance, these methods enabled the rapid decoding and worldwide monitoring of the SARS-CoV-2 genome.
    At the same time, an increasing number of researchers are making their sequencing results publicly accessible. This has led to an explosion of data, stored in major databases such as the American SRA (Sequence Read Archive) and the European ENA (European Nucleotide Archive). Together, these archives now hold about 100 petabytes of information — roughly equivalent to the total amount of text found across the entire internet, with a single petabyte equaling one million gigabytes.
    Until now, biomedical scientists needed enormous computing resources to search through these vast genetic repositories and compare them with their own data, making comprehensive searches nearly impossible. Researchers at ETH Zurich have now developed a way to overcome that limitation.
    Full-text search instead of downloading entire data sets
    The team created a tool called MetaGraph, which dramatically streamlines and accelerates the process. Instead of downloading entire datasets, MetaGraph enables direct searches within the raw DNA or RNA data — much like using an internet search engine. Scientists simply enter a genetic sequence of interest into a search field and, within seconds or minutes depending on the query, can see where that sequence appears in global databases.
    “It’s a kind of Google for DNA,” explains Professor Gunnar Rätsch, a data scientist in ETH Zurich’s Department of Computer Science. Previously, researchers could only search for descriptive metadata and then had to download the full datasets to access raw sequences. That approach was slow, incomplete, and expensive.
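    The search principle can be sketched with a toy k-mer index: every length-k substring of each dataset maps to the datasets containing it, and a query is answered by intersecting its k-mers' hits. This is an illustration of the idea only; MetaGraph's real index is a compressed graph over vastly larger k-mer sets, and the sample names below are invented.

    ```python
    # Toy k-mer index: map every length-k substring to the datasets containing
    # it, then answer queries by intersecting the k-mers' dataset sets.
    from collections import defaultdict

    K = 5  # real indexes use much longer k-mers; 5 keeps the example readable

    datasets = {
        "sample_A": "ACGTACGTGGTCA",
        "sample_B": "TTGGTCAACGT",
    }

    index = defaultdict(set)
    for name, seq in datasets.items():
        for i in range(len(seq) - K + 1):
            index[seq[i:i + K]].add(name)

    def search(query):
        """Datasets containing every k-mer of the query sequence."""
        kmers = [query[i:i + K] for i in range(len(query) - K + 1)]
        hits = [index.get(km, set()) for km in kmers]
        return set.intersection(*hits) if hits else set()

    print(sorted(search("GGTCA")))  # ['sample_A', 'sample_B']
    ```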
    According to the study authors, MetaGraph is also remarkably cost-efficient. Representing all publicly available biological sequences would require only a few computer hard drives, and large queries would cost no more than about 0.74 dollars per megabase.

    Because the new DNA search engine is both fast and accurate, it could significantly accelerate research — particularly in identifying emerging pathogens or analyzing genetic factors linked to antibiotic resistance. The system may even help locate beneficial viruses that destroy harmful bacteria (bacteriophages) hidden within these massive databases.
    Compression by a factor of 300
    In their study published on October 8 in Nature, the ETH team demonstrated how MetaGraph works. The tool organizes and compresses genetic data using advanced mathematical graphs that structure information more efficiently, similar to how spreadsheet software arranges values. “Mathematically speaking, it is a huge matrix with millions of columns and trillions of rows,” Rätsch explains.
    Creating indexes to make large datasets searchable is a familiar concept in computer science, but the ETH approach stands out for how it connects raw data with metadata while achieving an extraordinary compression rate of about 300 times. This reduction works much like summarizing a book — it removes redundancies while preserving the essential narrative and relationships, retaining all relevant information in a much smaller form.
    “We are pushing the limits of what is possible in order to keep the data sets as compact as possible without losing necessary information,” says Dr. André Kahles, who, like Rätsch, is a member of the Biomedical Informatics Group at ETH Zurich. In contrast with other DNA search tools currently under research, the ETH approach is scalable: the larger the amount of data queried, the less additional computing power the tool requires.
    Half of the data is already available now
    First introduced in 2020, MetaGraph has been steadily refined. The tool is now publicly accessible for searches (https://metagraph.ethz.ch/search) and already indexes millions of DNA, RNA, and protein sequences from viruses, bacteria, fungi, plants, animals, and humans. Currently, nearly half of all available global sequence datasets are included, with the remainder expected to follow by the end of the year. Since MetaGraph is open source, it could also attract interest from pharmaceutical companies managing large volumes of internal research data.
    Kahles even believes it is possible that the DNA search engine will one day be used by private individuals: “In the early days, even Google didn’t know exactly what a search engine was good for. If the rapid development in DNA sequencing continues, it may become commonplace to identify your balcony plants more precisely.”

  • Breakthrough optical processor lets AI compute at the speed of light

    Modern artificial intelligence (AI) systems, from robotic surgery to high-frequency trading, rely on processing streams of raw data in real time. Extracting important features quickly is critical, but conventional digital processors are hitting physical limits. Traditional electronics can no longer reduce latency or increase throughput enough to keep up with today’s data-heavy applications.
    Turning to Light for Faster Computing
    Researchers are now looking to light as a solution. Optical computing — using light instead of electricity to handle complex calculations — offers a way to dramatically boost speed and efficiency. One promising approach involves optical diffraction operators, thin plate-like structures that perform mathematical operations as light passes through them. These systems can process many signals at once with low energy use. However, maintaining the stable, coherent light needed for such computations at speeds above 10 GHz has proven extremely difficult.
    To overcome this challenge, a team led by Professor Hongwei Chen at Tsinghua University in China developed a groundbreaking device known as the Optical Feature Extraction Engine, or OFE2. Their work, published in Advanced Photonics Nexus, demonstrates a new way to perform high-speed optical feature extraction suitable for multiple real-world applications.
    How OFE2 Prepares and Processes Data
    A key advance in OFE2 is its innovative data preparation module. Supplying fast, parallel optical signals to the core optical components without losing phase stability is one of the toughest problems in the field. Fiber-based systems often introduce unwanted phase fluctuations when splitting and delaying light. The Tsinghua team solved this by designing a fully integrated on-chip system with adjustable power splitters and precise delay lines. This setup converts serial data into several synchronized optical channels. In addition, an integrated phase array allows OFE2 to be easily reconfigured for different computational tasks.
    Once prepared, the optical signals pass through a diffraction operator that performs the feature extraction. This process is similar to a matrix-vector multiplication, where light waves interact to create focused “bright spots” at specific output points. By fine-tuning the phase of the input light, these spots can be directed toward chosen output ports, enabling OFE2 to capture subtle variations in the input data over time.
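    In sketch form, the computation looks like the snippet below: an assumed random complex matrix stands in for the fixed diffraction operator, a tunable phase array encodes the input, and the detected output-port intensities are the extracted features. This illustrates the matrix-vector analogy only, not the device's actual transfer function.

    ```python
    # Illustrative only: diffraction layer as a fixed complex matrix W; the
    # detected output-port intensities |W x|^2 are the extracted features.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out = 8, 4

    # Assumed random transfer matrix standing in for the diffraction operator.
    W = rng.normal(size=(n_out, n_in)) + 1j * rng.normal(size=(n_out, n_in))

    signal = rng.normal(size=n_in)                         # input data amplitudes
    phases = np.exp(1j * rng.uniform(0, 2 * np.pi, n_in))  # tunable phase array

    x = signal * phases                # phase-encoded optical input field
    features = np.abs(W @ x) ** 2      # "bright spots" read out at output ports
    print(features.round(3))
    ```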

    Record-Breaking Optical Performance
    Operating at an impressive 12.5 GHz, OFE2 achieves a single matrix-vector multiplication in just 250.5 picoseconds — the fastest known result for this type of optical computation. “We firmly believe this work provides a significant benchmark for advancing integrated optical diffraction computing to exceed a 10 GHz rate in real-world applications,” says Chen.
    The research team tested OFE2 across multiple domains. In image processing, it successfully extracted edge features from visual data, creating paired “relief and engraving” maps that improved image classification and increased accuracy in tasks such as identifying organs in CT scans. Systems using OFE2 required fewer electronic parameters than standard AI models, proving that optical preprocessing can make hybrid AI networks both faster and more efficient.
    The team also applied OFE2 to digital trading, where it processed live market data to generate profitable buy and sell actions. After being trained with optimized strategies, OFE2 converted incoming price signals directly into trading decisions, achieving consistent returns. Because these calculations happen at the speed of light, traders could act on opportunities with almost no delay.
    Lighting the Way Toward the Future of AI
    Together, these achievements signal a major shift in computing. By moving the most demanding parts of AI processing from power-hungry electronic chips to lightning-fast photonic systems, technologies like OFE2 could usher in a new era of real-time, low-energy AI. “The advancements presented in our study push integrated diffraction operators to a higher rate, providing support for compute-intensive services in areas such as image recognition, assisted healthcare, and digital finance. We look forward to collaborating with partners who have data-intensive computational needs,” concludes Chen.

  • AI restores James Webb telescope’s crystal-clear vision

    Two PhD students from Sydney have helped restore the sharp vision of the world’s most powerful space observatory without ever leaving the ground. Louis Desdoigts, now a postdoctoral researcher at Leiden University in the Netherlands, and his colleague Max Charles celebrated their achievement with tattoos of the instrument they repaired inked on their arms — an enduring reminder of their contribution to space science.
    A Groundbreaking Software Fix
    Researchers at the University of Sydney developed an innovative software solution that corrected blurriness in images captured by NASA’s multi-billion-dollar James Webb Space Telescope (JWST). Their breakthrough restored the full precision of one of the telescope’s key instruments, achieving what would once have required a costly astronaut repair mission.
    This success builds on the JWST’s only Australian-designed component, the Aperture Masking Interferometer (AMI). Created by Professor Peter Tuthill from the University of Sydney’s School of Physics and the Sydney Institute for Astronomy, the AMI allows astronomers to capture ultra-high-resolution images of stars and exoplanets. It works by combining light from different sections of the telescope’s main mirror, a process known as interferometry. When the JWST began its scientific operations, researchers noticed that AMI’s performance was being affected by faint electronic distortions in its infrared camera detector. These distortions caused subtle image fuzziness, reminiscent of the Hubble Space Telescope’s well-known early optical flaw that had to be corrected through astronaut spacewalks.
    Solving a Space Problem from Earth
    Instead of attempting a physical repair, PhD students Louis Desdoigts and Max Charles, working with Professor Tuthill and Associate Professor Ben Pope (at Macquarie University), devised a purely software-based calibration technique to fix the distortion from Earth.
    Their system, called AMIGO (Aperture Masking Interferometry Generative Observations), uses advanced simulations and neural networks to replicate how the telescope’s optics and electronics function in space. By pinpointing an issue where electric charge slightly spreads to neighboring pixels — a phenomenon called the brighter-fatter effect — the team designed algorithms that digitally corrected the images, fully restoring AMI’s performance.
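    A toy model conveys the underlying idea: charge leaking into neighboring pixels acts like a small blur kernel, which can be approximately inverted. The first-order sketch below is illustrative only; AMIGO's actual calibration uses detailed simulations and neural networks, and the leak fraction here is invented for the example.

    ```python
    # Illustrative only: charge leaking to neighbors modeled as a blur kernel A,
    # corrected to first order with A_inv ~ 2I - A (valid when the leak is small).
    import numpy as np
    from scipy.ndimage import convolve

    leak = 0.05  # invented leak fraction for the example
    kernel = np.array([[0.0,  leak,          0.0],
                       [leak, 1 - 4 * leak,  leak],
                       [0.0,  leak,          0.0]])

    image = np.zeros((15, 15))
    image[7, 7] = 1000.0                                   # a point-like star

    blurred = convolve(image, kernel, mode="constant")     # detector effect
    corrected = 2 * blurred - convolve(blurred, kernel, mode="constant")

    print("peak flux: true 1000.0, blurred", blurred[7, 7],
          ", corrected", corrected[7, 7])
    ```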

    “Instead of sending astronauts to bolt on new parts, they managed to fix things with code,” Professor Tuthill said. “It’s a brilliant example of how Australian innovation can make a global impact in space science.”
    Sharper Views of the Universe
    The results have been striking. With AMIGO in use, the James Webb Space Telescope has delivered its clearest images yet, capturing faint celestial objects in unprecedented detail. This includes direct images of a dim exoplanet and a red-brown dwarf orbiting the nearby star HD 206893, about 133 light years from Earth.
    A related study led by Max Charles further demonstrated AMI’s renewed precision. Using the improved calibration, the telescope produced sharp images of a black hole jet, the fiery surface of Jupiter’s moon Io, and the dust-filled stellar winds of WR 137 — showing that JWST can now probe deeper and clearer than before.
    “This work brings JWST’s vision into even sharper focus,” Dr. Desdoigts said. “It’s incredibly rewarding to see a software solution extend the telescope’s scientific reach — and to know it was possible without ever leaving the lab.”
    Dr. Desdoigts has now landed a prestigious postdoctoral research position at Leiden University in the Netherlands.
    Both studies have been published on the preprint server arXiv. Dr. Desdoigts’ paper has been peer-reviewed and will shortly be published in Publications of the Astronomical Society of Australia. This release coincides with the latest round of James Webb Space Telescope General Observer, Survey and Archival Research programs.
    Associate Professor Benjamin Pope, who presented these findings at SXSW Sydney, said the research team was keen to get the new code into the hands of researchers working on JWST as soon as possible.