More stories

  •

    Stanford’s tiny eye chip helps the blind see again

    A tiny wireless chip placed at the back of the eye, combined with a pair of advanced smart glasses, has partially restored vision to people suffering from an advanced form of age-related macular degeneration. In a clinical study led by Stanford Medicine and international collaborators, 27 of the 32 participants regained the ability to read within a year of receiving the implant.
    With the help of digital features such as adjustable zoom and enhanced contrast, some participants achieved visual sharpness comparable to 20/42 vision.
    The study’s findings were published on Oct. 20 in the New England Journal of Medicine.
    A Milestone in Restoring Functional Vision
    The implant, named PRIMA and developed at Stanford Medicine, is the first prosthetic eye device to restore usable vision to individuals with otherwise untreatable vision loss. The technology enables patients to recognize shapes and patterns, a level of vision known as form vision.
    “All previous attempts to provide vision with prosthetic devices resulted in basically light sensitivity, not really form vision,” said Daniel Palanker, PhD, a professor of ophthalmology and a co-senior author of the paper. “We are the first to provide form vision.”
    The research was co-led by José-Alain Sahel, MD, professor of ophthalmology at the University of Pittsburgh School of Medicine, with Frank Holz, MD, of the University of Bonn in Germany, serving as lead author.

    How the PRIMA System Works
    The system includes two main parts: a small camera attached to a pair of glasses and a wireless chip implanted in the retina. The camera captures visual information and projects it through infrared light to the implant, which converts it into electrical signals. These signals substitute for the damaged photoreceptors that normally detect light and send visual data to the brain.
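The chain described above (camera frame in, per-pixel drive intensities out) can be sketched in a few lines of Python. This is an illustrative mock-up, not the actual PRIMA firmware; the grid size, contrast handling, and [0, 1] drive range are assumptions made for the sketch:

```python
import numpy as np

def frame_to_stimulation(frame: np.ndarray, grid: int = 20,
                         contrast: float = 1.0) -> np.ndarray:
    """Downsample a grayscale frame (values in [0, 1]) to the implant's
    pixel grid and scale contrast around the mean brightness."""
    h, w = frame.shape
    ys = np.linspace(0, h, grid + 1, dtype=int)
    xs = np.linspace(0, w, grid + 1, dtype=int)
    out = np.empty((grid, grid))
    for i in range(grid):
        for j in range(grid):
            # Average the camera pixels that fall on one implant pixel.
            out[i, j] = frame[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
    # Contrast enhancement around the mean, clipped to the valid drive
    # range; in the real system these values would set the infrared
    # intensities projected onto the photovoltaic pixels.
    return np.clip((out - out.mean()) * contrast + out.mean(), 0.0, 1.0)

frame = np.random.default_rng(0).random((240, 320))  # mock camera frame
drive = frame_to_stimulation(frame, grid=20, contrast=2.0)
print(drive.shape)  # (20, 20)
```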
    The PRIMA project represents decades of scientific effort, involving numerous prototypes, animal testing, and an initial human trial.
    Palanker first conceived the idea two decades ago while working with ophthalmic lasers to treat eye disorders. “I realized we should use the fact that the eye is transparent and deliver information by light,” he said.
    “The device we imagined in 2005 now works in patients remarkably well.”
    Replacing Lost Photoreceptors
    Participants in the latest trial had an advanced stage of age-related macular degeneration known as geographic atrophy, which progressively destroys central vision. This condition affects over 5 million people worldwide and is the leading cause of irreversible blindness among older adults.

    In macular degeneration, the light-sensitive photoreceptor cells in the central retina deteriorate, leaving only limited peripheral vision. However, many of the retinal neurons that process visual information remain intact, and PRIMA capitalizes on these surviving structures.
    The implant, measuring just 2 by 2 millimeters, is placed in the area of the retina where photoreceptors have been lost. Unlike natural photoreceptors that respond to visible light, the chip detects infrared light emitted from the glasses.
    “The projection is done by infrared because we want to make sure it’s invisible to the remaining photoreceptors outside the implant,” Palanker said.
    Combining Natural and Artificial Vision
    This design allows patients to use both their natural peripheral vision and the new prosthetic central vision simultaneously, improving their ability to orient themselves and move around.
    “The fact that they see simultaneously prosthetic and peripheral vision is important because they can merge and use vision to its fullest,” Palanker said.
    Since the implant is photovoltaic — relying solely on light to generate electrical current — it operates wirelessly and can be safely placed beneath the retina. Earlier versions of artificial eye devices required external power sources and cables that extended outside the eye.
    Reading Again
    The new trial included 38 patients older than 60 who had geographic atrophy due to age-related macular degeneration and worse than 20/320 vision in at least one eye.
    Four to five weeks after implantation of the chip in one eye, patients began using the glasses. Though some patients could make out patterns immediately, all patients’ visual acuity improved over months of training.
    “It may take several months of training to reach top performance — which is similar to what cochlear implants require to master prosthetic hearing,” Palanker said.
    Of the 32 patients who completed the one-year trial, 27 could read and 26 demonstrated clinically meaningful improvement in visual acuity, which was defined as the ability to read at least two additional lines on a standard eye chart. On average, participants’ visual acuity improved by 5 lines; one improved by 12 lines.
    The participants used the prosthesis in their daily lives to read books, food labels and subway signs. The glasses allowed them to adjust contrast and brightness and magnify up to 12 times. Two-thirds reported medium to high user satisfaction with the device.
    Nineteen participants experienced side effects, including ocular hypertension (high pressure in the eye), tears in the peripheral retina and subretinal hemorrhage (blood collecting under the retina). None were life-threatening, and almost all resolved within two months.
    Future Visions
    For now, the PRIMA device provides only black-and-white vision, with no shades in between, but Palanker is developing software that will soon enable the full range of grayscale.
    “Number one on the patients’ wish list is reading, but number two, very close behind, is face recognition,” he said. “And face recognition requires grayscale.”
    He is also engineering chips that will offer higher resolution vision. Resolution is limited by the size of pixels on the chip. Currently, the pixels are 100 microns wide, with 378 pixels on each chip. The new version, already tested in rats, may have pixels as small as 20 microns wide, with 10,000 pixels on each chip.
    Palanker also wants to test the device for other types of blindness caused by lost photoreceptors.
    “This is the first version of the chip, and resolution is relatively low,” he said. “The next generation of the chip, with smaller pixels, will have better resolution and be paired with sleeker-looking glasses.”
    A chip with 20-micron pixels could give a patient 20/80 vision, Palanker said. “But with electronic zoom, they could get close to 20/20.”
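The link between pixel pitch and Snellen acuity can be checked with a back-of-the-envelope calculation, assuming the standard approximations that one degree of visual angle spans roughly 288 micrometers of retina and that 20/20 vision corresponds to resolving one arcminute:

```python
# Rough Snellen-acuity estimate from implant pixel pitch.
# Assumes ~288 micrometers of retina per degree of visual angle and
# that 20/20 corresponds to 1 arcminute; figures are illustrative.

RETINA_UM_PER_DEGREE = 288.0

def snellen_from_pixel_pitch(pitch_um: float) -> str:
    degrees = pitch_um / RETINA_UM_PER_DEGREE
    arcmin = degrees * 60.0            # visual angle per pixel, in arcmin
    denominator = round(20 * arcmin)   # 20/20 corresponds to 1 arcmin
    return f"20/{denominator}"

print(snellen_from_pixel_pitch(100))  # current 100-micron pixels -> 20/417
print(snellen_from_pixel_pitch(20))   # next-generation pixels -> 20/83
```

The 20-micron figure lands near the 20/80 estimate quoted above; electronic zoom improves on it by magnifying the image rather than shrinking the hardware.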
    Researchers from the University of Bonn, Germany; Hôpital Fondation A. de Rothschild, France; Moorfields Eye Hospital and University College London; Ludwigshafen Academic Teaching Hospital; University of Rome Tor Vergata; Medical Center Schleswig-Holstein, University of Lübeck; L’Hôpital Universitaire de la Croix-Rousse and Université Claude Bernard Lyon 1; Azienda Ospedaliera San Giovanni Addolorata; Centre Monticelli Paradis and L’Université d’Aix-Marseille; Intercommunal Hospital of Créteil and Henri Mondor Hospital; Knappschaft Hospital Saar; Nantes University; University Eye Hospital Tübingen; University of Münster Medical Center; Bordeaux University Hospital; Hôpital National des 15-20; Erasmus University Medical Center; University of Ulm; Science Corp.; University of California, San Francisco; University of Washington; University of Pittsburgh School of Medicine; and Sorbonne Université contributed to the study.
    The study was supported by funding from Science Corp., the National Institute for Health and Care Research, Moorfields Eye Hospital National Health Service Foundation Trust, and University College London Institute of Ophthalmology.

  •

    AI turns X-rays into time machines for arthritis care

    A new artificial intelligence system developed by researchers at the University of Surrey can forecast what a patient’s knee X-ray might look like one year in the future. This breakthrough could reshape how millions of people living with osteoarthritis understand and manage their condition.
    The research, presented at the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2025), describes a powerful AI model capable of generating realistic “future” X-rays along with a personalized risk score that estimates disease progression. Together, these outputs give doctors and patients a visual roadmap of how osteoarthritis may evolve over time.
    A Major Step Forward in Predicting Osteoarthritis Progression
    Osteoarthritis, a degenerative joint disorder that affects more than 500 million people globally, is the leading cause of disability among older adults. The Surrey system was trained on nearly 50,000 knee X-rays from about 5,000 patients, making it one of the largest datasets of its kind. It can predict disease progression roughly nine times faster than similar AI tools and operates with greater efficiency and accuracy. Researchers believe this combination of speed and precision could help integrate the technology into clinical practice more quickly.
    David Butler, the study’s lead author from the University of Surrey’s Centre for Vision, Speech and Signal Processing (CVSSP) and the Institute for People-Centred AI, explained:
    “We’re used to medical AI tools that give a number or a prediction, but not much explanation. Our system not only predicts the likelihood of your knee getting worse — it actually shows you a realistic image of what that future knee could look like. Seeing the two X-rays side by side — one from today and one for next year — is a powerful motivator. It helps doctors act sooner and gives patients a clearer picture of why sticking to their treatment plan or making lifestyle changes really matters. We think this can be a turning point in how we communicate risk and improve osteoarthritic knee care and other related conditions.”
    How the System Visualizes Change
    At the core of the new system is an advanced generative model known as a diffusion model. It creates a “future” version of a patient’s X-ray and identifies 16 key points in the joint to highlight areas being tracked for potential changes. This feature enhances transparency by showing clinicians exactly which parts of the knee the AI is monitoring, helping build confidence and understanding in its predictions.
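At a high level, a diffusion model generates an image by starting from random noise and applying a learned denoising step many times. The sketch below shows that generic sampling loop with a toy stand-in denoiser; Surrey's trained network and its keypoint head are not public, so nothing here is their implementation:

```python
import numpy as np

def sample(denoise_step, steps=50, shape=(64, 64), seed=0):
    """Generic diffusion sampling: start from pure noise and apply the
    denoising step repeatedly, from the noisiest timestep to the last."""
    x = np.random.default_rng(seed).normal(size=shape)
    for t in reversed(range(steps)):
        x = denoise_step(x, t)  # each call removes a little noise
    return x

# Toy stand-in denoiser: pulls the image toward a flat gray target,
# mimicking how a trained network pulls noise toward a plausible X-ray.
target = np.full((64, 64), 0.5)
future_xray = sample(lambda x, t: x + 0.2 * (target - x))
print(future_xray.shape)  # (64, 64), noise largely removed
```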

    The Surrey team believes their approach could be adapted for other chronic diseases. Similar AI tools might one day predict lung damage in smokers or track the progression of heart disease, providing the same kind of visual insights and early warning that this system offers for osteoarthritis. Researchers are now seeking collaborations to bring the technology into hospitals and everyday healthcare use.
    Greater Transparency and Early Intervention
    Gustavo Carneiro, Professor of AI and Machine Learning at Surrey’s Centre for Vision, Speech and Signal Processing (CVSSP), said:
    “Earlier AI systems could estimate the risk of osteoarthritis progression, but they were often slow, opaque and limited to numbers rather than clear images. Our approach takes a big step forward by generating realistic future X-rays quickly and by pinpointing the areas of the joint most likely to change. That extra visibility helps clinicians identify high-risk patients sooner and personalize their care in ways that were not previously practical.”

  •

    Quantum crystals could spark the next tech revolution

    Picture a future where factories can create materials and chemical compounds more quickly, at lower cost, and with fewer production steps. Imagine your laptop processing complex data in seconds or a supercomputer learning and adapting as efficiently as the human brain. These possibilities depend on one fundamental factor: how electrons behave inside materials. Researchers at Auburn University have now developed a groundbreaking type of material that allows scientists to precisely control these tiny charged particles. Their findings, published in ACS Materials Letters, describe how the team achieved adjustable coupling between isolated-metal molecular complexes, called solvated electron precursors, where electrons are not tied to specific atoms but instead move freely within open spaces.
    Electrons are central to nearly every chemical and technological process. They drive energy transfer, bonding, and electrical conductivity, serving as the foundation for both chemical synthesis and modern electronics. In chemical reactions, electrons enable redox processes, bond formation, and catalytic activity. In technology, managing how electrons move and interact underpins everything from electronic circuits and AI systems to solar cells and quantum computers. Typically, electrons are confined to atoms, which restricts their potential uses. However, in materials known as electrides, electrons move independently, opening the door to remarkable new capabilities.
    “By learning how to control these free electrons, we can design materials that do things nature never intended,” explains Dr. Evangelos Miliordos, Associate Professor of Chemistry at Auburn and senior author of the study, which was based on advanced computational modeling.
    To achieve this, the Auburn team created innovative material structures called Surface Immobilized Electrides by attaching solvated electron precursors to stable surfaces such as diamond and silicon carbide. This configuration makes the electronic characteristics of the electrides both durable and tunable. By changing how the molecules are arranged, electrons can either cluster into isolated “islands” that behave like quantum bits for advanced computing or spread into extended “seas” that promote complex chemical reactions.
    This versatility is what gives the discovery its transformative potential. One version could lead to the development of powerful quantum computers capable of solving problems beyond the reach of today’s technology. Another could provide the basis for cutting-edge catalysts that speed up essential chemical reactions, potentially revolutionizing how fuels, pharmaceuticals, and industrial materials are produced.
    “As our society pushes the limits of current technology, the demand for new kinds of materials is exploding,” says Dr. Marcelo Kuroda, Associate Professor of Physics at Auburn. “Our work shows a new path to materials that offer both opportunities for fundamental investigations on interactions in matter as well as practical applications.”
    Earlier versions of electrides were unstable and difficult to scale. By depositing them directly on solid surfaces, the Auburn team has overcome these barriers, proposing a family of materials structures that could move from theoretical models to real-world devices. “This is fundamental science, but it has very real implications,” says Dr. Konstantin Klyukin, Assistant Professor of Materials Engineering at Auburn. “We’re talking about technologies that could change the way we compute and the way we manufacture.”
    The theoretical study was led by faculty across chemistry, physics, and materials engineering at Auburn University. “This is just the beginning,” Miliordos adds. “By learning how to tame free electrons, we can imagine a future with faster computers, smarter machines, and new technologies we haven’t even dreamed of yet.”
    The study, “Electrides with Tunable Electron Delocalization for Applications in Quantum Computing and Catalysis,” was also coauthored by graduate students Andrei Evdokimov and Valentina Nesterova. It was supported by the U.S. National Science Foundation and Auburn University computing resources.

  •

    Scientists build artificial neurons that work like real ones

    Engineers at the University of Massachusetts Amherst have developed an artificial neuron whose electrical activity closely matches that of natural brain cells. The innovation builds on the team’s earlier research using protein nanowires made from electricity-producing bacteria. This new approach could pave the way for computers that run with the efficiency of living systems and may even connect directly with biological tissue.
    “Our brain processes an enormous amount of data,” says Shuai Fu, a graduate student in electrical and computer engineering at UMass Amherst and lead author of the study published in Nature Communications. “But its power usage is very, very low, especially compared to the amount of electricity it takes to run a Large Language Model, like ChatGPT.”
    The human body operates with remarkable electrical efficiency — more than 100 times greater than that of a typical computer circuit. The brain alone contains billions of neurons, specialized cells that send and receive electrical signals throughout the body. Performing a task such as writing a story uses only about 20 watts of power in the human brain, whereas a large language model can require more than a megawatt to accomplish the same thing.
    Engineers have long sought to design artificial neurons for more energy-efficient computing, but reducing their voltage to match biological levels has been a major obstacle. “Previous versions of artificial neurons used 10 times more voltage — and 100 times more power — than the one we have created,” says Jun Yao, associate professor of electrical and computer engineering at UMass Amherst and the paper’s senior author. Because of this, earlier designs were far less efficient and couldn’t connect directly with living neurons, which are sensitive to stronger electrical signals.
    “Ours register only 0.1 volts, which is about the same as the neurons in our bodies,” says Yao.
    There is a wide range of applications for Fu and Yao’s new neuron, from redesigning computers along bio-inspired, far more efficient principles to electronic devices that could speak to our bodies directly.
    “We currently have all kinds of wearable electronic sensing systems,” says Yao, “but they are comparatively clunky and inefficient. Every time they sense a signal from our body, they have to electrically amplify it so that a computer can analyze it. That intermediate step of amplification increases both power consumption and the circuit’s complexity, but sensors built with our low-voltage neurons could do without any amplification at all.”
    The secret ingredient in the team’s new low-powered neuron is a protein nanowire synthesized from the remarkable bacterium Geobacter sulfurreducens, which also has the superpower of producing electricity. Yao, along with various colleagues, has used the bacteria’s protein nanowires to design a whole host of extraordinarily efficient devices: a biofilm, powered by sweat, that can run personal electronics; an “electronic nose” that can sniff out disease; and a device, which can be built of nearly anything, that can harvest electricity from thin air itself.
    This research was supported by the Army Research Office, the U.S. National Science Foundation, the National Institutes of Health and the Alfred P. Sloan Foundation.

  •

    This 250-year-old equation just got a quantum makeover

    How likely you think something is to happen depends on what you already believe about the situation. This simple idea forms the basis of Bayes’ rule, a mathematical approach to calculating probabilities first introduced in 1763. Now, an international group of scientists has demonstrated how Bayes’ rule can also apply in the quantum realm.
    “I would say it is a breakthrough in mathematical physics,” said Professor Valerio Scarani, Deputy Director and Principal Investigator at the Centre for Quantum Technologies, and member of the team. His co-authors on the work published on 28 August 2025 in Physical Review Letters are Assistant Professor Ge Bai at the Hong Kong University of Science and Technology in China, and Professor Francesco Buscemi at Nagoya University in Japan.
    “Bayes’ rule has been helping us make smarter guesses for 250 years. Now we have taught it some quantum tricks,” said Prof Buscemi.
    Although other researchers had previously suggested quantum versions of Bayes’ rule, this team is the first to derive a true quantum Bayes’ rule based on a core physical principle.
    Conditional probability
    Bayes’ rule takes its name from Thomas Bayes, who described his method for calculating conditional probabilities in “An Essay Towards Solving a Problem in the Doctrine of Chances.”
    Imagine someone who tests positive for the flu. They might have suspected illness already, but this new result changes their assessment of the situation. Bayes’ rule provides a systematic way to update that belief, factoring in the likelihood of the test being wrong as well as the person’s prior assumptions.
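The flu-test update can be made concrete in a few lines of Python; the prior and the test's error rates below are invented purely for illustration:

```python
# Bayes' rule for the flu-test example: update a prior belief given
# a positive result. All numbers are made up for illustration.

def bayes_update(prior: float, sensitivity: float, false_positive: float) -> float:
    """P(flu | positive test) via Bayes' rule."""
    evidence = sensitivity * prior + false_positive * (1 - prior)
    return sensitivity * prior / evidence

# A 10% prior belief, a test that catches 90% of true cases but
# falsely flags 5% of healthy people:
posterior = bayes_update(prior=0.10, sensitivity=0.90, false_positive=0.05)
print(f"{posterior:.2f}")  # belief rises from 0.10 to 0.67
```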

    The rule treats probabilities as measures of belief rather than absolute facts. This interpretation has sparked debate among statisticians, with some arguing that probability should represent objective frequency rather than subjective confidence. Still, when uncertainty and belief play a role, Bayes’ rule is widely recognized as a rational framework for decision-making. It underpins countless applications today, from medical testing and weather forecasting to data science and machine learning.
    Principle of minimum change
    Updating probabilities with Bayes’ rule obeys the principle of minimum change. Mathematically, the principle minimizes the distance between the joint probability distributions representing the initial and the updated belief. Intuitively, this is the idea that for any new piece of information, beliefs are updated in the smallest possible way that is compatible with the new facts. In the case of the flu test, for example, a negative test would not imply that the person is healthy, but rather that they are less likely to have the flu.
    In their work, Prof Scarani, who is also from NUS Department of Physics, Asst Prof Bai, and Prof Buscemi began with a quantum analogue to the minimum change principle. They quantified change in terms of quantum fidelity, which is a measure of the closeness between quantum states.
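For pure states, the fidelity they used reduces to the squared overlap between state vectors, which is easy to compute directly. This is a minimal sketch of the closeness measure only; the paper works with the general mixed-state fidelity:

```python
import numpy as np

def fidelity_pure(psi: np.ndarray, phi: np.ndarray) -> float:
    """Fidelity |<psi|phi>|^2 between two pure quantum states."""
    return abs(np.vdot(psi, phi)) ** 2

ket0 = np.array([1.0, 0.0])               # |0>
plus = np.array([1.0, 1.0]) / np.sqrt(2)  # |+>, equal superposition
print(round(fidelity_pure(ket0, ket0), 3))  # 1.0 (identical states)
print(round(fidelity_pure(ket0, plus), 3))  # 0.5 (half overlap)
```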
    Researchers had long expected that a quantum Bayes’ rule should exist, because quantum states define probabilities. For example, the quantum state of a particle gives the probability of finding it at different locations. The goal is to determine the whole quantum state, but a measurement finds the particle at only one location. This new information then updates the belief, boosting the probability around that location.
    The team derived their quantum Bayes’ rule by maximizing the fidelity between two objects that represent the forward and the reverse process, in analogy with a classical joint probability distribution. Maximizing fidelity is equivalent to minimizing change. They found in some cases their equations matched the Petz recovery map, which was proposed by Dénes Petz in the 1980s and was later identified as one of the most likely candidates for the quantum Bayes’ rule based just on its properties.
    “This is the first time we have derived it from a higher principle, which could be a validation for using the Petz map,” said Prof Scarani. The Petz map has potential applications in quantum computing for tasks such as quantum error correction and machine learning. The team plans to explore whether applying the minimum change principle to other quantum measures might reveal other solutions.

  •

    90% of science is lost. This new AI just found it

    Most scientific data never reach their full potential to drive new discoveries.
    Out of every 100 datasets produced, about 80 stay within the lab, 20 are shared but seldom reused, fewer than two meet FAIR standards, and only one typically leads to new findings.
    The consequences are significant: slower progress in cancer treatment, climate models that lack sufficient evidence, and studies that cannot be replicated.
    To change this, the open-science publisher Frontiers has introduced Frontiers FAIR² Data Management, described as the world’s first comprehensive, AI-powered research data service. It is designed to make data both reusable and properly credited by combining all essential steps — curation, compliance checks, AI-ready formatting, peer review, an interactive portal, certification, and permanent hosting — into one seamless process. The goal is to ensure that today’s research investments translate into faster advances in health, sustainability, and technology.
    FAIR² builds on the FAIR principles (Findable, Accessible, Interoperable and Reusable) with an expanded open framework that guarantees every dataset is AI-compatible and ethically reusable by both humans and machines. The FAIR² Data Management system is the first working implementation of this model, arriving at a moment when research output is growing rapidly and artificial intelligence is reshaping how discoveries are made. It turns high-level principles into real, scalable infrastructure with measurable impact.
    Dr. Kamila Markram, co-founder and CEO of Frontiers, explains:
    “Ninety percent of science vanishes into the void. With Frontiers FAIR² Data Management, no dataset and no discovery need ever be lost again — every contribution can now fuel progress, earn the credit it deserves, and unleash science.”
    AI at the Core

    Work that once required months of manual effort — from organizing and verifying datasets to generating metadata and publishable outputs — is now completed in minutes by the AI Data Steward, powered by Senscience, the Frontiers venture behind FAIR².
    Researchers who submit their data receive four integrated outputs: a certified Data Package, a peer-reviewed and citable Data Article, an Interactive Data Portal featuring visualizations and AI chat, and a FAIR² Certificate. Each element includes quality controls and clear summaries that make the data easier to understand for general users and more compatible across research disciplines.
    Together, these outputs ensure that every dataset is preserved, validated, citable, and reusable, helping accelerate discovery while giving researchers proper recognition. Frontiers FAIR² also enhances visibility and accessibility, supporting responsible reuse by scientists, policymakers, practitioners, communities, and even AI systems, allowing society to extract greater value from its investment in science.
    Flagship Pilot Datasets
    SARS-CoV-2 Variant Properties — Covering 3,800 spike protein variants, this dataset links structural predictions from AlphaFold2 and ESMFold with ACE2 binding and expression data. It offers a powerful resource for pandemic preparedness, enabling deeper understanding of variant behavior and fitness.
    Preclinical Brain Injury MRI — A harmonized dataset of 343 diffusion MRI scans from four research centers, standardized across protocols and aligned for comparability. It supports reproducible biomarker discovery, robust cross-site analysis, and advances in preclinical traumatic brain injury research.

    Environmental Pressure Indicators (1990-2050) — Combining observed data and modeled forecasts across 43 countries over six decades, this dataset tracks emissions, waste, population, and GDP. It underpins sustainability benchmarking and evidence-based climate policy planning.
    Indo-Pacific Atoll Biodiversity — Spanning 280 atolls across five regions, this dataset integrates biodiversity records, reef habitats, climate indicators, and human-use histories. It provides an unprecedented basis for ecological modeling, conservation prioritization, and cross-regional research on vulnerable island ecosystems.
    Researchers testing the pilots noted that Frontiers FAIR² not only preserves and shares data but also builds confidence in its reuse — through quality checks, clear summaries for non-specialists, and the reliability to combine datasets across disciplines, all while ensuring scientists receive credit.
    All pilot datasets comply with the FAIR² Open Specification, making them responsibly curated, reusable, and trusted for long-term human and machine use so today’s data can accelerate tomorrow’s solutions to society’s most pressing challenges.
    Recognition and Reuse
    Each reuse multiplies the value of the original dataset, ensuring that no discovery is wasted, every contribution can spark the next breakthrough, and researchers gain recognition for their work.
    Dr. Sean Hill, co-founder and CEO of Senscience, the Frontiers AI venture behind FAIR² Data Management, notes:
    “Science invests billions generating data, but most of it is lost — and researchers rarely get credit. With Frontiers FAIR², every dataset is cited, every scientist recognized — finally rewarding the essential work of data creation. That’s how cures, climate solutions, and new technologies will reach society faster — this is how we unleash science.”
    What Researchers Are Saying
    Dr. Ángel Borja, Principal Researcher, AZTI, Marine Research, Basque Research and Technology Alliance (BRTA):
    “I highly [recommend using] this kind of data curation and publication of articles, because you can generate information very quickly and it’s useful formatting for any end users.”
    Erik Schultes, Senior Researcher, Leiden Academic Centre for Drug Research (LACDR); FAIR Implementation Lead, GO FAIR Foundation:
    “Frontiers FAIR² captured the scientific aspects of the project perfectly.”
    Femke Heddema, Researcher and Health Data Systems Innovation Manager, PharmAccess:
    “Frontiers FAIR² makes the execution of FAIR principles smoother for researchers and digital health implementers, proving that making datasets like MomCare reusable doesn’t have to be complex. By enabling transparent, accessible, and actionable data, Frontiers FAIR² opens the door to new opportunities in health research.”
    Dr. Neil Harris, Professor in Residence, Department of Neurosurgery, Brain Injury Research Center, University of California, Los Angeles (UCLA):
    “Implementation of [Frontiers] FAIR² can provide an objective check on data for both missingness and quality that is useful on so many levels. These types of unbiased assessments and data summaries can aid understanding by non-domain experts to ultimately enhance data sharing. As the field progresses to using big data in more disparate sub-disciplines, these data checks and summaries will become crucial to maintaining a good grasp of how we might use and combine the multitude of already acquired data within our current analyses.”
    Maryann Martone, Chief Editor, Open Data Commons:
    “[Frontiers] FAIR² is one of the easiest and most effective ways to make data FAIR. Every PI wants their data to be findable, accessible, comparable, and reusable — in the lab, with collaborators, and across the scientific community. The real bottleneck has always been the time and effort required. [Frontiers] FAIR² dramatically lowers that barrier, putting truly FAIR data within reach for most labs.”
    Dr. Vincent Woon Kok Sin, Assistant Professor, Carbon Neutrality and Climate Change Thrust, Society Hub, The Hong Kong University of Science and Technology (HKUST):
    “[Frontiers] FAIR² makes our global waste dataset more visible and accessible, helping researchers worldwide who often struggle with scarce and fragmented data. I hope this will broaden collaboration and accelerate insights for sustainable waste management.”
    Dr. Sebastian Steibl, Postdoctoral Researcher, Naturalis Biodiversity Center and the University of Auckland:
    “True data accessibility goes beyond just uploading datasheets to a repository. It means making data easy to view, explore, and understand without necessarily requiring years of training. The [Frontiers] FAIR² platform, with an AI chatbot and interactive visual data exploration and summary tools, makes our biodiversity and environmental data broadly accessible and usable not just to scholars, but also practitioners, policymakers, and local community initiatives.”

  • in

    Quantum simulations that once needed supercomputers now run on laptops

    Picture diving deep into the quantum realm, where unimaginably small particles can exist and interact in more than a trillion possible ways at the same time.
    It’s as complex as it sounds. To understand these mind-bending systems and their countless configurations, physicists usually turn to powerful supercomputers or artificial intelligence for help.
    But what if many of those same problems could be handled by a regular laptop?
    Scientists have long believed this was theoretically possible, yet actually achieving it has proven far more difficult.
    Researchers at the University at Buffalo have now taken a major step forward. They have expanded a cost-effective computational technique known as the truncated Wigner approximation (TWA), a kind of physics shortcut that simplifies quantum mathematics, so it can handle systems once thought to demand enormous computing power.
    Just as significant, their approach — outlined in a study published in September in PRX Quantum, a journal of the American Physical Society — offers a practical, easy-to-use TWA framework that lets researchers input their data and obtain meaningful results within hours.
    “Our approach offers a significantly lower computational cost and a much simpler formulation of the dynamical equations,” says the study’s corresponding author, Jamir Marino, PhD, assistant professor of physics in the UB College of Arts and Sciences. “We think this method could, in the near future, become the primary tool for exploring these kinds of quantum dynamics on consumer-grade computers.”
    Marino, who joined UB this fall, began this work while at Johannes Gutenberg University Mainz in Germany. His co-authors include two of his former students there, Hossein Hosseinabadi and Oksana Chelpanova, the latter now a postdoctoral researcher in Marino’s lab at UB.

    The research received support from the National Science Foundation, the German Research Foundation, and the European Union.
    Taking a semiclassical approach
    Not every quantum system can be solved exactly; for many, doing so is impractical because the required computing power grows exponentially as the system becomes more complex.
    Instead, physicists often turn to what’s known as semiclassical physics — a middle-ground approach that keeps just enough quantum behavior to stay accurate, while discarding details that have little effect on the outcome.
    TWA is one such semiclassical approach that dates back to the 1970s, but is limited to isolated, idealized quantum systems where no energy is gained or lost.
    So Marino’s team expanded TWA to the messier systems found in the real world, where particles are constantly pushed and pulled by outside forces and leak energy into their surroundings, a regime known as dissipative spin dynamics.
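    The core TWA recipe is to encode quantum uncertainty in randomly sampled initial conditions and then evolve each sample with ordinary classical equations, averaging at the end. The sketch below is a hypothetical illustration of that idea for the simplest possible case, a single spin-1/2 precessing in a magnetic field; it is not code from the study, and it omits the dissipation the UB team's extension handles:

```python
import numpy as np

def twa_spin_precession(b_field, t_grid, n_traj=2000, seed=0):
    """Toy truncated Wigner approximation (TWA) for one spin-1/2
    initially polarized along +z, precessing in a magnetic field.

    Quantum fluctuations enter only through the sampled initial
    conditions; each trajectory then obeys the classical precession
    equation ds/dt = b x s."""
    rng = np.random.default_rng(seed)
    # Discrete Wigner sampling for the |up> state: s_z = +1 is fixed,
    # while the transverse components are independent +/-1 coin flips.
    s = np.empty((n_traj, 3))
    s[:, 0] = rng.choice([-1.0, 1.0], size=n_traj)
    s[:, 1] = rng.choice([-1.0, 1.0], size=n_traj)
    s[:, 2] = 1.0

    b = np.broadcast_to(b_field, s.shape)
    dt = t_grid[1] - t_grid[0]
    mean_sz = [s[:, 2].mean()]
    for _ in t_grid[1:]:
        # Heun (predictor-corrector) step of ds/dt = b x s for all trajectories
        k1 = np.cross(b, s)
        k2 = np.cross(b, s + dt * k1)
        s = s + 0.5 * dt * (k1 + k2)
        mean_sz.append(s[:, 2].mean())
    return np.array(mean_sz)

# Field along x over one full precession period
t = np.linspace(0.0, 2.0 * np.pi, 200)
sz = twa_spin_precession(np.array([1.0, 0.0, 0.0]), t)
```

    Averaging over trajectories recovers the expected oscillation of the spin's z-component, cos(|b|t); for a single spin the classical equations are linear, so the trajectory average reproduces the exact quantum result.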

    “Plenty of groups have tried to do this before us. It’s known that certain complicated quantum systems could be solved efficiently with a semiclassical approach,” Marino says. “However, the real challenge has been to make it accessible and easy to do.”
    Making quantum dynamics easy
    In the past, researchers looking to use TWA faced a wall of complexity. They had to re-derive the math from scratch each time they applied the method to a new quantum problem.
    So, Marino’s team turned what used to be pages of dense, nearly impenetrable math into a straightforward conversion table that translates a quantum problem into solvable equations.
    “Physicists can essentially learn this method in one day, and by about the third day, they are running some of the most complex problems we present in the study,” Chelpanova says.
    Saving supercomputers for the big problems
    The hope is that the new method will reserve supercomputing clusters and AI models for the truly complicated quantum systems: those that can’t be solved with a semiclassical approach and that have not just a trillion possible states, but more states than there are atoms in the universe.
    “A lot of what appears complicated isn’t actually complicated,” Marino says. “Physicists can use supercomputing resources on the systems that need a full-fledged quantum approach and solve the rest quickly with our approach.”

  • in

    Scientists create a magnetic lantern that moves like it’s alive

    Researchers have developed a polymer structure shaped like a “Chinese lantern” that can quickly change into more than a dozen curved, three-dimensional forms when it is compressed or twisted. This transformation can be triggered and controlled remotely with a magnetic field, opening possibilities for a wide range of practical uses.
    To build the lantern, the team began with a thin polymer sheet cut into a diamond-shaped parallelogram. They then sliced a series of evenly spaced lines through the center of the sheet, forming parallel ribbons connected by solid strips of material at the top and bottom. When the ends of these top and bottom strips are joined, the sheet naturally folds into a round, lantern-like shape.
    “This basic shape is, by itself, bistable,” says Jie Yin, corresponding author of a paper on the work and a professor of mechanical and aerospace engineering at North Carolina State University. “In other words, it has two stable forms. It is stable in its lantern shape, of course. But if you compress the structure, pushing down from the top, it will slowly begin to deform until it reaches a critical point, at which point it snaps into a second stable shape that resembles a spinning top. In the spinning-top shape, the structure has stored all of the energy you used to compress it. So, once you begin to pull up on the structure, you will reach a point where all of that energy is released at once, causing it to snap back into the lantern shape very quickly.”
    “We found that we could create many additional shapes by applying a twist to the shape, by folding the solid strips at the top or bottom of the lantern in or out, or any combination of those things,” says Yaoye Hong, first author of the paper and a former Ph.D. student at NC State who is now a postdoctoral researcher at the University of Pennsylvania. “Each of these variations is also multistable. Some can snap back and forth between two stable states. One has four stable states, depending on whether you’re compressing the structure, twisting the structure, or compressing and twisting the structure simultaneously.”
    The researchers also gave the lanterns magnetic control by attaching a thin magnetic film to the bottom strip. This allowed them to remotely twist or compress the structures using a magnetic field. They demonstrated several possible uses for the design, including a gentle magnetic gripper that can catch and release fish without harm, a flow-control filter that opens and closes underwater, and a compact shape that suddenly extends upward to reopen a collapsed tube. A video of the experiment is available below the article.
    To better understand and predict the lantern’s behavior, the team also created a mathematical model showing how the geometry of each angle affects both the final shape and how much elastic energy is stored in each stable configuration.
    “This model allows us to program the shape we want to create, how stable it is, and how powerful it can be when stored potential energy is allowed to snap into kinetic energy,” says Hong. “And all of those things are critical for creating shapes that can perform desired applications.”
    “Moving forward, these lantern units can be assembled into 2D and 3D architectures for broad applications in shape-morphing mechanical metamaterials and robotics,” says Yin. “We will be exploring that.”
    The paper, “Reprogrammable snapping morphogenesis in freestanding ribbon-cluster meta-units via stored elastic energy,” was published on Oct. 10 in the journal Nature Materials. The paper was co-authored by Caizhi Zhou and Haitao Qing, both Ph.D. students at NC State; and by Yinding Chi, a former Ph.D. student at NC State who is now a postdoctoral researcher at Penn.
    This work was done with support from the National Science Foundation under grants 2005374, 2369274 and 2445551.