More stories

  • Scientists build artificial neurons that work like real ones

    Engineers at the University of Massachusetts Amherst have developed an artificial neuron whose electrical activity closely matches that of natural brain cells. The innovation builds on the team’s earlier research using protein nanowires made from electricity-producing bacteria. This new approach could pave the way for computers that run with the efficiency of living systems and may even connect directly with biological tissue.
    “Our brain processes an enormous amount of data,” says Shuai Fu, a graduate student in electrical and computer engineering at UMass Amherst and lead author of the study published in Nature Communications. “But its power usage is very, very low, especially compared to the amount of electricity it takes to run a Large Language Model, like ChatGPT.”
    The human body operates with remarkable electrical efficiency — more than 100 times greater than that of a typical computer circuit. The brain alone contains billions of neurons, specialized cells that send and receive electrical signals throughout the body. Performing a task such as writing a story uses only about 20 watts of power in the human brain, whereas a large language model can require more than a megawatt to accomplish the same thing.
    Engineers have long sought to design artificial neurons for more energy-efficient computing, but reducing their voltage to match biological levels has been a major obstacle. “Previous versions of artificial neurons used 10 times more voltage — and 100 times more power — than the one we have created,” says Jun Yao, associate professor of electrical and computer engineering at UMass Amherst and the paper’s senior author. Because of this, earlier designs were far less efficient and couldn’t connect directly with living neurons, which are sensitive to stronger electrical signals.
    “Ours register only 0.1 volts, which is about the same as the neurons in our bodies,” says Yao.
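    A quick back-of-the-envelope check of those figures, as a minimal sketch: for a fixed load, electrical power scales with the square of voltage, so a tenfold drop in voltage corresponds to roughly a hundredfold drop in power. The 1-volt value for earlier artificial neurons below is an assumption for illustration, not a number from the study.

```python
# Minimal sketch (assumed values): power dissipated in a fixed resistive load
# scales as P = V^2 / R, so 10x less voltage means ~100x less power.
V_new = 0.1   # volts, as quoted for the new artificial neuron
V_old = 1.0   # volts, assumed for earlier artificial neuron designs

voltage_ratio = V_old / V_new       # 10x
power_ratio = voltage_ratio ** 2    # 100x, at the same load resistance
print(f"{voltage_ratio:.0f}x the voltage -> {power_ratio:.0f}x the power")
```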
    Fu and Yao’s new neuron has a wide range of potential applications, from redesigning computers along bio-inspired, far more efficient principles to electronic devices that could speak to our bodies directly.
    “We currently have all kinds of wearable electronic sensing systems,” says Yao, “but they are comparatively clunky and inefficient. Every time they sense a signal from our body, they have to electrically amplify it so that a computer can analyze it. That intermediate step of amplification increases both power consumption and the circuit’s complexity, but sensors built with our low-voltage neurons could do without any amplification at all.”
    The secret ingredient in the team’s new low-powered neuron is a protein nanowire synthesized from the remarkable bacterium Geobacter sulfurreducens, which also has the superpower of producing electricity. Yao, along with various colleagues, has used the bacterium’s protein nanowires to design a whole host of extraordinarily efficient devices: a sweat-powered biofilm that can run personal electronics; an “electronic nose” that can sniff out disease; and a device, built from nearly any material, that can harvest electricity from thin air itself.
    This research was supported by the Army Research Office, the U.S. National Science Foundation, the National Institutes of Health and the Alfred P. Sloan Foundation.

  • This 250-year-old equation just got a quantum makeover

    How likely you think something is to happen depends on what you already believe about the situation. This simple idea forms the basis of Bayes’ rule, a mathematical approach to calculating probabilities first introduced in 1763. Now, an international group of scientists has demonstrated how Bayes’ rule can also apply in the quantum realm.
    “I would say it is a breakthrough in mathematical physics,” said Professor Valerio Scarani, Deputy Director and Principal Investigator at the Centre for Quantum Technologies and a member of the team. His co-authors on the work, published on 28 August 2025 in Physical Review Letters, are Assistant Professor Ge Bai at the Hong Kong University of Science and Technology in China and Professor Francesco Buscemi at Nagoya University in Japan.
    “Bayes’ rule has been helping us make smarter guesses for 250 years. Now we have taught it some quantum tricks,” said Prof Buscemi.
    Although other researchers had previously suggested quantum versions of Bayes’ rule, this team is the first to derive a true quantum Bayes’ rule based on a core physical principle.
    Conditional probability
    Bayes’ rule takes its name from Thomas Bayes, who described his method for calculating conditional probabilities in “An Essay Towards Solving a Problem in the Doctrine of Chances.”
    Imagine someone who tests positive for the flu. They might have suspected illness already, but this new result changes their assessment of the situation. Bayes’ rule provides a systematic way to update that belief, factoring in the likelihood of the test being wrong as well as the person’s prior assumptions.
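    As a concrete illustration of that update, here is a minimal sketch in Python; the prior, test sensitivity, and false-positive rate are assumed numbers for illustration, not figures from the article.

```python
# Bayes' rule for the flu-test example (illustrative, assumed numbers).
prior = 0.10            # prior belief: 10% chance of having the flu
sensitivity = 0.90      # P(positive test | flu)
false_positive = 0.05   # P(positive test | no flu)

# Law of total probability: overall chance of a positive test
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Bayes' rule: P(flu | positive) = P(positive | flu) * P(flu) / P(positive)
posterior = sensitivity * prior / p_positive
print(f"Belief in having the flu rises from {prior:.0%} to {posterior:.0%}")
```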

    The rule treats probabilities as measures of belief rather than absolute facts. This interpretation has sparked debate among statisticians, with some arguing that probability should represent objective frequency rather than subjective confidence. Still, when uncertainty and belief play a role, Bayes’ rule is widely recognized as a rational framework for decision-making. It underpins countless applications today, from medical testing and weather forecasting to data science and machine learning.
    Principle of minimum change
    Calculations with Bayes’ rule obey the principle of minimum change. Mathematically, the principle minimizes the distance between the joint probability distributions of the initial and the updated belief. Intuitively, it is the idea that beliefs are updated in the smallest possible way that is still compatible with the new information. In the case of the flu test, for example, a negative test would not imply that the person is healthy, but rather that they are less likely to have the flu.
    In their work, Prof Scarani, who is also from NUS Department of Physics, Asst Prof Bai, and Prof Buscemi began with a quantum analogue to the minimum change principle. They quantified change in terms of quantum fidelity, which is a measure of the closeness between quantum states.
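    For readers unfamiliar with that closeness measure, the sketch below computes the standard (Uhlmann) quantum fidelity between two single-qubit density matrices. The two example states are assumptions chosen for illustration and are not taken from the paper.

```python
# Illustrative sketch: Uhlmann fidelity
# F(rho, sigma) = (Tr sqrt( sqrt(rho) sigma sqrt(rho) ))^2,
# a standard measure of closeness between quantum states.
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, sigma):
    sqrt_rho = sqrtm(rho)
    return float(np.real(np.trace(sqrtm(sqrt_rho @ sigma @ sqrt_rho))) ** 2)

# Two assumed single-qubit mixed states (diagonal, for simplicity)
rho = np.diag([0.95, 0.05]).astype(complex)
sigma = np.diag([0.80, 0.20]).astype(complex)

print(f"Fidelity = {fidelity(rho, sigma):.4f}")  # close to 1 for similar states
```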
    Researchers have long expected that a quantum Bayes’ rule should exist because quantum states define probabilities. For example, the quantum state of a particle provides the probability of it being found at different locations. The goal is to determine the whole quantum state, but the particle is only found at one location when a measurement is performed. This new information then updates the belief, boosting the probability around that location.
    The team derived their quantum Bayes’ rule by maximizing the fidelity between two objects that represent the forward and the reverse process, in analogy with a classical joint probability distribution. Maximizing fidelity is equivalent to minimizing change. They found in some cases their equations matched the Petz recovery map, which was proposed by Dénes Petz in the 1980s and was later identified as one of the most likely candidates for the quantum Bayes’ rule based just on its properties.
    “This is the first time we have derived it from a higher principle, which could be a validation for using the Petz map,” said Prof Scarani. The Petz map has potential applications in quantum computing for tasks such as quantum error correction and machine learning. The team plans to explore whether applying the minimum change principle to other quantum measures might reveal other solutions.

  • 90% of science is lost. This new AI just found it

    Most scientific data never reach their full potential to drive new discoveries.
    Out of every 100 datasets produced, about 80 stay within the lab, 20 are shared but seldom reused, fewer than two meet FAIR standards, and only one typically leads to new findings.
    The consequences are significant: slower progress in cancer treatment, climate models that lack sufficient evidence, and studies that cannot be replicated.
    To change this, the open-science publisher Frontiers has introduced Frontiers FAIR² Data Management, described as the world’s first comprehensive, AI-powered research data service. It is designed to make data both reusable and properly credited by combining all essential steps — curation, compliance checks, AI-ready formatting, peer review, an interactive portal, certification, and permanent hosting — into one seamless process. The goal is to ensure that today’s research investments translate into faster advances in health, sustainability, and technology.
    FAIR² builds on the FAIR principles (Findable, Accessible, Interoperable and Reusable) with an expanded open framework that guarantees every dataset is AI-compatible and ethically reusable by both humans and machines. The FAIR² Data Management system is the first working implementation of this model, arriving at a moment when research output is growing rapidly and artificial intelligence is reshaping how discoveries are made. It turns high-level principles into real, scalable infrastructure with measurable impact.
    Dr. Kamila Markram, co-founder and CEO of Frontiers, explains:
    “Ninety percent of science vanishes into the void. With Frontiers FAIR² Data Management, no dataset and no discovery need ever be lost again — every contribution can now fuel progress, earn the credit it deserves, and unleash science.”
    AI at the Core

    Work that once required months of manual effort — from organizing and verifying datasets to generating metadata and publishable outputs — is now completed in minutes by the AI Data Steward, powered by Senscience, the Frontiers venture behind FAIR².
    Researchers who submit their data receive four integrated outputs: a certified Data Package, a peer-reviewed and citable Data Article, an Interactive Data Portal featuring visualizations and AI chat, and a FAIR² Certificate. Each element includes quality controls and clear summaries that make the data easier to understand for general users and more compatible across research disciplines.
    Together, these outputs ensure that every dataset is preserved, validated, citable, and reusable, helping accelerate discovery while giving researchers proper recognition. Frontiers FAIR² also enhances visibility and accessibility, supporting responsible reuse by scientists, policymakers, practitioners, communities, and even AI systems, allowing society to extract greater value from its investment in science.
    Flagship Pilot Datasets
    SARS-CoV-2 Variant Properties — Covering 3,800 spike protein variants, this dataset links structural predictions from AlphaFold2 and ESMFold with ACE2 binding and expression data. It offers a powerful resource for pandemic preparedness, enabling deeper understanding of variant behavior and fitness.
    Preclinical Brain Injury MRI — A harmonized dataset of 343 diffusion MRI scans from four research centers, standardized across protocols and aligned for comparability. It supports reproducible biomarker discovery, robust cross-site analysis, and advances in preclinical traumatic brain injury research.
    Environmental Pressure Indicators (1990-2050) — Combining observed data and modeled forecasts across 43 countries over six decades, this dataset tracks emissions, waste, population, and GDP. It underpins sustainability benchmarking and evidence-based climate policy planning.
    Indo-Pacific Atoll Biodiversity — Spanning 280 atolls across five regions, this dataset integrates biodiversity records, reef habitats, climate indicators, and human-use histories. It provides an unprecedented basis for ecological modeling, conservation prioritization, and cross-regional research on vulnerable island ecosystems.
    Researchers testing the pilots noted that Frontiers FAIR² not only preserves and shares data but also builds confidence in its reuse — through quality checks, clear summaries for non-specialists, and the reliability to combine datasets across disciplines, all while ensuring scientists receive credit.
    All pilot datasets comply with the FAIR² Open Specification, making them responsibly curated, reusable, and trusted for long-term human and machine use so today’s data can accelerate tomorrow’s solutions to society’s most pressing challenges.
    Recognition and Reuse
    Each reuse multiplies the value of the original dataset, ensuring that no discovery is wasted, every contribution can spark the next breakthrough, and researchers gain recognition for their work.
    Dr. Sean Hill, co-founder and CEO of Senscience, the Frontiers AI venture behind FAIR² Data Management, notes:
    “Science invests billions generating data, but most of it is lost — and researchers rarely get credit. With Frontiers FAIR², every dataset is cited, every scientist recognized — finally rewarding the essential work of data creation. That’s how cures, climate solutions, and new technologies will reach society faster — this is how we unleash science.”
    What Researchers Are Saying
    Dr. Ángel Borja, Principal Researcher, AZTI, Marine Research, Basque Research and Technology Alliance (BRTA):
    “I highly [recommend using] this kind of data curation and publication of articles, because you can generate information very quickly and it’s useful formatting for any end users.”
    Erik Schultes, Senior Researcher, Leiden Academic Centre for Drug Research (LACDR); FAIR Implementation Lead, GO FAIR Foundation:
    “Frontiers FAIR² captured the scientific aspects of the project perfectly.”
    Femke Heddema, Researcher and Health Data Systems Innovation Manager, PharmAccess:
    “Frontiers FAIR² makes the execution of FAIR principles smoother for researchers and digital health implementers, proving that making datasets like MomCare reusable doesn’t have to be complex. By enabling transparent, accessible, and actionable data, Frontiers FAIR² opens the door to new opportunities in health research.”
    Dr. Neil Harris, Professor in Residence, Department of Neurosurgery, Brain Injury Research Center, University of California, Los Angeles (UCLA):
    “Implementation of [Frontiers] FAIR² can provide an objective check on data for both missingness and quality that is useful on so many levels. These types of unbiased assessments and data summaries can aid understanding by non-domain experts to ultimately enhance data sharing. As the field progresses to using big data in more disparate sub-disciplines, these data checks and summaries will become crucial to maintaining a good grasp of how we might use and combine the multitude of already acquired data within our current analyses.”
    Maryann Martone, Chief Editor, Open Data Commons:
    “[Frontiers] FAIR² is one of the easiest and most effective ways to make data FAIR. Every PI wants their data to be findable, accessible, comparable, and reusable — in the lab, with collaborators, and across the scientific community. The real bottleneck has always been the time and effort required. [Frontiers] FAIR² dramatically lowers that barrier, putting truly FAIR data within reach for most labs.”
    Dr. Vincent Woon Kok Sin, Assistant Professor, Carbon Neutrality and Climate Change Thrust, Society Hub, The Hong Kong University of Science and Technology (HKUST):
    “[Frontiers] FAIR² makes our global waste dataset more visible and accessible, helping researchers worldwide who often struggle with scarce and fragmented data. I hope this will broaden collaboration and accelerate insights for sustainable waste management.”
    Dr. Sebastian Steibl, Postdoctoral Researcher, Naturalis Biodiversity Center and the University of Auckland:
    “True data accessibility goes beyond just uploading datasheets to a repository. It means making data easy to view, explore, and understand without necessarily requiring years of training. The [Frontiers] FAIR² platform, with an AI chatbot and interactive visual data exploration and summary tools, makes our biodiversity and environmental data broadly accessible and usable not just to scholars, but also practitioners, policymakers, and local community initiatives.”

  • Quantum simulations that once needed supercomputers now run on laptops

    Picture diving deep into the quantum realm, where unimaginably small particles can exist and interact in more than a trillion possible ways at the same time.
    It’s as complex as it sounds. To understand these mind-bending systems and their countless configurations, physicists usually turn to powerful supercomputers or artificial intelligence for help.
    But what if many of those same problems could be handled by a regular laptop?
    Scientists have long believed this was theoretically possible, yet actually achieving it has proven far more difficult.
    Researchers at the University at Buffalo have now taken a major step forward. They have expanded a cost-effective computational technique known as the truncated Wigner approximation (TWA), a kind of physics shortcut that simplifies quantum mathematics, so it can handle systems once thought to demand enormous computing power.
    Just as significant, their approach — outlined in a study published in September in PRX Quantum, a journal of the American Physical Society — offers a practical, easy-to-use TWA framework that lets researchers input their data and obtain meaningful results within hours.
    “Our approach offers a significantly lower computational cost and a much simpler formulation of the dynamical equations,” says the study’s corresponding author, Jamir Marino, PhD, assistant professor of physics in the UB College of Arts and Sciences. “We think this method could, in the near future, become the primary tool for exploring these kinds of quantum dynamics on consumer-grade computers.”
    Marino, who joined UB this fall, began this work while at Johannes Gutenberg University Mainz in Germany. His co-authors include two of his former students there, Hossein Hosseinabadi and Oksana Chelpanova, the latter now a postdoctoral researcher in Marino’s lab at UB.

    The research received support from the National Science Foundation, the German Research Foundation, and the European Union.
    Taking a semiclassical approach
    Not every quantum system can be solved exactly; doing so quickly becomes impractical, because the required computing power grows exponentially as the system becomes more complex.
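    To see the scale of the problem, here is a short sketch of how the memory needed just to store one exact quantum state grows with system size; the choice of two-level (spin-1/2) particles is an assumed example, not a detail from the study.

```python
# Exact simulation: a system of N two-level particles has 2**N basis states,
# so even storing a single state vector quickly becomes impossible.
for n_particles in (10, 20, 30, 40, 50):
    n_states = 2 ** n_particles
    memory_gb = n_states * 16 / 1e9   # complex128 amplitudes, 16 bytes each
    print(f"{n_particles:2d} particles -> {n_states:.1e} states, "
          f"~{memory_gb:.1e} GB per state vector")
```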
    Instead, physicists often turn to what’s known as semiclassical physics — a middle-ground approach that keeps just enough quantum behavior to stay accurate, while discarding details that have little effect on the outcome.
    TWA is one such semiclassical approach. It dates back to the 1970s but has been limited to isolated, idealized quantum systems where no energy is gained or lost.
    So Marino’s team expanded TWA to the messier systems found in the real world, where particles are constantly pushed and pulled by outside forces and leak energy into their surroundings, a regime known as dissipative spin dynamics.

    “Plenty of groups have tried to do this before us. It’s known that certain complicated quantum systems could be solved efficiently with a semiclassical approach,” Marino says. “However, the real challenge has been to make it accessible and easy to do.”
    Making quantum dynamics easy
    In the past, researchers looking to use TWA faced a wall of complexity. They had to re-derive the math from scratch each time they applied the method to a new quantum problem.
    So, Marino’s team turned what used to be pages of dense, nearly impenetrable math into a straightforward conversion table that translates a quantum problem into solvable equations.
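    The conversion table itself is in the paper, but the general truncated Wigner workflow it feeds into can be sketched in a few lines: sample noisy “classical” initial conditions, evolve ordinary equations of motion for each sample, and average the trajectories. The single precessing spin below is a generic, assumed toy example, not the team’s formulation.

```python
# Generic TWA-style workflow (illustrative only): sample, evolve, average.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_steps, dt = 2000, 500, 0.01
omega = 2.0 * np.pi              # assumed precession frequency about the z-axis

# Spins start along +x; transverse components carry quantum noise of order 1/2
sx = np.ones(n_samples)
sy = rng.normal(0.0, 0.5, n_samples)

c, s = np.cos(omega * dt), np.sin(omega * dt)
mean_sx = []
for _ in range(n_steps):
    # exact rotation step for this simple linear toy model
    sx, sy = c * sx - s * sy, s * sx + c * sy
    mean_sx.append(sx.mean())

# For this non-interacting toy case the noise simply averages out; in interacting,
# dissipative problems the sampled trajectories capture genuine quantum fluctuations.
print(f"<Sx> after {n_steps * dt:.0f} time units: {mean_sx[-1]:.3f}")
```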
    “Physicists can essentially learn this method in one day, and by about the third day, they are running some of the most complex problems we present in the study,” Chelpanova says.
    Saving supercomputers for the big problems
    The hope is that the new method will save supercomputing clusters and AI models for the truly complicated quantum systems: those that can’t be solved with a semiclassical approach and that have not just a trillion possible states, but more states than there are atoms in the universe.
    “A lot of what appears complicated isn’t actually complicated,” Marino says. “Physicists can use supercomputing resources on the systems that need a full-fledged quantum approach and solve the rest quickly with our approach.”

  • Scientists create a magnetic lantern that moves like it’s alive

    Researchers have developed a polymer structure shaped like a “Chinese lantern” that can quickly change into more than a dozen curved, three-dimensional forms when it is compressed or twisted. This transformation can be triggered and controlled remotely with a magnetic field, opening possibilities for a wide range of practical uses.
    To build the lantern, the team began with a thin polymer sheet cut into a diamond-shaped parallelogram. They then sliced a series of evenly spaced lines through the center of the sheet, forming parallel ribbons connected by solid strips of material at the top and bottom. When the ends of these top and bottom strips are joined, the sheet naturally folds into a round, lantern-like shape.
    “This basic shape is, by itself, bistable,” says Jie Yin, corresponding author of a paper on the work and a professor of mechanical and aerospace engineering at North Carolina State University. “In other words, it has two stable forms. It is stable in its lantern shape, of course. But if you compress the structure, pushing down from the top, it will slowly begin to deform until it reaches a critical point, at which point it snaps into a second stable shape that resembles a spinning top. In the spinning-top shape, the structure has stored all of the energy you used to compress it. So, once you begin to pull up on the structure, you will reach a point where all of that energy is released at once, causing it to snap back into the lantern shape very quickly.”
    “We found that we could create many additional shapes by applying a twist to the shape, by folding the solid strips at the top or bottom of the lantern in or out, or any combination of those things,” says Yaoye Hong, first author of the paper and a former Ph.D. student at NC State who is now a postdoctoral researcher at the University of Pennsylvania. “Each of these variations is also multistable. Some can snap back and forth between two stable states. One has four stable states, depending on whether you’re compressing the structure, twisting the structure, or compressing and twisting the structure simultaneously.”
    The researchers also gave the lanterns magnetic control by attaching a thin magnetic film to the bottom strip. This allowed them to remotely twist or compress the structures using a magnetic field. They demonstrated several possible uses for the design, including a gentle magnetic gripper that can catch and release fish without harm, a flow-control filter that opens and closes underwater, and a compact shape that suddenly extends upward to reopen a collapsed tube.
    To better understand and predict the lantern’s behavior, the team also created a mathematical model showing how the geometry of each angle affects both the final shape and how much elastic energy is stored in each stable configuration.
    “This model allows us to program the shape we want to create, how stable it is, and how powerful it can be when stored potential energy is allowed to snap into kinetic energy,” says Hong. “And all of those things are critical for creating shapes that can perform desired applications.”
    “Moving forward, these lantern units can be assembled into 2D and 3D architectures for broad applications in shape-morphing mechanical metamaterials and robotics,” says Yin. “We will be exploring that.”
    The paper, “Reprogrammable snapping morphogenesis in freestanding ribbon-cluster meta-units via stored elastic energy,” was published on Oct. 10 in the journal Nature Materials. The paper was co-authored by Caizhi Zhou and Haitao Qing, both Ph.D. students at NC State; and by Yinding Chi, a former Ph.D. student at NC State who is now a postdoctoral researcher at Penn.
    This work was done with support from the National Science Foundation under grants 2005374, 2369274 and 2445551.

  • Scientists stunned by wild Martian dust devils racing at hurricane speeds

    Although Mars has an extremely thin atmosphere, it still experiences powerful winds that play a major role in shaping the planet’s climate and in distributing its ever-present dust. These winds stir up dust into swirling columns called dust devils—rotating plumes of air and fine particles that sweep across the Martian surface. While the winds themselves are invisible, the dust devils they lift can be seen clearly in spacecraft images. Because they trace the flow of moving air, scientists use them as natural markers to study wind behavior that would otherwise remain unseen.
    A new study led by Dr. Valentin Bickel from the Center for Space and Habitability at the University of Bern reveals that both dust devils and the winds driving them are much faster than scientists previously believed. These stronger winds may be responsible for much of the dust lofted into the Martian atmosphere, which has a major impact on the planet’s weather and long-term climate. The research, conducted in collaboration with the University of Bern’s Department of Space Research and Planetology, the Open University in the UK, and the German Aerospace Center (DLR), was recently published in Science Advances.
    Movement of dust devils studied with the help of deep learning
    “Using a state-of-the-art deep learning approach, we were able to identify dust devils in over 50,000 satellite images,” explains first author Valentin Bickel. The team used images from the Bern-based Mars camera CaSSIS (Color and Stereo Surface Imaging System) and the stereo camera HRSC (High Resolution Stereo Camera). CaSSIS is on board the European Space Agency’s (ESA) ExoMars Trace Gas Orbiter, while the HRSC camera is on board the ESA orbiter Mars Express. “Our study is therefore based exclusively on data from European Mars exploration,” Bickel continues.
    Next, the team studied stereo images of about 300 of these dust devils to determine their movement and speed. Co-author Nicolas Thomas, who led the development of the CaSSIS camera system at the University of Bern and whose work is funded by SERI’s Swiss Space Office through ESA’s PRODEX program, explains: “Stereo images are images of the same spot on the surface of Mars, but taken a few seconds apart. These images can therefore be used to measure the movement of dust devils.”
    Bickel emphasizes: “If you put the stereo images together in a sequence, you can observe how dynamically the dust devils move across the surface.” (see the images on the website of the University of Bern)
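    In essence, the speed estimate is the dust devil’s displacement on the ground divided by the time between the two exposures. A minimal sketch with assumed, illustrative numbers (the pixel shift, ground resolution, and time gap below are not values from the study):

```python
# Dust devil speed from a stereo image pair (illustrative, assumed numbers).
pixel_shift = 25          # apparent displacement between the two images, in pixels
metres_per_pixel = 4.6    # assumed ground sampling distance of the camera
time_gap_s = 8.0          # assumed seconds between the two acquisitions

speed_ms = pixel_shift * metres_per_pixel / time_gap_s
print(f"~{speed_ms:.0f} m/s, i.e. ~{speed_ms * 3.6:.0f} km/h")
```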
    Winds on Mars stronger than previously assumed
    The results show that the dust devils and the winds surrounding them on Mars can reach speeds of up to 44 m/s, i.e. around 160 km/h, across the entire planet, which is much faster than previously assumed (previous measurements on the surface had shown that winds mostly remain below 50 km/h and – in rare cases – can reach a maximum of 100 km/h).

    The high wind speed in turn influences the dust cycle on the Red Planet: “These strong, straight-line winds are very likely to bring a considerable amount of dust into the Martian atmosphere – much more than previously assumed,” says Bickel. He continues: “Our data show where and when the winds on Mars seem to be strong enough to lift dust from the surface. This is the first time that such findings are available on a global scale for a period of around two decades.”
    Future Mars missions can benefit from the research results
    The results obtained are also particularly important for future Mars missions. “A better understanding of the wind conditions on Mars is crucial for the planning and execution of future landed missions,” explains Daniela Tirsch from the Institute of Space Research at the German Aerospace Center (DLR) and co-author of the study. “With the help of the new findings on wind dynamics, we can model the Martian atmosphere and the associated surface processes more precisely,” Tirsch continues. These models are essential to better assess risks for future missions and adapt technical systems accordingly. The new study thus provides important findings for a number of research areas on Mars, such as research into the formation of dunes and slope streaks, as well as the creation of weather and climate models of Mars.
    The researchers plan to further intensify their observations of dust devils and to supplement the data already obtained with targeted, coordinated observations using CaSSIS and HRSC. “In the long term, our research should help to make the planning of Mars missions more efficient,” concludes Bickel.

  • Why GPS fails in cities. And how it was brilliantly fixed

    Most of us rarely question the accuracy of the GPS dot that shows our location on a map.
    Yet when visiting a new city and using our phone to navigate, it can seem as if we are jumping from one spot to another, even though we are walking steadily along the same sidewalk.
    “Cities are brutal for satellite navigation,” explained Ardeshir Mohamadi.
    Mohamadi, a doctoral fellow at the Norwegian University of Science and Technology (NTNU), is researching how to make affordable GPS receivers (like those found in smartphones and fitness watches) much more precise without depending on expensive external correction services.
    High accuracy is especially vital for vehicles that drive themselves: autonomous, or self-driving, cars.
    Urban canyons
    Mohamadi and his team at NTNU have developed a new system that allows autonomous vehicles to navigate safely through dense city environments.

    “In cities, glass and concrete make satellite signals bounce back and forth. Tall buildings block the view, and what works perfectly on an open motorway is not so good when you enter a built-up area,” said Mohamadi.
    When GPS signals reflect off buildings, they take longer to reach the receiver. This delay throws off the calculation of distance to the satellites, which makes the reported position inaccurate.
    Such complex urban environments are known as ‘urban canyons’. Receiving satellite signals there is similar to being at the bottom of a deep gorge, where signals reach you only after multiple reflections from the walls.
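    The size of the resulting error is easy to estimate: whatever extra distance the reflected signal travels shows up directly as extra measured range to that satellite. A minimal sketch with an assumed detour length:

```python
# Multipath error (illustrative): extra path length becomes extra measured range.
c = 299_792_458.0        # speed of light, m/s
extra_path_m = 60.0      # assumed detour off a glass facade, in metres

extra_delay_ns = extra_path_m / c * 1e9
print(f"Extra delay of ~{extra_delay_ns:.0f} ns -> "
      f"~{extra_path_m:.0f} m of range error on that satellite")
```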
    “For autonomous vehicles, this makes the difference between confident, safe behavior and hesitant, unreliable driving. That is why we developed SmartNav, a type of positioning technology designed for ‘urban canyons’,” explained Mohamadi.
    Almost down to the centimetre
    Not only are the satellite signals disrupted between the tall buildings, but even the signals that arrive correctly do not have sufficient precision.

    In order to solve this problem, the researchers have combined several different technologies to correct the signal. The result is a computer program that can be integrated into the navigation system of autonomous vehicles.
    To achieve this, they received help from a new Google service, but before we go any further, it might be helpful to know how GPS works:
    GPS – the Global Positioning System – comprises many small satellites orbiting the Earth. The satellites send out signals using radio waves, which are received by a GPS receiver. When the receiver receives these signals from at least four satellites, it is able to calculate its position.
    The signal consists of a message with a code indicating the satellite’s position and the exact time the signal was transmitted – like a text message from the satellite.
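    A toy version of that calculation is sketched below. This is not NTNU’s SmartNav, just the textbook idea: each satellite’s code gives one pseudorange equation in four unknowns, the receiver’s x, y, z and its clock error, which is why at least four satellites are needed. The satellite coordinates and receiver state are assumed numbers.

```python
# Toy GPS fix from four pseudoranges via Gauss-Newton least squares (illustrative).
import numpy as np

c = 299_792_458.0  # speed of light, m/s

# Assumed satellite positions (metres, Earth-centred frame) and true receiver state
sats = np.array([
    [15_600e3,  7_540e3, 20_140e3],
    [18_760e3,  2_750e3, 18_610e3],
    [17_610e3, 14_630e3, 13_480e3],
    [19_170e3,    610e3, 18_390e3],
])
true_pos = np.array([6_371e3, 0.0, 0.0])   # on the Earth's surface
clock_bias_m = c * 1e-6                    # 1 microsecond of receiver clock error, in metres

# Noise-free pseudoranges: geometric range plus the clock term
rho = np.linalg.norm(sats - true_pos, axis=1) + clock_bias_m

x = np.array([6_000e3, 0.0, 0.0, 0.0])     # rough initial guess: position + clock term
for _ in range(6):
    ranges = np.linalg.norm(sats - x[:3], axis=1)
    residuals = rho - (ranges + x[3])
    # Jacobian of the model: unit vectors toward the receiver, plus the clock column
    J = np.hstack([-(sats - x[:3]) / ranges[:, None], np.ones((len(sats), 1))])
    x += np.linalg.lstsq(J, residuals, rcond=None)[0]

print("Estimated position (m):", np.round(x[:3], 3))
print("Estimated clock bias (s):", x[3] / c)
```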
    Replacing the code with the wave
    It is this code that often becomes incorrect when the signal bounces around between buildings in a city. The first solution the NTNU researchers studied was dropping the code altogether. Instead, information about the radio wave can be used.
    Is the wave traveling upwards or downwards when it reaches the receiver? This is called the carrier phase of the wave.
    “Using only the carrier phase can provide very high accuracy, but it takes time, which is not very practical when the receiver is moving,” said Mohamadi.
    The problem is that you have to stay still until the calculation is good enough – not just a microsecond, but for several minutes.
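    One standard way to see why this takes time (this framing is common in satellite navigation but is not spelled out in the article): the receiver can measure the fractional part of the carrier wave very precisely, but not how many whole wavelengths, each roughly 19 cm long, lie between it and the satellite, and that whole number must be pinned down before the phase is useful. A minimal sketch with assumed values:

```python
# Carrier-phase ambiguity (illustrative): candidate ranges differ by one wavelength.
wavelength_m = 0.1903           # GPS L1 carrier wavelength, roughly 19 cm
fractional_phase = 0.37         # assumed fraction of a cycle measured by the receiver

for n_cycles in (106_000_000, 106_000_001, 106_000_002):  # assumed candidate counts
    range_m = (n_cycles + fractional_phase) * wavelength_m
    print(f"N = {n_cycles:,}: range = {range_m:,.4f} m")
```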
    However, there are other ways to improve a GPS signal. The user can use a service that corrects the signal using base stations, a technique called RTK (Real-Time Kinematic).
    RTK works fine as long as the user is in the vicinity of one of these stations. This solution, however, is expensive and intended for professional users.
    An alternative approach is PPP-RTK (Precise Point Positioning – Real-Time Kinematic), which combines precise corrections with satellite signals. The European Galileo system now supports this by broadcasting its corrections free of charge.
    But there is even more help available.
    Google and the wrong-side-of-the-street problem
    While the researchers in Trondheim were working on finding better solutions, Google launched a new service for its Android customers.
    Imagine you are planning a holiday to, say, London. You open Google Maps on your tablet. You then enter the address of your hotel and you can immediately zoom in on the street environment, study the hotel’s façade and the height of the surrounding buildings.
    Google now has these types of 3D models of buildings in almost 4000 cities around the world. The company is using these models to predict how satellite signals will be reflected between the buildings. This is how they will solve the problem of it appearing as if you are walking on the wrong side of the road when using the map app, for example when trying to find your way back to your hotel.
    “They combine data from sensors, Wi-Fi, mobile networks and 3D building models to produce smooth position estimates that can withstand errors caused by reflections,” Mohamadi said.
    Precision you can rely on
    The researchers were now able to combine all these different correction systems with algorithms they had developed themselves. When they tested it in the streets of Trondheim, they achieved an accuracy that was better than ten centimeters 90 percent of the time.
    The researchers say this provides precision that can be relied upon in cities.
    The use of PPP-RTK will also make the technology accessible to the general public because it is a relatively affordable service.
    “PPP-RTK reduces the need for dense networks of local base stations and expensive subscriptions, enabling cheap, large-scale implementation on mass-market receivers,” concluded Mohamadi.

  • Scientists suggest the brain may work best with 7 senses, not just 5

    Skoltech scientists have devised a mathematical model of memory. By analyzing the new model, the team came to surprising conclusions that could prove useful for robot design, artificial intelligence, and a better understanding of human memory. Published in Scientific Reports, the study suggests there may be an optimal number of senses — if so, those of us with five senses could use a couple more!
    “Our conclusion is of course highly speculative in application to human senses, although you never know: It could be that humans of the future would evolve a sense of radiation or magnetic field. But in any case, our findings may be of practical importance for robotics and the theory of artificial intelligence,” said study co-author Professor Nikolay Brilliantov of Skoltech AI. “It appears that when each concept retained in memory is characterized in terms of seven features — as opposed to, say, five or eight — the number of distinct objects held in memory is maximized.”
    In line with a well-established approach, which originated in the early 20th century, the team models the fundamental building blocks of memory: the memory “engrams.” An engram can be viewed as a sparse ensemble of neurons across multiple regions in the brain that fire together. The conceptual content of an engram is an ideal abstract object characterized with regard to multiple features. In the context of human memory, the features correspond to sensory inputs, so that the notion of a banana would match up with a visual image, a smell, the taste of a banana, and so on. This results in a five-dimensional object that exists and evolves in a five-dimensional space populated by all the other concepts retained in memory.
    The evolution of engrams refers to concepts becoming more focused or blurred with time, depending on how often the engrams get activated by a stimulus acting from the outer world via the senses, triggering the memory of the respective object. This models learning and forgetting as a result of interaction with the environment.
    “We have mathematically demonstrated that the engrams in the conceptual space tend to evolve toward a steady state, which means that after some transient period, a ‘mature’ distribution of engrams emerges, which then persists in time,” Brilliantov commented. “As we consider the ultimate capacity of a conceptual space of a given number of dimensions, we somewhat surprisingly find that the number of distinct engrams stored in memory in the steady state is the greatest for a concept space of seven dimensions. Hence the seven senses claim.”
    In other words, let the objects that exist out there in the world be described by a finite number of features corresponding to the dimensions of some conceptual space. Suppose that we want to maximize the capacity of the conceptual space expressed as the number of distinct concepts associated with these objects. The greater the capacity of the conceptual space, the deeper the overall understanding of the world. It turns out that the maximum is attained when the dimension of the conceptual space is seven. From this the researchers conclude that seven is the optimal number of senses.
    According to the researchers, this number does not depend on the details of the model — the properties of the conceptual space and the stimuli providing the sense impressions. The number seven appears to be a robust and persistent feature of memory engrams as such. One caveat is that multiple engrams of differing sizes existing around a common center are deemed to represent similar concepts and are therefore treated as one when calculating memory capacity.
    The memory of humans and other living beings is an enigmatic phenomenon tied to the property of consciousness, among other things. Advancing the theoretical models of memory will be instrumental to gaining new insights into the human mind and recreating humanlike memory in AI agents.