More stories


    People feel more connected to ‘tweezer-like’ bionic tools that don’t resemble human hands

    Some say the next step in human evolution will be the integration of technology with flesh. Now, researchers have used virtual reality to test whether humans can feel embodiment — the sense that something is part of one’s body — toward prosthetic “hands” that resemble a pair of tweezers. They report June 6 in the journal iScience that participants felt just as strong a sense of embodiment for the tweezer-hands as for a virtual human hand, and were also faster and more accurate when completing motor tasks in virtual reality with them.
    “For our biology to merge seamlessly with tools, we need to feel that the tools are part of our body,” says first author and cognitive neuroscientist Ottavia Maddaluno, who conducted the work at the Sapienza University of Rome and the Santa Lucia Foundation IRCCS with Viviana Betti. “Our findings demonstrate that humans can experience a grafted tool as an integral part of their own body.”
    Previous studies have shown that tool use induces plastic changes in the human brain, as does the use of anthropomorphic prosthetic limbs. However, an open scientific question is whether humans can embody bionic tools or prostheses that don’t resemble human anatomy.
    To investigate this possibility, the researchers used virtual reality to conduct a series of experiments on healthy participants. In the virtual reality environment, participants had either a human-like hand or a “bionic tool” resembling a large pair of tweezers grafted onto the end of their wrist. To test their motor ability and dexterity, participants were asked to pop bubbles of a specific color (by pinching them with their tweezers or between their index finger and thumb). For this simple task, the researchers found that participants were faster and more accurate at popping virtual bubbles when they had tweezer-hands.
    Next, the team used a test called the “cross-modal congruency task” to compare implicit or unconscious embodiment for the virtual hand and bionic tool. During this test, the researchers applied small vibrations to the participants’ fingertips and asked them to identify which fingers were stimulated. At the same time, a flickering light was displayed on the virtual reality screen, either on the same finger as the tactile stimulus or on a different finger. By comparing the participants’ accuracy and reaction times during trials with matched and mismatched stimuli, the researchers were able to assess how distracted they were by the visual stimulus.
    “This is an index of how much of a mismatch there is in your brain between what you feel and what you see,” says Maddaluno. “But this mismatch could only happen if your brain thinks that what you see is part of your own body; if I don’t feel that the bionic tool that I’m seeing through virtual reality is part of my own body, the visual stimulus should not give any interference.”
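    The congruency effect the researchers describe is simply the gap in performance between matched and mismatched trials. As a rough illustration only (the trial data below are invented, not the study's measurements), it can be computed like this:
```python
# Illustrative only: computes a cross-modal congruency effect (CCE) from
# made-up trial data. A larger effect means more visual interference, which
# the study interprets as stronger embodiment of the seen effector.
import statistics

# Each trial: (condition, congruent?, reaction time in ms, correct?)
trials = [
    ("tweezer", True, 510, True), ("tweezer", False, 640, True),
    ("tweezer", True, 495, True), ("tweezer", False, 655, False),
    ("hand",    True, 520, True), ("hand",    False, 600, True),
    ("hand",    True, 530, True), ("hand",    False, 590, True),
]

def congruency_effect(condition):
    """Mean RT on incongruent trials minus mean RT on congruent trials."""
    cong   = [rt for c, match, rt, _ in trials if c == condition and match]
    incong = [rt for c, match, rt, _ in trials if c == condition and not match]
    return statistics.mean(incong) - statistics.mean(cong)

for condition in ("hand", "tweezer"):
    print(f"{condition}: congruency effect = {congruency_effect(condition):.0f} ms")
```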
    In both cases, participants were faster and more accurate at identifying which of their real fingers were stimulated during trials with matched tactile and visual stimuli, indicating that participants felt a sense of embodiment toward both the virtual human hand and the tweezer-hands.

    However, there was a bigger difference between matched and mismatched trials when participants had tweezer- rather than human hands, indicating that the non-anthropomorphic prosthesis resulted in an even greater sense of embodiment. The researchers speculate that this is due to the tweezer-hands’ relative simplicity compared to a human-like hand, which might make them easier for the brain to compute and accept.
    “In terms of the pinching task, the tweezers are functionally similar to a human hand, but simpler, and simple is also better computationally for the brain,” says Maddaluno.
    They note that it could also relate to the “uncanny valley” hypothesis: the virtual human hands, while close to real hands, may have been just different enough to feel eerie and undermine full embodiment.
    In addition to the tweezer-hands, the researchers also tested a wrench-shaped bionic tool and a virtual human hand holding a pair of tweezers. They found evidence of embodiment in all cases, but the participants had higher embodiment and were more dexterous when the tweezers were grafted directly onto their virtual wrists than when they held them in their virtual hand.
    Participants also displayed a higher sense of embodiment for the bionic tools when they had the opportunity to explore the virtual reality environment before undertaking the cross-modal congruency test. “During the cross-modal congruency task participants had to stay still, whereas during the motor task, they actively interacted with the virtual environment, and these interactions in the virtual environment induce a sense of agency,” says Maddaluno.
    Ultimately, the researchers say that this study could inform robotics and prosthetic limb design. “The next step is to study if these bionic tools could be embodied in patients that have lost limbs,” says Maddaluno. “And we also want to investigate the plastic changes that this kind of bionic tool can induce in the brains of both healthy participants and amputees.”


    Novel AI method could improve tissue, tumor analysis and advance treatment of disease

    Researchers at the University of Michigan and Brown University have developed a new computational method to analyze complex tissue data that could transform our current understanding of diseases and how we treat them.
    Integrative and Reference-Informed tissue Segmentation, or IRIS, is a novel machine learning and artificial intelligence method that gives biomedical researchers the ability to view more precise information about tissue development, disease pathology and tumor organization.
    The findings are published in the journal Nature Methods.
    IRIS draws from data generated by spatially resolved transcriptomics (SRT) and uniquely leverages single-cell RNA sequencing data as the reference to examine multiple layers of tissue simultaneously and distinguish various regions with unprecedented accuracy and computational speed.
    Unlike traditional techniques that yield averaged data from tissue samples, SRT provides a much more granular view, pinpointing thousands of locations within a single tissue section. However, the challenge has always been to interpret this vast and detailed dataset, says Xiang Zhou, professor of biostatistics at the University of Michigan School of Public Health and senior author of the study.
    Interpreting large and complex datasets is where IRIS becomes a helpful tool — its algorithms sort through the data to identify and segment various functional domains, such as tumor regions, and provide insights into cell interactions and disease progression mechanisms.
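    IRIS itself is described in Nature Methods; the sketch below is only a schematic illustration of the general idea of reference-informed segmentation, built on placeholder data and generic tools (non-negative least squares and k-means) rather than the authors' algorithm: estimate a cell-type composition for each spatial spot from reference expression signatures, then group spots with similar compositions into domains.
```python
# Conceptual sketch (not IRIS): reference-informed spatial domain segmentation.
# 1) Estimate per-spot cell-type proportions against reference expression
#    signatures derived from single-cell data (non-negative least squares).
# 2) Cluster the proportion profiles into spatial domains.
import numpy as np
from scipy.optimize import nnls
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_spots, n_genes, n_celltypes = 500, 200, 5

reference = rng.gamma(2.0, 1.0, size=(n_genes, n_celltypes))   # placeholder signatures
true_props = rng.dirichlet(np.ones(n_celltypes), size=n_spots)  # hidden composition per spot
spots = true_props @ reference.T + rng.normal(0, 0.1, size=(n_spots, n_genes))

# Step 1: deconvolve each spot into cell-type proportions.
props = np.array([nnls(reference, np.clip(y, 0, None))[0] for y in spots])
props /= props.sum(axis=1, keepdims=True) + 1e-12

# Step 2: segment spots into domains based on their cellular composition.
domains = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(props)
print("spots per domain:", np.bincount(domains))
```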
    “Different from existing methods, IRIS directly characterizes the cellular landscape of the tissue and identifies biologically interpretable spatial domains, thus facilitating the understanding of the cellular mechanism underlying tissue function,” said U-M doctoral alum Ying Ma, assistant professor of biostatistics at Brown University, who helped develop IRIS.
    “We anticipate that IRIS will serve as a powerful tool for large-scale multisample spatial transcriptomics data analysis across a wide range of biological systems.”
    Zhou and Ma applied IRIS to six SRT datasets and compared its performance to other commonly used spatial domain methods. Ultimately, as SRT technology continues to grow in popularity and use, the researchers hope to see methods like IRIS help identify targets for clinical intervention and drug development, improving personalized treatment plans and patient health outcomes.
    “The computational approach of IRIS pioneers a novel avenue for biologists to delve into the intricate architecture of complex tissues, offering unparalleled opportunities to explore the dynamic processes shaping tissue structure during development and disease progression,” Zhou said. “Through characterizing refined tissue structures and elucidating their alterations during disease states, IRIS holds the potential to unveil mechanistic insights crucial for understanding and combating various diseases.”


    Pushing an information engine to its limits

    The molecules that make up the matter around us are in constant motion. What if we could harness that energy and put it to use?
    Over 150 years ago, the physicist James Clerk Maxwell theorized that if molecules’ motion could be measured accurately, this information could be used to power an engine. Until recently this was a thought experiment, but technological breakthroughs have made it possible to build working information engines in the lab.
    With funding from the Foundational Questions Institute, Simon Fraser University (SFU) physics professors John Bechhoefer and David Sivak teamed up to build an information engine and test its limits. Their work has greatly advanced our understanding of how these engines function, and a paper led by postdoctoral fellow Johan du Buisson and published recently in Advances in Physics: X summarizes the findings made during their collaboration.
    “We live in a world full of extra unused energy that potentially could be used,” says Bechhoefer. Understanding how information engines function can not only help us put that energy to work but also suggest ways that existing engines could be redesigned to use energy more efficiently, and help us learn how biological motors work in organisms and the human body.
    The team’s information engine consists of a tiny bead in a water bath that is held in place with an optical trap. When fluctuations in the water cause the bead to move in the desired direction, the trap can be adjusted to prevent the bead from returning to the place where it was before. By taking accurate measurements of the bead’s location and using that information to adjust the trap, the engine is able to convert the heat energy of the water into work.
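    The feedback rule can be captured in a toy simulation. The sketch below is a heavily simplified, dimensionless caricature of such an information ratchet, not the SFU apparatus: an overdamped bead diffuses in a harmonic trap, and whenever a measurement finds the bead past the trap centre in the desired direction, the trap is moved up to the bead so the fluctuation is locked in. All parameter values are arbitrary.
```python
# Toy information ratchet (dimensionless units, illustrative only).
# An overdamped bead diffuses in a harmonic trap; whenever a measurement finds
# the bead ahead of the trap centre, the trap is moved up to the bead, so
# thermal kicks are converted into net directed motion without pushing the bead.
import numpy as np

rng = np.random.default_rng(1)
k, D, dt, n_steps = 1.0, 1.0, 1e-3, 200_000  # trap stiffness, diffusion, time step

x, trap = 0.0, 0.0
for _ in range(n_steps):
    # Overdamped Langevin step in the trap potential 0.5 * k * (x - trap)^2.
    x += -k * (x - trap) * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
    # Feedback: if the bead has fluctuated past the trap centre, ratchet the trap.
    if x > trap:
        trap = x

print(f"net displacement after {n_steps} steps: {trap:.1f}")
```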
    To understand how fast and efficient the engine could be, the team tested multiple variables such as the mass of the bead and sampling frequency, and developed algorithms to reduce the uncertainty of their measurements.
    “Stripped down to its simplest essence, we can systematically understand how things like temperature and the size of the system changes the things we can take advantage of,” Sivak says. “What are the strategies that work best? How do they change with all those different properties?”
    The team was able to achieve the fastest speed recorded to date for an information engine, approximately ten times faster than the speed of E. coli, and comparable to the speed of motile bacteria found in marine environments.

    Next, the team wanted to learn if an information engine could harvest more energy than it costs to run. “In equilibrium, that’s always a losing game,” Bechhoefer says. “The costs of gathering the information and processing it will always exceed what you’re getting out of it, but when you have an environment that has extra energy, [molecules doing] extra jiggling around, then that can change the balance if it’s strong enough.”
    They found that in a non-equilibrium environment, where the engine was in a heat bath with a higher temperature than the measuring apparatus, it could output significantly more power than it cost to run.
    Most of the energy on Earth comes from the sun, and it eventually radiates back out into space. That directional flow of energy manifests itself in many different ways, such as wind or ocean currents that can be harvested. Understanding the principles behind information engines can help us make better use of that energy.
    “We’re coming at [energy harvesting] from a very different point of view, and we hope that this different perspective can lead to some different insights about how to be more efficient,” Bechhoefer says.
    The pair is looking forward to working together on other projects in the future. “We were lucky to get a joint grant together. That really helped with the collaboration,” says Bechhoefer.
    Sivak, a theorist, and Bechhoefer, an experimentalist, bring complementary approaches to their work, and they have been able to attract trainees who want to work with both. “We have different styles in terms of how we go about mentoring and leading a group,” says Sivak. “Our students and post-docs can benefit from both approaches.”


    Artificial intelligence blood test provides a reliable way to identify lung cancer

    Using artificial intelligence technology to identify patterns of DNA fragments associated with lung cancer, researchers from the Johns Hopkins Kimmel Cancer Center and other institutions have developed and validated a liquid biopsy that may help identify lung cancer earlier.
    In a prospective study published June 3 in Cancer Discovery, the team demonstrated that artificial intelligence technology could identify people more likely to have lung cancer based on DNA fragment patterns in the blood. The study enrolled about 1,000 participants with and without cancer who met the criteria for traditional lung cancer screening with low-dose computed tomography (CT). Individuals were recruited to participate at 47 centers in 23 U.S. states. By helping to identify patients most at risk and who would benefit from follow-up CT screening, this new blood test could potentially boost lung cancer screening and reduce death rates, according to computer modeling by the team.
    “We have a simple blood test that could be done in a doctor’s office that would tell patients whether they have potential signs of lung cancer and should get a follow-up CT scan,” says the study’s corresponding author, Victor E. Velculescu, M.D., Ph.D., professor of oncology and co-director of the Cancer Genetics and Epigenetics program at the Johns Hopkins Kimmel Cancer Center.
    Lung cancer is the deadliest cancer in the United States, according to the National Cancer Institute, and worldwide, according to the World Health Organization. Yearly screening with CT scans in high-risk patients can help detect lung cancers early, when they are most treatable, and help avert lung cancer deaths. Screening is recommended by the U.S. Preventive Services Task Force for 15 million people nationally who are between ages 50 and 80 and have a smoking history, yet only about 6%-10% of eligible individuals are screened each year. People may be reluctant to follow through on screening, Velculescu explains, due to the time it takes to arrange and go to an appointment, and the low doses of radiation they are exposed to from the scan.
    To help overcome some of these hurdles, Velculescu and his colleagues developed a test over the past five years that uses artificial intelligence to detect patterns of DNA fragments found in patients with lung cancer. It takes advantage of differences in how DNA is packaged in normal and cancer cells. DNA is neatly and consistently folded up in healthy cells, almost like a rolled-up ball of yarn, but DNA in cancer cells is more disorganized. When both types of cells die, fragments of DNA end up in the blood. The DNA fragments in patients with cancer tend to be more chaotic and irregular than the DNA fragments found in individuals who do not have cancer.
    The team trained artificial intelligence software to identify the specific patterns of DNA fragments seen in the blood of 576 people with or without lung cancer. Then, they verified that the method worked in a second group of 382 people with and without cancer. Based on their analyses, the test has a negative predictive value of 99.8%, meaning that only about 2 in 1,000 people who test negative would actually have lung cancer and be missed.
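    For readers unfamiliar with the metric, negative predictive value is simply the share of negative test results that are truly cancer-free. A quick back-of-the-envelope illustration (the counts below are hypothetical, not the study's data):
```python
# Illustrative arithmetic only (hypothetical counts, not the study's data):
# negative predictive value (NPV) = true negatives / all negative test results.
true_negatives  = 998   # negative test, no lung cancer
false_negatives = 2     # negative test, but lung cancer present (missed cases)

npv = true_negatives / (true_negatives + false_negatives)
print(f"NPV = {npv:.1%} -> about {false_negatives} missed per 1,000 negative tests")
```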
    The group’s computer simulations showed that if the test boosted the rate of lung cancer screening to 50% within five years, it could quadruple the number of lung cancers detected and increase the proportion of cancers detected early — when they are most treatable — by about 10%. That could prevent about 14,000 cancer deaths over five years.
    “The test is inexpensive and could be done at a very large scale,” Velculescu says. “We believe it will make lung cancer screening more accessible and help many more people get screened. This will lead to more cancers being detected and treated early.”
    The test is currently available through DELFI Diagnostics for use as a laboratory-based test under the Clinical Laboratory Improvement Amendments. However, the team plans to seek approval from the U.S. Food and Drug Administration for lung cancer screening. Velculescu and colleagues also plan to study whether a similar approach could be used to detect other types of cancer.

    Robert B. Scharpf of Johns Hopkins co-authored the study. Additional co-authors were from the Cleveland Clinic, DELFI Diagnostics, Medicus Economics LLC, Miami Cancer Institute, the Pan American Center for Oncology, Washington University, Centura Health, Vanderbilt Health, Stratevi, Massachusetts General Hospital, the Medical University of South Carolina, the Department of Veterans Affairs, the Perelman School of Medicine at the University of Pennsylvania, New York University Langone Health, Allegheny Health Network and Memorial Sloan Kettering Cancer Center.
    The work was supported in part by DELFI Diagnostics, the Dr. Miriam and Sheldon G. Adelson Medical Research Foundation, Stand Up To Cancer-LUNGevity-American Lung Association Lung Cancer Interception Dream Team Translational Research Grant, Stand Up To Cancer-DCS International Translational Cancer Research Dream Team Grant, the Gray Foundation, The Honorable Tina Brozman Foundation, the Commonwealth Foundation, the Cole Foundation and the National Institutes of Health.
    Velculescu and Scharpf are inventors on patent applications submitted by The Johns Hopkins University related to cell-free DNA for cancer detection that have been licensed to DELFI Diagnostics, LabCorp, Qiagen, Sysmex, Agios, Genzyme, Esoterix, Ventana and ManaT Bio. Velculescu divested his equity in Personal Genome Diagnostics (PGDx) to LabCorp in February 2022. Velculescu is a founder of DELFI Diagnostics, serves on the board of directors, and owns DELFI Diagnostics stock. Scharpf is a founder and consultant of DELFI Diagnostics and owns DELFI Diagnostics stock. Velculescu, Scharpf and Johns Hopkins receive royalties and fees from the company. The Johns Hopkins University also owns equity in DELFI Diagnostics. Velculescu is an adviser to Viron Therapeutics and Epitope. These relationships are managed by Johns Hopkins in accordance with its conflict-of-interest policies.


    Seeking social proximity improves flight routes among pigeons

    A new study conducted by Dr. Edwin Dalmaijer, a cognitive neuroscientist at the University of Bristol, UK, looked at the social influences on pigeon flight routes. Comparing the flight patterns of pairs of pigeons to a computer model, the researcher found that flight paths improve as younger birds learn the route from older birds and also make their own refinements, leading to more efficient routes over generations. The study was published June 6 in the open-access journal PLOS Biology.
    Pigeons are known for their ability to travel long distances to specific locations. Like many birds, they navigate using the sun and by sensing the Earth’s magnetic field. Though these senses help pigeons find their bearings, they do not usually generate the most efficient routes.
    Dr. Dalmaijer gathered data from previously published studies where pigeons that were familiar with a route were paired with pigeons that had not flown the route before. These data demonstrated that when the inexperienced pigeon is introduced, the pair flies a more direct route to their destination. However, these previous studies could not determine how the paired birds generate more efficient routes.
    Dr. Dalmaijer compared the pigeon flight data to a computer model that prioritized four main factors. These four factors represent what might be involved in choosing a flight path with minimal cognition, including: direction to the goal, representing the bird’s internal compass; proximity to the other pigeon; the remembered route; and general consistency, since the birds are unlikely to make erratic turns.
    In the model, the simulated birds, referred to as “agents,” made over 60 journeys. Once every 12 journeys, one of the agents was replaced with an agent that had not made the trip before, simulating a young bird. This resulted in a generational increase in the efficiency of the flight routes. These improvements are similar to those seen in the real-life data from pigeon pairs, though the pigeon data did not match the most optimal version of the model, likely because pigeons are influenced by additional factors that the model could not account for.
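    To make the setup concrete, here is a stripped-down sketch of this style of agent-based model. It is not Dr. Dalmaijer's published implementation, and every number in it (weights, noise level, journey counts) is an arbitrary placeholder; it only illustrates how headings built from the four pulls, plus periodic replacement of one agent, can be simulated.
```python
# Minimal agent-based sketch (not the published model): two agents fly from a
# start point to a goal; each heading blends four pulls -- direction to the
# goal, proximity to the partner, the agent's remembered route, and consistency
# with the previous heading. Every 12 journeys one agent is swapped for a naive one.
import numpy as np

rng = np.random.default_rng(2)
START, GOAL = np.array([0.0, 0.0]), np.array([100.0, 0.0])
W_GOAL, W_SOCIAL, W_MEMORY, W_INERTIA = 0.3, 0.3, 0.3, 0.1
STEP, NOISE = 1.0, 0.3

def unit(v):
    n = np.linalg.norm(v)
    return v / n if n > 1e-9 else np.zeros_like(v)

def fly_pair(memories):
    """One paired journey: returns each agent's flown route and the mean path length."""
    pos = [START.copy(), START.copy()]
    heading = [unit(GOAL - START), unit(GOAL - START)]
    routes, lengths = [[], []], [0.0, 0.0]
    for _ in range(500):                                    # hard cap on journey duration
        for i in (0, 1):
            pull = (W_GOAL * unit(GOAL - pos[i])            # internal compass
                    + W_SOCIAL * unit(pos[1 - i] - pos[i])  # stay near the partner
                    + W_INERTIA * heading[i])               # avoid erratic turns
            if memories[i] is not None:                     # follow remembered route, looking ahead
                dists = [np.linalg.norm(p - pos[i]) for p in memories[i]]
                j = min(int(np.argmin(dists)) + 5, len(memories[i]) - 1)
                pull += W_MEMORY * unit(memories[i][j] - pos[i])
            heading[i] = unit(pull + NOISE * rng.standard_normal(2))
            pos[i] = pos[i] + STEP * heading[i]
            routes[i].append(pos[i].copy())
            lengths[i] += STEP
        if all(np.linalg.norm(GOAL - p) < 2.0 for p in pos):
            break
    return routes, float(np.mean(lengths))

memories = [None, None]                       # start with a naive pair
for journey in range(1, 61):
    routes, mean_len = fly_pair(memories)
    memories = routes                         # each agent remembers its latest route
    if journey % 12 == 0:
        print(f"after journey {journey:2d}: mean route length = {mean_len:.0f}")
        memories[(journey // 12) % 2] = None  # swap in a naive agent
```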
    When some of the parameters of the model were removed, such as memory of the route or the desire to be near the other pigeon, there was no generational improvement. “These results suggest that stepwise improvement between generations can occur when individuals simply seek proximity to others,” Dr. Dalmaijer said.
    The model demonstrates learning in both directions. As expected, the younger agent benefits from the older agent by learning the route. However, it also shows that the older agent benefits from the younger agent. Since younger agents are not following an internal route, they are more oriented to the final destination. The desire for social proximity between the two agents balances these draws, leading to an overall more efficient route. Additionally, these findings may be applicable to other species beyond pigeons, such as ants and some types of fish, which also make journeys based on memory and social factors.
    Dr. Dalmaijer adds, “I grew up in the Netherlands, in a city where pigeons constantly walk into oncoming bicycle traffic, so I don’t have the highest opinion of pigeon intellect. On the one hand, this study vindicates that, by showing the gradual improvement in route efficiency also emerges in ‘dumb’ artificial agents. On the other hand, I have gained a huge respect for all the impressive work done in pigeon navigation and cumulative culture, and even a little bit for the humble pigeon (as long as they stay away from my bike).”


    Flapping frequency of birds, insects, bats and whales described by universal equation

    A single universal equation can closely approximate the frequency of wingbeats and fin strokes made by birds, insects, bats and whales, despite their different body sizes and wing shapes, Jens Højgaard Jensen and colleagues from Roskilde University in Denmark report in a new study in the open-access journal PLOS ONE, publishing June 5.
    The ability to fly has evolved independently in many different animal groups. To minimize the energy required to fly, biologists expect that the frequency at which animals flap their wings should be determined by the natural resonance frequency of the wing. However, finding a universal mathematical description of flapping flight has proved difficult. The researchers used dimensional analysis to calculate an equation that describes the frequency of wingbeats of flying birds, insects and bats, and the fin strokes of diving animals, including penguins and whales.
    They found that flying and diving animals beat their wings or fins at a frequency that is proportional to the square root of their body mass, divided by their wing area. They tested the accuracy of the equation by plotting its predictions against published data on wingbeat frequencies for bees, moths, dragonflies, beetles, mosquitos, bats, and birds ranging in size from hummingbirds to swans.
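    Written out, that scaling takes a compact form. The version below is a reconstruction rather than a quotation: the article states only that frequency is proportional to the square root of body mass divided by wing area, and the appearance of gravity g and fluid density ρ is an assumption about how a dimensional-analysis argument of this kind is usually closed.
```latex
% Reconstructed scaling relation (the factors of g and \rho are an assumption,
% as noted above): f is wingbeat or fin-stroke frequency, m body mass,
% A wing or fin area, g gravity, \rho the density of the surrounding air or water.
\[
  f \;\propto\; \frac{\sqrt{m\,g/\rho}}{A}
  \qquad\text{(at fixed } g \text{ and } \rho\text{, this is } f \propto \sqrt{m}/A\text{)}.
\]
```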
    The researchers also compared the equation’s predictions against published data on fin stroke frequencies for penguins and several species of whale, including humpbacks and northern bottlenose whales. The relationship between body mass, wing area and wingbeat frequency shows little variation across flying and diving animals, despite huge differences in their body size, wing shape and evolutionary history, they found. Finally, they estimated that an extinct pterosaur (Quetzalcoatlus northropi) — the largest known flying animal — beat its 10-square-meter wings at a frequency of 0.7 hertz.
    The study shows that despite huge physical differences, animals as distinct as butterflies and bats have evolved a relatively constant relationship between body mass, wing area and wingbeat frequency. The researchers note that for swimming animals they didn’t find publications with all the required information; data from different publications was pieced together to make comparisons, and in some cases animal density was estimated based on other information. Furthermore, extremely small animals — smaller than any yet discovered — would likely not fit the equation, because the physics of fluid dynamics changes at such a small scale. This could have implications in the future for flying nanobots. The authors say that the equation is the simplest mathematical explanation that accurately describes wingbeats and fin strokes across the animal kingdom.
    The authors add: “Differing almost a factor 10000 in wing/fin-beat frequency, data for 414 animals from the blue whale to mosquitoes fall on the same line. As physicists, we were surprised to see how well our simple prediction of the wing-beat formula works for such a diverse collection of animals.”


    AIs are irrational, but not in the same way that humans are

    Large Language Models behind popular generative AI platforms like ChatGPT gave inconsistent answers when asked to respond to the same reasoning test repeatedly, and didn’t improve when given additional context, finds a new study from researchers at UCL.
    The study, published in Royal Society Open Science, tested the most advanced Large Language Models (LLMs) using cognitive psychology tests to gauge their capacity for reasoning. The results highlight the importance of understanding how these AIs ‘think’ before entrusting them with tasks, particularly those involving decision-making.
    In recent years, the LLMs that power generative AI apps like ChatGPT have become increasingly sophisticated. Their ability to produce realistic text, images, audio and video has prompted concern about their capacity to steal jobs, influence elections and commit crime.
    Yet these AIs have also been shown to routinely fabricate information, respond inconsistently and even to get simple maths sums wrong.
    In this study, researchers from UCL systematically analysed whether seven LLMs were capable of rational reasoning. A common definition of a rational agent (human or artificial), which the authors adopted, is that it reasons according to the rules of logic and probability. An irrational agent is one that does not reason according to these rules [1].
    The LLMs were given a battery of 12 common tests from cognitive psychology to evaluate reasoning, including the Wason task, the Linda problem and the Monty Hall problem [2]. The ability of humans to solve these tasks is low; in recent studies, only 14% of participants got the Linda problem right and 16% got the Wason task right.
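    One of the listed tasks, the Monty Hall problem, shows why these benchmarks are unforgiving of intuition. The short simulation below is purely illustrative (it is not code from the study): it confirms that switching doors wins roughly two thirds of the time, the answer both humans and LLMs frequently miss.
```python
# Monty Hall simulation (illustrative; not part of the study's methodology).
# The player picks a door, the host opens a different door hiding a goat,
# and we compare the win rates of staying versus switching.
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that is neither the player's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")   # ~1/3
print(f"switch: {play(switch=True):.3f}")    # ~2/3
```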
    The models exhibited irrationality in many of their answers, such as providing varying responses when asked the same question 10 times. They were prone to making simple mistakes, including basic addition errors and mistaking consonants for vowels, which led them to provide incorrect answers.

    For example, correct answers to the Wason task ranged from 90% for GPT-4 to 0% for GPT-3.5 and Google Bard. Llama 2 70b, which answered correctly 10% of the time, mistook the letter K for a vowel and so answered incorrectly.
    While most humans would also fail to answer the Wason task correctly, it is unlikely that this would be because they didn’t know what a vowel was.
    Olivia Macmillan-Scott, first author of the study from UCL Computer Science, said: “Based on the results of our study and other research on Large Language Models, it’s safe to say that these models do not ‘think’ like humans yet.
    “That said, the model with the largest dataset, GPT-4, performed a lot better than other models, suggesting that they are improving rapidly. However, it is difficult to say how this particular model reasons because it is a closed system. I suspect there are other tools in use that you wouldn’t have found in its predecessor GPT-3.5.”
    Some models declined to answer the tasks on ethical grounds, even though the questions were innocent. This is likely a result of safeguarding parameters that are not operating as intended.
    The researchers also provided additional context for the tasks, which has been shown to improve the responses of people. However, the LLMs tested didn’t show any consistent improvement.

    Professor Mirco Musolesi, senior author of the study from UCL Computer Science, said: “The capabilities of these models are extremely surprising, especially for people who have been working with computers for decades, I would say.
    “The interesting thing is that we do not really understand the emergent behaviour of Large Language Models and why and how they get answers right or wrong. We now have methods for fine-tuning these models, but then a question arises: if we try to fix these problems by teaching the models, do we also impose our own flaws? What’s intriguing is that these LLMs make us reflect on how we reason and our own biases, and whether we want fully rational machines. Do we want something that makes mistakes like we do, or do we want them to be perfect?”
    The models tested were GPT-4, GPT-3.5, Google Bard, Claude 2, Llama 2 7b, Llama 2 13b and Llama 2 70b.
    [1] Stein E. (1996). Without Good Reason: The Rationality Debate in Philosophy and Cognitive Science. Clarendon Press.
    [2] These tasks and their solutions are available online. An example is the Wason task:
    The Wason task
    Check the following rule: If there is a vowel on one side of the card, there is an even number on the other side.
    You now see four cards: a) E, b) K, c) 4, d) 7. Which of these cards must be turned over to check the rule?
    Answer: a) E and d) 7, as these are the only ones that can violate the rule.


    Fighting fires from space in record time: How AI could prevent devastating wildfires

    Australian scientists are getting closer to detecting bushfires in record time, thanks to cube satellites with onboard AI now able to detect fires from space 500 times faster than traditional on-ground processing of imagery.
    Remote sensing and computer science researchers have overcome the limitations of processing and compressing large amounts of hyperspectral imagery on board the smaller, more cost-effective cube satellites before sending it to the ground for analysis, saving precious time and energy.
    The breakthrough, using artificial intelligence, means that bushfires will be detected earlier from space, even before they take hold and generate large amounts of heat, allowing ground crews to respond more quickly and prevent loss of life and property.
    A project funded by the SmartSat CRC and led by the University of South Australia (UniSA) has used cutting-edge onboard AI technology to develop an energy-efficient early fire smoke detection system for South Australia’s first cube satellite, Kanyini.
    The Kanyini mission is a collaboration between the SA Government, SmartSat CRC and industry partners to launch a 6U CubeSat into low Earth orbit to detect bushfires as well as monitor inland and coastal water quality.
    Equipped with a hyperspectral imager, the satellite sensor captures reflected light from Earth in different wavelengths to generate detailed surface maps for various applications, including bushfire monitoring, water quality assessment and land management.
    Lead researcher UniSA geospatial scientist Dr Stefan Peters says that, traditionally, Earth observation satellites have not had the onboard processing capabilities to analyse complex images of Earth captured from space in real-time.

    His team, which includes scientists from UniSA, Swinburne University of Technology and Geoscience Australia, has overcome this by building a lightweight AI model that can detect smoke within the available onboard processing, power consumption and data storage constraints of cube satellites.
    Compared to on-ground processing of hyperspectral satellite imagery to detect fires, the AI onboard model reduced the volume of data downlinked to 16% of its original size, while consuming 69% less energy.
    The AI onboard model also detected fire smoke 500 times faster than traditional on-ground processing.
    “Smoke is usually the first thing you can see from space before the fire gets hot and big enough for sensors to identify it, so early detection is crucial,” Dr Peters says.
    To demonstrate the AI model, they used simulated satellite imagery of recent Australian bushfires, using machine learning to train the model to detect smoke in an image.
    “For most sensor systems, only a fraction of the data collected contains critical information related to the purpose of a mission. Because the data can’t be processed on board large satellites, all of it is downlinked to the ground where it is analysed, taking up a lot of space and energy. We have overcome this by training the model to differentiate smoke from cloud, which makes it much faster and more efficient.”
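    The published system's architecture is not described in this summary, so the sketch below is only a generic illustration of the kind of lightweight classifier that fits CubeSat constraints: a convolutional network with a few thousand parameters that maps a patch of hyperspectral bands to smoke, cloud or clear-ground classes. The band count, patch size and class set are assumptions made for the example, not details of the Kanyini payload.
```python
# Generic illustration only -- not the Kanyini/UniSA model. A very small CNN
# that classifies a hyperspectral image patch (here, 16 spectral bands over a
# 32x32-pixel window) as smoke, cloud, or clear ground, sized to fit tight
# onboard compute and memory budgets.
import torch
import torch.nn as nn

class TinySmokeNet(nn.Module):
    def __init__(self, n_bands=16, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):                    # x: (batch, n_bands, H, W)
        return self.classifier(self.features(x).flatten(1))

model = TinySmokeNet()
params = sum(p.numel() for p in model.parameters())
patch = torch.randn(1, 16, 32, 32)           # one synthetic hyperspectral patch
print(f"parameters: {params}, output logits shape: {tuple(model(patch).shape)}")
```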
    Using a past fire event in the Coorong as a case study, the simulated Kanyini AI onboard approach took less than 14 minutes to detect the smoke and send the data to the South Pole ground station.

    “This research shows there are significant benefits of onboard AI compared to traditional on ground processing,” Dr Peters says. “This will not only prove invaluable in the event of bushfires but also serve as an early warning system for other natural disasters.”
    The research team hopes to demonstrate the onboard AI fire detection system in orbit in 2025 when the Kanyini mission is operational.
    “Once we have ironed out any issues, we hope to commercialise the technology and employ it on a CubeSat constellation, aiming to contribute to early fire detection within an hour.”
    A video explaining the research is also available at: https://youtu.be/dKQZ8V2Zagk