More stories

  • Researchers make a surprising discovery about the magnetic interactions in a Kagome layered topological magnet

    A team from Ames National Laboratory conducted an in-depth investigation of the magnetism of TbMn6Sn6, a Kagome layered topological magnet. They were surprised to find that the magnetic spin reorientation in TbMn6Sn6 occurs by generating increasing numbers of magnetically isotropic ions as the temperature increases.
    Rob McQueeney, a scientist at Ames Lab and project lead, explained that TbMn6Sn6 has two different magnetic ions in the material, terbium and manganese. The direction of the manganese moments controls the topological state. “But it’s the terbium moment that determines the direction that the manganese points,” he said. “The idea is, you have these two magnetic species and it is the combination of their interactions which controls the direction of the moment.”
    In this layered material, there is a magnetic phase transition that occurs as the temperature increases. During this phase transition, the magnetic moments switch from pointing perpendicular to the Kagome layer, or uniaxial, to pointing within the layer, or planar. This transition is called a spin reorientation.
    McQueeney explained that in Kagome metals, the spin direction controls the properties of topological or Dirac electrons. Dirac electrons occur where electronic bands touch at a single point. However, magnetic order causes gapping at the points where the bands touch, and this gapping stabilizes the topological Chern insulator state. “So you can go from a Dirac semimetal to a Chern insulator just by turning the direction of the moment,” he said.
    As part of their TbMn6Sn6 investigation, the team performed inelastic neutron scattering experiments at the Spallation Neutron Source to understand how the magnetic interactions in the material drive the spin reorientation transition. McQueeney said that the terbium wants to be uniaxial at low temperatures, while the manganese is planar, so they are at odds.
    According to McQueeney, the behavior at very low or very high temperatures is as expected. At low temperatures, the terbium is uniaxial (with electronic orbitals shaped like an ellipsoid). At high temperatures, the terbium is magnetically isotropic (with a spherical orbital shape), which allows the planar Mn to determine the overall moment direction. The team assumed that each terbium orbital would gradually deform from ellipsoidal to spherical. Instead, they found that both types of terbium coexist at intermediate temperatures, and that the population of spherical terbium grows as the temperature increases.
    “So, what we did was we determined how the magnetic excitations evolve from this uniaxial state into this easy-plane state as a function of temperature. And the long-standing assumption of how it happens is correct,” said McQueeney. “But the nuance is that you can’t treat every terbium as being exactly the same on some timescale. Every terbium site can exist in two quantum states, uniaxial or isotropic, and if I look at a site, it’s either in one state or the other at some instant in time. The probability that it’s uniaxial or isotropic depends on temperature.” The team calls this an orbital binary quantum alloy.
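    For readers who want a feel for how a two-state population can shift with temperature, the short Python sketch below computes the occupation of the higher-energy state under a simple two-level Boltzmann distribution. It is only an illustration of the general idea, not the authors' analysis, and the energy splitting used is an arbitrary placeholder.

    ```python
    import numpy as np

    # Minimal sketch: fraction of two-state ions (e.g., uniaxial vs. isotropic)
    # found in the higher-energy state as temperature rises, assuming a simple
    # two-level Boltzmann distribution. The energy splitting is a placeholder,
    # not a value taken from the study.
    K_B = 8.617e-5    # Boltzmann constant in eV/K
    DELTA_E = 0.02    # assumed energy gap between the two states, in eV

    def higher_state_fraction(temperature_k: float) -> float:
        """Occupation probability of the higher-energy state at a given temperature (K)."""
        weight = np.exp(-DELTA_E / (K_B * temperature_k))
        return weight / (1.0 + weight)

    for t in (50, 150, 250, 350):
        print(f"T = {t:3d} K -> higher-state fraction ~ {higher_state_fraction(t):.2f}")
    ```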

  • GPT detectors can be biased against non-native English writers

    In a peer-reviewed opinion paper published July 10 in the journal Patterns, researchers show that computer programs commonly used to determine whether a text was written by artificial intelligence tend to falsely label articles written by non-native English speakers as AI-generated. The researchers caution against using such AI text detectors because of their unreliability, which could have negative consequences for individuals including students and job applicants.
    “Our current recommendation is that we should be extremely careful about and maybe try to avoid using these detectors as much as possible,” says senior author James Zou, of Stanford University. “It can have significant consequences if these detectors are used to review things like job applications, college entrance essays or high school assignments.”
    AI tools like OpenAI’s ChatGPT chatbot can compose essays, solve science and math problems, and produce computer code. Educators across the U.S. are increasingly concerned about the use of AI in students’ work and many of them have started using GPT detectors to screen students’ assignments. These detectors are platforms that claim to be able to identify if the text is generated by AI, but their reliability and effectiveness remain untested.
    Zou and his team put seven popular GPT detectors to the test. They ran 91 English essays written by non-native English speakers for a widely recognized English proficiency test, called Test of English as a Foreign Language, or TOEFL, through the detectors. These platforms incorrectly labeled more than half of the essays as AI-generated, with one detector flagging nearly 98% of these essays as written by AI. In comparison, the detectors were able to correctly classify more than 90% of essays written by eighth-grade students from the U.S. as human-generated.
    Zou explains that the algorithms of these detectors work by evaluating text perplexity, which is how surprising the word choice is in an essay. “If you use common English words, the detectors will give a low perplexity score, meaning my essay is likely to be flagged as AI-generated. If you use complex and fancier words, then it’s more likely to be classified as human written by the algorithms,” he says. This is because large language models like ChatGPT are trained to generate text with low perplexity to better simulate how an average human talks, Zou adds.
    As a result, simpler word choices adopted by non-native English writers would make them more vulnerable to being tagged as using AI.
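    To make the perplexity idea concrete, here is a hedged Python sketch of how a generic detector of this kind could score a passage. It is not the code of any of the seven detectors tested; it assumes the open-source Hugging Face transformers library and the public GPT-2 model, and the flagging threshold is an arbitrary, uncalibrated illustration.

    ```python
    # Sketch of a generic perplexity-based "AI text" check, for illustration only.
    # Assumes the Hugging Face `transformers` package and the public GPT-2 model.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """Average per-token perplexity of `text` under GPT-2 (lower = more predictable)."""
        encoded = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            output = model(encoded.input_ids, labels=encoded.input_ids)
        return float(torch.exp(output.loss))

    THRESHOLD = 60.0  # arbitrary cut-off chosen for illustration, not a calibrated value

    def flagged_as_ai(text: str) -> bool:
        """Texts with low perplexity (common, predictable wording) get flagged."""
        return perplexity(text) < THRESHOLD

    essay = "The book was very good. I liked it because it was easy to read."
    print(round(perplexity(essay), 1), flagged_as_ai(essay))
    ```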
    The team then put the human-written TOEFL essays into ChatGPT and prompted it to edit the text using more sophisticated language, including substituting simple words with complex vocabulary. The GPT detectors tagged these AI-edited essays as human-written.
    “We should be very cautious about using any of these detectors in classroom settings, because there’s still a lot of biases, and they’re easy to fool with just the minimum amount of prompt design,” Zou says. Using GPT detectors could also have implications beyond the education sector. For example, search engines like Google devalue AI-generated content, which may inadvertently silence non-native English writers.
    While AI tools can have positive impacts on student learning, GPT detectors should be further enhanced and evaluated before being put into use. Zou says that training these algorithms on more diverse types of writing could be one way to improve them.

  • LIONESS redefines brain tissue imaging

    Brain tissue is arguably one of the most intricate specimens scientists have ever dealt with. Packed with a currently immeasurable amount of information, the human brain is the most sophisticated computational device, with its network of around 86 billion neurons. Understanding such complexity is a difficult task, so making progress requires technologies to unravel the tiny, complex interactions taking place in the brain at microscopic scales. Imaging is therefore an enabling tool in neuroscience.
    The new imaging and virtual reconstruction technology developed by Johann Danzl’s group at ISTA is a big leap in imaging brain activity and is aptly named LIONESS — Live Information Optimized Nanoscopy Enabling Saturated Segmentation. LIONESS is a pipeline to image, reconstruct, and analyze live brain tissue with a comprehensiveness and spatial resolution not possible until now.
    “With LIONESS, for the first time, it is possible to get a comprehensive, dense reconstruction of living brain tissue. By imaging the tissue multiple times, LIONESS allows us to observe and measure the dynamic cellular biology in the brain take its course,” says first author Philipp Velicky. “The output is a reconstructed image of the cellular arrangements in three dimensions, with time making up the fourth dimension, as the sample can be imaged over minutes, hours, or days,” he adds.
    With LIONESS neuroscientists can image living brain tissue and achieve high-resolution 3D imagery without damaging the living sample.
    Collaboration and AI the Key
    The strength of LIONESS lies in refined optics and in the two levels of deep learning — a method of Artificial Intelligence — that make up its core: the first enhances the image quality and the second identifies the different cellular structures in the dense neuronal environment.

    The pipeline is a result of a collaboration between the Danzl group, Bickel group, Jonas group, Novarino group, and ISTA’s Scientific Service Units, as well as other international collaborators. “Our approach was to assemble a dynamic group of scientists with unique combined expertise across disciplinary boundaries, who work together to close a technology gap in the analysis of brain tissue,” Johann Danzl of ISTA says.
    Surpassing hurdles
    Previously it was possible to get reconstructions of brain tissue by using Electron Microscopy. This method images the sample based on its interactions with electrons. Despite its ability to capture images with a resolution of a few nanometers (a millionth of a millimeter), Electron Microscopy requires the sample to be fixed in one biological state and physically sectioned to obtain 3D information. Hence, no dynamic information can be obtained.
    Another established technique, Light Microscopy, allows scientists to observe living systems and record intact tissue volumes by slicing them “optically” rather than physically. However, Light Microscopy is severely hampered in its resolving power by the very properties of the light waves it uses to generate an image. Its best-case resolution is a few hundred nanometers, much too coarse-grained to capture important cellular details in brain tissue.
    Using Super-resolution Light Microscopy, scientists can break this resolution barrier. Recent work in this field, dubbed SUSHI (Super-resolution Shadow Imaging), showed that adding dye molecules to the spaces around cells and applying the Nobel Prize-winning super-resolution technique STED (Stimulated Emission Depletion) microscopy reveals super-resolved ‘shadows’ of all the cellular structures and thus visualizes them in the tissue. Nevertheless, it had been impossible to image entire volumes of brain tissue with resolution enhancement that matches the brain tissue’s complex 3D architecture, because increasing resolution also entails a high load of imaging light on the sample, which may damage or ‘fry’ the subtle, living tissue.

    Herein lies the strength of LIONESS: it was developed for what the authors call “fast and mild” imaging conditions, thus keeping the sample alive. The technique does so while providing isotropic super-resolution, meaning that it is equally good in all three spatial dimensions, which allows visualization of the tissue’s cellular components in 3D nanoscale-resolved detail.
    LIONESS collects only as much information from the sample as is needed during the imaging step. A first deep learning step then fills in additional information on the brain tissue’s structure, in a process called Image Restoration. In this way, the pipeline achieves a resolution of around 130 nanometers while remaining gentle enough to image living brain tissue in real time. These steps are followed by a second deep learning step, this time to make sense of the extremely complex imaging data and identify the neuronal structures in an automated manner.
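    As a rough illustration of that two-stage structure, the PyTorch sketch below passes a random 3D volume through a toy restoration network and then a toy segmentation network. The architectures and data are placeholders chosen only to show the pipeline shape; they are not the actual LIONESS networks.

    ```python
    # Toy two-stage pipeline: (1) restore a gently imaged volume, (2) segment it.
    # Networks and data are illustrative placeholders, not the LIONESS models.
    import torch
    import torch.nn as nn

    class Restorer(nn.Module):
        """Stage 1: toy image-restoration network."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(16, 1, kernel_size=3, padding=1),
            )

        def forward(self, x):
            return self.net(x)

    class Segmenter(nn.Module):
        """Stage 2: toy segmentation network giving a foreground probability per voxel."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(16, 1, kernel_size=3, padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.net(x)

    raw_volume = torch.rand(1, 1, 32, 64, 64)   # stand-in for a sparsely sampled 3D image
    restored = Restorer()(raw_volume)            # step 1: image restoration
    segmentation = Segmenter()(restored)         # step 2: automated structure identification
    print(restored.shape, segmentation.shape)
    ```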
    Homing In
    “The interdisciplinary approach allowed us to break the intertwined limitations in resolving power and light exposure to the living system, to make sense of the complex 3D data, and to couple the tissue’s cellular architecture with molecular and functional measurements,” says Danzl.
    For virtual reconstruction, Danzl and Velicky teamed up with visual computing experts: the Bickel group at ISTA and the group led by Hanspeter Pfister at Harvard University, who contributed their expertise in automated segmentation — the process of automatically recognizing the cellular structures in the tissue — and visualization, with further support from ISTA’s image analysis staff scientist Christoph Sommer. Neuroscientists and chemists from Edinburgh, Berlin, and ISTA contributed sophisticated labeling strategies. Consequently, it was possible to bridge structural and functional measurements, i.e. to read out the cellular structures together with biological signaling activity in the same living neuronal circuit. This was done by imaging calcium ion fluxes into cells and measuring the cellular electrical activity in collaboration with the Jonas group at ISTA. The Novarino group contributed human cerebral organoids, often nicknamed “mini-brains,” which mimic human brain development. The authors underline that all of this was facilitated by expert support from ISTA’s top-notch scientific service units.
    Brain structure and activity are highly dynamic: the brain’s structures evolve as it performs and learns new tasks. This aspect is often referred to as “plasticity.” Observing changes in the brain’s tissue architecture is therefore essential to unlocking the secrets behind its plasticity. The new tool developed at ISTA shows potential for understanding the functional architecture of brain tissue, and potentially other organs, by revealing subcellular structures and capturing how these might change over time.

  • Taking a lesson from spiders: Researchers create an innovative method to produce soft, recyclable fibres for smart textiles

    Smart textiles offer many potential wearable technology applications, from therapeutics to sensing to communication. For such intelligent textiles to function effectively, they need to be strong, stretchable, and electrically conductive. However, fabricating fibres that possess these three properties is challenging and requires complex conditions and systems.
    Drawing inspiration from how spiders spin silk to make webs, a team of researchers led by Assistant Professor Swee-Ching Tan from the Department of Materials Science and Engineering under the National University of Singapore’s College of Design and Engineering, together with international collaborators, has developed an innovative method of producing soft fibres that possess these three key properties and, at the same time, can be easily reused to produce new fibres. The fabrication process can be carried out at room temperature and pressure, and uses less solvent as well as less energy, making it an attractive option for producing functional soft fibres for various smart applications.
    “Technologies for fabricating soft fibres should be simple, efficient and sustainable to meet the high demand for smart textile electronics. Soft fibres created using our spider-inspired method of spinning has been demonstrated to be versatile for various smart technology applications — for example, these functional fibres can be incorporated into a strain-sensing glove for gaming purposes, and a smart face mask to monitor breathing status for conditions such as obstructive sleep apnea. These are just some of the many possibilities,” said Asst Prof Tan.
    Their innovation was demonstrated and outlined in a paper published in the scientific journal Nature Electronics on 27 April 2023.
    Spinning a web of soft fibres
    Conventional artificial spinning methods to fabricate synthetic fibres require high pressure, high energy input, large volumes of chemicals, and specialised equipment. Moreover, the resulting fibres typically have limited functions.

    In contrast, the spider silk spinning process is highly efficient and can form strong and versatile fibres at room temperature and pressure. To address the current technological challenges, the NUS team decided to emulate this natural spinning process to create one-dimensional (1D) functional soft fibres that are strong, stretchable, and electrically conductive. They identified two unique steps in spider silk formation that they could mimic.
    Spider silk formation involves the change of a highly concentrated protein solution, known as a silk dope, into a strand of fibre. The researchers first identified that the protein concentration and interactions in the silk dope increase from dope synthesis to spinning. The second step identified was that the arrangement of proteins within the dope changes when triggered by external factors to help separate the liquid portion from the silk dope, leaving the solid part — the spider silk fibres. This second step is known as liquid-solid phase separation.
    The team recreated the two steps and developed a new spinning process known as the phase separation-enabled ambient (PSEA) spinning approach.
    The soft fibres were spun from a viscous gel solution composed of polyacrylonitrile (PAN) and silver ions — referred to as PANSion — dissolved in dimethylformamide (DMF), a common solvent. This gel solution is known as the spinning dope, which forms into a strand of soft fibre through the spinning process when the gel is pulled and spun under ambient conditions.
    Once the PANSion gel is pulled and exposed to air, water molecules in the air act as a trigger that causes the liquid portion of the gel to separate, in the form of droplets, from the solid portion; this phenomenon is known as the nonsolvent vapour-induced phase separation effect. Once separated from the solid fibre, the droplets of the liquid portion are removed by holding the fibre vertically or at an angle and letting gravity do its work.

    “Fabrication of 1D soft fibres with seamless integration of all-round functionalities is much more difficult to achieve and requires complicated fabrication or multiple post-treatment processes. This innovative method fulfils an unmet need to create a simple yet efficient spinning approach to produce functional 1D soft fibres that simultaneously possess unified mechanical and electrical functionalities,” said Asst Prof Tan.
    Three properties, one method
    The biomimetic spinning process, combined with the unique formulation of the gel solution, allowed the researchers to fabricate soft fibres imbued with three key properties: strength, stretchability, and electrical conductivity.
    The researchers tested the mechanical properties of the PANSion gel, its strength and elasticity, through a series of stress tests and demonstrated that the material possesses excellent strength and elasticity. These tests also allowed the researchers to deduce that the formation of strong chemical networks between metal-based complexes within the gel is responsible for its mechanical properties.
    Further analysis of the PANSion soft fibres at the molecular level confirmed their electrical conductivity and showed that the silver ions present in the PANSion gel contributed to the electrical conductivity of the soft fibres.
    The team concluded that PANSion soft fibres possess all the properties needed to make them versatile and potentially useful in a wide range of smart technology applications.
    Potential applications and next steps
    The team demonstrated the capabilities of the PANSion soft fibres in a number of applications, such as communication and temperature sensing. PANSion fibres were sewn to create an interactive glove that exemplified a smart gaming glove. When connected to a computer interface, the glove could successfully detect human hand gestures and enable a user to play simple games.
    PANSion fibres could also detect changes in electrical signals that could be used as a form of communication like Morse code. In addition, these fibres could sense temperature changes, a property that can potentially be capitalised to protect robots from environments with extreme temperatures. Researchers also sewed PANSion fibres into a smart face mask for monitoring the breathing activities of the mask wearer.
    On top of the wide range of potential applications of PANSion soft fibres, the innovation also earns points for sustainability. PANSion fibres can be recycled by dissolving them in DMF, converting them back into a gel solution for spinning new fibres. A comparison with other current fibre-spinning methods revealed that the new spider-inspired method consumes significantly less energy and requires a lower volume of chemicals.
    Building on this discovery, the research team will continue to work on improving the sustainability of the PANSion soft fibres throughout their production cycle, from the raw materials to recycling of the final product.

  • AI nursing ethics: Viability of robots and artificial intelligence in nursing practice

    The recent progress in the field of robotics and artificial intelligence (AI) promises a future where these technologies would play a more prominent role in society. Current developments, such as the introduction of autonomous vehicles, the ability to generate original artwork, and the creation of chatbots capable of engaging in human-like conversations, highlight the immense possibilities held by these technologies. While these advancements offer numerous benefits, they also pose some fundamental questions. Characteristics such as creativity, communication, critical thinking, and learning — once considered unique to humans — are now being replicated by AI. So, can intelligent machines be considered ‘human’?
    In a step toward answering this question, Associate Professor Tomohide Ibuki from Tokyo University of Science, in collaboration with medical ethics researcher Dr. Eisuke Nakazawa from The University of Tokyo and nursing researcher Dr. Ai Ibuki from Kyoritsu Women’s University, recently explored whether robots and AI can be entrusted with nursing, a highly humane practice. Their work was published online in the journal Nursing Ethics on 12 June 2023.
    “This study in applied ethics examines whether robotics, human engineering, and human intelligence technologies can and should replace humans in nursing tasks,” says Dr. Ibuki.
    Nurses demonstrate empathy and establish meaningful connections with their patients. This human touch is essential in fostering a sense of understanding, trust, and emotional support. The researchers examined whether the current advancements in robotics and AI can implement these human qualities by replicating the ethical concepts attributed to human nurses, including advocacy, accountability, cooperation, and caring.
    Advocacy in nursing involves speaking on behalf of patients to ensure that they receive the best possible medical care. This encompasses safeguarding patients from medical errors, providing treatment information, acknowledging the preferences of a patient, and acting as mediators between the hospital and the patient. In this regard, the researchers noted that while AI can inform patients about medical errors and present treatment options, they questioned its ability to truly understand and empathize with patients’ values and to effectively navigate human relationships as mediators.
    The researchers also expressed concerns about holding robots accountable for their actions. They suggested the development of explainable AI, which would provide insights into the decision-making process of AI systems, improving accountability.
    The study further highlights that nurses are required to collaborate effectively with their colleagues and other healthcare professionals to ensure the best possible care for patients. As humans rely on visual cues to build trust and establish relationships, unfamiliarity with robots might lead to suboptimal interactions. Recognizing this issue, the researchers emphasized the importance of conducting further investigations to determine the appropriate appearance of robots for facilitating efficient cooperation with human medical staff.
    Lastly, while robots and AI have the potential to understand a patient’s emotions and provide appropriate care, the patient must also be willing to accept robots as care providers.
    Having considered the above four ethical concepts in nursing, the researchers acknowledge that while robots may not fully replace human nurses anytime soon, they do not dismiss the possibility. While robots and AI can potentially reduce the shortage of nurses and improve treatment outcomes for patients, their deployment requires careful weighing of the ethical implications and impact on nursing practice.
    “While the present analysis does not preclude the possibility of implementing the ethical concepts of nursing in robots and AI in the future, it points out that there are several ethical questions. Further research could not only help solve them but also lead to new discoveries in ethics,” concludes Dr. Ibuki.
    Here’s hoping for such novel applications of robotics and AI to emerge soon!

  • Solving rare disease mysteries … and protecting privacy

    Macquarie University researchers have demonstrated a new way of linking personal records and protecting privacy. The first application is in identifying cases of rare genetic disorders. There are many other potential applications across society.
    The research will be presented at the 18th ACM ASIA Conference on Computer and Communications Security (ACM ASIACCS 2023) in Melbourne on 12 July.
    A five-year-old boy in the US has a mutation in a gene called GPX4, which he shares with just 10 other children in the world. The condition causes skeletal and central nervous system abnormalities. There are likely to be other children with the disorder recorded in hundreds of health and diagnostic databases worldwide, but we do not know of them, because their privacy is guarded for legal and commercial reasons.
    But what if records linked to the condition could be found and counted while still preserving privacy? Researchers from the Macquarie University Cyber Security Hub have developed a technique to achieve exactly that. The team includes Dr Dinusha Vatsalan and Professor Dali Kaafar of the University’s School of Computing and the boy’s father, software engineer Mr Sanath Kumar Ramesh, who is CEO of the OpenTreatments Foundation in Seattle, Washington.
    “I am very excited about this work,” says Mr Ramesh, whose foundation initiated and supported the project. “Knowing how many people have a condition underpins economic assumptions. If a condition was previously thought to have 15 patients and now we know, having pulled in data from diagnostic testing companies, that there are 100 patients, that increases market-size hugely.
    “It would have a significant economic impact. The valuation of a company working on the condition would go up. Product costing would go down. How insurance companies account for medical costs would change. Diagnostic companies would target [the condition] more. And you can start to do epidemiology more precisely.”
    Linking and counting data records might seem simple but, in reality, it involves many issues, says Professor Kaafar. First, because we are dealing with a rare disease, there is no centralised database, and the records are sprinkled across the world. “In this case in hundreds of databases,” he says. “And from a business perspective, data is precious, and the companies holding it are not necessarily interested in sharing.”

    Then, there are technical issues of matching data that is recorded, encoded, and stored in different ways, and accounting for individuals who are double-counted in and between different databases. And, on top of all that, are the privacy considerations. “We are dealing with very, very sensitive health data,” Professor Kaafar says.
    This personal data isn’t needed for a simple estimate of the number of patients and for epidemiological purposes. But, until now, it was needed to ensure that records are unique and can be linked.
    Dr Vatsalan and her colleagues used a technique known as Bloom filter encoding with differential privacy. They devised a suite of algorithms that deliberately introduces enough noise into the data to blur precise details to the point where they cannot be extracted from individual records, while still allowing records of the same disease condition to be matched and clustered.
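    A minimal Python sketch of how such an encoding can work in principle is shown below. The field names and parameters are illustrative, and the simple randomised-response bit flipping stands in for the paper's calibrated differential-privacy mechanism; this is not the published algorithm suite.

    ```python
    # Sketch: encode a record's quasi-identifiers into a Bloom filter, flip bits
    # at random to add privacy noise, and compare encodings with a Dice score.
    # Field names and parameter values are illustrative only.
    import hashlib
    import random

    BLOOM_BITS = 256
    NUM_HASHES = 4

    def bigrams(value: str):
        value = value.lower()
        return [value[i:i + 2] for i in range(len(value) - 1)]

    def bloom_encode(record: dict) -> set:
        """Set of bit positions switched on for the record's quasi-identifiers."""
        bits = set()
        for field in ("name", "birth_year"):
            for gram in bigrams(str(record[field])):
                for seed in range(NUM_HASHES):
                    digest = hashlib.sha256(f"{seed}:{gram}".encode()).hexdigest()
                    bits.add(int(digest, 16) % BLOOM_BITS)
        return bits

    def add_noise(bits: set, flip_prob: float = 0.05) -> set:
        """Randomised response: flip each bit with probability flip_prob."""
        noisy = set()
        for pos in range(BLOOM_BITS):
            present = pos in bits
            if random.random() < flip_prob:
                present = not present
            if present:
                noisy.add(pos)
        return noisy

    def dice(a: set, b: set) -> float:
        return 2 * len(a & b) / (len(a) + len(b)) if (a or b) else 0.0

    rec1 = {"name": "Jane Doe", "birth_year": 2018}
    rec2 = {"name": "Jane Doe", "birth_year": 2018}   # same child, second database
    rec3 = {"name": "John Roe", "birth_year": 1975}

    e1, e2, e3 = (add_noise(bloom_encode(r)) for r in (rec1, rec2, rec3))
    print("same person :", round(dice(e1, e2), 2))
    print("different   :", round(dice(e1, e3), 2))
    ```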
    The accuracy of the technique was then evaluated using North Carolina voter registration data. The results showed that the method achieves a negligible error rate with a guarantee of a very high level of privacy, even on highly corrupted datasets, and that it significantly outperforms existing methods.
    In addition to detecting and counting rare disease cases, the research has many other applications: determining awareness of a new product in marketing, for instance, or, in cybersecurity, tracking the number of unique views of particular social media posts.

    But it is the application to rare diseases about which the Macquarie University researchers are passionate. “There is no better feeling for a researcher than seeing the technology they’ve been developing having a real impact and making the world a better place,” says Professor Kaafar. “In this case, it is so real and so important.”
    The OpenTreatments Foundation partly funded the research.
    “The Foundation wanted to make this project completely open source from the very beginning,” Dr Vatsalan adds. “So the algorithm we implemented is being published openly.”

  • Bees make decisions better and faster than we do, for the things that matter to them

    Honey bees have to balance effort, risk and reward, making rapid and accurate assessments of which flowers are most likely to offer food for their hive. Research published in the journal eLife today reveals how millions of years of evolution has engineered honey bees to make fast decisions and reduce risk.
    The study enhances our understanding of insect brains, how our own brains evolved, and how to design better robots.
    The paper presents a model of decision-making in bees and outlines the paths in their brains that enable fast decision-making. The study was led by Professor Andrew Barron from Macquarie University in Sydney, and Dr HaDi MaBouDi, Neville Dearden and Professor James Marshall from the University of Sheffield.
    “Decision-making is at the core of cognition,” says Professor Barron. “It’s the result of an evaluation of possible outcomes, and animal lives are full of decisions. A honey bee has a brain smaller than a sesame seed. And yet she can make decisions faster and more accurately than we can. A robot programmed to do a bee’s job would need the back up of a supercomputer.
    “Today’s autonomous robots largely work with the support of remote computing,” Professor Barron continues. “Drones are relatively brainless, they have to be in wireless communication with a data centre. This technology path will never allow a drone to truly explore Mars solo — NASA’s amazing rovers on Mars have travelled about 75 kilometres in years of exploration.”
    Bees need to work quickly and efficiently, finding nectar and returning it to the hive, while avoiding predators. They need to make decisions. Which flower will have nectar? While they’re flying, they’re only prone to aerial attack. When they land to feed, they’re vulnerable to spiders and other predators, some of which use camouflage to look like flowers.

    “We trained 20 bees to recognise five different coloured ‘flower disks’. Blue flowers always had sugar syrup,” says Dr MaBouDi. “Green flowers always had quinine [tonic water] with a bitter taste for bees. Other colours sometimes had glucose.”
    “Then we introduced each bee to a ‘garden’ where the ‘flowers’ just had distilled water. We filmed each bee then watched more than 40 hours of video, tracking the path of the bees and timing how long it took them to make a decision.
    “If the bees were confident that a flower would have food, then they quickly decided to land on it, taking an average of 0.6 seconds,” says Dr MaBouDi. “If they were confident that a flower would not have food, they made a decision just as quickly.”
    If they were unsure, then they took much more time — on average 1.4 seconds — and the time reflected the probability that a flower had food.
    The team then built a computer model from first principles aiming to replicate the bees’ decision-making process. They found the structure of their computer model looked very similar to the physical layout of a bee brain.
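    For a feel of how decision time can emerge from such a process, the Python sketch below runs a generic evidence-accumulation ("drift to a bound") model: strong evidence hits the decision bound quickly, while ambiguous evidence takes longer. The parameters are arbitrary illustrations, not values from the authors' model.

    ```python
    import random

    # Generic sequential-sampling sketch: evidence for "land" vs "avoid" is
    # accumulated until a decision bound is reached. Stronger evidence (a flower
    # colour the bee is confident about) reaches the bound sooner; ambiguous
    # evidence takes longer. Parameters are arbitrary, not fitted to the bee data.
    def decide(evidence_strength: float, bound: float = 5.0, noise: float = 1.0,
               dt: float = 0.01, seed: int = 0):
        """Return (choice, decision_time) for a single simulated trial."""
        rng = random.Random(seed)
        accumulated, t = 0.0, 0.0
        while abs(accumulated) < bound:
            accumulated += evidence_strength * dt + rng.gauss(0.0, noise) * dt ** 0.5
            t += dt
        return ("land" if accumulated > 0 else "avoid"), t

    for strength, label in ((4.0, "confident: rewarded colour"),
                            (-4.0, "confident: unrewarded colour"),
                            (0.5, "ambiguous colour")):
        choice, t = decide(strength, seed=42)
        print(f"{label:30s} -> {choice:5s} after {t:.2f} s (simulated)")
    ```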
    “Our study has demonstrated complex autonomous decision-making with minimal neural circuitry,” says Professor Marshall. “Now we know how bees make such smart decisions, we are studying how they are so fast at gathering and sampling information. We think bees are using their flight movements to enhance their visual system to make them better at detecting the best flowers.”
    AI researchers can learn much from insects and other ‘simple’ animals. Millions of years of evolution has led to incredibly efficient brains with very low power requirements. The future of AI in industry will be inspired by biology, says Professor Marshall, who co-founded Opteran, a company that reverse-engineers insect brain algorithms to enable machines to move autonomously, like nature.

  • Unraveling the humanity in metacognitive ability: Distinguishing human metalearning from AI

    Monitoring and controlling one’s own learning process objectively is essential for improving one’s learning abilities. This ability, often referred to as “learning to learn” or “metacognition,” has been studied in educational psychology. Owing to the tight coupling between the higher meta-level and the lower object-level cognitive systems, a conventional reductionist approach has difficulty uncovering the neural basis of metacognition. To overcome this limitation, the researchers employed a novel research approach in which they compared the metacognition of artificial intelligence (AI) with that of humans.
    First, they demonstrated that the metacognitive system of AI, which aims to maximize rewards and minimize punishments, can effectively regulate learning speed and memory retention in response to the environment and task. Second, they demonstrated metacognitive behavior in human motor learning, showing that providing monetary feedback as a function of memory can either promote or suppress motor learning and memory retention. This constitutes the first empirical demonstration of the bi-directional regulation of implicit motor learning abilities by economic factors. Notably, while the AI exhibited equal metacognitive abilities for reward and punishment, humans exhibited an asymmetric response to monetary gain and loss: they adjust their memory retention in response to gain and their learning speed in response to loss. This asymmetry may provide valuable insights into the neural mechanisms underlying human metacognition.
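    To give a concrete feel for a meta-level that tunes learning speed and retention to maximise payoff, here is a generic Python sketch. The state-space adaptation rule is a standard textbook form, and the grid-search "meta-level" and payoff function are illustrative stand-ins, not the researchers' actual model.

    ```python
    # Generic sketch: a meta-level controller picks the learning rate and retention
    # of a simple motor-adaptation model so as to maximise an illustrative payoff
    # that favours retained memory. Not the study's actual model.
    import itertools

    def run_adaptation(A: float, B: float, perturbation: float = 1.0,
                       learn_trials: int = 40, washout_trials: int = 20) -> float:
        """Simulate adaptation then washout; return payoff favouring retained memory."""
        x = 0.0
        for _ in range(learn_trials):          # learning phase: error drives the update
            error = perturbation - x
            x = A * x + B * error
        retained = 0.0
        for _ in range(washout_trials):        # washout: no error feedback, memory decays
            x = A * x
            retained += x
        return retained / washout_trials       # higher payoff for better retention

    # Meta-level: choose the (retention A, learning rate B) pair that maximises payoff.
    candidates = itertools.product((0.90, 0.95, 0.99), (0.05, 0.2, 0.5))
    best_A, best_B = max(candidates, key=lambda ab: run_adaptation(*ab))
    print(f"meta-level choice: retention A = {best_A}, learning rate B = {best_B}")
    ```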
    Researchers anticipate that these findings could be effectively applied to enhance the learning abilities of individuals engaging in new sports or motor-related activities, such as post-stroke rehabilitation training.
    This work was supported by the Japan Society for the Promotion of Science KAKENHI (JP19H04977, JP19H05729, and JP22H00498). TS was supported by a JSPS Research Fellowship for Young Scientists and KAKENHI (JP19J20366). NS was supported by NIH R21 NS120274.