More stories

  • New AI model draws treasure maps to diagnose disease

    Medical diagnostics expert, doctor’s assistant, and cartographer are all fair titles for an artificial intelligence model developed by researchers at the Beckman Institute for Advanced Science and Technology.
    Their new model accurately identifies tumors and diseases in medical images and is programmed to explain each diagnosis with a visual map. The tool’s unique transparency allows doctors to easily follow its line of reasoning, double-check for accuracy, and explain the results to patients.
    “The idea is to help catch cancer and disease in its earliest stages — like an X on a map — and understand how the decision was made. Our model will help streamline that process and make it easier on doctors and patients alike,” said Sourya Sengupta, the study’s lead author and a graduate research assistant at the Beckman Institute.
    This research appeared in IEEE Transactions on Medical Imaging.
    Cats and dogs and onions and ogres
    First conceptualized in the 1950s, artificial intelligence — the concept that computers can learn to adapt, analyze, and problem-solve like humans do — has reached household recognition, due in part to ChatGPT and its extended family of easy-to-use tools.
    Machine learning, or ML, is one of many methods researchers use to create artificially intelligent systems. ML is to AI what driver’s education is to a 15-year-old: a controlled, supervised environment to practice decision-making, calibrating to new environments, and rerouting after a mistake or wrong turn.

    Deep learning — machine learning’s wiser and worldlier relative — can digest larger quantities of information to make more nuanced decisions. Deep learning models derive their decisive power from the closest computer simulations we have to the human brain: deep neural networks.
    These networks — just like humans, onions, and ogres — have layers, which makes them tricky to navigate. The more thickly layered, or nonlinear, a network’s intellectual thicket, the better it performs complex, human-like tasks.
    Consider a neural network trained to differentiate between pictures of cats and pictures of dogs. The model learns by reviewing images in each category and filing away their distinguishing features (like size, color, and anatomy) for future reference. Eventually, the model learns to watch out for whiskers and cry Doberman at the first sign of a floppy tongue.
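    In code, the training the article describes amounts to showing a model labelled examples and letting its layers adjust until the distinguishing features are captured. The following is a minimal sketch of that idea using made-up numeric features standing in for "size, color, and anatomy" rather than real images; the library choice and all numbers are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Made-up feature vectors standing in for "size, color, anatomy":
    # cats cluster in one region of feature space, dogs in another.
    cats = rng.normal(loc=[-1.0, -1.0, -1.0], scale=0.5, size=(200, 3))
    dogs = rng.normal(loc=[+1.0, +1.0, +1.0], scale=0.5, size=(200, 3))
    X = np.vstack([cats, dogs])
    y = np.array([0] * 200 + [1] * 200)     # 0 = cat, 1 = dog

    # A small multi-layer network learns the distinguishing features from examples.
    model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000, random_state=0)
    model.fit(X, y)

    print(model.predict([[-0.9, -1.1, -0.8]]))   # likely cat (0)
    print(model.predict([[1.2, 0.8, 1.0]]))      # likely dog (1)
    ```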
    But deep neural networks are not infallible — much like overzealous toddlers, said Sengupta, who studies biomedical imaging in the University of Illinois Urbana-Champaign Department of Electrical and Computer Engineering.
    “They get it right sometimes, maybe even most of the time, but it might not always be for the right reasons,” he said. “I’m sure everyone knows a child who saw a brown, four-legged dog once and then thought that every brown, four-legged animal was a dog.”
    Sengupta’s gripe? If you ask a toddler how they decided, they will probably tell you.

    “But you can’t ask a deep neural network how it arrived at an answer,” he said.
    The black box problem
    Sleek, skilled, and speedy as they may be, deep neural networks struggle to master the seminal skill drilled into high school calculus students: showing their work. This is referred to as the black box problem of artificial intelligence, and it has baffled scientists for years.
    On the surface, coaxing a confession from the reluctant network that mistook a Pomeranian for a cat does not seem unbelievably crucial. But the gravity of the black box sharpens as the images in question become more life-altering. For example: X-ray images from a mammogram that may indicate early signs of breast cancer.
    The process of decoding medical images looks different in different regions of the world.
    “In many developing countries, there is a scarcity of doctors and a long line of patients. AI can be helpful in these scenarios,” Sengupta said.
    When time and talents are in high demand, automated medical image screening can be deployed as an assistive tool — in no way replacing the skill and expertise of doctors, Sengupta said. Instead, an AI model can pre-scan medical images and flag those containing something unusual — like a tumor or early sign of disease, called a biomarker — for a doctor’s review. This method saves time and can even improve the performance of the person tasked with reading the scan.
    These models work well, but their bedside manner leaves much to be desired when, for example, a patient asks why an AI system flagged an image as containing (or not containing) a tumor.
    Historically, researchers have answered questions like this with a slew of tools designed to decipher the black box from the outside in. Unfortunately, the researchers using them often face a plight similar to that of an unfortunate eavesdropper, leaning against a locked door with an empty glass to their ear.
    “It would be so much easier to simply open the door, walk inside the room, and listen to the conversation firsthand,” Sengupta said.
    To further complicate the matter, many variations of these interpretation tools exist. This means that any given black box may be interpreted in “plausible but different” ways, Sengupta said.
    “And now the question is: which interpretation do you believe?” he said. “There is a chance that your choice will be influenced by your subjective bias, and therein lies the main problem with traditional methods.”
    Sengupta’s solution? An entirely new type of AI model that interprets itself every time — that explains each decision instead of blandly reporting the binary of “tumor versus non-tumor,” Sengupta said.
    No water glass needed, in other words, because the door has disappeared.
    Mapping the model
    A yogi learning a new posture must practice it repeatedly. An AI model learning to tell cats from dogs must study countless images of both quadrupeds.
    An AI model functioning as a doctor’s assistant is raised on a diet of thousands of medical images, some with abnormalities and some without. When faced with something never before seen, it runs a quick analysis and spits out a number between 0 and 1. If the number is less than .5, the image is not assumed to contain a tumor; a value greater than .5 warrants a closer look.
    Sengupta’s new AI model mimics this setup with a twist: the model produces a value plus a visual map explaining its decision.
    The map — referred to by the researchers as an equivalency map, or E-map for short — is essentially a transformed version of the original X-ray, mammogram, or other medical image medium. Like a paint-by-numbers canvas, each region of the E-map is assigned a number. The greater the value, the more medically interesting the region is for predicting the presence of an anomaly. The model sums up the values to arrive at its final figure, which then informs the diagnosis.
    “For example, if the total sum is 1, and you have three values represented on the map — .5, .3, and .2 — a doctor can see exactly which areas on the map contributed more to that conclusion and investigate those more fully,” Sengupta said.
    This way, doctors can double-check how well the deep neural network is working — like a teacher checking the work on a student’s math problem — and respond to patients’ questions about the process.
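    As a rough sketch, the arithmetic the researchers describe can be written out directly: each region of the E-map carries a value, the values sum to the model’s final score, and the .5 cutoff mentioned above turns that score into a decision. The region values and array layout below are hypothetical, following the worked example in the quote rather than the actual model.

    ```python
    import numpy as np

    def diagnose_with_emap(e_map: np.ndarray, threshold: float = 0.5):
        """Toy illustration of the equivalency-map idea described above.

        `e_map` is a hypothetical 2-D array in which each region's value reflects
        how medically interesting that region is; the final score is simply the
        sum of those values, and the map is returned so a clinician can see which
        regions contributed most.
        """
        score = float(e_map.sum())
        decision = "closer look warranted" if score > threshold else "no tumor assumed"
        return score, decision, e_map

    # Worked example from the article: three regions contributing .5, .3 and .2
    toy_map = np.array([[0.5, 0.3],
                        [0.2, 0.0]])
    score, decision, _ = diagnose_with_emap(toy_map)
    print(score, decision)   # 1.0 -> "closer look warranted"
    ```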
    “The result is a more transparent, trustable system between doctor and patient,” Sengupta said.
    X marks the spot
    The researchers trained their model on three different disease diagnosis tasks including more than 20,000 total images.
    First, the model reviewed simulated mammograms and learned to flag early signs of tumors. Second, it analyzed optical coherence tomography images of the retina, where it practiced identifying a buildup called Drusen that may be an early sign of macular degeneration. Third, the model studied chest X-rays and learned to detect cardiomegaly, a heart enlargement condition that can lead to disease.
    Once the mapmaking model had been trained, the researchers compared its performance to existing black-box AI systems — the ones without a self-interpretation setting. The new model performed comparably to its counterparts in all three categories, with accuracy rates of 77.8% for mammograms, 99.1% for retinal OCT images, and 83% for chest X-rays, compared with the existing models’ 77.8%, 99.1%, and 83.33%.
    These high accuracy rates are a product of the deep neural network, the non-linear layers of which mimic the nuance of human neurons.
    To create such a complicated system, the researchers peeled the proverbial onion and drew inspiration from linear neural networks, which are simpler and easier to interpret.
    “The question was: How can we leverage the concepts behind linear models to make non-linear deep neural networks also interpretable like this?” said principal investigator Mark Anastasio, a Beckman Institute researcher and the Donald Biggar Willet Professor and Head of the Illinois Department of Bioengineering. “This work is a classic example of how fundamental ideas can lead to some novel solutions for state-of-the-art AI models.”
    The researchers hope that future models will be able to detect and diagnose anomalies all over the body and even differentiate between them.
    “I am excited about our tool’s direct benefit to society, not only in terms of improving disease diagnoses, but also improving trust and transparency between doctors and patients,” Anastasio said.

  • A key to the future of robots could be hiding in liquid crystals

    Robots and cameras of the future could be made of liquid crystals, thanks to a new discovery that significantly expands the potential of the chemicals already common in computer displays and digital watches.
    The findings, a simple and inexpensive way to manipulate the molecular properties of liquid crystals with light exposure, are now published in Advanced Materials.
    “Using our method, any lab with a microscope and a set of lenses can arrange the liquid crystal alignment in any pattern they’d want,” said author Alvin Modin, a doctoral researcher studying physics at Johns Hopkins. “Industrial labs and manufacturers could probably adopt the method in a day.”
    Liquid crystal molecules flow like a liquid, but they share a common orientation, as molecules in a solid do, and this orientation can change in response to stimuli. They are useful in LCD screens, biomedical imaging instruments, and other devices that require precise control of light and subtle movements. But controlling their alignment in three dimensions requires costly and complicated techniques, Modin said.
    The team, which includes Johns Hopkins physics professor Robert Leheny and assistant research professor Francesca Serra, discovered they could manipulate the three-dimensional orientation of liquid crystals by controlling light exposures of a photosensitive material deposited on glass.
    They shined polarized and unpolarized light at the liquid crystals through a microscope. In polarized light, light waves oscillate in specific directions rather than randomly in all directions, as they would in unpolarized light. The team used the method to create a microscopic lens of liquid crystals able to focus light depending on the polarization of light shining through it.
    First, the team beamed polarized light to align the liquid crystals on a surface. Then, they used regular light to reorient the liquid crystals upward from that plane. This allowed them to control the orientation of two types of common liquid crystals and create patterns with features the size of a few micrometers, a fraction of the thickness of a human hair.

    The findings could lead to the creation of programmable tools that shapeshift in response to stimuli, like those needed in soft, rubberlike robots to handle complex objects and environments or camera lenses that automatically focus depending on lighting conditions, said Serra, who is also an associate professor at the University of Southern Denmark.
    “If I wanted to make an arbitrary three-dimensional shape, like an arm or a gripper, I would have to align the liquid crystals so that when it is subject to a stimulus, this material restructures spontaneously into those shapes,” Serra said. “The missing information until now was how to control this three-dimensional axis of the alignment of liquid crystals, but now we have a way to make that possible.”
    The scientists are working to obtain a patent for their discovery and plan to further test it with different types of liquid crystal molecules and solidified polymers made of these molecules.
    “Certain types of structures couldn’t be attempted before because we didn’t have the right control of the three-dimensional alignment of the liquid crystals,” Serra said. “But now we do, so it is just limited by one’s imagination in finding a clever structure to build with this method, using a three-dimensional varying alignment of liquid crystals.”

  • New dressing robot can ‘mimic’ the actions of care-workers

    Scientists have developed a new robot that can ‘mimic’ the two-handed movements of care-workers as they dress an individual.
    Until now, assistive dressing robots, designed to help an elderly person or a person with a disability get dressed, have been created in the laboratory as one-armed machines, but research has shown that a single arm can be impractical or uncomfortable for the person receiving care.
    To tackle this problem, Dr Jihong Zhu, a robotics researcher at the University of York’s Institute for Safe Autonomy, proposed a two-armed assistive dressing scheme, an approach that has not been attempted in previous research. It was inspired by caregivers, whose practice shows that specific actions are required to reduce discomfort and distress to the individual in their care.
    It is thought that this technology could be significant in the social care system to allow care-workers to spend less time on practical tasks and more time on the health and mental well-being of individuals.
    Dr Zhu gathered important information on how care-workers moved during a dressing exercise by allowing a robot to observe and learn from human movements and then, through AI, generating a model that mimics how human helpers do their task.
    This allowed the researchers to gather enough data to illustrate that two hands were needed for dressing and not one, as well as information on the angles that the arms make, and the need for a human to intervene and stop or alter certain movements.
    Dr Zhu, from the University of York’s Institute for Safe Autonomy and the School of Physics, Engineering and Technology, said: “We know that practical tasks, such as getting dressed, can be done by a robot, freeing up a care-worker to concentrate more on providing companionship and observing the general well-being of the individual in their care. It has been tested in the laboratory, but for this to work outside of the lab we really needed to understand how care-workers did this task in real-time.

    “We adopted a method called learning from demonstration, which means that you don’t need an expert to programme a robot, a human just needs to demonstrate the motion that is required of the robot and the robot learns that action. It was clear that for care workers two arms were needed to properly attend to the needs of individuals with different abilities.
    “One hand holds the individual’s hand to guide them comfortably through the arm of a shirt, for example, whilst at the same time the other hand moves the garment up and around or over. With the current one-armed machine scheme a patient is required to do too much work in order for a robot to assist them, moving their arm up in the air or bending it in ways that they might not be able to do.”
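    The “learning from demonstration” idea Dr Zhu describes can be sketched very simply: record many human demonstrations of the two-handed motion, then fit a nominal trajectory for the robot to track. The snippet below is an illustrative sketch under assumed data shapes (two hands, 3-D positions per time step); it is not the York team’s actual pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical demonstration data: 20 recorded human demonstrations, each a
    # 100-step trajectory of two hands (x, y, z per hand = 6 values per step).
    t = np.linspace(0, 1, 100)
    ideal = np.stack([np.sin(2 * np.pi * t)] * 6, axis=1)          # (100, 6)
    demos = ideal + rng.normal(scale=0.05, size=(20, 100, 6))       # (20, 100, 6)

    # A very simple learning-from-demonstration baseline: average the
    # demonstrations to obtain a nominal two-handed motion for the robot to track.
    nominal_trajectory = demos.mean(axis=0)                          # (100, 6)

    # Where the demonstrations disagree most, a human may need to intervene or
    # the robot should slow down and defer.
    variability = demos.std(axis=0).max(axis=1)                      # (100,)
    print(nominal_trajectory.shape, variability.max())
    ```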
    The team were also able to build algorithms that made the robotic arm flexible enough to perform the pulling and lifting actions, yet able to be stopped by the gentle touch of a human hand, or guided out of an action by a human hand moving it left or right, up or down, without the robot resisting.
    Dr Zhu said: “Human modelling can really help with efficient and safe human and robot interactions, but it is not only important to ensure it performs the task, but that it can be halted or changed mid-action should an individual desire it. Trust is a significant part of this process, and the next step in this research is testing the robot’s safety limitations and whether it will be accepted by those who need it most.”
    The research, in collaboration with researchers from TU Delft and Honda Research Institute Europe, was funded by the Honda Research Institute Europe.

  • Network of quantum sensors boosts precision

    The quantum systems employed in quantum technologies, for example single atoms, are highly sensitive: any interaction with the environment can induce changes in the quantum system, leading to errors. However, this remarkable sensitivity of quantum systems to environmental factors actually represents a unique advantage. This sensitivity enables quantum sensors to surpass conventional sensors in precision, for example when measuring magnetic or gravitational fields.
    Noise cancellation using correlation spectroscopy
    The delicate quantum properties needed for sensing can be covered up by noise — rapid interactions between the sensor and the environment that disrupt the information within the sensor, rendering the quantum signal unreadable. In a new paper, physicists led by Christian Roos from the Department of Experimental Physics at the University of Innsbruck, together with partners in Israel and the USA, present a method for making this information accessible again using “correlation spectroscopy.”
    “Here, the key idea is that we do not just use a single sensor, but a network of up to 91 sensors, each consisting of a single atom,” explains Helene Hainzer, the first author of the paper. “Since noise affects all sensors equally, analyzing simultaneous changes in the states of all sensors allows us to effectively subtract the environmental noise and reconstruct the desired information. This allows us to precisely measure magnetic field variations in the environment, as well as determine the distance between the quantum sensors.” Beyond that, the method is applicable to various other sensing tasks and across diverse experimental platforms, reflecting its versatility.
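    The noise-cancellation principle Hainzer describes can be illustrated with a small classical analogy: if all sensors pick up the same environmental noise, comparing them cancels that common contribution and leaves the signal of interest. The simulation below is only a sketch of that idea, not the correlation-spectroscopy analysis used in the paper; all numbers are made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_samples = 10_000

    # Common-mode "environmental" noise seen identically by every sensor, plus a
    # small true difference between sensor 1 and sensor 2 that we want to measure
    # (e.g. a magnetic-field gradient).
    common_noise = rng.normal(scale=5.0, size=n_samples)
    true_difference = 0.3

    sensor_1 = common_noise + rng.normal(scale=0.1, size=n_samples)
    sensor_2 = common_noise + true_difference + rng.normal(scale=0.1, size=n_samples)

    # Each sensor alone is dominated by the common noise...
    print(sensor_1.std())                      # roughly 5
    # ...but comparing the sensors cancels it and recovers the small signal.
    print((sensor_2 - sensor_1).mean())        # roughly 0.3

    # Averaging over N independent sensors shrinks the remaining statistical error
    # roughly as 1/sqrt(N), which is why precision grows with network size.
    ```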
    Precision increases with the number of sensors
    While correlation spectroscopy has been demonstrated previously with two atomic clocks, allowing for a superior precision in measuring time, “our work marks the first application of this method on such a large number of atoms,” emphasizes ERC award winner Christian Roos. “In order to establish experimental control over so many atoms, we built an entirely new experimental setup over several years.” In their publication, the Innsbruck scientists show that the precision of the sensor measurements increases with the number of particles in the sensor network. Notably, entanglement — conventionally used to enhance quantum sensor precision but hard to create in the laboratory — fails to provide an advantage compared to the multi-sensor network.
    The work has been published in the journal Physical Review X and was financially supported by the Austrian Science Fund FWF, the Austrian Federal Ministry of Education, Science and Research, the European Union and the Federation of Austrian Industries Tyrol, among others.

  • New AI smartphone tool accurately diagnoses ear infections

    A new cellphone app developed by physician-scientists at UPMC and the University of Pittsburgh, which uses artificial intelligence (AI) to accurately diagnose ear infections, or acute otitis media (AOM), could help decrease unnecessary antibiotic use in young children, according to new research published today in JAMA Pediatrics.
    AOM is one of the most common childhood infections for which antibiotics are prescribed, but it can be difficult to distinguish from other ear conditions without intensive training. The new AI tool, which makes a diagnosis by assessing a short video of the ear drum captured by an otoscope connected to a cellphone camera, offers a simple and effective solution that could be more accurate than trained clinicians.
    “Acute otitis media is often incorrectly diagnosed,” said senior author Alejandro Hoberman, M.D., professor of pediatrics and director of the Division of General Academic Pediatrics at Pitt’s School of Medicine and president of UPMC Children’s Community Pediatrics. “Underdiagnosis results in inadequate care and overdiagnosis results in unnecessary antibiotic treatment, which can compromise the effectiveness of currently available antibiotics. Our tool helps get the correct diagnosis and guide the right treatment.”
    According to Hoberman, about 70% of children have an ear infection before their first birthday. Although this condition is common, accurate diagnosis of AOM requires a trained eye to detect subtle visual findings gained from a brief view of the ear drum on a wriggly baby. AOM is often confused with otitis media with effusion, or fluid behind the ear drum, a condition that generally does not involve bacteria and does not benefit from antimicrobial treatment.
    To develop a practical tool to improve accuracy in the diagnosis of AOM, Hoberman and his team started by building and annotating a training library of 1,151 videos of the tympanic membrane from 635 children who visited outpatient UPMC pediatric offices between 2018 and 2023. Two trained experts with extensive experience in AOM research reviewed the videos and made a diagnosis of AOM or not AOM.
    “The ear drum, or tympanic membrane, is a thin, flat piece of tissue that stretches across the ear canal,” said Hoberman. “In AOM, the ear drum bulges like a bagel, leaving a central area of depression that resembles a bagel hole. In contrast, in children with otitis media with effusion, no bulging of the tympanic membrane is present.”
    The researchers used 921 videos from the training library to teach two different AI models to detect AOM by looking at features of the tympanic membrane, including shape, position, color and translucency. Then they used the remaining 230 videos to test how the models performed.

    Both models were highly accurate, producing sensitivity and specificity values of greater than 93%, meaning that they had low rates of false negatives and false positives. According to Hoberman, previous studies of clinicians have reported diagnostic accuracy of AOM ranging from 30% to 84%, depending on type of health care provider, level of training and age of the children being examined.
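    For readers unfamiliar with the metrics, sensitivity and specificity summarise the false-negative and false-positive rates cited above. The short sketch below shows how they would be computed from predictions and ground-truth labels; the labels and predictions are hypothetical, not data from the study.

    ```python
    def sensitivity_specificity(y_true, y_pred):
        """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical test-set labels (1 = AOM, 0 = not AOM) and model predictions.
    labels      = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1]
    predictions = [1, 1, 0, 0, 0, 0, 1, 1, 0, 1]
    sens, spec = sensitivity_specificity(labels, predictions)
    print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
    ```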
    “These findings suggest that our tool is more accurate than many clinicians,” said Hoberman. “It could be a gamechanger in primary health care settings to support clinicians in stringently diagnosing AOM and guiding treatment decisions.”
    “Another benefit of our tool is that the videos we capture can be stored in a patient’s medical record and shared with other providers,” said Hoberman. “We can also show parents and trainees — medical students and residents — what we see and explain why we are or are not making a diagnosis of ear infection. It is important as a teaching tool and for reassuring parents that their child is receiving appropriate treatment.”
    Hoberman hopes that their technology could soon be implemented widely across health care provider offices to enhance accurate diagnosis of AOM and support treatment decisions.
    Other authors on the study were Nader Shaikh, M.D., Shannon Conway, Timothy Shope, M.D., Mary Ann Haralam, C.R.N.P., Catherine Campese, C.R.N.P., and Matthew Lee, all of UPMC and the University of Pittsburgh; Jelena Kovačević, Ph.D., of New York University; Filipe Condessa, Ph.D., of Bosch Center for Artificial Intelligence; and Tomas Larsson, M.Sc., and Zafer Cavdar, both of Dcipher Analytics.
    This research was supported by the Department of Pediatrics at the University of Pittsburgh School of Medicine.

  • Evolution-capable AI promotes green hydrogen production using more abundant chemical elements

    A NIMS research team has developed an AI technique capable of expediting the identification of materials with desirable characteristics. Using this technique, the team was able to discover high-performance water electrolyzer electrode materials free of platinum-group elements — substances previously thought to be indispensable in water electrolysis. These materials may be used to reduce the cost of large-scale production of green hydrogen — a next-generation energy source.
    Large-scale production of green hydrogen using water electrolyzers is a viable means of achieving carbon neutrality. Currently available water electrolyzers rely on expensive, scarce platinum-group elements as their main electrocatalyst components to accelerate the slow oxygen evolution reaction (OER) — the sluggish half-reaction that limits how efficiently water electrolysis can produce hydrogen. To address this issue, research is underway to develop platinum-group-free, cheaper OER electrocatalysts composed of relatively abundant chemical elements compatible with large-scale green hydrogen production. However, identifying the optimum chemical compositions of such electrocatalysts from a vast number of possible combinations had proven enormously costly, time-consuming and labor-intensive.
    This NIMS research team recently developed an AI technique capable of accurately predicting the compositions of materials with desirable characteristics by switching prediction models depending on the sizes of the datasets available for analysis. Using this AI, the team was able to identify new, effective OER electrocatalytic materials from about 3,000 candidate materials in just a single month. For reference, manual, comprehensive evaluation of these 3,000 materials was estimated to take almost six years. These newly discovered electrocatalytic materials can be synthesized using only relatively cheap and abundant metallic elements: manganese (Mn), iron (Fe), nickel (Ni), zinc (Zn) and silver (Ag). Experiments found that under certain conditions, these electrocatalytic materials exhibit superior electrochemical properties to ruthenium (Ru) oxides — the existing electrocatalytic materials with the highest OER activity known. In Earth’s crust, Ag is the least abundant element among those constituting the newly discovered electrocatalytic materials. However, its crustal abundance is nearly 100 times that of Ru, indicating that these new electrocatalytic materials can be synthesized in sufficiently large amounts to enable hydrogen mass-production using water electrolyzers.
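    The description of “switching prediction models depending on the sizes of the datasets” matches a common pattern in materials informatics: use a data-efficient model when measurements are scarce and a more flexible one when they are plentiful. The sketch below illustrates that pattern; the cutoff, the specific models, and the toy composition data are assumptions, not the NIMS team’s method.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.ensemble import RandomForestRegressor

    def pick_model(n_samples: int, small_data_cutoff: int = 200):
        """Illustrative model switch: Gaussian processes tend to work well on small
        datasets, while tree ensembles scale better to large ones. The cutoff is an
        arbitrary assumption for this sketch."""
        if n_samples < small_data_cutoff:
            return GaussianProcessRegressor()
        return RandomForestRegressor(n_estimators=200)

    # Hypothetical composition features (e.g. Mn/Fe/Ni/Zn/Ag fractions) and a
    # measured OER-activity value for each candidate material.
    rng = np.random.default_rng(0)
    X = rng.random((120, 5))
    y = X @ np.array([0.2, 0.1, 0.3, 0.15, 0.25]) + rng.normal(scale=0.01, size=120)

    model = pick_model(len(X))
    model.fit(X, y)
    print(type(model).__name__, model.predict(X[:3]))
    ```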
    These results demonstrated that this AI technique could be used to expand the limits of human intelligence and dramatically accelerate the search for higher-performance materials. Using the technique, the team plans to expedite its efforts to develop new materials — mainly water electrolyzer electrode materials — in order to improve the efficiency of various electrochemical devices contributing to carbon neutrality.
    This project was carried out by a NIMS research team led by Ken Sakaushi (Principal Researcher) and Ryo Tamura (Team Leader). This work was conducted in conjunction with another project entitled, “High throughput search for seawater electrolysis catalysts by combining automated experiments with data science” (grant number: JPMJMI21EA) under the JST-Mirai Program mission area, “low carbon society.”

  • AI outperforms humans in standardized tests of creative potential

    Score another one for artificial intelligence. In a recent study, 151 human participants were pitted against ChatGPT-4 in three tests designed to measure divergent thinking, which is considered to be an indicator of creative thought.
    Divergent thinking is characterized by the ability to generate a unique solution to a question that does not have one expected solution, such as “What is the best way to avoid talking about politics with my parents?” In the study, GPT-4 provided more original and elaborate answers than the human participants.
    The study, “The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks,” was published in Scientific Reports and authored by U of A Ph.D. students in psychological science Kent F. Hubert and Kim N. Awa, as well as Darya L. Zabelina, an assistant professor of psychological science at the U of A and director of the Mechanisms of Creative Cognition and Attention Lab.
    The three tests utilized were the Alternative Use Task, which asks participants to come up with creative uses for everyday objects like a rope or a fork; the Consequences Task, which invites participants to imagine possible outcomes of hypothetical situations, like “what if humans no longer needed sleep?”; and the Divergent Associations Task, which asks participants to generate 10 nouns that are as semantically distant as possible. For instance, there is not much semantic distance between “dog” and “cat” while there is a great deal between words like “cat” and “ontology.”
    Answers were evaluated for the number of responses, length of response and semantic difference between words. Ultimately, the authors found that “Overall, GPT-4 was more original and elaborate than humans on each of the divergent thinking tasks, even when controlling for fluency of responses. In other words, GPT-4 demonstrated higher creative potential across an entire battery of divergent thinking tasks.”
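    Semantic distance, the quantity behind the Divergent Associations Task scoring, is typically measured as the distance between word-embedding vectors: nearby vectors mean related words, distant vectors mean unrelated ones. The snippet below shows the computation with toy vectors standing in for a trained embedding model; the numbers are illustrative only.

    ```python
    import numpy as np

    def cosine_distance(u, v):
        """1 - cosine similarity; larger values mean more semantically distant."""
        return 1 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

    # Toy embedding vectors standing in for a real model such as GloVe or word2vec.
    embeddings = {
        "cat":      np.array([0.9, 0.8, 0.1]),
        "dog":      np.array([0.85, 0.82, 0.12]),
        "ontology": np.array([0.05, 0.2, 0.95]),
    }

    print(cosine_distance(embeddings["cat"], embeddings["dog"]))       # small: related
    print(cosine_distance(embeddings["cat"], embeddings["ontology"]))  # large: distant
    ```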
    This finding does come with some caveats. The authors state, “It is important to note that the measures used in this study are all measures of creative potential, but the involvement in creative activities or achievements are another aspect of measuring a person’s creativity.” The purpose of the study was to examine human-level creative potential, not necessarily people who may have established creative credentials.
    Hubert and Awa further note that “AI, unlike humans, does not have agency” and is “dependent on the assistance of a human user. Therefore, the creative potential of AI is in a constant state of stagnation unless prompted.”
    Also, the researchers did not evaluate the appropriateness of GPT-4’s responses. So while the AI may have provided more responses, and more original ones, human participants may have felt constrained by the need for their responses to be grounded in the real world.
    Awa also acknowledged that the human motivation to write elaborate answers may not have been high, and said there are additional questions about “how do you operationalize creativity? Can we really say that using these tests for humans is generalizable to different people? Is it assessing a broad array of creative thinking? So I think it has us critically examining what are the most popular measures of divergent thinking.”
    Whether the tests are perfect measures of human creative potential is not really the point. The point is that large language models are rapidly progressing and outperforming humans in ways they have not before. Whether they are a threat to replace human creativity remains to be seen. For now, the authors conclude: “Moving forward, future possibilities of AI acting as a tool of inspiration, as an aid in a person’s creative process or to overcome fixedness is promising.”

  • AI-enabled atomic robotic probe to advance quantum material manufacturing

    Scientists from the National University of Singapore (NUS) have pioneered a new methodology of fabricating carbon-based quantum materials at the atomic scale by integrating scanning probe microscopy techniques and deep neural networks. This breakthrough highlights the potential of implementing artificial intelligence (AI) at the sub-angstrom scale for enhanced control over atomic manufacturing, benefiting both fundamental research and future applications.
    Open-shell magnetic nanographenes represent a technologically appealing class of new carbon-based quantum materials, which host robust π-spin centres and non-trivial collective quantum magnetism. These properties are crucial for developing high-speed electronic devices at the molecular level and creating quantum bits, the building blocks of quantum computers. Despite significant advancements in the synthesis of these materials through on-surface synthesis, a type of solid-phase chemical reaction, achieving precise fabrication and tailoring of the properties of these quantum materials at the atomic level has remained a challenge.
    The research team, led by Associate Professor LU Jiong from the NUS Department of Chemistry and the Institute for Functional Intelligent Materials, together with Associate Professor ZHANG Chun from the NUS Department of Physics, has introduced the concept of the chemist-intuited atomic robotic probe (CARP), integrating probe chemistry knowledge and artificial intelligence to fabricate and characterise open-shell magnetic nanographenes at the single-molecule level. This allows for precise engineering of their π-electron topology and spin configurations in an automated manner, mirroring the capabilities of human chemists. The CARP concept utilises deep neural networks, trained on the experience and knowledge of surface science chemists, to autonomously synthesize open-shell magnetic nanographenes. It can also extract chemical information from the experimental training database, offering conjectures about unknown mechanisms. This serves as an essential supplement to theoretical simulations, contributing to a more comprehensive understanding of probe chemistry reaction mechanisms. The research work is a collaboration involving Associate Professor WANG Xiaonan from Tsinghua University in China.
    The research findings are published in the journal Nature Synthesis on 29 February 2024.
    The researchers tested the CARP concept on a complicated site-selective cyclodehydrogenation reaction used for producing chemical compounds with specific structural and electronic properties. Results show that the CARP framework can efficiently adopt the expert knowledge of the scientist and convert it into machine-understandable tasks, mimicking the workflow to perform single-molecule reactions that can manipulate the geometric shape and spin characteristic of the final chemical compound.
    In addition, the research team aims to harness the full potential of AI capabilities by extracting hidden insights from the database. They established a smart learning paradigm using a game theory-based approach to examine the framework’s learning outcomes. The analysis shows that CARP effectively captured important details that humans might miss, especially when it comes to making the cyclodehydrogenation reaction successful. This suggests that the CARP framework could be a valuable tool for gaining additional insights into the mechanisms of unexplored single-molecule reactions.
    Assoc Prof Lu said, “Our main goal is to work at the atomic level to create, study and control these quantum materials. We are striving to revolutionise the production of these materials on surfaces to enable more control over their outcomes, right down to the level of individual atoms and bonds.
    “Our goal in the near future is to extend the CARP framework further to adopt versatile on-surface probe chemistry reactions with scale and efficiency. This has the potential to transform conventional laboratory-based on-surface synthesis process towards on-chip fabrication for practical applications. Such transformation could play a pivotal role in accelerating the fundamental research of quantum materials and usher in a new era of intelligent atomic fabrication,” added Assoc Prof Lu.