More stories

  • AI nursing ethics: Viability of robots and artificial intelligence in nursing practice

    Recent progress in robotics and artificial intelligence (AI) promises a future in which these technologies will play a more prominent role in society. Current developments, such as autonomous vehicles, systems that generate original artwork, and chatbots capable of engaging in human-like conversation, highlight the immense possibilities these technologies hold. While these advances offer numerous benefits, they also raise fundamental questions. Characteristics such as creativity, communication, critical thinking, and learning — once considered unique to humans — are now being replicated by AI. So, can intelligent machines be considered ‘human’?
    In a step toward answering this question, Associate Professor Tomohide Ibuki from Tokyo University of Science, in collaboration with medical ethics researcher Dr. Eisuke Nakazawa from The University of Tokyo and nursing researcher Dr. Ai Ibuki from Kyoritsu Women’s University, recently explored whether robots and AI can be entrusted with nursing, a highly humane practice. Their work was made available online in the journal Nursing Ethics on 12 June 2023.
    “This study in applied ethics examines whether robotics, human engineering, and human intelligence technologies can and should replace humans in nursing tasks,” says Dr. Ibuki.
    Nurses demonstrate empathy and establish meaningful connections with their patients. This human touch is essential in fostering a sense of understanding, trust, and emotional support. The researchers examined whether the current advancements in robotics and AI can implement these human qualities by replicating the ethical concepts attributed to human nurses, including advocacy, accountability, cooperation, and caring.
    Advocacy in nursing involves speaking on behalf of patients to ensure that they receive the best possible medical care. This encompasses safeguarding patients from medical errors, providing treatment information, acknowledging the preferences of a patient, and acting as mediators between the hospital and the patient. In this regard, the researchers noted that AI can inform patients about medical errors and present treatment options, but they questioned its ability to truly understand and empathize with patients’ values and to effectively navigate human relationships as a mediator.
    The researchers also expressed concerns about holding robots accountable for their actions. They suggested the development of explainable AI, which would provide insights into the decision-making process of AI systems, improving accountability.
    The study further highlights that nurses are required to collaborate effectively with their colleagues and other healthcare professionals to ensure the best possible care for patients. As humans rely on visual cues to build trust and establish relationships, unfamiliarity with robots might lead to suboptimal interactions. Recognizing this issue, the researchers emphasized the importance of conducting further investigations to determine the appropriate appearance of robots for facilitating efficient cooperation with human medical staff.
    Lastly, while robots and AI have the potential to understand a patient’s emotions and provide appropriate care, the patient must also be willing to accept robots as care providers.
    Having considered these four ethical concepts in nursing, the researchers conclude that while robots may not fully replace human nurses anytime soon, the possibility cannot be dismissed. While robots and AI can potentially ease the shortage of nurses and improve treatment outcomes for patients, their deployment requires careful weighing of the ethical implications and of the impact on nursing practice.
    “While the present analysis does not preclude the possibility of implementing the ethical concepts of nursing in robots and AI in the future, it points out that there are several ethical questions. Further research could not only help solve them but also lead to new discoveries in ethics,” concludes Dr. Ibuki.
    Here’s hoping for such novel applications of robotics and AI to emerge soon!

  • Solving rare disease mysteries … and protecting privacy

    Macquarie University researchers have demonstrated a new way of linking personal records and protecting privacy. The first application is in identifying cases of rare genetic disorders. There are many other potential applications across society.
    The research will be presented at the 18th ACM ASIA Conference on Computer and Communications Security (ACM ASIACCS 2023) in Melbourne on 12 July.
    A five-year-old boy in the US has a mutation in a gene called GPX4, which he shares with just 10 other children in the world. The condition causes skeletal and central nervous system abnormalities. There are likely to be other children with the disorder recorded in hundreds of health and diagnostic databases worldwide, but we do not know of them, because their privacy is guarded for legal and commercial reasons.
    But what if records linked to the condition could be found and counted while still preserving privacy? Researchers from the Macquarie University Cyber Security Hub have developed a technique to achieve exactly that. The team includes Dr Dinusha Vatsalan and Professor Dali Kaafar of the University’s School of Computing and the boy’s father, software engineer Mr Sanath Kumar Ramesh, who is CEO of the OpenTreatments Foundation in Seattle, Washington.
    “I am very excited about this work,” says Mr Ramesh, whose foundation initiated and supported the project. “Knowing how many people have a condition underpins economic assumptions. If a condition was previously thought to have 15 patients and now we know, having pulled in data from diagnostic testing companies, that there are 100 patients, that increases market-size hugely.
    “It would have a significant economic impact. The valuation of a company working on the condition would go up. Product costing would go down. How insurance companies account for medical costs would change. Diagnostic companies would target [the condition] more. And you can start to do epidemiology more precisely.”
    Linking and counting data records might seem simple but, in reality, it involves many issues, says Professor Kaafar. First, because we are dealing with a rare disease, there is no centralised database, and the records are sprinkled across the world. “In this case in hundreds of databases,” he says. “And from a business perspective, data is precious, and the companies holding it are not necessarily interested in sharing.”

    Then, there are technical issues of matching data that is recorded, encoded, and stored in different ways, and accounting for individuals who are double-counted in and between different databases. And, on top of all that, are the privacy considerations. “We are dealing with very, very sensitive health data,” Professor Kaafar says.
    This personal data isn’t needed for a simple estimate of the number of patients and for epidemiological purposes. But, until now, it was needed to ensure that records are unique and can be linked.
    Dr Vatsalan and her colleagues used a technique known as Bloom filter encoding with differential privacy. They devised a suite of algorithms that deliberately introduce just enough noise into the data to blur precise details, so that they cannot be extracted from individual records, while still allowing records of the same disease condition to be matched and clustered.
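    The paper’s algorithms are not reproduced in this article, but the general recipe they build on can be sketched: each record is encoded as a Bloom filter over its character q-grams, random bit flips add differentially private noise, and the noisy encodings are compared with a set-similarity score to judge whether they refer to the same case. The Python sketch below illustrates that idea only; the function names, parameters and the simple randomized-response noise step are illustrative assumptions, not the authors’ implementation.

```python
import hashlib
import random

def bloom_encode(value, num_bits=256, num_hashes=4, q=2):
    """Encode a string field as a Bloom filter over its character q-grams."""
    bits = [0] * num_bits
    text = f"_{value.lower()}_"
    qgrams = {text[i:i + q] for i in range(len(text) - q + 1)}
    for gram in qgrams:
        for seed in range(num_hashes):
            digest = hashlib.sha256(f"{seed}:{gram}".encode()).hexdigest()
            bits[int(digest, 16) % num_bits] = 1
    return bits

def randomized_response(bits, flip_prob, rng):
    """Flip each bit independently with probability flip_prob.
    This is a standard randomized-response mechanism; the flip probability
    determines the differential-privacy budget."""
    return [b ^ 1 if rng.random() < flip_prob else b for b in bits]

def dice_similarity(a, b):
    """Dice coefficient between two bit vectors, used to decide whether two
    noisy encodings are likely to refer to the same underlying record."""
    overlap = sum(x & y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2 * overlap / total if total else 0.0

# Two databases encode the same (hypothetical) patient slightly differently.
rng = random.Random(42)
enc_a = randomized_response(bloom_encode("jon smith 2018 gpx4"), 0.02, rng)
enc_b = randomized_response(bloom_encode("john smith 2018 gpx4"), 0.02, rng)
print(f"similarity: {dice_similarity(enc_a, enc_b):.2f}")  # high score -> likely the same case
```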
    The accuracy of their technique was then evaluated using North Carolina voter registration data. The results showed that the method led to a negligible error rate with a guarantee of a very high level of privacy, even on highly corrupted datasets, and that it significantly outperforms existing methods.
    In addition to detecting and counting rare diseases, the research has many other applications: determining awareness of a new product in marketing, for instance, or, in cybersecurity, tracking the number of unique views of particular social media posts.

    But it is the application to rare diseases about which the Macquarie University researchers are passionate. “There is no better feeling for a researcher than seeing the technology they’ve been developing having a real impact and making the world a better place,” says Professor Kaafar. “In this case, it is so real and so important.”
    The OpenTreatments Foundation partly funded the research.
    “The Foundation wanted to make this project completely open source from the very beginning,” Dr Vatsalan adds. “So the algorithm we implemented is being published openly.”

  • Bees make decisions better and faster than we do, for the things that matter to them

    Honey bees have to balance effort, risk and reward, making rapid and accurate assessments of which flowers are most likely to offer food for their hive. Research published in the journal eLife today reveals how millions of years of evolution has engineered honey bees to make fast decisions and reduce risk.
    The study enhances our understanding of insect brains, how our own brains evolved, and how to design better robots.
    The paper presents a model of decision-making in bees and outlines the paths in their brains that enable fast decision-making. The study was led by Professor Andrew Barron from Macquarie University in Sydney, and Dr HaDi MaBouDi, Neville Dearden and Professor James Marshall from the University of Sheffield.
    “Decision-making is at the core of cognition,” says Professor Barron. “It’s the result of an evaluation of possible outcomes, and animal lives are full of decisions. A honey bee has a brain smaller than a sesame seed. And yet she can make decisions faster and more accurately than we can. A robot programmed to do a bee’s job would need the backup of a supercomputer.
    “Today’s autonomous robots largely work with the support of remote computing,” Professor Barron continues. “Drones are relatively brainless, they have to be in wireless communication with a data centre. This technology path will never allow a drone to truly explore Mars solo — NASA’s amazing rovers on Mars have travelled about 75 kilometres in years of exploration.”
    Bees need to work quickly and efficiently, finding nectar and returning it to the hive, while avoiding predators. They need to make decisions. Which flower will have nectar? While they’re flying, they’re only prone to aerial attack. When they land to feed, they’re vulnerable to spiders and other predators, some of which use camouflage to look like flowers.

    “We trained 20 bees to recognise five different coloured ‘flower disks’. Blue flowers always had sugar syrup,” says Dr MaBouDi. “Green flowers always had quinine [tonic water] with a bitter taste for bees. Other colours sometimes had glucose.”
    “Then we introduced each bee to a ‘garden’ where the ‘flowers’ just had distilled water. We filmed each bee then watched more than 40 hours of video, tracking the path of the bees and timing how long it took them to make a decision.
    “If the bees were confident that a flower would have food, then they quickly decided to land on it, taking an average of 0.6 seconds,” says Dr MaBouDi. “If they were confident that a flower would not have food, they made a decision just as quickly.”
    If they were unsure, then they took much more time — on average 1.4 seconds — and the time reflected the probability that a flower had food.
    The team then built a computer model from first principles aiming to replicate the bees’ decision-making process. They found the structure of their computer model looked very similar to the physical layout of a bee brain.
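    The published model itself is not reproduced here, but the qualitative pattern the researchers describe — fast choices when the evidence is strong, slow ones when it is ambiguous — is exactly what a simple evidence-accumulation (drift-diffusion-style) rule produces. The sketch below is an illustrative stand-in, not the authors’ model; the threshold, noise level and drift scaling are arbitrary assumptions, and the time units are not calibrated to real bees.

```python
import random

rng = random.Random(1)

def decide(p_reward, threshold=5.0, noise=1.0, dt=0.05):
    """Accumulate noisy evidence for 'land' (positive) vs 'avoid' (negative).
    Strong evidence in either direction hits a bound quickly; ambiguous
    evidence drifts slowly, so decision time grows with uncertainty."""
    drift = (p_reward - 0.5) * 10.0          # signed evidence rate
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return ("land" if x > 0 else "avoid"), t

for p in (0.9, 0.1, 0.5):                    # confident-yes, confident-no, ambiguous flower
    choice, t = decide(p)
    print(f"p(reward)={p}: {choice} after {t:.2f} time units")
```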
    “Our study has demonstrated complex autonomous decision-making with minimal neural circuitry,” says Professor Marshall. “Now we know how bees make such smart decisions, we are studying how they are so fast at gathering and sampling information. We think bees are using their flight movements to enhance their visual system to make them better at detecting the best flowers.”
    AI researchers can learn much from insects and other ‘simple’ animals. Millions of years of evolution has led to incredibly efficient brains with very low power requirements. The future of AI in industry will be inspired by biology, says Professor Marshall, who co-founded Opteran, a company that reverse-engineers insect brain algorithms to enable machines to move autonomously, like nature.

  • Unraveling the humanity in metacognitive ability: Distinguishing human metalearning from AI

    Monitoring and controlling one’s own learning process objectively is essential for improving one’s learning abilities. This ability, often referred to as “learning to learn” or “metacognition,” has been studied in educational psychology. Owing to the tight coupling between the higher meta-level and the lower object-level cognitive systems, conventional reductionist approaches have had difficulty uncovering the neural basis of metacognition. To overcome this limitation, the researchers employed a novel research approach in which they compared the metacognition of artificial intelligence (AI) to that of humans.
    First, they demonstrated that the metacognitive system of AI, which aims to maximize rewards and minimize punishments, can effectively regulate learning speed and memory retention in response to the environment and task. Second, they demonstrated metacognitive behavior in human motor learning, showing that monetary feedback given as a function of memory can either promote or suppress motor learning and memory retention. This constitutes the first empirical demonstration of the bi-directional regulation of implicit motor learning abilities by economic factors. Notably, while AI exhibited equal metacognitive abilities for reward and punishment, humans exhibited an asymmetric response to monetary gain and loss: they adjusted their memory retention in response to gain and their learning speed in response to loss. This asymmetric property may provide valuable insights into the neural mechanisms underlying human metacognition.
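    The study’s actual algorithm is not described in enough detail here to reproduce, but the core idea of a reward-maximising meta-level that tunes a learner’s speed and retention can be sketched with the classic single-state model of motor adaptation, in which a retention factor and a learning rate govern how memory decays and how strongly errors update it. Everything below — the grid search, the payoff schedule and the parameter ranges — is an illustrative assumption, not the authors’ model.

```python
import itertools

def run_adaptation(retention, learning_rate, perturbation=1.0, trials=100):
    """Single-state model of motor adaptation: memory decays by `retention`
    each trial and is updated from the experienced error by `learning_rate`."""
    x, errors = 0.0, []
    for _ in range(trials):
        error = perturbation - x
        errors.append(error)
        x = retention * x + learning_rate * error
    return errors

def monetary_outcome(errors, gain_per_hit=1.0, loss_per_miss=1.0, tol=0.2):
    """Toy payoff schedule: pay for trials with small error, penalise the rest."""
    return sum(gain_per_hit if abs(e) < tol else -loss_per_miss for e in errors)

def meta_level(gain_per_hit, loss_per_miss):
    """Meta-level controller: pick the (retention, learning-rate) pair that
    maximises the monetary outcome. A crude grid search stands in for the
    reward-maximising meta-learner described above."""
    retentions = [0.80, 0.90, 0.95, 0.99]
    rates = [0.05, 0.10, 0.20, 0.40]
    return max(itertools.product(retentions, rates),
               key=lambda ab: monetary_outcome(run_adaptation(*ab),
                                               gain_per_hit, loss_per_miss))

# The meta-level adjusts the low-level learner's parameters to the payoff schedule.
best_retention, best_rate = meta_level(gain_per_hit=1.0, loss_per_miss=1.0)
print(f"selected retention={best_retention}, learning rate={best_rate}")
```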
    Researchers anticipate that these findings could be effectively applied to enhance the learning abilities of individuals engaging in new sports or motor-related activities, such as post-stroke rehabilitation training.
    This work was supported by the Japan Society for the Promotion of Science KAKENHI (JP19H04977, JP19H05729, and JP22H00498). TS was supported by a JSPS Research Fellowship for Young Scientists and KAKENHI (JP19J20366). NS was supported by NIH R21 NS120274.

  • Organic electronics: Sustainability during the entire lifecycle

    Organic electronics can make a decisive contribution to decarbonization and, at the same time, help to cut the consumption of rare and valuable raw materials. To do so, it is not only necessary to further develop manufacturing processes, but also to devise technical solutions for recycling as early as the laboratory phase. Materials scientists from Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), in conjunction with researchers from the UK and USA, are now promoting this circular strategy.
    Organic electronic components, such as solar modules, have several exceptional features. They can be applied in extremely thin layers on flexible carrier materials and therefore have a wider range of applications than crystalline materials. Since their photoactive substances are carbon based, they also contribute to cutting the consumption of rare, expensive and sometimes toxic materials such as iridium, platinum and silver.
    Organic electronic components are experiencing major growth in the field of OLED technologies in particular, and above all for television or computer screens. “On the one hand, this is progress, but on the other, it causes some problems,” says Prof. Dr. Christoph Brabec, Chair of Materials Science (Materials in Electronics and Energy Technology) at FAU and Director of the Helmholtz Institute Erlangen-Nürnberg for Renewable Energy (HI ERN). As a materials scientist, Brabec sees the danger of permanently incorporating environmentally friendly technology into a device architecture that is not sustainable on the whole. This not only affects electronic devices, but also organic sensors in textiles that have an extremely short operating life. Brabec: “Applied research in particular must now set the course to ensure that electronic components and all their individual parts leave an ecological footprint that is as small as possible during their entire lifecycle.”
    More efficient synthesis and more robust materials
    The further development of organic electronics themselves is elementary here, since new materials and more efficient manufacturing processes lead to the reduction of outlay and energy during production. “Compared with simple polymers, the manufacturing process for the photoactive layer requires significantly higher amounts of energy as it is deposited in a vacuum at high temperatures,” explains Brabec. The researchers are therefore proposing cheaper and more environmentally friendly processes, such as deposition from water-based solutions and printing using inkjet processes. Brabec: “One major challenge is developing functional materials that can be processed without toxic solvents that are harmful to the environment.” In the case of OLED screens, inkjet printing also offers the possibility of replacing precious metals such as iridium and platinum with organic materials.
    In addition to their efficiency, the operating stability of materials is decisive. Complex encapsulation is required to protect the vacuum-deposited carbon layers of organic solar modules, and this encapsulation can account for up to two thirds of their overall weight. More robust combinations of materials could contribute to significant savings in materials, weight and energy.
    Planning the recycling process in the laboratory
    To make a realistic evaluation of the environmental footprint of organic electronics, the entire product lifecycle has to be considered. In terms of output, organic photovoltaic systems are still lagging behind conventional silicon modules, but 30% less CO2 is emitted during the manufacturing process. Aiming for maximum efficiency levels is not everything, says Brabec: “18 percent could make more sense environmentally than 20, if it’s possible to manufacture the photoactive material in five steps instead of eight.”
    In addition, the shorter operating life of organic modules is also relative if you look more closely. Although photovoltaic modules based on silicon last longer, they are very difficult to recycle. “Biocompatibility and biodegradability will increasingly become important criteria, both for product development as well as for packaging design,” says Christoph Brabec. “We really must start taking recycling into consideration in the laboratory.” This means, for example, using substrates that can either be easily recycled or that are as biodegradable as the active substances. Using what are known as multilayer designs as early as the product design phase could ensure that the various materials can easily be separated and recycled at the end of the product lifecycle. Brabec: “This cradle-to-cradle approach will be a decisive prerequisite for establishing organic electronics as an important component in the transition to renewable energy.”

  • AI tool decodes brain cancer’s genome during surgery

    Scientists have designed an AI tool that can rapidly decode a brain tumor’s DNA to determine its molecular identity during surgery — critical information that under the current approach can take a few days, and up to a few weeks, to obtain.
    Knowing a tumor’s molecular type enables neurosurgeons to make decisions such as how much brain tissue to remove and whether to place tumor-killing drugs directly into the brain — while the patient is still on the operating table.
    A report on the work, led by Harvard Medical School researchers, is published July 7 in the journal Med.
    Accurate molecular diagnosis — which details DNA alterations in a cell — during surgery can help a neurosurgeon decide how much brain tissue to remove. Removing too much when the tumor is less aggressive can affect a patient’s neurologic and cognitive function. Likewise, removing too little when the tumor is highly aggressive may leave behind malignant tissue that can grow and spread quickly.
    “Right now, even state-of-the-art clinical practice cannot profile tumors molecularly during surgery. Our tool overcomes this challenge by extracting thus-far untapped biomedical signals from frozen pathology slides,” said study senior author Kun-Hsing Yu, assistant professor of biomedical informatics in the Blavatnik Institute at HMS.
    Knowing a tumor’s molecular identity during surgery is also valuable because certain tumors benefit from on-the-spot treatment with drug-coated wafers placed directly into the brain at the time of the operation, Yu said.

    “The ability to determine intraoperative molecular diagnosis in real time, during surgery, can propel the development of real-time precision oncology,” Yu added.
    The standard intraoperative diagnostic approach used now involves taking brain tissue, freezing it, and examining it under a microscope. A major drawback is that freezing the tissue tends to alter the appearance of cells under a microscope and can interfere with the accuracy of clinical evaluation. Furthermore, the human eye, even when using potent microscopes, cannot reliably detect subtle genomic variations on a slide.
    The new AI approach overcomes these challenges.
    The tool, called CHARM (Cryosection Histopathology Assessment and Review Machine), is freely available to other researchers. It still has to be clinically validated through testing in real-world settings and cleared by the FDA before deployment in hospitals, the research team said.
    Cracking cancer’s molecular code
    Recent advances in genomics have allowed pathologists to differentiate the molecular signatures — and the behaviors that such signatures portend — across various types of brain cancer as well as within specific types of brain cancer. For example, glioma — the most aggressive brain tumor and the most common form of brain cancer — has three main subvariants that carry different molecular markers and have different propensities for growth and spread.

    The new tool’s ability to expedite molecular diagnosis could be particularly valuable in areas with limited access to technology to perform rapid cancer genetic sequencing.
    Beyond the decisions made during surgery, knowledge of a tumor’s molecular type provides clues about its aggressiveness, behavior, and likely response to various treatments. Such knowledge can inform post-operative decisions.
    Furthermore, the new tool enables during-surgery diagnoses aligned with the World Health Organization’s recently updated classification system for diagnosing and grading the severity of gliomas, which calls for such diagnoses to be made based on a tumor’s genomic profile.
    Training CHARM
    CHARM was developed using 2,334 brain tumor samples from 1,524 people with glioma from three different patient populations. When tested on a never-before-seen set of brain samples, the tool distinguished tumors with specific molecular mutations at 93 percent accuracy and successfully classified three major types of gliomas with distinct molecular features that carry different prognoses and respond differently to treatments.
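    The internals of CHARM are not described in this article; purely as a rough illustration of what a slide-level molecular classifier of this kind typically involves, the sketch below scores image tiles cut from a frozen-section slide with a small convolutional network and averages the tile scores into a slide-level call. The subtype labels, tile size, backbone and aggregation rule are assumptions for illustration, not details of CHARM.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Illustrative class labels (three major adult glioma groups); not CHARM's label set.
SUBTYPES = ["IDH-mutant, 1p/19q-codeleted", "IDH-mutant astrocytoma", "IDH-wildtype"]

class TileClassifier(nn.Module):
    """Scores individual image tiles cut from a cryosection slide."""
    def __init__(self, num_classes=len(SUBTYPES)):
        super().__init__()
        self.backbone = resnet18(weights=None)   # would be trained on labelled tiles
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, tiles):                     # tiles: (n_tiles, 3, 224, 224)
        return self.backbone(tiles)               # per-tile logits

def slide_prediction(model, tiles):
    """Average per-tile class probabilities into a slide-level molecular call."""
    with torch.no_grad():
        probs = torch.softmax(model(tiles), dim=1).mean(dim=0)
    return SUBTYPES[int(probs.argmax())], probs

# Random tensors stand in for tiles cut from a frozen-section slide.
model = TileClassifier().eval()
fake_tiles = torch.rand(16, 3, 224, 224)
label, probs = slide_prediction(model, fake_tiles)
print(label, [round(float(p), 3) for p in probs])
```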
    Going a step further, the tool successfully captured visual characteristics of the tissue surrounding the malignant cells. It was capable of spotting telltale areas with greater cellular density and more cell death within samples, both of which signal more aggressive glioma types.
    The tool was also able to pinpoint clinically important molecular alterations in a subset of low-grade gliomas, a subtype of glioma that is less aggressive and therefore less likely to invade surrounding tissue. Each of these changes also signals different propensity for growth, spread, and treatment response.
    The tool further connected the appearance of the cells — the shape of their nuclei, the presence of edema around the cells — with the molecular profile of the tumor. This means that the algorithm can pinpoint how a cell’s appearance relates to the molecular type of a tumor.
    This ability to assess the broader context around the image renders the model more accurate and closer to how a human pathologist would visually assess a tumor sample, Yu said.
    The researchers say that while the model was trained and tested on glioma samples, it could be successfully retrained to identify other brain cancer subtypes.
    Scientists have already designed AI models to profile other types of cancer — colon, lung, breast — but gliomas have remained particularly challenging due to their molecular complexity and huge variation in tumor cells’ shape and appearance.
    The CHARM tool would have to be retrained periodically to reflect new disease classifications as they emerge from new knowledge, Yu said.
    “Just like human clinicians who must engage in ongoing education and training, AI tools must keep up with the latest knowledge to remain at peak performance.”
    Authorship, funding, disclosures
    Coinvestigators included MacLean P. Nasrallah, Junhan Zhao, Cheng Che Tsai, David Meredith, Eliana Marostica, Keith L. Ligon, and Jeffrey A. Golden.
    This work was supported in part by the National Institute of General Medical Sciences grant R35GM142879, the Google Research Scholar Award, the Blavatnik Center for Computational Biomedicine Award, the Partners Innovation Discovery Grant, and the Schlager Family Award for Early-Stage Digital Health Innovations.

  • Board games are boosting math ability in young children

    Board games based on numbers, like Monopoly, Othello and Chutes and Ladders, make young children better at math, according to a comprehensive review of research published on the topic over the last 23 years.
    Board games are already known to enhance learning and development including reading and literacy.
    Now this new study, published in the peer-reviewed journal Early Years, finds that, for three- to nine-year-olds, the format of number-based board games helps to improve counting, addition, and the ability to recognize whether a number is higher or lower than another.
    The researchers say children benefit from programs — or interventions — where they play board games a few times a week supervised by a teacher or another trained adult.
    “Board games enhance mathematical abilities for young children,” says lead author Dr. Jaime Balladares, from Pontificia Universidad Católica de Chile, in Santiago, Chile.
    “Using board games can be considered a strategy with potential effects on basic and complex math skills.

    “Board games can easily be adapted to include learning objectives related to mathematical skills or other domains.”
    Games where players take turns to move pieces around a board differ from those involving specific skills or gambling.
    Board game rules are fixed, which limits a player’s activities, and the moves on the board usually determine the overall playing situation.
    However, preschools rarely use board games. This study aimed to compile the available evidence of their effects on children.
    The researchers set out to investigate the scale of the effects of physical board games in promoting learning in young children.

    They based their findings on a review of 19 studies published from 2000 onwards involving children aged from three to nine years. All except one study focused on the relationship between board games and mathematical skills.
    All children participating in the studies received special board game sessions which took place on average twice a week for 20 minutes over one-and-a-half months. Teachers, therapists, or parents were among the adults who led these sessions.
    In some of the 19 studies, children were assigned either to a number-based board game or to a board game that did not focus on numeracy skills. In others, all children participated in number board games but were allocated different types, e.g. dominoes.
    All children were assessed on their math performance before and after the intervention sessions which were designed to encourage skills such as counting out loud.
    The authors rated success according to four categories, including basic numeric competency, such as the ability to name numbers, and basic number comprehension, e.g. ‘nine is greater than three’.
    The other categories were deepened number comprehension — where a child can accurately add and subtract — and interest in mathematics.
    In some cases, parents attended a training session to learn arithmetic that they could then use in the games.
    Results showed that math skills improved significantly after the sessions among children for more than half (52%) of the tasks analyzed.
    In nearly a third (32%) of cases, children in the intervention groups gained better results than those who did not take part in the board game intervention.
    The results also show that, among the studies analyzed to date, board games targeting language or literacy, while implemented, did not include a scientific evaluation (i.e. comparing control and intervention groups, or pre- and post-intervention measures) of their impact on children.
    Designing and implementing board games along with scientific procedures to evaluate their efficacy, therefore, are “urgent tasks to develop in the next few years,” Dr. Balladares, who was previously at UCL, argues.
    And this, now, is the next project they are investigating.
    Dr. Balladares concludes: “Future studies should be designed to explore the effects that these games could have on other cognitive and developmental skills.
    “An interesting space for the development of intervention and assessment of board games should open up in the next few years, given the complexity of games and the need to design more and better games for educational purposes.”

  • Machine learning takes materials modeling into new era

    The arrangement of electrons in matter, known as the electronic structure, plays a crucial role in both fundamental and applied research, such as drug design and energy storage. However, the lack of a simulation technique that offers both high fidelity and scalability across different time and length scales has long been a roadblock for the progress of these technologies. Researchers from the Center for Advanced Systems Understanding (CASUS) at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) in Görlitz, Germany, and Sandia National Laboratories in Albuquerque, New Mexico, USA, have now pioneered a machine learning-based simulation method (npj Computational Materials) that supersedes traditional electronic structure simulation techniques. Their Materials Learning Algorithms (MALA) software stack enables access to previously unattainable length scales.
    Electrons are elementary particles of fundamental importance. Their quantum mechanical interactions with one another and with atomic nuclei give rise to a multitude of phenomena observed in chemistry and materials science. Understanding and controlling the electronic structure of matter provides insights into the reactivity of molecules, the structure and energy transport within planets, and the mechanisms of material failure.
    Scientific challenges are increasingly being addressed through computational modeling and simulation, leveraging the capabilities of high-performance computing. However, a significant obstacle to achieving realistic simulations with quantum precision is the lack of a predictive modeling technique that combines high accuracy with scalability across different length and time scales. Classical atomistic simulation methods can handle large and complex systems, but their omission of quantum electronic structure restricts their applicability. Conversely, simulation methods which do not rely on assumptions such as empirical modeling and parameter fitting (first principles methods) provide high fidelity but are computationally demanding. For instance, density functional theory (DFT), a widely used first principles method, exhibits cubic scaling with system size, thus restricting its predictive capabilities to small scales.
    Hybrid approach based on deep learning
    The team of researchers now presented a novel simulation method called the Materials Learning Algorithms (MALA) software stack. In computer science, a software stack is a collection of algorithms and software components that are combined to create a software application for solving a particular problem. Lenz Fiedler, a Ph.D. student and key developer of MALA at CASUS, explains, “MALA integrates machine learning with physics-based approaches to predict the electronic structure of materials. It employs a hybrid approach, utilizing an established machine learning method called deep learning to accurately predict local quantities, complemented by physics algorithms for computing global quantities of interest.”
    The MALA software stack takes the arrangement of atoms in space as input and generates fingerprints known as bispectrum components, which encode the spatial arrangement of atoms around a Cartesian grid point. The machine learning model in MALA is trained to predict the electronic structure based on this atomic neighborhood. A significant advantage of MALA is that its machine learning model is independent of the system size, so it can be trained on data from small systems and deployed at any scale.
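    The MALA code itself is not reproduced here, but the pipeline described above can be sketched: a small network is evaluated independently at every grid point, mapping that point’s bispectrum-style descriptors to a local density of states (LDOS), and simple physics-style post-processing assembles the per-point predictions into global quantities. The descriptor length, network architecture, energy grid and zero-temperature occupation below are illustrative assumptions, not MALA’s actual settings.

```python
import torch
import torch.nn as nn

N_DESCRIPTORS = 91      # e.g. bispectrum components per grid point (illustrative)
N_ENERGIES = 250        # energy levels on which the LDOS is resolved (illustrative)

ldos_net = nn.Sequential(             # maps one grid point's descriptors -> its LDOS
    nn.Linear(N_DESCRIPTORS, 400), nn.LeakyReLU(),
    nn.Linear(400, 400), nn.LeakyReLU(),
    nn.Linear(400, N_ENERGIES),
)

def predict_global_quantities(descriptors, grid_volume_per_point, energies, fermi_energy):
    """descriptors: (n_grid_points, N_DESCRIPTORS) local fingerprints.
    The network is evaluated independently at each grid point, which is what
    lets the approach scale with system size and parallelise across accelerators."""
    with torch.no_grad():
        ldos = ldos_net(descriptors)                              # (points, energies)
    dos = ldos.sum(dim=0) * grid_volume_per_point                 # spatial integration
    occupied = energies <= fermi_energy                           # zero-temperature occupation
    density = torch.trapezoid(ldos[:, occupied], energies[occupied], dim=1)
    band_energy = torch.trapezoid(dos[occupied] * energies[occupied], energies[occupied])
    return density, dos, band_energy

# Stand-in data: 10,000 grid points with random "descriptors".
energies = torch.linspace(-10.0, 10.0, N_ENERGIES)
descriptors = torch.rand(10_000, N_DESCRIPTORS)
density, dos, band_energy = predict_global_quantities(descriptors, 1e-3, energies, fermi_energy=0.0)
print(density.shape, dos.shape, float(band_energy))
```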
    In their publication, the team of researchers showcased the remarkable effectiveness of this strategy. They achieved a speedup of over 1,000 times for smaller system sizes, consisting of up to a few thousand atoms, compared to conventional algorithms. Furthermore, the team demonstrated MALA’s capability to accurately perform electronic structure calculations at a large scale, involving over 100,000 atoms. Notably, this accomplishment was achieved with modest computational effort, revealing the limitations of conventional DFT codes.
    Attila Cangi, the Acting Department Head of Matter under Extreme Conditions at CASUS, explains: “As the system size increases and more atoms are involved, DFT calculations become impractical, whereas MALA’s speed advantage continues to grow. The key breakthrough of MALA lies in its capability to operate on local atomic environments, enabling accurate numerical predictions that are minimally affected by system size. This groundbreaking achievement opens up computational possibilities that were once considered unattainable.”
    Boost for applied research expected
    Cangi aims to push the boundaries of electronic structure calculations by leveraging machine learning: “We anticipate that MALA will spark a transformation in electronic structure calculations, as we now have a method to simulate significantly larger systems at an unprecedented speed. In the future, researchers will be able to address a broad range of societal challenges based on a significantly improved baseline, including developing new vaccines and novel materials for energy storage, conducting large-scale simulations of semiconductor devices, studying material defects, and exploring chemical reactions for converting the atmospheric greenhouse gas carbon dioxide into climate-friendly minerals.”
    Furthermore, MALA’s approach is particularly suited for high-performance computing (HPC). As the system size grows, MALA enables independent processing on the computational grid it utilizes, effectively leveraging HPC resources, particularly graphical processing units. Siva Rajamanickam, a staff scientist and expert in parallel computing at the Sandia National Laboratories, explains, “MALA’s algorithm for electronic structure calculations maps well to modern HPC systems with distributed accelerators. The capability to decompose work and execute in parallel different grid points across different accelerators makes MALA an ideal match for scalable machine learning on HPC resources, leading to unparalleled speed and efficiency in electronic structure calculations.”
    Apart from the developing partners HZDR and Sandia National Laboratories, MALA is already employed by institutions and companies such as the Georgia Institute of Technology, the North Carolina A&T State University, SambaNova Systems Inc., and Nvidia Corp.