More stories

  •

    Artificial intelligence unravels mysteries of polycrystalline materials

    Researchers at Nagoya University in Japan have used artificial intelligence to discover a new method for understanding small defects, called dislocations, in polycrystalline materials. These materials are widely used in information equipment, solar cells, and electronic devices, and such defects can reduce the efficiency of those devices. The findings were published in the journal Advanced Materials.
    Almost every device we use in modern life has a polycrystalline component, from smartphones and computers to the metals and ceramics in cars. Despite this ubiquity, polycrystalline materials are difficult to exploit because of their complex structures. Along with its composition, the performance of a polycrystalline material is affected by its complex microstructure, dislocations, and impurities.
    A major problem for using polycrystals in industry is the formation of tiny crystal defects caused by stress and temperature changes. These are known as dislocations and can disrupt the regular arrangement of atoms in the lattice, affecting electrical conduction and overall performance. To reduce the chances of failure in devices that use polycrystalline materials, it is important to understand the formation of these dislocations.
    A team of researchers at Nagoya University, led by Professor Noritaka Usami and including Lecturer Tatsuya Yokoi, Associate Professor Hiroaki Kudo, and collaborators, used a new AI to analyze image data of polycrystalline silicon, a material widely used in solar panels. The AI created a 3D model in virtual space, helping the team identify the areas where dislocation clusters were degrading the material’s performance.
    After identifying the areas of the dislocation clusters, the researchers used electron microscopy and theoretical calculations to understand how these areas formed. They revealed stress distribution in the crystal lattice and found staircase-like structures at the boundaries between the crystal grains. These structures appear to cause dislocations during crystal growth. “We found a special nanostructure in the crystals associated with dislocations in polycrystalline structures,” Usami said.
    Beyond its practical applications, this study may also have important implications for the science of crystal growth and deformation. The Haasen-Alexander-Sumino (HAS) model is an influential theoretical framework used to understand the behavior of dislocations in materials. Usami believes the team has discovered dislocations that the HAS model missed.
    Another surprise soon followed: when the team calculated the arrangement of the atoms in these structures, they found unexpectedly large tensile bond strains along the edges of the staircase-like structures, which triggered dislocation generation.
    As explained by Usami, “As experts who have been studying this for years, we were amazed and excited to finally see proof of the presence of dislocations in these structures. It suggests that we can control the formation of dislocation clusters by controlling the direction in which the boundary spreads.”
    “By extracting and analyzing the nanoscale regions through polycrystalline materials informatics, which combines experiment, theory, and AI, we made this clarification of phenomena in complex polycrystalline materials possible for the first time,” Usami continued. “This research illuminates the path towards establishing universal guidelines for high-performance materials and is expected to contribute to the creation of innovative polycrystalline materials. The potential impact of this research extends beyond solar cells to everything from ceramics to semiconductors. Polycrystalline materials are widely used in society, and the improved performance of these materials has the potential to revolutionize society.”

  •

    Giving video games this Christmas? New research underlines need to be aware of loot box risks

    Recent controversy has surrounded the concept of loot boxes — the purchasable video game features that offer randomised rewards but are not governed by gambling laws.
    Now research led by the University of Plymouth has shown that at-risk individuals, such as those with known gaming and gambling problems, are more likely to engage with loot boxes than those without.
    The study is one of the largest, most complex and robustly designed surveys yet conducted on loot boxes, and has prompted experts to reiterate the call for stricter enforcement around them.
    Existing studies have shown that loot boxes are structurally and psychologically akin to gambling but, despite the evidence, they remain accessible to children.
    The new findings, which add to the evidence base linking loot boxes to gambling, are published in the journal Royal Society Open Science.
    The surveys captured the thoughts of 1,495 loot box purchasing gamers, and 1,223 gamers who purchase other, non-randomised game content.
    They highlighted that taking the risk of opening a loot box was associated with people who had experienced problem gambling, problem gaming, impulsivity and gambling cognitions — including the perceived inability to stop buying them.

    It also showed that any financial or psychological impacts from loot box purchasing are liable to disproportionately affect various at-risk cohorts, such as those who have previously had issues with gambling.
    Lead author Dr James Close, Lecturer in Clinical Education at the University of Plymouth, said: “Loot boxes are paid-for rewards in video games, but the gamer does not know what’s inside. With the risk/reward mindset and behaviours associated with accessing loot boxes, we know there are similarities with gambling, and these new papers provide a longer, more robust description exploring the complexities of the issue.
    “Among the findings, the work shows that loot box use is driven by beliefs such as ‘I’ll win in a minute’ — which really echoes the psychology we see in gambling. The studies contribute to a substantial body of evidence establishing that, for some, loot boxes can lead to financial and psychological harm. However, it’s not about making loot boxes illegal, but ensuring that their impact is understood as akin to gambling, and that policies are in place to ensure consumers are protected from these harms.”
    The research was funded by GambleAware, supported by the National Institute for Health and Care Research (NIHR) Applied Research Collaboration South West Peninsula (PenARC), and conducted alongside the University of Wolverhampton and other collaborators.
    An earlier paper from this study also found evidence that under-18s who engaged with loot boxes progressed onto other forms of gambling. The overall findings are consistent with the view that policy action on loot boxes would help minimise harm in future.
    Co-lead Dr Stuart Spicer, PenARC Research Fellow in the University of Plymouth’s Peninsula Medical School, added: “We know loot boxes have attracted a lot of controversy and the UK government has adopted an approach of industry self-regulation. However, industry compliance to safety features is currently unsatisfactory, and there is a pressing need to see tangible results. Our research adds to the evidence base that they pose a problem for at-risk groups, such as people with dysfunctional thoughts about gambling, lower income, and problematic levels of video gaming. We really hope that these findings will add to the evidence base showing the link between loot boxes, gambling, and other risky behaviours, and that there will be more of a push to take action and minimise harm.”

  •

    Unveiling molecular origami: A breakthrough in dynamic materials

    Origami, traditionally associated with paper folding, has transcended its craft origins to influence a diverse range of fields, including art, science, engineering, and architecture. Recently, origami principles have extended to technology, with applications spanning solar cells to biomedical devices. While origami-inspired materials have been explored at various scales, the challenge of creating molecular materials based on origami tessellations has remained unmet. Addressing this challenge, a team of researchers, led by Professor Wonyoung Choe in the Department of Chemistry at Ulsan National Institute of Science and Technology (UNIST), South Korea, has unveiled a remarkable breakthrough in the form of a two-dimensional (2D) metal-organic framework (MOF) that showcases unprecedented origami-like movement at the molecular level.
    Metal-Organic Frameworks (MOFs) have long been recognized for their structural flexibility, making them an ideal platform for origami tessellation-based materials. However, their application in this context is still in its early stages. Through the development of a 2D MOF based on the origami tessellation, the research team has achieved a significant milestone. The researchers utilized temperature-dependent synchrotron single-crystal X-ray diffraction to demonstrate the origami-like folding behavior of the 2D MOF in response to temperature changes. This behavior showcases negative thermal expansion and reveals a unique origami tessellation pattern, previously unseen at the molecular level.
    The key to this breakthrough lies in the choice of MOFs, which incorporate flexible structural building blocks. This inherent flexibility enables the origami-like movement observed in the 2D MOF. The study highlights the deformable net topology of the materials. Additionally, the role of solvents in maintaining the packing between 2D frameworks in MOFs is emphasized, as it directly affects the degree of folding.
    “This groundbreaking research opens new avenues for origami-inspired materials at the molecular level, introducing the concept of origamic MOFs. The findings not only contribute to the understanding of dynamic behavior in MOFs, but also offer potential applications in mechanical metamaterials,” noted Professor Wonyoung Choe. He further highlighted the potential of molecular-level control over origami movement as a platform for designing advanced materials with unique mechanical properties. The study also suggests exciting possibilities for tailoring origamic MOFs for specific applications, including advancements in molecular quantum computing.
    The findings of this research were published in Nature Communications, a sister journal to Nature, on December 1, 2023. This study was supported by the National Research Foundation (NRF) of Korea via the Mid-Career Researcher Program, Hydrogen Energy Innovation Technology Development Project, Science Research Center (SRC), and Global Ph.D. Fellowship (GPF), as well as the Korea Environment Industry & Technology Institute (KEITI) through the Public Technology Program based on Environmental Policy Program, funded by the Korea Ministry of Environment (MOE).

  •

    Clinicians could be fooled by biased AI, despite explanations

    AI models in health care are a double-edged sword, with models improving diagnostic decisions for some demographics, but worsening decisions for others when the model has absorbed biased medical data.
    Given the very real life and death risks of clinical decision-making, researchers and policymakers are taking steps to ensure AI models are safe, secure and trustworthy — and that their use will lead to improved outcomes.
    The U.S. Food and Drug Administration has oversight of software powered by AI and machine learning used in health care and has issued guidance for developers. This includes a call to ensure the logic used by AI models is transparent or explainable so that clinicians can review the underlying reasoning.
    However, a new study in JAMA finds that even with provided AI explanations, clinicians can be fooled by biased AI models.
    “The problem is that the clinician has to understand what the explanation is communicating and the explanation itself,” said first author Sarah Jabbour, a Ph.D. candidate in computer science and engineering at the College of Engineering at the University of Michigan.
    The U-M team studied AI models and AI explanations in patients with acute respiratory failure.
    “Determining why a patient has respiratory failure can be difficult. In our study, we found clinicians’ baseline diagnostic accuracy to be around 73%,” said Michael Sjoding, M.D., associate professor of internal medicine at the U-M Medical School, a co-senior author on the study.

    “During the normal diagnostic process, we think about a patient’s history, lab tests and imaging results, and try to synthesize this information and come up with a diagnosis. It makes sense that a model could help improve accuracy.”
    Jabbour, Sjoding, and co-senior author Jenna Wiens, Ph.D., associate professor of computer science and engineering, together with their multidisciplinary team, designed a study to evaluate the diagnostic accuracy of 457 hospitalist physicians, nurse practitioners and physician assistants with and without assistance from an AI model.
    Each clinician was asked to make treatment recommendations based on their diagnoses. Half were randomized to receive an AI explanation with the AI model decision, while the other half received only the AI decision with no explanation.
    Clinicians were then given real clinical vignettes of patients with respiratory failure, as well as a rating from the AI model on whether the patient had pneumonia, heart failure or COPD.
    In the half of participants who were randomized to see explanations, the clinician was provided a heatmap, or visual representation, of where the AI model was looking in the chest radiograph, which served as the basis for the diagnosis.
    The team found that clinicians who were presented with an AI model trained to make reasonably accurate predictions, but without explanations, had their own accuracy increase by 2.9 percentage points. When provided an explanation, their accuracy increased by 4.4 percentage points.

    However, to test whether an explanation could enable clinicians to recognize when an AI model is clearly biased or incorrect, the team also presented clinicians with models intentionally trained to be biased — for example, a model predicting a high likelihood of pneumonia if the patient was 80 years old or older.
    “AI models are susceptible to shortcuts, or spurious correlations in the training data. Given a dataset in which women are underdiagnosed with heart failure, the model could pick up on an association between being female and being at lower risk for heart failure,” explained Wiens.
    “If clinicians then rely on such a model, it could amplify existing bias. If explanations could help clinicians identify incorrect model reasoning this could help mitigate the risks.”
    When clinicians were shown the biased AI model, however, their accuracy decreased by 11.3 percentage points, and explanations that explicitly highlighted that the AI was looking at non-relevant information (such as low bone density in patients over 80 years old) did not help them recover from this serious decline in performance.
    The observed decline in performance aligns with previous studies that find users may be deceived by models, noted the team.
    “There’s still a lot to be done to develop better explanation tools so that we can better communicate to clinicians why a model is making specific decisions in a way that they can understand. It’s going to take a lot of discussion with experts across disciplines,” Jabbour said.
    The team hopes this study will spur more research into the safe implementation of AI-based models in health care across all populations, as well as medical education around AI and bias.

  •

    Study assesses GPT-4’s potential to perpetuate racial, gender biases in clinical decision making

    Large language models (LLMs) like ChatGPT and GPT-4 have the potential to assist in clinical practice to automate administrative tasks, draft clinical notes, communicate with patients, and even support clinical decision making. However, preliminary studies suggest the models can encode and perpetuate social biases that could adversely affect historically marginalized groups. A new study by investigators from Brigham and Women’s Hospital, a founding member of the Mass General Brigham healthcare system, evaluated the tendency of GPT-4 to encode and exhibit racial and gender biases in four clinical decision support roles. Their results are published in The Lancet Digital Health.
    “While most of the focus is on using LLMs for documentation or administrative tasks, there is also excitement about the potential to use LLMs to support clinical decision making,” said corresponding author Emily Alsentzer, PhD, a postdoctoral researcher in the Division of General Internal Medicine at Brigham and Women’s Hospital. “We wanted to systematically assess whether GPT-4 encodes racial and gender biases that impact its ability to support clinical decision making.”
    Alsentzer and colleagues tested four applications of GPT-4 using the Azure OpenAI platform. First, they prompted GPT-4 to generate patient vignettes that can be used in medical education. Next, they tested GPT-4’s ability to correctly develop a differential diagnosis and treatment plan for 19 different patient cases from NEJM Healer, a medical education tool that presents challenging clinical cases to medical trainees. Finally, they assessed how GPT-4 makes inferences about a patient’s clinical presentation using eight case vignettes that were originally generated to measure implicit bias. For each application, the authors assessed whether GPT-4’s outputs were biased by race or gender.
    For the medical education task, the researchers constructed ten prompts that required GPT-4 to generate a patient presentation for a supplied diagnosis. They ran each prompt 100 times and found that GPT-4 exaggerated known differences in disease prevalence by demographic group.
    “One striking example is when GPT-4 is prompted to generate a vignette for a patient with sarcoidosis: GPT-4 describes a Black woman 81% of the time,” Alsentzer explains. “While sarcoidosis is more prevalent in Black patients and in women, it’s not 81% of all patients.”
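The study's own evaluation code is not shown here, but the measurement behind a figure like "81% of the time" is conceptually simple: run the same prompt many times and tally the demographics the model assigns. The sketch below illustrates that tallying step only; the vignette strings and the `demographic_rates` helper are invented for illustration, and real model outputs would replace the stubbed list.

```python
from collections import Counter
import re

def demographic_rates(vignettes):
    """Tally how often each demographic descriptor appears across a set of
    generated vignettes (assumes one race/gender pair per vignette)."""
    counts = Counter()
    for text in vignettes:
        match = re.search(r"\b(Black|white|Asian|Hispanic)\b.*?\b(man|woman)\b", text)
        if match:
            counts[f"{match.group(1)} {match.group(2)}"] += 1
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# In the study, each prompt was run 100 times; here a stub stands in
# for real model output, skewed to mirror the 81% figure quoted above.
sample = ["A 52-year-old Black woman presents with fatigue and cough."] * 81 \
       + ["A 47-year-old white man presents with fatigue and cough."] * 19
rates = demographic_rates(sample)
```

Comparing these empirical rates against actual epidemiological prevalence is what reveals the exaggeration the researchers describe.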
    Next, when GPT-4 was prompted to develop a list of 10 possible diagnoses for the NEJM Healer cases, changing the gender or race/ethnicity of the patient significantly affected its ability to prioritize the correct top diagnosis in 37% of cases.
    “In some cases, GPT-4’s decision making reflects known gender and racial biases in the literature,” Alsentzer said. “In the case of pulmonary embolism, the model ranked panic attack/anxiety as a more likely diagnosis for women than men. It also ranked sexually transmitted diseases, such as acute HIV and syphilis, as more likely for patients from racial minority backgrounds compared to white patients.”
    When asked to evaluate subjective patient traits such as honesty, understanding, and pain tolerance, GPT-4 produced significantly different responses by race, ethnicity, and gender for 23% of the questions. For example, GPT-4 was significantly more likely to rate Black male patients as abusing the opioid Percocet than Asian, Black, Hispanic, and white female patients when the answers should have been identical for all the simulated patient cases.
    Limitations of the current study include testing GPT-4’s responses using a limited number of simulated prompts and analyzing model performance using only a few traditional categories of demographic identities. Future work should investigate biases using clinical notes from the electronic health record.
    “While LLM-based tools are currently being deployed with a clinician in the loop to verify the model’s outputs, it is very challenging for clinicians to detect systemic biases when viewing individual patient cases,” Alsentzer said. “It is critical that we perform bias evaluations for each intended use of LLMs, just as we do for other machine learning models in the medical domain. Our work can help start a conversation about GPT-4’s potential to propagate bias in clinical decision support applications.”

  •

    AI’s memory-forming mechanism found to be strikingly similar to that of the brain

    An interdisciplinary team consisting of researchers from the Center for Cognition and Sociality and the Data Science Group within the Institute for Basic Science (IBS) revealed a striking similarity between the memory processing of artificial intelligence (AI) models and the hippocampus of the human brain. This new finding provides a novel perspective on memory consolidation, which is a process that transforms short-term memories into long-term ones, in AI systems.
    In the race towards developing Artificial General Intelligence (AGI), with influential entities like OpenAI and Google DeepMind leading the way, understanding and replicating human-like intelligence has become an important research interest. Central to these technological advancements is the Transformer model, whose fundamental principles are now being explored in new depth.
    The key to powerful AI systems is grasping how they learn and remember information. The team applied principles of human brain learning, specifically concentrating on memory consolidation through the NMDA receptor in the hippocampus, to AI models.
    The NMDA receptor is like a smart door in your brain that facilitates learning and memory formation. When a brain chemical called glutamate is present, the nerve cell undergoes excitation. A magnesium ion, meanwhile, acts as a small gatekeeper blocking the door. Only when this ionic gatekeeper steps aside can substances flow into the cell. This is the process that allows the brain to create and keep memories, and the gatekeeper’s (the magnesium ion’s) role in the whole process is quite specific.
    The team made a fascinating discovery: the Transformer model seems to use a gatekeeping process similar to the brain’s NMDA receptor. This revelation led the researchers to investigate if the Transformer’s memory consolidation can be controlled by a mechanism similar to the NMDA receptor’s gating process.
    In the animal brain, a low magnesium level is known to weaken memory function. The researchers found that long-term memory in the Transformer can be improved by mimicking the NMDA receptor. Just as changing magnesium levels in the brain affects memory strength, tweaking the Transformer’s parameters to reflect the gating action of the NMDA receptor enhanced memory in the AI model. This finding suggests that how AI models learn can be explained with established knowledge in neuroscience.
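The press release does not give the team's equations, but the gating idea can be caricatured in a few lines: a magnesium-like threshold decides how strongly each incoming signal is written into a running memory, so weak signals barely register while strong ones pass through. This is a purely illustrative toy, not the IBS model; the function names and the threshold value are invented.

```python
import math

def nmda_gate(signal, mg_threshold):
    """Sigmoidal gate: write strength stays near zero until the incoming
    signal exceeds a magnesium-like threshold, then rises toward one."""
    return 1.0 / (1.0 + math.exp(-(signal - mg_threshold)))

def update_memory(memory, signal, mg_threshold=2.0):
    """Write the signal into memory in proportion to how far open the gate is."""
    return memory + nmda_gate(signal, mg_threshold) * signal

m = 0.0
m = update_memory(m, 5.0)   # strong signal: gate nearly open, memory grows
m = update_memory(m, 0.5)   # weak signal: gate nearly shut, little change
```

Shifting `mg_threshold` plays the role of changing magnesium levels: it alters which signals get consolidated into the stored value.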
    C. Justin LEE, a neuroscientist and director at the institute, said, “This research makes a crucial step in advancing AI and neuroscience. It allows us to delve deeper into the brain’s operating principles and develop more advanced AI systems based on these insights.”
    CHA Meeyoung, a data scientist on the team based at KAIST, notes, “The human brain is remarkable in how it operates with minimal energy, unlike the large AI models that need immense resources. Our work opens up new possibilities for low-cost, high-performance AI systems that learn and remember information like humans.”
    What sets this study apart is its incorporation of brain-inspired nonlinearity into an AI construct, a significant advance in simulating human-like memory consolidation. The convergence of human cognitive mechanisms and AI design not only holds promise for creating low-cost, high-performance AI systems but also provides valuable insights into the workings of the brain through AI models.

  •

    Air conditioning has reduced mortality due to high temperatures in Spain by one third

    Air conditioning and heating systems have contributed considerably to reducing mortality linked to extreme temperatures in Spain, according to a study led by the Barcelona Institute for Global Health (ISGlobal), a centre supported by the “la Caixa” Foundation. The findings, published in Environment International, provide valuable insights for designing policies to adapt to climate change.
    Rising temperatures but lower mortality
    Spain, like many parts of the world, has experienced rising temperatures in recent decades, with the annual mean temperature increasing at an average rate of 0.36°C per decade. The warming trend is even more pronounced in the summer months (0.40°C per decade). Surprisingly, this increase in temperature has coincided with a progressive reduction in mortality associated with heat. In addition, cold-related mortality has also decreased.
    “Understanding the factors that reduce susceptibility to extreme temperatures is crucial to inform health adaptation policies and to combat the negative effects of climate change,” says first author of the study, Hicham Achebak, researcher at ISGlobal and Inserm (France) and holder of a Marie Sklodowska-Curie Postdoctoral Fellowship from the European Commission.
    Effective societal adaptations
    In this study, Achebak and colleagues analysed the demographic and socioeconomic factors behind the observed reduction in heat- and cold-related mortality, despite rising temperatures. They found that the increase in air conditioning (AC) prevalence in Spain was associated with a reduction in heat-related mortality, while the rise in heating prevalence was associated with a decrease in cold-related mortality. Specifically, AC was found to be responsible for about 28.6% of the decline in deaths due to heat and 31.5% of the decline in deaths due to extreme heat between the late 1980s and the early 2010s. Heating systems contributed significantly as well, accounting for about 38.3% of the reduction in cold-related deaths and 50.8% of the reduction in extreme cold-related fatalities during the same period. The decrease in cold-related mortality would have been larger had there not been a demographic shift towards a higher proportion of people aged over 65, who are more susceptible to cold temperatures.
    The authors conclude that the reduction in heat-related mortality is largely the result of the country’s socioeconomic development over the study period, rather than specific interventions such as heat-wave warning systems.

    Four decades of data
    For the statistical analysis, the research team collected data on daily mortality (all causes) and weather (temperature and relative humidity) for 48 provinces in mainland Spain and the Balearic Islands, between January 1980 and December 2018. These data were then linked to 14 indicators of context (demographic and socioeconomic variables such as housing, income and education) for these populations over the same period.
    Implications for climate adaptation
    The results of the study extend previous findings on heat-related mortality in Spain and underscore the importance of air conditioning and heating as effective adaptation measures to mitigate the effects of heat and cold. “However, we observed large disparities in the presence of AC across provinces. AC is still unaffordable for many Spanish households,” says Achebak.
    The authors also point out that the widespread use of AC could further contribute to global warming depending on the source of electricity generation, which is why other cooling strategies, such as expanding green and blue spaces in cities, are also needed.
    “Our findings have important implications for the development of adaptation strategies to climate change. They also inform future projections of the impact of climate change on human health,” concludes Joan Ballester, ISGlobal researcher and study coordinator.

  •

    Artificial intelligence can predict events in people’s lives

    Artificial intelligence developed to model written language can be utilized to predict events in people’s lives. A research project from DTU, University of Copenhagen, ITU, and Northeastern University in the US shows that if you use large amounts of data about people’s lives and train so-called ‘transformer models’, which (like ChatGPT) are used to process language, they can systematically organize the data and predict what will happen in a person’s life and even estimate the time of death.
    In a new scientific article, ‘Using Sequences of Life-events to Predict Human Lives’, published in Nature Computational Science, researchers have analyzed health data and attachment to the labour market for 6 million Danes in a model dubbed life2vec. After an initial training phase, in which the model learns the patterns in the data, it has been shown to outperform other advanced neural networks (see fact box) and to predict outcomes such as personality and time of death with high accuracy.
    “We used the model to address the fundamental question: to what extent can we predict events in your future based on conditions and events in your past? Scientifically, what is exciting for us is not so much the prediction itself, but the aspects of data that enable the model to provide such precise answers,” says Sune Lehmann, professor at DTU and first author of the article.
    Predictions of time of death
    The predictions from life2vec are answers to general questions such as ‘death within four years?’ When the researchers analyze the model’s responses, the results are consistent with existing findings within the social sciences; for example, all things being equal, individuals in a leadership position or with a high income are more likely to survive, while being male, skilled or having a mental diagnosis is associated with a higher risk of dying. Life2vec encodes the data in a large system of vectors, a mathematical structure that organizes the different data. The model decides where to place data on the time of birth, schooling, education, salary, housing and health.
    “What’s exciting is to consider human life as a long sequence of events, similar to how a sentence in a language consists of a series of words. This is usually the type of task for which transformer models in AI are used, but in our experiments we use them to analyze what we call life sequences, i.e., events that have happened in human life,” says Sune Lehmann.
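The paper's actual data pipeline is not reproduced here, but the core idea of treating a life as a sentence can be sketched: each event type gets an integer id from a vocabulary, and a person's history becomes a token sequence a transformer can consume, just like words in text. The event names, helper functions, and vocabulary below are invented for illustration.

```python
def build_vocab(event_stream):
    """Assign each distinct event type an integer id, reserving 0 for padding."""
    vocab = {"<pad>": 0}
    for events in event_stream:
        for event in events:
            vocab.setdefault(event, len(vocab))
    return vocab

def encode(events, vocab, max_len=8):
    """Turn one person's event history into a fixed-length id sequence."""
    ids = [vocab[e] for e in events][:max_len]
    return ids + [vocab["<pad>"]] * (max_len - len(ids))

# Two toy "life sentences" made of invented event tokens.
lives = [
    ["born", "school", "diagnosis:asthma", "job:teacher", "salary_raise"],
    ["born", "school", "job:welder", "hospital_visit"],
]
vocab = build_vocab(lives)
sequences = [encode(life, vocab) for life in lives]
```

Once encoded this way, the sequences can be fed to a standard transformer exactly as tokenized sentences would be.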
    Raising ethical questions
    The researchers behind the article point out that ethical questions surround the life2vec model, such as protecting sensitive data, privacy, and the role of bias in data. These challenges must be understood more deeply before the model can be used, for example, to assess an individual’s risk of contracting a disease or other preventable life events.

    “The model opens up important positive and negative perspectives to discuss and address politically. Similar technologies for predicting life events and human behaviour are already used today inside tech companies that, for example, track our behaviour on social networks, profile us extremely accurately, and use these profiles to predict our behaviour and influence us. This discussion needs to be part of the democratic conversation so that we consider where technology is taking us and whether this is a development we want,” says Sune Lehmann.
    According to the researchers, the next step would be to incorporate other types of information, such as text and images or information about our social connections. This use of data opens up a whole new interaction between social and health sciences.
    The research project
    The research project ‘Using Sequences of Life-events to Predict Human Lives’ is based on labour market data and data from the National Patient Registry (LPR) and Statistics Denmark. The dataset includes all 6 million Danes and contains information on income, salary, stipend, job type, industry, social benefits, etc. The health dataset includes records of visits to healthcare professionals or hospitals, diagnosis, patient type and degree of urgency. The dataset spans from 2008 to 2020, but in several analyses, researchers focus on the 2008-2016 period and an age-restricted subset of individuals.
    Transformer model
    A transformer model is a deep learning architecture used to learn about language and other tasks. The models can be trained to understand and generate language. The transformer model is designed to be faster and more efficient than previous models and is often used to train large language models on large datasets.
    Neural networks
    A neural network is a computer model inspired by the brain and nervous system of humans and animals. There are many different types of neural networks (e.g. transformer models). Like the brain, a neural network is made up of artificial neurons. These neurons are connected and can send signals to each other. Each neuron receives input from other neurons and then calculates an output passed on to other neurons. A neural network can learn to solve tasks by training on large amounts of data. Neural networks rely on training data to learn and improve their accuracy over time. But once these learning algorithms are fine-tuned for accuracy, they are potent tools in computer science and artificial intelligence that allow us to classify and group data at high speed. One of the most well-known neural networks is Google’s search algorithm.
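The fact box's description of a neuron, receiving inputs, computing an output, and passing it on, fits in a few lines of code. The weights and biases below are arbitrary placeholders (in a real network they would be learned from data); this is simply the fact box restated as a minimal sketch.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of its inputs plus a bias,
    squashed through a sigmoid activation into the range (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Two neurons feeding a third: the smallest possible "network".
hidden1 = neuron([1.0, 0.0], [0.6, -0.4], 0.1)
hidden2 = neuron([1.0, 0.0], [-0.3, 0.8], 0.2)
output = neuron([hidden1, hidden2], [1.5, -1.0], 0.0)
```

Training a network amounts to adjusting those weights and biases until the outputs match the training data.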