More stories


    New AI smartphone tool accurately diagnoses ear infections

    A new cellphone app developed by physician-scientists at UPMC and the University of Pittsburgh, which uses artificial intelligence (AI) to accurately diagnose ear infections, or acute otitis media (AOM), could help decrease unnecessary antibiotic use in young children, according to new research published today in JAMA Pediatrics.
    AOM is one of the most common childhood infections for which antibiotics are prescribed, but it can be difficult to distinguish from other ear conditions without intensive training. The new AI tool, which makes a diagnosis by assessing a short video of the ear drum captured by an otoscope connected to a cellphone camera, offers a simple and effective solution that could be more accurate than trained clinicians.
    “Acute otitis media is often incorrectly diagnosed,” said senior author Alejandro Hoberman, M.D., professor of pediatrics and director of the Division of General Academic Pediatrics at Pitt’s School of Medicine and president of UPMC Children’s Community Pediatrics. “Underdiagnosis results in inadequate care and overdiagnosis results in unnecessary antibiotic treatment, which can compromise the effectiveness of currently available antibiotics. Our tool helps get the correct diagnosis and guide the right treatment.”
    According to Hoberman, about 70% of children have an ear infection before their first birthday. Although this condition is common, accurate diagnosis of AOM requires a trained eye to detect subtle visual findings gained from a brief view of the ear drum on a wriggly baby. AOM is often confused with otitis media with effusion, or fluid behind the ear, a condition that generally does not involve bacteria and does not benefit from antimicrobial treatment.
    To develop a practical tool to improve accuracy in the diagnosis of AOM, Hoberman and his team started by building and annotating a training library of 1,151 videos of the tympanic membrane from 635 children who visited outpatient UPMC pediatric offices between 2018 and 2023. Two trained experts with extensive experience in AOM research reviewed the videos and made a diagnosis of AOM or not AOM.
    “The ear drum, or tympanic membrane, is a thin, flat piece of tissue that stretches across the ear canal,” said Hoberman. “In AOM, the ear drum bulges like a bagel, leaving a central area of depression that resembles a bagel hole. In contrast, in children with otitis media with effusion, no bulging of the tympanic membrane is present.”
    The researchers used 921 videos from the training library to teach two different AI models to detect AOM by looking at features of the tympanic membrane, including shape, position, color and translucency. Then they used the remaining 230 videos to test how the models performed.

    Both models were highly accurate, producing sensitivity and specificity values of greater than 93%, meaning that they had low rates of false negatives and false positives. According to Hoberman, previous studies of clinicians have reported diagnostic accuracy of AOM ranging from 30% to 84%, depending on type of health care provider, level of training and age of the children being examined.
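    Sensitivity and specificity can be read directly off a confusion matrix. The sketch below shows the calculation behind the "greater than 93%" figures; the per-class counts are invented for illustration, since the summary does not report them.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)  # high sensitivity -> few false negatives
    specificity = tn / (tn + fp)  # high specificity -> few false positives
    return sensitivity, specificity

# Hypothetical confusion-matrix counts for a 230-video test set
sens, spec = sensitivity_specificity(tp=108, fn=7, tn=109, fp=6)
print(f"sensitivity={sens:.3f}, specificity={spec:.3f}")
```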
    “These findings suggest that our tool is more accurate than many clinicians,” said Hoberman. “It could be a gamechanger in primary health care settings to support clinicians in stringently diagnosing AOM and guiding treatment decisions.”
    “Another benefit of our tool is that the videos we capture can be stored in a patient’s medical record and shared with other providers,” said Hoberman. “We can also show parents and trainees — medical students and residents — what we see and explain why we are or are not making a diagnosis of ear infection. It is important as a teaching tool and for reassuring parents that their child is receiving appropriate treatment.”
    Hoberman hopes that their technology could soon be implemented widely across health care provider offices to enhance accurate diagnosis of AOM and support treatment decisions.
    Other authors on the study were Nader Shaikh, M.D., Shannon Conway, Timothy Shope, M.D., Mary Ann Haralam, C.R.N.P., Catherine Campese, C.R.N.P., and Matthew Lee, all of UPMC and the University of Pittsburgh; Jelena Kovačević, Ph.D., of New York University; Filipe Condessa, Ph.D., of Bosch Center for Artificial Intelligence; and Tomas Larsson, M.Sc., and Zafer Cavdar, both of Dcipher Analytics.
    This research was supported by the Department of Pediatrics at the University of Pittsburgh School of Medicine.


    Evolution-capable AI promotes green hydrogen production using more abundant chemical elements

    A NIMS research team has developed an AI technique capable of expediting the identification of materials with desirable characteristics. Using this technique, the team was able to discover high-performance water electrolyzer electrode materials free of platinum-group elements — substances previously thought to be indispensable in water electrolysis. These materials may be used to reduce the cost of large-scale production of green hydrogen — a next-generation energy source.
    Large-scale production of green hydrogen using water electrolyzers is a viable means of achieving carbon neutrality. Currently available water electrolyzers rely on expensive, scarce platinum-group elements as their main electrocatalyst components to accelerate the oxygen evolution reaction (OER) — the sluggish half-reaction that limits the overall rate of electrolytic hydrogen production. To address this issue, research is underway to develop cheaper, platinum-group-free OER electrocatalysts composed of relatively abundant chemical elements compatible with large-scale green hydrogen production. However, identifying the optimum chemical compositions of such electrocatalysts from an almost limitless number of possible combinations had proven enormously costly, time-consuming and labor-intensive.
    This NIMS research team recently developed an AI technique capable of accurately predicting the compositions of materials with desirable characteristics by switching prediction models depending on the sizes of the datasets available for analysis. Using this AI, the team was able to identify new, effective OER electrocatalytic materials from about 3,000 candidate materials in just a single month. For reference, manual, comprehensive evaluation of these 3,000 materials was estimated to take almost six years. These newly discovered electrocatalytic materials can be synthesized using only relatively cheap and abundant metallic elements: manganese (Mn), iron (Fe), nickel (Ni), zinc (Zn) and silver (Ag). Experiments found that under certain conditions, these electrocatalytic materials exhibit superior electrochemical properties to ruthenium (Ru) oxides — the existing electrocatalytic materials with the highest OER activity known. In Earth’s crust, Ag is the least abundant element among those constituting the newly discovered electrocatalytic materials. However, its crustal abundance is nearly 100 times that of Ru, indicating that these new electrocatalytic materials can be synthesized in sufficiently large amounts to enable hydrogen mass-production using water electrolyzers.
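    One way to read "switching prediction models depending on the sizes of the datasets" is sketched below, inside a measure-then-predict loop. The model names, the size threshold, and the random stand-in for model-based ranking are all assumptions, not the team's actual method.

```python
import random

def pick_model(n_samples, threshold=30):
    # Few data points: use a simple, data-efficient model.
    # Many data points: switch to a more flexible learner.
    return "nearest_neighbour" if n_samples < threshold else "gradient_boosting"

def active_search(candidates, evaluate, budget=10):
    """Greedy loop: evaluate a candidate, grow the dataset, repeat."""
    dataset = []
    for _ in range(budget):
        model = pick_model(len(dataset))  # model choice adapts as data accumulates
        # A real run would use `model` to rank candidates by predicted activity;
        # a random pick stands in for that ranking here.
        choice = random.choice(candidates)
        dataset.append((choice, evaluate(choice)))
    return max(dataset, key=lambda pair: pair[1])  # best composition found
```

    The payoff of the real system is that each model evaluation is far cheaper than a lab experiment, which is how 3,000 candidates were screened in a month rather than years.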
    These results demonstrated that this AI technique could be used to expand the limits of human intelligence and dramatically accelerate the search for higher-performance materials. Using the technique, the team plans to expedite its efforts to develop new materials — mainly water electrolyzer electrode materials — in order to improve the efficiency of various electrochemical devices contributing to carbon neutrality.
    This project was carried out by a NIMS research team led by Ken Sakaushi (Principal Researcher) and Ryo Tamura (Team Leader). This work was conducted in conjunction with another project entitled “High throughput search for seawater electrolysis catalysts by combining automated experiments with data science” (grant number: JPMJMI21EA) under the JST-Mirai Program mission area “low carbon society.”


    AI outperforms humans in standardized tests of creative potential

    Score another one for artificial intelligence. In a recent study, 151 human participants were pitted against ChatGPT-4 in three tests designed to measure divergent thinking, which is considered to be an indicator of creative thought.
    Divergent thinking is characterized by the ability to generate a unique solution to a question that does not have one expected solution, such as “What is the best way to avoid talking about politics with my parents?” In the study, GPT-4 provided more original and elaborate answers than the human participants.
    The study, “The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks,” was published in Scientific Reports and authored by U of A Ph.D. students in psychological science Kent F. Hubert and Kim N. Awa, as well as Darya L. Zabelina, an assistant professor of psychological science at the U of A and director of the Mechanisms of Creative Cognition and Attention Lab.
    The three tests utilized were the Alternative Use Task, which asks participants to come up with creative uses for everyday objects like a rope or a fork; the Consequences Task, which invites participants to imagine possible outcomes of hypothetical situations, like “what if humans no longer needed sleep?”; and the Divergent Associations Task, which asks participants to generate 10 nouns that are as semantically distant as possible. For instance, there is not much semantic distance between “dog” and “cat” while there is a great deal between words like “cat” and “ontology.”
    Answers were evaluated for the number of responses, length of response and semantic difference between words. Ultimately, the authors found that “Overall, GPT-4 was more original and elaborate than humans on each of the divergent thinking tasks, even when controlling for fluency of responses. In other words, GPT-4 demonstrated higher creative potential across an entire battery of divergent thinking tasks.”
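    To make the "semantic difference between words" scoring concrete: scorers for tasks like the Divergent Associations Task typically embed each word as a vector and take the cosine distance between embeddings. The three-dimensional vectors below are invented for illustration; real scoring uses high-dimensional embeddings from a trained language model.

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity: near 0 for related words, near 1 for distant ones."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1 - dot / norm

# Hypothetical embeddings; real ones have hundreds of dimensions
embeddings = {
    "dog":      (0.90, 0.80, 0.10),
    "cat":      (0.85, 0.82, 0.12),
    "ontology": (0.05, 0.10, 0.95),
}

near = cosine_distance(embeddings["dog"], embeddings["cat"])
far = cosine_distance(embeddings["cat"], embeddings["ontology"])
print(f"dog-cat: {near:.3f}, cat-ontology: {far:.3f}")
```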
    This finding does come with some caveats. The authors state, “It is important to note that the measures used in this study are all measures of creative potential, but the involvement in creative activities or achievements are another aspect of measuring a person’s creativity.” The purpose of the study was to examine human-level creative potential, not necessarily people who may have established creative credentials.
    Hubert and Awa further note that “AI, unlike humans, does not have agency” and is “dependent on the assistance of a human user. Therefore, the creative potential of AI is in a constant state of stagnation unless prompted.”
    Also, the researchers did not evaluate the appropriateness of GPT-4’s responses. So while the AI may have provided more responses, and more original ones, human participants may have felt constrained by the need for their answers to be grounded in the real world.
    Awa also acknowledged that the human motivation to write elaborate answers may not have been high, and said there are additional questions about “how do you operationalize creativity? Can we really say that using these tests for humans is generalizable to different people? Is it assessing a broad array of creative thinking? So I think it has us critically examining what are the most popular measures of divergent thinking.”
    Whether the tests are perfect measures of human creative potential is not really the point. The point is that large language models are rapidly progressing and outperforming humans in ways they have not before. Whether they are a threat to replace human creativity remains to be seen. For now, the authors see promise in AI “acting as a tool of inspiration, as an aid in a person’s creative process or to overcome fixedness.”


    AI-enabled atomic robotic probe to advance quantum material manufacturing

    Scientists from the National University of Singapore (NUS) have pioneered a new methodology of fabricating carbon-based quantum materials at the atomic scale by integrating scanning probe microscopy techniques and deep neural networks. This breakthrough highlights the potential of implementing artificial intelligence (AI) at the sub-angstrom scale for enhanced control over atomic manufacturing, benefiting both fundamental research and future applications.
    Open-shell magnetic nanographenes represent a technologically appealing class of new carbon-based quantum materials, which host robust π-spin centres and non-trivial collective quantum magnetism. These properties are crucial for developing high-speed electronic devices at the molecular level and creating quantum bits, the building blocks of quantum computers. Despite significant advancements in the synthesis of these materials through on-surface synthesis, a type of solid-phase chemical reaction, achieving precise fabrication and tailoring of the properties of these quantum materials at the atomic level has remained a challenge.
    The research team, led by Associate Professor LU Jiong from the NUS Department of Chemistry and the Institute for Functional Intelligent Materials, together with Associate Professor ZHANG Chun from the NUS Department of Physics, has introduced the concept of the chemist-intuited atomic robotic probe (CARP), which integrates probe chemistry knowledge and artificial intelligence to fabricate and characterise open-shell magnetic nanographenes at the single-molecule level. This allows for precise engineering of their π-electron topology and spin configurations in an automated manner, mirroring the capabilities of human chemists. The CARP concept utilises deep neural networks trained on the experience and knowledge of surface science chemists to autonomously synthesise open-shell magnetic nanographenes. It can also extract chemical information from the experimental training database, offering conjectures about unknown mechanisms. This serves as an essential supplement to theoretical simulations, contributing to a more comprehensive understanding of probe chemistry reaction mechanisms. The research is a collaboration involving Associate Professor WANG Xiaonan of Tsinghua University in China.
    The research findings are published in the journal Nature Synthesis on 29 February 2024.
    The researchers tested the CARP concept on a complicated site-selective cyclodehydrogenation reaction used for producing chemical compounds with specific structural and electronic properties. Results show that the CARP framework can efficiently adopt the expert knowledge of the scientist and convert it into machine-understandable tasks, mimicking the workflow to perform single-molecule reactions that can manipulate the geometric shape and spin characteristic of the final chemical compound.
    In addition, the research team aims to harness the full potential of AI capabilities by extracting hidden insights from the database. They established a smart learning paradigm using a game theory-based approach to examine the framework’s learning outcomes. The analysis shows that CARP effectively captured important details that humans might miss, especially when it comes to making the cyclodehydrogenation reaction successful. This suggests that the CARP framework could be a valuable tool for gaining additional insights into the mechanisms of unexplored single-molecule reactions.
    Assoc Prof Lu said, “Our main goal is to work at the atomic level to create, study and control these quantum materials. We are striving to revolutionise the production of these materials on surfaces to enable more control over their outcomes, right down to the level of individual atoms and bonds.
    “Our goal in the near future is to extend the CARP framework further to adopt versatile on-surface probe chemistry reactions with scale and efficiency. This has the potential to transform the conventional laboratory-based on-surface synthesis process towards on-chip fabrication for practical applications. Such a transformation could play a pivotal role in accelerating the fundamental research of quantum materials and usher in a new era of intelligent atomic fabrication,” added Assoc Prof Lu.


    Scientists make nanoparticles dance to unravel quantum limits

    The question of where the boundary between classical and quantum physics lies is one of the longest-standing pursuits of modern scientific research. In new research published today, scientists demonstrate a novel platform that could help us find an answer.
    The laws of quantum physics govern the behaviour of particles at minuscule scales, leading to phenomena such as quantum entanglement, where the properties of entangled particles become inextricably linked in ways that cannot be explained by classical physics.
    Research in quantum physics helps us to fill gaps in our knowledge of physics and can give us a more complete picture of reality, but the tiny scales at which quantum systems operate can make them difficult to observe and study.
    Over the past century, physicists have successfully observed quantum phenomena in increasingly larger objects, all the way from subatomic particles like electrons to molecules which contain thousands of atoms.
    More recently, the field of levitated optomechanics, which deals with the control of high-mass micron-scale objects in vacuum, aims to push the envelope further by testing the validity of quantum phenomena in objects that are several orders of magnitude heavier than atoms and molecules. However, as the mass and size of an object increase, the interactions which result in delicate quantum features, such as entanglement, get lost to the environment, resulting in the classical behaviour we observe.
    But now, a team co-led by Dr Jayadev Vijayan, Head of the Quantum Engineering Lab at The University of Manchester, together with scientists from ETH Zurich and theorists from the University of Innsbruck, has established a new approach to overcome this problem in an experiment carried out at ETH Zurich, published in the journal Nature Physics.
    Dr Vijayan said: “To observe quantum phenomena at larger scales and shed light on the classical-quantum transition, quantum features need to be preserved in the presence of noise from the environment. As you can imagine, there are two ways to do this: one is to suppress the noise, and the second is to boost the quantum features.

    “Our research demonstrates a way to tackle the challenge by taking the second approach. We show that the interactions needed for entanglement between two optically trapped 0.1-micron-sized glass particles can be amplified by several orders of magnitude to overcome losses to the environment.”
    The scientists placed the particles between two highly reflective mirrors which form an optical cavity. This way, the photons scattered by each particle bounce between the mirrors several thousand times before leaving the cavity, leading to a significantly higher chance of interacting with the other particle.
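    As a back-of-the-envelope illustration of why the mirrors matter (this is an assumption for illustration, not a calculation from the paper): if a photon survives each bounce with probability R (the mirrors' reflectivity), the expected number of bounces before it leaves the cavity is roughly 1/(1 - R), which is how highly reflective mirrors turn one pass into thousands of chances to interact.

```python
def mean_bounces(R):
    """Expected bounces for a photon surviving each bounce with probability R."""
    return 1.0 / (1.0 - R)

# Illustrative reflectivities: the closer R is to 1, the more bounces
for R in (0.99, 0.999, 0.9997):
    print(f"R = {R}: ~{mean_bounces(R):,.0f} bounces")
```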
    Johannes Piotrowski, co-lead of the paper from ETH Zurich, added: “Remarkably, because the optical interactions are mediated by the cavity, their strength does not decay with distance, meaning we could couple micron-scale particles over several millimetres.”
    The researchers also demonstrate the remarkable ability to finely adjust or control the interaction strength by varying the laser frequencies and position of the particles within the cavity.
    The findings represent a significant leap towards understanding fundamental physics, and they also hold promise for practical applications, particularly in sensor technology for environmental monitoring and offline navigation.
    Dr Carlos Gonzalez-Ballestero, a collaborator from the Technical University of Vienna, said: “The key strength of levitated mechanical sensors is their high mass relative to other quantum systems used in sensing. The high mass makes them well-suited for detecting gravitational forces and accelerations, resulting in better sensitivity. As such, quantum sensors can be used in many different applications in various fields, such as monitoring polar ice for climate research and measuring accelerations for navigation purposes.”
    Piotrowski added: “It is exciting to work on this relatively new platform and test how far we can push it into the quantum regime.”
    Now, the team of researchers will combine the new capabilities with well-established quantum cooling techniques in a stride towards validating quantum entanglement. If successful, achieving entanglement of levitated nano- and micro-particles could narrow the gap between the quantum world and everyday classical mechanics.
    At the Photon Science Institute and the Department of Electrical and Electronic Engineering at The University of Manchester, Dr Jayadev Vijayan’s team will continue working in levitated optomechanics, harnessing interactions between multiple nanoparticles for applications in quantum sensing.


    Software speeds up drug development

    Proteins not only carry out the functions that are critical for the survival of cells, but also influence the development and progression of diseases. To understand their role in health and disease, researchers study the three-dimensional atomic structure of proteins using both experimental and computational methods.
    Over 75 percent of proteins present at the surface of our cells are covered by glycans. These sugar-like molecules form very dynamic protective shields around the proteins. However, the mobility and variability of the sugars make it difficult to determine how these shields behave, or how they influence the binding of drug molecules.
    Mateusz Sikora, the project leader and head of the Dioscuri Centre for Modelling of Posttranslational Modifications, and his team in Krakow addressed this challenge computationally, together with partners at the Max Planck Institute of Biophysics in Frankfurt am Main, Germany, and scientists at Inserm in Paris, Academia Sinica in Taipei and the University of Bremen. Their powerful new algorithm, GlycoSHIELD, enables fast yet realistic modeling of the sugar chains present on protein surfaces. By reducing computing hours, and therefore power consumption, by several orders of magnitude compared with conventional simulation tools, GlycoSHIELD paves the path towards green computing.
    From thousands of hours to a few minutes
    Protective glycan shields strongly influence how proteins interact with other molecules such as therapeutic drugs. For example, the sugar layer on the spike protein of the coronavirus hides the virus from the immune system by making it difficult for natural or vaccine-induced antibodies to recognize the virus. The sugar shields therefore play an important role in drug and vaccine development. Pharmaceutical research could benefit from routinely predicting their morphology and dynamics. Until now, however, forecasting the structure of sugar layers using computer simulations was only possible with expert knowledge on special supercomputers. In many cases, thousands or even millions of computing hours were required.
    With GlycoSHIELD, Sikora’s team provides a fast, environmentally friendly open source alternative. “Our approach reduces resources, computing time and the technical expertise needed,” says Sikora. “Anyone can now calculate the arrangement and dynamics of sugar molecules on proteins on their personal computer within minutes, without the need of expert knowledge and high-performance computers. Furthermore, this new way of making calculations is very energy efficient.” The software can not only be used in research, but could also be helpful for the development of drugs or vaccines, for example in immunotherapy for cancer.
    A jigsaw puzzle made of sugar
    How did the team manage to achieve such a high increase in efficiency? The authors created and analyzed a library of thousands of the most likely 3D poses of the most common forms of sugar chains found on proteins in humans and microorganisms. Using long simulations and experiments, they found that for a reliable prediction of glycan shields, it is sufficient to ensure that the attached sugars do not collide with membranes or parts of the protein.
    The algorithm is based on these findings. “GlycoSHIELD users only have to specify the protein and the locations where the sugars are attached. Our software then puzzles them onto the protein surface in the most likely arrangement,” explains Sikora. “We could reproduce the sugar shields of the spike protein accurately: they look exactly like what we see in the experiments!” With GlycoSHIELD it is now possible to supplement new as well as existing protein structures with sugar information. The scientists also used GlycoSHIELD to reveal the pattern of the sugars on the GABAA receptor, an important target for sedatives and anesthetics.
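    The collision criterion described above can be sketched as a simple rejection filter over a conformer library. The coordinates, the tiny two-conformer library, and the 2.0 distance cutoff are all invented for illustration; GlycoSHIELD's actual library and geometry handling are far more involved.

```python
import math

def clashes(conformer, protein_atoms, cutoff=2.0):
    """True if any glycan atom comes within `cutoff` of any protein atom."""
    for glycan_atom in conformer:
        for protein_atom in protein_atoms:
            if math.dist(glycan_atom, protein_atom) < cutoff:
                return True
    return False

def shield(conformer_library, protein_atoms):
    """Keep only sterically allowed conformers: the predicted glycan shield."""
    return [c for c in conformer_library if not clashes(c, protein_atoms)]

# Toy example: two protein atoms, two candidate glycan poses
protein = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
library = [
    [(0.5, 0.5, 0.5)],  # overlaps the protein -> rejected
    [(5.0, 5.0, 5.0)],  # well clear -> accepted
]
print(len(shield(library, protein)), "conformer(s) accepted")
```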


    Umbrella for atoms: The first protective layer for 2D quantum materials

    The race to create increasingly faster and more powerful computer chips continues as transistors, their fundamental components, shrink to ever smaller and more compact sizes. In a few years, these transistors will measure just a few atoms across — by which point, the miniaturization of the silicon technology currently used will have reached its physical limits. Consequently, the quest for alternative materials with entirely new properties is crucial for future technological advancements.
    Back in 2021, scientists from the Cluster of Excellence ct.qmat — Complexity and Topology in Quantum Matter at the universities JMU Würzburg and TU Dresden made a significant discovery: topological quantum materials such as indenene, which hold great promise for ultrafast, energy-efficient electronics. The resulting extremely thin quantum semiconductors are composed of a single atom layer — in indenene’s case, indium atoms — and act as topological insulators, conducting electricity virtually without resistance along their edges.
    “Producing such a single atomic layer requires sophisticated vacuum equipment and a specific substrate material. To utilize this two-dimensional material in electronic components, it would need to be removed from the vacuum environment. However, exposure to air, even briefly, leads to oxidation, destroying its revolutionary properties and rendering it useless,” explains experimental physicist Professor Ralph Claessen, ct.qmat’s Würzburg spokesperson.
    The ct.qmat Würzburg team has now managed to solve this problem. Their results have been published in the journal Nature Communications.
    In Search of a Protective Coating
    “We dedicated two years to finding a method to protect the sensitive indenene layer from environmental elements using a protective coating. The challenge was ensuring that this coating did not interact with the indenene layer,” explains Cedric Schmitt, one of Claessen’s doctoral students involved in the project. This interaction is problematic because when different types of atoms — from the protective layer and the semiconductor, for instance — meet, they react chemically at the atomic level, changing the material. This isn’t a problem with conventional silicon chips, which comprise multiple atomic layers, leaving sufficient layers unaffected and hence still functional.
    “A semiconductor material consisting of a single atomic layer such as indenene would normally be compromised by a protective film. This posed a seemingly insurmountable challenge that piqued our research curiosity,” says Claessen. The search for a viable protective layer led them to explore van der Waals materials, named after the Dutch physicist Johannes Diderik van der Waals (1837-1923). Claessen explains: “These two-dimensional van der Waals atomic layers are characterized by strong internal bonds between their atoms, while only weakly bonding to the substrate. This concept is akin to how pencil lead made of graphite — a form of carbon with atoms arranged in honeycomb layers — writes on paper. The layers of graphene can be easily separated. We aimed to replicate this characteristic.”
    Success!

    Using sophisticated ultrahigh vacuum equipment, the Würzburg team experimented with heating silicon carbide (SiC) as a substrate for indenene, exploring the conditions needed to form graphene from it. “Silicon carbide consists of silicon and carbon atoms. Heating it causes the carbon atoms to detach from the surface and form graphene,” says Schmitt, elucidating the laboratory process. “We then vapor-deposited indium atoms, which are sandwiched between the protective graphene layer and the silicon carbide substrate. This is how the protective layer for our two-dimensional quantum material indenene was formed.”
    Umbrella Unfurled
    For the first time globally, Claessen and his team at ct.qmat’s Würzburg branch successfully crafted a functional protective layer for a two-dimensional quantum semiconductor material without compromising its extraordinary quantum properties. After analyzing the fabrication process, they thoroughly tested the layer’s protective capabilities against oxidation and corrosion. “It works! The sample can even be exposed to water without being affected in any way,” says Claessen with delight. “The graphene layer acts like an umbrella for our indenene.”
    Toward Atomic Layer Electronics
    This breakthrough paves the way for applications involving highly sensitive semiconductor atomic layers. The manufacture of ultrathin electronic components requires them to be processed in air or other chemical environments. This has been made possible thanks to the discovery of this protective mechanism. The team in Würzburg is now focused on identifying more van der Waals materials that can serve as protective layers — and they already have a few prospects in mind. The snag is that despite graphene’s effective protection of atomic monolayers against environmental factors, its electrical conductivity poses a risk of short circuits. The Würzburg scientists are working on overcoming these challenges and creating the conditions for tomorrow’s atomic layer electronics.


    Researchers use AI, Google Street View to predict household energy costs on large scale

    Low-income households in the United States are bearing an energy burden that is three times that of the average household, according to the U.S. Department of Energy.
    In total, more than 46 million U.S. households carry a significant energy burden — meaning they pay more than 6 percent of their gross income for basic energy expenses such as cooling and heating their homes.
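    The 6-percent threshold above amounts to a one-line calculation. The income and cost figures below are hypothetical, chosen only to show the arithmetic.

```python
def energy_burden(annual_energy_cost, gross_income):
    """Fraction of gross income spent on basic energy expenses."""
    return annual_energy_cost / gross_income

burden = energy_burden(annual_energy_cost=2400, gross_income=32000)
print(f"{burden:.1%} of income")  # 7.5%
print("significant burden" if burden > 0.06 else "below threshold")
```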
    Passive design elements like natural ventilation can play a pivotal role in reducing energy consumption. By harnessing ambient energy sources like sunlight and wind, they can create a more comfortable environment at little or no cost. However, data on passive design is scarce, making it difficult to assess the energy savings on a large scale.
    To address that need, an interdisciplinary team of experts from the University of Notre Dame, in collaboration with faculty at the University of Maryland and the University of Utah, has found a way to use artificial intelligence to analyze a household’s passive design characteristics and predict its energy expenses with more than 74 percent accuracy.
    By combining their findings with demographic data including poverty levels, the researchers have created a comprehensive model for predicting energy burden across 1,402 census tracts and nearly 300,000 households in the Chicago metropolitan area. Their research was published this month in the journal Building and Environment.
    The results yield invaluable insights for policymakers and urban planners, said Ming Hu, associate dean for research, scholarship and creative work in the School of Architecture, allowing them to identify neighborhoods that are most vulnerable — and paving the way toward smart and sustainable cities.
    “When families cannot afford air conditioning or heat, it can lead to dire health risks,” Hu said. “And these risks are only exacerbated by climate change, which is expected to increase both the frequency and intensity of extreme temperature events. There is an urgent and real need to find low-cost, low-tech solutions to help reduce energy burden and to help families prepare for and adapt to our changing climate.”
    In addition to Hu, who is a concurrent associate professor in the College of Engineering, the Notre Dame research team includes Chaoli Wang, a professor of computer science and engineering; Siyuan Yao, a doctoral student in the Department of Computer Science and Engineering; Siavash Ghorbany, a doctoral student in the Department of Civil and Environmental Engineering and Earth Science; and Matthew Sisk, an associate professor of the practice in the Lucy Family Institute for Data and Society.

    Their research, which was funded by the Lucy Institute as part of the Health Equity Data Lab, focused on three of the most influential factors in passive design: the size of windows in the dwelling, the types of windows (operable or fixed) and the percent of the building that has proper shading.
    Using a convolutional neural network, the team analyzed Google Street View images of residential buildings in Chicago and then evaluated several machine learning methods to find the best prediction model. Their results show that passive design characteristics are associated with average energy burden and are essential for prediction models.
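    Once the vision model has extracted the three passive-design features named above, the prediction step reduces to supervised learning on a small tabular feature vector. The toy scorer below is a stand-in for the team's actual models, with made-up weights chosen only to illustrate the expected direction of each feature (more ventilation and shading, lower burden).

```python
# Illustrative sketch, not the study's model: a hand-rolled logistic
# scorer over the three passive-design features from the article.
import math

def predict_high_burden_probability(window_ratio: float,
                                    operable: float,
                                    shading_pct: float) -> float:
    """Toy model: probability a household carries a high energy burden.

    window_ratio - window area as a fraction of the facade (0..1)
    operable     - 1.0 if windows open for natural ventilation, else 0.0
    shading_pct  - fraction of the building with proper shading (0..1)
    """
    # Assumed weights; a real model would learn these from labeled data.
    score = 1.5 - 2.0 * window_ratio - 1.0 * operable - 1.5 * shading_pct
    return 1.0 / (1.0 + math.exp(-score))

# A well-shaded home with operable windows vs. a sealed, unshaded one.
print(predict_high_burden_probability(0.30, 1.0, 0.8))  # lower probability
print(predict_high_burden_probability(0.10, 0.0, 0.1))  # higher probability
```

    In the published work the features come from street-level imagery rather than being entered by hand, which is what makes the approach scalable.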
    “The first step toward mitigating the energy burden for low-income families is to get a better understanding of the issue and to be able to measure and predict it,” Ghorbany said. “So, we asked, ‘What if we could use everyday tools and technologies like Google Street View, combined with the power of machine learning, to gather this information?’ We hope it will be a positive step toward energy justice in the United States.”
    The resulting model is easily scalable and far more efficient than previous methods of energy auditing, which required researchers to go building by building through an area.
    Over the next few months, the team will work with Notre Dame’s Center for Civic Innovation to evaluate residences in the local South Bend and Elkhart communities. Being able to use this model to quickly and efficiently get information to the organizations who can help local families is an exciting next step for this work, Sisk said.
    “When you have an increased energy burden, where is that money being taken away from? Is it being taken from educational opportunities or nutritious food? Is it then contributing to that population becoming more disenfranchised as time goes on?” Sisk said. “When we look at systemic issues like poverty, there is no one thing that will fix it. But when there’s a thread we can pull, when there are actionable steps that can start to make it a little bit better, that’s really powerful.”
    The researchers are also working toward including additional passive design characteristics in the analysis, such as insulation, cool roofs and green roofs. And eventually, they hope to scale the project up to evaluate and address energy burden disparities at the national level.

    For Hu, the project is emblematic of the University’s commitments to both sustainability and helping a world in need.
    “This is an issue of environmental justice. And this is what we do so well at Notre Dame — and what we should be doing,” she said. “We want to use advancements like AI and machine learning not just because they are cutting-edge technologies, but for the common good.”