More stories


    Offshore wind farms are vulnerable to cyberattacks

    The quickening pace of societal electrification is encouraging from a climate perspective. But the transition away from fossil fuels toward renewable sources like wind presents new risks that are not yet fully understood.
    Researchers from Concordia and Hydro-Quebec presented a new study on the topic in Glasgow, United Kingdom at the 2023 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm). Their study explores the risks of cyberattacks faced by offshore wind farms. Specifically, the researchers considered wind farms that use voltage-source-converter high-voltage direct-current (VSC-HVDC) connections, which are rapidly becoming the most cost-effective solution to harvest offshore wind energy around the world.
    “As we advance the integration of renewable energies, it is imperative to recognize that we are venturing into uncharted territory, with unknown vulnerabilities and cyber threats,” says Juanwei Chen, a PhD student at the Concordia Institute for Information Systems Engineering (CIISE) at the Gina Cody School of Engineering and Computer Science.
    “Offshore wind farms are connected to the main power grid using HVDC technologies. These farms may face new operational challenges,” Chen explains.
    “Our focus is to investigate how these challenges could be intensified by cyber threats and to assess the broader impact these threats might have on our power grid.”
    Concordia PhD student Hang Du, CIISE associate professor Jun Yan and Gina Cody School dean Mourad Debbabi, along with Rawad Zgheib from the Hydro-Québec Research Institute (IREQ), also contributed to the study. The work is part of a broad research collaboration between Prof. Debbabi’s group and the IREQ cybersecurity research group led by Dr. Marthe Kassouf, whose team includes Dr. Zgheib.
    Complex and vulnerable systems
    Offshore wind farms require more cyber infrastructure than onshore wind farms, given that offshore farms are often dozens of kilometres from land and operated remotely. Offshore wind farms need to communicate with onshore systems via a wide area network. Meanwhile, the turbines also communicate with maintenance vessels and inspection drones, as well as with each other.

    This complex, hybrid-communication architecture presents multiple access points for cyberattacks. If malicious actors penetrated the local area network of the converter station on the wind farm side, they could tamper with the system’s sensors, replacing actual measurements with false data. The resulting electrical disturbances would affect the offshore wind farm at the points of common coupling.
    In turn, these disturbances could trigger poorly damped power oscillations from the offshore wind farms when all of the farms are generating their maximum output. If the cyber-induced electrical disturbances are repetitive and match the frequency of these poorly damped oscillations, the oscillations could be amplified. The amplified oscillations might then be transmitted through the HVDC system, potentially reaching and affecting the stability of the main power grid. While existing systems usually have redundancies built in to protect them against physical contingencies, such protection is rare against cybersecurity breaches.
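    The resonance mechanism the researchers describe can be illustrated with a toy simulation (a minimal sketch, not the study’s grid model): a lightly damped oscillation mode, driven by a repetitive disturbance at its own frequency, swings far harder than a well-damped one.

```python
import math

def peak_swing(zeta, omega=2 * math.pi, dt=1e-3, t_end=30.0):
    """Drive a damped oscillator x'' + 2*zeta*omega*x' + omega^2*x = cos(omega*t)
    at its natural frequency and return the largest displacement reached.
    The forcing stands in for repetitive cyber-induced disturbances."""
    x, v, peak, t = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        a = math.cos(omega * t) - 2 * zeta * omega * v - omega ** 2 * x
        v += a * dt          # semi-implicit Euler keeps the integration stable
        x += v * dt
        peak = max(peak, abs(x))
        t += dt
    return peak

# A poorly damped mode resonates; a well-damped one absorbs the same forcing.
print(peak_swing(0.01))  # lightly damped: large oscillations build up
print(peak_swing(0.5))   # well damped: the disturbance stays small
```

    With identical forcing, only the damping ratio changes the outcome, which is why repetitive disturbances matched to a poorly damped mode are the dangerous case.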
    “The system networks can handle events like router failures or signal decays. If there is an attacker in the middle who is trying to hijack the signals, then that becomes more concerning,” says Yan, the Concordia University Research Chair (Tier 2) in Artificial Intelligence in Cyber Security and Resilience.
    Yan adds that considerable gaps exist in the industry, both among manufacturers and utilities. While many organizations are focusing on corporate issues such as data security and access controls, much remains to be done to strengthen the security of operational technologies.
    He notes that Concordia is leading the push for international standardization efforts but acknowledges the work is just beginning.
    “There are regulatory standards for the US and Canada, but they often only state what is required without specifying how it should be done,” he says. “Researchers and operators are aware of the need to protect our energy security, but there remain many directions to pursue and open questions to answer.”
    This research is supported by the Concordia/Hydro-Québec/Hitachi Partnership Research Chair, with additional support from NSERC and PROMPT.


    When lab-trained AI meets the real world, ‘mistakes can happen’

    Human pathologists are extensively trained to detect when tissue samples from one patient mistakenly end up on another patient’s microscope slides (a problem known as tissue contamination). But such contamination can easily confuse artificial intelligence (AI) models, which are often trained in pristine, simulated environments, reports a new Northwestern Medicine study.
    “We train AIs to tell ‘A’ versus ‘B’ in a very clean, artificial environment, but, in real life, the AI will see a variety of materials that it hasn’t trained on. When it does, mistakes can happen,” said corresponding author Dr. Jeffery Goldstein, director of perinatal pathology and an assistant professor of perinatal pathology and autopsy at Northwestern University Feinberg School of Medicine.
    “Our findings serve as a reminder that AI that works incredibly well in the lab may fall on its face in the real world. Patients should continue to expect that a human expert is the final decider on diagnoses made on biopsies and other tissue samples. Pathologists fear — and AI companies hope — that the computers are coming for our jobs. Not yet.”
    In the new study, scientists trained three AI models to scan microscope slides of placenta tissue to (1) detect blood vessel damage; (2) estimate gestational age; and (3) classify macroscopic lesions. They trained a fourth AI model to detect prostate cancer in tissues collected from needle biopsies. When the models were ready, the scientists exposed each one to small portions of contaminant tissue (e.g. bladder, blood, etc.) that were randomly sampled from other slides. Finally, they tested the AIs’ reactions.
    Each of the four AI models paid too much attention to the tissue contamination, which resulted in errors when diagnosing or detecting vessel damage, gestational age, lesions and prostate cancer, the study found.
    The findings, published earlier this month in the journal Modern Pathology, mark the first study to examine how tissue contamination affects machine-learning models.
    ‘For a human, we’d call it a distraction, like a bright, shiny object’
    Tissue contamination is a well-known problem for pathologists, but it often comes as a surprise to non-pathologist researchers or doctors, the study points out. A pathologist examining 80 to 100 slides per day can expect to see two to three with contaminants, but they’ve been trained to ignore them.

    When humans examine tissue on slides, they can only look at a limited field within the microscope, then move to a new field and so on. After examining the entire sample, they combine all the information they’ve gathered to make a diagnosis. An AI model performs in the same way, but the study found AI was easily misled by contaminants.
    “The AI model has to decide which pieces to pay attention to and which ones not to, and that’s zero sum,” Goldstein said. “If it’s paying attention to tissue contaminants, then it’s paying less attention to the tissue from the patient that is being examined. For a human, we’d call it a distraction, like a bright, shiny object.”
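    Goldstein’s zero-sum description can be sketched with a toy attention-pooled classifier (purely illustrative, not one of the study’s models): attention weights are normalized to sum to one, so any attention captured by a striking contaminant patch is attention taken away from the patient’s tissue.

```python
import math

def diagnose(patches):
    """Toy attention-pooled slide classifier. Each patch is a pair
    (saliency, evidence): saliency drives attention, evidence is the
    patch's vote for 'disease'. Because attention is softmax-normalized,
    it is zero-sum across patches."""
    weights = [math.exp(s) for s, _ in patches]
    total = sum(weights)
    attention = [w / total for w in weights]
    score = sum(a * e for a, (_, e) in zip(attention, patches))
    return score, attention

# Patches from a diseased patient's tissue: moderate saliency, strong evidence.
patient = [(1.0, 0.9), (1.0, 0.8), (1.0, 0.85)]
clean_score, _ = diagnose(patient)

# Add one visually striking contaminant: high saliency, zero disease evidence.
contaminated = patient + [(4.0, 0.0)]
cont_score, attention = diagnose(contaminated)

print(clean_score)       # high: the model sees the disease
print(cont_score)        # collapses: the contaminant soaked up the attention
print(attention[-1])     # most of the attention goes to the contaminant patch
```

    The numbers are made up; the point is structural: a "bright, shiny object" steals normalized attention and drags the diagnosis away from the real tissue.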
    The AI models gave a high level of attention to contaminants, indicating an inability to recognize biological impurities as irrelevant. Practitioners should work to quantify and improve upon this problem, the study authors said.
    AI scientists in pathology have previously studied other kinds of image artifacts, such as blurriness, debris on the slide, folds or bubbles, but this is the first time tissue contamination has been examined.
    ‘Confident that AI for placenta is doable’
    Perinatal pathologists, such as Goldstein, are incredibly rare. In fact, there are only 50 to 100 in the entire U.S., mostly located in big academic centers, Goldstein said. This means only 5% of placentas in the U.S. are examined by human experts. Worldwide, that number is even lower. Embedding this type of expertise into AI models can help pathologists across the country do their jobs better and faster, Goldstein said.
    “I’m actually very excited about how well we were able to build the models and how well they performed before we deliberately broke them for the study,” Goldstein said. “Our results make me confident that AI evaluations of placenta are doable. We ran into a real-world problem, but hitting that speedbump means we’re on the road to better integrating the use of machine learning in pathology.”


    Artificial intelligence and immunity

    Researchers from Cleveland Clinic and IBM have published a strategy for identifying new targets for immunotherapy through artificial intelligence (AI). This is the first peer-reviewed publication from the two organizations’ Discovery Accelerator partnership, designed to advance research in healthcare and life sciences.
    The team worked together to develop supervised and unsupervised AI to reveal the molecular characteristics of peptide antigens, small pieces of protein molecules immune cells use to recognize threats. Project members came from diverse groups led by Cleveland Clinic’s Timothy Chan, M.D., Ph.D., as well as IBM’s Jeff Weber, Ph.D., Senior Research Scientist, and Wendy Cornell, Ph.D., Manager and Strategy Lead for Healthcare and Life Sciences Accelerated Discovery.
    “In the past, all our data on cancer antigen targets came from trial and error,” says Dr. Chan, chair of Cleveland Clinic’s Center for Immunotherapy and Precision Immuno-Oncology and Sheikha Fatima Bint Mubarak Endowed Chair in Immunotherapy and Precision Immuno-Oncology. “Partnering with IBM allows us to push the boundaries of artificial intelligence and health sciences research to change the way we develop and evaluate targets for cancer therapy.”
    For decades, scientists have been researching how to better identify antigens and use them to attack cancer cells or cells infected with viruses. This task has proved challenging because antigen peptides interact with immune cells based on specific features on the surface of the cells, a process which is still not well understood. Research has been limited by the sheer number of variables that affect how immune systems recognize these targets. Identifying these variables is difficult and time intensive with regular computing, so current models are limited and at times inaccurate.
    Published in Briefings in Bioinformatics, the study found that AI models that account for changes in molecular shape over time can accurately depict how immune systems recognize a target antigen. Through these models, researchers could home in on which processes are critical to target with immunotherapy treatments such as vaccines and engineered immune cells.
    Researchers can incorporate these insights into other AI models moving forward to identify more effective immunotherapy targets.
    “These discoveries are an example of what makes this partnership successful — combining IBM’s cutting-edge computational resources with Cleveland Clinic’s medical expertise,” Dr. Weber says. “These findings resulted from a key collaboration between everyone from a world-class expert in cancer immunotherapy to our physics-based simulation and AI experts. Collaboration when combined with innovation has terrific potential.”


    AI surveillance tool successfully helps to predict sepsis, saves lives

    Each year, at least 1.7 million adults in the United States develop sepsis, and approximately 350,000 die from the serious condition, in which the body’s extreme response to an infection can trigger a life-threatening chain reaction throughout the entire body.
    In a new study, published in the January 23, 2024 online edition of npj Digital Medicine, researchers at University of California San Diego School of Medicine utilized an artificial intelligence (AI) model in the emergency departments at UC San Diego Health in order to quickly identify patients at risk for sepsis infection.
    The study found that the AI algorithm, named COMPOSER and previously developed by the research team, resulted in a 17% reduction in mortality.
    “Our COMPOSER model uses real-time data in order to predict sepsis before obvious clinical manifestations,” said study co-author Gabriel Wardi, MD, chief of the Division of Critical Care in the Department of Emergency Medicine at UC San Diego School of Medicine. “It works silently and safely behind the scenes, continuously surveilling every patient for signs of possible sepsis.”
    Once a patient checks in at the emergency department, the algorithm begins to continuously monitor more than 150 different patient variables that could be linked to sepsis, such as lab results, vital signs, current medications, demographics and medical history.
    Should a patient present with multiple variables, resulting in high risk for sepsis infection, the AI algorithm will notify nursing staff via the hospital’s electronic health record. The nursing team will then review with the physician and determine appropriate treatment plans.
    “These advanced AI algorithms can detect patterns that are not initially obvious to the human eye,” said study co-author Shamim Nemati, PhD, associate professor of biomedical informatics and director of predictive analytics at UC San Diego School of Medicine. “The system can look at these risk factors and come up with a highly accurate prediction of sepsis. Conversely, if the risk patterns can be explained by other conditions with higher confidence, then no alerts will be sent.”
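    The alerting behaviour described above can be sketched as follows (a hypothetical simplification, not COMPOSER’s actual logic; the function name and threshold value are assumptions): an alert fires only when sepsis risk is high and no competing condition explains the patient’s pattern with higher confidence.

```python
def sepsis_alert(sepsis_risk, competing_risks, threshold=0.8):
    """Sketch of the surveillance logic described in the article.
    Alert only when sepsis risk is high AND no competing condition
    explains the patient's pattern with higher confidence."""
    if sepsis_risk < threshold:
        return False
    if any(risk > sepsis_risk for risk in competing_risks.values()):
        return False  # pattern better explained by another condition
    return True

print(sepsis_alert(0.9, {"heart_failure": 0.4}))   # alert the nursing staff
print(sepsis_alert(0.9, {"heart_failure": 0.95}))  # suppressed: better explanation
print(sepsis_alert(0.5, {}))                       # risk too low, no alert
```

    The second branch mirrors Nemati’s point that alerts are withheld when the risk pattern is explained by other conditions with higher confidence, which keeps alert fatigue down.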
    The study examined more than 6,000 patient admissions before and after COMPOSER was deployed in the emergency departments at UC San Diego Medical Center in Hillcrest and at Jacobs Medical Center in La Jolla.

    It is the first study to report an improvement in patient outcomes through the use of an AI deep-learning model, a model built on artificial neural networks that serves as an additional check to safely and correctly identify health concerns in patients. The model can identify complex combinations of risk factors, which the health care team then reviews for confirmation.
    “It is because of this AI model that our teams can provide life-saving therapy for patients quicker,” said Wardi, emergency medicine and critical care physician at UC San Diego Health.
    COMPOSER was activated in December 2022 and is now also being utilized in many hospital in-patient units throughout UC San Diego Health. It will soon be activated at the health system’s newest location, UC San Diego Health East Campus.
    UC San Diego Health, the region’s only academic medical system, is a pioneer in the field of AI health care, with a recent announcement of its inaugural chief health AI officer and opening of the Joan and Irwin Jacobs Center for Health Innovation at UC San Diego Health, which seeks to develop sophisticated and advanced solutions in health care.
    Additionally, the health system recently launched a pilot in which Epic, a cloud-based electronic health record system, and Microsoft’s generative AI integration automatically draft more compassionate message responses through ChatGPT, sparing doctors and caregivers this additional step so they can focus on patient care.
    “Integration of AI technology in the electronic health record is helping to deliver on the promise of digital health, and UC San Diego Health has been a leader in this space to ensure AI-powered solutions support high reliability in patient safety and quality health care,” said study co-author Christopher Longhurst, MD, executive director of the Jacobs Center for Health Innovation, and chief medical officer and chief digital officer at UC San Diego Health.
    Co-authors of this study include Aaron Boussina, Theodore Chan, Allison Donahue, Robert El-Kareh, Atul Malhotra, Robert Owens, Kimberly Quintero and Supreeth Shashikumar, all at UC San Diego.


    Health researchers develop software to predict diseases

    IntelliGenes, a first-of-its-kind software platform created at Rutgers Health, combines artificial intelligence (AI) and machine-learning approaches to measure the significance of specific genomic biomarkers to help predict diseases in individuals, according to its developers.
    A study published in Bioinformatics explains how IntelliGenes can be utilized by a wide range of users to analyze multigenomic and clinical data.
    Zeeshan Ahmed, lead author of the study and a faculty member at Rutgers Institute for Health, Health Care Policy and Aging Research (IFH), said there currently are no AI or machine-learning tools available to investigate and interpret the complete human genome, especially for nonexperts. Ahmed and members of his Rutgers lab designed IntelliGenes so anyone can use the platform, including students and those without strong knowledge of bioinformatics techniques or access to high-performance computers.
    The software combines conventional statistical methods with cutting-edge machine learning algorithms to produce personalized patient predictions and a visual representation of the biomarkers significant to disease prediction.
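    The blend of conventional statistics and prediction that the article describes can be sketched as follows (an illustrative toy, not IntelliGenes’ actual pipeline; the gene names and values are made up): genes are scored by how cleanly their expression separates cases from controls, and the top-ranked genes become candidate biomarkers for a downstream classifier.

```python
import statistics

def rank_biomarkers(expression, labels):
    """Statistics-guided gene ranking. Each gene is scored by how cleanly
    its expression separates cases (label 1) from controls (label 0):
    a large gap between group means relative to the within-group spread
    marks a promising biomarker."""
    scores = {}
    for gene, values in expression.items():
        cases = [v for v, y in zip(values, labels) if y == 1]
        ctrls = [v for v, y in zip(values, labels) if y == 0]
        spread = statistics.stdev(cases) + statistics.stdev(ctrls) or 1.0
        scores[gene] = abs(statistics.mean(cases) - statistics.mean(ctrls)) / spread
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical expression values for two made-up genes: two cases, two controls.
expression = {
    "GENE_A": [5.1, 5.3, 1.0, 1.2],   # strongly separates the groups
    "GENE_B": [2.0, 2.1, 2.0, 2.2],   # uninformative
}
labels = [1, 1, 0, 0]
print(rank_biomarkers(expression, labels))  # GENE_A ranks first
```

    A visual summary of such scores is what a tool like this can present to users without a bioinformatics background.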
    In another study, published in Scientific Reports, the researchers applied IntelliGenes to discover novel biomarkers and predict cardiovascular disease with high accuracy.
    “There is huge potential in the convergence of datasets and the staggering developments in artificial intelligence and machine learning,” said Ahmed, who also is an assistant professor of medicine at Robert Wood Johnson Medical School.
    “IntelliGenes can support personalized early detection of common and rare diseases in individuals, as well as open avenues for broader research ultimately leading to new interventions and treatments.”
    Researchers tested the software using Amarel, the high-performance computing cluster managed by the Rutgers Office of Advanced Research Computing. The office provides a research computing and data environment for Rutgers researchers engaged in complex computational and data-intensive projects.
    Coauthors of the study include William DeGroat, Dinesh Mendhe, Atharva Bhusari and Habiba Abdelhalim of IFH and Saman Zeeshan of Rutgers Cancer Institute of New Jersey.


    Research team breaks down musical instincts with AI

    Music, often referred to as the universal language, is a common component of all cultures. Could ‘musical instinct’, then, be shared to some degree despite the extensive environmental differences amongst cultures?
    On January 16, a KAIST research team led by Professor Hawoong Jung from the Department of Physics announced that it had identified, using an artificial neural network model, the principle by which musical instincts emerge from the human brain without special learning.
    Previously, many researchers have attempted to identify the similarities and differences between the music that exists in various cultures and to understand the origin of this universality. A paper published in Science in 2019 revealed that music is produced in all ethnographically distinct cultures, and that similar forms of beats and tunes are used. Neuroscientists have also previously found that a specific part of the human brain, the auditory cortex, is responsible for processing musical information.
    Professor Jung’s team used an artificial neural network model to show that cognitive functions for music form spontaneously as a result of processing auditory information received from nature, without being taught music. The research team utilized AudioSet, a large-scale collection of sound data provided by Google, and trained the artificial neural network to learn the various sounds. Interestingly, the team discovered that certain neurons within the network model would respond selectively to music. In other words, they observed the spontaneous generation of neurons that reacted minimally to various other sounds, like those of animals, nature, or machines, but showed high levels of response to various forms of music, including both instrumental and vocal.
    The neurons in the artificial neural network model showed reactive behaviours similar to those in the auditory cortex of a real brain. For example, the artificial neurons responded less to music that had been cropped into short intervals and rearranged, indicating that the spontaneously generated music-selective neurons encode the temporal structure of music. This property was not limited to a specific genre but emerged across 25 different genres, including classical, pop, rock, jazz, and electronic.
    Furthermore, suppressing the activity of the music-selective neurons was found to greatly impede cognitive accuracy for other natural sounds. That is to say, the neural function that processes musical information helps process other sounds, and ‘musical ability’ may be an instinct formed through evolutionary adaptation to better process sounds from nature.
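    The notion of a music-selective neuron can be made concrete with a simple selectivity index (an assumed metric for illustration; the paper’s exact definition may differ): the gap between a unit’s mean response to music and to other natural sounds, normalized by their sum.

```python
import statistics

def selectivity_index(music_responses, other_responses):
    """Normalized response gap between music and non-music sounds.
    Ranges from -1 (avoids music) to +1 (responds to music only);
    values near 0 indicate no preference."""
    m = statistics.mean(music_responses)
    o = statistics.mean(other_responses)
    return (m - o) / (m + o) if (m + o) else 0.0

# Hypothetical unit responses to music clips vs. animal/nature/machine sounds.
music_unit = selectivity_index([0.9, 0.8, 0.95], [0.10, 0.05, 0.20])
generic_unit = selectivity_index([0.5, 0.4], [0.45, 0.50])
print(music_unit)    # close to 1: responds selectively to music
print(generic_unit)  # near 0: no preference
```

    Ranking units by such an index is one way to find the candidates whose suppression would then be tested, as in the ablation experiment described above.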
    Professor Hawoong Jung, who advised the research, said, “The results of our study imply that evolutionary pressure has contributed to forming the universal basis for processing musical information in various cultures.” As for the significance of the research, he explained, “We look forward to this artificially built model with human-like musicality becoming an original model for various applications, including AI music generation, music therapy, and research in musical cognition.” He also commented on its limitations, adding, “This research, however, does not take into consideration the developmental process that follows the learning of music, and it must be noted that this is a study on the foundation of processing musical information in early development.”
    This research, conducted by first author Dr. Gwangsu Kim of the KAIST Department of Physics (current affiliation: MIT Department of Brain and Cognitive Sciences) and Dr. Dong-Kyum Kim (current affiliation: IBS) was published in Nature Communications under the title, “Spontaneous emergence of rudimentary music detectors in deep neural networks.”
    This research was supported by the National Research Foundation of Korea.


    Researchers propose a web 3.0 streaming architecture and marketplace

    Web 3.0 is an internet paradigm based around blockchain technology, an advanced database mechanism. Compared to Web 2.0, the current internet paradigm, Web 3.0 provides added advantages, such as transparency and decentralized control structures, because it is designed to work over trustless and permissionless networks. Unfortunately, owing to certain technical difficulties, implementing Web 3.0 media streaming requires modifications to the service architecture of existing media streaming services. These difficulties include the degradation of user experience and Web 3.0’s incompatibility with certain operating systems and browsers.
    To address these issues, a team of researchers led by Assistant Professor Gi Seok Park from Incheon National University undertook a novel project. Their findings were made available on 22 August 2023 and published in Volume 16, Issue 6 of the journal IEEE Transactions on Services Computing in November-December 2023. In this study, the researchers proposed an end-to-end system architecture specifically designed for Web 3.0 streaming services. They made use of the InterPlanetary File System (IPFS), a type of Web 3.0 peer-to-peer (P2P) data storage technology, to reduce service delays and improve user experience.
    Web 3.0 services have also been implemented using the application programming interfaces of third-party providers known as IPFS pinning services, which unfortunately limit performance. Taking this into consideration, the team designed a system in which they could fully control the blockchain nodes by deploying their own IPFS nodes running directly on their system. They also implemented new protocols for content caching and chunk scheduling on their IPFS nodes, enabling the nodes to collaborate with each other and quickly download data.
    The researchers found that their proposed system was compatible with IPFS nodes and still ran on IPFS P2P networks. They also launched Retriever, a media non-fungible token (NFT) marketplace that was developed using Web 3.0 technologies. Retriever allowed users to watch video content, ensure data privacy, and was found to be compatible with multiple mobile devices. “Our service can allow creators to monetize their video content and even sell their video content if they wish to. This is because each content will now be managed as an NFT. More importantly, this entire process will be fair and transparent,” says Dr. Park, while speaking about Retriever.
    When asked about the real-life implications of this study, Dr. Park explains, “Our proposed service would establish digital trust with users. Moreover, thanks to blockchain technology, web services will no longer need to force trust on users in the future. All transactions will be made fairly through smart contracts and recorded transparently through the blockchain ledger.”


    New research guides mathematical model-building for gene regulatory networks

    Over the last 20 years, researchers in biology and medicine have created Boolean network models to simulate complex systems and find solutions, including new treatments for colorectal cancer.
    “Boolean network models operate under the assumption that each gene in a regulatory network can have one of two states: on or off,” says Claus Kadelka, a systems biologist and associate professor of mathematics at Iowa State University.
    Kadelka and undergraduate student researchers recently published a study that disentangles the common design principles in these mathematical models for gene regulatory networks. He says showing what features have evolved over millions of years can “guide the process of accurate model building” for mathematicians, computer scientists and synthetic biologists.
    “Evolution has shaped the networks that control the decision-making of our cells in very specific, optimized ways. Synthetic biologists who try to engineer circuits that perform a particular function can learn from this evolution-inspired design,” says Kadelka.
    Gene regulatory networks determine what happens and where it happens in an organism. For example, they prompt cells in your stomach lining — but not in your eyes — to produce hydrochloric acid, even though all the cells in your body contain the same DNA.
    On a piece of paper, Kadelka draws a simple, hypothetical gene regulatory network. Gene A produces a protein that turns on gene B, which turns on gene C, which turns off gene A. This negative feedback loop is the same concept as an air conditioner that shuts off once a room reaches a certain temperature.
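    This hypothetical network translates directly into a Boolean model: each gene is on or off, and an update applies every gene’s logic rule simultaneously. A minimal sketch (synchronous updating is assumed here; published models also use asynchronous schemes):

```python
def step(state, rules):
    """Synchronous update: every gene recomputes its Boolean rule at once."""
    return {gene: rule(state) for gene, rule in rules.items()}

# The three-gene negative feedback loop: A activates B, B activates C,
# and C represses A.
rules = {
    "A": lambda s: not s["C"],
    "B": lambda s: s["A"],
    "C": lambda s: s["B"],
}

state = {"A": True, "B": False, "C": False}
trajectory = [state]
for _ in range(6):
    state = step(state, rules)
    trajectory.append(state)

for s in trajectory:
    print({gene: int(on) for gene, on in s.items()})
# The network cycles: the loop repeatedly switches A on, propagates the
# signal through B and C, and shuts A back off, like the thermostat analogy.
```

    After six steps the network returns to its starting state, the on-off analogue of an air conditioner cycling around a set temperature.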
    But gene regulatory networks can be large and complex. One of the Boolean models in the researchers’ dataset involves more than 300 genes. And along with negative feedback loops, gene regulatory networks may contain positive feedback loops and feed-forward loops, which reinforce or delay responses. Redundant genes that perform the same function are also common.

    Among these and other design principles highlighted in the new paper, Kadelka says one of the most abundant is “canalization.” It refers to a hierarchy or importance ordering among genes in a network.
    Accessible data, bolstered with undergraduate research
    Kadelka emphasizes that the project would have been difficult to complete without the First-Year Mentor Program, which matches students in the Iowa State Honors Program with research opportunities across campus.
    Undergraduate students helped Kadelka develop an algorithm to scan 30 million biomedical journal articles and filter those most likely to include Boolean biological network models. After reviewing 2,000 articles one by one, the researchers identified around 160 models with close to 7,000 regulated genes.
    Addison Schmidt, now a senior in computer science, is one of the paper’s co-authors. When he worked on the project as a freshman in 2021, he created an online database for the project.
    “A major benefit of the research is that it collects and standardizes Boolean gene regulatory networks from many sources and presents them, along with a set of analysis tools, through a centralized web interface. This expands the accessibility of the data, and the web interface makes the analysis tools useable without a programming background,” says Schmidt.

    Kadelka says systems biologists have used the database for their research and expressed gratitude for the resource. He plans to maintain and update the website and investigate why evolution selects for certain design principles in gene regulatory networks.
    As for Schmidt, he says working on the project as a freshman helped him expand his expertise with the Python programming language and become more comfortable applying his skills to research.
    “This project also motivated me to pursue other research at Iowa State where I developed other tools and, coincidentally, another website to present them,” says Schmidt.
    He adds that he appreciated Kadelka’s mentorship and hopes the First-Year Mentor Program will continue to foster opportunities for undergraduate research at Iowa State.