More stories


    What coffee with cream can teach us about quantum physics

    Add a dash of creamer to your morning coffee, and clouds of white liquid will swirl around your cup. But give it a few seconds, and those swirls will disappear, leaving you with an ordinary mug of brown liquid.
    Something similar happens in quantum computer chips — devices that tap into the strange properties of the universe at its smallest scales — where information can quickly jumble up, limiting the memory capabilities of these tools.
    That doesn’t have to be the case, said Rahul Nandkishore, associate professor of physics at the University of Colorado Boulder.
    In a new coup for theoretical physics, he and his colleagues have used math to show that scientists could create, essentially, a scenario where the milk and coffee never mix — no matter how hard you stir them.
    The group’s findings may lead to new advances in quantum computer chips, potentially providing engineers with new ways to store information in incredibly tiny objects.
    “Think of the initial swirling patterns that appear when you add cream to your morning coffee,” said Nandkishore, senior author of the new study. “Imagine if these patterns continued to swirl and dance no matter how long you watched.”
    Researchers still need to run experiments in the lab to make sure that these never-ending swirls really are possible. But the group’s results are a major step forward for physicists seeking to create materials that remain out of balance, or equilibrium, for long periods of time — a pursuit known as “ergodicity breaking.”
    The team’s findings appeared this week in the latest issue of Physical Review Letters.

    Quantum memory
    The study, which includes co-authors David Stephen and Oliver Hart, postdoctoral researchers in physics at CU Boulder, hinges on a common problem in quantum computing.
    Normal computers run on “bits,” which take the form of zeros or ones. Nandkishore explained that quantum computers, in contrast, employ “qubits,” which can exist as zero, one or, through the strangeness of quantum physics, zero and one at the same time. Engineers have made qubits out of a wide range of things, including individual atoms trapped by lasers or tiny devices called superconductors.
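    The "zero and one at the same time" behavior is called superposition. As a rough illustration (a generic sketch, not tied to any particular qubit hardware or to this study), a single qubit can be modeled as two complex amplitudes whose squared magnitudes give the probabilities of reading 0 or 1:

```python
import math

# A qubit state is a pair of complex amplitudes (a, b) for |0> and |1>,
# with |a|^2 + |b|^2 = 1.  |a|^2 is the probability of measuring 0.
def probabilities(a, b):
    return abs(a) ** 2, abs(b) ** 2

zero = (1 + 0j, 0 + 0j)                     # classical-like "0"
one = (0 + 0j, 1 + 0j)                      # classical-like "1"
s = (1 / math.sqrt(2), 1 / math.sqrt(2))    # equal superposition of 0 and 1

p0, p1 = probabilities(*s)
print(p0, p1)  # both come out ~0.5: half the time 0, half the time 1
```

    Measuring the superposed state gives 0 or 1 with equal chance, which is why disturbances that scramble these amplitudes destroy stored quantum information.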
    But just like that cup of coffee, qubits can become easily mixed up. If you flip, for example, all of your qubits to one, they’ll eventually flip back and forth until the entire chip becomes a disorganized mess.
    In the new research, Nandkishore and his colleagues may have figured out a way around that tendency toward mixing. The group calculated that if scientists arrange qubits into particular patterns, these assemblages will retain their information — even if you disturb them using a magnetic field or a similar disruption. That could, the physicist said, allow engineers to build devices with a kind of quantum memory.
    “This could be a way of storing information,” he said. “You would write information into these patterns, and the information couldn’t be degraded.”
    Tapping into geometry

    In the study, the researchers used mathematical modeling tools to envision an array of hundreds to thousands of qubits arranged in a checkerboard-like pattern.
    The trick, they discovered, was to stuff the qubits into a tight spot. If qubits get close enough together, Nandkishore explained, they can influence the behavior of their neighbors, almost like a crowd of people trying to squeeze themselves into a telephone booth. Some of those people might be standing upright or on their heads, but they can’t flip the other way without pushing on everyone else.
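    The phone-booth picture resembles what physicists call a kinetically constrained model, in which a spin (or qubit) may only flip when its neighbors permit it. The rule below is invented purely for illustration, not the study's actual model: on a ring, a spin may flip only if at least one neighbor already points the same way, which leaves a perfectly alternating pattern with no legal moves at all:

```python
# Hypothetical kinetically constrained chain (illustration only): a spin
# may flip only if at least one nearest neighbor points the same way.
def can_flip(spins, i):
    n = len(spins)
    left, right = spins[(i - 1) % n], spins[(i + 1) % n]
    return spins[i] == left or spins[i] == right

# A perfectly alternating ring: every neighbor disagrees, so nothing
# is ever allowed to flip -- the pattern is dynamically frozen.
alternating = [+1, -1] * 4
moves = [i for i in range(len(alternating)) if can_flip(alternating, i)]
print(moves)  # [] -- no legal flips anywhere

# A messier configuration does have legal moves and will keep evolving.
messy = [+1, +1, -1, -1, +1, -1, +1, -1]
print([i for i in range(len(messy)) if can_flip(messy, i)])
```

    In a toy rule like this, the carefully arranged pattern never relaxes, loosely mirroring the cream swirls that never blend in.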
    The researchers calculated that if they arranged these patterns in just the right way, those patterns might flow around a quantum computer chip and never degrade — much like those clouds of cream swirling forever in your coffee.
    “The wonderful thing about this study is that we discovered that we could understand this fundamental phenomenon through what is almost simple geometry,” Nandkishore said.
    The team’s findings could influence a lot more than just quantum computers.
    Nandkishore explained that almost everything in the universe, from cups of coffee to vast oceans, tends to move toward what scientists call “thermal equilibrium.” If you drop an ice cube into your mug, for example, heat from your coffee will melt the ice, eventually forming a liquid with a uniform temperature.
    His new findings, however, join a growing body of research that suggests that some small organizations of matter can resist that equilibrium — seemingly breaking some of the most immutable laws of the universe.
    “We’re not going to have to redo our math for ice and water,” Nandkishore said. “The field of mathematics that we call statistical physics is incredibly successful for describing things we encounter in everyday life. But there are settings where maybe it doesn’t apply.”


    Offshore wind farms are vulnerable to cyberattacks

    The quickening pace of societal electrification is encouraging from a climate perspective. But the transition away from fossil fuels toward renewable sources like wind presents new risks that are not yet fully understood.
    Researchers from Concordia and Hydro-Quebec presented a new study on the topic in Glasgow, United Kingdom at the 2023 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm). Their study explores the risks of cyberattacks faced by offshore wind farms. Specifically, the researchers considered wind farms that use voltage-source-converter high-voltage direct-current (VSC-HVDC) connections, which are rapidly becoming the most cost-effective solution to harvest offshore wind energy around the world.
    “As we advance the integration of renewable energies, it is imperative to recognize that we are venturing into uncharted territory, with unknown vulnerabilities and cyber threats,” says Juanwei Chen, a PhD student at the Concordia Institute for Information Systems Engineering (CIISE) at the Gina Cody School of Engineering and Computer Science.
    “Offshore wind farms are connected to the main power grid using HVDC technologies. These farms may face new operational challenges,” Chen explains.
    “Our focus is to investigate how these challenges could be intensified by cyber threats and to assess the broader impact these threats might have on our power grid.”
    Concordia PhD student Hang Du, CIISE associate professor Jun Yan and Gina Cody School dean Mourad Debbabi, along with Rawad Zgheib from the Hydro-Quebec Research Institute (IREQ), also contributed to the study. The work is part of a broad research collaboration between Prof. Debbabi’s group and the IREQ cybersecurity research group led by Dr. Marthe Kassouf, whose team includes Dr. Zgheib.
    Complex and vulnerable systems
    Offshore wind farms require more cyber infrastructure than onshore wind farms, given that offshore farms are often dozens of kilometres from land and operated remotely. Offshore wind farms need to communicate with onshore systems via a wide area network. Meanwhile, the turbines also communicate with maintenance vessels and inspection drones, as well as with each other.

    This complex, hybrid-communication architecture presents multiple access points for cyberattacks. If malicious actors were able to penetrate the local area network of the converter station on the wind farm side, these actors could tamper with the system’s sensors. This tampering could lead to the replacement of actual data with false information. As a result, electrical disturbances would affect the offshore wind farm at the points of common coupling.
    In turn, these disturbances could trigger poorly damped power oscillations from the offshore wind farms when all the offshore wind farms are generating their maximum output. If these cyber-induced electrical disturbances are repetitive and match the frequency of the poorly damped power oscillations, the oscillations could be amplified. These amplified oscillations might then be transmitted through the HVDC system, potentially reaching and affecting the stability of the main power grid. While existing systems usually have redundancies built in to protect them against physical contingencies, such protection is rare against cybersecurity breaches.
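    The amplification mechanism described here is resonance: a weakly damped oscillation grows when a repetitive disturbance matches its natural frequency. The simulation below is a generic driven, damped oscillator with invented parameters, not the study's grid model, but it shows the frequency-matching effect:

```python
import math

def peak_response(drive_freq, natural_freq=1.0, damping=0.02,
                  dt=0.01, steps=60000):
    """Integrate x'' + 2*damping*w0*x' + w0^2*x = cos(w*t) with
    semi-implicit Euler; return the late-time peak displacement."""
    w0 = natural_freq
    x, v, peak = 0.0, 0.0, 0.0
    for k in range(steps):
        t = k * dt
        a = math.cos(drive_freq * t) - 2 * damping * w0 * v - w0 * w0 * x
        v += a * dt
        x += v * dt
        if k > steps // 2:  # skip the start-up transient
            peak = max(peak, abs(x))
    return peak

on_resonance = peak_response(1.0)    # disturbance matches the natural mode
off_resonance = peak_response(2.5)   # same forcing, mismatched frequency
print(on_resonance, off_resonance)   # the matched drive swings far higher
```

    With light damping, the same small forcing produces a vastly larger swing when its frequency matches the system's own, which is why repetitive cyber-induced disturbances tuned to a poorly damped mode are the dangerous case.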
    “The system networks can handle events like router failures or signal decays. If there is an attacker in the middle who is trying to hijack the signals, then that becomes more concerning,” says Yan, the Concordia University Research Chair (Tier 2) in Artificial Intelligence in Cyber Security and Resilience.
    Yan adds that considerable gaps exist in the industry, both among manufacturers and utilities. While many organizations are focusing on corporate issues such as data security and access controls, much remains to be done to strengthen the security of operational technologies.
    He notes that Concordia is leading the push for international standardization efforts but acknowledges the work is just beginning.
    “There are regulatory standards for the US and Canada, but they often only state what is required without specifying how it should be done,” he says. “Researchers and operators are aware of the need to protect our energy security, but there remain many directions to pursue and open questions to answer.”
    This research is supported by the Concordia/Hydro-Québec/Hitachi Partnership Research Chair, with additional support from NSERC and PROMPT.


    Many but not all of the world’s aquifers are losing water

    The world’s precious stash of subterranean freshwater is shrinking — and in nearly a third of aquifers, that loss has been speeding up in the last couple of decades, researchers report in the Jan. 25 Nature.

    A one-two punch of unsustainable groundwater withdrawals and changing climate has been causing global water levels to fall on average, leading to water shortages, slumping land surfaces and seawater intrusion into aquifers. The new study suggests that groundwater decline has accelerated in many places since 2000, but also that these losses can be reversed with better water management.


    When lab-trained AI meets the real world, ‘mistakes can happen’

    Human pathologists are extensively trained to detect when tissue samples from one patient mistakenly end up on another patient’s microscope slides (a problem known as tissue contamination). But such contamination can easily confuse artificial intelligence (AI) models, which are often trained in pristine, simulated environments, reports a new Northwestern Medicine study.
    “We train AIs to tell ‘A’ versus ‘B’ in a very clean, artificial environment, but, in real life, the AI will see a variety of materials that it hasn’t trained on. When it does, mistakes can happen,” said corresponding author Dr. Jeffery Goldstein, director of perinatal pathology and an assistant professor of perinatal pathology and autopsy at Northwestern University Feinberg School of Medicine.
    “Our findings serve as a reminder that AI that works incredibly well in the lab may fall on its face in the real world. Patients should continue to expect that a human expert is the final decider on diagnoses made on biopsies and other tissue samples. Pathologists fear — and AI companies hope — that the computers are coming for our jobs. Not yet.”
    In the new study, scientists trained three AI models to scan microscope slides of placenta tissue to (1) detect blood vessel damage; (2) estimate gestational age; and (3) classify macroscopic lesions. They trained a fourth AI model to detect prostate cancer in tissues collected from needle biopsies. When the models were ready, the scientists exposed each one to small portions of contaminant tissue (e.g. bladder, blood, etc.) that were randomly sampled from other slides. Finally, they tested the AIs’ reactions.
    Each of the four AI models paid too much attention to the tissue contamination, which resulted in errors when diagnosing or detecting vessel damage, gestational age, lesions and prostate cancer, the study found.
    The findings, published earlier this month in the journal Modern Pathology, mark the first study to examine how tissue contamination affects machine-learning models.
    ‘For a human, we’d call it a distraction, like a bright, shiny object’
    Tissue contamination is a well-known problem for pathologists, but it often comes as a surprise to non-pathologist researchers or doctors, the study points out. A pathologist examining 80 to 100 slides per day can expect to see two to three with contaminants, but they’ve been trained to ignore them.

    When humans examine tissue on slides, they can only look at a limited field within the microscope, then move to a new field and so on. After examining the entire sample, they combine all the information they’ve gathered to make a diagnosis. An AI model performs in the same way, but the study found AI was easily misled by contaminants.
    “The AI model has to decide which pieces to pay attention to and which ones not to, and that’s zero sum,” Goldstein said. “If it’s paying attention to tissue contaminants, then it’s paying less attention to the tissue from the patient that is being examined. For a human, we’d call it a distraction, like a bright, shiny object.”
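    Goldstein's "zero sum" point can be illustrated with softmax attention, where the weights over image patches must sum to one, so any weight captured by a contaminant patch is taken away from the patient's tissue. The numbers below are invented for illustration and are not from the study's actual models:

```python
import math

def softmax(scores):
    # Convert raw attention scores into weights that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def weighted_prediction(patch_preds, attention_scores):
    # Final prediction is the attention-weighted average of patch scores.
    w = softmax(attention_scores)
    return sum(p * wi for p, wi in zip(patch_preds, w))

# Three patient patches, all suggesting disease (score 0.9 each).
clean = weighted_prediction([0.9, 0.9, 0.9], [1.0, 1.0, 1.0])

# Add a contaminant patch: it scores "healthy" (0.1) and, as a bright,
# unfamiliar object, grabs a disproportionately large attention score.
contaminated = weighted_prediction([0.9, 0.9, 0.9, 0.1],
                                   [1.0, 1.0, 1.0, 4.0])

print(clean, contaminated)  # the contaminant drags the prediction down
```

    Because the weights are zero-sum, the distracting patch does not just add noise; it actively steals attention from the diagnostic tissue.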
    The AI models gave a high level of attention to contaminants, indicating an inability to disregard biological impurities. Practitioners should work to quantify and improve upon this problem, the study authors said.
    Previous AI research in pathology has examined other kinds of image artifacts, such as blurriness, debris on the slide, folds or bubbles, but this is the first time tissue contamination has been studied.
    ‘Confident that AI for placenta is doable’
    Perinatal pathologists, such as Goldstein, are incredibly rare. In fact, there are only 50 to 100 in the entire U.S., mostly located in big academic centers, Goldstein said. This means only 5% of placentas in the U.S. are examined by human experts. Worldwide, that number is even lower. Embedding this type of expertise into AI models can help pathologists across the country do their jobs better and faster, Goldstein said.
    “I’m actually very excited about how well we were able to build the models and how well they performed before we deliberately broke them for the study,” Goldstein said. “Our results make me confident that AI evaluations of placenta are doable. We ran into a real-world problem, but hitting that speedbump means we’re on the road to better integrating the use of machine learning in pathology.”


    Artificial intelligence and immunity

    Researchers from Cleveland Clinic and IBM have published a strategy for identifying new targets for immunotherapy through artificial intelligence (AI). This is the first peer-reviewed publication from the two organizations’ Discovery Accelerator partnership, designed to advance research in healthcare and life sciences.
    The team worked together to develop supervised and unsupervised AI to reveal the molecular characteristics of peptide antigens, small pieces of protein molecules that immune cells use to recognize threats. Project members came from diverse groups led by Cleveland Clinic’s Timothy Chan, M.D., Ph.D., as well as IBM’s Jeff Weber, Ph.D., Senior Research Scientist, and Wendy Cornell, Ph.D., Manager and Strategy Lead for Healthcare and Life Sciences Accelerated Discovery.
    “In the past, all our data on cancer antigen targets came from trial and error,” says Dr. Chan, chair of Cleveland Clinic’s Center for Immunotherapy and Precision Immuno-Oncology and Sheikha Fatima Bint Mubarak Endowed Chair in Immunotherapy and Precision Immuno-Oncology. “Partnering with IBM allows us to push the boundaries of artificial intelligence and health sciences research to change the way we develop and evaluate targets for cancer therapy.”
    For decades, scientists have been researching how to better identify antigens and use them to attack cancer cells or cells infected with viruses. This task has proved challenging because antigen peptides interact with immune cells based on specific features on the surface of the cells, a process which is still not well understood. Research has been limited by the sheer number of variables that affect how immune systems recognize these targets. Identifying these variables is difficult and time intensive with regular computing, so current models are limited and at times inaccurate.
    Published in Briefings in Bioinformatics, the study found that AI models that account for changes in molecular shape over time can accurately depict how immune systems recognize a target antigen. Through these models, researchers could home in on what processes are critical to target with immunotherapy treatments such as vaccines and engineered immune cells.
    Researchers can incorporate these insights into other AI models moving forward to identify more effective immunotherapy targets.
    “These discoveries are an example of what makes this partnership successful — combining IBM’s cutting-edge computational resources with Cleveland Clinic’s medical expertise,” Dr. Weber says. “These findings resulted from a key collaboration between everyone from a world-class expert in cancer immunotherapy to our physics-based simulation and AI experts. Collaboration when combined with innovation has terrific potential.”


    AI surveillance tool successfully helps to predict sepsis, saves lives

    Each year, at least 1.7 million adults in the United States develop sepsis, and approximately 350,000 will die from the serious condition, in which the body’s extreme response to an infection can trigger a life-threatening chain reaction throughout the entire body.
    In a new study, published in the January 23, 2024 online edition of npj Digital Medicine, researchers at University of California San Diego School of Medicine utilized an artificial intelligence (AI) model in the emergency departments at UC San Diego Health in order to quickly identify patients at risk for sepsis infection.
    The study found that the AI algorithm, called COMPOSER, which was previously developed by the research team, resulted in a 17% reduction in mortality.
    “Our COMPOSER model uses real-time data in order to predict sepsis before obvious clinical manifestations,” said study co-author Gabriel Wardi, MD, chief of the Division of Critical Care in the Department of Emergency Medicine at UC San Diego School of Medicine. “It works silently and safely behind the scenes, continuously surveilling every patient for signs of possible sepsis.”
    Once a patient checks in at the emergency department, the algorithm begins to continuously monitor more than 150 different patient variables that could be linked to sepsis, such as lab results, vital signs, current medications, demographics and medical history.
    Should a patient present with multiple variables, resulting in high risk for sepsis infection, the AI algorithm will notify nursing staff via the hospital’s electronic health record. The nursing team will then review with the physician and determine appropriate treatment plans.
    “These advanced AI algorithms can detect patterns that are not initially obvious to the human eye,” said study co-author Shamim Nemati, PhD, associate professor of biomedical informatics and director of predictive analytics at UC San Diego School of Medicine. “The system can look at these risk factors and come up with a highly accurate prediction of sepsis. Conversely, if the risk patterns can be explained by other conditions with higher confidence, then no alerts will be sent.”
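    Mechanically, this kind of always-on surveillance amounts to scoring each patient's current variables and alerting above a threshold. The sketch below is deliberately simplified, with invented variables, baselines, and weights; COMPOSER's actual deep-learning model and its 150-plus inputs are far more sophisticated:

```python
# Invented illustration of threshold-based alerting; the real COMPOSER
# model and its variables are not described in detail in this article.
WEIGHTS = {"heart_rate": 0.02, "resp_rate": 0.05, "lactate": 0.30,
           "temp_c": 0.10, "wbc_count": 0.01}
BASELINES = {"heart_rate": 80, "resp_rate": 16, "lactate": 1.0,
             "temp_c": 37.0, "wbc_count": 7.0}
ALERT_THRESHOLD = 1.0

def risk_score(vitals):
    # Score grows with deviation from a (made-up) healthy baseline.
    return sum(WEIGHTS[k] * abs(vitals[k] - BASELINES[k]) for k in WEIGHTS)

def should_alert(vitals):
    # Notify nursing staff only when the combined deviation is large.
    return risk_score(vitals) >= ALERT_THRESHOLD

stable = {"heart_rate": 82, "resp_rate": 15, "lactate": 1.1,
          "temp_c": 36.9, "wbc_count": 7.5}
deteriorating = {"heart_rate": 128, "resp_rate": 30, "lactate": 4.2,
                 "temp_c": 39.2, "wbc_count": 17.0}

print(should_alert(stable), should_alert(deteriorating))  # False True
```

    The point Nemati makes about suppressing alerts when other conditions explain the pattern corresponds, in this toy, to only firing when the combined evidence clears the threshold rather than on any single abnormal value.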
    The study examined more than 6,000 patient admissions before and after COMPOSER was deployed in the emergency departments at UC San Diego Medical Center in Hillcrest and at Jacobs Medical Center in La Jolla.

    It is the first study to report an improvement in patient outcomes from an AI deep-learning model, one that uses artificial neural networks to safely and correctly identify health concerns in patients. The model is able to identify complex and multiple risk factors, which are then reviewed by the health care team for confirmation.
    “It is because of this AI model that our teams can provide life-saving therapy for patients quicker,” said Wardi, emergency medicine and critical care physician at UC San Diego Health.
    COMPOSER was activated in December 2022 and is now also being utilized in many hospital in-patient units throughout UC San Diego Health. It will soon be activated at the health system’s newest location, UC San Diego Health East Campus.
    UC San Diego Health, the region’s only academic medical system, is a pioneer in the field of AI health care, with a recent announcement of its inaugural chief health AI officer and opening of the Joan and Irwin Jacobs Center for Health Innovation at UC San Diego Health, which seeks to develop sophisticated and advanced solutions in health care.
    Additionally, the health system recently launched a pilot in which Epic, a cloud-based electronic health record system, and Microsoft’s generative AI integration automatically drafts more compassionate message responses through ChatGPT, alleviating this additional step from doctors and caregivers so they can focus on patient care.
    “Integration of AI technology in the electronic health record is helping to deliver on the promise of digital health, and UC San Diego Health has been a leader in this space to ensure AI-powered solutions support high reliability in patient safety and quality health care,” said study co-author Christopher Longhurst, MD, executive director of the Jacobs Center for Health Innovation, and chief medical officer and chief digital officer at UC San Diego Health.
    Co-authors of this study include Aaron Boussina, Theodore Chan, Allison Donahue, Robert El-Kareh, Atul Malhotra, Robert Owens, Kimberly Quintero and Supreeth Shashikumar, all at UC San Diego.


    Health researchers develop software to predict diseases

    IntelliGenes, a first-of-its-kind software tool created at Rutgers Health, combines artificial intelligence (AI) and machine-learning approaches to measure the significance of specific genomic biomarkers to help predict diseases in individuals, according to its developers.
    A study published in Bioinformatics explains how IntelliGenes can be utilized by a wide range of users to analyze multigenomic and clinical data.
    Zeeshan Ahmed, lead author of the study and a faculty member at Rutgers Institute for Health, Health Care Policy and Aging Research (IFH), said there currently are no AI or machine-learning tools available to investigate and interpret the complete human genome, especially for nonexperts. Ahmed and members of his Rutgers lab designed IntelliGenes so anyone can use the platform, including students or those without strong knowledge of bioinformatics techniques or access to high-performance computers.
    The software combines conventional statistical methods with cutting-edge machine learning algorithms to produce personalized patient predictions and a visual representation of the biomarkers significant to disease prediction.
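    One conventional statistical ingredient for scoring biomarker significance is a two-sample t-statistic comparing a gene's expression in cases versus controls; genes are then ranked by how well they separate the groups. The sketch below is a generic illustration with invented data, not IntelliGenes' actual pipeline:

```python
import math

def t_statistic(cases, controls):
    """Welch t-statistic: how separated expression is between groups."""
    def mean_var(xs):
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
        return m, v
    m1, v1 = mean_var(cases)
    m2, v2 = mean_var(controls)
    return (m1 - m2) / math.sqrt(v1 / len(cases) + v2 / len(controls))

# Invented expression values (cases, controls) for hypothetical genes.
expression = {
    "GENE_A": ([5.1, 5.3, 5.0, 5.2], [2.0, 2.2, 1.9, 2.1]),  # separated
    "GENE_B": ([3.0, 3.4, 2.8, 3.1], [2.9, 3.2, 3.0, 3.3]),  # overlapping
}

# Rank biomarkers by the magnitude of their t-statistic.
ranked = sorted(expression,
                key=lambda g: abs(t_statistic(*expression[g])),
                reverse=True)
print(ranked)  # GENE_A ranks above GENE_B
```

    In practice a tool like this would also feed such per-gene scores, alongside machine-learning feature importances, into the visualizations the article describes.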
    In another study, published in Scientific Reports, the researchers applied IntelliGenes to discover novel biomarkers and predict cardiovascular disease with high accuracy.
    “There is huge potential in the convergence of datasets and the staggering developments in artificial intelligence and machine learning,” said Ahmed, who also is an assistant professor of medicine at Robert Wood Johnson Medical School.
    “IntelliGenes can support personalized early detection of common and rare diseases in individuals, as well as open avenues for broader research ultimately leading to new interventions and treatments.”
    Researchers tested the software using Amarel, the high-performance computing cluster managed by the Rutgers Office of Advanced Research Computing. The office provides a research computing and data environment for Rutgers researchers engaged in complex computational and data-intensive projects.
    Coauthors of the study include William DeGroat, Dinesh Mendhe, Atharva Bhusari and Habiba Abdelhalim of IFH and Saman Zeeshan of Rutgers Cancer Institute of New Jersey.


    Research team breaks down musical instincts with AI

    Music, often referred to as the universal language, is a common component of all cultures. Could ‘musical instinct’, then, be shared to some degree despite the extensive environmental differences among cultures?
    On January 16, a KAIST research team led by Professor Hawoong Jung of the Department of Physics announced that it had used an artificial neural network model to identify the principle by which musical instincts emerge in the human brain without special learning.
    Previously, many researchers have attempted to identify the similarities and differences between the music that exists in various cultures, and to understand the origin of its universality. A paper published in Science in 2019 revealed that music is produced in all ethnographically distinct cultures, and that similar forms of beats and tunes are used. Neuroscientists have also found that a specific part of the human brain, namely the auditory cortex, is responsible for processing musical information.
    Professor Jung’s team used an artificial neural network model to show that cognitive functions for music form spontaneously as a result of processing auditory information received from nature, without the network being taught music. The research team utilized AudioSet, a large-scale collection of sound data provided by Google, and trained the artificial neural network on its varied sounds. Interestingly, the team discovered that certain neurons within the network model would respond selectively to music. In other words, they observed the spontaneous emergence of neurons that reacted minimally to various other sounds, like those of animals, nature, or machines, but showed high levels of response to various forms of music, both instrumental and vocal.
    The neurons in the artificial neural network model showed reactive behaviours similar to those in the auditory cortex of a real brain. For example, the artificial neurons responded less to music that had been cropped into short intervals and rearranged, indicating that the spontaneously generated music-selective neurons encode the temporal structure of music. This property was not limited to a specific genre, but emerged across 25 different genres including classical, pop, rock, jazz, and electronic music.
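    Selectivity of this kind is commonly summarized by an index comparing a unit's average response to music clips against its average response to other sounds. The responses below are invented for illustration; the paper's exact metric may differ:

```python
# Hypothetical selectivity index: +1 means the unit responds only to
# music, 0 means no preference, -1 means only to non-music sounds.
def music_selectivity(music_responses, other_responses):
    m = sum(music_responses) / len(music_responses)
    o = sum(other_responses) / len(other_responses)
    return (m - o) / (m + o)

# Invented unit responses to music clips vs. animal/nature/machine sounds.
music_neuron = music_selectivity([0.9, 0.8, 0.95], [0.1, 0.05, 0.15])
generic_neuron = music_selectivity([0.5, 0.6, 0.55], [0.55, 0.5, 0.6])

print(round(music_neuron, 2), round(generic_neuron, 2))
```

    A unit scoring near +1 on such an index is what the team would call music-selective, while a unit near 0 responds indiscriminately.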
    Furthermore, suppressing the activity of the music-selective neurons was found to greatly impair the network’s accuracy in recognizing other natural sounds. That is to say, the neural function that processes musical information helps process other sounds, and ‘musical ability’ may be an instinct formed as a result of an evolutionary adaptation acquired to better process sounds from nature.
    Professor Hawoong Jung, who advised the research, said, “The results of our study imply that evolutionary pressure has contributed to forming the universal basis for processing musical information in various cultures.” As for the significance of the research, he explained, “We look forward to this artificially built model with human-like musicality becoming an original model for various applications, including AI music generation, musical therapy, and research in musical cognition.” He also commented on its limitations, adding, “This research does not, however, take into consideration the developmental process that follows the learning of music, and it must be noted that this is a study on the foundation of processing musical information in early development.”
    This research, conducted by first author Dr. Gwangsu Kim of the KAIST Department of Physics (current affiliation: MIT Department of Brain and Cognitive Sciences) and Dr. Dong-Kyum Kim (current affiliation: IBS) was published in Nature Communications under the title, “Spontaneous emergence of rudimentary music detectors in deep neural networks.”
    This research was supported by the National Research Foundation of Korea.