More stories

  • New AI technology integrates multiple data types to predict cancer outcomes

    While it’s long been understood that predicting outcomes in patients with cancer requires considering many factors, such as patient history, genes and disease pathology, clinicians struggle to integrate this information when making decisions about patient care. A new study from researchers in the Mahmood Lab at Brigham and Women’s Hospital reveals a proof-of-concept model that uses artificial intelligence (AI) to combine multiple types of data from different sources to predict patient outcomes for 14 different types of cancer. The results are published in Cancer Cell.
    Experts depend on several sources of data, like genomic sequencing, pathology, and patient history, to diagnose and prognosticate different types of cancer. While existing technology enables them to use this information to predict outcomes, manually integrating data from different sources is challenging and experts often find themselves making subjective assessments.
    “Experts analyze many pieces of evidence to predict how well a patient may do,” said Faisal Mahmood, PhD, an assistant professor in the Division of Computational Pathology at the Brigham and associate member of the Cancer Program at the Broad Institute of Harvard and MIT. “These early examinations become the basis of making decisions about enrolling in a clinical trial or specific treatment regimens. But that means that this multimodal prediction happens at the level of the expert. We’re trying to address the problem computationally.”
    Through these new AI models, Mahmood and colleagues developed a means of integrating several forms of diagnostic information computationally to yield more accurate outcome predictions. The models make prognostic determinations while also revealing which features drive the predicted patient risk — a property that could be used to uncover new biomarkers.
    Researchers built the models using The Cancer Genome Atlas (TCGA), a publicly available resource containing data on many different types of cancer. They then developed a multimodal deep learning algorithm capable of learning prognostic information from multiple data sources. By first creating separate models for histology and genomic data, they could fuse the two into one integrated model that provides key prognostic information. Finally, they evaluated the model’s efficacy by feeding it histology and genomic data from patients across 14 cancer types. The results demonstrated that the models yielded more accurate patient outcome predictions than models incorporating only a single source of information.
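    As a rough sketch of how such late fusion can be wired up, the toy PyTorch module below encodes histology and genomic feature vectors separately and concatenates them before a shared risk head; the layer sizes, concatenation-based fusion, and module names are illustrative assumptions, not the architecture published in Cancer Cell.
      # Minimal late-fusion sketch (illustrative only; sizes and fusion scheme
      # are assumptions, not the published model).
      import torch
      import torch.nn as nn

      class MultimodalRiskModel(nn.Module):
          def __init__(self, histology_dim=1024, genomic_dim=200, hidden=128):
              super().__init__()
              # One encoder per modality.
              self.histology_encoder = nn.Sequential(nn.Linear(histology_dim, hidden), nn.ReLU())
              self.genomic_encoder = nn.Sequential(nn.Linear(genomic_dim, hidden), nn.ReLU())
              # Fusion head maps the combined representation to a single risk score.
              self.risk_head = nn.Linear(2 * hidden, 1)

          def forward(self, histology_features, genomic_features):
              h = self.histology_encoder(histology_features)
              g = self.genomic_encoder(genomic_features)
              fused = torch.cat([h, g], dim=-1)   # late fusion by concatenation
              return self.risk_head(fused)        # higher output = higher predicted risk

      model = MultimodalRiskModel()
      risk = model(torch.randn(4, 1024), torch.randn(4, 200))  # a batch of 4 patients
      print(risk.shape)  # torch.Size([4, 1])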
    This study highlights that using AI to integrate different types of clinically informed data to predict disease outcomes is feasible. Mahmood explained that these models could allow researchers to discover biomarkers that incorporate different clinical factors and better understand what type of information they need to diagnose different types of cancer. The researchers also quantitatively studied the importance of each diagnostic modality for individual cancer types and the benefit of integrating multiple modalities.
    The AI models are also capable of elucidating pathologic and genomic features that drive prognostic predictions. The team found that the models used patient immune responses as a prognostic marker without being trained to do so, a notable finding given that previous research shows that patients whose tumors elicit stronger immune responses tend to experience better outcomes.
    While this proof-of-concept model reveals a newfound role for AI technology in cancer care, this research is only a first step in implementing these models clinically. Applying these models in the clinic requires incorporating larger data sets and validating on large independent test cohorts. Going forward, Mahmood aims to integrate even more types of patient information, such as radiology scans, family histories, and electronic medical records, and eventually bring the model to clinical trials.
    “This work sets the stage for larger health care AI studies that combine data from multiple sources,” said Mahmood. “In a broader sense, our findings emphasize a need for building computational pathology prognostic models with much larger datasets and downstream clinical trials to establish utility.”
    Story Source:
    Materials provided by Brigham and Women’s Hospital. Note: Content may be edited for style and length.

  • Artificial intelligence tools predict DNA's regulatory role and 3D structure

    Newly developed artificial intelligence (AI) programs accurately predicted the role of DNA’s regulatory elements and its three-dimensional (3D) structure based solely on its raw sequence, according to two recent studies in Nature Genetics. These tools could eventually shed new light on how genetic mutations lead to disease and on how genetic sequence influences the spatial organization and function of chromosomal DNA in the nucleus, said study author Jian Zhou, Ph.D., Assistant Professor in the Lyda Hill Department of Bioinformatics at UT Southwestern.
    “Taken together, these two programs provide a more complete picture of how changes in DNA sequence, even in noncoding regions, can have dramatic effects on its spatial organization and function,” said Dr. Zhou, a member of the Harold C. Simmons Comprehensive Cancer Center, a Lupe Murchison Foundation Scholar in Medical Research, and a Cancer Prevention and Research Institute of Texas (CPRIT) Scholar.
    Only about 1% of human DNA encodes instructions for making proteins. Research in recent decades has shown that much of the remaining noncoding genetic material holds regulatory elements — such as promoters, enhancers, silencers, and insulators — that control how the coding DNA is expressed. How sequence controls the functions of most of these regulatory elements is not well understood, Dr. Zhou explained.
    To better understand these regulatory components, he and colleagues at Princeton University and the Flatiron Institute developed a deep learning model they named Sei, which accurately sorts these snippets of noncoding DNA into 40 “sequence classes” or jobs — for example, as an enhancer for stem cell or brain cell gene activity. These 40 sequence classes, developed using nearly 22,000 data sets from previous studies of genome regulation, cover more than 97% of the human genome. Moreover, Sei can score any sequence by its predicted activity in each of the 40 sequence classes and predict how mutations impact such activities.
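    In schematic terms, scoring a mutation with a sequence-class model amounts to scoring the reference and mutated windows and comparing the two class-score vectors. The sketch below illustrates only that idea; SequenceClassModel, its predict method, and the 4,096-base window are hypothetical stand-ins, not Sei’s actual interface.
      # Schematic of variant-effect scoring with a sequence-class model.
      # "SequenceClassModel" is a toy stand-in, not the published Sei API.
      import zlib
      import numpy as np

      class SequenceClassModel:
          """Toy model mapping a DNA window to scores for 40 sequence classes."""
          def __init__(self, n_classes=40):
              self.n_classes = n_classes

          def predict(self, sequence):
              # A real model would run a neural network here; deterministic
              # pseudo-scores keep the example self-contained.
              seed = zlib.crc32(sequence.encode())
              return np.random.default_rng(seed).random(self.n_classes)

      def variant_effect(model, ref_seq, pos, alt_base):
          """Score a single-nucleotide variant as the change in class scores."""
          alt_seq = ref_seq[:pos] + alt_base + ref_seq[pos + 1:]
          return model.predict(alt_seq) - model.predict(ref_seq)

      model = SequenceClassModel()
      ref = "ACGT" * 1024                          # illustrative 4,096-base window
      delta = variant_effect(model, ref, pos=2048, alt_base="G")
      print("most affected sequence class:", int(np.argmax(np.abs(delta))))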
    By applying Sei to human genetics data, the researchers were able to characterize the regulatory architecture of 47 traits and diseases recorded in the UK Biobank database and explain how mutations in regulatory elements cause specific pathologies. Such capabilities can help researchers gain a more systematic understanding of how genomic sequence changes are linked to diseases and other traits. The findings were published this month.
    In May, Dr. Zhou reported the development of a different tool, called Orca, which predicts the 3D architecture of DNA in chromosomes based on its sequence. Using existing data sets of DNA sequences and structural data derived from previous studies that revealed the molecule’s folds, twists, and turns, Dr. Zhou trained the model to make connections and evaluated the model’s ability to predict structure at various length scales.
    The findings showed that Orca predicted DNA structures both small and large based on their sequences with high accuracy, including for sequences carrying mutations associated with various health conditions including a form of leukemia and limb malformations. Orca also enabled the researchers to generate new hypotheses about how DNA sequence controls its local and large-scale 3D structure.
    Dr. Zhou said that he and his colleagues plan to use Sei and Orca, which are both publicly available on web servers and as open-source code, to further explore the role of genetic mutations in causing the molecular and physical manifestations of diseases — research that could eventually lead to new ways to treat these conditions.
    The Orca study was supported by grants from CPRIT (RR190071), the National Institutes of Health (DP2GM146336), and the UT Southwestern Endowed Scholars Program in Medical Science.
    Story Source:
    Materials provided by UT Southwestern Medical Center. Note: Content may be edited for style and length.

  • Researchers discover major roadblock in alleviating network congestion

    When users want to send data over the internet faster than the network can handle, congestion can occur — the same way traffic congestion snarls the morning commute into a big city.
    Computers and devices that transmit data over the internet break the data down into smaller packets and use a special algorithm to decide how fast to send those packets. These congestion control algorithms seek to discover and fully utilize the available network capacity while sharing it fairly with other users on the same network, all while minimizing the delay caused by data waiting in queues in the network.
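    The flavor of a delay-controlling sender can be captured in a few lines: raise the sending rate while the measured queueing delay stays below a target, and back off when it grows. The toy update rule below, with made-up parameters, is only an illustration of this class of algorithms; it is not BBR or any algorithm analyzed in the study.
      # Toy delay-based rate controller (illustrative only).
      def update_rate(rate_mbps, measured_rtt_ms, base_rtt_ms,
                      target_queue_delay_ms=5.0, step_mbps=1.0, backoff=0.85):
          """Additive increase while queueing delay is low, multiplicative backoff otherwise."""
          queueing_delay = measured_rtt_ms - base_rtt_ms   # delay added by queues
          if queueing_delay <= target_queue_delay_ms:
              return rate_mbps + step_mbps                 # additive increase
          return rate_mbps * backoff                       # multiplicative decrease

      rate = 10.0
      for rtt in [20.0, 21.0, 23.0, 40.0, 26.0]:           # simulated RTT samples in ms
          rate = update_rate(rate, measured_rtt_ms=rtt, base_rtt_ms=20.0)
          print(f"rtt={rtt:5.1f} ms -> rate={rate:6.2f} Mbps")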
    Over the past decade, researchers in industry and academia have developed several algorithms that attempt to achieve high rates while controlling delays. Some of these, such as the BBR algorithm developed by Google, are now widely used by many websites and applications.
    But a team of MIT researchers has discovered that these algorithms can be deeply unfair. In a new study, they show that there will always be a network scenario in which at least one sender receives almost no bandwidth compared with other senders; this problem, known as starvation, cannot be avoided.
    “What is really surprising about this paper and the results is that when you take into account the real-world complexity of network paths and all the things they can do to data packets, it is basically impossible for delay-controlling congestion control algorithms to avoid starvation using current methods,” says Mohammad Alizadeh, associate professor of electrical engineering and computer science (EECS).
    While Alizadeh and his co-authors weren’t able to find a traditional congestion control algorithm that could avoid starvation, there may be algorithms in a different class that could prevent this problem. Their analysis also suggests that changing how these algorithms work, so that they allow for larger variations in delay, could help prevent starvation in some network situations.

  • Smart microrobots learn how to swim and navigate with artificial intelligence

    Researchers from Santa Clara University, New Jersey Institute of Technology and the University of Hong Kong have been able to successfully teach microrobots how to swim via deep reinforcement learning, marking a substantial leap in the progression of microswimming capability.
    There has been tremendous interest in developing artificial microswimmers that can navigate the world similarly to naturally occurring swimming microorganisms, like bacteria. Such microswimmers hold promise for a vast array of future biomedical applications, such as targeted drug delivery and microsurgery. Yet most artificial microswimmers to date can only perform relatively simple maneuvers with fixed locomotory gaits.
    In their study, published in Communications Physics, the researchers reasoned that microswimmers could learn — and adapt to changing conditions — through AI. Much as humans learning to swim rely on feedback and reinforcement to stay afloat and propel themselves in various directions under changing conditions, so must microswimmers, though they face a unique set of challenges imposed by the physics of the microscopic world.
    “Being able to swim at the micro-scale by itself is a challenging task,” said On Shun Pak, associate professor of mechanical engineering at Santa Clara University. “When you want a microswimmer to perform more sophisticated maneuvers, the design of their locomotory gaits can quickly become intractable.”
    By combining artificial neural networks with reinforcement learning, the team successfully taught a simple microswimmer to swim and navigate toward any arbitrary direction. When the swimmer moves in certain ways, it receives feedback on how good the particular action is. The swimmer then progressively learns how to swim based on its experiences interacting with the surrounding environment.
    “Similar to a human learning how to swim, the microswimmer learns how to move its ‘body parts’ — in this case three microparticles and extensible links — to self-propel and turn,” said Alan Tsang, assistant professor of mechanical engineering at the University of Hong Kong. “It does so without relying on human knowledge but only on a machine learning algorithm.”
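    To give a feel for such a training loop, the sketch below runs tabular Q-learning on a toy swimmer with two extensible links (three beads), where a hand-coded reward for the four moves of a non-reciprocal stroke cycle stands in for the net displacement that hydrodynamics would provide; the environment, reward, and tabular method are simplified assumptions, not the deep reinforcement learning setup or physical model used in the study.
      # Toy Q-learning loop for a two-link swimmer (illustrative skeleton only).
      import random
      from collections import defaultdict

      ACTIONS = [("left", 0), ("left", 1), ("right", 0), ("right", 1)]  # set a link contracted/extended

      # The four moves of the forward (non-reciprocal) stroke cycle earn a positive
      # reward; everything else earns a penalty. This is a crude stand-in for the
      # net displacement computed from low-Reynolds-number hydrodynamics.
      FORWARD_CYCLE = {((0, 0), ("left", 1)), ((1, 0), ("right", 1)),
                       ((1, 1), ("left", 0)), ((0, 1), ("right", 0))}

      def step(state, action):
          left, right = state
          link, value = action
          new_state = (value, right) if link == "left" else (left, value)
          if new_state == state:
              return new_state, -0.1                    # wasted move
          reward = 1.0 if (state, action) in FORWARD_CYCLE else -0.5
          return new_state, reward

      Q = defaultdict(float)
      alpha, gamma, eps = 0.1, 0.9, 0.2
      state = (0, 0)
      for t in range(5000):
          action = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda a: Q[(state, a)])
          next_state, reward = step(state, action)
          best_next = max(Q[(next_state, a)] for a in ACTIONS)
          Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
          state = next_state

      print("learned greedy action from state (0, 0):", max(ACTIONS, key=lambda a: Q[((0, 0), a)]))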
    The AI-powered swimmer is able to switch between different locomotory gaits adaptively to navigate toward any target location on its own.
    As a demonstration of the powerful ability of the swimmer, the researchers showed that it could follow a complex path without being explicitly programmed. They also demonstrated the robust performance of the swimmer in navigating under the perturbations arising from external fluid flows.
    “This is our first step in tackling the challenge of developing microswimmers that can adapt like biological cells in navigating complex environments autonomously,” said Yuan-nan Young, professor of mathematical sciences at New Jersey Institute of Technology.
    Such adaptive behaviors are crucial for future biomedical applications of artificial microswimmers in complex media with uncontrolled and unpredictable environmental factors.
    “This work is a key example of how the rapid development of artificial intelligence may be exploited to tackle unresolved challenges in locomotion problems in fluid dynamics,” said Arnold Mathijssen, an expert on microrobots and biophysics at the University of Pennsylvania, who was not involved in the research. “The integration between machine learning and microswimmers in this work will spark further connections between these two highly active research areas.”
    Story Source:
    Materials provided by New Jersey Institute of Technology. Note: Content may be edited for style and length.

  • Optimizing SWAP networks for quantum computing

    A research partnership at the Advanced Quantum Testbed (AQT) at Lawrence Berkeley National Laboratory (Berkeley Lab) and Chicago-based Super.tech (acquired by ColdQuanta in May 2022) demonstrated how to optimize the execution of the ZZ SWAP network protocol, important to quantum computing. The team also introduced a new technique for quantum error mitigation that will improve the network protocol’s implementation in quantum processors. The experimental data was published this July in Physical Review Research, adding more pathways in the near term to implement quantum algorithms using gate-based quantum computing.
    A Smart Compiler for Superconducting Quantum Hardware
    Quantum processors with two- or three-dimensional architectures have limited qubit connectivity, with each qubit interacting with only a small number of other qubits. Furthermore, each qubit’s information can only exist for so long before noise and errors cause decoherence, limiting the runtime and fidelity of quantum algorithms. Therefore, when designing and executing a quantum circuit, researchers must optimize the translation of the circuit made up of abstract (logical) gates to physical instructions based on the native hardware gates available in a given quantum processor. Efficient circuit decompositions minimize the operating time because they consider the number of gates and operations natively supported by the hardware to perform the desired logical operations.
    SWAP gates — which swap information between qubits — are often introduced in quantum circuits to facilitate interactions between information in non-adjacent qubits. If a quantum device only allows gates between adjacent qubits, swaps are used to move information from one qubit to another non-adjacent qubit.
    In noisy intermediate-scale quantum (NISQ) hardware, introducing swap gates can require a large experimental overhead. The swap gate must often be decomposed into native gates, such as controlled-NOT gates. Therefore, when designing quantum circuits with limited qubit connectivity, it is important to use a smart compiler that can search for, decompose, and cancel redundant quantum gates to improve the runtime of a quantum algorithm or application.
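    The decomposition overhead is easy to see in the textbook identity that a single SWAP gate equals three alternating controlled-NOT gates. The short numerical check below verifies that identity; it is a generic illustration, not output from the team’s compiler.
      # Verify that SWAP equals three alternating CNOTs (standard identity).
      import numpy as np

      # Two-qubit gates in the computational basis |q0 q1>.
      CNOT_01 = np.array([[1, 0, 0, 0],   # control q0, target q1
                          [0, 1, 0, 0],
                          [0, 0, 0, 1],
                          [0, 0, 1, 0]])
      CNOT_10 = np.array([[1, 0, 0, 0],   # control q1, target q0
                          [0, 0, 0, 1],
                          [0, 0, 1, 0],
                          [0, 1, 0, 0]])
      SWAP = np.array([[1, 0, 0, 0],
                       [0, 0, 1, 0],
                       [0, 1, 0, 0],
                       [0, 0, 0, 1]])

      # Circuit order CNOT_01, CNOT_10, CNOT_01 corresponds to right-to-left
      # matrix multiplication.
      assert np.array_equal(CNOT_01 @ CNOT_10 @ CNOT_01, SWAP)
      print("SWAP == CNOT_01 . CNOT_10 . CNOT_01: verified")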
    The research partnership used Super.tech’s SuperstaQ software, which enables scientists to finely tailor their applications and automate the compilation of circuits for AQT’s superconducting hardware, particularly for a native high-fidelity controlled-S gate that is not available on most hardware systems. This smart compiling approach with four transmon qubits allows the SWAP networks to be decomposed more efficiently than with standard decomposition methods.

  • New chip-based beam steering device lays groundwork for smaller, cheaper lidar

    Researchers have developed a new chip-based beam steering technology that provides a promising route to small, cost-effective and high-performance lidar (or light detection and ranging) systems. Lidar, which uses laser pulses to acquire 3D information about a scene or object, is used in a wide range of applications such as autonomous driving, free-space optical communications, 3D holography, biomedical sensing and virtual reality.
    “Optical beam steering is a key technology for lidar systems, but conventional mechanical-based beam steering systems are bulky, expensive, sensitive to vibration and limited in speed,” said research team leader Hao Hu from the Technical University of Denmark. “Although devices known as chip-based optical phased arrays (OPAs) can quickly and precisely steer light in a non-mechanical way, so far, these devices have had poor beam quality and a field of view typically below 100 degrees.”
    In Optica, Optica Publishing Group’s journal for high-impact research, Hu and co-author Yong Liu describe their new chip-based OPA that solves many of the problems that have plagued OPAs. They show that the device can eliminate a key optical artifact known as aliasing, achieving beam steering over a large field of view while maintaining high beam quality, a combination that could greatly improve lidar systems.
    “We believe our results are groundbreaking in the field of optical beam steering,” said Hu. “This development lays the groundwork for OPA-based lidar that is low cost and compact, which would allow lidar to be widely used for a variety of applications such as high-level advanced driver-assistance systems that can assist in driving and parking and increase safety.”
    A new OPA design
    OPAs perform beam steering by electronically controlling light’s phase profile to form specific light patterns. Most OPAs use an array of waveguides to emit many beams of light, which then interfere in the far field (away from the emitter) to form the pattern. However, because these waveguide emitters are typically spaced far apart, they generate multiple beams in the far field, an optical artifact known as aliasing. To avoid the aliasing error and achieve a 180° field of view, the emitters need to be close together, but this causes strong crosstalk between adjacent emitters and degrades the beam quality. Thus, until now, there has been a trade-off between OPA field of view and beam quality.
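    The trade-off follows from standard phased-array relations: steering to an angle requires a phase step proportional to the emitter pitch, and grating lobes (the aliasing described above) appear whenever the pitch exceeds half a wavelength. The short calculation below illustrates these textbook relations with assumed numbers, not the parameters of the reported device.
      # Textbook phased-array relations behind the aliasing trade-off
      # (illustrative numbers only).
      import numpy as np

      wavelength = 1.55e-6           # 1550 nm, a typical telecom wavelength
      theta = np.deg2rad(30.0)       # desired steering angle

      for pitch in [0.5 * wavelength, 2.0 * wavelength]:
          # Phase step between adjacent emitters needed to steer to theta.
          phase_step = 2 * np.pi / wavelength * pitch * np.sin(theta)
          # Grating lobes appear at sin(theta_m) = sin(theta) + m * wavelength / pitch.
          m = np.arange(-5, 6)
          sines = np.sin(theta) + m * wavelength / pitch
          lobes = np.rad2deg(np.arcsin(sines[np.abs(sines) <= 1]))
          print(f"pitch = {pitch / wavelength:.1f} wavelengths: "
                f"phase step = {phase_step:.2f} rad, beam angles = {np.round(lobes, 1)} deg")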
    To overcome this trade-off, the researchers designed a new type of OPA that replaces the multiple emitters of traditional OPAs with a slab grating to create a single emitter. This setup eliminates the aliasing error because the adjacent channels in the slab grating can be very close to each other. The coupling between the adjacent channels is not detrimental in the slab grating because it enables the interference and beam formation in the near field (close to the single emitter). The light can then be emitted to the far field with the desired angle. The researchers also applied additional optical techniques to lower the background noise and reduce other optical artifacts such as side lobes.
    High quality and wide field of view
    To test their new device, the researchers built a special imaging system to measure the average far-field optical power along the horizontal direction over a 180° field of view. They demonstrated aliasing-free beam steering in this direction, including steering beyond ±70°, although some beam degradation was seen.
    They then characterized beam steering in the vertical direction by tuning the wavelength from 1480 nm to 1580 nm, achieving a 13.5° tuning range. Finally, they showed the versatility of the OPA by using it to form 2D images of the letters “D,” “T” and “U” centered at the angles of -60°, 0° and 60° by tuning both the wavelength and the phase shifters. The experiments were performed with a beam width of 2.1°, which the researchers are now working to decrease to achieve beam steering with a higher resolution and a longer range.
    “Our new chip-based OPA shows an unprecedented performance and overcomes the long-standing issues of OPAs by simultaneously achieving aliasing-free 2D beam steering over the entire 180° field of view and high beam quality with a low side lobe level,” said Hu.
    This work is funded by VILLUM FONDEN and Innovationsfonden Denmark.
    Story Source:
    Materials provided by Optica. Note: Content may be edited for style and length.

  • Pairing imaging, AI may improve colon cancer screening, diagnosis

    A research team from the lab of Quing Zhu, the Edwin H. Murty Professor of Engineering in the Department of Biomedical Engineering at the McKelvey School of Engineering at Washington University in St. Louis, has combined optical coherence tomography (OCT) and machine learning to develop a colorectal cancer imaging tool that may one day improve the traditional endoscopy currently used by doctors.
    The results were published in the June issue of the Journal of Biophotonics.
    Screening for colon cancer now relies on human visual inspection of tissue during a colonoscopy procedure. This technique, however, does not detect and diagnose subsurface lesions.
    Endoscopic OCT essentially shines a light into the colon to help a clinician see deeper in order to visualize and diagnose abnormalities. Collaborating with physicians at Washington University School of Medicine and with Chao Zhou, associate professor of biomedical engineering, the team developed a small OCT catheter that uses a longer wavelength of light to penetrate 1-2 mm into tissue samples.
    Hongbo Luo, a PhD student in Zhu’s lab, led the work.
    The technique provided more information about an abnormality than surface-level, white-light images currently used by physicians. Shuying Li, a biomedical engineering PhD student, used the imaging data to train a machine learning algorithm to differentiate between “normal” and “cancerous” tissue. The combined system allowed them to detect and classify cancerous tissue samples with a 93% diagnostic accuracy.
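    As a generic illustration of this kind of pipeline, the sketch below trains an off-the-shelf classifier on placeholder feature vectors labeled normal or cancerous and reports held-out accuracy; the synthetic data and random-forest model are assumptions for illustration only, not the architecture or dataset behind the reported 93% figure.
      # Generic "features -> normal/cancerous" classification sketch
      # (placeholder data and model, illustration only).
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import accuracy_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n_samples, n_features = 400, 32                  # e.g., 32 features per OCT image patch
      X = rng.normal(size=(n_samples, n_features))
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_samples) > 0).astype(int)
      # y = 1 stands in for "cancerous", y = 0 for "normal" tissue labels.

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
      print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))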
    Zhu is also a professor of radiology at the School of Medicine. Working with Vladimir Kushnir and Vladimir Lamm at the School of Medicine, Zhu’s team of PhD students, including Tiger Nie, started a trial in patients in July 2022.
    Story Source:
    Materials provided by Washington University in St. Louis. Original written by Brandie Jefferson. Note: Content may be edited for style and length.

  • Proteins and natural language: Artificial intelligence enables the design of novel proteins

    Artificial intelligence (AI) has created new possibilities for designing tailor-made proteins to solve everything from medical to ecological problems. A research team at the University of Bayreuth led by Prof. Dr. Birte Höcker has now successfully applied a computer-based natural language processing model to protein research. Working completely independently, the ProtGPT2 model designs new proteins that are capable of stable folding and could perform defined functions in larger molecular contexts. The model and its potential are detailed in Nature Communications.
    Natural languages and proteins are actually similar in structure. Amino acids arrange themselves in a multitude of combinations to form structures that have specific functions in the living organism — similar to the way words form sentences in different combinations that express certain facts. In recent years, numerous approaches have therefore been developed to apply the principles and methods of computer-assisted natural language processing to protein research. “Natural language processing has made extraordinary progress thanks to new AI technologies. Today, models of language processing enable machines not only to understand meaningful sentences but also to generate them themselves. Such a model was the starting point of our research. Using detailed information on about 50 million sequences of natural proteins, my colleague Noelia Ferruz trained the model and enabled it to generate protein sequences on its own. It now understands the language of proteins and can use it creatively. We have found that these creative designs follow the basic principles of natural proteins,” says Prof. Dr. Birte Höcker, Head of the Protein Design Group at the University of Bayreuth.
    The language processing model transferred to protein evolution is called “ProtGPT2.” It can be used to design proteins that fold into stable structures and remain permanently functional in that state. In addition, the Bayreuth biochemists have found, through extensive investigations, that the model can even create proteins that do not occur in nature and have possibly never existed in the history of evolution. These findings shed light on the immeasurable world of possible proteins and open a door to designing them in novel and unexplored ways. There is a further advantage: Most proteins that have been designed de novo so far have idealised structures. Before such structures can find a practical application, they usually must pass through an elaborate functionalization process — for example, by inserting extensions and cavities — so that they can interact with their environment and take on precisely defined functions in larger system contexts. ProtGPT2, on the other hand, generates proteins that have such differentiated structures innately and are thus already operational in their respective environments.
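    Because the trained model is distributed openly, generating candidate sequences follows the usual text-generation workflow for transformer language models. The sketch below assumes the Hugging Face checkpoint name nferruz/ProtGPT2 and standard transformers calls; the checkpoint identifier and sampling settings are assumptions to check against the authors’ documentation.
      # Sampling protein sequences from a GPT-2-style protein language model.
      # Checkpoint name and sampling parameters are assumptions to verify.
      from transformers import pipeline

      generator = pipeline("text-generation", model="nferruz/ProtGPT2")
      samples = generator(
          "M",                       # seed the sequence with a starting residue
          max_length=120,            # generation length in model tokens
          do_sample=True,
          top_k=950,
          repetition_penalty=1.2,
          num_return_sequences=3,
      )
      for s in samples:
          print(s["generated_text"].replace("\n", ""))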
    “Our new model is another impressive demonstration of the systemic affinity of protein design and natural language processing. Artificial intelligence opens up highly interesting and promising possibilities to use methods of language processing for the production of customised proteins. At the University of Bayreuth, we hope to contribute in this way to developing innovative solutions for biomedical, pharmaceutical, and ecological problems,” says Prof. Dr. Birte Höcker.
    Story Source:
    Materials provided by Universität Bayreuth. Note: Content may be edited for style and length.