More stories

  • Software program allows simultaneous viewing of tissue images through dimensionality reduction

    Imaging of tissue specimens is an important aspect of translational research that bridges the gap between basic laboratory science and clinical science to improve the understanding of cancer and aid in the development of new therapies. To analyze images to their fullest potential, scientists ideally need an application that enables multiple images to be viewed simultaneously. In an article published in the journal Patterns, Moffitt Cancer Center researchers describe a new open-source software program they developed that allows users to view many multiplexed images simultaneously.
    There have been significant improvements in the approaches to study cancer over the past decade, including new techniques to study tissue samples. For example, machines can now be programmed to stain hundreds of slides simultaneously, or alternatively, up to 1,000 different tissue sample cores can be placed on a single slide and stained for biomarkers at the same time. With the advent of these approaches comes a wealth of possibilities to generate new data and information. Due to the magnitude of this information and the complex nature of cancer itself, computational modeling and software are needed to view and study the cancer biomarkers, tissue architecture, and cellular interactions among these samples.
    As researchers in Moffitt’s Integrated Mathematical Oncology Department (IMO) were working on a project, they realized that the currently available software for image viewing was not amenable to their needs.
    “We were interested in understanding the underlying spatial patterns between tumor and immune cells and how the tumors were organized. This required us to compare multiple images simultaneously and we realized there was no software, free or commercial, enabling this,” said Sandhya Prabhakaran, Ph.D., lead author and applied research scientist at Moffitt.
    The IMO team decided to create a software program that would enable them to view multiple images at the same time and extract data through additional analyses that could be used for a variety of purposes, including identifying biomarkers and understanding tissue architecture and the spatial organization of different cell types. Their program, called Mistic, takes information from multidimensional images and uses a dimensionality reduction method called t-distributed stochastic neighbor embedding (t-SNE) to abstract each image to a point in reduced space. Mistic is open-source software that can be used with images from Vectra, CyCIF, t-CyCIF and CODEX.
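    To make the idea concrete, the sketch below shows how a collection of multiplexed images can each be reduced to a single 2D point with t-SNE. It is a minimal illustration using scikit-learn, not Mistic's own code, and it assumes each image has already been summarized as a feature vector (here, simply the mean intensity of each marker channel); the image count, size and channel number are hypothetical.

      # Minimal illustration of image t-SNE (not Mistic's implementation):
      # every multiplexed image becomes one feature vector, and t-SNE places
      # each image as a point in 2D so that images with similar marker
      # profiles land near each other.
      import numpy as np
      from sklearn.manifold import TSNE

      rng = np.random.default_rng(0)

      # Hypothetical stack: 92 multiplexed images, 64x64 pixels, 6 marker channels each.
      images = rng.random((92, 64, 64, 6))

      # Summarize each image as one feature vector: mean intensity per marker channel.
      features = images.mean(axis=(1, 2))            # shape (92, 6)

      # t-SNE abstracts each image to a single point in reduced (2D) space.
      embedding = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(features)
      print(embedding.shape)                         # (92, 2): one point per image
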
    In their publication, the researchers describe the creation of Mistic and some of the applications that it could be used for. For example, they demonstrated that the software could be used to view 92 images from patients with non-small cell lung cancer and deduce how biomarkers cluster across patients with different responses to treatment. In another example, the researchers used Mistic combined with statistical analysis to assess the spatial colocalization and coexpression of immune cell markers in 210 endometrial cancer samples.
    The team is excited about the potential applications for Mistic and has plans to improve the software.
    “We will enhance Mistic to use biologically meaningful regions of interest from the multiplexed image to render the overall image t-SNE. We also have plans to augment Mistic with other visualization software and build a cross-platform viewer plugin to improve the adoption, usability and functionality of Mistic in the biomedical research community,” said Sandy Anderson, Ph.D., author and chair of Moffitt’s IMO Department.
    In addition to Mistic, the journal Patterns featured the IMO team in a People of Data article titled “Developing tools for analyzing and viewing multiplexed images,” in which the team members introduce themselves and discuss their research passions and the challenges and opportunities relevant to imaging in mathematical oncology.

  • AI speeds sepsis detection to prevent hundreds of deaths

    Patients are 20% less likely to die of sepsis because a new AI system developed at Johns Hopkins University catches symptoms hours earlier than traditional methods, an extensive hospital study demonstrates. The system, created by a Johns Hopkins researcher whose young nephew died from sepsis, scours medical records and clinical notes to identify patients at risk of life-threatening complications. The work, which could significantly cut patient mortality from one of the top causes of hospital deaths worldwide, is published today in Nature Medicine and Nature Digital Medicine.
    “It is the first instance where AI is implemented at the bedside, used by thousands of providers, and where we’re seeing lives saved,” said Suchi Saria, founding research director of the Malone Center for Engineering in Healthcare at Johns Hopkins and lead author of the studies, which evaluated more than a half million patients over two years. “This is an extraordinary leap that will save thousands of sepsis patients annually. And the approach is now being applied to improve outcomes in other important problem areas beyond sepsis.” Sepsis occurs when an infection triggers a chain reaction throughout the body. Inflammation can lead to blood clots and leaking blood vessels, and ultimately can cause organ damage or organ failure. About 1.7 million adults develop sepsis every year in the United States and more than 250,000 of them die.
    Sepsis is easy to miss since symptoms such as fever and confusion are common in other conditions, Saria said. The faster it’s caught, the better a patient’s chances for survival. “One of the most effective ways of improving outcomes is early detection and giving the right treatments in a timely way, but historically this has been a difficult challenge due to lack of systems for accurate early identification,” said Saria, who directs the Machine Learning and Healthcare Lab at Johns Hopkins.
    To address the problem, Saria and other Johns Hopkins doctors and researchers developed the Targeted Real-Time Early Warning System. Combining a patient’s medical history with current symptoms and lab results, the machine-learning system shows clinicians when someone is at risk for sepsis and suggests treatment protocols, such as starting antibiotics. The AI tracks patients from when they arrive in the hospital through discharge, ensuring that critical information isn’t overlooked even if staff changes or a patient moves to a different department. During the study, more than 4,000 clinicians from five hospitals used the AI in treating 590,000 patients. The system also reviewed 173,931 previous patient cases. The AI caught 82% of sepsis cases and was accurate nearly 40% of the time.
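    The published TREWS system itself is not reproduced here, but the general shape of such a machine-learning early-warning tool can be sketched: train a classifier on routinely collected measurements, score each patient continuously as new data arrive, and alert above a tuned threshold. In the toy example below the features, labels and threshold are entirely hypothetical.

      # Toy sketch of a machine-learning early-warning score (NOT the TREWS model):
      # a classifier is trained on routine measurements and then applied to a
      # patient's current values to produce a sepsis risk estimate.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(42)

      # Hypothetical training data: heart rate, temperature, white-cell count, lactate.
      X = rng.normal(loc=[85, 37.0, 9.0, 1.5], scale=[15, 0.8, 3.0, 1.0], size=(1000, 4))
      # Synthetic labels loosely tied to the features, only to make the example run.
      y = ((X[:, 0] > 100) & (X[:, 3] > 2.0)).astype(int)

      model = LogisticRegression(max_iter=1000).fit(X, y)

      # Risk estimate for one hypothetical patient, recomputed whenever new data arrive.
      patient = np.array([[118, 38.6, 14.2, 3.1]])
      risk = model.predict_proba(patient)[0, 1]
      if risk > 0.7:   # alert threshold would be tuned clinically
          print(f"Sepsis risk {risk:.0%}: flag for clinician review")
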
    Previous attempts to use electronic tools to detect sepsis caught less than half that many cases and were accurate 2% to 5% of the time. All sepsis cases are eventually caught, but with the current standard of care, the condition kills 30% of the people who develop it. In the most severe cases, where an hour's delay can mean the difference between life and death, the AI detected sepsis an average of nearly six hours earlier than traditional methods. “This is a breakthrough in many ways,” said co-author Albert Wu, an internist and director of the Johns Hopkins Center for Health Services and Outcomes Research.
    “Up to this point, most of these types of systems have guessed wrong much more often than they get it right. Those false alarms undermine confidence.” Unlike conventional approaches, the system allows doctors to see why the tool is making specific recommendations. The work is extremely personal to Saria, who lost her nephew as a young adult to sepsis. “Sepsis develops very quickly and this is what happened in my nephew’s case,” she said. “When doctors detected it, he was already in septic shock.” Bayesian Health, a company spun off from Johns Hopkins, led and managed the deployment across all testing sites. The team also partnered with the two largest electronic health record system providers, Epic and Cerner, to ensure that the tool can be implemented at other hospitals. The team has adapted the technology to identify patients at risk for pressure injuries, commonly known as bed sores, and those at risk for sudden deterioration caused by bleeding, acute respiratory failure, and cardiac arrest.
    “The approach used here is foundationally different,” Saria said. “It’s adaptive and takes into consideration the diversity of the patient population, the unique ways in which doctors and nurses deliver care across different sites, and the unique characteristics of each health system, allowing it to be significantly more accurate and to gain provider trust and adoption.”
    Co-authors of the three studies in Nature Medicine and Nature Digital Medicine include Katharine Henry, Roy Adams, Cassandra Parent, David Hager, Edward Chen, Mustapha Saheed, and Albert Wu of Johns Hopkins University; Hossein Soleimani of University of California, San Francisco; Anirudh Sridharan of Howard County General Hospital; Lauren Johnson, Maureen Henley, Sheila Miranda, Katrina Houston, and Anushree Ahluwalia of The Johns Hopkins Hospital; Sara Cosgrove and Eili Klein of Johns Hopkins University School of Medicine; Andrew Markowski of Suburban Hospital; and Robert Linton of Howard County General Hospital.
    The work was funded by the Gordon and Betty Moore Foundation (No. 3926 and 3186.01), the National Science Foundation Future of Work at the Human-technology Frontier (No. 1840088), and the Alfred P. Sloan Foundation research fellowship (2018).
    Story Source:
    Materials provided by Johns Hopkins University. Original written by Laura Cech. Note: Content may be edited for style and length.

  • Quantum digits unlock more computational power with fewer quantum particles

    For decades computers have been synonymous with binary information — zeros and ones. Now a team at the University of Innsbruck, Austria, has realized a quantum computer that breaks out of this paradigm and unlocks additional computational resources hidden in almost all of today’s quantum devices.
    We all learn from early on that computers work with zeros and ones, also known as binary information. This approach has been so successful that computers now power everything from coffee machines to self-driving cars and it is hard to imagine a life without them.
    Building on this success, today’s quantum computers are also designed with binary information processing in mind. “The building blocks of quantum computers, however, are more than just zeros and ones,” explains Martin Ringbauer, an experimental physicist from Innsbruck, Austria. “Restricting them to binary systems prevents these devices from living up to their true potential.”
    The team led by Thomas Monz at the Department of Experimental Physics at the University of Innsbruck has now succeeded in developing a quantum computer that can perform arbitrary calculations with so-called quantum digits (qudits), thereby unlocking more computational power with fewer quantum particles.
    Quantum systems are different
    Although storing information in zeros and ones is not the most efficient way of doing calculations, it is the simplest way. Simple often also means reliable and robust to errors and so binary information has become the unchallenged standard for classical computers.
    In the quantum world, the situation is quite different. In the Innsbruck quantum computer, for example, information is stored in individual trapped calcium atoms. Each of these atoms naturally has eight different states, of which typically only two are used to store information. Indeed, almost all existing quantum computers have access to more quantum states than they use for computation.
    A natural approach for hardware and software
    The physicists from Innsbruck now developed a quantum computer that can make use of the full potential of these atoms, by computing with qudits. Contrary to the classical case, using more states does not make the computer less reliable. “Quantum systems naturally have more than just two states and we showed that we can control them all equally well,” says Thomas Monz.
    On the flip side, many of the tasks that require quantum computers, such as problems in physics, chemistry, or materials science, are also naturally expressed in the qudit language. Rewriting them for qubits can often make them too complicated for today’s quantum computers. “Working with more than zeros and ones is very natural, not only for the quantum computer but also for its applications, allowing us to unlock the true potential of quantum systems,” explains Martin Ringbauer.
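    As a small numerical aside (not the Innsbruck control software), the sketch below illustrates what a qudit is: a d-level state vector acted on by a generalized shift gate, so that a single 8-level particle spans as many basis states as three qubits.

      # Numerical illustration of a qudit (not the Innsbruck control software).
      # A qubit lives in a 2-dimensional state space; a d-level qudit lives in a
      # d-dimensional one, so one 8-level particle spans as many basis states as
      # log2(8) = 3 qubits.
      import numpy as np

      d = 8                                    # levels per particle (cf. the eight atomic states)

      # Generalized shift gate X_d: |k> -> |k+1 mod d> (reduces to Pauli-X for d = 2).
      X_d = np.roll(np.eye(d), shift=1, axis=0)

      # Start in |0>, apply the shift gate twice -> |2>.
      state = np.zeros(d, dtype=complex)
      state[0] = 1.0
      state = X_d @ (X_d @ state)
      print(int(np.argmax(np.abs(state))))     # 2

      print(f"One {d}-level qudit spans as many basis states as {int(np.log2(d))} qubits")
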
    Story Source:
    Materials provided by University of Innsbruck. Note: Content may be edited for style and length.

  • Blockchain gives Indigenous Americans control over their genomic data

    Scientists today can access genomic data from Indigenous Peoples without their free, prior, and informed consent, leading to potential misuse and the reinforcement of stereotypes. Although existing tools facilitate the sharing of genomic information with researchers, none of them gives Indigenous governments control over how these data are used. In an article published in the journal Cell on July 21, the authors propose a new blockchain model in which researchers are allowed to access genomic data only after the Indigenous entities have approved the research project.

  • Patient deterioration predictor could surpass limits of traditional vital signs

    An artificial intelligence-driven device that works to detect and predict hemodynamic instability may provide a more accurate picture of patient deterioration than traditional vital sign measurements, a Michigan Medicine study suggests.
    Researchers captured data from over 5,000 adult patients at University of Michigan Health with the Analytic for Hemodynamic Instability (AHI). Developed at the U-M Weil Institute for Critical Care Research and Innovation, AHI is software as a medical device designed to detect and predict changes in hemodynamic status in real time using data from a single electrocardiogram lead. The researchers compared its output against gold-standard vital sign measurements of continuous heart rate and blood pressure from invasive arterial monitoring in several intensive care units to determine whether AHI could indicate hemodynamic instability in real time.
    They found that the AHI detected standard indications of hemodynamic instability, a combination of elevated heart rate and low blood pressure, with nearly 97% sensitivity and 79% specificity. The results are published in Critical Care Explorations (a Society of Critical Care Medicine journal).
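    For readers less familiar with these metrics, sensitivity and specificity come directly from a confusion matrix. The short example below uses made-up counts chosen only to mirror the reported percentages; they are not the study's raw data.

      # How sensitivity and specificity are defined, using made-up counts
      # (illustrative only; these are not the study's raw numbers).
      def sensitivity_specificity(tp, fn, tn, fp):
          sensitivity = tp / (tp + fn)   # fraction of truly unstable periods flagged
          specificity = tn / (tn + fp)   # fraction of stable periods correctly not flagged
          return sensitivity, specificity

      # Hypothetical monitoring intervals: 970 of 1,000 unstable ones flagged,
      # 790 of 1,000 stable ones left unflagged.
      sens, spec = sensitivity_specificity(tp=970, fn=30, tn=790, fp=210)
      print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")   # 97.0%, 79.0%
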
    The findings suggest that the AHI may be able to provide continuous dynamic monitoring capabilities in patients who traditionally have intermittent static vital sign measurements, says senior author Ben Bassin, M.D., director of the Joyce and Don Massey Family Foundation Emergency Critical Care Center, also known as EC3, and an associate professor of emergency medicine at U-M Medical School.
    “AHI performs extremely well, and it functions in a way that we think may have transformative clinical utility,” Bassin said. “Most vital signs measurements are static, subject to human error, and require validation and interpretation. AHI is the opposite of that. It’s dynamic, produces a binary output of ‘stable’ or ‘unstable,’ and it may enable early marshaling of resources to patients who may not have been on a clinician’s radar.”
    Traditional vital signs have limitations, including limited accuracy in non-invasive monitoring and the fact that patients who are not at obvious risk of immediate deterioration may only have their vital signs checked every 4-6 hours or even less often. The AHI, which was approved by the United States Food and Drug Administration in 2021 and is licensed to Fifth Eye, Inc. (a U-M spinoff), was designed to address those limitations.
    “The vision of AHI was born out of our continued inability to identify unstable patients and to predict when patients would become unstable, especially in settings where they cannot be intensively monitored,” said co-author Kevin Ward, M.D., executive director of the Weil Institute and professor of emergency medicine and biomedical engineering at Michigan Medicine.
    “AHI is ideally suited to be utilized with wearable monitors such as ECG patches, that could make any hospital bed, waiting room or other setting into a sophisticated monitoring environment. The implication of such a technology is that it has the potential to save lives not only in the hospital, but also at home, in the ambulance and on the battlefield.”
    Researchers say future studies are needed to determine if AHI provides clinical and resource allocation benefits in patients undergoing infrequent blood pressure monitoring. The next phase of research will focus on how AHI is used at Michigan Medicine.
    Story Source:
    Materials provided by Michigan Medicine – University of Michigan. Original written by Noah Fromson. Note: Content may be edited for style and length.

  • Flexible method for shaping laser beams extends depth-of-focus for OCT imaging

    Researchers have developed a new method for flexibly creating various needle-shaped laser beams. These long, narrow beams can be used to improve optical coherence tomography (OCT), a noninvasive and versatile imaging tool that is used for scientific research and various types of clinical diagnoses.
    “Needle-shaped laser beams can effectively extend the depth-of-focus of an OCT system, improving the lateral resolution, signal-to-noise ratio, contrast and image quality over a long depth range,” said research team leader Adam de la Zerda from Stanford University School of Medicine. “However, before now, implementing a specific needle-shaped beam has been difficult due to the lack of a common, flexible generation method.”
    In Optica, Optica Publishing Group’s journal for high-impact research, the researchers describe their new platform for creating needle-shaped beams with different lengths and diameters. It can be used to create various types of beams, such as one with an extremely long depth of field or one focused to a spot smaller than the diffraction limit of light.
    The needle-shaped beams generated with this method could benefit a variety of OCT applications. For example, utilizing a long, narrow beam could allow high-resolution OCT imaging of the retina without any dynamic focusing, making the process faster and thus more comfortable for patients. It could also extend the depth-of-focus for OCT endoscopy, which would improve diagnosis accuracy.
    “The rapid high-resolution imaging ability of needle-shaped beams can also get rid of adverse effects that occur due to human movements during image acquisition,” said the paper’s first author Jingjing Zhao. “This can help to pinpoint melanoma and other skin problems using OCT.”
    A flexible solution
    As a noninvasive imaging tool, OCT features an axial resolution that is determined by the light source and stays constant along the imaging depth. Its lateral resolution, however, is set by the focusing optics and comes with a very small depth of focus. To address this issue, OCT instruments are often made so that the focus can be moved along the depth to capture clear images of an entire region of interest. However, this dynamic focusing can make imaging slower and doesn’t work well for applications where the sample isn’t static.
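    The trade-off that needle-shaped beams sidestep follows from textbook Gaussian-beam optics: the depth of focus scales with the square of the focal spot size, so sharpening lateral resolution normally shrinks the usable depth range quadratically. The rough estimate below uses the standard Rayleigh-range formula and an assumed 1.3-micrometer OCT center wavelength; it is not the paper's actual beam design.

      # Back-of-the-envelope Gaussian-beam estimate (textbook optics, not the
      # paper's beam design): depth of focus ~ 2 * Rayleigh range = 2*pi*w0^2/lambda,
      # so shrinking the focal spot w0 to sharpen lateral resolution shrinks the
      # usable depth range quadratically.
      import math

      wavelength = 1.3e-6              # assumed 1.3 um OCT center wavelength

      def depth_of_focus(w0):
          """Confocal parameter (twice the Rayleigh range) of a Gaussian beam with waist w0."""
          return 2 * math.pi * w0**2 / wavelength

      for w0_um in (10, 5, 2.5):       # beam-waist radius in micrometers
          dof_um = depth_of_focus(w0_um * 1e-6) * 1e6
          print(f"w0 = {w0_um:4.1f} um  ->  depth of focus ~ {dof_um:5.0f} um")
      # Halving the spot size cuts the depth of focus by a factor of four, which
      # is why a conventional tightly focused beam needs dynamic focusing.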

  • At the water's edge: Self-assembling 2D materials at a liquid-liquid interface

    The past few decades have witnessed a great amount of research in the field of two-dimensional (2D) materials. As the name implies, these thin film-like materials are composed of layers that are only a few atoms thick. Many of the chemical and physical properties of 2D materials can be fine-tuned, leading to promising applications in many fields, including optoelectronics, catalysis, renewable energy, and more.
    Coordination nanosheets are one particularly interesting type of 2D material. The “coordination” refers to metal ions in these molecules that act as coordination centers, which can spontaneously create organized molecular arrangements spanning multiple layers. These materials have attracted the attention of materials scientists because of their favorable properties. In fact, we have only begun to scratch the surface regarding what heterolayer coordination nanosheets — coordination nanosheets whose layers have different atomic compositions — can offer.
    In a recent study published first on June 13, 2022, and featured on the front cover of Chemistry — A European Journal, a team of scientists from Tokyo University of Science (TUS) and The University of Tokyo in Japan reported a remarkably simple way to synthesize heterolayer coordination nanosheets. Composed of the organic ligand, terpyridine, coordinating iron and cobalt, these nanosheets assemble themselves at the interface between two immiscible liquids in a peculiar way. The study, led by Prof. Hiroshi Nishihara from TUS, also included contributions from Mr. Joe Komeda, Dr. Kenji Takada, Dr. Hiroaki Maeda, and Dr. Naoya Fukui from TUS.
    To synthesize the heterolayer coordination nanosheets, the team first created the liquid-liquid interface to enable their assembly. They dissolved tris(terpyridine) ligand in dichloromethane (CH2Cl2), an organic liquid that does not mix with water. They then poured a solution of water and ferrous tetrafluoroborate, an iron-containing chemical, on top of the CH2Cl2. After 24 hours, the first layer of the coordination nanosheet, bis(terpyridine)iron (or “Fe-tpy”), formed at the interface between both liquids.
    Following this, they removed the iron-containing water and replaced it with cobalt-containing water. In the next few days, a bis(terpyridine)cobalt (or “Co-tpy”) layer formed right below the iron-containing one at the liquid-liquid interface.
    The team made detailed observations of the heterolayer using various advanced techniques, such as scanning electron microscopy, X-ray photoelectron spectroscopy, atomic force microscopy, and scanning transmission electron microscopy. They found that the Co-tpy layer formed neatly below the Fe-tpy layer at the liquid-liquid interface. Moreover, they could control the thickness of the second layer by how long they let the synthesis process run its course.
    Interestingly, the team also found that the ordering of the layers could be swapped by simply changing the order of the synthesis steps. In other words, if they first added a cobalt-containing solution and then replaced it with an iron-containing solution, the synthesized heterolayer would have cobalt coordination centers on the top layer and iron coordination centers on the bottom layer. “Our findings indicate that metal ions can go through the first layer from the aqueous phase to the CH2Cl2 phase to react with terpyridine ligands right at the boundary between the nanosheet and the CH2Cl2 phase,” explains Prof. Nishihara. “This is the first ever clarification of the growth direction of coordination nanosheets at a liquid/liquid interface.”
    Additionally, the team investigated the reduction-oxidation properties of their coordination nanosheets as well as their electrical rectification characteristics. They found that the heterolayers behaved much like a diode in a way that is consistent with the electronic energy levels of Co-tpy and Fe-tpy. These insights, coupled with the easy synthesis procedure developed by the team, could help in the design of heterolayer nanosheets made of other materials and tailored for specific electronics applications. “Our synthetic method could be applicable to other coordination polymers synthesized at liquid-liquid interfaces,” highlights Prof. Nishihara. “Therefore, the results of this study will expand the structural and functional diversity of molecular 2D materials.”
    With eyes set on the future, the team will keep investigating chemical phenomena occurring at liquid-liquid interfaces, elucidating the mechanisms of mass transport and chemical reactions. Their findings can help expand the design of 2D materials and, hopefully, lead to better performance of optoelectronic devices, such as solar cells.
    Story Source:
    Materials provided by Tokyo University of Science. Note: Content may be edited for style and length.

  • Electric nanomotor made from DNA material

    A research team led by the Technical University of Munich (TUM) has succeeded for the first time in producing a molecular electric motor using the DNA origami method. The tiny machine made of genetic material self-assembles and converts electrical energy into kinetic energy. The new nanomotors can be switched on and off, and the researchers can control the rotation speed and rotational direction.
    Be it in our cars, drills or automatic coffee grinders, motors help us accomplish a wide variety of tasks in our everyday lives. On a much smaller scale, natural molecular motors perform vital tasks in our bodies. For instance, a motor protein known as ATP synthase produces the molecule adenosine triphosphate (ATP), which our body uses for short-term storage and transfer of energy.
    While natural molecular motors are essential, it has been quite difficult to recreate motors on this scale with mechanical properties roughly similar to those of natural molecular motors like ATP synthase. A research team has now constructed a working nanoscale molecular rotary motor using the DNA origami method. The team was led by Hendrik Dietz, Professor of Biomolecular Nanotechnology at TUM, Friedrich Simmel, Professor of Physics of Synthetic Biological Systems at TUM, and Ramin Golestanian, director at the Max Planck Institute for Dynamics and Self-Organization.
    A self-assembling nanomotor
    The novel molecular motor consists of DNA — genetic material. The researchers used the DNA origami method to assemble the motor from DNA molecules. This method was invented by Paul Rothemund in 2006 and was later further developed by the research team at TUM. Several long single strands of DNA serve as a basis to which additional DNA strands attach themselves as counterparts. The DNA sequences are selected in such a way that the strands attach and fold to create the desired structures.
    “We’ve been advancing this method of fabrication for many years and can now develop very precise and complex objects, such as molecular switches or hollow bodies that can trap viruses. If you put the DNA strands with the right sequences in solution, the objects self-assemble,” says Dietz.
    The new nanomotor made of DNA material consists of three components: base, platform and rotor arm. The base is approximately 40 nanometers high and is fixed in solution to a glass plate via chemical bonds. A rotor arm up to 500 nanometers in length is mounted on the base so that it can rotate. Another component is crucial for the motor to work as intended: a platform that lies between the base and the rotor arm. This platform contains obstacles that influence the movement of the rotor arm. To pass the obstacles and rotate, the rotor arm must bend upward a little, similar to a ratchet.
    Targeted movement through AC voltage
    Without energy supply, the rotor arms of the motors move randomly in one direction or the other, driven by random collisions with molecules from the surrounding solvent. However, as soon as AC voltage is applied via two electrodes, the rotor arms rotate in a targeted and continuous manner in one direction.
    “The new motor has unprecedented mechanical capabilities: It can achieve torques in the range of 10 piconewton times nanometer. And it can generate more energy per second than what’s released when two ATP molecules are split,” explains Ramin Golestanian, who led the theoretical analysis of the mechanism of the motor.
    The targeted movement of the motors results from a superposition of the fluctuating electrical forces with the forces experienced by the rotor arm due to the ratchet obstacles. The underlying mechanism realizes a so-called “flashing Brownian ratchet.” The researchers can control the speed and direction of the rotation via the direction of the electric field and also via the frequency and amplitude of the AC voltage.
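    To put the quoted figures in perspective, a rough calculation helps: at a torque of about 10 piconewton-nanometers, one full revolution delivers roughly 2π times that amount of work, so a motor turning a few times per second outputs more energy per second than the splitting of two ATP molecules releases. The sketch below assumes the commonly cited value of roughly 80 piconewton-nanometers of free energy per ATP hydrolyzed under cellular conditions.

      # Rough energy bookkeeping for the quoted figures (order-of-magnitude only).
      # Assumes the stated torque of ~10 pN*nm and the commonly cited ~80 pN*nm
      # of free energy released per ATP hydrolyzed under cellular conditions.
      import math

      torque_pN_nm = 10.0        # torque quoted in the article
      atp_energy_pN_nm = 80.0    # approximate free energy per ATP (assumption)

      work_per_revolution = 2 * math.pi * torque_pN_nm         # ~63 pN*nm per full turn
      revolutions_for_two_atp = 2 * atp_energy_pN_nm / work_per_revolution

      print(f"work per revolution ~ {work_per_revolution:.0f} pN*nm")
      print(f"~{revolutions_for_two_atp:.1f} revolutions deliver the energy of two ATP")
      # So a motor turning a few times per second can output more energy per second
      # than the splitting of two ATP molecules releases.
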
    “The new motor could also have technical applications in the future. If we develop the motor further we could possibly use it in the future to drive user-defined chemical reactions, inspired by how ATP synthase makes ATP driven by rotation. Then, for example, surfaces could be densely coated with such motors. Then you would add starting materials, apply a little AC voltage and the motors produce the desired chemical compound,” says Dietz.