More stories

  • AI method uses transformer models to study human cells

    Researchers in Carnegie Mellon University’s School of Computer Science have developed a method that uses artificial intelligence to augment how cells are studied, which could help scientists better understand and eventually treat disease.
    Images of organ or tissue samples contain millions of cells. And while analyzing these cells in situ is an important part of biological research, such images make it nearly impossible to identify individual cells, determine their function and understand their organization. A technique called spatial transcriptomics brings these cells into focus by combining imaging with the ability to quantify gene expression levels in each cell — allowing researchers to study in detail several key biological mechanisms, ranging from how immune cells fight cancer to the cellular impact of drugs and aging.
    Many current spatial transcriptomics platforms still lack the resolution required for closer, more detailed analysis. These technologies often group cells in clusters that range from several to 50 cells for each measurement, a resolution that may be sufficient for well-represented large cells but that is problematic for small cells or ones that aren’t well represented. These rare cells may be the most critical for the disease or condition being studied.
    In a new paper published in Nature Methods, Computational Biology Department researchers Hao Chen, Dongshunyi Li and Ziv Bar-Joseph unveiled a method that uses artificial intelligence to augment the latest spatial transcriptomics technologies.
    The CMU research focuses on more recent technologies that produce images at a much closer scale, allowing for subcellular resolution (or multiple measurements per cell). While these techniques solve the resolution issue, they present new challenges because the resulting images are so close-up that rather than capturing 15 to 50 cells per image, they capture only a few genes. This reversal of the previous problem creates difficulties in identifying the individual components and determining how to group these measurements to learn about specific cells. It also obscures the big picture.
    The algorithm developed by the CBD researchers, called subcellular spatial transcriptomics cell segmentation (SCS), harnesses AI and advanced deep neural networks to adaptively identify cells and their constituent parts. SCS uses transformer models, similar to those used by large language models like ChatGPT, to gather information from the area surrounding each measurement. Just as ChatGPT uses the entire context of a sentence or paragraph for word completion, the SCS method fills in missing information for a specific measurement by incorporating information from the cells around it.
    When applied to images of brain and liver samples with hundreds of thousands of cells, SCS accurately identified the exact location and type of each cell. SCS also identified several cells missed by current analysis approaches, such as rare and small cells that may play a crucial role in specific diseases or processes, including aging. In addition, SCS provided information on the location of molecules within cells, greatly improving the resolution at which researchers can study cellular organization.
    “The ability to use the most recent advances in AI to aid the study of the human body opens the door to several downstream applications of spatial transcriptomics to improve human health,” said Ziv Bar-Joseph, the FORE Systems Professor of Machine Learning and Computational Biology at CMU. Such downstream applications are already being investigated by several large consortiums, including the Human BioMolecular Atlas Program (HuBMAP), that are using spatial transcriptomics to create a detailed, 3D map of the human body.
    “By integrating state-of-the-art biotechnology and AI, SCS helps unlock several open questions about cellular organization that are key to our ability to understand, and ultimately treat, disease,” added Hao Chen, a Lane Postdoctoral Fellow in CBD.
    SCS is freely available on GitHub, and the work was supported by grants from the National Institutes of Health and the National Science Foundation.
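    To make the approach above concrete, here is a minimal, hypothetical sketch of the general idea behind transformer-based spot classification: a transformer encoder attends over the gene counts of the spots surrounding a target measurement and predicts which nearby nucleus (if any) that measurement belongs to. The layer sizes, neighborhood shape, and output classes below are illustrative assumptions, not the published SCS architecture.

      import torch
      import torch.nn as nn

      # Hypothetical sketch of transformer-based sub-cellular segmentation:
      # for each measurement spot, attend over the gene counts of its spatial
      # neighbors and predict which nearby nucleus (if any) the spot belongs to.
      # Dimensions, neighborhood size, and heads are illustrative, not the SCS paper's.

      class SpotSegmenter(nn.Module):
          def __init__(self, n_genes=1000, d_model=128, n_neighbors=49, n_candidates=9):
              super().__init__()
              self.embed = nn.Linear(n_genes, d_model)                    # embed per-spot gene counts
              self.pos = nn.Parameter(torch.zeros(n_neighbors, d_model))  # learned positional encoding
              layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
              self.encoder = nn.TransformerEncoder(layer, num_layers=2)
              # classify the center spot: background, or one of n_candidates nearby nuclei
              self.head = nn.Linear(d_model, n_candidates + 1)

          def forward(self, neighborhood_counts):
              # neighborhood_counts: (batch, n_neighbors, n_genes); index 0 is the center spot
              x = self.embed(neighborhood_counts) + self.pos
              x = self.encoder(x)               # context gathered from the surrounding spots
              return self.head(x[:, 0])         # prediction for the center spot only

      model = SpotSegmenter()
      fake_patch = torch.rand(8, 49, 1000)      # 8 spots, each with a 7 x 7 neighborhood
      print(model(fake_patch).shape)            # torch.Size([8, 10])

    In practice each spot’s gene vector is extremely sparse, so a gene-identity embedding may be more appropriate than a dense linear layer; the sketch is only meant to show how surrounding context can inform the label of a single measurement.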

  • Robotic exoskeletons and neurorehabilitation for acquired brain injury: Determining the potential for recovery of overground walking

    A team of New Jersey researchers reviewed the evidence for the impact of robotic exoskeleton devices on recovery of ambulation among individuals with acquired brain injury, laying out a systematic framework for the evaluation of such devices that is needed for rigorous research studies. The open access article, “Lower extremity robotic exoskeleton devices for overground ambulation recovery in acquired brain injury — A review,” was published May 25, 2023, in Frontiers in Neurorobotics.
    The authors are Kiran Karunakaran, PhD, Sai Pamula, Caitlyn Bach, Soha Saleh, PhD, and Karen Nolan, PhD, from the Center for Mobility and Rehabilitation Engineering Research at Kessler Foundation, and Eliana Legelen, MA, from Montclair State University.
    Acquired brain injury was defined as cerebral palsy, traumatic brain injury or stroke. The review focused on 57 published studies of overground training in wearable robotic exoskeleton devices. The manuscript provides a comprehensive review of clinical and pre-clinical research on the therapeutic effects of various devices.
    “Despite rapid progress in robotic exoskeleton design and technology, the efficacy of such devices is not fully understood. This review lays the foundation to understand the knowledge gaps that currently exist in robotic rehabilitation research,” said lead and corresponding author Dr. Karunakaran, citing the many variables among the devices and the clinical characteristics of acquired brain injury. “The control mechanisms vary widely among these devices, for example, which has a major influence on how training is delivered,” she added. “There’s also wide variability in other factors that affect the trajectory of recovery, including the timing, duration, dosing, and intensity of training in these devices.”
    Developing a framework for future research requires a comprehensive approach based on diagnosis, stage of recovery, and domain, according to co-author Karen J. Nolan, PhD, associate director of the Center for Mobility and Rehabilitation Engineering Research and director of the Acquired Brain Injury Mobility Laboratory. “Through this approach, we will find the optimal ways to use lower extremity robotic exoskeletons to improve mobility in individuals with acquired brain injury,” said Dr. Nolan.
    “It’s important to note that our review is unique in presenting both the downstream (functional, biomechanical, physiological) and upstream (cortical) evaluations after rehabilitation using various robotic devices for different types of acquired brain injury,” Dr. Karunakaran noted. “Each device needs to be evaluated by domain in each population and throughout all stages of recovery. This is the necessary scope for determining the response to treatment.”

  • Carbon-based quantum technology

    Quantum technology is promising, but also perplexing. In the coming decades, it is expected to provide us with various technological breakthroughs: smaller and more precise sensors, highly secure communication networks, and powerful computers that can help develop new drugs and materials, control financial markets, and predict the weather much faster than current computing technology ever could.
    To achieve this, we need so-called quantum materials: substances that exhibit pronounced quantum physical effects. One such material is graphene. This two-dimensional structural form of carbon has unusual physical properties, such as extraordinarily high tensile strength, thermal and electrical conductivity — as well as certain quantum effects. Restricting the already two-dimensional material even further, for instance, by giving it a ribbon-like shape, gives rise to a range of controllable quantum effects.
    This is precisely what Mickael Perrin’s team leverages in its work: For several years now, scientists in Empa’s Transport at Nanoscale Interfaces laboratory, headed by Michel Calame, have been conducting research on graphene nanoribbons under Perrin’s leadership. “Graphene nanoribbons are even more fascinating than graphene itself,” explains Perrin. “By varying their length and width, as well as the shape of their edges, and by adding other atoms to them, you can give them all kinds of electrical, magnetic, and optical properties.”
    Ultimate precision — down to single atoms
    Research on the promising ribbons isn’t easy. The narrower the ribbon, the more pronounced its quantum properties are — but it also becomes more difficult to access a single ribbon at a time. This is precisely what must be done in order to understand the unique characteristics and possible applications of this quantum material and distinguish them from collective effects.
    In a new study published recently in the journal Nature Electronics, Perrin and Empa researcher Jian Zhang, together with an international team, have succeeded for the first time in contacting individual long and atomically precise graphene nanoribbons. Not a trivial task: “A graphene nanoribbon that is just nine carbon atoms wide measures as little as 1 nanometer in width,” Zhang says. To ensure that only a single nanoribbon is contacted, the researchers employed electrodes of a similar size: They used carbon nanotubes that were also only 1 nanometer in diameter.
    Precision is key for such a delicate experiment. It begins with the source materials. The researchers obtained the graphene nanoribbons via a strong and long-standing collaboration with Empa’s nanotech@surfaces laboratory, headed by Roman Fasel. “Roman Fasel and his team have been working on graphene nanoribbons for a long time and can synthesize many different types with atomic precision from individual precursor molecules,” Perrin explains. The precursor molecules came from the Max Planck Institute for Polymer Research in Mainz.

  • Making sense of life’s random rhythms

    Life’s random rhythms surround us, from the hypnotic, synchronized blinking of fireflies… to the back-and-forth motion of a child’s swing… to slight variations in the otherwise steady lub-dub of the human heart.
    But truly understanding those rhythms — called stochastic, or random, oscillations — has eluded scientists. While researchers and clinicians have some success in parsing brain waves and heartbeats, they’ve been unable to compare or catalogue an untold number of variations and sources.
    Gaining such insight into the underlying source of oscillations “could lead to advances in neural science, cardiac science and any number of different fields,” said Peter Thomas, a professor of applied mathematics at Case Western Reserve University.
    Thomas is part of an international team that says it has developed a novel, universal framework for comparing and contrasting oscillations — regardless of their different underlying mechanisms — which could become a critical step toward someday fully understanding them.
    Their findings were recently published in Proceedings of the National Academy of Sciences.
    “We turned the problem of comparing oscillators into a linear algebra problem,” Thomas said. “What we have done is vastly more precise than what was available before. It’s a major conceptual advance.”
    The researchers say others can now compare, better understand — and even manipulate — oscillators previously considered to have completely different properties.
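    As a rough, hypothetical illustration of what turning oscillator comparison into a linear algebra problem can look like (this is not the construction published in the PNAS paper), the sketch below simulates a noisy phase oscillator, estimates a Markov transition matrix over phase bins, and reads frequency and phase coherence off the leading complex eigenvalue of that matrix. The simulation parameters, binning, and lag are arbitrary assumptions.

      import numpy as np

      # Hypothetical illustration (not the authors' algorithm): characterize a noisy
      # oscillator by the leading complex eigenvalue of a linear operator, here a
      # Markov transition matrix over phase bins estimated from a simulated trajectory.

      rng = np.random.default_rng(0)

      def simulate_noisy_oscillator(n_steps=200_000, dt=0.01, omega=2.0, noise=0.5):
          """Euler-Maruyama simulation of a phase oscillator with additive noise."""
          theta = np.cumsum(omega * dt + noise * np.sqrt(dt) * rng.normal(size=n_steps))
          return np.mod(theta, 2 * np.pi)

      def transition_matrix(theta, lag=10, n_bins=60):
          """Estimate a Markov transition matrix between phase bins at a fixed time lag."""
          bins = np.minimum((theta / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
          P = np.zeros((n_bins, n_bins))
          for a, b in zip(bins[:-lag], bins[lag:]):
              P[a, b] += 1.0
          rows = P.sum(axis=1, keepdims=True)
          rows[rows == 0] = 1.0                  # avoid dividing by zero for unvisited bins
          return P / rows

      def leading_oscillatory_eigenvalue(P):
          """Non-real eigenvalue of largest modulus: its angle reflects frequency, its modulus coherence."""
          vals = np.linalg.eigvals(P)
          vals = vals[np.abs(vals.imag) > 1e-9]  # drop purely real (non-oscillatory) modes
          return vals[np.argmax(np.abs(vals))]

      for noise in (0.2, 0.8):
          lam = leading_oscillatory_eigenvalue(
              transition_matrix(simulate_noisy_oscillator(noise=noise)))
          print(f"noise={noise}: |lambda|={abs(lam):.3f}, arg(lambda)={np.angle(lam):.3f} rad")

    Two oscillators driven by very different mechanisms can then be compared through such eigenvalues rather than through their raw waveforms, which is the spirit of the framework described above.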

  • Can AI help hospitals spot patients in need of extra non-medical assistance?

    In the rush to harness artificial intelligence and machine learning tools to make care more efficient at hospitals nationwide, a new study points to another possible use: identifying patients with non-medical needs that could affect their health and ability to receive care.
    These social determinants of health — everything from transportation and housing to food supply and availability of family and friends as supports — can play a major role in a patient’s health and use of health care services.
    The new study focuses on a patient population with especially complex needs: people with Alzheimer’s disease or other forms of dementia. Their condition can make them especially reliant on others to get them to medical appointments and social activities, handle medications and finances, shop and prepare food, and more.
    The results of the study show that a rule-based natural language processing tool successfully identified patients with unstable access to transportation, food insecurity, social isolation, financial problems and signs of abuse, neglect, or exploitation.
    The researchers found that a rule-based NLP tool — a kind of AI that analyzes human speech or writing — was far superior to deep learning and regularized logistic regression algorithms for identifying patients’ social determinants of health.
    However, even the NLP tool did not do well enough at identifying needs related to housing or affording or taking medication.
    The study was led by Elham Mahmoudi, Ph.D., a health economist at Michigan Medicine, the University of Michigan’s academic medical center, and Wenbo Wu, Ph.D., who completed the work while earning a doctorate at the U-M School of Public Health and is now at New York University. Mahmoudi and two other authors are in the Department of Family Medicine.
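    For readers unfamiliar with rule-based natural language processing, the toy sketch below shows the general flavor of such a tool: hand-written keyword patterns applied to free-text notes, plus a crude negation check. The categories, patterns, and example note are invented for illustration and are not the rules used in the study.

      import re

      # Toy rule-based NLP sketch (not the study's actual rule set): flag clinical
      # notes that appear to mention selected social determinants of health.

      SDOH_PATTERNS = {
          "transportation": r"no (car|ride)|missed .{0,30}appointment.{0,30}transport|bus pass",
          "food_insecurity": r"food (insecurity|bank|stamps)|skip(ping|s) meals|cannot afford (food|groceries)",
          "social_isolation": r"lives alone|no (family|social) support|socially isolated",
          "financial_strain": r"financial (strain|hardship)|unpaid bills|behind on rent",
      }

      NEGATION = re.compile(r"\b(no evidence of|denies|without)\b", re.IGNORECASE)

      def flag_sdoh(note: str) -> dict:
          """Return which social-determinant categories a free-text note appears to mention."""
          hits = {}
          for sentence in re.split(r"[.!?]", note):
              if NEGATION.search(sentence):
                  continue                       # skip crudely negated sentences
              for category, pattern in SDOH_PATTERNS.items():
                  if re.search(pattern, sentence, re.IGNORECASE):
                      hits[category] = True
          return hits

      note = "Patient lives alone and reports she cannot afford groceries. Denies financial hardship."
      print(flag_sdoh(note))                     # {'food_insecurity': True, 'social_isolation': True}

    Rules like these are transparent and easy to audit, but, as the study found for housing- and medication-related needs, subtler categories are harder to capture with keyword patterns alone.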

  • Distribution of genetic information during bacterial cell division

    The precise segregation of DNA and the faithful inheritance of plasmids are crucial steps in bacterial cell division. Now, a team of researchers led by Seán Murray at the Max Planck Institute for Terrestrial Microbiology has developed a computational simulation that explains a key mechanism of DNA segregation. Their findings pave the way for experimental testing and reveal fundamental biochemical principles relevant to synthetic biology and medical applications.
    The faithful inheritance of genetic material to the next generation is a fundamental process underlying all forms of life. Central to this process is the accurate transmission of copied genetic material during cell division. A research team led by Seán Murray at the Max Planck Institute for Terrestrial Microbiology has now successfully developed a computational simulation for this central process. Unlike experimental techniques, which are often limited by their resolution, stochastic modeling makes it possible to unravel the underlying processes of DNA segregation and to understand the fine structure of the proteins involved.
    An essential part of this process, in many bacteria, is the formation of a large macromolecular complex called the partition complex, which assembles as part of the ParABS system. Here, the ParB protein moves the DNA by interacting with DNA-bound ParA-ATP, thereby allowing active separation of the DNA. For correct functioning, ParB requires precise interactions between its protein subparts and the DNA.
    “Sliding and bridging” principle
    Despite their significance, both the structure of the protein complexes and the mechanisms behind their assembly have remained elusive. Building on recent discoveries, the research team has developed a model showing that the DNA and ParB dimers can follow a “sliding and bridging” principle.
    Graduate student Lara Connolley, first author of the study, focused on the process of loading ParB dimers onto DNA, which occurs at specific regions known as parS sites. “According to our stochastic model, ParB dimers attach to DNA at parS sites by forming a protein clamp and then slide along the DNA strand, much like beads on a chain. We also predict that short-lived bridges organize the DNA into hairpin and helical structures to condense the DNA. Furthermore, these bridges do not interfere with sliding,” explains Lara Connolley. Research group leader Seán Murray adds: “The bridging interactions between dimers lead to DNA bending and the formation of a variety of structures. Further research into these structural variations could potentially be the key to understanding the role of ParB in different biological contexts.” The study opens the door for further research and experimentation to build on the findings.
    The next step is to carry out experiments to test and validate the model predictions in more detail. In addition, studies in different bacterial species would help to better understand the diversity present in the structure of the partition complex. “Our study provides a deeper insight into the world of DNA segregation and has potential relevance to many different bacterial species, as well as low copy number plasmids, which are also segregated by the ParABS system,” says Max Planck scientist Seán Murray. “Antibiotic resistance genes are located on such plasmids. Therefore, in addition to being important as basic research, these results could also be important for public health.”
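    To give a flavor of what such a stochastic model involves, the toy script below is an invented simplification (not the authors’ published simulation, and it omits bridging entirely): clamp-like ParB particles load onto a one-dimensional DNA lattice at a central parS site, slide as unbiased random walkers, and occasionally unbind, so that the occupancy profile peaks around parS. Rates and lattice sizes are arbitrary assumptions.

      import numpy as np

      # Toy 1D sketch (not the published model): ParB-like clamps load at a parS site,
      # slide along the DNA lattice like beads on a chain, and occasionally unbind.
      # Bridging interactions are omitted; rates and sizes are arbitrary assumptions.

      rng = np.random.default_rng(1)

      L = 401              # DNA lattice sites
      PARS = L // 2        # loading (parS) site at the center
      LOAD_RATE = 0.05     # probability per step that a new clamp loads at an empty parS
      UNBIND_RATE = 1e-3   # probability per step that a bound clamp releases
      STEPS = 50_000

      positions = []                   # current clamp positions along the lattice
      occupancy = np.zeros(L)          # accumulated time each site spends occupied

      for _ in range(STEPS):
          if rng.random() < LOAD_RATE and PARS not in positions:
              positions.append(PARS)   # a new clamp closes around the DNA at parS
          kept = []
          for x in positions:
              if rng.random() < UNBIND_RATE:
                  continue             # the clamp opens and leaves the DNA
              x = min(max(x + rng.choice((-1, 1)), 0), L - 1)   # unbiased sliding
              kept.append(x)
          positions = kept
          for x in positions:
              occupancy[x] += 1

      print("clamps bound at the end:", len(positions))
      print("mean occupancy near parS:", occupancy[PARS - 5:PARS + 6].mean())
      print("mean occupancy at the lattice edge:", occupancy[:10].mean())

    Adding short-lived bridges between clamps, as in the authors’ model, is what would condense this spread-out profile into the hairpin and helical structures described above.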

  • Scientists discover novel way of reading data in antiferromagnets, unlocking their use as computer memory

    Scientists led by Nanyang Technological University, Singapore (NTU Singapore) investigators have made a significant advance in developing alternative materials for the high-speed memory chips that let computers access information quickly and that bypass the limitations of existing materials.
    They have discovered a way that allows them to make sense of previously hard-to-read data stored in these alternative materials, known as antiferromagnets.
    Researchers consider antiferromagnets to be attractive materials for making computer memory chips because they are potentially more energy efficient than traditional ones made of silicon. Memory chips made of antiferromagnets are not subject to the size and speed constraints, or the corruption issues, that are inherent to chips made with certain magnetic materials.
    Computer data is stored as code comprising a string of 1s and 0s. Currently, methods exist to “write” data onto antiferromagnets, by configuring them so that they can represent either the number 1 or 0.
    However, “reading” this data from antiferromagnets has proved elusive to researchers, as there were previously no practical methods for figuring out which number the materials encoded.
    Now scientists led by Associate Professor Gao Weibo from NTU’s School of Physical and Mathematical Sciences (SPMS) have found a solution.
    Results from their experiments, published online in the scientific journal Nature in June 2023, showed that at ultra-low temperatures approaching those of outer space, passing a current through the antiferromagnets produced a unique voltage that could be measured across them.

  • Scientists invent smallest known way to guide light

    Directing light from place to place is the backbone of our modern world. Beneath the oceans and across continents, fiber optic cables carry light that encodes everything from YouTube videos to banking transmissions — all inside strands about the size of a hair.
    University of Chicago Prof. Jiwoong Park, however, wondered what would happen if you made even thinner and flatter strands — in effect, so thin that they’re actually 2D instead of 3D. What would happen to the light?
    Through a series of innovative experiments, he and his team found that a sheet of glass crystal just a few atoms thick could trap and carry light. Not only that, but the guiding was surprisingly efficient, and the light could travel relatively long distances — up to a centimeter, which is very far in the world of light-based computing.
    The research, published Aug. 10 in Science, demonstrates what are essentially 2D photonic circuits, and could open paths to new technology.
    “We were utterly surprised by how powerful this super-thin crystal is; not only can it hold energy, but deliver it a thousand times further than anyone has seen in similar systems,” said lead study author Jiwoong Park, a professor and chair of chemistry and faculty member of the James Franck Institute and Pritzker School of Molecular Engineering. “The trapped light also behaved like it is traveling in a 2D space.”
    Guiding light
    The newly invented system is a way to guide light — known as a waveguide — that is essentially two-dimensional. In tests, the researchers found they could use extremely tiny prisms, lenses, and switches to guide the path of the light along a chip — all the ingredients for circuits and computations.