More stories

  • Estimating tumor-specific total mRNA level predicts cancer outcomes

    Researchers at The University of Texas MD Anderson Cancer Center have developed a new approach to quantify tumor-specific total mRNA levels from patient tumor samples, which contain both cancer and non-cancer cells. Using this technique on tumors from more than 6,500 patients across 15 cancer types, the researchers demonstrated that higher mRNA levels in cancer cells were associated with reduced patient survival.
    The study, published today in Nature Biotechnology, suggests this computational approach could permit large-scale analyses of tumor-specific total mRNA levels from tumor samples, which could serve as a prognostic biomarker for many types of cancers.
    “Single-cell sequencing studies have shown us that total mRNA content in cancer cells is correlated with biological features of the tumor, but it’s not feasible to use single-cell approaches for analyzing large patient cohorts,” said corresponding author Wenyi Wang, Ph.D., professor of Bioinformatics & Computational Biology. “With this study, we propose a novel mathematical deconvolution technique to study this important biological feature of cancer at scale, using widely available bulk tumor sequencing data.”
    Whereas single-cell sequencing approaches can profile thousands of individual cells from a sample, bulk sequencing generates an overall picture of the tumor across a larger number of cells. Because a tumor sample contains a diverse mixture of cancer and non-cancer cells, additional steps are required to isolate the cancer-specific information from bulk sequencing data.
    Deconvolution is a computational technique designed to separate bulk sequencing data into its different components. This study is the first to report a deconvolution approach for quantifying total tumor-specific mRNA levels from bulk sequencing data, providing a scalable complement to single-cell analysis.
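    The underlying idea can be pictured with a toy two-component mixture model: bulk expression is a purity-weighted sum of tumor and non-tumor signal, and the tumor-specific total mRNA scale is recovered after subtracting the non-tumor component. The sketch below is a minimal Python illustration assuming tumor purity and a non-tumor reference profile are already known; the study’s actual deconvolution method is considerably more sophisticated, and all numbers here are synthetic.

    ```python
    import numpy as np

    # Toy two-component deconvolution: bulk expression is modeled as a
    # purity-weighted mixture of tumor and non-tumor (stromal/immune) signal.
    # rho = tumor purity; phi = tumor-specific total mRNA scale (the quantity
    # of interest). All profiles and parameters are synthetic assumptions.
    rng = np.random.default_rng(0)
    n_genes = 2000

    normal_profile = rng.gamma(2.0, 1.0, n_genes)  # reference non-tumor profile
    tumor_profile = rng.gamma(2.0, 1.0, n_genes)   # latent tumor profile
    rho, phi_true = 0.6, 1.8                       # purity, true mRNA scale

    bulk = rho * phi_true * tumor_profile + (1 - rho) * normal_profile

    # Subtract the non-tumor component, then read off the tumor-specific
    # total mRNA scale against a unit-scaled tumor profile.
    tumor_signal = bulk - (1 - rho) * normal_profile
    phi_hat = tumor_signal.sum() / (rho * tumor_profile.sum())
    print(f"estimated tumor mRNA scale: {phi_hat:.2f} (true: {phi_true})")
    ```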
    Together with Wang, the study was led by Shaolong Cao, Ph.D., former postdoctoral fellow, Jennifer R. Wang, M.D., assistant professor of Head & Neck Surgery, and Shuangxi Ji, Ph.D., postdoctoral fellow in Bioinformatics & Computational Biology.

  • Rubbery camouflage skin exhibits smart and stretchy behaviors

    The skin of cephalopods, such as octopuses, squids and cuttlefish, is stretchy and smart, contributing to these creatures’ ability to sense and respond to their surroundings. A Penn State-led collaboration has harnessed these properties to create an artificial skin that mimics both the elasticity and the neurologic functions of cephalopod skin, with potential applications for neurorobotics, skin prosthetics, artificial organs and more.  
    Led by Cunjiang Yu, Dorothy Quiggle Career Development Associate Professor of Engineering Science and Mechanics and Biomedical Engineering, the team published its findings on June 1 in the Proceedings of the National Academy of Sciences. 
    Cephalopod skin is a soft organ that can endure complex deformations, such as expanding, contracting, bending and twisting. It also possesses cognitive sense-and-respond functions that enable the skin to sense light, react and camouflage its wearer. While artificial skins with either these physical or these cognitive capabilities have existed previously, according to Yu, until now none has simultaneously exhibited both qualities — the combination needed for advanced, artificially intelligent bioelectronic skin devices.  
    “Although several artificial camouflage skin devices have been recently developed, they lack critical noncentralized neuromorphic processing and cognition capabilities, and materials with such capabilities lack robust mechanical properties,” Yu said. “Our recently developed soft synaptic devices have achieved brain-inspired computing and artificial nervous systems that are sensitive to touch and light that retain these neuromorphic functions when biaxially stretched.”  
    To simultaneously achieve both smartness and stretchability, the researchers constructed synaptic transistors entirely from elastomeric materials. These rubbery semiconductors operate in a similar fashion to neural connections, exchanging critical messages for system-wide needs regardless of physical changes to the system’s structure. The key to creating a soft skin device with both cognitive and stretching capabilities, according to Yu, was using elastomeric rubbery materials for every component. This approach resulted in a device that successfully exhibits and maintains neurological synaptic behaviors, such as image sensing and memorization, even when stretched, twisted and poked up to 30% beyond its natural resting state.
    “With the recent surge of smart skin devices, implementing neuromorphic functions into these devices opens the door for a future direction toward more powerful biomimetics,” Yu said. “This methodology for implementing cognitive functions into smart skin devices could be extrapolated into many other areas, including neuromorphic computing wearables, artificial organs, soft neurorobotics and skin prosthetics for next-generation intelligent systems.”
    The Office of Naval Research Young Investigator Program and the National Science Foundation supported this work.
    Co-authors include Hyunseok Shim, Seonmin Jang and Shubham Patel, Penn State Department of Engineering Science and Mechanics; Anish Thukral and Bin Kan, University of Houston Department of Mechanical Engineering; Seongsik Jeong, Hyeson Jo and Hai-Jin Kim, Gyeongsang National University School of Mechanical and Aerospace Engineering; Guodan Wei, Tsinghua-Berkeley Shenzhen Institute; and Wei Lan, Lanzhou University School of Physical Science and Technology. 
    Story Source:
    Materials provided by Penn State. Original written by Mary Fetzer.

  • Virtual CT scans cut patient radiation exposure in half during PET/CT studies

    A novel artificial intelligence method can be used to generate high-quality “PET/CT” images and subsequently decrease radiation exposure to the patient. Developed by the National Cancer Institute, the method bypasses the need for CT-based attenuation correction, potentially allowing for more frequent PET imaging to monitor disease and treatment progression without radiation exposure from CT acquisition. This research was presented at the Society of Nuclear Medicine and Molecular Imaging 2022 Annual Meeting.
    Cancer patients often undergo several imaging studies throughout diagnosis and treatment, potentially including multiple PET/CT scans in close succession. The CT portion of the exam contributes to a patient’s overall radiation exposure yet is largely redundant. In this study, researchers sought to reduce or eliminate the need for low-dose CT in PET/CT by using an artificial intelligence model to generate virtual attenuation-corrected PET scans.
    The data cohort for artificial intelligence model development included 305 18F-DCFPyL PSMA PET/CT studies. Each study contained three scans: non-attenuation-corrected PET, attenuation-corrected PET, and low-dose CT. Studies were broken down into three sets for training (185), validation (60) and testing (60). A 2D Pix2Pix generator was then used to generate synthetic attenuation-corrected PET scans (gen-PET) from the original non-attenuation-corrected PET.
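    To make the training setup concrete, here is a minimal pix2pix-style step in PyTorch. The tiny generator, patch discriminator, shapes, and loss weights are all illustrative assumptions rather than the study’s actual configuration; real training would pair non-attenuation-corrected slices with their attenuation-corrected counterparts.

    ```python
    import torch
    import torch.nn as nn

    # Sketch of a pix2pix-style image translation step: the generator maps a
    # non-attenuation-corrected PET slice to a synthetic attenuation-corrected
    # one (gen-PET); a patch discriminator scores (input, candidate) pairs.
    def conv_block(cin, cout):
        return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                             nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

    generator = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                              nn.Conv2d(32, 1, 3, padding=1))
    discriminator = nn.Sequential(conv_block(2, 32),              # input + candidate
                                  nn.Conv2d(32, 1, 3, padding=1)) # patch scores

    adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)

    nac_pet = torch.randn(4, 1, 128, 128)  # non-attenuation-corrected slices (dummy)
    ac_pet = torch.randn(4, 1, 128, 128)   # attenuation-corrected targets (dummy)

    gen_pet = generator(nac_pet)
    scores = discriminator(torch.cat([nac_pet, gen_pet], dim=1))
    # The generator tries to fool the discriminator while staying close (L1)
    # to the real attenuation-corrected target, as in the original pix2pix.
    g_total = adv_loss(scores, torch.ones_like(scores)) + 100 * l1_loss(gen_pet, ac_pet)
    g_opt.zero_grad(); g_total.backward(); g_opt.step()
    ```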
    For qualitative evaluation, two nuclear medicine physicians reviewed 40 PET/CT studies in a randomized order, blinded to whether the image was from original attenuation-corrected PET or gen-PET. Each expert recorded the number and locations of PET-positive lesions and qualitatively reviewed overall noise and image quality. The readers were able to successfully detect lesions on the gen-PET images with reasonable sensitivity values.
    “High-quality artificial intelligence-generated images preserve vital information from raw PET images without the additional radiation exposure from CT scans,” said Kevin Ma, PhD, a post-doctoral researcher at the National Cancer Institute in Bethesda, Maryland. “This opens opportunities for increasing the frequency and number of PET scans per patient per year, which could provide more accurate assessment for lesion detection, treatment efficacy, radiotracer effectivity, and other measures in research and patient care.”
    Abstract 151. “Artificial Intelligence-generated PET images for PSMA-PET/CT studies: Quantitative and Qualitative Assessment,” Kevin Ma, National Cancer Institute, National Institutes of Health, College Park, Maryland; Esther Mena, Liza Lindenberg, Deborah Citrin, William Dahut, James Gulley, Peter Choyke, Baris Turkbey, and Stephanie Harmon, National Cancer Institute, National Institutes of Health, Bethesda, Maryland; Peter Pinto, Urologic Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland; Bradford Wood, Radiology and Imaging Sciences, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, Maryland; and Ravi Madan, Genitourinary Malignancies Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland.

  • Researchers solve mystery surrounding dielectric properties of unique metal oxide

    A University of Minnesota Twin Cities-led research team has solved a longstanding mystery surrounding strontium titanate, an unusual metal oxide that can be an insulator, a semiconductor, or a metal. The research provides insight for future applications of this material to electronic devices and data storage.
    The paper is published in the Proceedings of the National Academy of Sciences (PNAS), a peer-reviewed, multidisciplinary, scientific journal.
    When an insulator like strontium titanate is placed between oppositely charged metal plates, the electric field between the plates causes the negatively charged electrons and the positive nuclei to line up in the direction of the field. This orderly lining up of electrons and nuclei is resisted by thermal vibrations, and the degree of order is measured by a fundamental quantity called the dielectric constant. At low temperature, where the thermal vibrations are weak, the dielectric constant is larger.
    In semiconductors, the dielectric constant plays an important role by providing effective “screening,” or protection, of the conducting electrons from other charged defects in the material. For applications in electronic devices, it is critical to have a large dielectric constant.
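    A standard textbook relation (general physics, not specific to this paper) shows why: the electrostatic potential of a charged defect inside a dielectric is reduced by the relative dielectric constant,

    ```latex
    V(r) = \frac{q}{4\pi \varepsilon_0 \varepsilon_r r}
    ```

    so a dielectric constant in the tens of thousands weakens the Coulomb potential of a charged defect by the same factor relative to vacuum.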
    High quality centimeter-size samples of strontium titanate exhibit a measured low-temperature dielectric constant of 22,000, which is quite large and encouraging for applications. But most applications in computers and other devices would call for thin films. Despite an enormous effort by many researchers using diverse methods to grow thin films, only a modest dielectric constant of 100-1,000 has been achieved in thin films of strontium titanate.
    In thin films, which can be just a few atomic layers thick, the interface between the film and substrate, or the film and the next layer up, can play an important role.

  • Engineers build artificial intelligence chip

    Imagine a more sustainable future, where cellphones, smartwatches, and other wearable devices don’t have to be shelved or discarded for a newer model. Instead, they could be upgraded with the latest sensors and processors that would snap onto a device’s internal chip — like LEGO bricks incorporated into an existing build. Such reconfigurable chipware could keep devices up to date while reducing our electronic waste.
    Now MIT engineers have taken a step toward that modular vision with a LEGO-like design for a stackable, reconfigurable artificial intelligence chip.
    The design comprises alternating layers of sensing and processing elements, along with light-emitting diodes (LEDs) that allow the chip’s layers to communicate optically. Other modular chip designs employ conventional wiring to relay signals between layers. Such intricate connections are difficult if not impossible to sever and rewire, making those stackable designs non-reconfigurable.
    The MIT design uses light, rather than physical wires, to transmit information through the chip. The chip can therefore be reconfigured, with layers that can be swapped out or stacked on, for instance to add new sensors or updated processors.
    “You can add as many computing layers and sensors as you want, such as for light, pressure, and even smell,” says MIT postdoc Jihoon Kang. “We call this a LEGO-like reconfigurable AI chip because it has unlimited expandability depending on the combination of layers.”
    The researchers are eager to apply the design to edge computing devices — self-sufficient sensors and other electronics that work independently from any central or distributed resources such as supercomputers or cloud-based computing.

  • Energy harvesting to power the Internet of Things

    The wireless interconnection of everyday objects known as the Internet of Things depends on wireless sensor networks that need a low but constant supply of electrical energy. This can be provided by electromagnetic energy harvesters that generate electricity directly from the environment. Lise-Marie Lacroix from the Université de Toulouse, France, with colleagues from Toulouse, Grenoble and Atlanta, Georgia, USA, has used a mathematical technique, finite element simulation, to optimise the design of one such energy harvester so that it generates electricity as efficiently as possible. This work has now been published in the journal EPJ Special Topics.
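    To give a feel for what such design optimization involves, the sketch below optimizes just one parameter, the electrical load on a vibration-driven coil, using a lumped-parameter model in Python. The study itself used finite element simulation of the full device geometry; every value here is an illustrative assumption.

    ```python
    from scipy.optimize import minimize_scalar

    EMF = 0.5       # induced voltage amplitude (V), per Faraday's law (assumed)
    R_COIL = 100.0  # coil winding resistance (ohm) (assumed)

    def neg_load_power(r_load):
        # Average power delivered to a resistive load by a sinusoidal EMF
        # behind the coil resistance (voltage-divider model).
        return -(EMF ** 2) * r_load / (2 * (R_COIL + r_load) ** 2)

    res = minimize_scalar(neg_load_power, bounds=(1.0, 1e4), method="bounded")
    print(f"optimal load: {res.x:.0f} ohm, power: {-res.fun * 1e6:.1f} uW")
    # The optimum lands at r_load == R_COIL: the classic matched-load result.
    ```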


  • Learning and remembering movement

    From the moment we are born, and even before that, we interact with the world through movement. We move our lips to smile or to talk. We extend our hand to touch. We move our eyes to see. We wiggle, we walk, we gesture, we dance. How does our brain remember this wide range of motions? How does it learn new ones? How does it make the calculations necessary for us to grab a glass of water, without dropping it, squashing it, or missing it?
    Technion Professor Jackie Schiller from the Ruth and Bruce Rappaport Faculty of Medicine and her team examined the brain at a single-neuron level to shed light on this mystery. They found that computation happens not just in the interaction between neurons (nerve cells), but within each individual neuron. Each of these cells, it turns out, is not a simple switch, but a complicated calculating machine. This discovery, published recently in Science, promises to change not only our understanding of how the brain works, but also our understanding of conditions ranging from Parkinson’s disease to autism. And if that weren’t enough, these same findings are expected to advance machine learning, offering inspiration for new architectures.
    Movement is controlled by the primary motor cortex of the brain. In this area, researchers are able to pinpoint exactly which neuron(s) fire at any given moment to produce the movement we see. Prof. Schiller’s team was the first to get even closer, examining the activity not of the whole neuron as a single unit, but of its parts.
    Every neuron has branched extensions called dendrites. These dendrites are in close contact with the terminals (called axons) of other nerve cells, allowing communication between them. A signal travels from the dendrites to the cell’s body and is then transferred onward through the axon. The number and structure of dendrites varies greatly between nerve cells, just as the crown of one tree differs from the crown of another.
    The particular neurons Prof. Schiller’s team focused on were the largest pyramidal neurons of the cortex. These cells, known to be heavily involved in movement, have a large dendritic tree, with many branches, sub-branches, and sub-sub-branches. What the team discovered is that these branches do not merely pass information onwards. Each sub-sub-branch performs a calculation on the information it receives and passes the result to the bigger sub-branch. The sub-branch then performs a calculation on the information received from all its subsidiaries and passes that on. Moreover, multiple dendritic branchlets can interact with one another to amplify their combined computational product. The result is a complex calculation performed within each individual neuron. For the first time, Prof. Schiller’s team showed that the neuron is compartmentalised, and that its branches perform calculations independently.
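    This compartmentalized picture is often caricatured as a small network inside a single neuron: each branchlet applies its own nonlinearity, branch outputs are combined, and the soma thresholds the total. A minimal Python sketch, with entirely illustrative weights and nonlinearities:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def branchlet(inputs, weights):
        # Local sigmoidal nonlinearity, a stand-in for a dendritic spike.
        return 1.0 / (1.0 + np.exp(-(inputs @ weights)))

    synaptic_input = rng.normal(size=(5, 8))   # 5 branchlets, 8 synapses each
    branch_weights = rng.normal(size=(5, 8))

    # Each branchlet computes on its own inputs before anything reaches the soma.
    branch_out = np.array([branchlet(x, w)
                           for x, w in zip(synaptic_input, branch_weights)])

    soma_weights = rng.uniform(0.5, 1.5, size=5)  # each branch's pull on the soma
    soma_drive = branch_out @ soma_weights
    fires = soma_drive > 2.5                      # somatic threshold (arbitrary)
    print(f"branch outputs: {np.round(branch_out, 2)}, neuron fires: {fires}")
    ```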
    “We used to think of each neuron as a sort of whistle, which either toots, or doesn’t,” Prof. Schiller explains. “Instead, we are looking at a piano. Its keys can be struck simultaneously, or in sequence, producing an infinity of different tunes.” This complex symphony playing in our brains is what enables us to learn and perform an infinity of different, complex and precise movements.
    Multiple neurodegenerative and neurodevelopmental disorders are likely to be linked to alterations in the neuron’s ability to process data. In Parkinson’s disease, it has been observed that the dendritic tree undergoes anatomical and physiological changes. In light of the new discoveries by the Technion team, we understand that as a result of these changes, the neuron’s ability to perform parallel computation is reduced. In autism, it appears possible that the excitability of the dendritic branches is altered, resulting in the numerous effects associated with the condition. The novel understanding of how neurons work opens new research pathways with regard to these and other disorders, with the hope of alleviating them.
    These same findings can also serve as an inspiration for the machine learning community. Deep neural networks, as their name suggests, attempt to create software that learns and functions somewhat similarly to a human brain. Although their advances constantly make the news, these networks are primitive compared to a living brain. A better understanding of how our brain actually works can help in designing more complex neural networks, enabling them to perform more complex tasks.
    This study was led by two of Prof. Schiller’s M.D.-Ph.D. candidate students Yara Otor and Shay Achvat, who contributed equally to the research. The team also included postdoctoral fellow Nate Cermak (now a neuroengineer) and Ph.D. student Hadas Benisty, as well as three collaborators: Professors Omri Barak, Yitzhak Schiller, and Alon Poleg-Polsky.
    The study was partially supported by the Israeli Science Foundation, Prince funds, the Rappaport Foundation, and the Zuckerman Postdoctoral Fellowship.

  • Quantum physics exponentially improves some types of machine learning

    Machine learning can get a boost from quantum physics.

    On certain types of machine learning tasks, quantum computers have an exponential advantage over standard computation, scientists report in the June 10 Science. The researchers proved that, according to quantum math, the advantage applies when using machine learning to understand quantum systems. And the team showed that the advantage holds up in real-world tests.

    “People are very excited about the potential of using quantum technology to improve our learning ability,” says theoretical physicist and computer scientist Hsin-Yuan Huang of Caltech. But it wasn’t entirely clear if machine learning could benefit from quantum physics in practice.


    In certain machine learning tasks, scientists attempt to glean information about a quantum system — say a molecule or a group of particles — by performing repeated experiments, and analyzing data from those experiments to learn about the system.

    Huang and colleagues studied several such tasks. In one, scientists aim to discern properties of the quantum system, such as the position and momentum of particles within. Quantum data from multiple experiments could be input into a quantum computer’s memory, and the computer would process the data jointly to learn the quantum system’s characteristics.

    The researchers proved theoretically that doing the same characterization with standard, or classical, techniques would require exponentially more experiments in order to learn the same information. Unlike a classical computer, a quantum computer can exploit entanglement — ethereal quantum linkages — to better analyze the results of multiple experiments.

    But the new work goes beyond just the theoretical. “It’s crucial to understand if this is realistic, if this is something we could see in the lab or if this is just theoretical,” says Dorit Aharonov of Hebrew University in Jerusalem, who was not involved with the research.

    So the researchers tested machine learning tasks with Google’s quantum computer, Sycamore (SN: 10/23/19). Rather than measuring a real quantum system, the team used simulated quantum data, and analyzed it using either quantum or classical techniques.

    Quantum machine learning won out there, too, even though Google’s quantum computer is noisy, meaning errors can slip into calculations. Eventually, scientists plan to build quantum computers that can correct their own errors (SN: 6/22/20). But for now, even without that error correction, quantum machine learning prevailed.