More stories

  • Virtual CT scans cut patient radiation exposure in half during PET/CT studies

    A novel artificial intelligence method can be used to generate high-quality virtual PET/CT images, decreasing radiation exposure to the patient. Developed by researchers at the National Cancer Institute, the method bypasses the need for CT-based attenuation correction, potentially allowing for more frequent PET imaging to monitor disease and treatment progression without the radiation exposure from CT acquisition. This research was presented at the Society of Nuclear Medicine and Molecular Imaging 2022 Annual Meeting.
    Cancer patients often undergo several imaging studies throughout diagnosis and treatment, potentially including multiple PET/CT scans in close succession. The CT portion of the exam contributes to a patient’s overall radiation exposure yet is largely redundant. In this study, researchers sought to reduce or eliminate the need for low-dose CT in PET/CT by using an artificial intelligence model to generate virtual attenuation-corrected PET scans.
    The data cohort for artificial intelligence model development included 305 18F-DCFPyL PSMA PET/CT studies. Each study contained three scans: non-attenuation-corrected PET, attenuation-corrected PET, and low-dose CT. Studies were split into training (185), validation (60), and testing (60) sets. A 2D Pix2Pix generator was then used to generate synthetic attenuation-corrected PET scans (gen-PET) from the original non-attenuation-corrected PET scans.
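    As a rough sketch of what such an image-to-image translation setup can look like, here is a minimal Pix2Pix-style U-Net generator in PyTorch. The framework, network depth, and slice size are assumptions for illustration; the abstract does not describe the authors’ implementation.

    ```python
    # Illustrative Pix2Pix-style generator mapping a non-attenuation-corrected
    # PET slice to a synthetic attenuation-corrected one ("gen-PET"). Depth,
    # channel counts, and image size are placeholder assumptions.
    import torch
    import torch.nn as nn

    class UNetGenerator(nn.Module):
        """Tiny two-level U-Net with one skip connection."""
        def __init__(self):
            super().__init__()
            self.enc1 = nn.Sequential(nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2))
            self.enc2 = nn.Sequential(nn.Conv2d(64, 128, 4, 2, 1),
                                      nn.BatchNorm2d(128), nn.LeakyReLU(0.2))
            self.dec1 = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, 2, 1),
                                      nn.BatchNorm2d(64), nn.ReLU())
            self.dec2 = nn.Sequential(nn.ConvTranspose2d(128, 1, 4, 2, 1), nn.Tanh())

        def forward(self, x):
            e1 = self.enc1(x)                    # (B, 64, H/2, W/2)
            e2 = self.enc2(e1)                   # (B, 128, H/4, W/4)
            d1 = self.dec1(e2)                   # (B, 64, H/2, W/2)
            return self.dec2(torch.cat([d1, e1], dim=1))  # back to (B, 1, H, W)

    gen = UNetGenerator()
    nac = torch.randn(1, 1, 128, 128)  # stand-in non-attenuation-corrected slice
    gen_pet = gen(nac)                 # synthetic attenuation-corrected slice
    print(gen_pet.shape)               # torch.Size([1, 1, 128, 128])
    ```

    A full Pix2Pix setup would pair such a generator with a PatchGAN discriminator and train with an adversarial loss plus an L1 term against the true attenuation-corrected PET.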
    For qualitative evaluation, two nuclear medicine physicians reviewed 40 PET/CT studies in randomized order, blinded to whether each image was original attenuation-corrected PET or gen-PET. Each expert recorded the number and locations of PET-positive lesions and qualitatively assessed overall noise and image quality. The readers successfully detected lesions on the gen-PET images with reasonable sensitivity.
    “High-quality artificial intelligence-generated images preserve vital information from raw PET images without the additional radiation exposure from CT scans,” said Kevin Ma, PhD, a post-doctoral researcher at the National Cancer Institute in Bethesda, Maryland. “This opens opportunities for increasing the frequency and number of PET scans per patient per year, which could provide more accurate assessment for lesion detection, treatment efficacy, radiotracer effectivity, and other measures in research and patient care.”
    Abstract 151. “Artificial Intelligence-generated PET images for PSMA-PET/CT studies: Quantitative and Qualitative Assessment,” Kevin Ma, National Cancer Institute, National Institutes of Health, College Park, Maryland; Esther Mena, Liza Lindenberg, Deborah Citrin, William Dahut, James Gulley, Peter Choyke, Baris Turkbey, and Stephanie Harmon, National Cancer Institute, National Institutes of Health, Bethesda, Maryland; Peter Pinto, Urologic Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland; Bradford Wood, Radiology and Imaging Sciences, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, Maryland; and Ravi Madan, Genitourinary Malignancies Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland.

  • Researchers solve mystery surrounding dielectric properties of unique metal oxide

    A University of Minnesota Twin Cities-led research team has solved a longstanding mystery surrounding strontium titanate, an unusual metal oxide that can be an insulator, a semiconductor, or a metal. The research provides insight for future applications of this material to electronic devices and data storage.
    The paper is published in the Proceedings of the National Academy of Sciences (PNAS), a peer-reviewed multidisciplinary scientific journal.
    When an insulator like strontium titanate is placed between oppositely charged metal plates, the electric field between the plates causes the negatively charged electrons and the positively charged nuclei to line up in the direction of the field. This orderly alignment of electrons and nuclei is resisted by thermal vibrations, and the degree of order is measured by a fundamental quantity called the dielectric constant. At low temperatures, where thermal vibrations are weak, the dielectric constant is larger.
    In semiconductors, the dielectric constant plays an important role by providing effective “screening,” or protection, of the conducting electrons from other charged defects in the material. For applications in electronic devices, it is critical to have a large dielectric constant.
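    As a concrete illustration of screening (a textbook relation, not taken from the article): a defect of charge q embedded in a medium with relative dielectric constant ε_r produces a potential that is weaker than in vacuum by that factor.

    ```latex
    % Textbook screened Coulomb potential (illustrative, not from the article):
    % the larger the relative dielectric constant \varepsilon_r, the weaker the
    % field a conduction electron feels from a charged defect.
    V(r) = \frac{q}{4\pi\varepsilon_0\,\varepsilon_r\, r}
    ```

    With ε_r near 22,000, a charged defect’s potential is roughly four orders of magnitude weaker than in vacuum, which is why such a large dielectric constant is attractive for devices.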
    High-quality centimeter-size samples of strontium titanate exhibit a measured low-temperature dielectric constant of 22,000, which is quite large and encouraging for applications. But most applications in computers and other devices would call for thin films. Despite an enormous effort by many researchers using diverse growth methods, only a modest dielectric constant of 100 to 1,000 has been achieved in thin films of strontium titanate.
    In thin films, which can be just a few atomic layers thick, the interface between the film and substrate, or the film and the next layer up, can play an important role.

  • Engineers build artificial intelligence chip

    Imagine a more sustainable future, where cellphones, smartwatches, and other wearable devices don’t have to be shelved or discarded for a newer model. Instead, they could be upgraded with the latest sensors and processors that would snap onto a device’s internal chip — like LEGO bricks incorporated into an existing build. Such reconfigurable chipware could keep devices up to date while reducing our electronic waste.
    Now MIT engineers have taken a step toward that modular vision with a LEGO-like design for a stackable, reconfigurable artificial intelligence chip.
    The design comprises alternating layers of sensing and processing elements, along with light-emitting diodes (LEDs) that allow the chip’s layers to communicate optically. Other modular chip designs employ conventional wiring to relay signals between layers; such intricate connections are difficult, if not impossible, to sever and rewire, so those stackable designs are not reconfigurable.
    The MIT design uses light, rather than physical wires, to transmit information through the chip. The chip can therefore be reconfigured, with layers that can be swapped out or stacked on, for instance to add new sensors or updated processors.
    “You can add as many computing layers and sensors as you want, such as for light, pressure, and even smell,” says MIT postdoc Jihoon Kang. “We call this a LEGO-like reconfigurable AI chip because it has unlimited expandability depending on the combination of layers.”
    The researchers are eager to apply the design to edge computing devices — self-sufficient sensors and other electronics that work independently from any central or distributed resources such as supercomputers or cloud-based computing.

  • Energy harvesting to power the Internet of Things

    The wireless interconnection of everyday objects, known as the Internet of Things, depends on wireless sensor networks that need a low but constant supply of electrical energy. This can be provided by electromagnetic energy harvesters that generate electricity directly from the environment. Lise-Marie Lacroix from the Université de Toulouse, France, with colleagues from Toulouse, Grenoble and Atlanta, Georgia, USA, has used a mathematical technique, finite element simulation, to optimise the design of one such energy harvester so that it generates electricity as efficiently as possible. This work has now been published in the journal EPJ Special Topics.
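    The article does not detail the simulation itself, but the flavor of such a design optimization can be sketched with a much simpler, standard circuit-theory example (an illustration only, not the authors’ finite element model): sweeping the load resistance of an electromagnetic harvester to find the maximum delivered power.

    ```python
    # Toy harvester optimization (illustrative; not the authors' finite element
    # model). Sweep the load resistance and locate the power maximum; the coil
    # parameters below are made-up placeholders.
    import numpy as np

    V_OC = 0.5      # open-circuit voltage induced in the coil, volts (assumed)
    R_COIL = 100.0  # internal coil resistance, ohms (assumed)

    loads = np.linspace(1.0, 1000.0, 10000)        # candidate load resistances, ohms
    power = V_OC**2 * loads / (R_COIL + loads)**2  # power delivered to each load, watts

    best = loads[np.argmax(power)]
    print(f"optimal load ~ {best:.0f} ohm, peak power ~ {power.max() * 1e3:.3f} mW")
    # The peak lands at R_load == R_coil (maximum power transfer theorem); real
    # harvester design optimizes geometry and magnetics the same way, just with
    # finite element field solutions instead of a closed-form circuit.
    ```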


  • Learning and remembering movement

    From the moment we are born, and even before that, we interact with the world through movement. We move our lips to smile or to talk. We extend our hand to touch. We move our eyes to see. We wiggle, we walk, we gesture, we dance. How does our brain remember this wide range of motions? How does it learn new ones? How does it make the calculations necessary for us to grab a glass of water, without dropping it, squashing it, or missing it?
    Technion Professor Jackie Schiller from the Ruth and Bruce Rappaport Faculty of Medicine and her team examined the brain at the single-neuron level to shed light on this mystery. They found that computation happens not just in the interactions between neurons (nerve cells), but within each individual neuron. Each of these cells, it turns out, is not a simple switch but a complicated calculating machine. This discovery, published recently in the journal Science, promises to change not only our understanding of how the brain works but also our understanding of conditions ranging from Parkinson’s disease to autism. And if that weren’t enough, these same findings are expected to advance machine learning, offering inspiration for new architectures.
    Movement is controlled by the primary motor cortex of the brain. In this area, researchers are able to pinpoint exactly which neuron(s) fire at any given moment to produce the movement we see. Prof. Schiller’s team was the first to get even closer, examining the activity not of the whole neuron as a single unit, but of its parts.
    Every neuron has branched extensions called dendrites. These dendrites are in close contact with the terminals (called axons) of other nerve cells, allowing communication between them. A signal travels from the dendrites to the cell’s body and is then transferred onward through the axon. The number and structure of dendrites vary greatly between nerve cells, just as the crown of one tree differs from the crown of another.
    The particular neurons Prof. Schiller’s team focused on were the largest pyramidal neurons of the cortex. These cells, known to be heavily involved in movement, have a large dendritic tree with many branches, sub-branches, and sub-sub-branches. What the team discovered is that these branches do not merely pass information onwards. Each sub-sub-branch performs a calculation on the information it receives and passes the result to the bigger sub-branch. The sub-branch then performs a calculation on the information received from all its subsidiaries and passes that on. Moreover, multiple dendritic branchlets can interact with one another to amplify their combined computational product. The result is a complex calculation performed within each individual neuron. For the first time, Prof. Schiller’s team showed that the neuron is compartmentalised, and that its branches perform calculations independently.
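    A schematic way to picture this hierarchy in code (a common modeling abstraction, assumed here for illustration; it is not the team’s actual model): terminal branchlets apply their own nonlinearity to their inputs, parent branches combine branchlet outputs nonlinearly again, and the soma makes the final firing decision.

    ```python
    # Schematic "neuron as a hierarchy of computing subunits" (illustrative
    # abstraction, not the study's model). Sizes and weights are placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    W_BRANCHLETS = rng.normal(size=(4, 5))  # 4 branchlets x 5 synapses each

    def branchlet(inputs, weights):
        """Each terminal branchlet computes its own nonlinear summary."""
        return np.tanh(weights @ inputs)

    def neuron(synaptic_input):
        # Local computation in every sub-sub-branch (branchlet).
        local = np.array([branchlet(synaptic_input[i], W_BRANCHLETS[i])
                          for i in range(4)])
        # Each sub-branch integrates its own branchlets, nonlinearly again.
        branches = np.tanh([local[:2].sum(), local[2:].sum()])
        # The soma sums the branch outputs and thresholds: fire or stay silent.
        return branches.sum() > 0.5

    fired = neuron(rng.normal(size=(4, 5)))  # 5 synaptic inputs per branchlet
    print("neuron fired" if fired else "neuron silent")
    ```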
    “We used to think of each neuron as a sort of whistle, which either toots, or doesn’t,” Prof. Schiller explains. “Instead, we are looking at a piano. Its keys can be struck simultaneously, or in sequence, producing an infinity of different tunes.” This complex symphony playing in our brains is what enables us to learn and perform an infinity of different, complex and precise movements.
    Multiple neurodegenerative and neurodevelopmental disorders are likely linked to alterations in the neuron’s ability to process data. In Parkinson’s disease, the dendritic tree has been observed to undergo anatomical and physiological changes. In light of the new discoveries by the Technion team, we can understand that these changes reduce the neuron’s ability to perform parallel computation. In autism, it appears possible that the excitability of the dendritic branches is altered, resulting in the numerous effects associated with the condition. This novel understanding of how neurons work opens new research pathways into these and other disorders, with the hope of alleviating them.
    These same findings can also serve as an inspiration for the machine learning community. Deep neural networks, as their name suggests, attempt to create software that learns and functions somewhat similarly to a human brain. Although their advances constantly make the news, these networks are primitive compared to a living brain. A better understanding of how our brain actually works can help in designing more complex neural networks, enabling them to perform more complex tasks.
    This study was led by two of Prof. Schiller’s M.D.-Ph.D. students, Yara Otor and Shay Achvat, who contributed equally to the research. The team also included postdoctoral fellow Nate Cermak (now a neuroengineer) and Ph.D. student Hadas Benisty, as well as three collaborators: Professors Omri Barak, Yitzhak Schiller, and Alon Poleg-Polsky.
    The study was partially supported by the Israeli Science Foundation, Prince funds, the Rappaport Foundation, and the Zuckerman Postdoctoral Fellowship.

  • Scientists craft living human skin for robots

    From action heroes to villainous assassins, biohybrid robots made of both living and artificial materials have been at the center of many sci-fi fantasies, inspiring today’s robotic innovations. It’s still a long way until human-like robots walk among us in our daily lives, but scientists from Japan are bringing us one step closer by crafting living human skin on robots. The method, presented June 9 in the journal Matter, gave a robotic finger not only a skin-like texture but also water-repellent and self-healing functions.
    “The finger looks slightly ‘sweaty’ straight out of the culture medium,” says first author Shoji Takeuchi, a professor at the University of Tokyo, Japan. “Since the finger is driven by an electric motor, it is also interesting to hear the clicking sounds of the motor in harmony with a finger that looks just like a real one.”
    Looking “real” like a human is one of the top priorities for humanoid robots that are often tasked to interact with humans in healthcare and service industries. A human-like appearance can improve communication efficiency and evoke likability. While current silicone skin made for robots can mimic human appearance, it falls short when it comes to delicate textures like wrinkles and lacks skin-specific functions. Attempts at fabricating living skin sheets to cover robots have also had limited success, since it’s challenging to conform them to dynamic objects with uneven surfaces.
    “With that method, you have to have the hands of a skilled artisan who can cut and tailor the skin sheets,” says Takeuchi. “To efficiently cover surfaces with skin cells, we established a tissue molding method to directly mold skin tissue around the robot, which resulted in a seamless skin coverage on a robotic finger.”
    To craft the skin, the team first submerged the robotic finger in a cylinder filled with a solution of collagen and human dermal fibroblasts, the two main components that make up the skin’s connective tissues. Takeuchi says the study’s success lies within the natural shrinking tendency of this collagen and fibroblast mixture, which shrank and tightly conformed to the finger. Like paint primers, this layer provided a uniform foundation for the next coat of cells — human epidermal keratinocytes — to stick to. These cells make up 90% of the outermost layer of skin, giving the robot a skin-like texture and moisture-retaining barrier properties.
    The crafted skin had enough strength and elasticity to bear the dynamic movements of the robotic finger as it curled and stretched. The outermost layer was thick enough to be lifted with tweezers, and it repelled water, an advantage for tasks such as handling tiny, electrostatically charged pieces of polystyrene foam, a material often used in packaging. When wounded, the crafted skin could even self-heal, as human skin does, with the help of a collagen bandage, which gradually morphed into the skin and withstood repeated joint movements.
    “We are surprised by how well the skin tissue conforms to the robot’s surface,” says Takeuchi. “But this work is just the first step toward creating robots covered with living skin.” The developed skin is much weaker than natural skin and can’t survive long without constant nutrient supply and waste removal. Next, Takeuchi and his team plan to address those issues and incorporate more sophisticated functional structures within the skin, such as sensory neurons, hair follicles, nails, and sweat glands.
    “I think living skin is the ultimate solution to give robots the look and touch of living creatures since it is exactly the same material that covers animal bodies,” says Takeuchi.
    This work was supported by funding from JSPS Grants-in-Aid for Scientific Research (KAKENHI) and JSPS Grant-in-Aid for Early-Career Scientists (KAKENHI).
    Story Source: Materials provided by Cell Press.

  • Researchers demonstrate 40-channel optical communication link

    Researchers have demonstrated a silicon-based optical communication link that combines two multiplexing technologies to create 40 optical data channels that can simultaneously move data. The new chip-scale optical interconnect can transmit about 400 GB of data per second — the equivalent of about 100,000 streaming movies. This could improve data-intensive internet applications from video streaming services to high-capacity transactions for the stock market.
    “As demands to move more information across the internet continue to grow, we need new technologies to push data rates further,” said Peter Delfyett, who led the University of Central Florida College of Optics and Photonics (CREOL) research team. “Because optical interconnects can move more data than their electronic counterparts, our work could enable better and faster data processing in the data centers that form the backbone of the internet.”
    A multi-institutional group of researchers describes the new optical communication link in the Optica Publishing Group journal Optics Letters. It achieves 40 channels by combining a frequency comb light source, based on a new photonic crystal resonator developed by the National Institute of Standards and Technology (NIST), with an optimized mode-division multiplexer designed by researchers at Stanford University. Each channel can carry its own stream of information, much as different radio channels, or frequencies, transmit different stations.
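    The arithmetic behind the channel count can be sketched as follows; the 8 × 5 split between comb wavelengths and spatial modes, and the per-channel data rate, are illustrative assumptions rather than figures from the paper.

    ```python
    # Toy channel-count arithmetic for a two-dimensional optical interconnect.
    # The wavelength/mode split and the per-channel rate are assumptions, not
    # the paper's reported configuration.
    N_COMB_LINES = 8              # wavelength channels from the frequency comb (assumed)
    N_MODES = 5                   # spatial modes from the mode-division multiplexer (assumed)
    RATE_PER_CHANNEL_GBPS = 10.0  # data rate per channel, Gbit/s (assumed)

    channels = N_COMB_LINES * N_MODES
    aggregate = channels * RATE_PER_CHANNEL_GBPS
    print(f"{channels} channels, {aggregate:.0f} Gbit/s aggregate")  # 40 channels
    ```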
    “We show that these new frequency combs can be used in fully integrated optical interconnects,” said Chinmay Shirpurkar, co-first author of the paper. “All the photonic components were made from silicon-based material, which demonstrates the potential for making optical information handling devices from low-cost, easy-to-manufacture optical interconnects.”
    In addition to improving internet data transmission, the new technology could also be used to make faster optical computers that could provide the high levels of computing power needed for artificial intelligence, machine learning, large-scale emulation and other applications.
    Using multiple light dimensions
    The new work involved research teams led by Firooz Aflatouni of the University of Pennsylvania, Scott B. Papp from NIST, Jelena Vuckovic from Stanford University and Delfyett from CREOL. It is part of the DARPA Photonics in the Package for Extreme Scalability (PIPES) program, which aims to use light to vastly improve the digital connectivity of packaged integrated circuits using microcomb-based light sources.

  • Artificial intelligence reveals a never-before-described 3D structure in rotavirus spike protein

    Of the three groups of rotavirus that cause gastroenteritis in people, called groups A, B and C, groups A and C affect mostly children and are the best characterized. By contrast, little is known about group B, which causes severe diarrhea predominantly in adults, or about the tip of its spike protein, the VP8* domain, which mediates the infection of cells in the gut.
    “Determining the structure of VP8* in group B rotavirus is important because it will help us understand how the virus infects gastrointestinal cells and design strategies to prevent and treat this infection that causes severe diarrheal outbreaks,” said corresponding author Dr. B. V. Venkataram Prasad, professor of biochemistry and molecular biology at Baylor College of Medicine.
    The team’s first step was to determine the 3D structure of VP8* B using X-ray crystallography, a laborious and time-consuming process. However, this traditional approach was unsuccessful in this case. The researchers then turned to a recently developed artificial intelligence-based computational program called AlphaFold2.
    “AlphaFold2 predicts the 3D structure of proteins according to their genetic sequence,” said first author and co-corresponding author Dr. Liya Hu, assistant professor of biochemistry and molecular biology at Baylor. “We knew that the protein sequence of VP8* of rotavirus group B was about 10% similar to the sequences of VP8* of rotavirus A and C, so we expected differences in the 3D structure as well. But we were surprised when AlphaFold2 predicted a 3D structure for the VP8* B that was not just totally different from that of the VP8* domain in rotavirus A and C, but also that no other protein before had been reported to have this structure.”
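    For readers who want to try this kind of sequence-to-structure prediction, a minimal sketch using ColabFold, an open-source AlphaFold2 pipeline, is shown below. This assumes colabfold_batch is installed and on the PATH, and the FASTA sequence is a made-up placeholder, not the actual VP8* B sequence; the article does not say which AlphaFold2 setup the authors used.

    ```python
    # Minimal structure-prediction sketch via ColabFold (an open-source
    # AlphaFold2 pipeline); assumes `colabfold_batch` is installed. The FASTA
    # sequence below is a placeholder, NOT the real VP8* B domain.
    import subprocess
    from pathlib import Path

    fasta = ">VP8_B_placeholder\nMKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ\n"
    Path("vp8b.fasta").write_text(fasta)

    # colabfold_batch <input fasta> <output dir>: writes predicted PDB models
    # plus per-residue confidence scores into the output directory.
    subprocess.run(["colabfold_batch", "vp8b.fasta", "vp8b_out"], check=True)
    ```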
    With this information in hand, the researchers went back to the lab bench and, using X-ray crystallography, experimentally confirmed that the structure of VP8* B predicted by AlphaFold2 indeed coincided with the actual structure of the protein.
    How rotavirus infects cells
    Previous research has shown that rotavirus A and C infect cells by using the VP8* domain to bind to specific sugar components on histo-blood group antigens, including the A, B, AB and O blood groups, present in many cells in the body. It has been proposed that the ability of different rotaviruses to bind to different sugars on the histo-blood group antigens might explain why some of these viruses specifically infect young children while others affect other populations. Unlike VP8* A and VP8* C, the sugar specificity of VP8* B had not been characterized until now.