More stories


    CT-derived body composition with deep learning predicts cardiovascular events

    According to ARRS’ American Journal of Roentgenology (AJR), fully automated and normalized body composition analysis of abdominal CT shows promise for augmenting traditional cardiovascular risk prediction models.
    “Visceral fat area from fully automated and normalized analysis of abdominal CT examinations predicts subsequent myocardial infarction or stroke in Black and White patients, independent of traditional weight metrics, and should be considered as an adjunct to BMI in risk models,” wrote first author Kirti Magudia, MD, PhD, now of the department of radiology at Duke University School of Medicine.
    Dr. Magudia and colleagues’ retrospective study included 9,752 outpatients (5,519 women, 4,233 men; 890 self-reported Black, 8,862 self-reported White; mean age, 53.2 years) who underwent routine abdominal CT at Brigham and Women’s Hospital or Massachusetts General Hospital between January and December 2012, without a major cardiovascular or oncologic diagnosis within 3 months of examination. Fully automated deep learning body composition analysis was performed at the L3 vertebral level to determine three body composition areas: skeletal muscle area, visceral fat area, and subcutaneous fat area. Subsequent myocardial infarction or stroke was established via electronic health records.
    Ultimately, after normalization for age, sex, and race, visceral fat area derived from routine CT was associated with risk of myocardial infarction (HR 1.31 [1.03-1.67], p=.04 for overall effect) and stroke (HR 1.46 [1.07-2.00], p=.04 for overall effect) in multivariable models in Black and White patients; normalized weight, BMI, skeletal muscle area, and subcutaneous fat area were not.
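The key preprocessing step described above is normalization of each body composition metric within age, sex, and race strata, so that a patient's value is expressed relative to demographically matched peers. Below is a minimal pure-Python sketch of that idea using stratified z-scores; the field names and grouping scheme are illustrative assumptions, not the study's actual reference values or statistical model.

```python
from collections import defaultdict
from statistics import mean, pstdev

def stratified_z_scores(records, value_key="visceral_fat_area"):
    """Normalize a body-composition metric within age/sex/race strata.

    Each record is a dict with 'age_group', 'sex', 'race', and the metric.
    Returns new records with an added '<metric>_z' field. The schema here
    is illustrative, not the study's actual data layout.
    """
    groups = defaultdict(list)
    for r in records:
        groups[(r["age_group"], r["sex"], r["race"])].append(r[value_key])

    # Per-stratum mean and (population) standard deviation
    stats = {k: (mean(v), pstdev(v)) for k, v in groups.items()}

    out = []
    for r in records:
        mu, sigma = stats[(r["age_group"], r["sex"], r["race"])]
        z = (r[value_key] - mu) / sigma if sigma > 0 else 0.0
        out.append({**r, value_key + "_z": z})
    return out

patients = [
    {"age_group": "50-59", "sex": "F", "race": "White", "visceral_fat_area": 120.0},
    {"age_group": "50-59", "sex": "F", "race": "White", "visceral_fat_area": 180.0},
    {"age_group": "50-59", "sex": "F", "race": "White", "visceral_fat_area": 150.0},
]
normalized = stratified_z_scores(patients)
```

In the study itself, z-scores like these would then enter a multivariable survival model (the hazard ratios quoted above) rather than be used directly.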
    Noting that their large study demonstrates a pipeline for body composition analysis, with age-, sex-, and race-specific reference values, that can add prognostic utility in clinical practice, the authors of this AJR article concluded: “we anticipate that fully automated body composition analysis using machine learning could be widely adopted to harness latent value from routine imaging studies.”
    Story Source:
    Materials provided by American Roentgen Ray Society. Note: Content may be edited for style and length.


    Simple technique ushers in long-sought class of semiconductors

    Breakthroughs in modern microelectronics depend on understanding and manipulating the movement of electrons in metal. Reducing the thickness of metal sheets to the order of nanometers can enable exquisite control over how the metal’s electrons move. In so doing, one can impart properties that aren’t seen in bulk metals, such as ultrafast conduction of electricity. Now, researchers from Osaka University and collaborating partners have synthesized a novel class of nanostructured superlattices. This study enables an unusually high degree of control over the movement of electrons within metal semiconductors, which promises to enhance the functionality of everyday technologies.
    Precisely tuning the architecture of metal nanosheets, and thus facilitating advanced microelectronics functionalities, remains an ongoing line of work worldwide. In fact, several Nobel prizes have been awarded for work in this area. Researchers conventionally synthesize nanostructured superlattices — regularly alternating layers of metals, sandwiched together — from materials of the same dimension; for example, sandwiched 2D sheets. A key aspect of the present researchers’ work is its facile fabrication of hetero-dimensional superlattices; for example, 1D nanoparticle chains sandwiched within 2D nanosheets.
    “Nanoscale hetero-dimensional superlattices are typically challenging to prepare, but can exhibit valuable physical properties, such as anisotropic electrical conductivity,” explains Yung-Chang Lin, senior author. “We developed a versatile means of preparing such structures, and in so doing we will inspire synthesis of a wide range of custom superstructures.”
    The researchers used chemical vapor deposition — a common nanofabrication technique in industry — to prepare vanadium-based superlattices. These magnetic semiconductors exhibit what is known as an anisotropic anomalous Hall effect (AHE): meaning directionally focused charge accumulation under in-plane magnetic field conditions (in which the conventional Hall effect isn’t observed). Usually, the AHE is observed only at ultra-low temperatures. In the present research, the AHE was observed at room temperature and higher, up to at least the boiling point of water. Generation of the AHE at practical temperatures will facilitate its use in everyday technologies.
    “A key promise of nanotechnology is its provision of functionalities that you can’t get from bulk materials,” states Lin. “Our demonstration of an unconventional anomalous Hall effect at room temperature and above opens up a wealth of possibilities for future semiconductor technology, all accessible by conventional nanofabrication processes.”
    The present work will help improve the density of data storage, the efficiency of lighting, and the speed of electronic devices. By precisely controlling the nanoscale architecture of metals that are commonly used in industry, researchers can fabricate uniquely versatile technology that surpasses the functionality of natural materials.
    Story Source:
    Materials provided by Osaka University. Note: Content may be edited for style and length.


    Artificial intelligence model outperforms clinicians in diagnosing pediatric ear infections

    An artificial-intelligence (AI) model built at Mass Eye and Ear was shown to be significantly more accurate than doctors at diagnosing pediatric ear infections in the first head-to-head evaluation of its kind, a research team working to develop the model for clinical use reported.
    According to a new study published August 16 in Otolaryngology-Head and Neck Surgery, the model, called OtoDX, was more than 95 percent accurate in diagnosing an ear infection in a set of 22 test images compared to 65 percent accuracy among a group of clinicians consisting of ENTs, pediatricians and primary care doctors, who reviewed the same images.
    When tested in a dataset of more than 600 inner ear images, the AI model had a diagnostic accuracy of more than 80 percent, representing a significant leap over the average accuracy of clinicians reported in medical literature.
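Head-to-head accuracy on a small held-out image set, as described above, is simple to compute: the fraction of cases where the predicted diagnosis matches ground truth. The sketch below illustrates the arithmetic; the specific correct/incorrect counts are illustrative assumptions, not figures from the study (though 21 of 22 correct is one way to exceed the 95 percent reported for the model).

```python
def accuracy(predictions, labels):
    """Fraction of cases where the predicted diagnosis matches ground truth."""
    assert len(predictions) == len(labels)
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

# Illustrative numbers only: on a 22-image test set, 21 correct answers
# give 21/22 ≈ 95.5%, consistent with "more than 95 percent"; the 65%
# clinician figure corresponds to roughly 14 of 22 correct.
labels = [1] * 11 + [0] * 11                 # hypothetical ground truth
model_preds = labels[:21] + [1 - labels[21]]  # 21 of 22 correct
model_acc = accuracy(model_preds, labels)
```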
    The model utilizes a type of AI called deep learning and was built from hundreds of photographs collected from children prior to undergoing surgery at Mass Eye and Ear for recurrent ear infections or fluid in the ears. The results signify a major step towards the development of a diagnostic tool that can one day be deployed to clinics to assist doctors during patient evaluations, according to the authors. An AI-based diagnostic tool can give providers, such as pediatricians and urgent care clinicians, an additional test to better inform their clinical decision-making.
    “Ear infections are incredibly common in children yet frequently misdiagnosed, leading to delays in care or unnecessary antibiotic prescriptions,” said lead study author Matthew Crowson, MD, an otolaryngologist and artificial intelligence researcher at Mass Eye and Ear, and assistant professor of Otolaryngology-Head and Neck Surgery at Harvard Medical School. “This model won’t replace the judgment of clinicians but can serve to supplement their expertise and help them be more confident in their treatment decisions.”
    A common condition that is difficult to diagnose
    Ear infections occur from a buildup of bacteria inside the middle ear. According to the National Institute on Deafness and Other Communication Disorders, at least five out of six children in the United States have had at least one ear infection before the age of three. When left untreated, ear infections can lead to hearing loss, developmental delays, complications like meningitis, and, in some developing nations, death. Conversely, overtreating children when they don’t have an ear infection can lead to antibiotic resistance and render the medications ineffective against future infections. This latter problem is of significant public health importance.


    New algorithm uncovers the secrets of cell factories

    Drug molecules and biofuels can be made to order by living cell factories, where biological enzymes do the job. Now researchers at Chalmers University of Technology have developed a computer model that can predict how fast enzymes work, making it possible to find the most efficient living factories, as well as to study difficult diseases.
    Enzymes are proteins found in all living cells. Their job is to act as catalysts that increase the rate of specific chemical reactions that take place in the cells. The enzymes thus play a crucial role in making life on earth work and can be compared to nature’s small factories. They are also used in detergents, and to manufacture, among other things, sweeteners, dyes and medicines. The potential uses are almost endless, but are hindered by the fact that it is expensive and time consuming to study the enzymes.
    “To study every natural enzyme with experiments in a laboratory would be impossible; they are simply too many. But with our algorithm, we can predict which enzymes are most promising just by looking at the sequence of amino acids they are made up of,” says Eduard Kerkhoven, researcher in systems biology at Chalmers University of Technology and the study’s lead author.
    Only the most promising enzymes need to be tested
    The enzyme turnover number, or kcat value, describes how fast and efficiently an enzyme works and is essential for understanding a cell’s metabolism. In the new study, Chalmers researchers have developed a computer model that can quickly calculate the kcat value. The only information needed is the order of the amino acids that build up the enzyme — something that is often widely available in open databases. After the model makes a first selection, only the most promising enzymes need to be tested in the lab.
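The crucial point above is that the amino-acid sequence alone is enough input for the model. A common first step in any sequence-only predictor is turning a variable-length sequence into a fixed-length numeric vector; the minimal sketch below uses amino-acid composition fractions. This is a generic featurization, not the Chalmers model itself, and the example sequence is hypothetical.

```python
# The 20 standard amino acids, one-letter codes
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(sequence):
    """Return the fraction of each of the 20 standard amino acids.

    A fixed-length (20-element) feature vector that a kcat-prediction
    model could take as input, regardless of sequence length.
    """
    sequence = sequence.upper()
    n = len(sequence)
    return [sequence.count(aa) / n for aa in AMINO_ACIDS]

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # hypothetical enzyme fragment
features = composition(seq)
```

A real predictor would feed vectors like this (or richer sequence embeddings) into a trained regression model that outputs an estimated kcat.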
    Given the number of naturally occurring enzymes, the researchers believe that the new calculation model may be of great importance.


    PetTrack lets owners know exactly where their dog is

    The pandemic gave people a lot more time with their dogs and cats, but the return to the office has disrupted that connection. Pet cameras can help, but one is needed in every room, and they don’t really tell owners what their furry friend has been up to unless the owners review all the footage. Now, researchers at the Georgia Institute of Technology have created a new device that can put pet owners at ease.
    PetTrack uses a combination of sensors to give the accurate, real-time indoor location of an animal. Ultra-wideband (UWB) radio wireless sensors locate the pet and accelerometers determine if it’s sitting or moving regardless of objects or walls in the way, giving owners more detail on what their pet is doing than a camera or GPS. All of this is located on a small sensor that can be put on a collar for minimal invasiveness and can be viewed via a compatible smartphone app.
    “PetTrack comprises two things: one is knowing the pet’s indoor location and second is trying to understand their activity,” said Ashutosh Dhekne, an assistant professor in the School of Computer Science (SCS).
    Dhekne and his students presented the research in the paper “PetTrack: Tracking Pet Location and Activity Indoors” at BodySys 2022 in July, a workshop on body-centric computing systems that was part of MobiSys 2022 in Portland, Oregon.
    How PetTrack Works
    PetTrack’s innovative combination of sensors makes it unique compared to other pet-monitoring devices. The UWB radio wireless signal locates where the pet is in the home from up to 100 feet away, while the accelerometer acts as an inertial sensor that can track the pet’s pose. This means owners can learn whether their pet is standing, sitting, or even lying down.
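The indoor-location half of a system like this typically works by measuring the tag's distance to several fixed UWB anchors and solving for position. Below is a textbook 2-D trilateration sketch under that assumption — three anchors, ranges linearized by subtracting the first equation, then a 2x2 solve. It illustrates the principle only and is not PetTrack's actual algorithm.

```python
def trilaterate_2d(anchors, distances):
    """Estimate a 2-D position from ranges to three fixed UWB anchors.

    Subtracting the first range equation from the others cancels the
    quadratic terms, leaving a 2x2 linear system solved with Cramer's
    rule. A textbook sketch, not PetTrack's implementation.
    """
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = distances
    # 2*(ai - a0) . p = d0^2 - di^2 + |ai|^2 - |a0|^2
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = d0**2 - d1**2 + (x1**2 + y1**2) - (x0**2 + y0**2)
    b2 = d0**2 - d2**2 + (x2**2 + y2**2) - (x0**2 + y0**2)
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# A collar tag at (3, 4) metres, anchors in the corners of a room:
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [((x - true_pos[0]) ** 2 + (y - true_pos[1]) ** 2) ** 0.5
         for x, y in anchors]
est = trilaterate_2d(anchors, dists)
```

In practice UWB ranges are noisy, so a real system would fuse more anchors (least squares) and the accelerometer data for pose, as described above.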


    Entangled photons tailor-made

    In order to effectively use a quantum computer, a larger number of specially prepared — in technical terms: entangled — basic building blocks are needed to carry out computational operations. A team of physicists at the Max Planck Institute of Quantum Optics in Garching has now for the very first time demonstrated this task with photons emitted by a single atom. Following a novel technique, the researchers generated up to 14 entangled photons in an optical resonator, which can be prepared into specific quantum physical states in a targeted and very efficient manner. The new method could facilitate the construction of powerful and robust quantum computers, and serve the secure transmission of data in the future.
    The phenomena of the quantum world, which often seem bizarre from the perspective of the common everyday world, have long since found their way into technology. For example, entanglement: a quantum-physical connection between particles that links them in a strange way over arbitrarily long distances. It can be used, for example, in a quantum computer — a computing machine that, unlike a conventional computer, can perform numerous mathematical operations simultaneously. However, in order to use a quantum computer profitably, a large number of entangled particles must work together. They are the basic elements for calculations, so-called qubits.
    “Photons, the particles of light, are particularly well suited for this because they are robust by nature and easy to manipulate,” says Philip Thomas, a doctoral student at the Max Planck Institute of Quantum Optics (MPQ) in Garching near Munich. Together with colleagues from the Quantum Dynamics Division led by Prof. Gerhard Rempe, he has now succeeded in taking an important step towards making photons usable for technological applications such as quantum computing: For the first time, the team generated up to 14 entangled photons in a defined way and with high efficiency.
    One atom as a photon source
    “The trick to this experiment was that we used a single atom to emit the photons and interweave them in a very specific way,” says Thomas. To do this, the Max Planck researchers placed a rubidium atom at the center of an optical cavity — a kind of echo chamber for electromagnetic waves. With laser light of a certain frequency, the state of the atom could be precisely addressed. Using an additional control pulse, the researchers also specifically triggered the emission of a photon that is entangled with the quantum state of the atom.
    “We repeated this process several times and in a previously determined manner,” Thomas reports. In between, the atom was manipulated in a certain way — in technical jargon: rotated. In this way, it was possible to create a chain of up to 14 light particles that were entangled with each other by the atomic rotations and brought into a desired state. “To the best of our knowledge, the 14 interconnected light particles are the largest number of entangled photons that have been generated in the laboratory so far,” Thomas emphasises.
    Deterministic generation process
    But it is not only the quantity of entangled photons that marks a major step towards the development of powerful quantum computers — the way they are generated is also very different from conventional methods. “Because the chain of photons emerged from a single atom, it could be produced in a deterministic way,” Thomas explains. This means: in principle, each control pulse actually delivers a photon with the desired properties. Until now, the entanglement of photons usually took place in special, non-linear crystals. The shortcoming: there, the light particles are essentially created randomly and in a way that cannot be controlled. This also limits the number of particles that can be bundled into a collective state.
    The method used by the Garching team, on the other hand, allows basically any number of entangled photons to be generated. In addition, the method is particularly efficient — another important measure for possible future technical applications: “By measuring the photon chain produced, we were able to prove an efficiency of almost 50 percent,” says Philip Thomas. This means: almost every second “push of a button” on the rubidium atom delivered a usable light particle — far more than has been achieved in previous experiments. “All in all, our work removes a long-standing obstacle on the path to scalable, measurement-based quantum computing,” says Gerhard Rempe, director of the department, summarising the results.
    More space for quantum communication
    The scientists at the MPQ want to remove yet another hurdle. Complex computing operations, for instance, would require at least two atoms as photon sources in the resonator. The quantum physicists speak of a two-dimensional cluster state. “We are already working on tackling this task,” reveals Philip Thomas. The Max Planck researcher also emphasises that possible technical applications extend far beyond quantum computing: “Another application example is quantum communication” — the tap-proof transmission of information, for example by light in an optical fibre. There, the light experiences unavoidable losses during its propagation due to optical effects such as scattering and absorption — which limits the distance over which data can be transported. Using the method developed in Garching, quantum information could be packaged in entangled photons and could also survive a certain amount of light loss — enabling secure communication over greater distances.


    Researchers use computer modeling to understand how self-renewal processes impact skin cell evolution

    All normal human tissues acquire mutations over time. Some of these mutations may be driver mutations that promote the development of cancer through increased proliferation and survival, while other mutations may be neutral passenger mutations that have no impact on cancer development. Currently, it is unclear how the normal self-renewal process of the skin called homeostasis impacts the development and evolution of gene mutations in cells. In a new study published in the Proceedings of the National Academy of Sciences (PNAS), Moffitt Cancer Center used mathematical and computer modeling to demonstrate the impact of skin homeostasis on driver and passenger mutations.
    Skin cells undergo a normal life and death cycle of homeostasis. Cells in the lower basal layer proliferate, grow and move into the upper layers of the skin while undergoing cell differentiation and maturation. Eventually, the skin cells migrate into the uppermost layer of the skin where they form a protective barrier, die and are sloughed off.
    Homeostasis is typically maintained in the skin. Its thickness and growth do not significantly change over time, despite the accumulation of mutations. This is different from other tissue types that undergo increased growth and proliferation due to mutations. However, scientists are not sure how cell mutations in the skin evolve and form subclones, or groups of cells derived from a single parent cell, without impacting normal skin homeostasis.
    Moffitt researchers developed a computer simulation model to address these uncertainties and improve their understanding of the impact of skin homeostasis on gene mutations and subclone evolution. Computer modeling can address complex biological relationships among cells that cannot be studied in typical laboratory settings. The researchers built their model based on the normal structure of the skin, including a constant cell number based on self-renewal, a constant tissue height and a constant number of immature stem cells. They incorporated patient mutation data using GATTACA, a tool that introduces and tracks mutations, into their model to assess how mutations and UV exposure impact skin homeostasis and clonal populations. They also investigated the impact of two genes that are commonly mutated in nonmelanoma skin cancer, NOTCH1 and TP53.
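The core homeostasis constraint described above — constant cell number, with every division balanced by a death — is what drives neutral drift in the stem cell pool. The toy agent-based sketch below illustrates that mechanism: over time, most clones are lost by chance and the survivors grow old and large. It is a minimal illustration of the principle, not the Moffitt model or GATTACA.

```python
import random

def simulate_homeostasis(n_cells=200, steps=20000, seed=1):
    """Neutral-drift toy model of a homeostatic stem-cell pool.

    Every step, one randomly chosen cell divides and its daughter
    replaces another randomly chosen cell, so the population size never
    changes. Each cell carries a clone label; with no fitness
    differences, clone sizes fluctuate purely by chance and most
    clones are eventually lost.
    """
    rng = random.Random(seed)
    cells = list(range(n_cells))  # every cell starts as its own clone
    for _ in range(steps):
        parent = rng.randrange(n_cells)   # a division...
        victim = rng.randrange(n_cells)   # ...paired with a death
        cells[victim] = cells[parent]
    return cells

cells = simulate_homeostasis()
surviving_clones = len(set(cells))
```

Even this stripped-down version reproduces the qualitative result reported below: most new clones disappear, and the few that persist account for a disproportionate share of the tissue.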
    “This study prompted the creation of several new tools, such as GATTACA, which allows you to induce and track base pair resolution mutations in any agent-based modeling framework with temporal, spatial and genomic positional information,” said the study’s lead author Ryan Schenck, Ph.D., a mathematical oncology programmer in Moffitt’s Department of Integrated Mathematical Oncology. “Along with my lab colleague Dr. Chandler Gatenbee, we also developed EvoFreq to help visualize evolutionary dynamics, now being used in many of our publications.”
    The researchers demonstrated that both passenger and driver mutations exist in subclones within the skin with a similar size and frequency. Most mutations that occur in immature stem cells are lost or are present in smaller subclones due to random stem cell death and replacement, while larger subclones are likely due to persistence and older age. Large NOTCH1 and TP53 subclones are rarely observed because they would destroy the homeostasis of the skin; however, those large subclones that do exist likely arose during an early age.
    The researchers used their model to determine when subclones with NOTCH1 and TP53 mutations have a selective fitness advantage over neighboring cells without mutations. They showed using their model that subclones with NOTCH1 mutations may prevent neighboring cells from dividing into their positions, while subclones with TP53 mutations may be resistant to cell death from UV exposure. The researchers hope that their model can be used to study other processes impacted by homeostasis that cannot be studied with typical laboratory approaches.
    “This work broadens our current understanding of selection and fitness acting in a homeostatic, normal tissue, where subclone size more reflects persistence rather than selective sweeps, with larger subclones being predominately older subclones,” said Alexander Anderson, Ph.D., chair of Moffitt’s Department of Integrated Mathematical Oncology. “This model strives to provide a means to explore mechanisms of increased fitness in normal, homeostatic tissue and provides a simple framework for future researchers to model their hypothesized mechanisms within squamous tissue.”
    This study was supported by grants received from the National Cancer Institute (U54CA193489, U54CA217376, P01 CA196569, U01CA23238), the Wellcome Trust (108861/7/15/7, 206314/Z/17/Z), the Wellcome Centre for Human Genetics (203141/7/16/7) and Moffitt’s Center of Excellence for Evolutionary Therapy.


    Successful labor outcomes in expectant mothers using AI

    Mayo Clinic researchers have found that using artificial intelligence (AI) algorithms to analyze patterns of changes in women who are in labor can help identify whether a successful vaginal delivery will occur with good outcomes for mom and baby. The findings were published in PLOS ONE.
    “This is the first step to using algorithms in providing powerful guidance to physicians and midwives as they make critical decisions during the labor process,” says Abimbola Famuyide, M.D., a Mayo Clinic OB-GYN and senior author of the study. “Once validated with further research, we believe the algorithm will work in real time, meaning every input of new data during an expectant woman’s labor will automatically recalculate the risk of an adverse outcome. This may help reduce the rate of cesarean delivery, and maternal and neonatal complications.”
    Women in labor understand the importance of periodic cervical examinations to gauge the progress of labor. This is an essential step, as it helps obstetricians predict the likelihood of a vaginal delivery in a specified period of time. The problem is that cervical dilation in labor varies from person to person, and many important factors can determine the course of labor.
    In the study, researchers used data from the Eunice Kennedy Shriver National Institute of Child Health and Human Development’s multicenter Consortium on Safe Labor database to create the prediction model. They examined more than 700 clinical and obstetric factors in 66,586 deliveries from the time of admission and during labor progression.
    The risk-prediction model consisted of data known at the time of admission in labor, including patient baseline characteristics, the patient’s most recent clinical assessment, as well as cumulative labor progress from admission. The researchers explain that the models may provide an alternative to conventional labor charts and promote individualization of clinical decisions using baseline and labor characteristics of each patient.
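The real-time behavior described above — the risk estimate being recomputed every time a new observation arrives — can be sketched with a simple logistic risk score. The two features and all coefficients below are hypothetical placeholders for illustration; the actual Mayo model was trained on hundreds of candidate factors and its form is not reproduced here.

```python
import math

# Hypothetical coefficients for illustration only; the study screened
# more than 700 clinical and obstetric factors to build its model.
COEFFS = {
    "intercept": -3.0,
    "hours_in_labor": 0.12,
    "cervical_dilation_cm": -0.25,
}

def adverse_outcome_risk(features):
    """Logistic risk score in [0, 1], recomputed from the latest data."""
    z = COEFFS["intercept"]
    for name, value in features.items():
        z += COEFFS[name] * value
    return 1.0 / (1.0 + math.exp(-z))

# The score is recalculated as labor progresses and new exam data arrives:
early = adverse_outcome_risk({"hours_in_labor": 2, "cervical_dilation_cm": 4})
later = adverse_outcome_risk({"hours_in_labor": 14, "cervical_dilation_cm": 5})
```

The design point is that each cervical exam or vital-sign update simply becomes a fresh function call, which is what makes continuous bedside risk updating feasible.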
    “It is very individualized to the person in labor,” says Dr. Famuyide. He adds that this will be a powerful tool for midwives and physicians remotely as it will allow time for transfers of patients to occur from rural or remote settings to the appropriate level of care.
    “The AI algorithm’s ability to predict individualized risks during the labor process will not only help reduce adverse birth outcomes but it can also reduce healthcare costs associated with maternal morbidity in the U.S., which has been estimated to be over $30 billion,” adds Bijan Borah, Ph.D., Robert D. and Patricia E. Kern Scientific Director for Health Services and Outcomes Research.
    Validation studies are ongoing to assess the outcomes of these models following their implementation in labor units.
    This study was conducted in collaboration with scientists from the Mayo Clinic Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery. The authors have declared no competing or potential conflicts of interest.
    Story Source:
    Materials provided by Mayo Clinic. Original written by Kelley Luckstein. Note: Content may be edited for style and length.