More stories

  • MOGONET provides a more holistic view of biological processes underlying disease

    Genomics, proteomics, metabolomics, transcriptomics — rapid advances in high-throughput biomedical technologies have enabled the collection of data with unprecedented detail from a growing number of omics. But how best to take advantage of the interactions and complementary information in omics data?
    To fully utilize the advances in omics technologies to achieve a more comprehensive understanding of the biological processes underlying human diseases, researchers from Regenstrief Institute and Indiana, Purdue and Tulane Universities have developed and tested MOGONET, a novel multi-omics data analysis algorithm and computational methodology. Integrating data from various omics provides a more holistic view of biological processes underlying human diseases. The creators have made MOGONET open source, free and accessible to all researchers.
    In a study published in Nature Communications, the scientists demonstrated that MOGONET, short for Multi-Omics Graph cOnvolutional NETworks, outperforms existing supervised multi-omics integrative analysis approaches in different biomedical classification applications using mRNA expression data, DNA methylation data, and microRNA expression data.
    They also determined that MOGONET can identify important omics signatures and biomarkers from different omics data types.
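    For readers curious what such an approach looks like in code, below is a minimal sketch of the general idea: one graph convolutional network per omics view, operating over a patient-similarity graph, with the per-view predictions fused for classification. This is not the authors' released implementation; the cosine-similarity kNN graph, layer sizes, and simple averaging fusion are illustrative assumptions (MOGONET itself learns the cross-view combination with a dedicated fusion network).

```python
# Minimal sketch of a multi-omics GCN classifier in the spirit of MOGONET.
# Not the authors' released code: the cosine-similarity kNN patient graph,
# layer sizes, and averaging fusion are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def knn_graph(x, k=5):
    """Row-normalized adjacency from a cosine-similarity kNN patient graph."""
    sim = F.normalize(x, dim=1) @ F.normalize(x, dim=1).T   # cosine similarity
    topk = sim.topk(k + 1, dim=1).indices                   # self + k neighbours
    adj = torch.zeros_like(sim).scatter_(1, topk, 1.0)
    adj = (adj + adj.T).clamp(max=1.0)                      # symmetrize
    return adj / adj.sum(dim=1, keepdim=True)               # row-normalize

class GCN(nn.Module):
    """Two-layer graph convolution over one omics view."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        h = F.relu(self.w1(adj @ x))   # aggregate neighbours, then transform
        return self.w2(adj @ h)        # per-patient class logits

# Toy data: 100 patients, three omics views (mRNA, methylation, miRNA), 2 classes.
torch.manual_seed(0)
views = [torch.randn(100, d) for d in (500, 300, 200)]
labels = torch.randint(0, 2, (100,))
adjs = [knn_graph(v) for v in views]
models = [GCN(v.shape[1], 64, 2) for v in views]

opt = torch.optim.Adam([p for m in models for p in m.parameters()], lr=1e-3)
for epoch in range(50):
    opt.zero_grad()
    # Late fusion: average the per-view logits (MOGONET learns this combination
    # with a dedicated network; averaging keeps the sketch short).
    logits = torch.stack([m(v, a) for m, v, a in zip(models, views, adjs)]).mean(0)
    loss = F.cross_entropy(logits, labels)
    loss.backward()
    opt.step()

print("training loss:", loss.item())
```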
    “With MOGONET, our new AI [artificial intelligence] tool, we employ machine learning based on a neural network, to capture complex biological process relationships. We have made the understanding of omics more comprehensive and also are learning more about disease subtypes that biomarkers help us differentiate,” said Regenstrief Institute Research Scientist Kun Huang, PhD, who led the study. “The ultimate goal is to improve disease prognosis and enhance disease-outcome predictions.” A bioinformatician, he credits the diversity of the MOGONET research group, which included computer scientists as well as data scientists and bioinformaticians, with their varying perspectives, as instrumental in its development and success. He serves as director of data sciences and informatics for the Indiana University Precision Health Initiative.
    The researchers tested MOGONET on datasets related to Alzheimer’s disease, gliomas, kidney cancer and breast invasive carcinoma as well as on healthy patient datasets. They determined MOGONET handily outperformed existing supervised multi-omics integration methods.
    “Learning and integrating intuitive recognition, MOGONET could generate new biomarker disease candidates,” said study co-author Regenstrief Institute Affiliated Scientist Jie Zhang, PhD, a bioinformatician. “MOGONET also could predict new cancer subtypes, tumor grade and disease progression. It can identify normal brain activity versus Alzheimer’s disease.”
    Drs. Huang and Zhang plan to expand this work beyond omics to include imaging data, noting the abundance of brain images for Alzheimer’s disease and cancer-related pathology images, which can teach MOGONET to recognize even cases it has not previously encountered. Both scientists note that, following rigorous clinical studies, MOGONET could support improved patient care in many areas.
    In addition to Drs. Huang and Zhang, authors of “MOGONET integrates multi-omics data using graph convolutional networks allowing patient classification and biomarker identification” are Tongxin Wang, PhD, and Haixu Tang, PhD, of Indiana University; Wei Shao, PhD, of IU School of Medicine; Zhi Huang of IU School of Medicine and Purdue University; and Zhengming Ding, PhD, of Tulane University. Dr. Wang worked in Dr. Huang’s laboratory. Dr. Ding, formerly of Indiana University, is an expert in the field of machine learning.
    The development and testing of MOGONET were supported by National Institutes of Health grants R01EB025018 and U54AG065181 and the Indiana University Precision Health Initiative.
    Story Source:
    Materials provided by Regenstrief Institute. Note: Content may be edited for style and length.

  • Physical activity in children can be improved through ‘exergames’

    Physical activity among young people can be improved by well-designed and delivered online interventions such as ‘exergames’ and smartphone apps, new research shows.
    According to a review study carried out at the University of Birmingham, children and young people in PE lessons reacted positively to the use of exergames, which deliver physical activity lessons via games or personalised activities. Changes included increases in physical activity levels, but also improved emotions, attitudes and motivations towards physical activity.
    The study, published in Physical Education and Sport Pedagogy, is one of the first to examine not only the impact of online interventions on physical behaviours in non-clinical groups of young people but also the effects of digital mediums on physical activity knowledge, social development and mental health.
    The evidence can be used to inform guidance for health and education organisations on how they can design online interventions to reach and engage young people in physical activity.
    The authors analysed 26 studies of online interventions for physical activity. They found three main mechanisms at work: gamification, in which participants progress through different levels of achievement; personalisation, in which participants received tailored feedback and rewards based on progress; and information, in which participants received educational material or guidance to encourage behavioural change.
    Most of the interventions focused on gamification or personalisation, and the researchers found that the majority of studies (70%) reported an increase and/or improvement in outcomes related to physical activity for children and young people who participated in online interventions. Pupils of primary school age who took part during PE lessons benefited in particular.
    Lead author Dr Victoria Goodyear, in the University of Birmingham’s School of Sport, Exercise and Rehabilitation Science, said: “We find convincing evidence that PE teachers can use online learning to boost attitudes and participation in physical activity among young people, particularly at primary school age. There’s a real opportunity here for the PE profession to lead the way in designing meaningful and effective online exercise opportunities, as well as an opportunity to embed positive approaches to exercise and online games and apps at an early stage.”
    Story Source:
    Materials provided by University of Birmingham. Note: Content may be edited for style and length.

  • Robot mimics the powerful punch of the mantis shrimp

    Mantis shrimp pack the strongest punch of any creature in the animal kingdom. Their club-like appendages accelerate faster than a bullet out of a gun and just one strike can knock the arm off a crab or break through a snail shell. These small but mighty crustaceans have been known to take on octopus and win.
    How mantis shrimp produce these deadly, ultra-fast movements has long fascinated biologists. Recent advancements in high-speed imaging make it possible to see and measure these strikes but some of the mechanics have not been well understood.
    Now, an interdisciplinary team of roboticists, engineers and biologists has modeled the mechanics of the mantis shrimp’s punch and built a robot that mimics the movement. The research sheds light on the biology of these pugnacious crustaceans and paves the way for small but mighty robotic devices.
    The research is published in the Proceedings of the National Academy of Sciences.
    “We are fascinated by so many remarkable behaviors we see in nature, in particular when these behaviors meet or exceed what can be achieved by human-made devices,” said Robert Wood, the Harry Lewis and Marlyn McGrath Professor of Engineering and Applied Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and senior author of the paper. “The speed and force of mantis shrimp strikes, for example, are a consequence of a complex underlying mechanism. By constructing a robotic model of a mantis shrimp striking appendage, we are able to study these mechanisms in unprecedented detail.”
    Many small organisms — including frogs, chameleons, even some kinds of plants — produce ultra-fast movements by storing elastic energy and rapidly releasing it through a latching mechanism, like a mouse trap. In mantis shrimp, two small structures called sclerites, embedded in the tendons of the muscles, act as the appendage’s latch. In a typical spring-loaded mechanism, once the physical latch is removed, the spring would immediately release the stored energy.
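    As a rough illustration of the energy budget behind such a latch-mediated spring mechanism, the short sketch below converts stored elastic energy into an ideal strike speed once the latch releases. The stiffness, compression, and appendage mass are made-up illustrative values, not measurements of the mantis shrimp or of the robot described in the study.

```python
# Back-of-the-envelope model of latch-mediated spring actuation: elastic energy
# is stored in a loaded spring and, once the latch releases, converted to kinetic
# energy of the striking appendage. All numbers below are illustrative only.
import math

k = 400.0   # spring stiffness, N/m (assumed)
x = 0.01    # spring compression before release, m (assumed)
m = 0.002   # mass of the striking appendage, kg (assumed)

stored_energy = 0.5 * k * x**2                   # E = 1/2 k x^2, in joules
strike_speed = math.sqrt(2 * stored_energy / m)  # all energy -> 1/2 m v^2

print(f"stored energy: {stored_energy * 1000:.1f} mJ")
print(f"ideal strike speed: {strike_speed:.1f} m/s")
```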

  • A game changer: Virtual reality reduces pain and anxiety in children

    It isn’t a matter of one needle puncture. Many children coming through the doors of Children’s Hospital Los Angeles are seen for chronic conditions and often require frequent visits. Painful procedures — like a blood draw or catheter placement — can cause anxiety and fear in patients. Now, a study published in JAMA Network Open shows that virtual reality can decrease pain and anxiety in children undergoing intravenous (IV) catheter placement.
    For nearly two decades, Jeffrey I. Gold, PhD, an investigator at The Saban Research Institute of Children’s Hospital Los Angeles, has been investigating the use of virtual reality (VR) as a technique to help children undergoing painful medical procedures. His research shows that the technology can have powerful effects. VR works so well that Children’s Hospital Los Angeles now offers it routinely for blood draws.
    “Some patients don’t even realize that their blood is being drawn,” says Dr. Gold, who is also a Professor of Clinical Anesthesiology, Pediatrics, and Psychiatry & Behavioral Sciences at The Keck School of Medicine of USC. “Compare that to a child who is panicking and screaming, and it’s a no-brainer. We want kids to feel safe.”
    In his recent publication, Dr. Gold’s team reports the results of a study to test whether VR could prevent pain and distress for patients undergoing peripheral intravenous catheter (PIVC) placement. The game is simple, but requires focus and participation. Patients in one group used VR throughout the procedure, while those in another group received standard of care, which includes simple distraction techniques and the use of a numbing cream. The patients who used VR reported significantly lower levels of pain and anxiety.
    “We can actually reduce pain without the use of a medication,” says Dr. Gold. “The mind is incredibly powerful at shifting focus and actually preventing pain from being registered. If we can tap into that, we can make the experience much better for our kids.”
    But the story is bigger than that.

  • Baby detector software embedded in digital camera rivals ECG

    University of South Australia researchers have designed a computer vision system that can automatically detect a tiny baby’s face in a hospital bed and remotely monitor its vital signs from a digital camera with the same accuracy as an electrocardiogram machine.
    Using artificial intelligence-based software to detect human faces is now common with adults, but this is the first time that researchers have developed software to reliably detect a premature baby’s face and skin while the baby is covered in tubes and clothing and undergoing phototherapy.
    Engineering researchers and a neonatal critical care specialist from UniSA remotely monitored heart and respiratory rates of seven infants in the Neonatal Intensive Care Unit (NICU) at Flinders Medical Centre in Adelaide, using a digital camera.
    “Babies in neonatal intensive care can be extra difficult for computers to recognise because their faces and bodies are obscured by tubes and other medical equipment,” says UniSA Professor Javaan Chahl, one of the lead researchers.
    “Many premature babies are being treated with phototherapy for jaundice, so they are under bright blue lights, which also makes it challenging for computer vision systems.”
    The ‘baby detector’ was developed using a dataset of videos of babies in the NICU to reliably detect their skin tone and faces.
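    As a rough illustration of how a heart rate can be read from ordinary video, the sketch below applies the general remote-photoplethysmography idea: average the brightness of a skin region frame by frame, then pick the dominant frequency in a plausible heart-rate band. This is not the UniSA system; the synthetic 30 fps signal, the fixed skin region, and the chosen frequency band are assumptions.

```python
# Minimal sketch of camera-based heart-rate estimation (remote photoplethysmography),
# the general class of technique behind camera vital-sign monitoring.
import numpy as np

np.random.seed(0)
fps = 30.0
t = np.arange(0, 30, 1 / fps)       # 30 seconds of "video" at 30 fps
true_hr_hz = 2.0                    # simulate a 120 bpm infant heart rate

# Synthetic per-frame mean green-channel brightness of a skin region: a faint
# pulse riding on noise. A real system would average the pixels inside the
# detected face/skin region of each frame instead.
green_mean = 0.5 + 0.01 * np.sin(2 * np.pi * true_hr_hz * t) \
             + 0.02 * np.random.randn(t.size)

# Detrend, then find the dominant frequency inside an infant heart-rate band.
signal = green_mean - green_mean.mean()
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fps)
band = (freqs > 1.5) & (freqs < 3.5)            # ~90-210 beats per minute
est_hr_bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {est_hr_bpm:.0f} bpm")
```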

  • Creation of the most perfect graphene

    A team of researchers led by Director Rod Ruoff at the Center for Multidimensional Carbon Materials (CMCM) within the Institute for Basic Science (IBS), including graduate students at the Ulsan National Institute of Science and Technology (UNIST), has achieved the growth and characterization of large-area, single-crystal graphene that has no wrinkles, folds, or adlayers. It can be said to be the most perfect graphene grown and characterized to date.
    Director Ruoff notes, “This pioneering breakthrough was due to many contributing factors, including human ingenuity and the ability of the CMCM researchers to reproducibly make large-area single-crystal Cu-Ni(111) foils, on which the graphene was grown by chemical vapor deposition (CVD) using a mixture of ethylene with hydrogen in a stream of argon gas.” Student Meihui Wang, Dr. Ming Huang, and Dr. Da Luo, along with Ruoff, undertook a series of experiments growing single-crystal, single-layer graphene on such ‘home-made’ Cu-Ni(111) foils at different temperatures.
    The team had previously reported single-crystal, adlayer-free films of graphene grown using methane at temperatures of ~1320 kelvin (K) on Cu(111) foils. Adlayers are small “islands” where another layer of graphene is present. However, these films always contained long “folds,” the consequence of tall wrinkles that form as the graphene is cooled from the growth temperature down to room temperature. This results in an undesirable reduction in the performance of a graphene field-effect transistor (GFET) if the “fold” is in the active region of the GFET. The folds also contain “cracks” that lower the mechanical strength of the graphene.
    The next exciting challenge was thus eliminating these folds.
    CMCM researchers first carried out a series of experiments that involved “cycling” the temperature immediately after growing the graphene at 1320 K. These experiments showed that the folds form at or above 1020 K during the cooling process. After learning this, the team decided to grow graphene on Cu-Ni(111) foils at several different temperatures around 1020 K, which led to the discovery that large-area, high-quality, fold-free, and adlayer-free single-crystal graphene films can be grown in a temperature range between 1000 K and 1030 K.
    “This fold-free graphene film forms as a single crystal over the entire growth substrate because it shows a single orientation in large-area low-energy electron diffraction (LEED) patterns,” noted SEONG Won Kyung, a senior research fellow in CMCM who installed the LEED equipment in the center.
    GFETs were then patterned on this single-crystal, fold-free graphene in a variety of directions by UNIST graduate student Yunqing Li. These GFETs showed remarkably uniform performance, with average room-temperature electron and hole mobilities of 7.0 ± 1.0 × 10³ cm² V⁻¹ s⁻¹. Li notes, “Such remarkably uniform performance is possible because the fold-free graphene film is a single crystal with essentially no imperfections.”
    Importantly, the research team was able to scale up graphene production using this method. The graphene was successfully grown on five foils (each 4 cm × 7 cm) simultaneously in a 6-inch-diameter home-built quartz furnace. “Our method of growing fold-free graphene films is very reproducible, with each foil yielding two identical pieces of high-quality graphene film on both sides of the foil,” notes Meihui Wang. “By using the electrochemical bubbling transfer method, graphene can be delaminated in about 1 minute and the Cu-Ni(111) foil can be quickly readied for the next growth/transfer cycle.” Ming Huang adds, “When we tested the weight loss of Cu-Ni(111) foils after 5 runs of growth and transfers, the net loss was only 0.0001 grams. This means that our growth and transfer methods using the Cu-Ni(111) can be performed repeatedly, essentially indefinitely.”
    In the process of achieving fold-free single-crystal graphene, the researchers also discovered why these folds form. High-resolution transmission electron microscopy (TEM) imaging was performed by student CHOE Myeonggi and Prof. LEE Zonghoon (a group leader in CMCM and professor at UNIST) to observe the cross-sections of samples grown above 1040 K. They discovered that the de-adhesion that causes the folds is initiated at the “bunched step edge” regions between the single-crystal Cu-Ni(111) plateaus. “This de-adhesion at the bunched step edge regions triggers the formation of graphene folds perpendicular to the step edge direction,” noted co-corresponding author Luo. Ruoff further notes, “We discovered that step-bunching of a Cu-Ni(111) foil surface suddenly occurs at about 1030 K, and this ‘surface reconstruction’ is the reason why the critical growth temperature of fold-free graphene is at ~1030 K or below.”
    Such large-area, fold-free, single-crystal graphene film allows for the straightforward fabrication of integrated high-performance devices oriented in any direction over the entire graphene film. These single-crystal graphene films will be important for further advances in basic science, which will lead to new applications in electronic, photonic, mechanical, thermal, and other areas. The near-perfect graphene is also useful for stacking, either with itself or with other 2D materials, to further expand the range of likely applications. Given that the Cu-Ni(111) foils can be used repeatedly and that the graphene can be transferred to other substrates in less than one minute, scalable manufacturing using this process is also highly promising.
    Story Source:
    Materials provided by Institute for Basic Science. Note: Content may be edited for style and length.

  • 'Nanopore-tal' enables cells to talk to computers

    Genetically encoded reporter proteins have been a mainstay of biotechnology research, allowing scientists to track gene expression, understand intracellular processes and debug engineered genetic circuits.
    But conventional reporting schemes that rely on fluorescence and other optical approaches come with practical limitations that could cast a shadow over the field’s future progress. Now, researchers at the University of Washington and Microsoft have created a “nanopore-tal” into what is happening inside these complex biological systems, allowing scientists to see reporter proteins in a whole new light.
    The team introduced a new class of reporter proteins that can be directly read by a commercially available nanopore sensing device. The new system — dubbed “Nanopore-addressable protein Tags Engineered as Reporters,” or “NanoporeTERs” — can detect multiple protein expression levels from bacterial and human cell cultures far beyond the capacity of existing techniques.
    The study was published Aug. 12 in Nature Biotechnology.
    “NanoporeTERs offer a new and richer lexicon for engineered cells to express themselves and shed new light on the factors they are designed to track. They can tell us a lot more about what is happening in their environment all at once,” said co-lead author Nicolas Cardozo, a doctoral student with the UW Molecular Engineering and Sciences Institute. “We’re essentially making it possible for these cells to ‘talk’ to computers about what’s happening in their surroundings at a new level of detail, scale and efficiency that will enable deeper analysis than what we could do before.”
    For conventional labeling methods, researchers can track only a few optical reporter proteins, such as green fluorescent protein, simultaneously because of their overlapping spectral properties. For example, it’s difficult to distinguish between more than three different colors of fluorescent proteins at once. In contrast, NanoporeTERs were designed to carry distinct protein “barcodes” composed of strings of amino acids that, when used in combination, allow at least ten times more multiplexing possibilities.

  • Using your smartwatch to reduce stress

    The old adage “never let them see you sweat” doesn’t apply in the electrical and computer engineering lab of Rose Faghih, assistant professor of electrical and computer engineering in the University of Houston Cullen College of Engineering. In fact, Faghih seeks sweat, the kind that beads on your upper lip when you’re nervous — skin conductance response (SCR), as the change in sweat activity is scientifically called. It is through that measure that Faghih is reporting the ability to monitor stress and even help lower it.
    To collect and study these physiological signals of stress, Faghih’s research team has built a new closed-loop technology by placing two electrodes on smartwatch-type wearables. Once the signal for stress is detected, a reminder is sent through the smartwatch, for example, to listen to relaxing music to calm down. Thus, the loop is closed as the detected stress launches the subtle suggestion.
    “This study is one of the very first steps toward the ultimate goal of monitoring brain responses using wearable devices and closing the loop to keep a person’s stress state within a pleasant range,” reports Faghih in a paper available through IEEE Xplore.
    Electrodermal activity (i.e., the electrical conductivity of the skin) carries important information about the brain’s cognitive stress state. Faghih uses signal processing techniques to track this hidden stress state and designs a control algorithm for regulating it and closing the loop. The results illustrate the efficiency of the proposed approach and validate the feasibility of implementing it in real life.
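    To make the closed-loop idea concrete, here is a minimal sketch in which a skin-conductance stream is compared against a slowly adapting baseline and a relaxation prompt is issued whenever it rises too far. The window length, threshold, and prompt text are illustrative assumptions, not the control algorithm from the published study.

```python
# Minimal sketch of a closed-loop stress monitor: watch a skin-conductance stream,
# flag sustained rises above a moving baseline as "stress detected", and trigger a
# relaxation prompt. Thresholds and window lengths are illustrative assumptions.
import numpy as np

def stress_monitor(scr_stream, fs=4, window_s=30, rise_threshold=0.15):
    """Yield (time, prompt) whenever mean SCR in the current window exceeds the
    running baseline by more than rise_threshold (in microsiemens)."""
    window = int(fs * window_s)
    baseline = np.mean(scr_stream[:window])          # calm-period baseline
    for start in range(window, len(scr_stream) - window, window):
        level = np.mean(scr_stream[start:start + window])
        if level - baseline > rise_threshold:
            yield start / fs, "Elevated stress detected: try a breathing exercise."
        else:
            baseline = 0.9 * baseline + 0.1 * level  # adapt slowly while calm

# Synthetic skin-conductance trace: calm, then a stress episode, then recovery.
np.random.seed(1)
fs = 4
calm = 2.0 + 0.02 * np.random.randn(fs * 120)
stressed = 2.4 + 0.05 * np.random.randn(fs * 60)
trace = np.concatenate([calm, stressed, calm])

for t, message in stress_monitor(trace, fs=fs):
    print(f"{t:.0f}s: {message}")
```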
    “To the best of our knowledge, this research is one of the very first to relate the cognitive stress state to the changes in SCR events and design the control mechanism to close the loop in a real-time simulation system,” said UH doctoral student and lead study author Fekri Azgomi, who accomplished the task of closed-loop cognitive stress regulation in a simulation study based on experimental data.
    Due to the increased ubiquity of wearable devices capable of measuring cognitive stress-related variables, the proposed architecture is an initial step toward treating cognitive disorders using non-invasive brain state decoding.
    “The final results verify that the proposed architecture has great potential to be implemented in a wrist-worn wearable device and used in daily life,” said Faghih.
    Stress is a worldwide issue that can result in catastrophic health and financial complications. A recent Gallup poll found that more than one in three adults (35%) worldwide said they experienced stress during “a lot of the day yesterday.”
    Story Source:
    Materials provided by University of Houston. Original written by Laurie Fickman. Note: Content may be edited for style and length.