More stories

  • Artificial intelligence could be new blueprint for precision drug discovery

    Writing in the July 12, 2021 online issue of Nature Communications, researchers at the University of California San Diego School of Medicine describe a new approach that uses machine learning to hunt for disease targets and then predicts whether a drug is likely to receive FDA approval.
    The study findings could measurably change how researchers sift through big data to find meaningful information with significant benefit to patients, the pharmaceutical industry and the nation’s health care systems.
    “Academic labs and pharmaceutical and biotech companies have access to unlimited amounts of ‘big data’ and better tools than ever to analyze such data. However, despite these incredible advances in technology, the success rates in drug discovery are lower today than in the 1970s,” said Pradipta Ghosh, MD, senior author of the study and professor in the departments of Medicine and Cellular and Molecular Medicine at UC San Diego School of Medicine.
    “This is mostly because drugs that work perfectly in preclinical inbred models, such as laboratory mice, that are genetically or otherwise identical to each other, don’t translate to patients in the clinic, where each individual and their disease is unique. It is this variability in the clinic that is believed to be the Achilles heel for any drug discovery program.”
    In the new study, Ghosh and colleagues replaced the first and last steps in preclinical drug discovery with two novel approaches developed within the UC San Diego Institute for Network Medicine (iNetMed), which unites several research disciplines to develop new solutions to advance life sciences and technology and enhance human health.
    The researchers used inflammatory bowel disease (IBD) as their disease model. IBD is a complex, multifaceted, relapsing autoimmune disorder characterized by inflammation of the gut lining. Because it affects all ages and reduces patients’ quality of life, IBD is a priority disease area for drug discovery, and it is a challenging condition to treat because no two patients behave alike.
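    To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of approval-prediction classifier described above, using synthetic data and a generic random forest rather than the authors’ actual pipeline or features:

```python
# Minimal, illustrative sketch only -- NOT the authors' pipeline.
# Hypothetical setup: each row is a candidate drug target described by
# gene-expression-derived features; the label is whether a drug against
# that target was ultimately FDA-approved.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_targets, n_features = 500, 40                # made-up dimensions
X = rng.normal(size=(n_targets, n_features))   # synthetic expression features
y = rng.integers(0, 2, size=n_targets)         # synthetic approval labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f}")
```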

  • MaxDIA: Taking proteomics to the next level

    Proteomics produces enormous amounts of data, which can be very complex to analyze and interpret. The free software platform MaxQuant has proven to be invaluable for data analysis of shotgun proteomics over the past decade. Now, Jürgen Cox, group leader at the Max Planck Institute of Biochemistry, and his team present the new version 2.0. It provides an improved computational workflow for data-independent acquisition (DIA) proteomics, called MaxDIA. MaxDIA includes library-based and library-free DIA proteomics and permits highly sensitive and accurate data analysis. Uniting data-dependent and data-independent acquisition into one world, MaxQuant 2.0 is a big step towards improving applications for personalized medicine.
    Proteins are essential for our cells to function, yet many questions about their synthesis, abundance, functions, and defects still remain unanswered. High-throughput techniques can help improve our understanding of these molecules. For analysis by liquid chromatography followed by mass spectrometry (MS), proteins are broken down into smaller peptides, in a process referred to as “shotgun proteomics.” The mass-to-charge ratio of these peptides is subsequently determined with a mass spectrometer, resulting in MS spectra. From these spectra, information about the identity of the analyzed proteins can be reconstructed. However, the enormous amount and complexity of data make data analysis and interpretation challenging.
    Two ways to analyze proteins with mass spectrometry
    Two main methods are used in shotgun proteomics: data-dependent acquisition (DDA) and data-independent acquisition (DIA). In DDA, the most abundant peptides of a sample are preselected for fragmentation and measurement. This makes it possible to reconstruct the sequences of these few preselected peptides, keeping analysis simpler and faster. However, this method introduces a bias towards highly abundant peptides. DIA, in contrast, is more robust and sensitive. All peptides from a certain mass range are fragmented and measured at once, without preselection by abundance.
    As a result, this method generates large amounts of data, and the complexity of the obtained information increases considerably. Until now, identifying the original proteins was only possible by matching the newly measured spectra against libraries of previously measured spectra.
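    To illustrate what library-based identification involves at its simplest, here is a toy sketch that scores a measured spectrum against a small spectral library with cosine similarity; the peptide names, spectra, and scoring are invented for illustration and are not MaxDIA’s actual algorithm:

```python
# Toy illustration of spectral library matching -- not MaxDIA's algorithm.
# Each spectrum is reduced to a vector of fragment intensities binned by m/z.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two binned intensity vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def best_library_match(measured: np.ndarray, library: dict):
    """Return the library peptide whose spectrum best matches the measurement."""
    return max(library.items(), key=lambda kv: cosine_similarity(measured, kv[1]))

# Hypothetical library of previously measured (or predicted) spectra.
library = {
    "PEPTIDER": np.array([0.0, 0.8, 0.1, 0.4, 0.0]),
    "SAMPLEK":  np.array([0.5, 0.0, 0.7, 0.0, 0.2]),
}
measured = np.array([0.1, 0.7, 0.1, 0.5, 0.0])
peptide, spectrum = best_library_match(measured, library)
print(peptide, cosine_similarity(measured, spectrum))
```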
    Combining DDA and DIA into one world
    Jürgen Cox and his team have now developed software that provides a complete computational workflow for DIA data. For the first time, it allows the same algorithms to be applied to DDA and DIA data, so studies based on either method will become more easily comparable. MaxDIA analyzes proteomics data with and without spectral libraries. Using machine learning, the software predicts peptide fragmentation and spectral intensities, creating precise MS spectral libraries in silico. In this way, MaxDIA includes a library-free discovery mode with reliable control of false positive protein identifications.
    Furthermore, the software supports new technologies such as bootstrap DIA, BoxCar DIA and trapped ion mobility spectrometry DIA. What are the next steps? The team is already working on further improving the software. Several extensions are being developed, for instance for improving the analysis of posttranslational modifications and identification of cross-linked peptides.
    Enabling researchers to conduct complex proteomics data analysis
    MaxDIA is free software available to scientists all over the world. It is embedded in the established software environment MaxQuant. “We would like to make proteomics data analysis accessible to all researchers,” says Pavel Sinitcyn, first author of the paper that introduces MaxDIA. To that end, at the MaxQuant summer school, Cox and his team offer hands-on training in the software for all interested researchers, helping to bridge the gap between wet lab work and complex data analysis.
    Sinitcyn states that the aim is to “bring mass spectrometry from the Max Planck Institute of Biochemistry to the clinics.” Instead of measuring only a few proteins, thousands of proteins can now be measured and analyzed. This opens up new possibilities for medical applications, especially in the field of personalized medicine.
    Story Source:
    Materials provided by Max-Planck-Gesellschaft. Note: Content may be edited for style and length.

  • Mathematicians develop ground-breaking modeling toolkit to predict local COVID-19 impact

    A Sussex team — including university mathematicians — has created a new modelling toolkit that predicts the impact of COVID-19 at a local level with unprecedented accuracy. The details are published in the International Journal of Epidemiology, and the toolkit is available online for other local authorities to use, just as the UK looks as though it may head into another wave of infections.
    The study used the local Sussex hospital and healthcare daily COVID-19 situation reports, including admissions, discharges, bed occupancy and deaths.
    Through the pandemic, the newly-published modelling has been used by local NHS and public health services to predict infection levels so that public services can plan when and how to allocate health resources — and it has been conclusively shown to be accurate. The team are now making their modelling available to other local authorities to use via the Halogen toolkit.
    Anotida Madzvamuse, professor of mathematical and computational biology within the School of Mathematical and Physical Sciences at the University of Sussex, who led the study, said:
    “We undertook this study as a rapid response to the COVID-19 pandemic. Our objective was to provide support and enhance the capability of local NHS and Public Health teams to accurately predict and forecast the impact of local outbreaks to guide healthcare demand and capacity, policy making, and public health decisions.”
    “Working with outstanding mathematicians, Dr James Van Yperen and Dr Eduard Campillo-Funollet, we formulated an epidemiological model and inferred model parameters by fitting the model to local datasets to allow for short- and medium-term predictions and forecasts of the impact of COVID-19 outbreaks.”
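    The published Sussex model and its inference procedure are more sophisticated, but the general idea of fitting a compartmental model to local time-series data can be sketched as follows (hypothetical SIR model, synthetic data, and made-up parameters, not the authors’ Halogen toolkit):

```python
# Rough illustration of fitting a compartmental (SIR) model to local data.
# This is NOT the published Sussex/Halogen model, just the general idea.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

N = 300_000                     # hypothetical local population size

def sir(y, t, beta, gamma):
    S, I, R = y
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

def infected_curve(t, beta, gamma, i0):
    sol = odeint(sir, [N - i0, i0, 0.0], t, args=(beta, gamma))
    return sol[:, 1]            # predicted number currently infected

# Hypothetical daily situation-report data (e.g. occupied beds), with noise.
t_obs = np.arange(0, 60)
i_obs = infected_curve(t_obs, 0.25, 0.1, 20) * (1 + 0.05 * np.random.randn(60))

(beta, gamma, i0), _ = curve_fit(infected_curve, t_obs, i_obs,
                                 p0=[0.3, 0.1, 10], bounds=(0, [2, 1, 1000]))
print(f"beta={beta:.3f}, gamma={gamma:.3f}, R0={beta/gamma:.2f}")
```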

  • Training helps teachers anticipate how students with learning disabilities might solve problems

    North Carolina State University researchers found that a four-week training course made a substantial difference in helping special education teachers anticipate different ways students with learning disabilities might solve math problems. The findings suggest that the training would help instructors more quickly identify and respond to a student’s needs.
    In findings published in the Journal of Mathematics Teacher Education, the researchers say special education teachers could use the results to develop strategies in advance for responding to kids’ math reasoning and questions. They also say the findings point to the importance of mathematics education preparation for special education teachers — an area where researchers say opportunities are lacking.
    “Many special education programs do not include a focus on mathematics for students with disabilities, and few, if any, focus on understanding the mathematical thinking of students with disabilities in particular,” said the study’s first author Jessica Hunt, associate professor of mathematics education and special education at NC State. “This study was based on a course experience designed to do just that — to heighten teacher knowledge of the mathematical thinking of students with learning disabilities grounded in a stance of neurodiversity.”
    In the study, researchers evaluated the impact of a four-week course on 20 pre-service special education teachers. Researchers wanted to know if the course impacted the educators’ ability to anticipate the mathematical reasoning of students with learning disabilities, and help teachers adjust tasks to make them more accessible. The course also emphasized neurodiversity, which defines cognitive differences as a natural and beneficial outgrowth of neurological and biological diversity.
    “Neurodiversity says that all human brains are highly variable, with no average or ‘normal’ learners,” Hunt said. “This means that we all have strengths and challenges, and as humans we use what makes sense to us to understand the world. It’s a way to challenge pervasive deficit approaches to looking at disability, and to instead use an asset-based approach that positions students with learning disabilities as mathematically capable.”
    Before and after the course, the teachers took a 40-question assessment. In the test, researchers asked teachers to use words, pictures or symbols to describe a strategy that elementary school students with learning disabilities might use to solve a problem. They compared teachers’ responses to see how well they anticipated students’ thinking, and also how they might modify tasks for students.
    After the course, researchers saw more anticipation of what they called “implicit action,” which means using strategies like counting, halving, grouping, or predicting the number of people sharing a certain item to solve a problem; it is often represented by pictures or words. On the pre-course assessment, many teachers used “static representations,” in which they wrote mathematical expressions to show solutions. While static representations are abstract representations of solutions, researchers argued that implicit actions can reflect how students with learning disabilities themselves might work through a problem.
    They found teachers’ use of implicit action increased from 32 percent of answers on the pre-test to 82 percent on the post-test, while static representation decreased from 50 percent of answers to 17 percent. The percentages do not add up to 100 because some teachers left some answers blank.
    “The course helped teachers move from a top-down, one-size-fits-all view of ‘this is how you solve these problems,’ to an anticipation of how actual students who are learning these concepts for the first time might think through these problems,” Hunt said. “That’s a very different stance in terms of educating teachers to anticipate student thinking so they can meet it with responsive instruction.”
    Researchers also tracked how teachers modified math problems to make them more accessible to students before and after taking the course. After the course, researchers saw that more teachers changed the problem type, a shift seen in 50 percent of answers.
    “The benefit of anticipating students’ thinking is to help teachers to be responsive and support students’ prior knowledge as they’re teaching, which is a really hard thing to do,” Hunt said. “It’s even harder if you don’t yet appreciate what that thinking could be.”
    Story Source:
    Materials provided by North Carolina State University. Original written by Laura Oleniacz. Note: Content may be edited for style and length.

  • New electronic paper displays brilliant colors

    Imagine sitting out in the sun, reading a digital screen as thin as paper, but seeing the same image quality as if you were indoors. Thanks to research from Chalmers University of Technology, Sweden, it could soon be a reality. A new type of reflective screen — sometimes described as ‘electronic paper’ — offers optimal colour display, while using ambient light to keep energy consumption to a minimum.
    Traditional digital screens use a backlight to illuminate the text or images displayed upon them. This is fine indoors, but we’ve all experienced the difficulties of viewing such screens in bright sunshine. Reflective screens, however, attempt to use the ambient light, mimicking the way our eyes respond to natural paper.
    “For reflective screens to compete with the energy-intensive digital screens that we use today, images and colours must be reproduced with the same high quality. That will be the real breakthrough. Our research now shows how the technology can be optimised, making it attractive for commercial use,” says Marika Gugole, Doctoral Student at the Department of Chemistry and Chemical Engineering at Chalmers University of Technology.
    The researchers had previously succeeded in developing an ultra-thin, flexible material that reproduces all the colours an LED screen can display, while requiring only a tenth of the energy that a standard tablet consumes.
    But in the earlier design the colours on the reflective screen did not display with optimal quality. The new study, published in the journal Nano Letters, takes the material one step further. Using a previously researched, porous and nanostructured material containing tungsten trioxide, gold and platinum, the researchers tried a new tactic — inverting the design in such a way as to allow the colours to appear much more accurately on the screen.
    Inverting the design for top quality colour
    The inversion of the design represents a great step forward. The researchers placed the component that makes the material electrically conductive underneath the pixelated nanostructure that reproduces the colours — instead of above it, as was previously the case. With this new design, you look directly at the pixelated surface and therefore see the colours much more clearly.

  • Thyroid cancer now diagnosed with machine learning-powered photoacoustic/ultrasound imaging

    A lump in the thyroid gland is called a thyroid nodule, and 5-10% of all thyroid nodules are diagnosed as thyroid cancer. Thyroid cancer has a good prognosis, a high survival rate, and a low recurrence rate, so early diagnosis and treatment are crucial. Recently, a joint research team in Korea has proposed a new non-invasive method to distinguish thyroid nodules from cancer by combining photoacoustic (PA) and ultrasound image technology with artificial intelligence.
    The joint research team — composed of Professor Chulhong Kim and Dr. Byullee Park of POSTECH’s Department of Electrical Engineering, Department of Convergence IT Engineering and Department of Mechanical Engineering, Professor Dong-Jun Lim and Professor Jeonghoon Ha of Seoul St. Mary’s Hospital of Catholic University of Korea, and Professor Jeesu Kim of Pusan National University — conducted a study in which they acquired PA images from patients with malignant and benign nodules and analyzed them with artificial intelligence. In recognition of their significance, the findings from this study were published in Cancer Research.
    Currently, the diagnosis of a thyroid nodule is performed using fine-needle aspiration biopsy (FNAB) guided by ultrasound imaging. But about 20% of FNABs are inaccurate, which leads to repeated and unnecessary biopsies.
    To overcome this problem, the joint research team explored the use of PA imaging to obtain an ultrasonic signal generated by light. When laser light is directed at the patient’s thyroid nodule, an ultrasound signal called a PA signal is generated from the thyroid gland and the nodule. By acquiring and processing this signal, PA images of both the gland and the nodule are collected. If multispectral PA signals are obtained, oxygen saturation information for the thyroid gland and thyroid nodule can also be calculated.
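    In general terms, oxygen saturation can be estimated from multispectral PA measurements by unmixing the contributions of oxy- and deoxyhemoglobin at each wavelength. The sketch below illustrates that idea with made-up extinction coefficients and signal values; it is not the processing chain used in the study:

```python
# Simplified spectral unmixing sketch -- not the study's actual processing.
# PA amplitude at each wavelength ~ eps_HbO2(lambda)*[HbO2] + eps_Hb(lambda)*[Hb].
import numpy as np

# Illustrative (made-up) extinction coefficients at four wavelengths.
eps_hbo2 = np.array([290.0, 610.0, 1050.0, 1200.0])
eps_hb   = np.array([790.0, 700.0,  690.0,  760.0])
E = np.column_stack([eps_hbo2, eps_hb])

# Hypothetical multispectral PA amplitudes measured in one pixel of the nodule.
pa_signal = np.array([650.0, 660.0, 870.0, 980.0])

# Least-squares unmixing for the two chromophore concentrations.
(c_hbo2, c_hb), *_ = np.linalg.lstsq(E, pa_signal, rcond=None)
so2 = c_hbo2 / (c_hbo2 + c_hb)
print(f"estimated sO2: {so2:.2%}")
```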
    The researchers focused on the fact that the oxygen saturation of malignant nodules is lower than that of normal nodules, and acquired PA images of patients with malignant thyroid nodules (23 patients) and those with benign nodules (29 patients). Performing in vivo multispectral PA imaging on the patient’s thyroid nodules, the researchers calculated multiple parameters, including hemoglobin oxygen saturation level in the nodule area. This was analyzed using machine learning techniques to successfully and automatically classify whether the thyroid nodule was malignant or benign. In the initial classification, the sensitivity to classify malignancy as malignant was 78% and the specificity to classify benign as benign was 93%.
    In a second analysis, the PA results obtained with machine learning were combined with the results of the initial examination based on the ultrasound images normally used in hospitals. With this combination, malignant thyroid nodules could be distinguished with a sensitivity of 83% and a specificity of 93%.
    Going a step further, when the researchers kept the sensitivity at 100% in a third analysis, the specificity reached 55%. This was about three times higher than the specificity of 17.3% (at a sensitivity of 98%) achieved by the initial examination of thyroid nodules using conventional ultrasound alone.
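    The trade-off described above comes from where the classifier’s decision threshold is placed. The following sketch, using synthetic malignancy scores rather than the study’s patient data, shows how raising or lowering the threshold exchanges specificity for sensitivity:

```python
# Illustration of the sensitivity/specificity trade-off at different
# decision thresholds -- synthetic scores, not the study's patient data.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical malignancy scores: benign nodules score lower on average.
scores_benign    = rng.normal(0.35, 0.15, size=29)
scores_malignant = rng.normal(0.65, 0.15, size=23)

def sens_spec(threshold: float):
    sensitivity = np.mean(scores_malignant >= threshold)  # malignant called malignant
    specificity = np.mean(scores_benign < threshold)      # benign called benign
    return float(sensitivity), float(specificity)

for thr in (0.3, 0.5, 0.7):
    s, p = sens_spec(thr)
    print(f"threshold {thr:.1f}: sensitivity {s:.0%}, specificity {p:.0%}")
```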
    As a result, the probability of correctly diagnosing benign, non-malignant nodules increased more than threefold, showing that overdiagnosis, unnecessary biopsies, and repeated tests can be dramatically reduced, thereby cutting excessive medical costs.
    “This study is significant in that it is the first to acquire photoacoustic images of thyroid nodules and classify malignant nodules using machine learning,” remarked Professor Chulhong Kim of POSTECH. “In addition to minimizing unnecessary biopsies in thyroid cancer patients, this technique can also be applied to a variety of other cancers, including breast cancer.”
    “The ultrasonic device based on photoacoustic imaging will be helpful in effectively diagnosing thyroid cancer commonly found during health checkups and in reducing the number of biopsies,” explained Professor Dong-Jun Lim of Seoul St. Mary’s Hospital. “It can be developed into a medical device that can be readily used on thyroid nodule patients.”

  • Virtual learning may help NICU nurses recognize baby pain

    Babies younger than four weeks old, called neonates, were once thought not to perceive pain because their sensory systems were not yet fully developed, but modern research says otherwise, according to researchers from Hiroshima University in Japan.
    Not only do babies experience pain, but the various levels can be standardized to help nurses recognize and respond to the babies’ cues — if the nurses have the opportunity to learn the scoring tools and skills needed to react appropriately. With tight schedules and limited in-person courses available, the researchers theorized, virtual e-learning may be able to provide a path forward for nurses to independently pursue training in this area.
    To test this hypothesis, researchers conducted a pilot study of 115 nurses with varying levels of formal training and years of experience in seven hospitals across Japan. They published their results on May 27 in Advances in Neonatal Care.
    “Despite a growing body of knowledge and guidelines being published in many countries about the prevention and management of pain in neonates hospitalized in the NICU, neonatal pain remains unrecognized, undertreated, and generally challenging,” said paper author Mio Ozawa, associate professor in the Graduate School of Biomedical and Health Sciences at Hiroshima University.
    The researchers developed a comprehensive multimedia virtual program on neonatal pain management, based on selected standardized pain scales, for nursing staff to independently learn how to employ measurement tools. The program, called e-Pain Management of Neonates, is the first of its kind in Japan.
    “The aim of the study was to verify the feasibility of the program and whether e-learning actually improves nurses’ knowledge and scoring skills,” Ozawa said. “The results of this study suggest that nurses could obtain knowledge and skills about the measurement of neonatal pain through e-learning.”
    The full cohort took a pre-test at the start of the study, before embarking on a self-paced, four-week e-learning program dedicated to learning standardized pain scales to measure discomfort in babies. However, only 52 nurses completed the post-test after four weeks. For those 52, scores increased across a range of years of experience and formal education.
    Ozawa noted that the sample size is small but also said that the improved test scores indicated the potential for e-learning.
    “Future research will need to go beyond the individual level to determine which benefits are produced in the management of neonatal pain in hospitals where nurses learn neonatal pain management through e-learning,” Ozawa said. “This study demonstrates that a virtually delivered neonatal pain management program can be useful for nurses’ attainment of knowledge and skills for managing neonatal pain, including appropriate use of selected scoring tools.”
    Story Source:
    Materials provided by Hiroshima University. Note: Content may be edited for style and length.

  • Seeing with radio waves

    Scientists from the Division of Physics at the University of Tsukuba used the quantum effect called “spin-locking” to significantly enhance the resolution when performing radio-frequency imaging of nitrogen-vacancy defects in diamond. This work may lead to faster and more accurate material analysis, as well as a path towards practical quantum computers.
    Nitrogen-vacancy (NV) centers have long been studied for their potential use in quantum computers. An NV center is a type of defect in the lattice of a diamond, in which two adjacent carbon atoms have been replaced with a nitrogen atom and a vacancy. This leaves an unpaired electron, which can be detected using radio-frequency waves, because its probability of emitting a photon depends on its spin state. However, the spatial resolution of radio wave detection using conventional radio-frequency techniques has remained less than optimal.
    Now, researchers at the University of Tsukuba have pushed the resolution to its limit by employing a technique called “spin-locking.” Microwave pulses are used to put the electron’s spin in a quantum superposition of up and down simultaneously. Then, a driving electromagnetic field causes the direction of the spin to precess around, like a wobbling top. The end result is an electron spin that is shielded from random noise but strongly coupled to the detection equipment. “Spin-locking ensures high accuracy and sensitivity of the electromagnetic field imaging,” first author Professor Shintaro Nomura explains. Due to the high density of NV centers in the diamond samples used, the collective signal they produced could be easily picked up with this method. This permitted the sensing of collections of NV centers at the micrometer scale. “The spatial resolution we obtained with RF imaging was much better than with similar existing methods,” Professor Nomura continues, “and it was limited only by the resolution of the optical microscope we used.”
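    A rough intuition for why spin-locking protects the spin from noise can be obtained from a toy Bloch-vector simulation: with no drive, random fields along z gradually rotate a spin prepared along x, whereas a strong drive along x keeps it locked near its initial direction. The parameters below are arbitrary and purely illustrative, not the experimental conditions of the study:

```python
# Toy Bloch-vector simulation of spin-locking -- illustrative only,
# not the experimental parameters used in the study.
import numpy as np

def evolve(m, b, dt):
    """One Euler step of dM/dt = M x B (gyromagnetic ratio absorbed into B)."""
    return m + dt * np.cross(m, b)

dt, steps = 1e-3, 50_000
rng = np.random.default_rng(0)
noise = 10.0 * rng.normal(size=steps)         # random field fluctuations along z

for drive in (0.0, 50.0):                     # no locking vs. strong drive along x
    m = np.array([1.0, 0.0, 0.0])             # spin prepared along x by a pi/2 pulse
    xs = []
    for k in range(steps):
        b = np.array([drive, 0.0, noise[k]])  # drive along x + noise along z
        m = evolve(m, b, dt)
        m /= np.linalg.norm(m)                # keep unit length (crude integrator)
        xs.append(m[0])
    print(f"drive={drive:5.1f}: mean <Mx> over run = {np.mean(xs):.3f}")
```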
    The approach demonstrated in this project may be applied in a broad variety of areas — for example, the characterization of polar molecules, polymers, and proteins, as well as other materials. It might also be used in medical applications — for example, as a new way to perform magnetocardiography.
    This work was partly supported by a Grant-in-Aid for Scientific Research (Nos. JP18H04283, JP18H01243, JP18K18726, and JP21H01009) from the Japan Society for the Promotion of Science.
    Story Source:
    Materials provided by University of Tsukuba. Note: Content may be edited for style and length.