More stories

  • 'Hydrogel-based flexible brain-machine interface'

    A KAIST research team and collaborators have revealed a newly developed hydrogel-based flexible brain-machine interface. To study the structure of the brain or to identify and treat neurological diseases, it is crucial to develop an interface that can stimulate the brain and detect its signals in real time. However, existing neural interfaces are mechanically and chemically different from real brain tissue. This mismatch triggers a foreign body response and the formation of an insulating layer (a glial scar) around the interface, which shortens its lifespan.
    To solve this problem, the research team of Professor Seongjun Park developed a ‘brain-mimicking interface’ by inserting a custom-made multifunctional fiber bundle into the hydrogel body. The device combines an optical fiber that controls specific nerve cells with light for optogenetic procedures, an electrode bundle that reads brain signals, and a microfluidic channel that delivers drugs to the brain.
    The interface is easy to insert into the body because hydrogels are solid when dry. Once in the body, however, the hydrogel quickly absorbs body fluids and takes on the properties of the surrounding tissue, thereby minimizing the foreign body response.
    The research team tested the device in animal models and showed that it could detect neural signals for up to six months, far beyond what had previously been recorded. It was also possible to conduct long-term optogenetic and behavioral experiments on freely moving mice, with a significant reduction in foreign body responses such as glial and immunological activation compared to existing devices.
    “This research is significant in that it was the first to utilize a hydrogel as part of a multifunctional neural interface probe, which increased its lifespan dramatically,” said Professor Park. “With our discovery, we look forward to advancements in research on neurological disorders like Alzheimer’s or Parkinson’s disease that require long-term observation.”
    Story Source:
    Materials provided by The Korea Advanced Institute of Science and Technology (KAIST). Note: Content may be edited for style and length.

  • Discovery of 10 faces of plasma leads to new insights in fusion and plasma science

    Scientists have discovered a novel way to classify magnetized plasmas that could possibly lead to advances in harvesting on Earth the fusion energy that powers the sun and stars. The discovery by theorists at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) found that a magnetized plasma has 10 unique phases and the transitions between them might hold rich implications for practical development.
    The spatial boundaries, or transitions, between different phases will support localized wave excitations, the researchers found. “These findings could lead to possible applications of these exotic excitations in space and laboratory plasmas,” said Yichen Fu, a graduate student at PPPL and lead author of a paper in Nature Communications that outlines the research. “The next step is to explore what these excitations could do and how they might be utilized.”
    Possible applications
    Possible applications include using the excitations to create current in magnetic fusion plasmas or facilitating plasma rotation in fusion experiments. However, “Our paper doesn’t consider any practical applications,” said physicist Hong Qin, co-author of the paper and Fu’s advisor. “The paper is the basic theory and the technology will follow the theoretical understanding.”
    In fact, “the discovery of the 10 phases in plasma marks a primary development in plasma physics,” Qin said. “The first and foremost step in any scientific endeavor is to classify the objects under investigation. Any new classification scheme will lead to improvement in our theoretical understanding and subsequent advances in technology,” he said.
    Qin cites discovery of the major types of diabetes as an example of the role classification plays in scientific progress. “When developing treatments for diabetes, scientists found that there were three major types,” he said. “Now medical practitioners can effectively treat diabetic patients.”
    Fusion, which scientists around the world are seeking to produce on Earth, combines light elements in the form of plasma — the hot, charged state of matter composed of free electrons and atomic nuclei that makes up 99 percent of the visible universe — to release massive amounts of energy. Such energy could serve as a safe and clean source of power for generating electricity.
    The plasma phases that PPPL has uncovered are technically known as “topological phases,” indicating the shapes of the waves supported by plasma. This unique property of matter was first discovered in the discipline of condensed matter physics during the 1970s — a discovery for which physicist Duncan Haldane of Princeton University shared the 2016 Nobel Prize for his pioneering work.
    Robust and intrinsic
    The localized plasma waves produced by phase transitions are robust and intrinsic because they are “topologically protected,” Qin said. “The discovery that this topologically protected excitation exists in magnetized plasmas is a big step forward that can be explored for practical applications,” he said.
    For first author Fu, “The most important progress in the paper is looking at plasma based on its topological properties and identifying its topological phases. Based on these phases we identify the necessary and sufficient condition for the excitations of these localized waves. As for how this progress can be applied to facilitate fusion energy research, we have to find out.”
    Story Source:
    Materials provided by DOE/Princeton Plasma Physics Laboratory. Original written by John Greenwald. Note: Content may be edited for style and length.

  • Artificial intelligence could be new blueprint for precision drug discovery

    Writing in the July 12, 2021 online issue of Nature Communications, researchers at University of California San Diego School of Medicine describe a new approach that uses machine learning to hunt for disease targets and then predicts whether a drug is likely to receive FDA approval.
    The study findings could measurably change how researchers sift through big data to find meaningful information with significant benefit to patients, the pharmaceutical industry and the nation’s health care systems.
    “Academic labs and pharmaceutical and biotech companies have access to unlimited amounts of ‘big data’ and better tools than ever to analyze such data. However, despite these incredible advances in technology, the success rates in drug discovery are lower today than in the 1970s,” said Pradipta Ghosh, MD, senior author of the study and professor in the departments of Medicine and Cellular and Molecular Medicine at UC San Diego School of Medicine.
    “This is mostly because drugs that work perfectly in preclinical inbred models, such as laboratory mice, that are genetically or otherwise identical to each other, don’t translate to patients in the clinic, where each individual and their disease is unique. It is this variability in the clinic that is believed to be the Achilles heel for any drug discovery program.”
    In the new study, Ghosh and colleagues replaced the first and last steps in preclinical drug discovery with two novel approaches developed within the UC San Diego Institute for Network Medicine (iNetMed), which unites several research disciplines to develop new solutions to advance life sciences and technology and enhance human health.
    The researchers used the disease model for inflammatory bowel disease (IBD), a complex, multifaceted, relapsing autoimmune disorder characterized by inflammation of the gut lining. Because IBD affects all ages and reduces patients’ quality of life, it is a priority disease area for drug discovery; it is also challenging to treat because no two patients behave alike.

  • MaxDIA: Taking proteomics to the next level

    Proteomics produces enormous amounts of data, which can be very complex to analyze and interpret. The free software platform MaxQuant has proven to be invaluable for data analysis of shotgun proteomics over the past decade. Now, Jürgen Cox, group leader at the Max Planck Institute of Biochemistry, and his team present the new version 2.0. It provides an improved computational workflow for data-independent acquisition (DIA) proteomics, called MaxDIA. MaxDIA includes library-based and library-free DIA proteomics and permits highly sensitive and accurate data analysis. Uniting data-dependent and data-independent acquisition into one world, MaxQuant 2.0 is a big step towards improving applications for personalized medicine.
    Proteins are essential for our cells to function, yet many questions about their synthesis, abundance, functions, and defects still remain unanswered. High-throughput techniques can help improve our understanding of these molecules. For analysis by liquid chromatography followed by mass spectrometry (MS), proteins are broken down into smaller peptides, in a process referred to as “shotgun proteomics.” The mass-to-charge ratio of these peptides is subsequently determined with a mass spectrometer, resulting in MS spectra. From these spectra, information about the identity of the analyzed proteins can be reconstructed. However, the enormous amount and complexity of data make data analysis and interpretation challenging.
    Two ways to analyze proteins with mass spectrometry
    Two main methods are used in shotgun proteomics: data-dependent acquisition (DDA) and data-independent acquisition (DIA). In DDA, the most abundant peptides of a sample are preselected for fragmentation and measurement. This makes it possible to reconstruct the sequences of these few preselected peptides, keeping analysis simpler and faster. However, the method introduces a bias towards highly abundant peptides. DIA, in contrast, is more robust and sensitive: all peptides within a certain mass range are fragmented and measured at once, without preselection by abundance.
    As a result, this method generates large amounts of data, and the complexity of the obtained information increases considerably. Up to now, identification of the original proteins was only possible by matching the newly measured spectra against spectra in libraries that comprise previously measured spectra.
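Library-based identification of the kind described above can be pictured as scoring a measured spectrum against each previously measured library spectrum and keeping the best match. Below is a minimal sketch using cosine similarity; this is not MaxDIA's actual scoring function, and the peptide names, binning scheme, and intensity values are invented for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two binned intensity vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def best_library_match(measured, library):
    """Return the library peptide whose spectrum best matches `measured`."""
    return max(library, key=lambda pep: cosine_similarity(measured, library[pep]))

# Toy binned spectra (m/z bin -> intensity); purely illustrative values.
library = {
    "PEPTIDER": [0.0, 0.9, 0.1, 0.4, 0.0],
    "SAMPLEK":  [0.5, 0.0, 0.8, 0.1, 0.2],
}
measured = [0.0, 0.8, 0.2, 0.5, 0.1]
print(best_library_match(measured, library))  # → PEPTIDER
```

A library-free mode, as the article describes, would replace the measured library entries with spectra predicted in silico by a machine-learning model, but the matching step stays conceptually the same.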
    Combining DDA and DIA into one world
    Jürgen Cox and his team have now developed a software platform that provides a complete computational workflow for DIA data. For the first time, it allows the same algorithms to be applied to DDA and DIA data, so studies based on either method will become more easily comparable. MaxDIA analyzes proteomics data with and without spectral libraries. Using machine learning, the software predicts peptide fragmentation and spectral intensities, creating precise MS spectral libraries in silico. In this way, MaxDIA includes a library-free discovery mode with reliable control of false-positive protein identifications.
    Furthermore, the software supports new technologies such as bootstrap DIA, BoxCar DIA and trapped ion mobility spectrometry DIA. What are the next steps? The team is already working on further improving the software. Several extensions are being developed, for instance for improving the analysis of posttranslational modifications and identification of cross-linked peptides.
    Enabling researchers to conduct complex proteomics data analysis
    MaxDIA is free software available to scientists all over the world. It is embedded in the established software environment MaxQuant. “We would like to make proteomics data analysis accessible to all researchers,” says Pavel Sinitcyn, first author of the paper that introduces MaxDIA. To that end, Cox and his team offer hands-on training in the software at the MaxQuant summer school for all interested researchers, helping to bridge the gap between wet lab work and complex data analysis.
    Sinitcyn states that the aim is to “bring mass spectrometry from the Max Planck Institute of Biochemistry to the clinics.” Instead of measuring only a few proteins, thousands of proteins can now be measured and analyzed. This opens up new possibilities for medical applications, especially in the field of personalized medicine.
    Story Source:
    Materials provided by Max-Planck-Gesellschaft. Note: Content may be edited for style and length.

  • Mathematicians develop ground-breaking modeling toolkit to predict local COVID-19 impact

    A Sussex team — including university mathematicians — has created a new modelling toolkit that predicts the impact of COVID-19 at a local level with unprecedented accuracy. The details are published in the International Journal of Epidemiology, and the toolkit is available online for other local authorities to use, just as the UK looks as though it may be heading into another wave of infections.
    The study used the local Sussex hospital and healthcare daily COVID-19 situation reports, including admissions, discharges, bed occupancy and deaths.
    Through the pandemic, the newly-published modelling has been used by local NHS and public health services to predict infection levels so that public services can plan when and how to allocate health resources — and it has been conclusively shown to be accurate. The team are now making their modelling available to other local authorities to use via the Halogen toolkit.
    Anotida Madzvamuse, professor of mathematical and computational biology within the School of Mathematical and Physical Sciences at the University of Sussex, who led the study, said:
    “We undertook this study as a rapid response to the COVID-19 pandemic. Our objective was to provide support and enhance the capability of local NHS and Public Health teams to accurately predict and forecast the impact of local outbreaks to guide healthcare demand and capacity, policy making, and public health decisions.”
    “Working with outstanding mathematicians, Dr James Van Yperen and Dr Eduard Campillo-Funollet, we formulated an epidemiological model and inferred model parameters by fitting the model to local datasets to allow for short- and medium-term predictions and forecasts of the impact of COVID-19 outbreaks.”
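Compartmental epidemiological models of the kind the team describes are typically variants of the classic SIR equations, with transmission and recovery rates inferred by fitting simulated curves to local data such as admissions. A minimal forward-Euler sketch follows; the parameter values, population size, and function names are illustrative placeholders, not those of the Sussex model:

```python
def simulate_sir(beta, gamma, s0, i0, r0, days, dt=0.1):
    """Forward-Euler integration of the classic SIR equations:

        dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I

    Returns the infectious count at the end of each day; in a fitting
    workflow, beta and gamma would be tuned until this trajectory
    matches the observed local data.
    """
    n = s0 + i0 + r0
    s, i, r = float(s0), float(i0), float(r0)
    daily_infectious = []
    steps_per_day = int(round(1 / dt))
    for _ in range(days):
        for _ in range(steps_per_day):
            new_infections = beta * s * i / n * dt
            new_recoveries = gamma * i * dt
            s -= new_infections
            i += new_infections - new_recoveries
            r += new_recoveries
        daily_infectious.append(i)
    return daily_infectious

# Placeholder parameters: beta=0.3/day, gamma=0.1/day (basic reproduction
# number R0 = beta/gamma = 3), in a toy population of 100,000.
trajectory = simulate_sir(beta=0.3, gamma=0.1, s0=99_990, i0=10, r0=0, days=120)
print(f"peak infectious: {max(trajectory):.0f}")
```

A real toolkit like the one described would add compartments for hospital admissions, bed occupancy, and deaths, and estimate parameters statistically rather than by hand.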

  • Training helps teachers anticipate how students with learning disabilities might solve problems

    North Carolina State University researchers found that a four-week training course made a substantial difference in helping special education teachers anticipate different ways students with learning disabilities might solve math problems. The findings suggest that the training would help instructors more quickly identify and respond to a student’s needs.
    Published in the Journal of Mathematics Teacher Education, researchers say their findings could help teachers in special education develop strategies to respond to kids’ math reasoning and questions in advance. They also say the findings point to the importance of mathematics education preparation for special education teachers — an area where researchers say opportunities are lacking.
    “Many special education programs do not include a focus on mathematics for students with disabilities, and few, if any, focus on understanding the mathematical thinking of students with disabilities in particular,” said the study’s first author Jessica Hunt, associate professor of mathematics education and special education at NC State. “This study was based on a course experience designed to do just that — to heighten teacher knowledge of the mathematical thinking of students with learning disabilities grounded in a stance of neurodiversity.”
    In the study, researchers evaluated the impact of a four-week course on 20 pre-service special education teachers. Researchers wanted to know if the course impacted the educators’ ability to anticipate the mathematical reasoning of students with learning disabilities, and help teachers adjust tasks to make them more accessible. The course also emphasized neurodiversity, which defines cognitive differences as a natural and beneficial outgrowth of neurological and biological diversity.
    “Neurodiversity says that all human brains are highly variable, with no average or ‘normal’ learners,” Hunt said. “This means that we all have strengths and challenges, and as humans we use what makes sense to us to understand the world. It’s a way to challenge pervasive deficit approaches to looking at disability, and to instead use an asset-based approach that positions students with learning disabilities as mathematically capable.”
    Before and after the course, the teachers took a 40-question assessment. In the test, researchers asked teachers to use words, pictures or symbols to describe a strategy that elementary school students with learning disabilities might use to solve a problem. They compared teachers’ responses to see how well they anticipated students’ thinking, and also how they might modify tasks for students.
    After the course, researchers saw more anticipation of what they called “implicit action”: using strategies like counting, halving, grouping, or predicting the number of people sharing a certain item to solve a problem, often represented by pictures or words. Before the course, many teachers used “static representations,” in which mathematical expressions show solutions. While static representations are abstract, researchers argued that implicit actions can reflect how students with learning disabilities might actually work through a problem.
    They found teachers’ use of implicit action increased from 32 percent of answers before the course to 82 percent after, while static representation decreased from 50 percent of answers to 17 percent. The figures don’t add up to 100 percent because some teachers left some answers blank.
    “The course helped teachers move from a top-down, one-size-fits-all view of ‘this is how you solve these problems,’ to an anticipation of how actual students who are learning these concepts for the first time might think through these problems,” Hunt said. “That’s a very different stance in terms of educating teachers to anticipate student thinking so they can meet it with responsive instruction.”
    Researchers also tracked how teachers modified math problems to make them more accessible to students before and after taking the course. After participating in the course, researchers saw that more teachers changed the problem type. They saw a shift in 50 percent of answers.
    “The benefit of anticipating students’ thinking is to help teachers to be responsive and support students’ prior knowledge as they’re teaching, which is a really hard thing to do,” Hunt said. “It’s even harder if you don’t yet appreciate what that thinking could be.”
    Story Source:
    Materials provided by North Carolina State University. Original written by Laura Oleniacz. Note: Content may be edited for style and length.

  • New electronic paper displays brilliant colors

    Imagine sitting out in the sun, reading a digital screen as thin as paper, but seeing the same image quality as if you were indoors. Thanks to research from Chalmers University of Technology, Sweden, it could soon be a reality. A new type of reflective screen — sometimes described as ‘electronic paper’ — offers optimal colour display, while using ambient light to keep energy consumption to a minimum.
    Traditional digital screens use a backlight to illuminate the text or images displayed upon them. This is fine indoors, but we’ve all experienced the difficulties of viewing such screens in bright sunshine. Reflective screens, however, attempt to use the ambient light, mimicking the way our eyes respond to natural paper.
    “For reflective screens to compete with the energy-intensive digital screens that we use today, images and colours must be reproduced with the same high quality. That will be the real breakthrough. Our research now shows how the technology can be optimised, making it attractive for commercial use,” says Marika Gugole, Doctoral Student at the Department of Chemistry and Chemical Engineering at Chalmers University of Technology.
    The researchers had previously succeeded in developing an ultra-thin, flexible material that reproduces all the colours an LED screen can display, while requiring only a tenth of the energy of a standard tablet.
    But in the earlier design, the colours on the reflective screen did not display with optimal quality. The new study, published in the journal Nano Letters, takes the material one step further. Using a previously researched porous, nanostructured material containing tungsten trioxide, gold and platinum, the researchers tried a new tactic — inverting the design so that the colours appear much more accurately on the screen.
    Inverting the design for top quality colour
    The inversion of the design represents a great step forward. The researchers placed the component that makes the material electrically conductive underneath the pixelated nanostructure that reproduces the colours — instead of above it, as was previously the case. With this new design, you look directly at the pixelated surface and therefore see the colours much more clearly.

  • Thyroid cancer now diagnosed with machine learning-powered photoacoustic/ultrasound imaging

    A lump in the thyroid gland is called a thyroid nodule, and 5-10% of all thyroid nodules are diagnosed as thyroid cancer. Thyroid cancer has a good prognosis, a high survival rate, and a low recurrence rate, so early diagnosis and treatment are crucial. Recently, a joint research team in Korea has proposed a new non-invasive method to distinguish thyroid nodules from cancer by combining photoacoustic (PA) and ultrasound image technology with artificial intelligence.
    The joint research team — composed of Professor Chulhong Kim and Dr. Byullee Park of POSTECH’s Department of Electrical Engineering, Department of Convergence IT Engineering and Department of Mechanical Engineering, Professor Dong-Jun Lim and Professor Jeonghoon Ha of Seoul St. Mary’s Hospital of Catholic University of Korea, and Professor Jeesu Kim of Pusan National University — conducted a study to acquire PA images from patients with malignant and benign nodules and analyze them with artificial intelligence. In recognition of their significance, the findings from this study were published in Cancer Research.
    Currently, a thyroid nodule is diagnosed by fine-needle aspiration biopsy (FNAB) guided by ultrasound imaging. But about 20% of FNABs are inaccurate, leading to repeated and unnecessary biopsies.
    To overcome this problem, the joint research team explored the use of PA imaging, which captures an ultrasonic signal generated by light. When laser light is shone on the patient’s thyroid nodule, an ultrasound signal called a PA signal is generated by the thyroid gland and the nodule. By acquiring and processing this signal, PA images of both the gland and the nodule are collected. If multispectral PA signals are obtained, the oxygen saturation of the thyroid gland and thyroid nodule can also be calculated.
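A common way to compute oxygen saturation from multispectral PA signals is linear spectral unmixing: at each wavelength, the measured absorption is modeled as a mix of oxy- and deoxyhemoglobin weighted by their extinction coefficients. Here is a two-wavelength sketch; the article does not specify the team's actual method, and all coefficients and values below are invented for illustration:

```python
def oxygen_saturation(mu_a, eps_hbo2, eps_hb):
    """Solve the 2x2 linear system  mu_a = eps_hbo2*C_HbO2 + eps_hb*C_Hb
    at two wavelengths, then return  sO2 = C_HbO2 / (C_HbO2 + C_Hb)."""
    (a1, a2), (e1o, e2o), (e1d, e2d) = mu_a, eps_hbo2, eps_hb
    det = e1o * e2d - e2o * e1d           # Cramer's rule for the 2x2 solve
    c_hbo2 = (a1 * e2d - a2 * e1d) / det
    c_hb = (e1o * a2 - e2o * a1) / det
    return c_hbo2 / (c_hbo2 + c_hb)

# Placeholder extinction coefficients at two wavelengths (arbitrary units).
eps_hbo2 = (2.0, 1.0)   # oxyhemoglobin
eps_hb = (1.0, 3.0)     # deoxyhemoglobin
# Synthetic measurement built from a known 80%-oxygenated mix:
# C_HbO2 = 0.8, C_Hb = 0.2
mu_a = (2.0 * 0.8 + 1.0 * 0.2, 1.0 * 0.8 + 3.0 * 0.2)
print(round(oxygen_saturation(mu_a, eps_hbo2, eps_hb), 2))  # → 0.8
```

With real data, more than two wavelengths would typically be measured and the system solved by least squares, but the principle is the same.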
    The researchers focused on the fact that the oxygen saturation of malignant nodules is lower than that of normal nodules, and acquired PA images of patients with malignant thyroid nodules (23 patients) and those with benign nodules (29 patients). Performing in vivo multispectral PA imaging on the patient’s thyroid nodules, the researchers calculated multiple parameters, including hemoglobin oxygen saturation level in the nodule area. This was analyzed using machine learning techniques to successfully and automatically classify whether the thyroid nodule was malignant or benign. In the initial classification, the sensitivity to classify malignancy as malignant was 78% and the specificity to classify benign as benign was 93%.
    In a second analysis, the machine-learning results from the PA data were combined with the results of the initial examination based on the ultrasound images normally used in hospitals. This combination distinguished malignant thyroid nodules with a sensitivity of 83% and a specificity of 93%.
    Going a step further, when the researchers kept the sensitivity at 100% in the third analysis, the specificity reached 55%. This was about three times higher than the specificity of 17.3% (sensitivity of 98%) of the initial examination of thyroid nodules using the conventional ultrasound.
    As a result, the probability of correctly diagnosing benign, non-malignant nodules increased more than threefold, showing that overdiagnosis, unnecessary biopsies, and repeated tests can be dramatically reduced, cutting down on excessive medical costs.
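The sensitivity and specificity figures quoted throughout follow directly from a classifier's confusion counts. A quick sketch; the confusion counts below are invented, chosen only so the rates land on the quoted initial 78%/93% for the reported cohort of 23 malignant and 29 benign nodules (the actual counts are not given in the article):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN): fraction of malignant nodules called malignant.
    Specificity = TN/(TN+FP): fraction of benign nodules called benign."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion counts: 18 of 23 malignant nodules flagged,
# 27 of 29 benign nodules cleared.
sens, spec = sensitivity_specificity(tp=18, fn=5, tn=27, fp=2)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")  # sensitivity 78%, specificity 93%
```

The trade-off reported in the third analysis (sensitivity pinned at 100%, specificity 55%) corresponds to moving the classifier's decision threshold so that no malignant nodule is missed, at the cost of more benign nodules being flagged.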
    “This study is significant in that it is the first to acquire photoacoustic images of thyroid nodules and classify malignant nodules using machine learning,” remarked Professor Chulhong Kim of POSTECH. “In addition to minimizing unnecessary biopsies in thyroid cancer patients, this technique can also be applied to a variety of other cancers, including breast cancer.”
    “The ultrasonic device based on photoacoustic imaging will be helpful in effectively diagnosing thyroid cancer commonly found during health checkups and in reducing the number of biopsies,” explained Professor Dong-Jun Lim of Seoul St. Mary’s Hospital. “It can be developed into a medical device that can be readily used on thyroid nodule patients.”