More stories

  • Training helps teachers anticipate how students with learning disabilities might solve problems

    North Carolina State University researchers found that a four-week training course made a substantial difference in helping special education teachers anticipate different ways students with learning disabilities might solve math problems. The findings suggest that the training would help instructors more quickly identify and respond to a student’s needs.
    Published in the Journal of Mathematics Teacher Education, researchers say their findings could help teachers in special education develop strategies to respond to kids’ math reasoning and questions in advance. They also say the findings point to the importance of mathematics education preparation for special education teachers — an area where researchers say opportunities are lacking.
    “Many special education programs do not include a focus on mathematics for students with disabilities, and few, if any, focus on understanding the mathematical thinking of students with disabilities in particular,” said the study’s first author Jessica Hunt, associate professor of mathematics education and special education at NC State. “This study was based on a course experience designed to do just that — to heighten teacher knowledge of the mathematical thinking of students with learning disabilities grounded in a stance of neurodiversity.”
    In the study, researchers evaluated the impact of a four-week course on 20 pre-service special education teachers. Researchers wanted to know if the course impacted the educators’ ability to anticipate the mathematical reasoning of students with learning disabilities, and help teachers adjust tasks to make them more accessible. The course also emphasized neurodiversity, which defines cognitive differences as a natural and beneficial outgrowth of neurological and biological diversity.
    “Neurodiversity says that all human brains are highly variable, with no average or ‘normal’ learners,” Hunt said. “This means that we all have strengths and challenges, and as humans we use what makes sense to us to understand the world. It’s a way to challenge pervasive deficit approaches to looking at disability, and to instead use an asset-based approach that positions students with learning disabilities as mathematically capable.”
    Before and after the course, the teachers took a 40-question assessment. In the test, researchers asked teachers to use words, pictures or symbols to describe a strategy that elementary school students with learning disabilities might use to solve a problem. They compared teachers’ responses to see how well they anticipated students’ thinking, and also how they might modify tasks for students.
After the course, researchers saw more anticipation of what they called “implicit action”: strategies such as counting, halving, grouping, or predicting the number of people sharing an item, often represented in pictures or words. Before the course, many teachers used “static representations,” in which mathematical expressions show the solution. While static representations are abstract depictions of solutions, researchers argued implicit actions can reflect how students with learning disabilities themselves might work through a problem.
Teachers’ use of implicit action rose from 32 percent of answers before the course to 82 percent after, while static representation fell from 50 percent of answers to 17 percent. The percentages don’t sum to 100 because some teachers left some answers blank.
    “The course helped teachers move from a top-down, one-size-fits-all view of ‘this is how you solve these problems,’ to an anticipation of how actual students who are learning these concepts for the first time might think through these problems,” Hunt said. “That’s a very different stance in terms of educating teachers to anticipate student thinking so they can meet it with responsive instruction.”
Researchers also tracked how teachers modified math problems to make them more accessible to students before and after the course. After participating, more teachers changed the problem type, a shift seen in 50 percent of answers.
    “The benefit of anticipating students’ thinking is to help teachers to be responsive and support students’ prior knowledge as they’re teaching, which is a really hard thing to do,” Hunt said. “It’s even harder if you don’t yet appreciate what that thinking could be.”
    Story Source:
Materials provided by North Carolina State University. Original written by Laura Oleniacz. Note: Content may be edited for style and length.

  • New electronic paper displays brilliant colors

    Imagine sitting out in the sun, reading a digital screen as thin as paper, but seeing the same image quality as if you were indoors. Thanks to research from Chalmers University of Technology, Sweden, it could soon be a reality. A new type of reflective screen — sometimes described as ‘electronic paper’ — offers optimal colour display, while using ambient light to keep energy consumption to a minimum.
    Traditional digital screens use a backlight to illuminate the text or images displayed upon them. This is fine indoors, but we’ve all experienced the difficulties of viewing such screens in bright sunshine. Reflective screens, however, attempt to use the ambient light, mimicking the way our eyes respond to natural paper.
    “For reflective screens to compete with the energy-intensive digital screens that we use today, images and colours must be reproduced with the same high quality. That will be the real breakthrough. Our research now shows how the technology can be optimised, making it attractive for commercial use,” says Marika Gugole, Doctoral Student at the Department of Chemistry and Chemical Engineering at Chalmers University of Technology.
    The researchers had already previously succeeded in developing an ultra-thin, flexible material that reproduces all the colours an LED screen can display, while requiring only a tenth of the energy that a standard tablet consumes.
But in the earlier design the colours on the reflective screen did not display with optimal quality. The new study, published in the journal Nano Letters, takes the material one step further. Using a previously researched porous, nanostructured material containing tungsten trioxide, gold and platinum, the researchers tried a new tactic: inverting the design so that the colours appear much more accurately on the screen.
    Inverting the design for top quality colour
The inversion of the design represents a great step forward. The researchers placed the component that makes the material electrically conductive underneath the pixelated nanostructure that reproduces the colours, instead of above it, as was previously the case. With this new design, you look directly at the pixelated surface and therefore see the colours much more clearly.

  • Thyroid cancer now diagnosed with machine learning-powered photoacoustic/ultrasound imaging

A lump in the thyroid gland is called a thyroid nodule, and 5-10% of all thyroid nodules are diagnosed as thyroid cancer. Thyroid cancer has a good prognosis, a high survival rate, and a low recurrence rate, so early diagnosis and treatment are crucial. Recently, a joint research team in Korea has proposed a new non-invasive method to distinguish benign thyroid nodules from cancerous ones by combining photoacoustic (PA) and ultrasound imaging with artificial intelligence.
The joint research team — composed of Professor Chulhong Kim and Dr. Byullee Park of POSTECH’s Department of Electrical Engineering, Department of Convergence IT Engineering and Department of Mechanical Engineering, Professor Dong-Jun Lim and Professor Jeonghoon Ha of Seoul St. Mary’s Hospital of Catholic University of Korea, and Professor Jeesu Kim of Pusan National University — conducted a study acquiring PA images from patients with malignant and benign nodules and analyzing them with artificial intelligence. In recognition of their significance, the findings from this study were published in Cancer Research.
Currently, thyroid nodules are diagnosed by fine-needle aspiration biopsy (FNAB) guided by ultrasound imaging. But about 20% of FNABs are inaccurate, which leads to repeated and unnecessary biopsies.
To overcome this problem, the joint research team explored the use of PA imaging, which captures an ultrasonic signal generated by light. When laser light is directed at the patient’s thyroid nodule, an ultrasound signal called a PA signal is generated from the thyroid gland and the nodule. By acquiring and processing this signal, PA images of both the gland and the nodule are collected. If multispectral PA signals are obtained, oxygen saturation information for the thyroid gland and thyroid nodule can also be calculated.
    The researchers focused on the fact that the oxygen saturation of malignant nodules is lower than that of normal nodules, and acquired PA images of patients with malignant thyroid nodules (23 patients) and those with benign nodules (29 patients). Performing in vivo multispectral PA imaging on the patient’s thyroid nodules, the researchers calculated multiple parameters, including hemoglobin oxygen saturation level in the nodule area. This was analyzed using machine learning techniques to successfully and automatically classify whether the thyroid nodule was malignant or benign. In the initial classification, the sensitivity to classify malignancy as malignant was 78% and the specificity to classify benign as benign was 93%.
    The results of PA analysis obtained by machine learning techniques in the second analysis were combined with the results of the initial examination based on ultrasound images normally used in hospitals. Again, it was confirmed that the malignant thyroid nodules could be distinguished with a sensitivity of 83% and a specificity of 93%.
    Going a step further, when the researchers kept the sensitivity at 100% in the third analysis, the specificity reached 55%. This was about three times higher than the specificity of 17.3% (sensitivity of 98%) of the initial examination of thyroid nodules using the conventional ultrasound.
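For readers unfamiliar with the two metrics the study reports, here is a minimal sketch. The confusion-matrix counts below are illustrative, chosen only to be consistent with the study's cohort sizes (23 malignant, 29 benign) and its reported initial 78%/93% figures; they are not the study's raw data.

```python
def sensitivity(tp, fn):
    """Fraction of truly malignant nodules classified as malignant."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of truly benign nodules classified as benign."""
    return tn / (tn + fp)

# Illustrative counts: a hypothetical classifier that flags 18 of the
# 23 malignant nodules and clears 27 of the 29 benign ones.
tp, fn = 18, 5   # malignant: correctly flagged / missed
tn, fp = 27, 2   # benign: correctly cleared / falsely flagged

print(f"sensitivity = {sensitivity(tp, fn):.0%}")  # 18/23 ≈ 78%
print(f"specificity = {specificity(tn, fp):.0%}")  # 27/29 ≈ 93%
```

Raising sensitivity (catching every malignancy, as in the third analysis) generally costs specificity, which is the trade-off the reported 100%/55% operating point reflects.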
    As a result, the probability of correctly diagnosing benign, non-malignant nodules increased more than three times, which shows that overdiagnosis and unnecessary biopsies and repeated tests can be dramatically reduced, and thereby cut down on excessive medical costs.
    “This study is significant in that it is the first to acquire photoacoustic images of thyroid nodules and classify malignant nodules using machine learning,” remarked Professor Chulhong Kim of POSTECH. “In addition to minimizing unnecessary biopsies in thyroid cancer patients, this technique can also be applied to a variety of other cancers, including breast cancer.”
“The ultrasonic device based on photoacoustic imaging will be helpful in effectively diagnosing thyroid cancer commonly found during health checkups and in reducing the number of biopsies,” explained Professor Dong-Jun Lim of Seoul St. Mary’s Hospital. “It can be developed into a medical device that can be readily used on thyroid nodule patients.”

  • Virtual learning may help NICU nurses recognize baby pain

Babies younger than four weeks old, called neonates, were once thought not to perceive pain because their sensory systems were not yet fully developed, but modern research says otherwise, according to researchers from Hiroshima University in Japan.
    Not only do babies experience pain, but the various levels can be standardized to help nurses recognize and respond to the babies’ cues — if the nurses have the opportunity to learn the scoring tools and skills needed to react appropriately. With tight schedules and limited in-person courses available, the researchers theorized, virtual e-learning may be able to provide a path forward for nurses to independently pursue training in this area.
    To test this hypothesis, researchers conducted a pilot study of 115 nurses with varying levels of formal training and years of experience in seven hospitals across Japan. They published their results on May 27 in Advances in Neonatal Care.
“Despite a growing body of knowledge and guidelines being published in many countries about the prevention and management of pain in neonates hospitalized in the NICU, neonatal pain remains unrecognized, undertreated, and generally challenging,” said paper author Mio Ozawa, associate professor in the Graduate School of Biomedical and Health Sciences at Hiroshima University.
    The researchers developed a comprehensive multimedia virtual program on neonatal pain management, based on selected standardized pain scales, for nursing staff to independently learn how to employ measurement tools. The program, called e-Pain Management of Neonates, is the first of its kind in Japan.
“The aim of the study was to verify the feasibility of the program and whether e-learning actually improves nurses’ knowledge and scoring skills,” Ozawa said. “The results of this study suggest that nurses could obtain knowledge and skills about the measurement of neonatal pain through e-learning.”
    The full cohort took a pre-test at the start of the study, before embarking on a self-paced, four-week e-learning program dedicated to learning standardized pain scales to measure discomfort in babies. However, only 52 nurses completed the post-test after four weeks. For those 52, scores increased across a range of years of experience and formal education.
    Ozawa noted that the sample size is small but also said that the improved test scores indicated the potential for e-learning.
“Future research will need to go beyond the individual level to determine which benefits are produced in the management of neonatal pain in hospitals where nurses learn neonatal pain management through e-learning,” Ozawa said. “This study demonstrates that a virtually delivered neonatal pain management program can be useful for nurses’ attainment of knowledge and skills for managing neonatal pain, including an appropriate use of selected scoring tools.”
    Story Source:
Materials provided by Hiroshima University. Note: Content may be edited for style and length.

  • Seeing with radio waves

    Scientists from the Division of Physics at the University of Tsukuba used the quantum effect called “spin-locking” to significantly enhance the resolution when performing radio-frequency imaging of nitrogen-vacancy defects in diamond. This work may lead to faster and more accurate material analysis, as well as a path towards practical quantum computers.
Nitrogen-vacancy (NV) centers have long been studied for their potential use in quantum computers. An NV center is a type of defect in the lattice of a diamond, in which two adjacent carbon atoms have been replaced with a nitrogen atom and a void. This leaves an unpaired electron, which can be detected using radio-frequency waves, because its probability of emitting a photon depends on its spin state. However, the spatial resolution of radio wave detection using conventional radio-frequency techniques has remained less than optimal.
    Now, researchers at the University of Tsukuba have pushed the resolution to its limit by employing a technique called “spin-locking.” Microwave pulses are used to put the electron’s spin in a quantum superposition of up and down simultaneously. Then, a driving electromagnetic field causes the direction of the spin to precess around, like a wobbling top. The end result is an electron spin that is shielded from random noise but strongly coupled to the detection equipment. “Spin-locking ensures high accuracy and sensitivity of the electromagnetic field imaging,” first author Professor Shintaro Nomura explains. Due to the high density of NV centers in the diamond samples used, the collective signal they produced could be easily picked up with this method. This permitted the sensing of collections of NV centers at the micrometer scale. “The spatial resolution we obtained with RF imaging was much better than with similar existing methods,” Professor Nomura continues, “and it was limited only by the resolution of the optical microscope we used.”
    The approach demonstrated in this project may be applied in a broad variety of application areas — for example, the characterizations of polar molecules, polymers, and proteins, as well as the characterization of materials. It might also be used in medical applications — for example, as a new way to perform magnetocardiography.
This work was partly supported by a Grant-in-Aid for Scientific Research (Nos. JP18H04283, JP18H01243, JP18K18726, and JP21H01009) from the Japan Society for the Promotion of Science.
    Story Source:
Materials provided by University of Tsukuba. Note: Content may be edited for style and length.

  • Computer-assisted biology: Decoding noisy data to predict cell growth

    Scientists from The University of Tokyo Institute of Industrial Science have designed a machine learning algorithm to predict the size of an individual cell as it grows and divides. By using an artificial neural network that does not impose the assumptions commonly employed in biology, the computer was able to make more complex and accurate forecasts than previously possible. This work may help advance the field of quantitative biology as well as improve the industrial production of medications or fermented products.
    As in all of the natural sciences, biology has developed mathematical models to help fit data and make predictions about the future. However, because of the inherent complexities of living systems, many of these equations rely on simplifying assumptions that do not always reflect the actual underlying biological processes. Now, researchers at The University of Tokyo Institute of Industrial Science have implemented a machine learning algorithm that can use the measured size of single cells over time to predict their future size. Because the computer automatically recognizes patterns in the data, it is not constrained like conventional methods.
“In biology, simple models are often used based on their capacity to reproduce the measured data,” first author Atsushi Kamimura says. “However, the models may fail to capture what is really going on because of human preconceptions.”
    The data for this latest study were collected from either an Escherichia coli bacterium or a Schizosaccharomyces pombe yeast cell held in a microfluidic channel at various temperatures. The plot of size over time looked like a “sawtooth” as exponential growth was interrupted by division events. Human biologists usually use a “sizer” model, based on the absolute size of the cell, or “adder” model, based on the increase in size since birth, to predict when divisions will occur. The computer algorithm found support for the “adder” principle, but as part of a complex web of biochemical reactions and signaling.
    “Our deep-learning neural network can effectively separate the history-dependent deterministic factors from the noise in given data,” senior author Tetsuya Kobayashi says.
    This method can be extended to many other aspects of biology besides predicting cell size. In the future, life science may be driven more by objective artificial intelligence than human models. This may lead to more efficient control of microorganisms we use to ferment products and produce drugs.
    Story Source:
Materials provided by Institute of Industrial Science, The University of Tokyo. Note: Content may be edited for style and length.

  • Physicists take big step in race to quantum computing

    A team of physicists from the Harvard-MIT Center for Ultracold Atoms and other universities has developed a special type of quantum computer known as a programmable quantum simulator capable of operating with 256 quantum bits, or “qubits.”
    The system marks a major step toward building large-scale quantum machines that could be used to shed light on a host of complex quantum processes and eventually help bring about real-world breakthroughs in material science, communication technologies, finance, and many other fields, overcoming research hurdles that are beyond the capabilities of even the fastest supercomputers today. Qubits are the fundamental building blocks on which quantum computers run and the source of their massive processing power.
    “This moves the field into a new domain where no one has ever been to thus far,” said Mikhail Lukin, the George Vasmer Leverett Professor of Physics, co-director of the Harvard Quantum Initiative, and one of the senior authors of the study published today in the journal Nature. “We are entering a completely new part of the quantum world.”
According to Sepehr Ebadi, a physics student in the Graduate School of Arts and Sciences and the study’s lead author, it is the combination of the system’s unprecedented size and programmability that puts it at the cutting edge of the race for a quantum computer, which harnesses the mysterious properties of matter at extremely small scales to greatly advance processing power. Under the right circumstances, the increase in qubits means the system can store and process exponentially more information than the classical bits on which standard computers run.
    “The number of quantum states that are possible with only 256 qubits exceeds the number of atoms in the solar system,” Ebadi said, explaining the system’s vast size.
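The scale claim is easy to sanity-check with arbitrary-precision integers. The solar-system atom count below is a rough, commonly cited order-of-magnitude estimate (dominated by the Sun), not a figure from the study:

```python
# Number of basis states of a 256-qubit register vs. a rough estimate
# of the number of atoms in the solar system.
n_states = 2 ** 256
atoms_in_solar_system = 10 ** 57  # order-of-magnitude estimate

print(len(str(n_states)))                # 78 digits, i.e. about 10^77
print(n_states > atoms_in_solar_system)  # True
```

At roughly 10^77 basis states against roughly 10^57 atoms, the comparison holds by some twenty orders of magnitude.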
Already, the simulator has allowed researchers to observe several exotic quantum states of matter that had never before been realized experimentally, and to perform a quantum phase transition study so precise that it serves as the textbook example of how magnetism works at the quantum level.

  • First study of nickelate's magnetism finds a strong kinship with cuprate superconductors

    Ever since the 1986 discovery that copper oxide materials, or cuprates, could carry electrical current with no loss at unexpectedly high temperatures, scientists have been looking for other unconventional superconductors that could operate even closer to room temperature. This would allow for a host of everyday applications that could transform society by making energy transmission more efficient, for instance.
    Nickel oxides, or nickelates, seemed like a promising candidate. They’re based on nickel, which sits next to copper on the periodic table, and the two elements have some common characteristics. It was not unreasonable to think that superconductivity would be one of them.
    But it took years of trying before scientists at the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University finally created the first nickelate that showed clear signs of superconductivity.
    Now SLAC, Stanford and Diamond Light Source researchers have made the first measurements of magnetic excitations that spread through the new material like ripples in a pond. The results reveal both important similarities and subtle differences between nickelates and cuprates. The scientists published their results in Science today.
“This is exciting, because it gives us a new angle for exploring how unconventional superconductors work, which is still an open question after 30-plus years of research,” said Haiyu Lu, a Stanford graduate student who did the bulk of the research with Stanford postdoctoral researcher Matteo Rossi and SLAC staff scientist Wei-Sheng Lee.