More stories


    New deep learning model helps the automated screening of common eye disorders

    A new deep learning (DL) model that can identify disease-related features in images of eyes has been unveiled by a group of Tohoku University researchers. This ‘lightweight’ DL model can be trained on a small number of images, even ones with a high degree of noise, and is resource-efficient, meaning it can be deployed on mobile devices.
    Details were published in the journal Scientific Reports on May 20, 2022.
    With many societies aging and medical personnel in short supply, self-monitoring and tele-screening of diseases that rely on DL models are becoming more routine. Yet deep learning algorithms are generally task-specific, built to identify or detect general objects such as people, animals, or road signs.
    Identifying diseases, on the other hand, demands precise measurement of tumors, tissue volume, or other abnormalities. Doing so requires a model to examine separate images and mark boundaries in a process known as segmentation. But accurate segmentation demands greater computational resources, making such models difficult to deploy on mobile devices.
    “There is always a trade-off between accuracy, speed and computational resources when it comes to DL models,” says Toru Nakazawa, co-author of the study and professor at Tohoku University’s Department of Ophthalmology. “Our developed model has better segmentation accuracy and enhanced model training reproducibility, even with fewer parameters — making it efficient and more lightweight when compared to other commercial software.”
    Professor Nakazawa, Associate Professor Parmanand Sharma, Dr Takahiro Ninomiya, and students from the Department of Ophthalmology worked with professor Takayuki Okatani from Tohoku University’s Graduate School of Information Sciences to produce the model.
    Using low-resource devices, they obtained measurements of the foveal avascular zone, a region within the fovea centralis at the center of the retina, to enhance screening for glaucoma.
    “Our model is also capable of detecting/segmenting optic discs and hemorrhages in fundus images with high precision,” added Nakazawa.
    In the future, the group is hopeful of deploying the lightweight model to screen for other common eye disorders and other diseases.
    Story Source:
    Materials provided by Tohoku University. Note: Content may be edited for style and length.


    Wearable chemical sensor is as good as gold

    Researchers created a special ultrathin sensor, spun from gold, that can be attached directly to the skin without irritation or discomfort. The sensor can measure different biomarkers or substances to perform on-body chemical analysis. It works using a technique called Raman spectroscopy, where laser light aimed at the sensor is changed slightly depending on whatever chemicals are present on the skin at that point. The sensor can be finely tuned to be extremely sensitive, and is robust enough for practical use.
    Wearable technology is nothing new. Perhaps you or someone you know wears a smartwatch. Many of these can monitor certain health matters such as heart rate, but at present they cannot measure chemical signatures, which could be useful for medical diagnosis. Smartwatches or more specialized medical monitors are also relatively bulky and often quite costly. Prompted by such shortfalls, a team comprising researchers from the Department of Chemistry at the University of Tokyo sought a new way to sense various health conditions and environmental factors in a noninvasive and cost-effective manner.
    “A few years ago, I came across a fascinating method for producing robust stretchable electronic components from another research group at the University of Tokyo,” said Limei Liu, a visiting scholar at the time of the study and currently a lecturer at Yangzhou University in China. “These devices are spun from ultrafine threads coated with gold, so can be attached to the skin without issue as gold does not react with or irritate the skin in any way. As sensors, however, they were limited to detecting motion, and we were looking for something that could sense chemical signatures, biomarkers and drugs. So we built upon this idea and created a noninvasive sensor that exceeded our expectations and inspired us to explore ways to improve its functionality even further.”
    The main component of the sensor is the fine gold mesh. Gold is unreactive, meaning that when it comes into contact with a substance the team wishes to measure — for example, a potential disease biomarker present in sweat — it does not chemically alter that substance. And because the gold mesh is so fine, it provides a surprisingly large surface for that biomarker to bind to, which is where the other components of the sensor come in. When a low-power laser is pointed at the gold mesh, some of the laser light is absorbed and some is reflected. Most of the reflected light has the same energy as the incoming light. However, some incoming light loses energy to the biomarker or other measurable substance, and the discrepancy in energy between reflected and incident light is unique to the substance in question. An instrument called a spectrometer can use this unique energy fingerprint to identify the substance. This method of chemical identification is known as Raman spectroscopy.
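    As a rough numerical aside (not part of the study), the energy discrepancy measured in Raman spectroscopy is conventionally reported as a shift in wavenumbers, which can be computed directly from the incident and scattered wavelengths:

```python
def raman_shift_cm1(incident_nm: float, scattered_nm: float) -> float:
    """Raman shift in wavenumbers (cm^-1): the energy difference between
    the incident laser light and the inelastically scattered light."""
    # The factor 1e7 converts reciprocal nanometers to reciprocal centimeters.
    return 1e7 * (1.0 / incident_nm - 1.0 / scattered_nm)

# Example: 532 nm laser light scattered at 632.8 nm corresponds to a shift
# of about 2994 cm^-1, in the range of C-H stretch vibrations.
shift = raman_shift_cm1(532.0, 632.8)
```

    Each substance produces a characteristic set of such shifts, which is why the pattern acts as a fingerprint.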
    “Currently, our sensors need to be finely tuned to detect specific substances, and we wish to push both the sensitivity and specificity even further in future,” said Assistant Professor Tinghui Xiao. “With this, we think applications like glucose monitoring, ideal for sufferers of diabetes, or even virus detection, might be possible.”
    “There is also potential for the sensor to work with other methods of chemical analysis besides Raman spectroscopy, such as electrochemical analysis, but all these ideas require a lot more investigation,” said Professor Keisuke Goda. “In any case, I hope this research can lead to a new generation of low-cost biosensors that can revolutionize health monitoring and reduce the financial burden of health care.”
    Story Source:
    Materials provided by University of Tokyo. Note: Content may be edited for style and length.


    A new model sheds light on how we learn motor skills

    Researchers from the University of Tsukuba have developed a mathematical model of motor learning that reflects the motor learning process in the human brain. Their findings suggest that motor exploration — that is, increased variability in movements — is important when learning a new task. These results may lead to improved motor rehabilitation in patients after injury or disease.
    Even seemingly simple movements are very complex to perform, and the way we learn how to perform new movements remains unclear. Researchers from Japan have recently proposed a new model of motor learning that combines a number of different theories. A study published this month in Neural Networks revealed that their model can simulate motor learning in humans surprisingly well, paving the way for a greater understanding of how our brains work.
    For even a relatively simple task, such as reaching out and picking up an object, there is a huge number of potential combinations of angles between your body and the different joints involved. The same goes for each of your muscles — there is an almost endless combination of muscles and forces that can be used together to perform an action. With all of these possible combinations of joints and muscles — not to mention the underlying neuronal activity — how do we ever learn to make any movements at all? Researchers at the University of Tsukuba aimed to address this question.
    The research team first created a mathematical model to imitate the learning process that occurs for new motor tasks. They designed the model to reflect many of the processes that are thought to occur in the brain when a new skill is learned. The researchers then tested their model by attempting to simulate the results of three recent studies that were conducted in humans, in which individuals were asked to perform completely new motor tasks.
    “We were surprised at how well our simulations managed to reproduce many of the results of previous studies in humans,” says Professor Jun Izawa, senior author of the study. “With our model, we were able to bridge the gap between a number of different proposed mechanisms of motor learning, such as motor exploration, redundancy solving, and error-based learning.”
    In their model, larger amounts of motor exploration — that is, variability in movements — were found to help with the learning of sensitivity derivatives, which measure how commands from the brain affect motor error. In this way, errors were transformed into motor corrections.
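    The interplay described here, in which exploration noise reveals how commands map to errors and that estimate then drives error-based correction, can be sketched in a toy one-dimensional form (an illustration only, not the authors' model; every name and constant below is invented):

```python
import random

# Toy motor system: an unknown gain A_TRUE maps a motor command u to an
# outcome; the learner only ever observes the error relative to a target.
A_TRUE = -2.0   # hidden sensitivity of the error to the command
TARGET = 1.0    # desired outcome

def error(u):
    """Observed motor error for command u (the learner cannot see A_TRUE)."""
    return A_TRUE * u - TARGET

random.seed(0)
A_hat, u = 0.0, 0.0   # estimated sensitivity derivative; current command
for _ in range(500):
    du = random.gauss(0.0, 0.1)            # motor exploration (variability)
    if abs(du) > 1e-6:
        de = error(u + du) - error(u)      # error change caused by exploration
        A_hat += 0.5 * (de / du - A_hat)   # refine the sensitivity estimate
    u -= 0.1 * A_hat * error(u)            # error-based motor correction
```

    In this sketch, larger exploration noise gives more informative (du, de) pairs, so the sensitivity estimate stabilizes faster, mirroring the paper's claim that variability aids learning.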
    “Our success at simulating real results from human studies was encouraging,” explains first author Lucas Rebelo Dal’Bello. “It suggests that our proposed learning mechanism might accurately reflect what occurs in the brain during motor learning.”
    The findings of this study, which indicate the importance of motor exploration in motor learning, provide insights into how motor learning might occur in the human brain. They also suggest that motor exploration should be encouraged when a new motor task is being learned; this may be helpful for motor rehabilitation after injury or disease.
    This work was supported by KAKENHI (Scientific Research on Innovative Areas 19H04977 and 19H05729). LD was supported by a Japanese Government (Monbukagakusho: MEXT) Scholarship.
    Story Source:
    Materials provided by University of Tsukuba. Note: Content may be edited for style and length.


    Earth’s oldest known wildfires raged 430 million years ago

    Bits of charcoal entombed in ancient rocks unearthed in Wales and Poland push back the earliest evidence for wildfires to around 430 million years ago. Besides breaking the previous record by about 10 million years, the finds help pin down how much oxygen was in Earth’s atmosphere at the time.

    The ancient atmosphere must have contained at least 16 percent oxygen, researchers report June 13 in Geology. That conclusion is based on modern-day lab tests that show how much oxygen it takes for a wildfire to take hold and spread.


    While oxygen makes up 21 percent of our air today, over the last 600 million years or so oxygen levels in Earth’s atmosphere have fluctuated between 13 percent and 30 percent (SN: 12/13/05). Long-term models simulating past oxygen concentrations are based on processes such as the burial of coal swamps, mountain building, erosion and the chemical changes associated with them. But those models, some of which predict oxygen levels as low as 10 percent for this time period, paint trends in broad strokes and may not capture brief spikes and dips, say Ian Glasspool and Robert Gastaldo, both paleobotanists at Colby College in Waterville, Maine.

    Charcoal, a remnant of wildfire, is physical evidence that provides, at the least, a minimum threshold for oxygen concentrations. That’s because oxygen is one of three ingredients needed to create a wildfire. The second, ignition, came from lightning in the ancient world, says Glasspool. The third, fuel, came from burgeoning plants and fungi 430 million years ago, during the Silurian Period. The predominant greenery was low-growing plants just a couple of centimeters tall. Scattered among this diminutive ground cover were occasional knee-high to waist-high plants and Prototaxites fungi that towered up to nine meters tall. Before this time, most plants were single-celled and lived in the seas.

    Once plants left the ocean and began to thrive, wildfire followed. “Almost as soon as we have evidence of plants on land, we have evidence of wildfire,” says Glasspool.

    That evidence includes tiny chunks of partially charred plants — including charcoal as identified by its microstructure — as well as conglomerations of charcoal and associated minerals embedded within fossilized hunks of Prototaxites fungi. Those samples came from rocks of known ages that formed from sediments dumped just offshore of ancient landmasses. This wildfire debris was carried offshore in streams or rivers before it settled, accumulated and was preserved, the researchers suggest.

    The microstructure of this fossilized and partially charred bit of plant, unearthed in Poland from sediments that are almost 425 million years old, reveals that it was burnt by some of Earth’s earliest known wildfires. (Image: Ian Glasspool/Colby College)

    The discovery adds to previous evidence, including analyses of pockets of fluid trapped in halite minerals formed during the Silurian, that suggests that atmospheric oxygen during that time approached or even exceeded the 21 percent concentration seen today, the pair note.

    “The team has good evidence for charring,” says Lee Kump, a biogeochemist at Penn State who wasn’t involved in the new study. Although its evidence points to higher oxygen levels than some models suggest for that time, it’s possible that oxygen was a substantial component of the atmosphere even earlier than the Silurian, he says.

    “We can’t rule out that oxygen levels weren’t higher even further back,” says Kump. “It could be that plants from that era weren’t amenable to leaving a charcoal record.”


    Methods from weather forecasting can be adapted to assess risk of COVID-19 exposure

    Techniques used in weather forecasting can be repurposed to provide individuals with a personalized assessment of their risk of exposure to COVID-19 or other viruses, according to new research published by Caltech scientists.
    The technique has the potential to be more effective and less intrusive than blanket lockdowns for combatting the spread of disease, says Tapio Schneider, the Theodore Y. Wu Professor of Environmental Science and Engineering; senior research scientist at JPL, which Caltech manages for NASA; and the lead author of a study on the new research that was published by PLOS Computational Biology on June 23.
    “For this pandemic, it may be too late,” Schneider says, “but this is not going to be the last epidemic that we will face. This is useful for tracking other infectious diseases, too.”
    In principle, the idea is simple: Weather forecasting models ingest a lot of data — for example, measurements of wind speed and direction, temperature, and humidity from local weather stations, in addition to satellite data. They use the data to assess what the current state of the atmosphere is, forecast the weather evolution into the future, and then repeat the cycle by blending the forecast atmospheric state with new data. In the same way, disease risk assessment also harnesses various types of available data to make an assessment about an individual’s risk of exposure to or infection with disease, forecasts the spread of disease across a network of human contacts using an epidemiological model, and then repeats the cycle by blending the forecast with new data. Such assessments might use the results of an institution’s surveillance testing, data from wearable sensors, self-reported symptoms and close contacts as recorded by smartphones, and municipalities’ disease-reporting dashboards.
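    A minimal sketch of that repeated forecast-and-blend cycle, reduced to a single scalar risk value (an illustration in the spirit of data assimilation, not the study's epidemiological model; the function names and constants are invented):

```python
# One number stands in for the full state: a person's estimated exposure risk.
def forecast(risk, decay=0.9, background=0.01):
    """Propagate the current risk estimate forward with a toy dynamic model."""
    return decay * risk + background

def blend(prediction, observation, gain=0.4):
    """Combine the forecast with new data, weighted by how much the data is
    trusted (loosely analogous to a Kalman gain)."""
    return prediction + gain * (observation - prediction)

risk = 0.05                          # initial estimate of exposure risk
observations = [0.10, 0.20, 0.15]    # e.g. signals from testing or app data
for obs in observations:
    risk = blend(forecast(risk), obs)   # repeat the cycle as new data arrives
```

    The real system would replace the scalar with a state spread across a contact network and an epidemiological model in place of the toy dynamics, but the forecast-then-blend loop is the same.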
    The research presented in PLOS Computational Biology is a proof of concept. Its end result, however, would be a smartphone app that would provide an individual with a frequently updated numerical assessment (i.e., a percentage) that reflects their likelihood of having been exposed to or infected with a particular infectious disease agent, such as COVID-19.
    Such an app would be similar to existing COVID-19 exposure notification apps but more sophisticated and effective in its use of data, Schneider and his colleagues say. Those apps provide a binary exposure assessment (“yes, you have been exposed,” or, in the case of no exposure, radio silence); the new app described in the study would provide a more nuanced understanding of continually changing risks of exposure and infection as individuals come close to others and as data about infections is propagated across a continually evolving contact network.


    Self-assembled, interlocked threads: Spinning yarn with no machine needed

    The spiral is pervasive throughout the universe — from the smallest DNA molecule to ferns and sunflowers, and from fingerprints to galaxies themselves. In science the ubiquity of this structure is associated with parsimony — that things will organize themselves in the simplest or most economical way.
    Researchers from the University of Pittsburgh and Princeton University unexpectedly discovered that this principle also applies to some non-biological systems that convert chemical energy into mechanical action — allowing two-dimensional polymer sheets to rise and rotate in spiral helices without the application of external power.
    This self-assembly into coherent three-dimensional structures represents the group’s latest contribution in the field of soft robotics and chemo-mechanical systems.
    The research was published this month in Proceedings of the National Academy of Sciences (PNAS) Nexus. Lead author Raj Kumar Manna and Oleg E. Shklyaev are postdoctoral associates working with Anna Balazs, Distinguished Professor of Chemical and Petroleum Engineering and the John A. Swanson Chair of Engineering in Pitt’s Swanson School of Engineering. Contributing author Howard A. Stone is the Donald R. Dixon ’69 and Elizabeth W. Dixon Professor of Mechanical and Aerospace Engineering at Princeton.
    “Through computational modeling, we placed passive, uncoated polymer sheets around a circular, catalytic patch within a fluid-filled chamber. We added hydrogen peroxide to initiate a catalytic reaction, which then generated fluid flow. While one sheet alone did not spin in the solution, multiple sheets autonomously self-assembled into a tower-like structure,” Manna explained. “Then, as the tower experienced an instability, the sheets spontaneously formed an interweaving structure that rotates in the fluid.”
    As Balazs pointed out, “The whole thing resembles a thread of twisted yarn being formed by a rotating spindle, which was used to make fibers for weaving. Except, there is no spindle; the system naturally forms the intertwined, rotating structure.”
    Flow affects the sheet, which affects the flow


    Ultra-thin film creates vivid 3D images with large field of view

    Researchers have developed a new ultra-thin film that can create detailed 3D images viewable under normal illumination without any special reading devices. The images appear to float on top of the film and exhibit smooth parallax, which means they can be clearly viewed from all angles. With additional development, the new glass-free approach could be used as a visual security feature or incorporated into virtual or augmented reality devices.
    “Our ultra-thin, integrated reflective imaging film creates an image that can be viewed from a wide range of angles and appears to have physical depth,” said research team leader Su Shen from Soochow University in China. “It can be easily laminated to any surface as a tag or sticker or integrated into a transparent substrate, making it suitable for use as a security feature on banknotes or identity cards.”
    In the Optica Publishing Group journal Optics Letters, the researchers describe their new imaging film. At just 25 microns thick, the film is about twice as thick as household plastic wrap. It uses a technology known as light-field imaging, which captures the direction and intensity of all rays of light within a scene to create a 3D image.
    “Achieving glass-free 3D imaging with a large field of view, smooth parallax and a wide, focusable depth range under natural viewing conditions is one of the most exciting challenges in optics,” said Shen. “Our approach offers an innovative way to achieve vivid 3D images that cause no viewing discomfort or fatigue, are easy to see with the naked eye and are aesthetically pleasing.”
    High-density recording
    Various technical schemes have been investigated for creating the ideal 3D viewing experience, but they tend to suffer from drawbacks such as a limited viewing angle or low light efficiency. To overcome these shortcomings, the researchers developed a reflective light-field imaging film and new algorithm that allows both the position and angular information for the light field to be recorded with high density.
    The researchers also developed an economical self-releasing nanoimprinting lithography approach that can achieve the precision needed for high optical performance while using low-cost materials. The film is patterned with an array of reflective focusing elements on one side that act much like tiny cameras, while the other side contains a micropattern array that encodes the image to be displayed.
    “The powerful microfabrication approach we used allowed us to make a reflective focusing element that was extremely compact — measuring just tens of microns,” said Shen. “This lets the light radiance be densely collected, creating a realistic 3D effect.”
    A realistic 3D image
    The researchers demonstrated their new film by using it to create a 3D image of a cubic die that could be viewed clearly from almost any viewpoint. The resulting image measures 8 x 8 millimeters with an image depth that ranges from 0.1 to 8.0 millimeters under natural lighting conditions. They have also designed and fabricated an imaging film with a floating logo that can be used as a decorative element, for example on the back of a mobile phone.
    The researchers say that their algorithm and nanopatterning technique could be extended to other applications by creating the nanopatterns on a transparent display screen instead of a film, for example. They are also working toward commercializing the fabrication process by developing a double-sided nanoimprinting machine that would make it easier to achieve the precise alignment required between the micropatterns on each side of the film.
    Story Source:
    Materials provided by Optica. Note: Content may be edited for style and length.


    Personal health trackers may include smart face mask, other wearables

    For years, automotive companies have developed intelligent sensors to provide real-time monitoring of a vehicle’s health, including engine oil pressure, tire pressure and air-fuel mixture. Together, these sensors can provide an early warning system for a driver to identify a potential problem before it may need to be repaired.
    Now, in a similar vein for human biology, Zheng Yan, an assistant professor in the MU College of Engineering at the University of Missouri, has recently published two studies demonstrating different ways to improve wearable bioelectronic devices and materials to provide better real-time monitoring of a person’s health, including vital signs.
    Developing a ‘smart’ face mask
    The onset of the COVID-19 pandemic has brought the idea of mask-wearing to the forefront of many people’s minds. In response, one focus of Yan’s lab has been to develop breathable soft bioelectronics. He said it was natural for him and his team to come up with the idea for integrating bioelectronics in a breathable face mask, which can monitor someone’s physiological status based on the nature of the person’s cough. Their findings were recently published in ACS Nano, a journal of the American Chemical Society.
    “Different respiratory problems lead to different cough frequencies and degrees,” Yan said. “Taking chronic obstructive pulmonary disease (COPD) as an example, the frequency of cough in the early morning is higher than that in the daytime and night. Our smart face mask can effectively monitor cough frequencies, which may assist physicians with knowing disease development and providing timely, customized interventions.”
    In addition to monitoring someone’s physiological status, the mask can also help identify proper mask wearing in public places using a bioelectronic sensor, Yan said. At this time, the mask does not have the capability to provide automatic reminders, but they would like to develop that function in the future.