More stories

  • New deep learning model helps the automated screening of common eye disorders

    A new deep learning (DL) model that can identify disease-related features from images of eyes has been unveiled by a group of Tohoku University researchers. This ‘lightweight’ DL model can be trained with a small number of images, even ones with a high degree of noise, and is resource-efficient, meaning it is deployable on mobile devices.
    Details were published in the journal Scientific Reports on May 20, 2022.
    With many societies aging and medical personnel in short supply, self-monitoring and tele-screening of diseases that rely on DL models are becoming more routine. Yet deep learning algorithms are generally task-specific, built to identify or detect general objects such as humans, animals, or road signs.
    Identifying diseases, on the other hand, demands precise measurement of tumors, tissue volume, or other abnormalities. This requires a model to examine individual images and mark boundaries in a process known as segmentation. But accurate segmentation takes greater computational resources, rendering such models difficult to deploy on mobile devices.
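    To make the idea of segmentation concrete, here is a minimal sketch of a lightweight encoder-decoder network in PyTorch. It is not the Tohoku group's model (whose architecture is described in the Scientific Reports paper); the class name, layer sizes and input shape are illustrative assumptions only.

    ```python
    # Minimal, hypothetical sketch of a lightweight segmentation network.
    # Not the model from the study; it only illustrates how an encoder-decoder
    # with few parameters can output a per-pixel mask suitable for mobile use.
    import torch
    import torch.nn as nn

    class TinySegNet(nn.Module):
        def __init__(self, in_channels=3, base=8):
            super().__init__()
            self.encoder = nn.Sequential(  # downsample to coarse features
                nn.Conv2d(in_channels, base, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(  # upsample back to input resolution
                nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(base, 1, 4, stride=2, padding=1),
            )

        def forward(self, x):
            return torch.sigmoid(self.decoder(self.encoder(x)))  # per-pixel probability

    model = TinySegNet()
    print(sum(p.numel() for p in model.parameters()))   # a few thousand parameters
    mask = model(torch.rand(1, 3, 64, 64))               # mask shape: (1, 1, 64, 64)
    ```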
    “There is always a trade-off between accuracy, speed and computational resources when it comes to DL models,” says Toru Nakazawa, co-author of the study and professor at Tohoku University’s Department of Ophthalmology. “Our developed model has better segmentation accuracy and enhanced model training reproducibility, even with fewer parameters — making it efficient and more lightweight when compared to other commercial software.”
    Professor Nakazawa, Associate Professor Parmanand Sharma, Dr Takahiro Ninomiya, and students from the Department of Ophthalmology worked with Professor Takayuki Okatani from Tohoku University’s Graduate School of Information Sciences to produce the model.
    Using low-resource devices, they obtained measurements of the foveal avascular zone, a region within the fovea centralis at the center of the retina, to enhance screening for glaucoma.
    “Our model is also capable of detecting/segmenting optic discs and hemorrhages in fundus images with high precision,” added Nakazawa.
    In the future, the group hopes to deploy the lightweight model to screen for other common eye disorders and other diseases.
    Story Source:
    Materials provided by Tohoku University. Note: Content may be edited for style and length.

  • Wearable chemical sensor is as good as gold

    Researchers created a special ultrathin sensor, spun from gold, that can be attached directly to the skin without irritation or discomfort. The sensor can measure different biomarkers or substances to perform on-body chemical analysis. It works using a technique called Raman spectroscopy, where laser light aimed at the sensor is changed slightly depending on whatever chemicals are present on the skin at that point. The sensor can be finely tuned to be extremely sensitive, and is robust enough for practical use.
    Wearable technology is nothing new. Perhaps you or someone you know wears a smartwatch. Many of these can monitor certain health matters such as heart rate, but at present they cannot measure chemical signatures, which could be useful for medical diagnosis. Smartwatches and more specialized medical monitors are also relatively bulky and often quite costly. Prompted by such shortfalls, a team comprising researchers from the Department of Chemistry at the University of Tokyo sought a new way to sense various health conditions and environmental matters in a noninvasive and cost-effective manner.
    “A few years ago, I came across a fascinating method for producing robust stretchable electronic components from another research group at the University of Tokyo,” said Limei Liu, a visiting scholar at the time of the study and currently a lecturer at Yangzhou University in China. “These devices are spun from ultrafine threads coated with gold, so they can be attached to the skin without issue, as gold does not react with or irritate the skin in any way. As sensors, however, they were limited to detecting motion, and we were looking for something that could sense chemical signatures, biomarkers and drugs. So we built upon this idea and created a noninvasive sensor that exceeded our expectations and inspired us to explore ways to improve its functionality even further.”
    The main component of the sensor is the fine gold mesh. Gold is unreactive, meaning that when it comes into contact with a substance the team wishes to measure — for example, a potential disease biomarker present in sweat — it does not chemically alter that substance. At the same time, because the gold mesh is so fine, it provides a surprisingly large surface for that biomarker to bind to, and this is where the other components of the sensor come in. As a low-power laser is pointed at the gold mesh, some of the laser light is absorbed and some is reflected. Of the light reflected, most has the same energy as the incoming light. However, some incoming light loses energy to the biomarker or other measurable substance, and the discrepancy in energy between reflected and incident light is unique to the substance in question. An instrument called a spectrometer can use this unique energy fingerprint to identify the substance. This method of chemical identification is known as Raman spectroscopy.
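    That energy discrepancy is conventionally reported as a Raman shift in wavenumbers. The short sketch below shows the arithmetic only; the laser and scattered wavelengths are made-up placeholders, not values from the study.

    ```python
    # Raman shift (cm^-1) from incident and scattered wavelengths.
    # The wavelengths below are illustrative placeholders, not values from the study.
    def raman_shift_cm1(incident_nm: float, scattered_nm: float) -> float:
        # Convert nanometres to centimetres (1 nm = 1e-7 cm), then subtract wavenumbers.
        return 1.0 / (incident_nm * 1e-7) - 1.0 / (scattered_nm * 1e-7)

    # Example: a 785 nm laser with light scattered at 857 nm gives a shift of ~1070 cm^-1,
    # the kind of "energy fingerprint" a spectrometer matches to a specific substance.
    print(round(raman_shift_cm1(785.0, 857.0), 1))
    ```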
    “Currently, our sensors need to be finely tuned to detect specific substances, and we wish to push both the sensitivity and specificity even further in future,” said Assistant Professor Tinghui Xiao. “With this, we think applications like glucose monitoring, ideal for sufferers of diabetes, or even virus detection, might be possible.”
    “There is also potential for the sensor to work with other methods of chemical analysis besides Raman spectroscopy, such as electrochemical analysis, but all these ideas require a lot more investigation,” said Professor Keisuke Goda. “In any case, I hope this research can lead to a new generation of low-cost biosensors that can revolutionize health monitoring and reduce the financial burden of health care.”
    Story Source:
    Materials provided by University of Tokyo. Note: Content may be edited for style and length.

  • A new model sheds light on how we learn motor skills

    Researchers from the University of Tsukuba have developed a mathematical model of motor learning that reflects the motor learning process in the human brain. Their findings suggest that motor exploration — that is, increased variability in movements — is important when learning a new task. These results may lead to improved motor rehabilitation in patients after injury or disease.
    Even seemingly simple movements are very complex to perform, and the way we learn how to perform new movements remains unclear. Researchers from Japan have recently proposed a new model of motor learning that combines a number of different theories. A study published this month in Neural Networks revealed that their model can simulate motor learning in humans surprisingly well, paving the way for a greater understanding of how our brains work.
    For even a relatively simple task, such as reaching out and picking up an object, there are a huge number of potential combinations of angles between your body and the different joints that are involved. The same goes for each of your muscles — there is an almost endless combination of muscles and forces that can be used together to perform an action. With all of these possible combinations of joints and muscles — not to mention the underlying neuronal activity — how do we ever learn to make any movements at all? Researchers at the University of Tsukuba aimed to address this question.
    The research team first created a mathematical model to imitate the learning process that occurs for new motor tasks. They designed the model to reflect many of the processes that are thought to occur in the brain when a new skill is learned. The researchers then tested their model by attempting to simulate the results of three recent studies that were conducted in humans, in which individuals were asked to perform completely new motor tasks.
    “We were surprised at how well our simulations managed to reproduce many of the results of previous studies in humans,” says Professor Jun Izawa, senior author of the study. “With our model, we were able to bridge the gap between a number of different proposed mechanisms of motor learning, such as motor exploration, redundancy solving, and error-based learning.”
    In their model, larger amounts of motor exploration — that is, variability in movements — were found to help with the learning of sensitivity derivatives, which measure how commands from the brain affect motor error. In this way, errors were transformed into motor corrections.
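    As a rough illustration of that idea, the toy sketch below (not the published model; the gain, learning rates and exploration level are invented) shows how added variability lets a learner estimate a sensitivity derivative by finite differences and then convert errors into corrections.

    ```python
    # Toy 1-D sketch of exploration-aided error-based learning (not the paper's model).
    # Exploration noise lets the learner estimate the sensitivity derivative
    # d(error)/d(command), which is then used to turn observed errors into corrections.
    import random

    TRUE_GAIN = 2.0     # unknown mapping from motor command to movement outcome
    TARGET = 1.0        # desired outcome

    def error(command):
        return TRUE_GAIN * command - TARGET

    command = 0.0
    sensitivity = 0.0   # running estimate of d(error)/d(command)
    exploration = 0.3   # more exploration -> better sensitivity estimates

    for trial in range(200):
        noise = random.gauss(0.0, exploration)        # motor exploration
        baseline_error = error(command)
        explored_error = error(command + noise)
        if abs(noise) > 1e-6:
            # finite-difference estimate of the sensitivity derivative
            estimate = (explored_error - baseline_error) / noise
            sensitivity += 0.1 * (estimate - sensitivity)
        if abs(sensitivity) > 1e-6:
            # error-based correction: adjust the command against the error
            command -= 0.1 * baseline_error / sensitivity

    print(round(command, 3))   # approaches TARGET / TRUE_GAIN = 0.5
    ```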
    “Our success at simulating real results from human studies was encouraging,” explains first author Lucas Rebelo Dal’Bello. “It suggests that our proposed learning mechanism might accurately reflect what occurs in the brain during motor learning.”
    The findings of this study, which indicate the importance of motor exploration in motor learning, provide insights into how motor learning might occur in the human brain. They also suggest that motor exploration should be encouraged when a new motor task is being learned; this may be helpful for motor rehabilitation after injury or disease.
    This work was supported by KAKENHI (Scientific Research on Innovative Areas 19H04977 and 19H05729). LD was supported by a Japanese Government (Monbukagakusho: MEXT) Scholarship.
    Story Source:
    Materials provided by University of Tsukuba. Note: Content may be edited for style and length.

  • Methods from weather forecasting can be adapted to assess risk of COVID-19 exposure

    Techniques used in weather forecasting can be repurposed to provide individuals with a personalized assessment of their risk of exposure to COVID-19 or other viruses, according to new research published by Caltech scientists.
    The technique has the potential to be more effective and less intrusive than blanket lockdowns for combatting the spread of disease, says Tapio Schneider, the Theodore Y. Wu Professor of Environmental Science and Engineering; senior research scientist at JPL, which Caltech manages for NASA; and the lead author of a study on the new research that was published by PLOS Computational Biology on June 23.
    “For this pandemic, it may be too late,” Schneider says, “but this is not going to be the last epidemic that we will face. This is useful for tracking other infectious diseases, too.”
    In principle, the idea is simple: Weather forecasting models ingest a lot of data — for example, measurements of wind speed and direction, temperature, and humidity from local weather stations, in addition to satellite data. They use the data to assess what the current state of the atmosphere is, forecast the weather evolution into the future, and then repeat the cycle by blending the forecast atmospheric state with new data. In the same way, disease risk assessment also harnesses various types of available data to make an assessment about an individual’s risk of exposure to or infection with disease, forecasts the spread of disease across a network of human contacts using an epidemiological model, and then repeats the cycle by blending the forecast with new data. Such assessments might use the results of an institution’s surveillance testing, data from wearable sensors, self-reported symptoms and close contacts as recorded by smartphones, and municipalities’ disease-reporting dashboards.
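    In very simplified form, that forecast-and-blend cycle can be written down as a scalar filter. The sketch below is only an illustration of the cycle; the study itself propagates risk over a contact network with an epidemiological model, and every number and function here is a hypothetical placeholder.

    ```python
    # Highly simplified forecast/update cycle for an individual's infection risk.
    # All numbers and the "model" are hypothetical placeholders, not the study's method.
    def forecast(risk, contact_exposure=0.1):
        # crude model step: risk grows with recent exposure through contacts
        return min(1.0, risk + contact_exposure * (1.0 - risk))

    def update(prior_risk, test_positive, sensitivity=0.9, false_positive=0.05):
        # Bayesian blend of the forecast with a new, imperfect test result
        p_if_infected = sensitivity if test_positive else 1.0 - sensitivity
        p_if_healthy = false_positive if test_positive else 1.0 - false_positive
        numerator = p_if_infected * prior_risk
        return numerator / (numerator + p_if_healthy * (1.0 - prior_risk))

    risk = 0.02                               # initial assessed risk
    for day, test in enumerate([None, None, True, None, False]):
        risk = forecast(risk)                 # step the model forward one day
        if test is not None:
            risk = update(risk, test)         # blend forecast with new data
        print(day, round(risk, 3))
    ```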
    The research presented in PLOS Computational Biology is a proof of concept. Its envisioned end result, however, is a smartphone app that would provide an individual with a frequently updated numerical assessment (i.e., a percentage) reflecting their likelihood of having been exposed to or infected with a particular infectious disease agent, such as COVID-19.
    Such an app would be similar to existing COVID-19 exposure notification apps but more sophisticated and effective in its use of data, Schneider and his colleagues say. Those apps provide a binary exposure assessment (“yes, you have been exposed,” or, in the case of no exposure, radio silence); the new app described in the study would provide a more nuanced understanding of continually changing risks of exposure and infection as individuals come close to others and as data about infections is propagated across a continually evolving contact network.

  • Self-assembled, interlocked threads: Spinning yarn with no machine needed

    The spiral is pervasive throughout the universe — from the smallest DNA molecule to ferns and sunflowers, and from fingerprints to galaxies themselves. In science, the ubiquity of this structure is associated with parsimony — the idea that things will organize themselves in the simplest or most economical way.
    Researchers from the University of Pittsburgh and Princeton University unexpectedly discovered that this principle also applies to some non-biological systems that convert chemical energy into mechanical action — allowing two-dimensional polymer sheets to rise and rotate in spiral helices without the application of external power.
    This self-assembly into coherent three-dimensional structures represents the group’s latest contribution in the field of soft robotics and chemo-mechanical systems.
    The research was published this month in Proceedings of the National Academy of Sciences (PNAS) Nexus. The lead author is Raj Kumar Manna, who, together with Oleg E. Shklyaev, is a post-doctoral associate working with Anna Balazs, Distinguished Professor of Chemical and Petroleum Engineering and the John A. Swanson Chair of Engineering in Pitt’s Swanson School of Engineering. The contributing author is Howard A. Stone, the Donald R. Dixon ’69 and Elizabeth W. Dixon Professor of Mechanical and Aerospace Engineering at Princeton.
    “Through computational modeling, we placed passive, uncoated polymer sheets around a circular, catalytic patch within a fluid-filled chamber. We added hydrogen peroxide to initiate a catalytic reaction, which then generated fluid flow. While one sheet alone did not spin in the solution, multiple sheets autonomously self-assembled into a tower-like structure,” Manna explained. “Then, as the tower experienced an instability, the sheets spontaneously formed an interweaving structure that rotates in the fluid.”
    As Balazs pointed out, “The whole thing resembles a thread of twisted yarn being formed by a rotating spindle, which was used to make fibers for weaving. Except, there is no spindle; the system naturally forms the intertwined, rotating structure.”
    Flow affects the sheet, which affects the flow

  • Ultra-thin film creates vivid 3D images with large field of view

    Researchers have developed a new ultra-thin film that can create detailed 3D images viewable under normal illumination without any special reading devices. The images appear to float on top of the film and exhibit smooth parallax, which means they can be clearly viewed from all angles. With additional development, the new glass-free approach could be used as a visual security feature or incorporated into virtual or augmented reality devices.
    “Our ultra-thin, integrated reflective imaging film creates an image that can be viewed from a wide range of angles and appears to have physical depth,” said research team leader Su Shen from Soochow University in China. “It can be easily laminated to any surface as a tag or sticker or integrated into a transparent substrate, making it suitable for use as a security feature on banknotes or identity cards.”
    In the Optica Publishing Group journal Optics Letters, the researchers describe their new imaging film. At just 25 microns thick, the film is about twice as thick as household plastic wrap. It uses a technology known as light-field imaging, which captures the direction and intensity of all rays of light within a scene to create a 3D image.
    “Achieving glass-free 3D imaging with a large field of view, smooth parallax and a wide, focusable depth range under natural viewing conditions is one of the most exciting challenges in optics,” said Shen. “Our approach offers an innovative way to achieve vivid 3D images that cause no viewing discomfort or fatigue, are easy to see with the naked eye and are aesthetically pleasing.”
    High-density recording
    Various technical schemes have been investigated for creating the ideal 3D viewing experience, but they tend to suffer from drawbacks such as a limited viewing angle or low light efficiency. To overcome these shortcomings, the researchers developed a reflective light-field imaging film and a new algorithm that allow both the position and angular information of the light field to be recorded with high density.
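    A light field stores both where a ray hits the film and which direction it travels. The sketch below is only a schematic of that four-dimensional indexing; the array sizes are invented and this is not the researchers' recording algorithm.

    ```python
    # Schematic of light-field sampling: intensity indexed by position (x, y) and
    # viewing direction (u, v). Array sizes are invented for illustration.
    import numpy as np

    positions = (64, 64)    # spatial samples across the film (hypothetical)
    angles = (8, 8)         # angular samples per focusing element (hypothetical)

    # 4-D light field L(x, y, u, v): one intensity per position and direction.
    light_field = np.zeros(positions + angles)

    # Rendering one view of the encoded 3-D image amounts to picking one direction
    # for every position, which is why the image shifts smoothly with viewing angle.
    view = light_field[:, :, 3, 4]
    print(view.shape)       # (64, 64)
    ```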
    The researchers also developed an economical self-releasing nanoimprinting lithography approach that can achieve the precision needed for high optical performance while using low-cost materials. The film is patterned with an array of reflective focusing elements on one side that act much like tiny cameras, while the other side contains a micropattern array that encodes the image to be displayed.
    “The powerful microfabrication approach we used allowed us to make a reflective focusing element that was extremely compact — measuring just tens of microns,” said Shen. “This lets the light radiance be densely collected, creating a realistic 3D effect.”
    A realistic 3D image
    The researchers demonstrated their new film by using it to create a 3D image of a cubic die that could be viewed clearly from almost any viewpoint. The resulting image measures 8 x 8 millimeters with an image depth that ranges from 0.1 to 8.0 millimeters under natural lighting conditions. They have also designed and fabricated an imaging film with a floating logo that can be used as a decorative element, for example on the back of a mobile phone.
    The researchers say that their algorithm and nanopatterning technique could be extended to other applications by creating the nanopatterns on a transparent display screen instead of a film, for example. They are also working toward commercializing the fabrication process by developing a double-sided nanoimprinting machine that would make it easier to achieve the precise alignment required between the micropatterns on each side of the film.
    Story Source:
    Materials provided by Optica. Note: Content may be edited for style and length.

  • Personal health trackers may include smart face mask, other wearables

    For years, automotive companies have developed intelligent sensors to provide real-time monitoring of a vehicle’s health, including engine oil pressure, tire pressure and air-fuel mixture. Together, these sensors can provide an early warning system for a driver to identify a potential problem before it may need to be repaired.
    Now, in a similar vein but for human biology, Zheng Yan, an assistant professor in the MU College of Engineering at the University of Missouri, has recently published two studies demonstrating different ways to improve wearable bioelectronic devices and materials to provide better real-time monitoring of a person’s health, including vital signs.
    Developing a ‘smart’ face mask
    The onset of the COVID-19 pandemic has brought the idea of mask-wearing to the forefront of many people’s minds. In response, one focus of Yan’s lab has been to develop breathable soft bioelectronics. He said it was natural for him and his team to come up with the idea for integrating bioelectronics in a breathable face mask, which can monitor someone’s physiological status based on the nature of the person’s cough. Their findings were recently published in ACS Nano, a journal of the American Chemical Society.
    “Different respiratory problems lead to different cough frequencies and degrees,” Yan said. “Taking chronic obstructive pulmonary disease (COPD) as an example, the frequency of cough in the early morning is higher than that in the daytime and night. Our smart face mask can effectively monitor cough frequencies, which may assist physicians with knowing disease development and providing timely, customized interventions.”
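    Counting cough events per time window, as described above, can be sketched in a few lines. The timestamps below are made up, and the mask's actual sensing and signal processing are not described at this level of detail in the article.

    ```python
    # Toy sketch: cough frequency per hour from detected-cough timestamps.
    # Timestamps are invented; the mask's real signal processing is not shown here.
    from collections import Counter
    from datetime import datetime

    cough_times = [                       # hypothetical detections from the sensor
        "2022-05-20 05:10", "2022-05-20 05:25", "2022-05-20 05:40",
        "2022-05-20 13:05", "2022-05-20 21:30",
    ]

    per_hour = Counter(datetime.fromisoformat(t).hour for t in cough_times)
    print(dict(per_hour))                 # {5: 3, 13: 1, 21: 1} -> early-morning peak
    ```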
    In addition to monitoring someone’s physiological status, the mask can also help identify proper mask wearing in public places using a bioelectronic sensor, Yan said. At this time, the mask does not have the capability to provide automatic reminders, but they would like to develop that function in the future.

  • Are babies the key to the next generation of artificial intelligence?

    Babies can help unlock the next generation of artificial intelligence (AI), according to Trinity College neuroscientists and colleagues who have just published new guiding principles for improving AI.
    The research, published today in the journal Nature Machine Intelligence, examines the neuroscience and psychology of infant learning and distills three principles to guide the next generation of AI, which will help overcome the most pressing limitations of machine learning.
    Dr Lorijn Zaadnoordijk, Marie Skłodowska-Curie Research Fellow at Trinity College explained:
    “Artificial intelligence (AI) has made tremendous progress in the last decade, giving us smart speakers, autopilots in cars, ever-smarter apps, and enhanced medical diagnosis. These exciting developments in AI have been achieved thanks to machine learning, which uses enormous datasets to train artificial neural network models. However, progress is stalling in many areas because the datasets that machines learn from must be painstakingly curated by humans. But we know that learning can be done much more efficiently, because infants don’t learn this way! They learn by experiencing the world around them, sometimes even by seeing something just once.”
    In their article “Lessons from infant learning for unsupervised machine learning,” Dr Lorijn Zaadnoordijk and Professor Rhodri Cusack, from the Trinity College Institute of Neuroscience, and Dr Tarek R. Besold from TU Eindhoven, the Netherlands, argue that better ways to learn from unstructured data are needed. For the first time, they make concrete proposals about which particular insights from infant learning can be fruitfully applied in machine learning and exactly how to apply them.
    Machines, they say, will need in-built preferences to shape their learning from the beginning. They will need to learn from richer datasets that capture how the world looks, sounds, smells, tastes and feels. And, like infants, they will need to have a developmental trajectory in which experiences and networks change as they “grow up.”
    Dr. Tarek R. Besold, Researcher, Philosophy & Ethics group at TU Eindhoven, said:
    “As AI researchers we often draw metaphorical parallels between our systems and the mental development of human babies and children. It is high time to take these analogies more seriously and look at the rich knowledge of infant development from psychology and neuroscience, which may help us overcome the most pressing limitations of machine learning.”
    Professor Rhodri Cusack, The Thomas Mitchell Professor of Cognitive Neuroscience, Director of Trinity College Institute of Neuroscience, added:
    “Artificial neural networks were in part inspired by the brain. Similar to infants, they rely on learning, but current implementations are very different from human (and animal) learning. Through interdisciplinary research, babies can help unlock the next generation of AI.”
    Story Source:
    Materials provided by Trinity College Dublin. Note: Content may be edited for style and length.