More stories

  • Impenetrable optical OTP security platform

    Researchers have proposed an anticounterfeiting smart label and security platform that makes forgery fundamentally impossible. The device accomplishes this by controlling multiple properties of light, including color, phase, and polarization, within a single optical device.
    A POSTECH research team — led by Professor Junsuk Rho of the departments of mechanical engineering and chemical engineering, Dr. Inki Kim, and Ph.D. candidates Jaehyuck Jang and Gyeongtae Kim — has developed an encrypted hologram printing platform that works under both natural light and laser light using a metasurface, an ultra-thin optical material about one-thousandth the thickness of a human hair. A label printed with this technology produces a holographic color image that retains a specific polarization, which the researchers call a “vectorial hologram.” The findings were recently published in Nature Communications.
    Metasurface devices reported so far can modulate only one property of light, such as color, phase, or polarization. To overcome this limitation, the researchers devised a pixelated bifunctional metasurface by grouping multiple metasurface elements into each pixel.
    In the unit structure that forms the basis of the metasurface, the research team designed a device that uses the structure’s size to control color and its orientation angle to control phase, while the relative angle difference and the ratio of elements within each pixelized group, which generate left-handed and right-handed circularly polarized light, express every polarization of light. To modulate these degrees of freedom freely while maximizing efficiency, the metasurface acts as both a resonator and an optical waveguide.
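    The article does not spell out how orientation angle maps to phase. In geometric-phase (Pancharatnam-Berry) metasurfaces of the kind described here, the imparted phase is commonly twice the meta-atom's rotation angle, with opposite sign for the two circular polarizations. The relation below is an illustrative assumption rather than a detail taken from the paper.

    ```latex
    % Assumed geometric-phase relation (not stated in the article):
    % a meta-atom rotated by an angle \theta imparts a phase
    \[
      \phi_{\pm} = \pm\, 2\theta
    \]
    % where the +/- signs correspond to left- and right-handed
    % circularly polarized incident light.
    ```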
    The vectorial hologram label designed in this manner displays colorful QR codes to the naked eye or when scanned with a camera. Simultaneously, under laser illumination, it renders polarization-encoded 3D holographic images. Each part of the holographic image carries a specific polarization state, which sets it apart from previously reported holograms.
    The vectorial holographic color printing technology developed in this research is an optical implementation of a two-level encrypted one-time password (OTP) mechanism, of the kind used to verify users in current banking systems. First, when a user scans the QR code on the meta-optical device with a smartphone, a first password composed of random numbers is generated. When this password is applied to the meta-optical device as a voltage value, a secondary password is displayed as an encrypted holographic image.
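    As a minimal, purely illustrative sketch of that two-step flow, the logic might be organized as below in Python. The function names, the way a voltage is derived from the first password, and the holographic readout are all assumptions, not details from the article.

    ```python
    import secrets

    def generate_first_password(qr_payload: str, digits: int = 6) -> str:
        """Step 1 (hypothetical): scanning the label's QR code triggers
        generation of a random numeric one-time password."""
        return "".join(str(secrets.randbelow(10)) for _ in range(digits))

    def drive_metasurface(first_password: str) -> float:
        """Step 2 (hypothetical): the first password is converted into a
        voltage applied to the meta-optical device; the actual mapping is
        not described in the article."""
        return (int(first_password) % 100) / 10.0  # e.g. 0.0-9.9 V

    def read_holographic_password(voltage: float) -> str:
        """Step 3 (hypothetical): under laser illumination the device renders
        a polarization-encoded hologram carrying the second password; this
        string stands in for that optical readout."""
        return f"HOLO-{voltage:.1f}"

    # Illustrative flow only -- not the authors' implementation.
    otp1 = generate_first_password("label-qr-payload")
    otp2 = read_holographic_password(drive_metasurface(otp1))
    print(otp1, otp2)
    ```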
    “This vectorial holographic color printing platform is more advanced than the metasurface devices reported so far, and it demonstrates that multiple degrees of freedom of light can be modulated with one optical device,” explained Professor Junsuk Rho. “It is a highly refined optical OTP device that shows promise as an original optical encryption technology applicable to designing and analyzing meta-atoms.”
    The research team has been conducting leading research on metasurface optical devices for the past five years, and the device developed in this work shows strong potential for commercialization in optical sensors, holographic displays, and security and anticounterfeiting applications.
    This study was supported by a grant from the Samsung Research Funding & Incubation Center for Future Technology, funded by Samsung Electronics.
    Story Source:
    Materials provided by Pohang University of Science & Technology (POSTECH). Note: Content may be edited for style and length.

  • Deep learning model classifies brain tumors with single MRI scan

    A team of researchers at Washington University School of Medicine has developed a deep learning model capable of classifying a brain tumor as one of six common types using a single 3D MRI scan, according to a study published in Radiology: Artificial Intelligence.
    “This is the first study to address the most common intracranial tumors and to directly determine the tumor class or the absence of tumor from a 3D MRI volume,” said Satrajit Chakrabarty, M.S., a doctoral student under the direction of Aristeidis Sotiras, Ph.D., and Daniel Marcus, Ph.D., in Mallinckrodt Institute of Radiology’s Computational Imaging Lab at Washington University School of Medicine in St. Louis, Missouri.
    The six most common intracranial tumor types are high-grade glioma, low-grade glioma, brain metastases, meningioma, pituitary adenoma and acoustic neuroma. Each tumor type in the dataset was confirmed through histopathology, which requires surgically removing tissue from the site of a suspected cancer and examining it under a microscope.
    According to Chakrabarty, machine and deep learning approaches using MRI data could potentially automate the detection and classification of brain tumors.
    “Non-invasive MRI may be used as a complement, or in some cases, as an alternative to histopathologic examination,” he said.
    To build their machine learning model, called a convolutional neural network, Chakrabarty and researchers from Mallinckrodt Institute of Radiology developed a large, multi-institutional dataset of intracranial 3D MRI scans from four publicly available sources. In addition to the institution’s own internal data, the team obtained pre-operative, post-contrast T1-weighted MRI scans from the Brain Tumor Image Segmentation (BraTS) challenge, The Cancer Genome Atlas Glioblastoma Multiforme, and The Cancer Genome Atlas Low Grade Glioma collections.
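    The article does not describe the network architecture. As a rough illustration of the kind of model it refers to (a convolutional neural network that maps a single 3D post-contrast T1-weighted volume to one of six tumor classes or “no tumor”), a minimal PyTorch sketch could look like the following; the layer sizes and input shape are assumptions, not the study’s actual design.

    ```python
    import torch
    import torch.nn as nn

    class Simple3DTumorClassifier(nn.Module):
        """Toy 3D CNN: one post-contrast T1-weighted MRI volume in,
        7 class scores out (6 tumor types + 'no tumor').
        Architecture details are illustrative, not from the study."""
        def __init__(self, n_classes: int = 7):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),          # global pooling over the volume
            )
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, x):                      # x: (batch, 1, depth, height, width)
            return self.classifier(self.features(x).flatten(1))

    # Example forward pass on a dummy 128^3 volume (input size assumed).
    model = Simple3DTumorClassifier()
    logits = model(torch.randn(1, 1, 128, 128, 128))
    predicted_class = logits.argmax(dim=1)
    ```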

  • A novel virtual reality technology to make MRI a new experience

    Researchers from King’s College London have created a novel interactive VR system to be used by patients while undergoing an MRI scan.
    In a new paper published in Scientific Reports, the researchers say they hope this advancement will make it easier for those who find having an MRI scan challenging, such as children, people with cognitive difficulties, or those who suffer from claustrophobia or anxiety.
    In normal circumstances, MRI scans fail in up to 50 percent of children under 5 years of age, which means that hospitals often rely on sedative medication or even anesthesia to get children successfully scanned.
    These measures are time consuming and expensive and carry their own risks. From a neuroscience point of view, it also means that MRI-based studies of brain function in these vulnerable populations are generally conducted during an artificially induced sleep state, so the results may not be representative of how the brain works in normal circumstances.
    Lead researcher Dr Kun Qian from the School of Biomedical Engineering & Imaging Sciences at King’s College London said having an MRI scan can be quite an alien experience as it involves going into a narrow tunnel, with loud and often strange noises in the background, all while having to stay as still as possible.
    “We were keen to find other ways of enabling children and vulnerable people to have an MRI scan,” Dr Qian said.

  • System trains drones to fly around obstacles at high speeds

    If you follow autonomous drone racing, you likely remember the crashes as much as the wins. In drone racing, teams compete to see which vehicle is better trained to fly fastest through an obstacle course. But the faster drones fly, the more unstable they become, and at high speeds their aerodynamics can be too complicated to predict. Crashes, therefore, are a common and often spectacular occurrence.
    But if they can be pushed to be faster and more nimble, drones could be put to use in time-critical operations beyond the race course, for instance to search for survivors in a natural disaster.
    Now, aerospace engineers at MIT have devised an algorithm that helps drones find the fastest route around obstacles without crashing. The new algorithm combines simulations of a drone flying through a virtual obstacle course with data from experiments of a real drone flying through the same course in a physical space.
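    The article does not detail how the simulated and experimental data are combined. As a purely hypothetical sketch of the general idea it describes (use a handful of real flights to correct the lap times a simulator predicts, then rank candidate trajectories by the corrected estimate), one might write something like the following, with a simple regression-based correction standing in for the MIT team's actual algorithm:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    def simulate_lap_time(segment_speeds: np.ndarray) -> float:
        """Stand-in simulator: faster segments give a shorter predicted lap time."""
        return float(np.sum(1.0 / segment_speeds))

    # Pretend we flew 5 candidate trajectories for real and timed them (made-up data).
    flown = [np.random.uniform(2.0, 8.0, size=10) for _ in range(5)]
    sim_times = np.array([[simulate_lap_time(s)] for s in flown])
    real_times = sim_times[:, 0] * 1.15 + np.random.normal(0.0, 0.05, size=5)

    # Learn a correction from simulated to real lap times ("fill in" what the
    # simulator misses, e.g. high-speed aerodynamics).
    correction = LinearRegression().fit(sim_times, real_times)

    # Rank new candidate trajectories by their corrected (predicted real) lap time.
    candidates = [np.random.uniform(2.0, 8.0, size=10) for _ in range(20)]
    predicted_real = correction.predict(
        np.array([[simulate_lap_time(s)] for s in candidates])
    )
    best_trajectory = candidates[int(np.argmin(predicted_real))]
    ```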
    The researchers found that a drone trained with their algorithm flew through a simple obstacle course up to 20 percent faster than a drone trained on conventional planning algorithms. Interestingly, the new algorithm didn’t always keep a drone ahead of its competitor throughout the course. In some cases, it chose to slow a drone down to handle a tricky curve, or save its energy in order to speed up and ultimately overtake its rival.
    “At high speeds, there are intricate aerodynamics that are hard to simulate, so we use experiments in the real world to fill in those black holes to find, for instance, that it might be better to slow down first to be faster later,” says Ezra Tal, a graduate student in MIT’s Department of Aeronautics and Astronautics. “It’s this holistic approach we use to see how we can make a trajectory overall as fast as possible.”
    “These kinds of algorithms are a very valuable step toward enabling future drones that can navigate complex environments very fast,” adds Sertac Karaman, associate professor of aeronautics and astronautics, and director of the Laboratory for Information and Decision Systems at MIT. “We are really hoping to push the limits in a way that they can travel as fast as their physical limits will allow.”
    Tal, Karaman, and MIT graduate student Gilhyun Ryou have published their results in the International Journal of Robotics Research.

  • Researchers use artificial intelligence to unlock extreme weather mysteries

    From lake-draining drought in California to bridge-breaking floods in China, extreme weather is wreaking havoc. Preparing for weather extremes in a changing climate remains a challenge, however, because their causes are complex and their response to global warming is often not well understood. Now, Stanford researchers have developed a machine learning tool to identify conditions for extreme precipitation events in the Midwest, which account for over half of all major U.S. flood disasters. Published in Geophysical Research Letters, their approach is one of the first to use AI to analyze the causes of long-term changes in extreme events, and it could help make projections of such events more accurate.
    “We know that flooding has been getting worse,” said study lead author Frances Davenport, a PhD student in Earth system science in Stanford’s School of Earth, Energy & Environmental Sciences (Stanford Earth). “Our goal was to understand why extreme precipitation is increasing, which in turn could lead to better predictions about future flooding.”
    Among other impacts, global warming is expected to drive heavier rain and snowfall by creating a warmer atmosphere that can hold more moisture. Scientists hypothesize that climate change may affect precipitation in other ways, too, such as changing when and where storms occur. Revealing these impacts has remained difficult, however, in part because global climate models do not necessarily have the spatial resolution to model these regional extreme events.
    “This new approach to leveraging machine learning techniques is opening new avenues in our understanding of the underlying causes of changing extremes,” said study co-author Noah Diffenbaugh, the Kara J Foundation Professor in the School of Earth, Energy & Environmental Sciences. “That could enable communities and decision makers to better prepare for high-impact events, such as those that are so extreme that they fall outside of our historical experience.”
    Davenport and Diffenbaugh focused on the upper Mississippi watershed and the eastern part of the Missouri watershed. The highly flood-prone region, which spans parts of nine states, has seen extreme precipitation days and major floods become more frequent in recent decades. The researchers started by using publicly available climate data to calculate the number of extreme precipitation days in the region from 1981 to 2019. Then they trained a machine learning algorithm designed for analyzing grid data, such as images, to identify large-scale atmospheric circulation patterns associated with extreme precipitation (above the 95th percentile).
    “The algorithm we use correctly identifies over 90 percent of the extreme precipitation days, which is higher than the performance of traditional statistical methods that we tested,” Davenport said.
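    Neither the article nor the quote above specifies the data handling or the network. An illustrative sketch of the approach described (label each day as extreme or not using the 95th-percentile precipitation threshold, then train a small convolutional network on gridded atmospheric circulation fields to recognize the extreme days) might fit together roughly as below; the grid sizes, stand-in data, and layer choices are assumptions.

    ```python
    import numpy as np
    import torch
    import torch.nn as nn

    days, lat, lon = 1000, 32, 64                      # assumed grid dimensions
    precip = np.random.gamma(2.0, 2.0, size=days)      # stand-in daily regional precipitation
    circulation = np.random.randn(days, 1, lat, lon)   # stand-in circulation fields (e.g. pressure)

    threshold = np.percentile(precip, 95)              # 95th-percentile definition of "extreme"
    labels = (precip > threshold).astype(np.float32)   # 1 = extreme precipitation day

    classifier = nn.Sequential(                        # toy CNN over the lat-lon grid
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
    )
    logits = classifier(torch.from_numpy(circulation).float())
    probabilities = torch.sigmoid(logits.squeeze(1))   # probability each day is extreme

    # One training objective evaluation (no training loop shown here).
    loss = nn.BCEWithLogitsLoss()(logits.squeeze(1), torch.from_numpy(labels))
    ```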
    The trained machine learning algorithm revealed that multiple factors are responsible for the recent increase in Midwest extreme precipitation. During the 21st century, the atmospheric pressure patterns that lead to extreme Midwest precipitation have become more frequent, increasing at a rate of about one additional day per year, although the researchers note that the changes are much weaker going back further in time to the 1980s.
    However, the researchers found that when these atmospheric pressure patterns do occur, the amount of precipitation that results has clearly increased. As a result, days with these conditions are more likely to have extreme precipitation now than they did in the past. Davenport and Diffenbaugh also found that increases in the precipitation intensity on these days were associated with higher atmospheric moisture flows from the Gulf of Mexico into the Midwest, bringing the water necessary for heavy rainfall in the region.
    The researchers hope to extend their approach to look at how these different factors will affect extreme precipitation in the future. They also envision redeploying the tool to focus on other regions and types of extreme events, and to analyze distinct extreme precipitation causes, such as weather fronts or tropical cyclones. These applications will help further parse climate change’s connections to extreme weather.
    “While we focused on the Midwest initially, our approach can be applied to other regions and used to understand changes in extreme events more broadly,” said Davenport. “This will help society better prepare for the impacts of climate change.”
    Story Source:
    Materials provided by Stanford University. Original written by Rob Jordan. Note: Content may be edited for style and length.

  • Researchers develop real-time lyric generation technology to inspire song writing

    Music artists can find inspiration and new creative directions for their song writing with technology developed by Waterloo researchers.
    LyricJam, a real-time system that uses artificial intelligence (AI) to generate lyric lines for live instrumental music, was created by members of the University’s Natural Language Processing Lab.
    The lab, led by Olga Vechtomova, a Waterloo Engineering professor cross-appointed in Computer Science, has been researching creative applications of AI for several years.
    The lab’s initial work led to the creation of a system that learns musical expressions of artists and generates lyrics in their style.
    Recently, Vechtomova, along with Waterloo graduate students Gaurav Sahu and Dhruv Kumar, developed technology that relies on various aspects of music such as chord progressions, tempo and instrumentation to synthesize lyrics reflecting the mood and emotions expressed by live music.
    As a musician or a band plays instrumental music, the system continuously receives the raw audio clips, which the neural network processes to generate new lyric lines. The artists can then use the lines to compose their own song lyrics.
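    The article does not describe LyricJam's components. As a purely hypothetical sketch of the flow it outlines (live audio clips in, audio features extracted, lyric lines generated by a model conditioned on those features), the pipeline could be organized as below; the feature extraction and the generation step are placeholders, not the lab's actual models.

    ```python
    import numpy as np

    def extract_features(audio_clip: np.ndarray, sample_rate: int = 22050) -> np.ndarray:
        """Placeholder for spectrogram/embedding extraction from a raw audio clip."""
        spectrum = np.abs(np.fft.rfft(audio_clip))
        return spectrum[:128] / (spectrum.max() + 1e-9)   # crude fixed-size feature vector

    def generate_lyric_line(features: np.ndarray) -> str:
        """Placeholder for a neural text generator conditioned on audio features."""
        mood = "slow and wistful" if features.mean() < 0.1 else "bright and driving"
        return f"(a lyric line conditioned on {mood} music would be generated here)"

    # Simulate a live stream of short instrumental clips.
    for _ in range(3):
        clip = np.random.randn(22050)                     # one second of stand-in audio
        print(generate_lyric_line(extract_features(clip)))
    ```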

  • Natural language processing research: Signed languages

    Advancements in natural language processing (NLP) enable computers to understand what humans say and help people communicate through tools like machine translation, voice-controlled assistants and chatbots.
    But NLP research often only focuses on spoken languages, excluding the more than 200 signed languages around the world and the roughly 70 million people who might rely on them to communicate.
    Kayo Yin, a master’s student in the Language Technologies Institute, wants that to change. Yin co-authored a paper that called for NLP research to include signed languages.
    “Signed languages, even though they are a significant part of the languages used in the world, aren’t included,” Yin said. “There is a demand and an importance in having technology that can handle signed languages.”
    The paper, “Including Signed Languages in Natural Language Processing,” won the Best Theme Paper award at this month’s 59th Annual Meeting of the Association for Computational Linguistics. Yin’s co-authors included Amit Moryossef of Bar-Ilan University in Israel; Julie Hochgesang of Gallaudet University; Yoav Goldberg of Bar-Ilan University and the Allen Institute for AI; and Malihe Alikhani of the University of Pittsburgh’s School of Computing and Information.
    The authors wrote that communities relying on signed language have fought for decades both to learn and use those languages, and for them to be recognized as legitimate.

  • Physical activity protects children from the adverse effects of digital media on their weight later in adolescence

    Children’s heavy use of digital media is associated with a heightened risk of being overweight later in adolescence, but physical activity appears to protect children from this adverse effect.
    A recently completed study shows that six hours of leisure-time physical activity per week at the age of 11 reduces the risk, associated with heavy digital media use, of being overweight at 14 years of age.
    Obesity in children and adolescents is one of the most significant health-related challenges globally. A study carried out by the Folkhälsan Research Center and the University of Helsinki investigated whether a link exists between the digital media use of Finnish school-age children and the risk of being overweight later in adolescence. In addition, the study looked into whether children’s physical activity has an effect on this potential link.
    The results were published in the Journal of Physical Activity and Health.
    More than six hours of physical activity per week appears to reverse adverse effects of screen time
    The study involved 4,661 children from the Finnish Health in Teens (Fin-HIT) study. The participating children reported how much time they spent on sedentary digital media use and physical activity outside school hours. The study demonstrated that heavy use of digital media at 11 years of age was associated with a heightened risk of being overweight at 14 years of age in children who reported engaging in under six hours per week of physical activity in their leisure time. In children who reported being physically active for six or more hours per week, such a link was not observed.
    The study also took into account other factors potentially affecting obesity, such as childhood eating habits and amount of sleep, as well as the amount of digital media use and physical activity in adolescence. Even after accounting for these confounding factors, the protective role of childhood physical activity in the link between childhood digital media use and being overweight later in life was confirmed.
    Activity according to recommendations
    “The effect of physical activity on the association between digital media use and being overweight has not been extensively investigated in follow-up studies so far,” says Postdoctoral Researcher Elina Engberg.
    Further research is needed to determine in more detail how much sedentary digital media use increases the risk of being overweight, and how much physical activity, and at what intensity, is needed to ward off that risk. In this study, the amounts of physical activity and digital media use were reported by the children themselves and the intensity of their activity was not measured, so follow-up studies are warranted.
    “A good rule of thumb is to adhere to the physical activity guidelines for children and adolescents, according to which school-aged children and adolescents should be physically active in a versatile, brisk and strenuous manner for at least 60 minutes a day in a way that suits the individual, considering their age,” says Engberg. In addition, excessive and extended sedentary activity should be avoided.
    Story Source:
    Materials provided by University of Helsinki. Note: Content may be edited for style and length.