More stories

  • System trains drones to fly around obstacles at high speeds

    If you follow autonomous drone racing, you likely remember the crashes as much as the wins. In drone racing, teams compete to see which vehicle is better trained to fly fastest through an obstacle course. But the faster drones fly, the more unstable they become, and at high speeds their aerodynamics can be too complicated to predict. Crashes, therefore, are a common and often spectacular occurrence.
    But if they can be pushed to be faster and more nimble, drones could be put to use in time-critical operations beyond the race course, for instance to search for survivors in a natural disaster.
    Now, aerospace engineers at MIT have devised an algorithm that helps drones find the fastest route around obstacles without crashing. The new algorithm combines simulations of a drone flying through a virtual obstacle course with data from experiments of a real drone flying through the same course in a physical space.
    The researchers found that a drone trained with their algorithm flew through a simple obstacle course up to 20 percent faster than a drone trained on conventional planning algorithms. Interestingly, the new algorithm didn’t always keep a drone ahead of its competitor throughout the course. In some cases, it chose to slow a drone down to handle a tricky curve, or save its energy in order to speed up and ultimately overtake its rival.
    “At high speeds, there are intricate aerodynamics that are hard to simulate, so we use experiments in the real world to fill in those black holes to find, for instance, that it might be better to slow down first to be faster later,” says Ezra Tal, a graduate student in MIT’s Department of Aeronautics and Astronautics. “It’s this holistic approach we use to see how we can make a trajectory overall as fast as possible.”
    “These kinds of algorithms are a very valuable step toward enabling future drones that can navigate complex environments very fast,” adds Sertac Karaman, associate professor of aeronautics and astronautics, and director of the Laboratory for Information and Decision Systems at MIT. “We are really hoping to push the limits in a way that they can travel as fast as their physical limits will allow.”
    Tal, Karaman, and MIT graduate student Gilhyun Ryou have published their results in the International Journal of Robotics Research.
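    To make the idea concrete, below is a minimal, hypothetical Python sketch of a simulation-plus-experiment loop in the spirit of the article: candidate per-segment timings for a fixed course are screened in a cheap simulator, and only faster allocations that a (mocked) real flight also confirms as feasible are accepted. The functions, the extra slack on the "tricky" segments, and the whole search strategy are illustrative assumptions, not the MIT team's published algorithm.

```python
# Hypothetical sketch (not the authors' published method): tuning how much time a
# drone spends on each segment of a fixed obstacle course, using cheap simulation
# rollouts to screen candidates and occasional "real" flights to correct the model.
import numpy as np

rng = np.random.default_rng(0)
N_SEGMENTS = 5                      # segments between consecutive gates/obstacles

def simulated_feasible(times):
    """Stand-in for a dynamics simulation: very short segment times are infeasible."""
    return np.all(times > 0.4)

def real_flight_feasible(times):
    """Stand-in for a real experiment: unmodeled aerodynamics mean tight segments
    (e.g. a sharp curve) need extra slack that the simulator does not capture."""
    slack = np.array([0.0, 0.15, 0.0, 0.15, 0.0])   # assumed 'tricky curve' segments
    return np.all(times > 0.4 + slack)

best = np.full(N_SEGMENTS, 1.5)     # conservative initial time allocation (seconds)
for _ in range(2000):
    candidate = np.clip(best + rng.normal(0, 0.05, N_SEGMENTS), 0.05, None)
    if candidate.sum() >= best.sum():
        continue                    # only consider faster total lap times
    if not simulated_feasible(candidate):
        continue                    # cheap screen in simulation
    if real_flight_feasible(candidate):
        best = candidate            # "experiment" confirms the faster allocation

print("fastest feasible lap:", best.round(2), "total", round(best.sum(), 2), "s")
```

    In this toy version, the real-flight check is what forces the drone to keep extra time on the tricky segments, echoing the article's point that slowing down in one place can make the overall lap faster.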

  • Researchers use artificial intelligence to unlock extreme weather mysteries

    From lake-draining drought in California to bridge-breaking floods in China, extreme weather is wreaking havoc. Preparing for weather extremes in a changing climate remains a challenge, however, because their causes are complex and their response to global warming is often not well understood. Now, Stanford researchers have developed a machine learning tool to identify conditions for extreme precipitation events in the Midwest, which account for over half of all major U.S. flood disasters. Published in Geophysical Research Letters, their approach is one of the first examples using AI to analyze causes of long-term changes in extreme events and could help make projections of such events more accurate.
    “We know that flooding has been getting worse,” said study lead author Frances Davenport, a PhD student in Earth system science in Stanford’s School of Earth, Energy & Environmental Sciences (Stanford Earth). “Our goal was to understand why extreme precipitation is increasing, which in turn could lead to better predictions about future flooding.”
    Among other impacts, global warming is expected to drive heavier rain and snowfall by creating a warmer atmosphere that can hold more moisture. Scientists hypothesize that climate change may affect precipitation in other ways, too, such as changing when and where storms occur. Revealing these impacts has remained difficult, however, in part because global climate models do not necessarily have the spatial resolution to model these regional extreme events.
    “This new approach to leveraging machine learning techniques is opening new avenues in our understanding of the underlying causes of changing extremes,” said study co-author Noah Diffenbaugh, the Kara J Foundation Professor in the School of Earth, Energy & Environmental Sciences. “That could enable communities and decision makers to better prepare for high-impact events, such as those that are so extreme that they fall outside of our historical experience.”
    Davenport and Diffenbaugh focused on the upper Mississippi watershed and the eastern part of the Missouri watershed. The highly flood-prone region, which spans parts of nine states, has seen extreme precipitation days and major floods become more frequent in recent decades. The researchers started by using publicly available climate data to calculate the number of extreme precipitation days in the region from 1981 to 2019. Then they trained a machine learning algorithm designed for analyzing grid data, such as images, to identify large-scale atmospheric circulation patterns associated with extreme precipitation (above the 95th percentile).
    “The algorithm we use correctly identifies over 90 percent of the extreme precipitation days, which is higher than the performance of traditional statistical methods that we tested,” Davenport said.
    The trained machine learning algorithm revealed that multiple factors are responsible for the recent increase in Midwest extreme precipitation. During the 21st century, the atmospheric pressure patterns that lead to extreme Midwest precipitation have become more frequent, increasing at a rate of about one additional day per year, although the researchers note that the changes are much weaker going back further in time to the 1980s.
    However, the researchers found that when these atmospheric pressure patterns do occur, the amount of precipitation that results has clearly increased. As a result, days with these conditions are more likely to have extreme precipitation now than they did in the past. Davenport and Diffenbaugh also found that increases in the precipitation intensity on these days were associated with higher atmospheric moisture flows from the Gulf of Mexico into the Midwest, bringing the water necessary for heavy rainfall in the region.
    The researchers hope to extend their approach to look at how these different factors will affect extreme precipitation in the future. They also envision redeploying the tool to focus on other regions and types of extreme events, and to analyze distinct extreme precipitation causes, such as weather fronts or tropical cyclones. These applications will help further parse climate change’s connections to extreme weather.
    “While we focused on the Midwest initially, our approach can be applied to other regions and used to understand changes in extreme events more broadly,” said Davenport. “This will help society better prepare for the impacts of climate change.”
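    As a rough illustration of the approach described above, the hedged Python sketch below treats daily gridded circulation fields like images and trains a small convolutional network to classify them as extreme precipitation days or not. The network layout, grid size, and variable names are assumptions for illustration; they are not the study's actual model or data.

```python
# Minimal sketch (assumptions, not the study's code): a small CNN that classifies
# daily gridded circulation fields (e.g. sea-level pressure anomalies on a
# lat x lon grid) as "extreme precipitation day" vs "not", mirroring the idea of
# treating atmospheric grids like images.
import torch
import torch.nn as nn

class CirculationCNN(nn.Module):
    def __init__(self, n_channels=2, grid=(40, 60)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)   # logit for P(extreme day)

    def forward(self, x):                    # x: (batch, channels, lat, lon)
        return self.classifier(self.features(x).flatten(1))

# Toy training step on random data standing in for reanalysis fields and labels.
model = CirculationCNN()
x = torch.randn(8, 2, 40, 60)                # 8 days, 2 atmospheric variables
y = torch.randint(0, 2, (8, 1)).float()      # 1 = extreme precipitation day
loss = nn.BCEWithLogitsLoss()(model(x), y)
loss.backward()
print("toy loss:", loss.item())
```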
    Story Source:
    Materials provided by Stanford University. Original written by Rob Jordan.

  • Researchers develop real-time lyric generation technology to inspire song writing

    Music artists can find inspiration and new creative directions for their song writing with technology developed by Waterloo researchers.
    LyricJam, a real-time system that uses artificial intelligence (AI) to generate lyric lines for live instrumental music, was created by members of the University’s Natural Language Processing Lab.
    The lab, led by Olga Vechtomova, a Waterloo Engineering professor cross-appointed in Computer Science, has been researching creative applications of AI for several years.
    The lab’s initial work led to the creation of a system that learns musical expressions of artists and generates lyrics in their style.
    Recently, Vechtomova, along with Waterloo graduate students Gaurav Sahu and Dhruv Kumar, developed technology that relies on various aspects of music such as chord progressions, tempo and instrumentation to synthesize lyrics reflecting the mood and emotions expressed by live music.
    As a musician or a band plays instrumental music, the system continuously receives the raw audio clips, which the neural network processes to generate new lyric lines. The artists can then use the lines to compose their own song lyrics.
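    A hypothetical outline of such a real-time loop is sketched below in Python: audio clips are captured continuously, summarized into features, and passed to a generator that returns one lyric line at a time. Every function here is a labelled stand-in; this is not LyricJam's published architecture.

```python
# Hypothetical outline of a real-time audio-to-lyrics loop in the spirit of the
# article's description; the function bodies and the generation model are
# assumptions, not LyricJam's actual system.
import queue, threading, time

audio_clips = queue.Queue()          # filled by an audio capture thread

def capture_audio():
    """Stand-in for live capture: push a short raw audio clip every second."""
    while True:
        audio_clips.put(b"...raw PCM bytes...")
        time.sleep(1.0)

def extract_features(clip):
    """Assumed step: summarize the clip (e.g. spectrogram, tempo, chords)."""
    return {"tempo": 92, "mode": "minor"}   # placeholder values

def generate_lyric_line(features):
    """Assumed step: a conditional generator mapping audio features to one
    lyric line; here just a template for illustration."""
    mood = "slow-burning" if features["tempo"] < 100 else "restless"
    return f"a {mood} line written in a {features['mode']} key"

threading.Thread(target=capture_audio, daemon=True).start()
for _ in range(3):                   # show a few iterations of the live loop
    clip = audio_clips.get()
    print(generate_lyric_line(extract_features(clip)))
```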

  • Natural language processing research: Signed languages

    Advancements in natural language processing (NLP) enable computers to understand what humans say and help people communicate through tools like machine translation, voice-controlled assistants and chatbots.
    But NLP research often only focuses on spoken languages, excluding the more than 200 signed languages around the world and the roughly 70 million people who might rely on them to communicate.
    Kayo Yin, a master’s student in the Language Technologies Institute, wants that to change. Yin co-authored a paper that called for NLP research to include signed languages.
    “Signed languages, even though they are a significant part of the languages used in the world, aren’t included,” Yin said. “There is a demand and an importance in having technology that can handle signed languages.”
    The paper, “Including Signed Languages in Natural Language Processing,” won the Best Theme Paper award at this month’s 59th Annual Meeting of the Association for Computational Linguistics. Yin’s co-authors included Amit Moryossef of Bar-Ilan University in Israel; Julie Hochgesang of Gallaudet University; Yoav Goldberg of Bar-Ilan University and the Allen Institute for AI; and Malihe Alikhani of the University of Pittsburgh’s School of Computing and Information.
    The authors wrote that communities relying on signed language have fought for decades both to learn and use those languages, and for them to be recognized as legitimate.

  • Physical activity protects children from the adverse effects of digital media on their weight later in adolescence

    Children’s heavy digital media use is associated with a heightened risk of being overweight later in adolescence, but physical activity appears to protect against this effect.
    A recently completed study shows that six hours of leisure-time physical activity per week at the age of 11 reduces the risk of being overweight at the age of 14 that is associated with heavy digital media use.
    Obesity in children and adolescents is one of the most significant health-related challenges globally. A study carried out by the Folkhälsan Research Center and the University of Helsinki investigated whether a link exists between the digital media use of Finnish school-age children and the risk of being overweight later in adolescence. In addition, the study looked into whether children’s physical activity has an effect on this potential link.
    The results were published in the Journal of Physical Activity and Health.
    Six or more hours of physical activity per week appears to offset the adverse effects of screen time
    The study involved 4,661 children from the Finnish Health in Teens (Fin-HIT) study. The participating children reported how much time they spent on sedentary digital media use and physical activity outside school hours. The study demonstrated that heavy use of digital media at 11 years of age was associated with a heightened risk of being overweight at 14 years of age in children who reported engaging in under six hours per week of physical activity in their leisure time. In children who reported being physically active for six or more hours per week, such a link was not observed.
    The study also took into account other factors potentially affecting obesity, such as childhood eating habits and amount of sleep, as well as the amount of digital media use and physical activity in adolescence. Even after adjusting for these confounding factors, the protective role of childhood physical activity in the link between childhood digital media use and being overweight later in life persisted.
    Activity according to recommendations
    “The effect of physical activity on the association between digital media use and being overweight has not been extensively investigated in follow-up studies so far,” says Postdoctoral Researcher Elina Engberg.
    Further research is needed to determine in more detail how much sedentary digital media use increases the risk of being overweight, and how much physical activity, and at what intensity, is needed to ward off that risk. In this study, the amounts of physical activity and digital media use were self-reported by the children and the intensity of their activity was not measured, so further studies are needed.
    “A good rule of thumb is to adhere to the physical activity guidelines for children and adolescents, according to which school-aged children and adolescents should get at least 60 minutes of varied, brisk and strenuous physical activity every day, in a way that suits the individual and their age,” says Engberg. In addition, excessive and prolonged sedentary time should be avoided.
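    For readers curious what this kind of analysis can look like, the sketch below fits a logistic regression with an interaction term between childhood screen time and physical activity, a standard way to test whether activity modifies the screen-time risk. The variable names and the simulated data are assumptions; this is not the study's code.

```python
# Illustrative sketch only (not the study's analysis): a logistic regression with
# an interaction term, the usual way to test whether physical activity modifies
# the link between childhood screen time and later overweight.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 4661                                   # cohort size reported in the article
df = pd.DataFrame({
    "heavy_media_11": rng.integers(0, 2, n),    # heavy digital media use at age 11
    "active_6h_11": rng.integers(0, 2, n),      # >= 6 h/week leisure activity at 11
    "sleep_hours": rng.normal(9, 1, n),
})
# Simulate the reported pattern: media use raises risk only in less active children.
logit_p = -1.5 + 0.8 * df.heavy_media_11 * (1 - df.active_6h_11) - 0.05 * df.sleep_hours
df["overweight_14"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit(
    "overweight_14 ~ heavy_media_11 * active_6h_11 + sleep_hours", data=df
).fit(disp=False)
print(model.summary().tables[1])           # interaction term should come out negative
```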
    Story Source:
    Materials provided by University of Helsinki.

  • Overcoming the limitations of scanning electron microscopy with AI

    What if the super-resolution imaging techniques used in the latest 8K premium TVs were applied to scanning electron microscopy, an essential tool for materials research?
    A joint research team from POSTECH and the Korea Institute of Materials Science (KIMS) applied deep learning to scanning electron microscopy (SEM) to develop a super-resolution imaging technique that converts low-resolution electron backscattering diffraction (EBSD) microstructure images obtained from conventional analysis equipment into super-resolution images. The findings were recently published in npj Computational Materials.
    In modern-day materials research, SEM images play a crucial role in developing new materials, from microstructure visualization and characterization to numerical analysis of material behavior. However, acquiring high-quality microstructure image data can be laborious and highly time-consuming because of the hardware limitations of the SEM. This can affect the accuracy of subsequent material analysis, so overcoming the technical limitations of the equipment is paramount.
    To this end, the joint research team developed a faster and more accurate microstructure imaging technique using deep learning. Using a convolutional neural network, the resolution of existing microstructure images was enhanced by factors of 4, 8 and 16, reducing imaging time by up to a factor of 256 compared with the conventional SEM system.
    In addition, microstructure characterization and finite element analysis verified that super-resolution imaging restores the morphological details of the microstructure with high accuracy.
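    The sketch below shows the general idea in Python, assuming a small sub-pixel convolutional network that upscales a coarse EBSD-style map by a factor of four; the architecture is an illustrative stand-in, not the model published in the paper.

```python
# Minimal sketch of the general idea (assumptions, not the paper's model): a small
# convolutional super-resolution network that upscales a low-resolution EBSD-style
# map by 4x using sub-pixel convolution, so the microscope can scan far fewer points.
import torch
import torch.nn as nn

class EBSDSuperRes(nn.Module):
    def __init__(self, channels=1, scale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels * scale**2, 3, padding=1),
            nn.PixelShuffle(scale),              # rearranges channels into a 4x larger grid
        )

    def forward(self, low_res):                  # (batch, 1, H, W) -> (batch, 1, 4H, 4W)
        return self.net(low_res)

model = EBSDSuperRes()
low_res = torch.rand(1, 1, 64, 64)               # stand-in for a coarse EBSD orientation map
high_res = model(low_res)
print(high_res.shape)                            # torch.Size([1, 1, 256, 256])
# A 4x finer scan has 16x more pixels; at 16x it has 256x more, which matches the
# article's claim that imaging time can drop by up to a factor of 256.
```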
    “Through the EBSD technique developed in this study, we anticipate the time it takes to develop new materials will be drastically reduced,” explained Professor Hyoung Seop Kim of POSTECH who led the research.
    This research was conducted with support from the Mid-career Researcher Program of the National Research Foundation of Korea, the AI Graduate School Program of the Institute for Information & Communications Technology Promotion (IITP), and Phase 4 of the Brain Korea 21 Program of the Ministry of Education, as well as from the Korea Materials Research Institute.
    Story Source:
    Materials provided by Pohang University of Science & Technology (POSTECH).

  • Unlocking the AI algorithm ‘black box’ – new machine learning technology to find out what makes plants and humans tick

    The internal 24-hour cycles known as circadian rhythms are key to maintaining human, plant and animal health, and new machine learning tools for studying them could provide valuable insight into how broken clocks impact health.
    Circadian rhythms, such as the sleep-wake cycle, are innate to most living organisms and critical to life on Earth. The word circadian originates from the Latin phrase ‘circa diem’ which means ‘around a day’.
    Biologically, the circadian clock temporally orchestrates physiology, biochemistry, and metabolism across the 24-hour day-night cycle. This is why being out of kilter can affect our fitness levels, our health, or our ability to survive. For example, experiencing jet lag is a chronobiological problem — our body clocks are out of sync because the normal external cues such as light or temperature have changed.
    The circadian clock isn’t unique to humans. In plants, an accurate clock helps to regulate flowering and is crucial to synchronising metabolism and physiology with the rising and setting sun. Understanding circadian rhythms can help to improve plant growth and yields, not to mention revealing new avenues for tackling human diseases.
    Beyond plants
    For this latest research, the team applied machine learning (ML) to predict complex temporal circadian gene expression patterns in the model plant Arabidopsis thaliana. Taking newly generated datasets, published temporal datasets and Arabidopsis genomes, the team of scientists trained ML models to make predictions about circadian gene regulation and expression patterns.
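    As a toy, hedged illustration of this kind of prediction task, the Python sketch below classifies genes as circadian or not from simple promoter k-mer counts and then inspects which k-mers the model found informative. The features, the planted motif and the random-forest model are assumptions for illustration, not the study's pipeline.

```python
# Illustrative sketch only (assumed features and labels, not the study's pipeline):
# predicting whether a gene is circadian-regulated from simple promoter 3-mer counts,
# then inspecting feature importances as a first step toward opening the "black box".
import numpy as np
from collections import Counter
from itertools import product
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
KMERS = ["".join(p) for p in product("ACGT", repeat=3)]   # 64 possible 3-mers

def kmer_counts(seq):
    counts = Counter(seq[i:i + 3] for i in range(len(seq) - 2))
    return [counts.get(k, 0) for k in KMERS]

def random_promoter(circadian):
    seq = "".join(rng.choice(list("ACGT"), 500))
    if circadian:                      # toy signal: plant an 'evening element'-like motif
        seq = seq[:100] + "AAAATATCT" * 3 + seq[100:]
    return seq

labels = rng.integers(0, 2, 400)                           # 1 = circadian-regulated (toy)
X = np.array([kmer_counts(random_promoter(c)) for c in labels])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("most informative 3-mers:", [KMERS[i] for i in top])
```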

  • Brain connectivity can build better AI

    A new study shows that artificial intelligence networks based on human brain connectivity can perform cognitive tasks efficiently.
    By examining MRI data from a large Open Science repository, researchers reconstructed a brain connectivity pattern, and applied it to an artificial neural network (ANN). An ANN is a computing system consisting of multiple input and output units, much like the biological brain. A team of researchers from The Neuro (Montreal Neurological Institute-Hospital) and the Quebec Artificial Intelligence Institute trained the ANN to perform a cognitive memory task and observed how it worked to complete the assignment.
    This approach is unique in two ways. First, previous work on brain connectivity, also known as connectomics, focused on describing brain organization without looking at how it actually performs computations and functions. Second, traditional ANNs have arbitrary structures that do not reflect how real brain networks are organized. By integrating brain connectomics into the construction of ANN architectures, the researchers hoped both to learn how the wiring of the brain supports specific cognitive skills and to derive novel design principles for artificial networks.
    They found that ANNs with human brain connectivity, known as neuromorphic neural networks, performed cognitive memory tasks more flexibly and efficiently than other benchmark architectures. The neuromorphic neural networks were able to use the same underlying architecture to support a wide range of learning capacities across multiple contexts.
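    One simple way to picture a connectome-constrained network is sketched below in Python: an echo-state-style recurrent network whose recurrent weights are masked by a connectivity matrix (here randomly generated as a stand-in for empirical data), with only a linear readout trained on a short-term memory task. This is an assumed, minimal illustration, not the architecture used in the study.

```python
# Hedged sketch (not the paper's implementation): an echo-state-style recurrent
# network whose recurrent weights are masked by a connectome, tested on a simple
# working-memory task (reproduce the input from a few steps back).
import numpy as np

rng = np.random.default_rng(3)
N, T, DELAY = 100, 2000, 5

# Stand-in for a human connectivity matrix; in practice this would be loaded
# from diffusion MRI data (e.g. a parcellated structural connectome).
connectome_mask = (rng.random((N, N)) < 0.1).astype(float)

W = rng.normal(0, 1, (N, N)) * connectome_mask          # wiring follows the connectome
W *= 0.9 / max(abs(np.linalg.eigvals(W)))                # scale for stable dynamics
w_in = rng.normal(0, 1, N)

u = rng.uniform(-1, 1, T)                                # random input signal
states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])                     # reservoir update
    states[t] = x

# Train only a linear readout to recall the input from DELAY steps earlier.
X, y = states[DELAY:], u[:-DELAY]
w_out = np.linalg.lstsq(X, y, rcond=None)[0]
pred = X @ w_out
print("memory-task correlation:", round(np.corrcoef(pred, y)[0, 1], 3))
```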
    “The project unifies two vibrant and fast-paced scientific disciplines,” says Bratislav Misic, a researcher at The Neuro and the paper’s senior author. “Neuroscience and AI share common roots, but have recently diverged. Using artificial networks will help us to understand how brain structure supports brain function. In turn, using empirical data to make neural networks will reveal design principles for building better AI. So, the two will help inform each other and enrich our understanding of the brain.”
    This study, published in the journal Nature Machine Intelligence on Aug. 9, 2021, was funded with the help of the Canada First Research Excellence Fund, awarded to McGill University for the Healthy Brains, Healthy Lives initiative, the Natural Sciences and Engineering Research Council of Canada, Fonds de Recherche du Quebec — Santé, Canadian Institute for Advanced Research, Canada Research Chairs, Fonds de Recherche du Quebec — Nature et Technologies, and Centre UNIQUE (Union of Neuroscience and Artificial Intelligence).
    Story Source:
    Materials provided by McGill University.