More stories

  • AI models can now continually learn from new data on intelligent edge devices like smartphones and sensors

    Microcontrollers, miniature computers that can run simple commands, are the basis for billions of connected devices, from internet-of-things (IoT) devices to sensors in automobiles. But cheap, low-power microcontrollers have extremely limited memory and no operating system, making it challenging to train artificial intelligence models on “edge devices” that work independently from central computing resources.
    Training a machine-learning model on an intelligent edge device allows it to adapt to new data and make better predictions. For instance, training a model on a smart keyboard could enable the keyboard to continually learn from the user’s writing. However, the training process requires so much memory that it is typically done using powerful computers at a data center, before the model is deployed on a device. This is more costly and raises privacy issues since user data must be sent to a central server.
    To address this problem, researchers at MIT and the MIT-IBM Watson AI Lab developed a new technique that enables on-device training using less than a quarter of a megabyte of memory. Other training solutions designed for connected devices can use more than 500 megabytes of memory, greatly exceeding the 256-kilobyte capacity of most microcontrollers (there are 1,024 kilobytes in one megabyte).
    The intelligent algorithms and framework the researchers developed reduce the amount of computation required to train a model, which makes the process faster and more memory efficient. Their technique can be used to train a machine-learning model on a microcontroller in a matter of minutes.
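    The article does not spell out the authors' algorithms, but one common memory-saving idea in this space is to update only a small subset of parameters so that the gradient and activation state held during training stays tiny. The NumPy sketch below is purely illustrative (all names and sizes are invented, not from the paper): a frozen "backbone" feeds a trainable head of just eight weights.

```python
import numpy as np

# Hypothetical sketch (not the authors' system): keep the backbone frozen
# and train only a tiny head, so the gradient state held during training
# is a handful of floats -- a microcontroller-class memory budget.
rng = np.random.default_rng(0)

W_frozen = rng.normal(size=(16, 8))  # fixed random feature extractor
w_head = np.zeros(8)                 # the only trainable parameters

def forward(x):
    h = np.tanh(x @ W_frozen)        # backbone activations
    return h, h @ w_head             # features and scalar prediction

def train_step(x, y, lr=0.05):
    """One SGD step that touches only the 8 head weights."""
    global w_head
    h, pred = forward(x)
    w_head -= lr * 2.0 * (pred - y) * h  # gradient of squared error w.r.t. head

# Adapt to a toy stream of new data, as an on-device model would.
X = rng.normal(size=(64, 16))
y = (X @ rng.normal(size=16)) * 0.1
for _ in range(200):
    for xi, yi in zip(X, y):
        train_step(xi, yi)
```

    Real systems in this space layer quantization and careful operator scheduling on top; the sketch only shows that restricting which parameters are updated bounds training memory.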
    This technique also preserves privacy by keeping data on the device, which could be especially beneficial when data are sensitive, such as in medical applications. It also could enable customization of a model based on the needs of users. Moreover, the framework preserves or improves the accuracy of the model when compared to other training approaches.
    “Our study enables IoT devices to not only perform inference but also continuously update the AI models to newly collected data, paving the way for lifelong on-device learning. The low resource utilization makes deep learning more accessible and can have a broader reach, especially for low-power edge devices,” says Song Han, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the MIT-IBM Watson AI Lab, and senior author of the paper describing this innovation.

  • New algorithms help four-legged robots run in the wild

    A team led by the University of California San Diego has developed a new system of algorithms that enables four-legged robots to walk and run on challenging terrain while avoiding both static and moving obstacles.
    In tests, the system guided a robot to maneuver autonomously and swiftly across sandy surfaces, gravel, grass, and bumpy dirt hills covered with branches and fallen leaves without bumping into poles, trees, shrubs, boulders, benches or people. The robot also navigated a busy office space without bumping into boxes, desks or chairs.
    The work brings researchers a step closer to building robots that can perform search and rescue missions or collect information in places that are too dangerous or difficult for humans.
    The team will present its work at the 2022 International Conference on Intelligent Robots and Systems (IROS), which will take place from Oct. 23 to 27 in Kyoto, Japan.
    The system provides a legged robot more versatility because of the way it combines the robot’s sense of sight with another sensing modality called proprioception, which involves the robot’s sense of movement, direction, speed, location and touch — in this case, the feel of the ground beneath its feet.
    Currently, most approaches to train legged robots to walk and navigate rely either on proprioception or vision, but not both at the same time, said study senior author Xiaolong Wang, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering.
    “In one case, it’s like training a blind robot to walk by just touching and feeling the ground. And in the other, the robot plans its leg movements based on sight alone. It is not learning two things at the same time,” said Wang. “In our work, we combine proprioception with computer vision to enable a legged robot to move around efficiently and smoothly — while avoiding obstacles — in a variety of challenging environments, not just well-defined ones.”
    The system that Wang and his team developed uses a special set of algorithms to fuse data from real-time images taken by a depth camera on the robot’s head with data from sensors on the robot’s legs. This was not a simple task. “The problem is that during real-world operation, there is sometimes a slight delay in receiving images from the camera,” explained Wang, “so the data from the two different sensing modalities do not always arrive at the same time.”
    The team’s solution was to simulate this mismatch by randomizing the two sets of inputs — a technique the researchers call multi-modal delay randomization. The fused and randomized inputs were then used to train a reinforcement learning policy in an end-to-end manner. This approach helped the robot to make decisions quickly during navigation and anticipate changes in its environment ahead of time, so it could move and dodge obstacles faster on different types of terrains without the help of a human operator.
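    In code, the delay-randomization idea can be sketched roughly as follows. The class and field names here are invented for illustration (the team's released code will differ): during training, the policy is handed a depth frame sampled from the recent past rather than the newest one, paired with fresh proprioception.

```python
import random
from collections import deque

class DelayRandomizedObs:
    """Pair fresh proprioception with a randomly delayed camera frame.

    Illustrative sketch of multi-modal delay randomization: the policy
    sees a depth frame sampled from the recent past, so it learns to
    tolerate the camera latency encountered in real-world operation.
    """

    def __init__(self, max_delay_steps=3, seed=0):
        self.frames = deque(maxlen=max_delay_steps + 1)  # recent depth frames
        self.rng = random.Random(seed)

    def step(self, depth_frame, proprio):
        self.frames.append(depth_frame)
        delay = self.rng.randint(0, len(self.frames) - 1)
        stale_frame = self.frames[-1 - delay]  # 0 = newest, max = oldest buffered
        return stale_frame, proprio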
    Moving forward, Wang and his team are working on making legged robots more versatile so that they can conquer even more challenging terrains. “Right now, we can train a robot to do simple motions like walking, running and avoiding obstacles. Our next goals are to enable a robot to walk up and down stairs, walk on stones, change directions and jump over obstacles.”
    The team has released their code online at:
    Story Source:
    Materials provided by University of California – San Diego. Original written by Liezel Labios. Note: Content may be edited for style and length.

  • 'Game-changing' study offers a powerful computer-modeling approach to cell simulations

    A milestone report from the University of Kansas appearing this week in the Proceedings of the National Academy of Sciences proposes a new technique for modeling molecular life with computers.
    According to lead author Ilya Vakser, director of the Computational Biology Program and Center for Computational Biology and professor of molecular biosciences at KU, the investigation into computer modeling of life processes is a major step toward creating a working simulation of a living cell at atomic resolution. The advance promises new insights into the fundamental biology of a cell, as well as faster and more precise treatment of human disease.
    “It is about tens or hundreds of thousands of times faster than the existing atomic resolution techniques,” Vakser said. “This provides unprecedented opportunities to characterize physiological mechanisms that now are far beyond the reach of computational modeling, to get insights into cellular mechanisms and to use this knowledge to improve our ability to treat diseases.”
    Until now, a major hurdle to modeling cells via computer has been how to approach proteins and their interactions that lie at the heart of cellular processes. To date, established techniques for modeling protein interactions have depended on either “protein docking” or “molecular simulation.”
    According to the investigators, both approaches have advantages and drawbacks. While protein docking algorithms are great for sampling spatial coordinates, they do not account for the “time coordinate,” or dynamics of protein interactions. By contrast, molecular simulations model dynamics well, but these simulations are too slow or low-resolution.
    “Our proof-of-concept study bridges the two modeling methodologies, developing an approach that can reach unprecedented simulation timescales at all-atom resolution,” the authors wrote.

  • Stretchy, bio-inspired synaptic transistor can enhance, weaken device memories

    Robotics and wearable devices might soon get a little smarter with the addition of a stretchy, wearable synaptic transistor developed by Penn State engineers. The device works like neurons in the brain to send signals to some cells and inhibit others in order to enhance and weaken the devices’ memories.
    Led by Cunjiang Yu, Dorothy Quiggle Career Development Associate Professor of Engineering Science and Mechanics and associate professor of biomedical engineering and of materials science and engineering, the team designed the synaptic transistor to be integrated in robots or wearables and use artificial intelligence to optimize functions. The details were published on Sept. 29 in Nature Electronics.
    “Mirroring the human brain, robots and wearable devices using the synaptic transistor can use its artificial neurons to ‘learn’ and adapt their behaviors,” Yu said. “For example, if we burn our hand on a stove, it hurts, and we know to avoid touching it next time. The same results will be possible for devices that use the synaptic transistor, as the artificial intelligence is able to ‘learn’ and adapt to its environment.”
    According to Yu, the artificial neurons in the device were designed to perform like neurons in the ventral tegmental area, a tiny segment of the human brain located in the uppermost part of the brain stem. Neurons process and transmit information by releasing neurotransmitters at their synapses, typically located at the neural cell ends. Excitatory neurotransmitters trigger the activity of other neurons and are associated with enhancing memories, while inhibitory neurotransmitters reduce the activity of other neurons and are associated with weakening memories.
    “Unlike all other areas of the brain, neurons in the ventral tegmental area are capable of releasing both excitatory and inhibitory neurotransmitters at the same time,” Yu said. “By designing the synaptic transistor to operate with both synaptic behaviors simultaneously, fewer transistors are needed compared to conventional integrated electronics technology, which simplifies the system architecture and allows the device to conserve energy.”
    To model soft, stretchy biological tissues, the researchers used stretchable bilayer semiconductor materials to fabricate the device, allowing it to stretch and twist while in use, according to Yu. Conventional transistors, on the other hand, are rigid and will break when deformed.
    “The transistor is mechanically deformable and functionally reconfigurable, yet still retains its functions when stretched extensively,” Yu said. “It can attach to a robot or wearable device to serve as their outermost skin.”
    In addition to Yu, other contributors include Hyunseok Shim and Shubham Patel, Penn State Department of Engineering Science and Mechanics; Yongcao Zhang, the University of Houston Materials Science and Engineering Program; Faheem Ershad, Penn State Department of Biomedical Engineering and University of Houston Department of Biomedical Engineering; Binghao Wang, School of Electronic Science and Engineering, Southeast University and Department of Chemistry and the Materials Research Center, Northwestern University; Zhihua Chen, Flexterra Inc.; Tobin J. Marks, Department of Chemistry and the Materials Research Center, Northwestern University; Antonio Facchetti, Flexterra Inc. and Northwestern University’s Department of Chemistry and Materials Research Center.
    The Office of Naval Research, the Air Force Office of Scientific Research and the National Science Foundation supported this work.
    Story Source:
    Materials provided by Penn State. Original written by Mariah Chuprinski.

  • Social media use linked to developing depression regardless of personality

    Researchers in public policy and education recently found that young adults who use more social media are significantly more likely to develop depression within six months, regardless of personality type.
    Published in the Journal of Affective Disorders Reports, the study, “Associations between social media use, personality structure, and development of depression,” was co-authored by Renae Merrill, a doctoral student in the Public Policy Program at the University of Arkansas.
    Merrill wrote the paper with dean of the College of Public Health and Human Sciences at Oregon State University, Brian Primack, and Chunhua Cao, an assistant professor in the College of Education at the University of Alabama.
    “Previous research has linked the development of depression with numerous factors,” the authors noted. “However, the literature has been lacking in studies that focus on how various personality characteristics may interact with social media use and depression. This new study addressed these important research questions, finding strong and linear associations of depression across all personality traits.”
    Among the study’s findings was that people with high agreeableness were 49 percent less likely to become depressed than people with low agreeableness. Additionally, those with high neuroticism were twice as likely to develop depression as those with low neuroticism when using more than 300 minutes of social media per day. Most importantly, for each personality trait, social media use was strongly associated with the development of depression.
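    For readers unfamiliar with the statistic, an odds ratio compares the odds of an outcome between two groups. The counts below are invented purely to show how a figure like "49 percent less likely" arises; they are not the study's data.

```python
def odds_ratio(cases_a, total_a, cases_b, total_b):
    """Odds of the outcome in group A relative to group B."""
    odds_a = cases_a / (total_a - cases_a)
    odds_b = cases_b / (total_b - cases_b)
    return odds_a / odds_b

# Hypothetical counts for high- vs. low-agreeableness participants.
or_agree = odds_ratio(cases_a=26, total_a=526, cases_b=51, total_b=551)
percent_lower = (1 - or_agree) * 100  # an odds ratio of ~0.51 -> ~49% lower odds
```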
    The sample of more than 1,000 U.S. adults between the ages of 18 and 30 was drawn from 2018 data collected by Primack and his colleagues at the University of Pittsburgh.
    Depression was measured using the Patient Health Questionnaire. Social media was measured by asking participants how much daily time was spent using popular social media platforms, and personality was measured using the Big Five Inventory, which assessed openness, conscientiousness, extraversion, agreeableness and neuroticism.
    The authors suggest that problematic social comparison can enhance negative feelings of oneself and others, which could explain how risk of depression increases with increased social media use. Engaging primarily in negative content can also enhance these feelings. And lastly, engaging in more social media reduces opportunities for in-person interactions and activities outside of the home.
    Depression has been noted as the leading cause of disability and mortality worldwide, which makes these findings all the more relevant to health interventions and prevention efforts.
    “Findings from this study are important during a time of technology expansion and integration,” Merrill said. “Connecting to people virtually may increase the risk of miscommunication or misperception that leads to relationship difficulties and potential risk for developing mental health problems.”
    “People have innate emotional needs for social connection and understanding,” Merrill added. “For example, social media experiences can be improved by becoming more aware of our emotions and our connection with others in various life circumstances. This awareness helps improve relationship quality by simply reaching shared meaning and understanding through more effective communication and concern for others and ourselves. Despite our differences, we have the ability to create a culture of empathy and kindness.”
    Research support was provided by the Fine Foundation.
    Story Source:
    Materials provided by University of Arkansas.

  • Study shows how math, science identity in students affects college, career outcomes

    If you ask someone if they are a math or science person, they may quickly tell you yes or no. It turns out that how people answer that question in ninth grade and even earlier not only can tell you what subjects they prefer in school, but how likely they are to go on to study STEM subjects in college and work in those fields as adults. The results of a new study from the University of Kansas suggest the importance of fostering positive attitudes toward math and science early in students’ lives to address gender and socioeconomic gaps in STEM.
    KU researchers analyzed a nationwide data set that asked students if they considered themselves a math and/or science person in ninth grade in 2009. The survey then followed up with those students in 11th grade to ask the same question, then three years after graduation to see who had enrolled in science, technology, engineering and math (STEM) majors, and whether they intended to have a related career when they turned 30. The results not only support the influence of student attitudes on academic outcomes, they also suggest efforts should be focused on cultivating positive attitudes earlier in students’ careers, before they get to college, where most such efforts currently happen.
    Rafael Quintana, assistant professor of educational psychology, and Argun Saatcioglu, professor of educational policy and sociology, both at KU, conducted a study in which they analyzed data from the High School Longitudinal Study of 2009. The data set includes responses from more than 21,000 students from about 940 schools across the United States. The study was published in the journal Socius: Sociological Research for a Dynamic World.
    Results showed that the odds of enrolling in a STEM major were 1.78 times larger for students with a science identity in ninth grade and 1.66 times larger for those with a math identity than for those who did not identify with the subjects. The odds of expecting a career in STEM were 1.69 and 1.6 times larger for those with high science and math identities, respectively.
    Those numbers are illustrative of how having positive experiences with math and science early can be influential both in higher education and later in life, the researchers said.
    “What do we mean when we say education has long-lasting effects? That’s something we want to think about longitudinally,” Quintana said. “Those early experiences get ‘under the skin,’ as they are related to later outcomes independently of how these attitudes developed later. What this suggests is one, the importance of identity beliefs for career-related decisions, and two, that early experiences can have long-lasting, potentially irreversible effects.”
    The data also showed that, when controlling for all other variables, the odds of expecting a career in a STEM field were about 50% lower for women than for men, and that there was a significant interaction between science identity in school and gender when predicting STEM occupation. In other words, it was more consequential for men to identify with science in ninth grade, as they were more likely to go on to a career in the sciences. Research has long noted a gender gap and socioeconomic inequalities in STEM, but most efforts have focused on addressing them among college students. While those efforts are justified, Quintana said, the study results suggest it is important to take measures to address math and science inequities earlier in life as well.
    Schools can play a long-term role in helping students believe they can have a career in STEM and visualize such a possibility. By providing equitable access to math and science programs, they can also provide chances to those who may not otherwise get them, the researchers said.
    “We want schools to matter and have a consequential effect,” Saatcioglu said. “If you can get kids thinking they are a math or science person through positive experiences, that can have long-term effects. If you can get students to feel that way, it can be beneficial. The key in this study was that Rafael was able to isolate the long-term effects of attitudes from ninth grade.”
    The attitudes students hold in early high school are key, as they have a cascading effect.
    “For example, individuals’ self-perceptions can affect the courses they take, the effort and time they spend on specific subjects and the interests and aspirations they develop,” the authors wrote. “These attitudes and behaviors can shape individuals’ career trajectories independently of their future identity beliefs. This ramification of causal effects is what generates the cascading and potentially irreversible consequences of early-life experiences.”
    Quintana, who uses longitudinal data analysis to study problems in education and human development, said he also hopes to revisit the data in the future to see where those in the data set are now, and how many are still working in STEM fields. Such analysis could also be applied to understand other early educational experiences such as bullying and how they influence later choices, attitudes and career pathways.

  • Climate change could turn some blue lakes to green or brown

    Some picturesque blue lakes may not be so blue in the future, thanks to climate change.

    In the first global tally of lake color, researchers estimate that roughly one-third of Earth’s lakes are blue. But, should average summer air temperatures rise by a few degrees, some of those crystal waters could turn a murky green or brown, the team reports in the Sept. 28 Geophysical Research Letters.

    The changing hues could alter how people use those waters and offer clues about the stability of lake ecosystems. Lake color depends in part on what’s in the water, but factors such as water depth and surrounding land use also matter. Compared with blue lakes, green or brown lakes have more algae, sediment and organic matter, says Xiao Yang, a hydrologist at Southern Methodist University in Dallas.

    Yang and colleagues used satellite photos from 2013 to 2020 to analyze the color of more than 85,000 lakes around the world. Because storms and seasons can temporarily affect a lake’s color, the researchers focused on the most frequent color observed for each lake over the seven-year period. The researchers also created an interactive online map that can be used to explore the colors of these lakes.
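    A minimal sketch of that "most frequent color" step follows. The data and names are invented (the actual pipeline works from satellite reflectance measurements, not ready-made color labels): each lake is labeled with its modal color across all of its observations.

```python
from collections import Counter

def dominant_color(observations):
    """Label a lake with its modal color across many satellite passes."""
    return Counter(observations).most_common(1)[0][0]

# One lake's hypothetical observations over several years: storms and
# seasons shift individual readings, but the mode smooths them out.
lake_obs = ["blue", "blue", "green", "blue", "brown", "blue", "green"]
```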

    The approach is “super cool,” says Dina Leech, an aquatic ecologist at Longwood University in Farmville, Va., who was not involved with the study. These satellite data are “just so powerful.”

    The scientists then looked at local climates during that time to see how they may be linked to lake color around the world. For many small or remote water bodies, records of temperature and precipitation don’t exist, so the researchers instead relied on climate “hindcasts” calculated for every spot on the globe, pieced together from relatively sparse records.

    Lakes in places with average summer air temperatures that were below 19° Celsius were more likely to be blue than lakes with warmer summers, the researchers found. But up to 14 percent of the blue lakes they studied are near that threshold. If average summer temperatures increase another 3 degrees Celsius — an amount that scientists think is plausible by the end of the century — those 3,800 lakes could turn green or brown (SN: 8/9/21). That’s because warmer water helps algae bloom more, which changes the properties of the water, giving it a green-brown tint, Yang says.

    Extrapolating beyond this sample of lakes is a bit tricky. “We don’t even know how many lakes there are in the world,” says study coauthor Catherine O’Reilly, an aquatic ecologist at Illinois State University in Normal. Many lakes are too small to reliably detect via satellite, but by some estimates, tens of thousands of larger lakes could lose their blue hue.

    If some lakes do become less blue, people will probably lose some of the resources they have come to value, O’Reilly says. Lakes are often used for drinking water, food or recreation. If the water is more clogged with algae, it could be unappealing for play or more costly to clean for drinking.

    But the color changes wouldn’t necessarily mean that the lakes are any less healthy. “[Humans] don’t value lots of algae in a lake, but if you’re a certain type of fish species, you might be like ‘this is great,’” O’Reilly says.

    Lake color can hint at the stability of a lake’s ecosystem, with shifting shades indicating changing conditions for the critters living in the water. One benefit of the new study is that it gives scientists a baseline for assessing how climate change is affecting Earth’s freshwater resources. Continued monitoring of lakes could help scientists detect future changes.

    “[The study] sets a marker that we can compare future results to,” says Mike Pace, an aquatic ecologist at the University of Virginia in Charlottesville, who was not involved with the study. “That’s, to me, the great power of this study.”

  • BESSY II: Localization of d-electrons in transition metals determined

    Transition metals and non-ferrous metals such as copper, nickel and cobalt are not only suitable as materials in engineering and technology, but also for a wide range of applications in electrochemistry and catalysis. Their chemical and physical properties are related to the occupation of the outer d-orbital shells around the atomic nuclei. The energetic levels of the electrons as well as their localisation or delocalisation can be studied at the X-ray source BESSY II, which offers powerful synchrotron radiation.
    Copper, Nickel, Cobalt
    The Uppsala-Berlin Joint Lab (UBjL) team, led by Prof. Alexander Föhlisch and Prof. Nils Mårtensson, has now published new results on copper, nickel and cobalt samples. They confirmed known findings for copper, whose d-electrons are atomically localised, and for nickel, in which localised electrons coexist with delocalised electrons. For cobalt, which is used in batteries and as an alloy in fuel cells, however, previous findings were contradictory because the measurement accuracy was not sufficient to make clear statements.
    Spectroscopy combined with highly sensitive detectors
    At BESSY II, the Uppsala-Berlin Joint Lab has set up an instrument that enables measurements with the necessary precision. To determine electronic localisation or delocalisation, Auger photoelectron coincidence spectroscopy (APECS) is used. APECS relies on the newly developed “Angle resolved Time of Flight” (ArTOF) electron spectrometers, whose detection efficiency exceeds that of standard hemispherical analysers by orders of magnitude. Equipped with two ArTOF electron spectrometers, the CoESCA@UE52-PGM end station, supervised by UBjL scientist Dr. Danilo Kühn, is unique worldwide.
    Analysing (catalytical) materials
    In the case of the element cobalt, the measurements now revealed that the d-electrons of cobalt can be regarded as highly delocalised. “This is an important step for a quantitative determination of electronic localisation on a variety of materials, catalysts and (electro)chemical processes,” Föhlisch points out.
    Story Source:
    Materials provided by Helmholtz-Zentrum Berlin für Materialien und Energie.