More stories

  •

    Which role models are best for STEM? Researchers offer recommendations in new analysis

    An analysis of the effect role models have on students’ motivation in studying STEM subjects points to new ways to deploy these leaders in order to encourage learning across different populations. The recommendations provide a resource for parents, teachers, and policymakers seeking to maximize role models’ impact in diversifying the fields of science, technology, engineering, and mathematics.
    “STEM fields fail to attract and retain women as well as racial and ethnic minorities in numbers proportional to their share of the population,” explains Andrei Cimpian, a professor in New York University’s Department of Psychology and the senior author of the paper, which appears in the International Journal of STEM Education. “A popular method to diversify the STEM workforce has been to introduce students to STEM role models, but less clear is how effective this approach is — simply because it’s not certain which role models resonate with different student populations.”
    “Our recommendations, based on an analysis of over 50 studies, are aimed at ensuring that STEM role models are motivating for students of all backgrounds and demographics,” adds lead author Jessica Gladstone, an NYU postdoctoral fellow at the time of the study and now a researcher at Virginia Commonwealth University.
    Marian Wright Edelman, founder and president emerita of the Children’s Defense Fund, popularized the phrase “You can’t be what you can’t see,” which emphasized the importance of having role models with whom diverse populations could identify.
    While many have claimed that exposing students to role models is an effective tool for diversifying STEM fields, the evidence supporting this position is mixed. Moreover, the researchers note, the argument is a vague one, leaving open questions about under what conditions and for which populations role models can be useful for this purpose.
    Gladstone and Cimpian sought to bring more clarity to this important issue by reframing the question being asked. Rather than asking “Are role models effective?,” they asked a more specific — and potentially more informative — question: “Which role models are effective for which students?”
    In addressing it, they reviewed 55 studies on students’ STEM motivation as a function of several key features of role models — their perceived competence, their perceived similarity to students, and the perceived attainability of their success. They also examined how features of the students themselves, such as their gender, race/ethnicity, age, and identification with STEM, modulate the effectiveness of role models.

  •

    'My robot is a softie': Physical texture influences judgments of robot personality

    Researchers have found that the physical texture of robots influenced perceptions of robot personality. Furthermore, first impressions of robots, based on physical appearance alone, could influence the relationship between physical texture and robot personality formation. This work could facilitate the development of robots with perceived personalities that match user expectations.
    Impressions of a robot’s personality can be influenced by the way it looks, sounds, and feels. But now, researchers from Japan have found specific causal relationships between impressions of robot personality and body texture.
    In a study published in Advanced Robotics, researchers from Osaka University and Kanazawa University have revealed that a robot’s physical texture interacts with elements of its appearance in a way that influences impressions of its personality.
    Body texture, such as softness or elasticity, is an important consideration in the design of robots meant for interactive functions. In addition, appearance can modulate whether a person anticipates a robot to be friendly, likable, or capable, among other characteristics.
    However, the ways in which people perceive the physical texture and the personality of robots have only been examined independently. As a result, the relationship between these two factors was unclear, a gap the researchers aimed to address.
    “The mechanisms of impression formation should be quantitatively and systematically investigated,” says lead author of the study Naoki Umeda. “Because various factors contribute to personality impressions, we wanted to investigate how specific robot body properties promote or degrade specific kinds of impressions.”
    To do this, the researchers asked adult participants to view, touch, and evaluate six different inactive robots that were humanoid to varying degrees. The participants were asked to touch the arm of the robots. For each robot, four fake arms had been constructed; these were made of silicone rubber and prepared in such a way that their elasticity varied, thus providing differing touch sensations. The causal relationships between the physical textures of the robot arms and the participant perceptions were then evaluated.
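The study's own statistical pipeline is not described here, but the basic first step of relating a physical variable (such as arm elasticity) to averaged personality ratings can be sketched as follows. All numbers, and the choice of a simple Pearson correlation, are illustrative assumptions rather than the authors' method:

```python
# Toy sketch: relate robot-arm elasticity to averaged personality ratings.
# All numbers below are invented for illustration; the actual study used
# six robots, four silicone arms each, and structured questionnaires.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Elasticity of four hypothetical arms (softer = higher) and mean
# "friendliness" ratings collected after participants touched each arm.
elasticity = [0.2, 0.4, 0.6, 0.8]
friendliness = [2.1, 2.9, 3.8, 4.4]

r = pearson(elasticity, friendliness)
print(f"elasticity vs. friendliness: r = {r:.2f}")
```

A strong positive correlation on real data would be consistent with the texture-to-personality link the study reports; the study itself went further and modeled causal relationships among the measured variables.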
    “The results confirmed our expectations,” explains Hisashi Ishihara, senior author. “We found that the impressions of the personalities of the robots varied according to the texture of the robot arms, and that there were specific relationships among certain parameters.”
    The researchers also found that the first impressions of the robots, made before the participants touched them, could modulate one of the effects.
    “We found that the impression of likability was strengthened when the participant anticipated that the robot would engage in peaceful emotional verbal communication. This suggests that both first impressions and touch sensations are important considerations for social robot designers focused on perceived robot personality,” says Ishihara.
    Given that many robots are designed for physical interaction with humans — for instance those used in therapy or clinical settings — the texture of the robot body is an important consideration. A thorough understanding of the physical factors that influence user impressions of robots will enable researchers to design robots that optimize user comfort. This is especially important for robots employed for advanced communication, because user comfort will influence the quality of communication, and thus the utility of the robotic system.
    Story Source:
    Materials provided by Osaka University. Note: Content may be edited for style and length.

  •

    COVID-19 mobile robot could detect and tackle social distancing breaches

    A new strategy to reduce the spread of COVID-19 employs a mobile robot that detects people in crowds who are not observing social-distancing rules, navigates to them, and encourages them to move apart. Adarsh Jagan Sathyamoorthy of the University of Maryland, College Park, and colleagues present these findings in the open-access journal PLOS ONE on Dec. 1, 2021.
    Previous research has shown that staying at least two meters apart from others can reduce the spread of COVID-19. Technology-based methods — such as strategies using WiFi and Bluetooth — hold promise to help detect and discourage lapses in social distancing. However, many such approaches require participation from individuals or existing infrastructure, so robots have emerged as a potential tool for addressing social distancing in crowds.
    Now, Sathyamoorthy and colleagues have developed a novel way to use an autonomous mobile robot for this purpose. The robot can detect breaches and navigate to them using its own Red Green Blue-Depth (RGB-D) camera and 2-D LiDAR (Light Detection and Ranging) sensor, and can tap into an existing CCTV system, if available. Once it reaches the breach, the robot encourages people to move apart via text that appears on a mounted display.
    The robot uses a novel system to sort people who have breached social distancing rules into different groups, prioritize them according to whether they are standing still or moving, and then navigate to them. This system employs a machine-learning method known as Deep Reinforcement Learning and Frozone, an algorithm previously developed by several of the same researchers to help robots navigate crowds.
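The full pipeline, combining Deep Reinforcement Learning with Frozone, is beyond a short example, but the grouping-and-prioritizing step described above can be illustrated with a minimal sketch. The 2-meter threshold comes from the article; the positions, speeds, and exact priority rule below are invented for illustration:

```python
# Simplified sketch of the breach-handling logic described above:
# 1) find pairs of people closer than the 2 m threshold,
# 2) merge them into breach groups,
# 3) prioritize groups by whether their members are standing still or moving.
# Positions and speeds are invented; the real system estimates them from an
# RGB-D camera, 2-D LiDAR, and optionally CCTV.

from itertools import combinations

THRESHOLD = 2.0  # meters

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def breach_groups(people):
    """Union-find: merge people connected by distances under THRESHOLD."""
    parent = list(range(len(people)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i, j in combinations(range(len(people)), 2):
        if dist(people[i]["pos"], people[j]["pos"]) < THRESHOLD:
            parent[find(i)] = find(j)

    groups = {}
    for i in range(len(people)):
        groups.setdefault(find(i), []).append(i)
    # Only groups of two or more constitute a breach.
    return [g for g in groups.values() if len(g) > 1]

def prioritize(groups, people):
    """Static groups first (easier to reach), then larger groups."""
    def is_static(g):
        return all(people[i]["speed"] < 0.1 for i in g)
    return sorted(groups, key=lambda g: (not is_static(g), -len(g)))

people = [
    {"pos": (0.0, 0.0), "speed": 0.0},    # standing together: breach
    {"pos": (1.0, 0.5), "speed": 0.0},
    {"pos": (10.0, 0.0), "speed": 1.2},   # walking pair: breach
    {"pos": (11.0, 0.3), "speed": 1.1},
    {"pos": (30.0, 30.0), "speed": 0.0},  # isolated person: no breach
]

ordered = prioritize(breach_groups(people), people)
print(ordered)  # → [[0, 1], [2, 3]]: static group handled first
```

In the real system, the ordered groups would then be handed to the navigation stack, which plans a collision-free path through the crowd to each one in turn.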
    The researchers tested their method by having volunteers act out social-distancing breach scenarios while standing still, walking, or moving erratically. Their robot was able to detect and address most of the breaches that occurred, and CCTV enhanced its performance.
    The robot also uses a thermal camera that can detect people with potential fevers, aiding contact-tracing efforts, while incorporating measures to ensure privacy protection and de-identification.
    Further research is needed to validate and refine this method, such as by exploring how the presence of robots impacts people’s behavior in crowds.
    The authors add: “A lot of healthcare workers and security personnel had to put their health at risk to serve the public during the COVID-19 pandemic. Our work’s core objective is to provide them with tools to safely and efficiently serve their communities.”
    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  •

    Engineers create perching bird-like robot

    Like snowflakes, no two branches are alike. They can differ in size, shape and texture; some might be wet or moss-covered or bursting with offshoots. And yet birds can land on just about any of them. This ability was of great interest to the labs of Stanford University engineers Mark Cutkosky and David Lentink — now at University of Groningen in the Netherlands — which have both developed technologies inspired by animal abilities.
    “It’s not easy to mimic how birds fly and perch,” said William Roderick, PhD ’20, who was a graduate student in both labs. “After millions of years of evolution, they make takeoff and landing look so easy, even among all of the complexity and variability of the tree branches you would find in a forest.”
    Years of study on animal-inspired robots in the Cutkosky Lab and on bird-inspired aerial robots in the Lentink Lab enabled the researchers to build their own perching robot, detailed in a paper published Dec. 1 in Science Robotics. When attached to a quadcopter drone, their “stereotyped nature-inspired aerial grasper,” or SNAG, forms a robot that can fly around, catch and carry objects and perch on various surfaces. Showing the potential versatility of this work, the researchers used it to compare different types of bird toe arrangements and to measure microclimates in a remote Oregon forest.
    A bird bot in the forest
    In the researchers’ previous studies of parrotlets — the second smallest parrot species — the diminutive birds flew back and forth between special perches while being recorded by five high-speed cameras. The perches — representing a variety of sizes and materials, including wood, foam, sandpaper and Teflon — also contained sensors that captured the physical forces associated with the birds’ landings, perching and takeoff.
    “What surprised us was that they did the same aerial maneuvers, no matter what surfaces they were landing on,” said Roderick, who is lead author of the paper. “They let the feet handle the variability and complexity of the surface texture itself.” This formulaic behavior seen in every bird landing is why the “S” in SNAG stands for “stereotyped.”
    Just like the parrotlets, SNAG approaches every landing in the same way. But, in order to account for the size of the quadcopter, SNAG is based on the legs of a peregrine falcon. In place of bones, it has a 3D-printed structure — which took 20 iterations to perfect — and motors and fishing line stand in for muscles and tendons.

  •

    Record-breaking simulations of large-scale structure formation in the Universe

    Current simulations of cosmic structure formation do not accurately reproduce the properties of ghost-like particles called neutrinos that have been present in the Universe since its beginning. But now, a research team from Japan has devised an approach that solves this problem.
    In a study published this month in SC ’21: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, researchers at the University of Tsukuba, Kyoto University, and the University of Tokyo report simulations that precisely follow the dynamics of such cosmic relic neutrinos. This study was selected as a finalist for the 2021 ACM Gordon Bell Prize, which recognizes outstanding achievement in high-performance computing.
    Neutrinos are much lighter than all other known particles, but their exact mass remains a mystery. Measuring this mass could help scientists develop theories that go beyond the standard model of particle physics and test explanations for how the Universe evolved. One promising way to pin down this mass is to study the impact of cosmic relic neutrinos on large-scale structure formation using simulations and compare the results with observations. But these simulations need to be extremely accurate.
    “Standard simulations use techniques known as particle-based N-body methods, which have two main drawbacks when it comes to massive neutrinos,” explains Dr. Naoki Yoshida, Principal Investigator at the Kavli Institute for the Physics and Mathematics of the Universe, the University of Tokyo. “First, the simulation results are susceptible to random fluctuations called shot noise. And second, these particle-based methods cannot accurately reproduce collisionless damping — a key process in which fast-moving neutrinos suppress the growth of structure in the Universe.”
    To avoid these issues, the researchers followed the dynamics of the massive neutrinos by directly solving a central equation in plasma physics known as the Vlasov equation. Unlike previous studies, they solved this equation in full six-dimensional phase space, which means that all six dimensions associated with space and velocity were considered. The team coupled this Vlasov simulation with a particle-based N-body simulation of cold dark matter — the main component of matter in the Universe. They performed their hybrid simulations on the supercomputer Fugaku at the RIKEN Center for Computational Science.
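For reference, the collisionless dynamics being solved are governed by the Vlasov (collisionless Boltzmann) equation for the neutrino phase-space distribution f(x, v, t); written schematically here, omitting the expansion factors that appear in the full comoving-coordinate form used in cosmology:

```latex
\frac{\partial f}{\partial t}
  + \mathbf{v} \cdot \nabla_{\mathbf{x}} f
  - \nabla_{\mathbf{x}} \phi \cdot \nabla_{\mathbf{v}} f = 0
```

where φ is the gravitational potential sourced by both the neutrinos and the cold dark matter through the Poisson equation. Discretizing f on a grid over all six (x, v) dimensions, rather than sampling it with particles, is what lets the method avoid shot noise and capture collisionless damping.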
    “Our largest simulation self-consistently combines the Vlasov simulation on 400 trillion grids with 330 billion-body calculations, and it accurately reproduces the complex dynamics of cosmic neutrinos,” says lead author of the study, Professor Koji Yoshikawa. “Moreover, the time-to-solution for our simulation is substantially shorter than that for the largest N-body simulations, and the performance scales extremely well with up to 147,456 nodes (7 million CPU cores) on Fugaku.”
    In addition to helping determine the neutrino mass, the researchers suggest that their scheme could be used to study, for example, phenomena involving electrostatic and magnetized plasma and self-gravitating systems.
    Story Source:
    Materials provided by University of Tsukuba. Note: Content may be edited for style and length.

  •

    Thriving in non-equilibrium

    Equilibrium may be hard to achieve in our lives, but it is the standard state of nature.
    From the perspective of chemistry and physics, equilibrium is a bit dull — at least to Cheng-Chien Chen, assistant professor of physics at the University of Alabama at Birmingham. His research tries to engineer new states of matter and to control them by probing the possibilities of non-equilibrium.
    “One of our main goals is to see if, when we drive the electron system to non-equilibrium, we can stabilize new phases that are absent in equilibrium, but that can become dominant at non-equilibrium,” Chen said. “This is one of the holy grails in non-equilibrium studies.”
    Recently, with support from the National Science Foundation (NSF), Chen has been studying the effects of pump-probe spectroscopy, which uses ultrashort laser pulses to excite (pump) the electrons in a sample, generating a non-equilibrium state, while a weaker beam (probe) monitors the pump-induced changes.
    Chen’s theoretical work suggests it is possible to generate superconductivity at higher temperatures than previously possible using this method, opening the door to revolutionary new electronics and energy devices.
    Writing in Physical Review Letters in 2018, Chen and collaborator Yao Wang from Clemson University showed that it was possible to generate d-wave superconductivity and make it the dominant phase using pump-probe systems.

  •

    Deep learning dreams up new protein structures

    Just as convincing images of cats can be created using artificial intelligence, new proteins can now be made using similar tools. In a report in Nature, researchers describe the development of a neural network that “hallucinates” proteins with new, stable structures.
    Proteins, which are string-like molecules found in every cell, spontaneously fold into intricate three-dimensional shapes. These folded shapes are key to nearly every biological process, including cellular development, DNA repair, and metabolism. But the complexity of protein shapes makes them difficult to study. Biochemists often use computers to predict how protein strings, or sequences, might fold. In recent years, deep learning has revolutionized the accuracy of this work.
    “For this project, we made up completely random protein sequences and introduced mutations into them until our neural network predicted that they would fold into stable structures,” said co-lead author Ivan Anishchenko, an acting instructor of biochemistry at the University of Washington School of Medicine and a researcher in David Baker’s laboratory at the UW Medicine Institute for Protein Design.
    “At no point did we guide the software toward a particular outcome,” Anishchenko said. “These new proteins are just what a computer dreams up.”
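The procedure Anishchenko describes, mutating a random sequence and keeping changes that the network judges more likely to fold, is essentially a greedy stochastic search. A minimal sketch, with a toy stand-in for the neural network's fold-confidence score (the real work scored sequences with a deep network trained on known structures):

```python
# Minimal sketch of sequence "hallucination": start from a random amino-acid
# sequence and accept point mutations that raise a fold-confidence score.
# `fold_confidence` is a toy stand-in for the neural network used in the
# actual work; everything here is illustrative, not the authors' code.

import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def fold_confidence(seq):
    """Toy score rewarding hydrophobic residues at even positions.
    Placeholder for the network's predicted-structure confidence."""
    hydrophobic = set("AVILMFWY")
    return sum(1 for i, aa in enumerate(seq) if (i % 2 == 0) == (aa in hydrophobic))

def hallucinate(length=30, steps=2000, seed=0):
    rng = random.Random(seed)
    seq = [rng.choice(AMINO_ACIDS) for _ in range(length)]
    score = fold_confidence(seq)
    for _ in range(steps):
        pos = rng.randrange(length)
        old = seq[pos]
        seq[pos] = rng.choice(AMINO_ACIDS)  # random point mutation
        new_score = fold_confidence(seq)
        if new_score >= score:
            score = new_score               # keep the improving mutation
        else:
            seq[pos] = old                  # revert it
    return "".join(seq), score

seq, score = hallucinate()
print(score)  # maximum possible is 30 for this toy objective
```

The same loop structure applies when the scoring function is a real structure-prediction network; the search then drives the sequence toward folds the network considers stable, with no target structure specified in advance.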
    In the future, the team believes it should be possible to steer the artificial intelligence so that it generates new proteins with useful features.
    “We’d like to use deep learning to design proteins with function, including protein-based drugs, enzymes, you name it,” said co-lead author Sam Pellock, a postdoctoral scholar in the Baker lab.
    The research team, which included scientists from UW Medicine, Harvard University, and Rensselaer Polytechnic Institute (RPI), generated two thousand new protein sequences that were predicted to fold. Over 100 of these were produced in the laboratory and studied. Detailed analysis on three such proteins confirmed that the shapes predicted by the computer were indeed realized in the lab.
    “Our NMR [nuclear magnetic resonance] studies, along with X-ray crystal structures determined by the University of Washington team, demonstrate the remarkable accuracy of protein designs created by the hallucination approach,” said co-author Theresa Ramelot, a senior research scientist at RPI in Troy, New York.
    Gaetano Montelione, a co-author and professor of chemistry and chemical biology at RPI, noted: “The hallucination approach builds on observations we made together with the Baker lab revealing that protein structure prediction with deep learning can be quite accurate even for a single protein sequence with no natural relatives. The potential to hallucinate brand new proteins that bind particular biomolecules or form desired enzymatic active sites is very exciting.”
    “This approach greatly simplifies protein design,” said senior author David Baker, a professor of biochemistry at the UW School of Medicine who received a 2021 Breakthrough Prize in Life Sciences. “Before, to create a new protein with a particular shape, people first carefully studied related structures in nature to come up with a set of rules that were then applied in the design process. New sets of rules were needed for each new type of fold. Here, by using a deep-learning network that already captures general principles of protein structure, we eliminate the need for fold-specific rules and open up the possibility of focusing on just the functional parts of a protein directly.”
    “Exploring how to best use this strategy for specific applications is now an active area of research, and this is where I expect the next breakthroughs,” said Baker.
    Funding was provided by the National Science Foundation, National Institutes of Health, Department of Energy, Open Philanthropy, Eric and Wendy Schmidt by recommendation of the Schmidt Futures program, Audacious Project, Washington Research Foundation, Novo Nordisk Foundation, and Howard Hughes Medical Institute. The authors also acknowledge computing resources from the University of Washington and Rosetta@Home volunteers.

  •

    Machine learning helps mathematicians make new connections

    For the first time, mathematicians have partnered with artificial intelligence to suggest and prove new mathematical theorems. The work was done in a collaboration between the University of Oxford, the University of Sydney in Australia and DeepMind, Google’s artificial intelligence sister company.
    While computers have long been used to generate data for mathematicians, the task of identifying interesting patterns has relied mainly on the intuition of the mathematicians themselves. However, it’s now possible to generate more data than any mathematician can reasonably expect to study in a lifetime, which is where machine learning comes in.
    A paper, published today in Nature, describes how DeepMind was set the task of discerning patterns and connections in the fields of knot theory and representation theory. To the surprise of the mathematicians, new connections were suggested; the mathematicians were then able to examine these connections and prove the conjecture suggested by the AI. These results suggest that machine learning can complement mathematical research, guiding intuition about a problem.
    Using the patterns identified by machine learning, mathematicians from the University of Oxford discovered a surprising connection between algebraic and geometric invariants of knots, establishing a completely new theorem in the field. Mathematicians at the University of Sydney, meanwhile, used the connections made by the AI to come close to proving an old conjecture about Kazhdan-Lusztig polynomials that has been unsolved for 40 years.
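The article gives no implementation details, but the overall workflow, fitting a model to invariant data and then inspecting it for structure a human can turn into a conjecture, can be caricatured with a toy example. The invariant names, the planted linear relation, and the simple least-squares model below are all invented for illustration; the actual work used real knot invariants and a neural network:

```python
# Toy caricature of the workflow: fit a model to synthetic "knot invariant"
# data, then inspect the fit to suggest a relationship worth proving.
# The planted relation (signature ~ 2 * slope) is invented for illustration.

import random

rng = random.Random(42)

# Synthetic dataset: two "geometric" invariants per knot, and an "algebraic"
# invariant secretly driven almost entirely by the first one.
slopes = [rng.uniform(-5, 5) for _ in range(200)]
volumes = [rng.uniform(2, 20) for _ in range(200)]
signature = [2.0 * s + rng.gauss(0, 0.1) for s in slopes]

def fit_two_features(x1, x2, y):
    """Least-squares fit y ~ a*x1 + b*x2 via the 2x2 normal equations."""
    s11 = sum(v * v for v in x1)
    s22 = sum(v * v for v in x2)
    s12 = sum(u * v for u, v in zip(x1, x2))
    t1 = sum(u * v for u, v in zip(x1, y))
    t2 = sum(u * v for u, v in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return (s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det

a, b = fit_two_features(slopes, volumes, signature)
# A large |a| and near-zero |b| prompt the human step: conjecture that the
# signature depends on the slope, then try to prove it rigorously.
print(f"signature ~ {a:.2f}*slope + {b:.3f}*volume")
```

The human role is the crucial last step: the model only flags which quantities appear related, and the mathematicians must formulate the precise conjecture and supply the proof.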
    Professor Andras Juhasz, of the Mathematical Institute at the University of Oxford and co-author on the paper, said: ‘Pure mathematicians work by formulating conjectures and proving these, resulting in theorems. But where do the conjectures come from?
    ‘We have demonstrated that, when guided by mathematical intuition, machine learning provides a powerful framework that can uncover interesting and provable conjectures in areas where a large amount of data is available, or where the objects are too large to study with classical methods.’
    Professor Marc Lackenby, of the Mathematical Institute at the University of Oxford and co-author, said: ‘It has been fascinating to use machine learning to discover new and unexpected connections between different areas of mathematics. I believe that the work that we have done in Oxford and in Sydney in collaboration with DeepMind demonstrates that machine learning can be a genuinely useful tool in mathematical research.’
    Professor Geordie Williamson, Professor of Mathematics at the University of Sydney and director of the Sydney Mathematical Research Institute and co-author, said: ‘AI is an extraordinary tool. This work is one of the first times it has demonstrated its usefulness for pure mathematicians, like me.
    ‘Intuition can take us a long way, but AI can help us find connections the human mind might not always easily spot.’
    Story Source:
    Materials provided by University of Oxford. Note: Content may be edited for style and length.