More stories


    'Self-aware' materials build the foundation for living structures

    From the biggest bridges to the smallest medical implants, sensors are everywhere, and for good reason: The ability to sense and monitor changes before they become problems can be both cost-saving and life-saving.
    To better address these potential threats, the Intelligent Structural Monitoring and Response Testing (iSMaRT) Lab at the University of Pittsburgh Swanson School of Engineering has designed a new class of materials that are both sensing media and nanogenerators, and are poised to revolutionize multifunctional material technology, big and small.
    The research, recently published in Nano Energy, describes a new metamaterial system that acts as its own sensor, recording and relaying important information about the pressure and stresses on its structure. The so-called “self-aware metamaterial” generates its own power and can be used for a wide array of sensing and monitoring applications.
    The most innovative facet of the work is its scalability: the same design works at both nanoscale and megascale simply by tailoring the design geometry.
    “There is no doubt that next-generation materials need to be multifunctional, adaptive and tunable,” said Amir Alavi, assistant professor of civil and environmental engineering and bioengineering, who leads the iSMaRT Lab. “You can’t achieve these features with natural materials alone — you need hybrid or composite material systems in which each constituent layer offers its own functionality. The self-aware metamaterial systems that we’ve invented can offer these characteristics by fusing advanced metamaterial and energy harvesting technologies at multiscale, whether it’s a medical stent, shock absorber or an airplane wing.”
    While nearly all existing self-sensing materials are composites that rely on various forms of carbon fiber as sensing modules, this new concept offers a completely different, yet efficient, approach to creating sensor and nanogenerator material systems: performance-tailored design and assembly of material microstructures.


    Harmonious electronic structure leads to enhanced quantum materials

    The electronic structure of metallic materials determines the behavior of electron transport. Magnetic Weyl semimetals have a unique topological electronic structure — the electron’s motion is dynamically linked to its spin. These Weyl semimetals have emerged as some of the most exciting quantum materials, allowing for dissipationless transport, low-power operation, and exotic topological fields that can accelerate the motion of the electrons in new directions. The compounds Co3Sn2S2 and Co2MnGa [1-4], recently discovered by the Felser group, have shown some of the most prominent effects due to a set of two topological bands.
    Researchers at the Max Planck Institute for Chemical Physics of Solids in Dresden, the University of South Florida in the USA, and co-workers have discovered a new mechanism in magnetic compounds that couples multiple topological bands. The coupling can significantly enhance the effects of quantum phenomena. One such effect is the anomalous Hall effect that arises with spontaneous symmetry breaking time-reversal fields that cause a transverse acceleration to electron currents. The effects observed and predicted in single crystals of Co3Sn2S2 and Co2MnGa display a sizable increase compared to conventional magnets.
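    For readers who want the underlying quantity: the intrinsic anomalous Hall conductivity discussed here is conventionally computed as a Brillouin-zone integral of the Berry curvature of the occupied bands (a textbook expression, not spelled out in the article):

    ```latex
    \sigma_{xy}^{z} = -\frac{e^{2}}{\hbar} \int_{\mathrm{BZ}} \frac{d^{3}k}{(2\pi)^{3}} \sum_{n} f_{n}(\mathbf{k})\, \Omega_{n}^{z}(\mathbf{k})
    ```

    where f_n is the occupation of band n and Ω_n^z its Berry curvature. Topological bands of the same chirality contribute Berry curvature of the same sign, which is why coupling several of them enhances the effect.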
    In the current publication, we explored the compounds XPt3, for which we predicted an anomalous Hall effect nearly twice the size of that in the previous compounds. The large effect is due to sets of entangled topological bands with the same chirality that synergistically accelerate charged particles. Interestingly, the chirality of the bands couples to the magnetization direction and determines the direction of the acceleration of the charged particles. This chirality can be altered by chemical substitution. Our theoretical results show the maximum effect for CrPt3, while MnPt3 significantly reduces the effect due to a change in the order of the chiral bands.
    Advanced thin films of CrPt3 were grown at the Max Planck Institute. We found in various films a pristine anomalous Hall effect, robust against disorder and variation of temperature. This result strongly indicates that the topological character dominates even at finite temperatures. The effect is nearly twice as large as any intrinsic effect previously measured in thin films. The advantage of thin films is the ease of integration into quantum devices with an interplay of other degrees of freedom, such as charge, spin, and heat. With such a strong response, XPt3 films show possible utilization for Hall sensors, charge-to-spin conversion in electronic devices, and charge-to-heat conversion in thermoelectric devices.
    [1] Enke Liu et al., Nat. Phys. 14, 1125 (2018).
    [2] Kaustuv Manna et al., Phys. Rev. X 8, 041045 (2018).
    [3] D. F. Liu, et al. Science 365, 1282-1285 (2019).
    [4] Noam Morali et al. Science 365, 1286-1291 (2019).
    [5] Anastasios Markou et al., Commun. Phys. 4, 104 (2021).
    Story Source:
    Materials provided by Max Planck Institute for Chemical Physics of Solids. Note: Content may be edited for style and length.


    How AI could alert firefighters of imminent danger

    Firefighting is a race against time. Exactly how much time? For firefighters, that part is often unclear. Building fires can turn from bad to deadly in an instant, and the warning signs are frequently difficult to discern amid the mayhem of an inferno.
    Seeking to remove this major blind spot, researchers at the National Institute of Standards and Technology (NIST) have developed P-Flash, or the Prediction Model for Flashover. The artificial-intelligence-powered tool was designed to predict and warn of a deadly phenomenon in burning buildings known as flashover, when flammable materials in a room ignite almost simultaneously, producing a blaze only limited in size by available oxygen. The tool’s predictions are based on temperature data from a building’s heat detectors, and, remarkably, it is designed to operate even after heat detectors begin to fail, making do with the remaining devices.
    The team tested P-Flash’s ability to predict imminent flashovers in over a thousand simulated fires and more than a dozen real-world fires. The research, just published in the Proceedings of the AAAI Conference on Artificial Intelligence, suggests the model shows promise in anticipating simulated flashovers; the real-world data also helped the researchers identify an unmodeled physical phenomenon that, if addressed, could improve the tool’s forecasting in actual fires. With further development, P-Flash could enhance the ability of firefighters to hone their real-time tactics, helping them save building occupants as well as themselves.
    Flashovers are so dangerous in part because it’s challenging to see them coming. There are indicators to watch, such as increasingly intense heat or flames rolling across the ceiling. However, these signs can be easy to miss in many situations, such as when a firefighter is searching for trapped victims with heavy equipment in tow and smoke obscuring the view. And from the outside, as firefighters approach a scene, the conditions inside are even less clear.
    “I don’t think the fire service has many tools technology-wise that predict flashover at the scene,” said NIST researcher Christopher Brown, who also serves as a volunteer firefighter. “Our biggest tool is just observation, and that can be very deceiving. Things look one way on the outside, and when you get inside, it could be quite different.”
    Computer models that predict flashover based on temperature are not entirely new, but until now, they have relied on constant streams of temperature data, which are obtainable in a lab but not guaranteed during a real fire.
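    As a rough illustration of the kind of logic involved (emphatically not NIST's actual P-Flash model), here is a minimal sketch that projects the hot-gas temperature forward from whichever heat detectors are still reporting, and warns when a commonly used flashover criterion, an upper-layer temperature of about 600 °C, would be crossed:

    ```python
    # Illustrative sketch only: flashover warning from partial detector data.
    # The 600 °C threshold is a standard flashover criterion; the linear
    # extrapolation and all parameter names are assumptions for this example.

    def predict_flashover(history, lead_time_s=30.0, dt_s=5.0, threshold_c=600.0):
        """history: snapshots taken every dt_s seconds; each snapshot is a
        list of detector readings in deg C, with None for failed detectors."""
        # Average over the detectors that are still alive at each time step.
        avg = []
        for snap in history:
            alive = [t for t in snap if t is not None]
            if alive:
                avg.append(sum(alive) / len(alive))
        if len(avg) < 2:
            return False
        # Linear extrapolation from the last two averaged readings.
        rate = (avg[-1] - avg[-2]) / dt_s          # deg C per second
        projected = avg[-1] + rate * lead_time_s
        return projected >= threshold_c

    # Detector 2 fails (None) partway through; the prediction uses the rest.
    history = [[200, 210, 205], [300, None, 310], [450, None, 460]]
    ```

    A real model would learn the temperature dynamics from thousands of simulated fires rather than extrapolate linearly; the point here is only the graceful handling of failed detectors.
    
    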


    A new direction of topological research is ready for takeoff

    In a joint effort, ct.qmat scientists from Dresden, Rostock, and Würzburg have realized non-Hermitian topological states of matter in topolectric circuits. The name, a blend of "topological" and "electric," refers to the realization of synthetic topological matter in electric circuit networks. The main appeal of topological matter is its ability to host particularly stable and robust features immune to local perturbations, which could be a pivotal ingredient for future quantum technologies. The current ct.qmat results promise a knowledge transfer from electric circuits to alternative optical platforms, and have just been published in Physical Review Letters.
    Topological defect tuning in non-Hermitian systems
    At the center of the reported work is the circuit realization of parity-time (PT) symmetry, which has previously been studied intensely in optics. The ct.qmat team employed PT symmetry so that the open circuit system, with balanced gain and loss, still shares a large number of features with an isolated system. This is a core insight for designing topological defect states in a setting where dissipation and amplification compensate each other, and it is accomplished through non-Hermitian PT topolectric circuits.
    Potential paradigm change in synthetic topological matter
    “This research project has enabled us to create a joint team effort between all locations of the Cluster of Excellence ct.qmat towards topological matter. Topolectric circuits create an experimental and theoretical inspiration for new avenues of topological matter, and might have a particular bearing on future applications in photonics. The flexibility, cost-efficiency, and versatility of topolectric circuits are unprecedented, and might constitute a paradigm change in the field of synthetic topological matter,” summarizes the Würzburg scientist and study director Ronny Thomale.
    Next stop: applications
    Having built a one-dimensional version of a PT-symmetric topolectric circuit with a linear dimension of 30 unit cells, the research team's next step towards technology is to take on PT-symmetric circuits in two dimensions, comprising about 1000 coupled circuit unit cells. Eventually, the insight gained through topolectric circuits may mark a milestone on the way to light-controlled computers, which would be much faster and more energy-efficient than today’s electron-controlled models.
    People involved
    Besides the cluster members based at Julius-Maximilians-Universität Würzburg (JMU) and the Leibniz Institute for Solid State and Materials Research Dresden (IFW), the group of Professor Alexander Szameit from the University of Rostock is also involved in the publication. The Cluster of Excellence ct.qmat cooperates with Szameit’s group in the field of topological photonics.
    Story Source:
    Materials provided by University of Würzburg. Original written by Katja Lesser. Note: Content may be edited for style and length.


    Researchers fine-tune control over AI image generation

    Researchers from North Carolina State University have developed a new state-of-the-art method for controlling how artificial intelligence (AI) systems create images. The work has applications for fields from autonomous robotics to AI training.
    At issue is a type of AI task called conditional image generation, in which AI systems create images that meet a specific set of conditions. For example, a system could be trained to create original images of cats or dogs, depending on which animal the user requested. More recent techniques have built on this to incorporate conditions regarding an image layout. This allows users to specify which types of objects they want to appear in particular places on the screen. For example, the sky might go in one box, a tree might be in another box, a stream might be in a separate box, and so on.
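    A layout condition of this kind is easy to picture as data. The sketch below (hypothetical names, not the NC State code) rasterizes a list of labeled boxes into a coarse per-cell label grid, the sort of conditioning input a layout-to-image generator can be trained on:

    ```python
    # Hypothetical helper: turn a box layout into a coarse label grid.
    # All names and the grid representation are assumptions for illustration.

    def layout_to_mask(layout, height, width, background="background"):
        """layout: list of (label, x0, y0, x1, y1) boxes, with coordinates
        given as fractions of the canvas. Later boxes overwrite earlier ones."""
        mask = [[background] * width for _ in range(height)]
        for label, x0, y0, x1, y1 in layout:
            for row in range(int(y0 * height), int(y1 * height)):
                for col in range(int(x0 * width), int(x1 * width)):
                    mask[row][col] = label
        return mask

    # "The sky might go in one box, a tree might be in another box":
    layout = [("sky", 0.0, 0.0, 1.0, 0.4), ("tree", 0.1, 0.4, 0.3, 1.0)]
    mask = layout_to_mask(layout, 10, 10)
    ```

    A generator conditioned on such a grid learns to place a plausible object of each class inside its box, which is what lets users dictate where things appear.
    
    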
    The new work builds on those techniques to give users more control over the resulting images, and to retain certain characteristics across a series of images.
    “Our approach is highly reconfigurable,” says Tianfu Wu, co-author of a paper on the work and an assistant professor of computer engineering at NC State. “Like previous approaches, ours allows users to have the system generate an image based on a specific set of conditions. But ours also allows you to retain that image and add to it. For example, users could have the AI create a mountain scene. The users could then have the system add skiers to that scene.”
    In addition, the new approach allows users to have the AI manipulate specific elements so that they are identifiably the same, but have moved or changed in some way. For example, the AI might create a series of images showing skiers turn toward the viewer as they move across the landscape.
    “One application for this would be to help autonomous robots ‘imagine’ what the end result might look like before they begin a given task,” Wu says. “You could also use the system to generate images for AI training. So, instead of compiling images from external sources, you could use this system to create images for training other AI systems.”
    The researchers tested their new approach using the COCO-Stuff dataset and the Visual Genome dataset. Based on standard measures of image quality, the new approach outperformed the previous state-of-the-art image creation techniques.
    “Our next step is to see if we can extend this work to video and three-dimensional images,” Wu says.
    Training for the new approach requires a fair amount of computational power; the researchers used a 4-GPU workstation. However, deploying the system is less computationally expensive.
    “We found that one GPU gives you almost real-time speed,” Wu says.
    “In addition to our paper, we’ve made our source code for this approach available on GitHub. That said, we’re always open to collaborating with industry partners.”
    The work was supported by the National Science Foundation, under grants 1909644, 1822477, 2024688 and 2013451; by the U.S. Army Research Office, under grant W911NF1810295; and by the Administration for Community Living, under grant 90IFDV0017-01-00.
    Story Source:
    Materials provided by North Carolina State University. Note: Content may be edited for style and length.


    Turbulence in interstellar gas clouds reveals multi-fractal structures

    In interstellar dust clouds, turbulence must first dissipate before a star can form through gravity. A German-French research team has now discovered that the kinetic energy of the turbulence comes to rest in a region that is very small on cosmic scales, ranging from one to several light-years in extent. The group also arrived at new results concerning the mathematical description: previously, the turbulent structure of the interstellar medium was described as self-similar, or fractal. The researchers found that it is not enough to describe the structure as a single fractal, a self-similar structure like the Mandelbrot set; instead, several different fractals, so-called multifractals, must be combined. The new methods can thus be used to resolve and represent structural changes in astronomical images in detail. Applications in other scientific fields, such as atmospheric research, are also possible.
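    The multifractal idea can be made concrete with a box-counting sketch. The code below (an illustration, not the GENESIS pipeline) estimates the generalized Rényi dimension D_q of a 2D measure from two box scales. For a monofractal, such as a uniform measure, D_q comes out the same for every q, while a multifractal yields a whole spectrum of dimensions:

    ```python
    # Box-counting estimate of generalized (Renyi) dimensions D_q.
    # The two-scale slope and the toy uniform measure are illustrative choices.
    import math

    def partition_sum(weights, n, box, q):
        """Z(q, box): sum of box-masses p^q when an n x n grid of cell
        weights is coarse-grained into box x box blocks."""
        total = sum(weights.values())
        z = 0.0
        for bi in range(0, n, box):
            for bj in range(0, n, box):
                p = sum(weights.get((i, j), 0.0)
                        for i in range(bi, bi + box)
                        for j in range(bj, bj + box)) / total
                if p > 0:
                    z += p ** q
        return z

    def d_q(weights, n, q, s1=2, s2=4):
        """D_q from the slope of log Z(q, s) vs log s between two scales.
        (q = 1 needs a separate limiting formula and is avoided here.)"""
        z1 = partition_sum(weights, n, s1, q)
        z2 = partition_sum(weights, n, s2, q)
        return (math.log(z1) - math.log(z2)) / ((q - 1) * (math.log(s1) - math.log(s2)))

    # A uniform measure on an 8 x 8 grid is monofractal: D_q = 2 for all q.
    uniform = {(i, j): 1.0 for i in range(8) for j in range(8)}
    ```

    Describing interstellar cloud maps required not one such dimension but a whole q-dependent family, which is the sense in which the structure is multifractal.
    
    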
    The German-French programme GENESIS (Generation of Structures in the Interstellar Medium) is a cooperation between the University of Cologne’s Institute for Astrophysics, LAB at the University of Bordeaux and Geostat/INRIA Institute Bordeaux. In a highlight publication of the journal Astronomy & Astrophysics, the research team presents the new mathematical methods to characterize turbulence using the example of the Musca molecular cloud in the constellation of Musca.
    Stars form in huge interstellar clouds composed mainly of molecular hydrogen — the energy reservoir of all stars. This material has a low density, only a few thousand to several tens of thousands of particles per cubic centimetre, but a very complex structure with condensations in the form of ‘clumps’ and ‘filaments’, and eventually ‘cores’ from which stars form by gravitational collapse of the matter.
    The spatial structure of the gas in and around clouds is determined by many physical processes, one of the most important of which is interstellar turbulence. This arises when energy is transferred from large scales, such as galactic density waves or supernova explosions, to smaller scales. Turbulence is known from flows in which a liquid or gas is ‘stirred’, but can also form vortices and exhibit brief periods of chaotic behaviour, called intermittency. However, for a star to form, the gas must come to rest, i.e., the kinetic energy must dissipate. After that, gravity can exert enough force to pull the hydrogen clouds together and form a star. Thus, it is important to understand and mathematically describe the energy cascade and the associated structural change.
    Story Source:
    Materials provided by University of Cologne. Note: Content may be edited for style and length.


    The role of computer voice in the future of speech-based human-computer interaction

    In the modern day, our interactions with voice-based devices and services continue to increase. In this light, researchers at Tokyo Institute of Technology and RIKEN, Japan, have performed a meta-synthesis to understand how we perceive and interact with the voice (and the body) of various machines. Their findings have generated insights into human preferences, and can be used by engineers and designers to develop future vocal technologies.
    As humans, we primarily communicate vocally and aurally. We convey not just linguistic information, but also the complexities of our emotional states and personalities. Aspects of our voice such as tone, rhythm, and pitch are vital to the way we are perceived. In other words, the way we say things matters.
    With advances in technology and the introduction of social robots, conversational agents, and voice assistants into our lives, we are expanding our interactions to include computer agents, interfaces, and environments. Research on these technologies can be found across the fields of human-agent interaction (HAI), human-robot interaction (HRI), human-computer interaction (HCI), and human-machine communication (HMC), depending on the kind of technology under study. Many studies have analyzed the impact of computer voice on user perception and interaction. However, these studies are spread across different types of technologies and user groups and focus on different aspects of voice.
    In this regard, a group of researchers from Tokyo Institute of Technology (Tokyo Tech), Japan, RIKEN Center for Advanced Intelligence Project (AIP), Japan, and gDial Inc., Canada, have now compiled findings from several studies in these fields with the intention to provide a framework that can guide future design and research on computer voice. As lead researcher Associate Professor Katie Seaborn from Tokyo Tech (Visiting Researcher and former Postdoctoral Researcher at RIKEN AIP) explains, “Voice assistants, smart speakers, vehicles that can speak to us, and social robots are already here. We need to know how best to design these technologies to work with us, live with us, and match our needs and desires. We also need to know how they have influenced our attitudes and behaviors, especially in subtle and unseen ways.”
    The team’s survey considered peer-reviewed journal papers and proceedings-based conference papers where the focus was on the user perception of agent voice. The source materials encompassed a wide variety of agent, interface, and environment types and technologies, with the majority being “bodyless” computer voices, computer agents, and social robots. Most of the user responses documented were from university students and adults. From these papers, the researchers were able to observe and map patterns and draw conclusions regarding the perceptions of agent voice in a variety of interaction contexts.
    The results showed that users anthropomorphized the agents that they interacted with and preferred interactions with agents that matched their personality and speaking style. There was a preference for human voices over synthetic ones. The inclusion of vocal fillers such as the use of pauses and terms like “I mean…” and “um” improved the interaction. In general, the survey found that people preferred human-like, happy, empathetic voices with higher pitches. However, these preferences were not static; for instance, user preference for voice gender changed over time from masculine voices to more feminine ones. Based on these findings, the researchers were able to formulate a high-level framework to classify different types of interactions across various computer-based technologies.
    The researchers also considered the effect of the body, or morphology and form factor, of the agent, which could take the form of a virtual or physical character, display or interface, or even an object or environment. They found that users tended to perceive agents better when the agents were embodied and when the voice “matched” the body of the agent.
    The field of human-computer interaction, particularly that of voice-based interaction, is a burgeoning one that continues to evolve almost daily. As such, the team’s survey provides an essential starting point for the study and creation of new and existing technologies in voice-based human-agent interaction (vHAI). “The research agenda that emerged from this work is expected to guide how voice-based agents, interfaces, systems, spaces, and experiences are developed and studied in the years to come,” Prof. Seaborn concludes, summing up the importance of their findings.


    Candy-like models used to make STEM accessible to visually impaired students

    About 36 million people are blind, including 1 million children; an additional 216 million people experience moderate to severe visual impairment. Yet STEM (science, technology, engineering and math) education continues to rely on three-dimensional imagery, most of which is inaccessible to students with blindness. A breakthrough study by Bryan Shaw, Ph.D., professor of chemistry and biochemistry at Baylor University, aims to make science more accessible to people who are blind or visually impaired through small, candy-like models.
    The Baylor-led study, published May 28 in the journal Science Advances, uses millimeter-scale gelatin models — similar to gummy bears — to improve visualization of protein molecules using oral stereognosis, or visualization of 3D shapes via the tongue and lips. The goal of the study was to create smaller, more practical tactile models of 3D imagery depicting protein molecules. The protein molecules were selected because their structures are some of the most numerous, complex and high-resolution 3D images presented throughout STEM education.
    “Your tongue is your finest tactile sensor — about twice as sensitive as the fingertips — but it is also a hydrostat, similar to an octopus arm. It can wiggle into grooves that your fingers won’t touch, but nobody really uses the tongue or lips in tactile learning. We thought to make very small, high-resolution 3D models, and visualize them by mouth,” Shaw said.
    The study included 396 participants in total: 31 fourth- and fifth-graders and 365 college students. Participants were tested on how well they could identify specific structures by mouth, by hand and by eyesight. All students were blindfolded during the oral and manual tactile model testing.
    Each participant was given three minutes to assess or visualize the structure of a study protein with their fingertips, followed by one minute with a test protein. After the four minutes, they were asked whether the test protein was the same or a different model than the initial study protein. The entire process was repeated using the mouth to discern shape instead of the fingers.
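    Scoring such a same/different task is straightforward; the sketch below (with made-up responses, not the study's data) computes recognition accuracy from a list of trials:

    ```python
    # Illustrative scoring of a same/different recognition task.
    # The trial data here are invented for the example.

    def accuracy(trials):
        """trials: list of (was_same_model, answered_same) booleans.
        A trial is correct when the answer matches the ground truth."""
        correct = sum(1 for truth, answer in trials if truth == answer)
        return correct / len(trials)

    # Four hypothetical trials, one of them answered incorrectly.
    trials = [(True, True), (False, False), (False, True), (True, True)]
    ```

    Accuracies like the 85.59% figure reported below are aggregates of exactly this kind of per-trial comparison, computed separately for the mouth, hand and eyesight conditions.
    
    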
    Students recognized structures by mouth with 85.59% accuracy, similar to recognition by eyesight using computer animation. Testing involved both identical edible gelatin models and nonedible 3D-printed models; the gelatin models were correctly identified at rates comparable to the nonedible ones.