More stories

  •

    A new direction of topological research is ready for takeoff

    In a joint effort, ct.qmat scientists from Dresden, Rostock, and Würzburg have realized non-Hermitian topological states of matter in topolectric circuits. The term is a blend of “topological” and “electric,” naming the realization of synthetic topological matter in electric circuit networks. The hallmark of topological matter is that it hosts particularly stable and robust features immune to local perturbations, which could be a pivotal ingredient for future quantum technologies. The current ct.qmat results promise a transfer of knowledge from electric circuits to alternative optical platforms, and have just been published in Physical Review Letters.
    Topological defect tuning in non-Hermitian systems
    At the center of the reported work is the circuit realization of parity-time (PT) symmetry, which has previously been studied intensively in optics. The ct.qmat team employed PT symmetry so that the open circuit system, despite its gain and loss, still shares a large number of features with an isolated system. This is the core insight that allows topological defect states to be designed in a setting where dissipation and amplification compensate each other, and it is accomplished through non-Hermitian PT topolectric circuits.
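    To make the balanced-gain-and-loss idea concrete, consider the textbook two-site PT-symmetric model (a generic illustration, not the specific circuit lattice of the study): one site amplifies, the other attenuates at the same rate, and as long as the coupling exceeds the gain rate the eigenvalues stay real, just as in an isolated system. A minimal sketch in Python:

    import numpy as np

    # Textbook two-site PT-symmetric Hamiltonian (illustrative only):
    # one site has gain (+i*gamma), the other balanced loss (-i*gamma),
    # and kappa couples the two sites.
    def pt_eigenvalues(gamma, kappa):
        H = np.array([[1j * gamma, kappa],
                      [kappa, -1j * gamma]])
        return np.linalg.eigvals(H)

    kappa = 1.0
    for gamma in (0.5, 1.5):  # below and above the PT-breaking threshold
        print(gamma, np.round(pt_eigenvalues(gamma, kappa), 3))
    # gamma = 0.5: eigenvalues +/- sqrt(kappa^2 - gamma^2) are real,
    #              so the open system mimics an isolated one.
    # gamma = 1.5: eigenvalues turn imaginary; PT symmetry is broken.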
    Potential paradigm change in synthetic topological matter
    “This research project has enabled a joint team effort towards topological matter across all locations of the Cluster of Excellence ct.qmat. Topolectric circuits provide experimental and theoretical inspiration for new avenues of topological matter, and might have a particular bearing on future applications in photonics. The flexibility, cost-efficiency, and versatility of topolectric circuits are unprecedented, and might constitute a paradigm change in the field of synthetic topological matter,” summarizes the Würzburg scientist and study director Ronny Thomale.
    Next stop: applications
    Having built a one-dimensional version of a PT-symmetric topolectric circuit with a linear dimension of 30 unit cells, the research team’s next step towards technology is to tackle PT-symmetric circuits in two dimensions, comprising about 1,000 coupled circuit unit cells. Eventually, the insight gained through topolectric circuits may mark a milestone on the way to light-controlled computers, which would be much faster and more energy-efficient than today’s electron-controlled models.
    People involved
    Besides the cluster members based at Julius-Maximilians-Universität Würzburg (JMU) and the Leibniz Institute for Solid State and Materials Research Dresden (IFW), scientists from the group of Professor Alexander Szameit at the University of Rostock are also involved in the publication. The Cluster of Excellence ct.qmat cooperates with Szameit’s group in the field of topological photonics.
    Story Source:
    Materials provided by University of Würzburg. Original written by Katja Lesser. Note: Content may be edited for style and length.

  •

    Researchers fine-tune control over AI image generation

    Researchers from North Carolina State University have developed a new state-of-the-art method for controlling how artificial intelligence (AI) systems create images. The work has applications for fields from autonomous robotics to AI training.
    At issue is a type of AI task called conditional image generation, in which AI systems create images that meet a specific set of conditions. For example, a system could be trained to create original images of cats or dogs, depending on which animal the user requested. More recent techniques have built on this to incorporate conditions regarding the image’s layout, allowing users to specify which types of objects they want to appear in particular places in the image. For example, the sky might go in one box, a tree in another, a stream in a third, and so on; a sketch of such a layout specification is given below.
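    As a purely hypothetical illustration (the paper’s actual input format is an assumption here), a layout condition can be represented as a set of labeled bounding boxes, which a generator typically consumes as a rasterized label mask. A minimal sketch in Python:

    import numpy as np

    # Hypothetical layout spec: labeled boxes in fractional image
    # coordinates, mirroring the "sky in one box, tree in another" idea.
    layout = [
        ("sky",    (0.0, 0.0, 1.0, 0.4)),   # (x, y, width, height)
        ("tree",   (0.1, 0.4, 0.25, 0.5)),
        ("stream", (0.5, 0.6, 0.5, 0.4)),
    ]

    def rasterize(layout, size=64):
        """Turn labeled boxes into an integer label mask, a common
        intermediate representation for layout-to-image generators."""
        labels = {name: i + 1 for i, (name, _) in enumerate(layout)}
        mask = np.zeros((size, size), dtype=np.int32)  # 0 = unspecified
        for name, (x, y, w, h) in layout:
            r0, r1 = int(y * size), int((y + h) * size)
            c0, c1 = int(x * size), int((x + w) * size)
            mask[r0:r1, c0:c1] = labels[name]
        return mask

    mask = rasterize(layout)
    # A conditional generator would consume this mask (plus noise or
    # style inputs) to synthesize pixels; "reconfigurability" then means
    # a new object is just another box appended before re-rasterizing.

    The NC State system itself is not described at this level of detail here; the sketch only fixes the idea of layout conditions as named regions.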
    The new work builds on those techniques to give users more control over the resulting images, and to retain certain characteristics across a series of images.
    “Our approach is highly reconfigurable,” says Tianfu Wu, co-author of a paper on the work and an assistant professor of computer engineering at NC State. “Like previous approaches, ours allows users to have the system generate an image based on a specific set of conditions. But ours also allows you to retain that image and add to it. For example, users could have the AI create a mountain scene. The users could then have the system add skiers to that scene.”
    In addition, the new approach allows users to have the AI manipulate specific elements so that they are identifiably the same, but have moved or changed in some way. For example, the AI might create a series of images showing skiers turning toward the viewer as they move across the landscape.
    “One application for this would be to help autonomous robots ‘imagine’ what the end result might look like before they begin a given task,” Wu says. “You could also use the system to generate images for AI training. So, instead of compiling images from external sources, you could use this system to create images for training other AI systems.”
    The researchers tested their new approach using the COCO-Stuff dataset and the Visual Genome dataset. Based on standard measures of image quality, the new approach outperformed the previous state-of-the-art image creation techniques.
    “Our next step is to see if we can extend this work to video and three-dimensional images,” Wu says.
    Training for the new approach requires a fair amount of computational power; the researchers used a 4-GPU workstation. However, deploying the system is less computationally expensive.
    “We found that one GPU gives you almost real-time speed,” Wu says.
    “In addition to our paper, we’ve made our source code for this approach available on GitHub. That said, we’re always open to collaborating with industry partners.”
    The work was supported by the National Science Foundation, under grants 1909644, 1822477, 2024688 and 2013451; by the U.S. Army Research Office, under grant W911NF1810295; and by the Administration for Community Living, under grant 90IFDV0017-01-00.
    Story Source:
    Materials provided by North Carolina State University. Note: Content may be edited for style and length.

  •

    Turbulence in interstellar gas clouds reveals multi-fractal structures

    In interstellar dust clouds, turbulence must first dissipate before a star can form through gravity. A German-French research team has now discovered that the kinetic energy of the turbulence comes to rest in a region that is very small on cosmic scales, ranging from one to several light-years in extent. The group also arrived at new results concerning the mathematical description: previously, the turbulent structure of the interstellar medium was described as self-similar, or fractal. The researchers found that a single fractal, a self-similar structure as known from the Mandelbrot set, is not enough to describe it; instead, several different fractals must be combined, so-called multifractals. The new methods can thus be used to resolve and represent structural changes in astronomical images in detail. Applications in other scientific fields, such as atmospheric research, are also possible.
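    To make the fractal idea concrete (a generic illustration, not the multifractal pipeline used by the GENESIS team), the simplest fractal measure is the box-counting dimension: cover a structure with boxes of shrinking size and fit the slope of log N against log(1/size); a multifractal analysis generalizes this single exponent to a whole spectrum of scaling exponents. A minimal sketch in Python:

    import numpy as np

    def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
        """Estimate the box-counting dimension of a binary 2D mask.
        Generic textbook method; the study applies more elaborate
        multifractal techniques to real column-density maps."""
        n = mask.shape[0]
        counts = []
        for s in sizes:
            # Count boxes of side s that contain any structure.
            m = n - n % s
            boxes = mask[:m, :m].reshape(m // s, s, m // s, s)
            counts.append(boxes.any(axis=(1, 3)).sum())
        # N(s) ~ s^(-D), so D is the slope of log N versus log(1/s).
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                              np.log(counts), 1)
        return slope

    # Toy structure: a random "cloud" thresholded from noise.
    rng = np.random.default_rng(0)
    field = rng.random((256, 256))
    print(box_counting_dimension(field > 0.7))  # close to 2 for noise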
    The German-French programme GENESIS (Generation of Structures in the Interstellar Medium) is a cooperation between the University of Cologne’s Institute for Astrophysics, LAB at the University of Bordeaux and Geostat/INRIA Institute Bordeaux. In a highlight publication of the journal Astronomy & Astrophysics, the research team presents the new mathematical methods to characterize turbulence using the example of the Musca molecular cloud in the constellation of Musca.
    Stars form in huge interstellar clouds composed mainly of molecular hydrogen — the energy reservoir of all stars. This material has a low density, only a few thousand to several tens of thousands of particles per cubic centimetre, but a very complex structure with condensations in the form of ‘clumps’ and ‘filaments’, and eventually ‘cores’ from which stars form by gravitational collapse of the matter.
    The spatial structure of the gas in and around the clouds is determined by many physical processes, one of the most important of which is interstellar turbulence. This arises when energy is transferred from large scales, such as galactic density waves or supernova explosions, to smaller scales. Turbulence is familiar from flows in which a liquid or gas is ‘stirred’; it forms vortices and can exhibit brief bursts of chaotic behaviour, called intermittency. However, for a star to form, the gas must come to rest, i.e., the kinetic energy must dissipate. Only then can gravity exert enough force to pull the hydrogen clouds together and form a star. It is therefore important to understand and mathematically describe the energy cascade and the associated structural change.
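    For orientation, the classical Kolmogorov picture of such a cascade (a textbook result, not a finding of this study) predicts that energy injected at large scales and dissipated at a rate \(\varepsilon\) produces a power-law spectrum

    \[
    E(k) = C \, \varepsilon^{2/3} k^{-5/3},
    \]

    where \(k\) is the wavenumber and \(C\) a dimensionless constant. Intermittency and multifractality manifest precisely as deviations from such a single scaling exponent, which is why a multifractal description is the natural refinement.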
    Story Source:
    Materials provided by University of Cologne. Note: Content may be edited for style and length.

  •

    The role of computer voice in the future of speech-based human-computer interaction

    In the modern day, our interactions with voice-based devices and services continue to increase. In this light, researchers at Tokyo Institute of Technology and RIKEN, Japan, have performed a meta-synthesis to understand how we perceive and interact with the voice (and the body) of various machines. Their findings have generated insights into human preferences, and can be used by engineers and designers to develop future vocal technologies.
    As humans, we primarily communicate vocally and aurally. We convey not just linguistic information, but also the complexities of our emotional states and personalities. Aspects of our voice such as tone, rhythm, and pitch are vital to the way we are perceived. In other words, the way we say things matters.
    With advances in technology and the introduction of social robots, conversational agents, and voice assistants into our lives, we are expanding our interactions to include computer agents, interfaces, and environments. Research on these technologies can be found across the fields of human-agent interaction (HAI), human-robot interaction (HRI), human-computer interaction (HCI), and human-machine communication (HMC), depending on the kind of technology under study. Many studies have analyzed the impact of computer voice on user perception and interaction. However, these studies are spread across different types of technologies and user groups and focus on different aspects of voice.
    In this regard, a group of researchers from Tokyo Institute of Technology (Tokyo Tech), Japan, RIKEN Center for Advanced Intelligence Project (AIP), Japan, and gDial Inc., Canada, have now compiled findings from several studies in these fields with the intention to provide a framework that can guide future design and research on computer voice. As lead researcher Associate Professor Katie Seaborn from Tokyo Tech (Visiting Researcher and former Postdoctoral Researcher at RIKEN AIP) explains, “Voice assistants, smart speakers, vehicles that can speak to us, and social robots are already here. We need to know how best to design these technologies to work with us, live with us, and match our needs and desires. We also need to know how they have influenced our attitudes and behaviors, especially in subtle and unseen ways.”
    The team’s survey considered peer-reviewed journal papers and proceedings-based conference papers where the focus was on the user perception of agent voice. The source materials encompassed a wide variety of agent, interface, and environment types and technologies, with the majority being “bodyless” computer voices, computer agents, and social robots. Most of the user responses documented were from university students and adults. From these papers, the researchers were able to observe and map patterns and draw conclusions regarding the perceptions of agent voice in a variety of interaction contexts.
    The results showed that users anthropomorphized the agents that they interacted with and preferred interactions with agents that matched their personality and speaking style. There was a preference for human voices over synthetic ones. The inclusion of vocal fillers such as the use of pauses and terms like “I mean…” and “um” improved the interaction. In general, the survey found that people preferred human-like, happy, empathetic voices with higher pitches. However, these preferences were not static; for instance, user preference for voice gender changed over time from masculine voices to more feminine ones. Based on these findings, the researchers were able to formulate a high-level framework to classify different types of interactions across various computer-based technologies.
    The researchers also considered the effect of the body, or morphology and form factor, of the agent, which could take the form of a virtual or physical character, display or interface, or even an object or environment. They found that users tended to perceive agents better when the agents were embodied and when the voice “matched” the body of the agent.
    The field of human-computer interaction, particularly that of voice-based interaction, is a burgeoning one that continues to evolve almost daily. As such, the team’s survey provides an essential starting point for the study and creation of new and existing technologies in voice-based human-agent interaction (vHAI). “The research agenda that emerged from this work is expected to guide how voice-based agents, interfaces, systems, spaces, and experiences are developed and studied in the years to come,” Prof. Seaborn concludes, summing up the importance of their findings.

  •

    Candy-like models used to make STEM accessible to visually impaired students

    About 36 million people are blind, including 1 million children; an additional 216 million people experience moderate to severe visual impairment. Yet STEM (science, technology, engineering and math) education continues to rely on three-dimensional imagery, most of which is inaccessible to students with blindness. A new study led by Bryan Shaw, Ph.D., professor of chemistry and biochemistry at Baylor University, aims to make science more accessible to people who are blind or visually impaired through small, candy-like models.
    The Baylor-led study, published May 28 in the journal Science Advances, uses millimeter-scale gelatin models — similar to gummy bears — to improve visualization of protein molecules using oral stereognosis, or visualization of 3D shapes via the tongue and lips. The goal of the study was to create smaller, more practical tactile models of 3D imagery depicting protein molecules. The protein molecules were selected because their structures are some of the most numerous, complex and high-resolution 3D images presented throughout STEM education.
    “Your tongue is your finest tactile sensor — about twice as sensitive as the finger tips — but it is also a hydrostat, similar to an octopus arm. It can wiggle into grooves that your fingers won’t touch, but nobody really uses the tongue or lips in tactile learning. We thought to make very small, high-resolution 3D models, and visualize them by mouth,” Shaw said.
    The study included 396 participants in total: 31 fourth- and fifth-graders as well as 365 college students. Participants were tested on how well they could identify specific structures by mouth, by hand, and by eyesight. All students were blindfolded during the oral and manual tactile model testing.
    Each participant was given three minutes to assess or visualize the structure of a study protein with their fingertips, followed by one minute with a test protein. After the four minutes, they were asked whether the test protein was the same or a different model than the initial study protein. The entire process was repeated using the mouth to discern shape instead of the fingers.
    Students recognized structures by mouth at 85.59% accuracy, similar to recognition by eyesight using computer animation. Testing involved identical edible gelatin models and nonedible 3D-printed models; the gelatin models were correctly identified at rates comparable to the nonedible models.

  •

    A helping hand for working robots

    Until now, competing types of robotic hand designs offered a trade-off between strength and durability. One commonly used design, employing a rigid pin joint that mimics the mechanism in human finger joints, can lift heavy payloads, but is easily damaged in collisions, particularly if hit from the side. Meanwhile, fully compliant hands, typically made of molded silicone, are more flexible, harder to break, and better at grasping objects of various shapes, but they fall short on lifting power.
    The DGIST research team investigated the idea that a partially compliant robot hand, using a rigid link connected to a structure known as a Crossed Flexural Hinge (CFH), could increase the robot’s lifting power while minimizing damage in the event of a collision. Generally, a CFH is made of two strips of metal arranged in an X-shape that can flex or bend in one direction while remaining rigid in others, without creating friction.
    “Smart industrial robots and cooperative robots that interact with humans need both resilience and strength,” says Dongwon Yun, who heads the DGIST BioRobotics and Mechatronics Lab and led the research team. “Our findings show the advantages of both a rigid structure and a compliant structure can be combined, and this will overcome the shortcomings of both.”
    The team 3D-printed the metal strips that serve as the CFH joints connecting the segments in each robotic finger, allowing the fingers to curve and straighten much as a human hand does. The researchers demonstrated the robotic hand’s ability to grasp different objects, including a box of tissues, a small fan and a wallet. The CFH-jointed robot hand was shown to absorb 46.7 percent more shock than a pin-jointed robotic hand. It was also stronger than fully compliant robot hands, with the ability to hold objects weighing up to four kilograms.
    Further improvements are needed before robots with these partially-compliant hands are able to go to work alongside or directly with humans. The researchers note that additional analysis of materials is required, as well as field experiments to pinpoint the best practical applications.
    “The industrial and healthcare settings where robots are widely used are dynamic and demanding places, so it’s important to keep improving robots’ performance,” says DGIST engineering Ph.D. student Junmo Yang, first author of the paper.
    Story Source:
    Materials provided by DGIST (Daegu Gyeongbuk Institute of Science and Technology). Note: Content may be edited for style and length.

  •

    Electrons waiting for their turn: New model explains 3D quantum material

    The newly described 3D effect can form the foundation for topological quantum phenomena, which are believed to be particularly robust and therefore promising candidates for extremely powerful quantum technologies. The results have just been published in the scientific journal Nature Communications.
    Dr. Tobias Meng and Dr. Johannes Gooth are early career researchers in the Würzburg-Dresden Cluster of Excellence ct.qmat, which has been researching topological quantum materials since 2019. They could hardly believe the findings of a recent publication in “Nature” claiming that electrons in the topological metal zirconium pentatelluride (ZrTe5) move only in two-dimensional planes, despite the fact that the material is three-dimensional. Meng and Gooth therefore started their own research and experiments on ZrTe5. Meng, from the Technische Universität Dresden (TUD), developed the theoretical model; Gooth, from the Max Planck Institute for Chemical Physics of Solids, designed the experiments. Seven measurements with different techniques all led to the same conclusion.
    Electrons waiting for their turn
    The research by Meng and Gooth paints a new picture of how the Hall effect works in three-dimensional materials. The scientists believe that electrons move through the metal along three-dimensional paths, but their electric transport can still appear as two-dimensional. In the topological metal zirconium pentatelluride, this is possible because a fraction of the electrons is still waiting to be activated by an external magnetic field.
    “The way electrons move is consistent in all of our measurements, and similar to what is otherwise known from the two-dimensional quantum Hall effects. But our electrons move upwards in spirals, rather than being confined to a circular motion in planes. This is an exciting difference to the quantum Hall effect and to the proposed scenarios for what happens in the material ZrTe5,” comments Meng on the genesis of their new scientific model. “This only works because not all electrons move at all times. Some remain still, as if they were queuing up. Only when an external magnetic field is applied do they become active.”
    Experiments confirm the model
    For their experiments, the scientists cooled the topological quantum material down to -271 degrees Celsius and applied an external magnetic field. They then performed electric and thermoelectric measurements by sending currents through the sample, studied its thermodynamics by analysing the magnetic properties of the material, and applied ultrasound. They even used X-ray, Raman and electronic spectroscopy to look into the inner workings of the material. “But none of our seven measurements hinted at the electrons moving only two-dimensionally,” explains Meng, head of the Emmy Noether group for Quantum Design at TUD and leading theorist on the present project. “Our model is in fact surprisingly simple, and still explains all the experimental data perfectly.”
    Outlook for topological quantum materials in 3D
    The Nobel-prize-winning quantum Hall effect was discovered in 1980 and describes the stepwise conduction of current in a metal. It is a cornerstone of topological physics, a field that has experienced a surge since 2005 due to its promise for the functional materials of the 21st century. To date, however, the quantum Hall effect has only been observed in two-dimensional metals. The results of the present publication broaden the understanding of how three-dimensional materials behave in magnetic fields. The cluster members Meng and Gooth intend to pursue this new research direction further: “We definitely want to investigate the queueing behavior of electrons in 3D metals in more detail,” says Meng.
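    For context, the “stepwise conduction” is textbook quantum Hall physics (not specific to this study): as the magnetic field is varied, the Hall conductance locks onto quantized plateaus

    \[
    \sigma_{xy} = \nu \, \frac{e^2}{h}, \qquad \nu = 1, 2, 3, \ldots
    \]

    where \(e\) is the electron charge and \(h\) Planck’s constant. The question addressed here is how such quantized, seemingly two-dimensional transport can arise from electrons following genuinely three-dimensional spiral paths.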
    Story Source:
    Materials provided by Technische Universität Dresden. Note: Content may be edited for style and length.

  •

    When to release free and paid apps for maximal revenue

    Researchers from Tulane University and the University of Maryland have published a new paper in the Journal of Marketing that examines the dynamic interplay between free and paid versions of an app over its lifetime and suggests a possible remedy for the failure of apps.
    The study, titled “Managing the Versioning Decision over an App’s Lifetime,” is authored by Seoungwoo Lee, Jie Zhang, and Michel Wedel.
    Is it really over for paid mobile apps? The mobile app industry is unique because free apps are much more prevalent than paid apps in most app categories, contrary to many other product markets where free products primarily play a supportive role to the paid products. Apps have been trending toward the free version in the past decade, such that in July 2020, 96% of apps on the Google Play platform were free. However, 63% of the free apps had fewer than a thousand downloads per month and 60% of app publishers generated less than $500 per month in 2015.
    Are there ways for paid apps to make free apps more profitable? And how can app publishers improve profitability by strategically deploying or eliminating the paid and free versions of an app over its lifetime? To answer these questions, the research team investigated app publishers’ decisions to offer the free version, the paid version, or both versions of an app, taking into account the dynamic interplay between the two. The findings offer valuable insights for app publishers on how to manage the versioning decision over an app’s lifetime.
    First, the researchers demonstrate how the free and paid versions influence each other’s current demand, future demand, and in-app revenues. They find that either version’s cumulative user base stimulates future demand for both versions via social influence, yet offering both versions simultaneously hurts each version’s demand in the current period. Also, the presence of a paid version reduces the in-app purchase rate and active user base, and therefore the in-app purchase and advertising revenues, of a free app, whereas the presence of a free version appears to have little negative impact on the paid version. App publishers should therefore be mindful of the paid version’s negative impact on the free version. In general, simultaneously offering both versions helps a publisher achieve cost savings via economies of scale, but it reduces revenues from each version compared to when either version is offered alone.
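    As a purely hypothetical illustration of these two opposing forces (the functional form and every parameter below are invented for exposition; this is not the model estimated in the paper), one can simulate two coupled demand streams in which each version’s installed base feeds future demand while simultaneous availability cannibalizes current demand:

    # Toy free/paid demand dynamics; entirely illustrative assumptions.
    ALPHA = 100.0    # baseline demand per period
    SOCIAL = 0.02    # social influence of the cumulative user base
    CANNIBAL = 0.3   # demand share lost when both versions coexist

    def simulate(offer_free, offer_paid, periods=24):
        base_free = base_paid = 0.0  # cumulative user bases
        for t in range(periods):
            d_free = d_paid = 0.0
            if offer_free(t):  # installed bases stimulate new demand
                d_free = ALPHA + SOCIAL * (base_free + base_paid)
            if offer_paid(t):
                d_paid = 0.5 * ALPHA + SOCIAL * (base_free + base_paid)
            if offer_free(t) and offer_paid(t):  # cannibalization
                d_free *= 1 - CANNIBAL
                d_paid *= 1 - CANNIBAL
            base_free += d_free
            base_paid += d_paid
        return base_free, base_paid

    # Paid-first launch (free version added after six periods) versus
    # a free-only strategy, for comparison:
    print(simulate(lambda t: t >= 6, lambda t: True))
    print(simulate(lambda t: True, lambda t: False))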
    Second, analyses show that the most common optimal launch strategy is to offer the paid version first. Paid apps can generate download revenues from the first day of sales, while in-app revenues from either version rely on a sizeable user base which takes time to build. So, publishers can rely on paid apps to generate operating capital and recuperate development and launch costs much more quickly. Nonetheless, there are variations across app categories, which are related to differences in apps’ abilities to monetize from different revenue sources. For example, the percentage of utility apps that should launch a paid app is particularly high because they have a lower ability to monetize the free app through in-app purchase items and advertising. In contrast, entertainment apps should mostly launch a free version because they have high availability of in-app ad networks and in-app purchase items.
    Third, the optimal versioning decisions and their evolutionary patterns change as an app ages and vary by app category. For most apps, the relative profitability of the free version tends to increase with app age while that of the paid version tends to decline. The profitability of simultaneously offering both versions therefore tends to increase with app age up to a certain point, after which the free-only option takes over as the most common optimal versioning decision; for the (relatively more successful) apps in the data, this occurs about 1.5 years after launch on average. There are also substantial cross-category variations in these evolution patterns. For example, unlike the other categories examined, the optimal versioning decision for most utility apps in the data is to stay with the paid-only option throughout an app’s lifetime.
    This research reveals the dynamic interplay between free and paid versions of an app over its lifetime and suggests a possible remedy for the failure of apps. As the researchers explain, “Many apps that start out with a free version fail because they cannot generate enough revenue to sustain early-stage operations. We urge app publishers to pay close attention to the interplay between free and paid app versions and to improve the profitability of free apps by strategically deploying or eliminating their paid version counterparts over an app’s lifetime.”
    Story Source:
    Materials provided by American Marketing Association. Original written by Matt Weingarden. Note: Content may be edited for style and length.