More stories

  • Physicists work to prevent information loss in quantum computing

    Nothing exists in a vacuum, but physicists often wish this weren’t the case. If the systems that scientists study could be completely isolated from the outside world, things would be a lot easier.
    Take quantum computing. It’s a field that’s already drawing billions of dollars in support from tech investors and industry heavyweights including IBM, Google and Microsoft. But if the tiniest vibrations creep in from the outside world, they can cause a quantum system to lose information.
    For instance, even light can cause information leaks if it has enough energy to jiggle the atoms within a quantum processor chip.
    “Everyone is really excited about building quantum computers to answer really hard and important questions,” said Joe Kitzman, a doctoral student at Michigan State University. “But vibrational excitations can really mess up a quantum processor.”
    But, with new research published in the journal Nature Communications, Kitzman and his colleagues are showing that these vibrations need not be a hindrance. In fact, they could benefit quantum technology.
    “If we can understand how the vibrations couple with our system, we can use that as a resource and a tool for creating and stabilizing some types of quantum states,” Kitzman said.

    What that means is that researchers can use these results to help mitigate information lost by quantum bits, or qubits (pronounced “q bits”).
    Conventional computers rely on a clear-cut binary logic. Bits encode information by taking on one of two distinct possible states, often denoted as zero or one. Qubits, however, are more flexible and can exist in states that are simultaneously both zero and one.
    Although that may sound like cheating, it’s well within the rules of quantum mechanics. Still, this feature should give quantum computers valuable advantages over conventional computers for certain problems in a variety of areas, including science, finance and cybersecurity.
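    To make the zero-and-one idea concrete, the toy sketch below (generic quantum-mechanics bookkeeping, not anything specific to the MSU work) stores a qubit as a two-component complex vector and reads off the measurement probabilities; the amplitudes are arbitrary example values.

```python
import numpy as np

# A qubit state |psi> = alpha|0> + beta|1>, stored as a complex 2-vector.
# Example amplitudes (arbitrary): an equal superposition of 0 and 1.
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
psi = np.array([alpha, beta], dtype=complex)

# Amplitudes must satisfy |alpha|^2 + |beta|^2 = 1 (total probability).
assert np.isclose(np.vdot(psi, psi).real, 1.0)

# Measuring the qubit collapses it to 0 or 1 with these probabilities.
p0, p1 = np.abs(psi) ** 2
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")  # 0.50 and 0.50 for this state
```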
    Beyond its implications for quantum technology, the MSU-led team’s report also helps set the stage for future experiments to better explore quantum systems in general.
    “Ideally, you want to separate your system from the environment, but the environment is always there,” said Johannes Pollanen, the Jerry Cowen Endowed Chair of Physics in the MSU Department of Physics and Astronomy. “It’s almost like junk you don’t want to deal with, but you can learn all kinds of cool stuff about the quantum world when you do.”
    Pollanen also leads the Laboratory for Hybrid Quantum Systems, of which Kitzman is a member, in the College of Natural Science. For the experiments led by Pollanen and Kitzman, the team built a system consisting of a superconducting qubit and what are known as surface acoustic wave resonators.

    These qubits are one of the most popular varieties among companies developing quantum computers. Mechanical resonators are used in many modern communications devices, including cellphones and garage door openers, and now, groups like Pollanen’s are putting them to work in emerging quantum technology.
    The team’s resonators allowed the researchers to tune the vibrations experienced by qubits and understand how the mechanical interaction between the two influenced the fidelity of quantum information.
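    For intuition about how a qubit can trade energy with a mechanical mode, here is a generic Jaynes-Cummings-style toy model written with the QuTiP library. It is only a sketch: the coupling form, frequencies and damping rate are invented round numbers, not the Hamiltonian or parameters of the MSU device.

```python
import numpy as np
from qutip import basis, destroy, qeye, tensor, mesolve

# Toy model of a qubit exchanging energy with one lossy mechanical mode.
# Generic textbook sketch; all numbers below are made-up round values.
N = 10                       # Fock-space cutoff for the mechanical mode
wq, wm, g = 5.0, 5.0, 0.05   # qubit freq, mode freq, coupling (resonant)
kappa = 0.01                 # mechanical damping: the "lossy environment"

a = tensor(qeye(2), destroy(N))     # mechanical lowering operator
sm = tensor(destroy(2), qeye(N))    # qubit lowering operator

H = wq * sm.dag() * sm + wm * a.dag() * a + g * (a.dag() * sm + a * sm.dag())

psi0 = tensor(basis(2, 1), basis(N, 0))   # qubit excited, mechanics in vacuum
tlist = np.linspace(0, 400, 800)

result = mesolve(H, psi0, tlist,
                 c_ops=[np.sqrt(kappa) * a],   # loss channel through the mechanics
                 e_ops=[sm.dag() * sm])        # track qubit excited-state population
print(f"qubit population remaining: {result.expect[0][-1]:.3f}")
```

    Tuning the detuning and coupling in a model like this is one way to see, qualitatively, how an engineered mechanical environment can either drain a qubit or help shape its state.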
    “We’re creating a paradigm system to understand how this information is scrambled,” said Pollanen. “We have control over the environment, in this case, the mechanical vibrations in the resonator, as well as the qubit.”
    “If you can understand how these environmental losses affect the system, you can use that to your advantage,” Kitzman said. “The first step in solving a problem is understanding it.”
    MSU is one of only a few places equipped and staffed to perform experiments on these coupled qubit-mechanical resonator devices, Pollanen said, and the researchers are excited to use their system for further exploration. The team also included scientists from the Massachusetts Institute of Technology and Washington University in St. Louis.

  • A foundation that fits just right gives superconducting nickelates a boost

    Researchers at the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University say they’ve found a way to make thin films of an exciting new nickel oxide superconductor that are free of extended defects.
    Not only does this improve the material’s ability to conduct electricity with no loss, they said, but it also allows them to discover its true nature and properties, both in and out of the superconducting state, for the first time.
    Their first look at a defect-free superconducting nickel oxide, or nickelate, revealed that it is more like the cuprates, which hold the record for the highest-temperature unconventional superconductivity at normal pressures, than previously thought. For instance, when the nickelate is tweaked to optimize its superconductivity and then heated above its superconducting temperature, its resistance to the flow of electric current increases linearly with temperature, just as in cuprates.
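    To see what linear-in-temperature resistance looks like in practice, here is a toy straight-line fit on synthetic data; the numbers are invented for illustration and are not taken from the Nature paper.

```python
import numpy as np

# Synthetic, made-up resistivity data above Tc: rho(T) = rho0 + A*T plus noise.
rng = np.random.default_rng(0)
T = np.linspace(20, 300, 60)                       # temperature (K), above Tc
rho = 15.0 + 0.8 * T + rng.normal(0, 2, T.size)    # invented micro-ohm*cm values

# A straight-line fit captures the cuprate-like "strange metal" behavior.
A, rho0 = np.polyfit(T, rho, 1)
print(f"rho(T) ~ {rho0:.1f} + {A:.2f}*T  (linear in T)")
```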
    Those striking similarities, they said, may mean these two very different materials achieve superconductivity in much the same way.
    It’s the latest step in a 35-year quest to develop superconductors that can operate at close to room temperature, which would revolutionize electronics, transportation, power transmission and other technologies by allowing them to operate without energy-wasting electrical resistance.
    The research team, led by Harold Hwang, director of the Stanford Institute for Materials and Energy Sciences (SIMES) at SLAC, described their work today in the journal Nature.

    “Nickelate films are really unstable, and until now our efforts to stabilize them on top of other materials have produced defects that are like speed bumps for electrons,” said Kyuho Lee, a SIMES postdoctoral researcher who contributed to the discovery of superconductivity in nickelates four years ago and has been working on them ever since.
    “These quality issues have led to many debates and open questions about nickelate properties, with research groups reporting widely varying results,” Lee said. “So eliminating the defects is a significant breakthrough. It means we can finally address the underlying physics behind these materials and behind unconventional superconductivity in general.”
    Jenga chemistry and a just-right fit
    The defects, which are a bit like misaligned zipper teeth, arise from the same innovative process that allowed Hwang’s team to create and stabilize a nickelate film in the first place.
    They started by making a common material known as a perovskite. They “doped” it to change its electrical conductivity, then exposed it to a chemical that deftly removed layers of oxygen atoms from its molecular structure, much like removing a stick from a tower of Jenga blocks. With the oxygen layers gone, the film settled into a new structure, known as an infinite-layer nickelate, that can host superconductivity.

    The atomic latticework of this new structure occupied a slightly bigger surface area than the original. With this in mind, they had built the film on a foundation, or substrate, that would be a good fit for the finished, spread-out product, Lee said.
    But it didn’t match the atomic lattice of the starting material, which developed defects as it tried to fit comfortably onto the substrate — and those imperfections carried through to the finished nickelate.
    Hwang said it’s as if two friends of different sizes had to share a coat. If the coat fit the smaller friend perfectly, the larger one would have a hard time zipping it up. If it fit the larger friend perfectly, it would hang like a tent on the smaller one and let the cold in. An in-between size might not be the best fit for either of them, but it’s close enough to keep them both warm and happy.
    That’s the solution Lee and his colleagues pursued.
    In a series of meticulous experiments, they used a substrate that was like the in-between coat. The atomic structure of its surface was a close enough fit for both the starting and ending materials that the finished nickelate came out defect-free. Lee said the team is already starting to see some interesting physics in the nickelate now that the system is much cleaner.
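    As a back-of-the-envelope illustration of the shared-coat idea, the film-substrate mismatch is just the fractional difference in lattice constants. The values below are approximate literature numbers for one nickelate family and two common substrates, chosen only to illustrate the trade-off; they are not the specific materials or figures reported in the paper.

```python
# Rough strain estimates for the "in-between coat" idea.
# Lattice constants (angstroms) are approximate literature values,
# illustrative only, not the specific numbers from the Nature paper.
a_perovskite = 3.81       # starting (perovskite) nickelate, pseudocubic
a_infinite_layer = 3.92   # finished infinite-layer nickelate
substrates = {"SrTiO3": 3.905, "LSAT": 3.87}

def mismatch(a_film, a_sub):
    """In-plane mismatch the film must absorb on this substrate (percent)."""
    return 100 * (a_film - a_sub) / a_sub

for name, a_sub in substrates.items():
    print(f"{name}: start {mismatch(a_perovskite, a_sub):+.1f}%, "
          f"finish {mismatch(a_infinite_layer, a_sub):+.1f}%")
# An "in-between" substrate keeps both numbers modest, like the shared coat.
```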
    “What this means,” Hwang said, “is that we are getting closer and closer to measuring the intrinsic properties of these materials. And by sharing the details of how to make defect-free nickelates, we hope to benefit the field as a whole.”
    Researchers from Cornell University contributed to this work, which was funded by the DOE Office of Science and the Gordon and Betty Moore Foundation’s Emergent Phenomena in Quantum Systems Initiative.

  • Supercomputer used to simulate winds that cause clear air turbulence

    A research group from Nagoya University has accurately simulated air turbulence occurring on clear days around Tokyo using Japan’s fastest supercomputer. They then compared their findings with flight data to create a more accurate predictive model. The research was reported in the journal Geophysical Research Letters.
    Although air turbulence is usually associated with bad weather, an airplane cabin can shake violently even on a sunny and cloudless day. Known as clear air turbulence (CAT), these turbulent air movements can occur in the absence of any visible clouds or other atmospheric disturbances. Although the exact mechanisms that cause CAT are not fully understood, it is believed to be primarily driven by wind shear and atmospheric instability.
    CAT poses a high risk to aviation safety. The sudden turbulence on an otherwise calm day can lead to passenger and crew member injuries, aircraft damage, and disruptions to flight operations. Pilots rely on reports from other aircraft, weather radar, and atmospheric models to anticipate and avoid areas of potential turbulence. However, since CAT shows no visible indicators, such as clouds or storms, it is particularly challenging to detect and forecast.
    As winds swirl and circulate creating sudden changes in airflow, eddies are created that can shake an aircraft. Therefore, to better understand CAT, scientists model it using large-eddy simulation (LES), a computational fluid dynamics technique used to simulate these turbulent flows. However, despite its importance to research on air turbulence, one of the greatest challenges of LES is the computational cost. Simulating the complex interactions involved in LES requires high levels of computing power.
    To simulate the process of turbulence generation in detail using high-resolution LES, the research group from Nagoya University turned to Fugaku, an exascale machine currently ranked as the world’s second-fastest supercomputer.
    Using Fugaku’s immense computational power, Dr. Ryoichi Yoshimura of Nagoya University, in collaboration with Dr. Junshi Ito and others at Tohoku University, performed an ultra-high-resolution simulation of wintertime CAT above Tokyo’s Haneda Airport caused by a low-pressure system and a nearby mountain range.
    They found that the wind-speed disturbance was caused by the collapse of a Kelvin-Helmholtz instability wave, a type of instability that occurs at the interface between two layers of air moving at different velocities. Because one layer moves faster than the other, it drags on the slower layer and creates a wave-like disturbance. As these atmospheric waves grow from the west and collapse in the east, they break up into many fine vortices, generating turbulence.
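    A standard back-of-the-envelope check for where shear can overwhelm stratification (the Kelvin-Helmholtz route to turbulence) is the gradient Richardson number, Ri = N^2 / (dU/dz)^2, with values below roughly 0.25 favoring instability. The sketch below applies that textbook criterion to made-up wind and temperature profiles; it is not the Nagoya group's LES.

```python
import numpy as np

g, theta0 = 9.81, 300.0          # gravity (m/s^2), reference potential temperature (K)

# Made-up vertical profiles near jet-stream altitude (illustrative only).
z = np.linspace(8000, 12000, 81)                 # height (m)
U = 30 + 25 * np.tanh((z - 10000) / 400)         # wind speed (m/s): strong shear layer
theta = theta0 + 0.003 * (z - 8000)              # weakly stable stratification (K)

dUdz = np.gradient(U, z)
N2 = (g / theta0) * np.gradient(theta, z)        # Brunt-Vaisala frequency squared
Ri = N2 / np.maximum(dUdz**2, 1e-12)             # gradient Richardson number

unstable = z[Ri < 0.25]                          # classic KH-instability criterion
print(f"KH-prone layer roughly {unstable.min():.0f}-{unstable.max():.0f} m")
```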
    After making their computations, the group needed to confirm whether their simulated vortices were consistent with real-world data. “Around Tokyo, there is a lot of observational data available to validate our results,” said Yoshimura. “There are many airplanes flying over the airports, which results in many reports of turbulence and the intensity of shaking. Atmospheric observations by a balloon near Tokyo were also used. The shaking data recorded at that time was used to show that the calculations were valid.”
    “The results of this research should lead to a deeper understanding of the principle and mechanism of turbulence generation by high-resolution simulation and allow us to investigate the effects of turbulence on airplanes in more detail,” said Yoshimura. “Since significant turbulence has been shown to occur in the limited 3D region, routing without flying in the region is possible by adjusting flight levels if the presence of active turbulence is known in advance. LES would provide a smart way of flying by providing more accurate turbulence forecasts and real-time prediction.”

  • Pump powers soft robots, makes cocktails

    The hottest drink of the summer may be the SEAS-colada. Here’s what you need to make it: gin, pineapple juice, coconut milk and a dielectric elastomer actuator-based soft peristaltic pump. Unfortunately, the last component can only be found in the lab of Robert Wood, the Harry Lewis and Marlyn McGrath Professor of Engineering and Applied Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences.
    At least, for now.
    Wood and his team designed the pump to solve a major challenge in soft robotics — how to replace traditionally bulky and rigid power components with soft alternatives.
    Over the past several years, Wood’s Microrobotics Lab at SEAS has been developing soft analogues of traditionally rigid robotic components, including valves and sensors. In fluid-driven robotic systems, pumps control the pressure or flow of the liquid that powers the robot’s movement. Most pumps available today for soft robotics are either too large and rigid to fit onboard, not powerful enough for actuation or only work with specific fluids.
    Wood’s team developed a compact, soft pump with adjustable pressure and flow that is versatile enough to pump a variety of fluids of varying viscosity, including gin, juice and coconut milk, and powerful enough to drive soft haptic devices and a soft robotic finger.
    The pump’s size, power and versatility open up a range of possibilities for soft robots in a variety of applications, including food handling, manufacturing and biomedical therapeutics.

    The research was published recently in Science Robotics.
    Peristaltic pumps are widely used in industry. These simple machines use motors to compress a flexible tube, creating a pressure differential that forces liquid through the tube. These types of pumps are especially useful in biomedical applications because the fluid doesn’t touch any component of the pump itself.
    “Peristaltic pumps can deliver liquids with a wide range of viscosities, particle-liquid suspensions, or fluids such as blood, which are challenging for other types of pumps,” said first author Siyi Xu, a former graduate student at SEAS and current postdoctoral fellow in Wood’s lab.
    Building off previous research, Xu and the team designed electrically powered dielectric elastomer actuators (DEAs) to act as the pump’s motor and rollers. These soft actuators have ultra-high power density, are lightweight, and can run for hundreds of thousands of cycles.
    The team designed an array of DEAs that coordinate with each other, compressing a millimeter-sized channel in a programmed sequence to produce pressure waves.
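    The peristaltic idea itself is easy to sketch: the actuators fire in a phased pattern so the pinch point travels along the channel and drags fluid with it. The snippet below is only a schematic of that sequencing, with an invented timing pattern rather than the paper's actual DEA drive waveforms.

```python
# Schematic phased drive for a row of soft actuators along a channel.
# Invented timing: each actuator pinches the channel slightly later than
# its neighbor, so the point of compression travels and pushes fluid along.
N_ACTUATORS = 4
STEPS_PER_CYCLE = 8

def pinch_pattern(step: int) -> list[int]:
    """1 = actuator compressed, 0 = relaxed, at a given time step."""
    phase = step % STEPS_PER_CYCLE
    return [1 if (phase - 2 * i) % STEPS_PER_CYCLE < 2 else 0
            for i in range(N_ACTUATORS)]

for step in range(STEPS_PER_CYCLE):
    row = pinch_pattern(step)
    print("".join("#" if x else "." for x in row), " <- pinch travels right")
```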

    The result is a centimeter-sized pump small enough to fit on board a small soft robot and powerful enough to actuate movement, with controllable pressure, flow rate, and flow direction.
    “We also demonstrated that we could actively tune the output from continuous flow to droplets by varying the input voltages and the outlet resistance, in our case the diameter of the blunt needle,” said Xu. “This capability may allow the pump to be useful not only for robotics but also for microfluidic applications.”
    “The majority of soft robots contain rigid components somewhere along their drivetrain,” said Wood. “This topic started as an effort to swap out one of those key pieces, the pump, with a soft alternative. But along the way we realized that compact soft pumps may have far greater utility, for example in biomedical settings for drug delivery or implantable therapeutic devices.”
    The research was co-authored by Cara M. Nunez and Mohammad Souri. It was supported by the National Science Foundation under grant CMMI-1830291.
    Video: https://youtu.be/knC9HJ6K-sU

  • Training robots how to learn, make decisions on the fly

    Mars rovers have teams of human experts on Earth telling them what to do. But robots on lander missions to moons orbiting Saturn or Jupiter are too far away to receive timely commands from Earth. Researchers in the Departments of Aerospace Engineering and Computer Science at the University of Illinois Urbana-Champaign developed a novel learning-based method so robots on extraterrestrial bodies can make decisions on their own about where and how to scoop up terrain samples.
    “Rather than simulating how to scoop every possible type of rock or granular material, we created a new way for autonomous landers to learn how to learn to scoop quickly on a new material it encounters,” said Pranay Thangeda, a Ph.D. student in the Department of Aerospace Engineering.
    “It also learns how to adapt to changing landscapes and their properties, such as the topology and the composition of the materials,” he said.
    Using this method, Thangeda said a robot can learn how to scoop a new material with very few attempts. “If it makes several bad attempts, it learns it shouldn’t scoop in that area and it will try somewhere else.”
    The proposed deep Gaussian process model is trained on the offline database using deep meta-learning with controlled deployment gaps: the procedure repeatedly splits the training set into mean-training and kernel-training subsets and learns kernel parameters that minimize the residuals of the mean models. In deployment, the decision-maker uses the trained model and adapts it to the data acquired online.
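    Very loosely, the mean-training/kernel-training idea can be pictured with an ordinary Gaussian process: fit a simple mean model on one subset of the data, fit the GP kernel to that model's residuals on the other subset, and add the two at prediction time. The scikit-learn sketch below is a single-split caricature on invented 1-D data, not the authors' deep GP or their meta-learning procedure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Toy "offline database": scoop quality vs. one terrain feature (invented).
X = rng.uniform(0, 10, size=(120, 1))
y = 0.5 * X[:, 0] + np.sin(X[:, 0]) + rng.normal(0, 0.2, 120)

# Split once into a mean-training set and a kernel-training set.
mean_idx, kern_idx = np.arange(60), np.arange(60, 120)

# 1) Fit a simple mean model on the mean-training subset.
mean_model = LinearRegression().fit(X[mean_idx], y[mean_idx])

# 2) Fit GP kernel hyperparameters to the residuals on the kernel subset.
resid = y[kern_idx] - mean_model.predict(X[kern_idx])
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X[kern_idx], resid)

# Deployment-style prediction: mean model plus GP correction.
X_new = np.array([[3.3]])
pred = mean_model.predict(X_new) + gp.predict(X_new)
print(f"predicted scoop quality at x=3.3: {pred[0]:.2f}")
```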
    One of the challenges for this research is the lack of knowledge about ocean worlds like Europa.

    “Before we sent the recent rovers to Mars, orbiters gave us pretty good information about the terrain features,” Thangeda said. “But the best image we have of Europa has a resolution of 256 to 340 meters per pixel, which is not clear enough to ascertain features.”
    Thangeda’s adviser Melkior Ornik said, “All we know is that Europa’s surface is ice, but it could be big blocks of ice or much finer like snow. We also don’t know what’s underneath the ice.”
    For some trials, the team hid material under a layer of something else. The robot only sees the top material and thinks it might be good to scoop. “When it actually scoops and hits the bottom layer, it learns it is unscoopable and moves to a different area,” Thangeda said.
    NASA wants to send battery-powered rovers to Europa rather than nuclear-powered ones because, among other mission-specific considerations, it is critical to minimize the risk of contaminating ocean worlds with potentially hazardous materials.
    “Although nuclear power supplies have a lifespan of months, batteries have about a 20-day lifespan. We can’t afford to waste a few hours a day to send messages back and forth. This provides another reason why the robot’s autonomy to make decisions on its own is vital,” Thangeda said.

    This method of learning to learn is also unique because it allows the robot to use vision and very little on-line experience to achieve high-quality scooping actions on unfamiliar terrains — significantly outperforming non-adaptive methods and other state-of-the-art meta-learning methods.
    The team used a robot in the Department of Computer Science at Illinois that is modeled after the arm of a lander, with sensors to collect scooping data on a variety of materials, from 1-millimeter grains of sand to 8-centimeter rocks, as well as materials of very different volumes, such as shredded cardboard and packing peanuts. From these 12 materials, and from terrains made of unique compositions of one or more of them, the researchers built a database containing 100 points of knowledge for each of 67 different terrains, or 6,700 points in total.
    “To our knowledge, we are the first to open source a large-scale dataset on granular media,” Thangeda said. “We also provided code to easily access the dataset so others can start using it in their applications.”
    The model the team created will be deployed at NASA’s Jet Propulsion Laboratory’s Ocean World Lander Autonomy Testbed.
    “We’re interested in developing autonomous robotic capabilities on extraterrestrial surfaces, and in particular challenging extraterrestrial surfaces,” Ornik said. “This unique method will help inform NASA’s continuing interest in exploring ocean worlds.
    “The value of this work is in adaptability and transferability of knowledge or methods from Earth to an extraterrestrial body, because it is clear that we will not have a lot of information before the lander gets there. And because of the short battery lifespan, we won’t have a long time for the learning process. The lander might last for just a few days, then die, so learning and making decisions autonomously is extremely beneficial.”
    The open-source dataset is available at: drillaway.github.io/scooping-dataset.html.

  • Researcher turns one of the basic rules of construction upside down

    An Aston University researcher has turned one of the basic rules of construction on its head.
    For centuries a hanging chain has been used as an example to explain how masonry arches stand.
    Structural engineers are familiar with seventeenth-century scientist Robert Hooke’s theory that a hanging chain will mirror the shape of an upstanding rigid arch.
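    For reference, the hanging-chain shape itself is the classical catenary, y = a·cosh(x/a); Hooke's analogy takes the flipped curve as the ideal arch centerline. The short sketch below, a standard textbook exercise rather than anything from the new paper, solves for the catenary through a given span and sag and inverts it.

```python
import numpy as np
from scipy.optimize import brentq

# A hanging chain forms a catenary, depth = a*(cosh(x/a) - 1) below its supports;
# flipping it upside down gives Hooke's candidate centerline for an arch.
span, sag = 10.0, 3.0          # example chain: 10 m between supports, 3 m of sag

# Solve sag = a*(cosh(span/(2a)) - 1) for the catenary parameter a.
f = lambda a: a * (np.cosh(span / (2 * a)) - 1) - sag
a = brentq(f, 0.1, 100.0)

x = np.linspace(-span / 2, span / 2, 11)
chain = a * (np.cosh(x / a) - 1)    # depth of the chain below the supports
arch = sag - chain                  # the same curve flipped upright
print(f"catenary parameter a = {a:.3f} m")
print(np.round(arch, 2))            # arch rise above the springing line
```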
    However, research from Aston University’s College of Engineering and Physical Sciences shows that this commonly held belief is incorrect: despite the similarities, the hanging chain and the arch are two incompatible mechanical systems.
    Dr Haris Alexakis drew on the transition from Newtonian to Lagrangian mechanics, a shift that underpinned the development of modern physics and mathematics, to prove this with mathematical rigour.
    In his paper, “Vector analysis and the stationary potential energy for assessing equilibrium of curved masonry structures”, he revisits the equilibrium of the hanging chain and the arch, explaining that the two systems operate in different spatial frameworks. One consequence is that the hanging chain requires only translational equilibrium, whereas the inverted arch needs both translational and rotational equilibrium. As a result, the solutions are always different.

    Dr Alexakis’s analysis unearthed subtle inconsistencies in the way Hooke’s analogy has been interpreted and applied over the centuries for the design and safety assessment of arches, and highlights its practical limitations.
    He said: “The analogy between inverted hanging chains and the optimal shape of masonry arches is a concept deeply rooted in our structural analysis practices.
    “Curved structures have enabled masons, engineers, and architects to carry heavy loads and cover large spans with the use of low-tensile strength materials for centuries, while creating the marvels of the world’s architectural heritage.
    “Despite the long history of these practices, finding optimal structural forms and assessing the stability and safety of curved structures remains as topical as ever. This is due to an increasing interest to preserve heritage structures and reduce material use in construction, while replacing steel and concrete with low-carbon natural materials.”
    His paper, which is published in the journal Mathematics and Mechanics of Solids, suggests a new structural analysis method based on the principle of stationary potential energy which would be faster, more flexible and help calculate more complex geometries.
    As a result, analysts won’t need to consider the equilibrium of each individual block or geometrically describe the load path of thrust forces to obtain a rigorous solution.
    Dr Alexakis added: “The analysis tools discussed in the paper will enable us to assess the condition and safety of heritage structures and build more sustainable curved structures, like vaults and shells.
    “The major advantage of these structures, apart from having appealing aesthetics, is that they can have reduced volume, and can be made of economic, low-tensile-strength and low-carbon natural materials, contributing to net zero construction.”

  • World’s largest association of computing professionals issues Principles for Generative AI Technologies

    In response to major advances in Generative AI technologies — as well as the significant questions these technologies pose in areas including intellectual property, the future of work, and even human safety — the Association for Computing Machinery’s global Technology Policy Council (ACM TPC) has issued “Principles for the Development, Deployment, and Use of Generative AI Technologies.”
    Drawing on the deep technical expertise of computer scientists in the United States and Europe, the ACM TPC statement outlines eight principles intended to foster fair, accurate, and beneficial decision-making concerning generative and all other AI technologies. Four of the principles are specific to Generative AI, and an additional four principles are adapted from the TPC’s 2022 “Statement on Principles for Responsible Algorithmic Systems.”
    The Introduction to the new Principles advances the core argument that “the increasing power of Generative AI systems, the speed of their evolution, broad application, and potential to cause significant or even catastrophic harm, means that great care must be taken in researching, designing, developing, deploying, and using them. Existing mechanisms and modes for avoiding such harm likely will not suffice.”
    The document then sets out these eight instrumental principles, outlined here in abbreviated form:
    Generative AI-Specific Principles:
    • Limits and guidance on deployment and use: In consultation with all stakeholders, law and regulation should be reviewed and applied as written or revised to limit the deployment and use of Generative AI technologies when required to minimize harm. No high-risk AI system should be allowed to operate without clear and adequate safeguards, including a “human in the loop” and clear consensus among relevant stakeholders that the system’s benefits will substantially outweigh its potential negative impacts. One approach is to define a hierarchy of risk levels, with unacceptable risk at the highest level and minimal risk at the lowest level.
    • Ownership: Inherent aspects of how Generative AI systems are structured and function are not yet adequately accounted for in intellectual property (IP) law and regulation.
    • Personal data control: Generative AI systems should allow a person to opt out of their data being used to train a system or facilitate its generation of information.
    • Correctability: Providers of Generative AI systems should create and maintain public repositories where errors made by the system can be noted and, optionally, corrections made.
    Adapted Prior Principles:
    • Transparency: Any application or system that utilizes Generative AI should conspicuously disclose that it does so to the appropriate stakeholders.
    • Auditability and contestability: Providers of Generative AI systems should ensure that system models, algorithms, data, and outputs can be recorded where possible (with due consideration to privacy), so that they may be audited and/or contested in appropriate cases.
    • Limiting environmental impact: Given the large environmental impact of Generative AI models, we recommend that consensus on methodologies be developed to measure, attribute, and actively reduce such impact.
    • Heightened security and privacy: Generative AI systems are susceptible to a broad range of new security and privacy risks, including new attack vectors and malicious data leaks, among others.

    “Our field needs to tread carefully with the development of Generative AI because this is a new paradigm that goes significantly beyond previous AI technology and applications,” explained Ravi Jain, Chair of the ACM Technology Policy Council’s Working Group on Generative AI and lead author of the Principles. “Whether you celebrate Generative AI as a wonderful scientific advancement or fear it, everyone agrees that we need to develop this technology responsibly. In outlining these eight instrumental principles, we’ve tried to consider a wide range of areas where Generative AI might have an impact. These include aspects that have not been covered as much in the media, including environmental considerations and the idea of creating public repositories where errors in a system can be noted and corrected.”
    “These are guidelines, but we must also build a community of scientists, policymakers, and industry leaders who will work together in the public interest to understand the limits and risks of Generative AI as well as its benefits. ACM’s position as the world’s largest association for computing professionals makes us well-suited to foster that consensus and look forward to working with policy makers to craft the regulations by which Generative AI should be developed, deployed, but also controlled,” added James Hendler, Professor at Rensselaer Polytechnic Institute and Chair of ACM’s Technology Policy Council.
    “Principles for the Development, Deployment, and Use of Generative AI Technologies” was jointly produced and adopted by ACM’s US Technology Policy Committee (USTPC) and Europe Technology Policy Committee (Europe TPC).
    Lead authors of this document for USTPC were Ravi Jain, Jeanna Matthews, and Alejandro Saucedo. Important contributions were made by Harish Arunachalam, Brian Dean, Advait Deshpande, Simson Garfinkel, Andrew Grosso, Jim Hendler, Lorraine Kisselburgh, Srivatsa Kundurthy, Marc Rotenberg, Stuart Shapiro, and Ben Shneiderman. Assistance also was provided by Ricardo Baeza-Yates, Michel Beaudouin-Lafon, Vint Cerf, Charalampos Chelmis, Paul DeMarinis, Nicholas Diakopoulos, Janet Haven, Ravi Iyer, Carlos E. Jimenez-Gomez, Mark Pastin, Neeti Pokhriyal, Jason Schmitt, and Darryl Scriven.

  • Acoustics researchers decompose sound accurately into its three basic components

    Researchers have been looking for ways to decompose sound into its basic ingredients for over 200 years. In the 1820s, French scientist Joseph Fourier proposed that any signal, including sounds, can be built from sufficiently many sine waves. These waves sound like whistles; each has its own frequency, level and start time, and together they are the basic building blocks of sound.
    However, some sounds, such as a flute note or a breathy human voice, may require hundreds or even thousands of sine waves to imitate the original waveform exactly. This is because such sounds have a less harmonic, noisier structure in which many frequencies occur at once. One solution is to divide sound into two types of components, sines and noise: a smaller number of whistling sine waves combined with variable noise, or hiss, to complete the imitation.
    Even this ‘complete’ two-component sound model has a problem: it smooths over the sharp beginnings of sound events, such as consonants in speech or drum hits in music. A third component, called the transient, was introduced around the year 2000 to help model the sharpness of such sounds. Transients alone sound like clicks. Since then, sound has often been divided into three components: sines, noise, and transients.
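    To get a rough feel for such a three-way split, an off-the-shelf harmonic-percussive-residual separation (for example, in the librosa library) loosely maps onto sines, transients and noise. The sketch below is that common baseline, not the enhanced Aalto method described here, and the audio file name is a placeholder.

```python
import librosa
import soundfile as sf

# Load any short music clip (placeholder path).
y, sr = librosa.load("example_clip.wav", sr=None, mono=True)

# Harmonic-percussive-residual separation on the STFT: harmonic ~ sines,
# percussive ~ transients, residual ~ noise. A common baseline, not the
# Aalto decomposition discussed in the article.
S = librosa.stft(y)
H, P = librosa.decompose.hpss(S, margin=3.0)   # stricter masks leave a residual
R = S - H - P

y_sines = librosa.istft(H, length=len(y))
y_transients = librosa.istft(P, length=len(y))
y_noise = librosa.istft(R, length=len(y))

for name, comp in [("sines", y_sines), ("transients", y_transients),
                   ("noise", y_noise)]:
    sf.write(f"{name}.wav", comp, sr)   # listen to each component separately
```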
    The three-component model of sines, noise and transients has now been refined by researchers at Aalto University Acoustics Lab, using ideas from auditory perception, fuzzy logic, and perfect reconstruction.
    Decomposition mirrors the way we hear sounds
    Doctoral researcher Leonardo Fierro and professor Vesa Välimäki realized the way that people hear the different components and separate whistles, clicks, and hisses is important. If a click gets spread in time, it starts to ring and sound noisier; by contrast, focusing on very brief sounds might cause some loss of tonality.

    This insight from auditory perception was coupled with fuzzy logic: at any moment, part of the sound can belong to each of the three classes of sines, transients or noise, not just one of them. With the goal of perfect reconstruction, Fierro optimized the way sound is decomposed.
    In the enhanced method, sines and transients are treated as two opposite characteristics of sound, and a given part of the sound is not allowed to belong to both classes at the same time. However, either of these two opposite component types can still occur simultaneously with noise, so the idea of fuzzy logic is present in a restricted form. The noise acts as a fuzzy link between the sines and transients, describing all the nuances of the sound that are not captured by simple clicks and whistles. ‘It’s like finding the missing piece of a puzzle to connect those two parts that did not fit together before,’ says Fierro.
    This enhanced decomposition method was compared with previous methods in a listening test. Eleven experienced listeners were individually asked to listen to several short music excerpts and to the components extracted from them using the different methods.
    Based on the listeners’ ratings, the new method emerged as the best way to decompose most sounds. Only when there is strong vibrato in a musical sound, such as a singing voice or a violin, do all decomposition methods struggle, and in those cases some previous methods are superior.
    A test use case for the new decomposition method is the time-scale modification of sound, especially slowing down of music. This was tested in a preference listening test against the lab’s own previous method, which was selected as the best academic technique in a comparative study a few years ago. Again, Fierro’s new method was a clear winner.
    ‘The new sound decomposition method opens many exciting possibilities in sound processing,’ says professor Välimäki. ‘The slowing down of sound is currently our main interest. It is striking that for example in sports news, the slow-motion videos are always silent. The reason is probably that the sound quality in current slow-down audio tools is not good enough. We have already started developing better time-scale modification methods, which use a deep neural network to help stretch some components.’
    The high-quality sound decomposition also enables novel types of music remixing techniques. One of them leads to distortion-free dynamic range compression. Namely, the transient component often contains the loudest peaks in the sound waveform, so simply reducing the level of the transient component and mixing it back with the others can limit the peak-to-peak value of audio.
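    As a follow-up to the sketch above, that remix is just a weighted sum of the separated components; scaling down the transient channel before summing trims the sharpest peaks. The gain value here is an arbitrary example.

```python
def remix(sines, transients, noise, transient_gain=0.5):
    """Recombine separated components, attenuating transients to tame peaks."""
    return sines + transient_gain * transients + noise

# Example, reusing the arrays from the decomposition sketch above:
# y_soft = remix(y_sines, y_transients, y_noise, transient_gain=0.5)
```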
    Leonardo Fierro demonstrates how the “SiTraNo” app can be used to break sound into its atoms — in this case himself rapping, in this video: https://youtu.be/nZldIAYzzOs