More stories

  • Software speeds up drug development

    Proteins not only carry out the functions that are critical for the survival of cells, but also influence the development and progression of diseases. To understand their role in health and disease, researchers study the three-dimensional atomic structure of proteins using both experimental and computational methods.
    Over 75 percent of proteins present at the surface of our cells are covered by glycans. These sugar-like molecules form very dynamic protective shields around the proteins. However, the mobility and variability of the sugars make it difficult to determine how these shields behave, or how they influence the binding of drug molecules.
    Mateusz Sikora, the project leader and head of the Dioscuri Centre for Modelling of Posttranslational Modifications, and his team in Krakow, together with partners at the Max Planck Institute of Biophysics in Frankfurt am Main, Germany, and scientists at Inserm in Paris, Academia Sinica in Taipei and the University of Bremen, have addressed this challenge computationally. Their powerful new algorithm GlycoSHIELD enables fast yet realistic modeling of the sugar chains present on protein surfaces. By reducing computing hours, and therefore power consumption, by several orders of magnitude compared to conventional simulation tools, GlycoSHIELD paves the way towards green computing.
    From thousands of hours to a few minutes
    Protective glycan shields strongly influence how proteins interact with other molecules such as therapeutic drugs. For example, the sugar layer on the spike protein of the coronavirus hides the virus from the immune system by making it difficult for natural or vaccine-induced antibodies to recognize the virus. The sugar shields therefore play an important role in drug and vaccine development. Pharmaceutical research could benefit from routinely predicting their morphology and dynamics. Until now, however, forecasting the structure of sugar layers using computer simulations was only possible with expert knowledge on special supercomputers. In many cases, thousands or even millions of computing hours were required.
    With GlycoSHIELD, Sikora’s team provides a fast, environmentally friendly open-source alternative. “Our approach reduces resources, computing time and the technical expertise needed,” says Sikora. “Anyone can now calculate the arrangement and dynamics of sugar molecules on proteins on their personal computer within minutes, without the need for expert knowledge and high-performance computers. Furthermore, this new way of making calculations is very energy efficient.” The software is not only useful in research but could also help in the development of drugs or vaccines, for example in immunotherapy for cancer.
    A jigsaw puzzle made of sugar
    How did the team manage to achieve such a large increase in efficiency? The authors created and analyzed a library of thousands of the most likely 3D poses of the most common forms of sugar chains on proteins found in humans and microorganisms. Using long simulations and experiments, they found that for a reliable prediction of glycan shields, it is sufficient to ensure that the attached sugars do not collide with membranes or with parts of the protein.
    The algorithm is based on these findings. “GlycoSHIELD users only have to specify the protein and the locations where the sugars are attached. Our software then puzzles them onto the protein surface in the most likely arrangement,” explains Sikora. “We could reproduce the sugar shields of the spike protein accurately: they look exactly like what we see in the experiments!” With GlycoSHIELD it is now possible to supplement new as well as existing protein structures with sugar information. The scientists also used GlycoSHIELD to reveal the pattern of the sugars on the GABAA receptor, an important target for sedatives and anesthetics. More
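    The grafting idea described in this story can be sketched in a few lines of Python. This is an illustration of the general approach, not the actual GlycoSHIELD code; the function name, the array layout and the 0.25 nm clash cutoff are assumptions made for the example.

      import numpy as np

      CLASH_CUTOFF = 0.25  # nm; assumed heavy-atom clash distance

      def graft_glycans(protein_xyz, site_frames, conformer_library):
          """Keep, for each glycosylation site, the library conformers that do
          not clash with the protein (or membrane) atoms."""
          shields = []
          for origin, rotation in site_frames:            # one local frame per site
              accepted = []
              for conf in conformer_library:              # (K, 3) coordinates per conformer
                  placed = conf @ rotation.T + origin     # move the conformer onto the site
                  dists = np.linalg.norm(placed[:, None, :] - protein_xyz[None, :, :], axis=-1)
                  if dists.min() > CLASH_CUTOFF:          # no steric clash, keep this pose
                      accepted.append(placed)
              shields.append(np.array(accepted))
          return shields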

  • Umbrella for atoms: The first protective layer for 2D quantum materials

    The race to create increasingly faster and more powerful computer chips continues as transistors, their fundamental components, shrink to ever smaller and more compact sizes. In a few years, these transistors will measure just a few atoms across — by which point, the miniaturization of the silicon technology currently used will have reached its physical limits. Consequently, the quest for alternative materials with entirely new properties is crucial for future technological advancements.
    Back in 2021, scientists from the Cluster of Excellence ct.qmat (Complexity and Topology in Quantum Matter) at JMU Würzburg and TU Dresden made a significant discovery: topological quantum materials such as indenene, which hold great promise for ultrafast, energy-efficient electronics. The resulting extremely thin quantum semiconductors are composed of a single atomic layer — in indenene’s case, indium atoms — and act as topological insulators, conducting electricity virtually without resistance along their edges.
    “Producing such a single atomic layer requires sophisticated vacuum equipment and a specific substrate material. To utilize this two-dimensional material in electronic components, it would need to be removed from the vacuum environment. However, exposure to air, even briefly, leads to oxidation, destroying its revolutionary properties and rendering it useless,” explains experimental physicist Professor Ralph Claessen, ct.qmat’s Würzburg spokesperson.
    The ct.qmat Würzburg team has now managed to solve this problem. Their results have been published in the journal Nature Communications.
    In Search of a Protective Coating
    “We dedicated two years to finding a method to protect the sensitive indenene layer from environmental elements using a protective coating. The challenge was ensuring that this coating did not interact with the indenene layer,” explains Cedric Schmitt, one of Claessen’s doctoral students involved in the project. This interaction is problematic because when different types of atoms — from the protective layer and the semiconductor, for instance — meet, they react chemically at the atomic level, changing the material. This isn’t a problem with conventional silicon chips, which comprise multiple atomic layers, leaving sufficient layers unaffected and hence still functional.
    “A semiconductor material consisting of a single atomic layer such as indenene would normally be compromised by a protective film. This posed a seemingly insurmountable challenge that piqued our research curiosity,” says Claessen. The search for a viable protective layer led them to explore van der Waals materials, named after the Dutch physicist Johannes Diderik van der Waals (1837-1923). Claessen explains: “These two-dimensional van der Waals atomic layers are characterized by strong internal bonds between their atoms, while only weakly bonding to the substrate. This concept is akin to how pencil lead made of graphite — a form of carbon with atoms arranged in honeycomb layers — writes on paper. The layers of graphene can be easily separated. We aimed to replicate this characteristic.”
    Success!

    Using sophisticated ultrahigh vacuum equipment, the Würzburg team experimented with heating silicon carbide (SiC) as a substrate for indenene, exploring the conditions needed to form graphene from it. “Silicon carbide consists of silicon and carbon atoms. Heating it causes the carbon atoms to detach from the surface and form graphene,” says Schmitt, elucidating the laboratory process. “We then vapor-deposited indium atoms, which are intercalated between the protective graphene layer and the silicon carbide substrate. This is how the protective layer for our two-dimensional quantum material indenene was formed.”
    Umbrella Unfurled
    For the first time globally, Claessen and his team at ct.qmat’s Würzburg branch successfully crafted a functional protective layer for a two-dimensional quantum semiconductor material without compromising its extraordinary quantum properties. After analyzing the fabrication process, they thoroughly tested the layer’s protective capabilities against oxidation and corrosion. “It works! The sample can even be exposed to water without being affected in any way,” says Claessen with delight. “The graphene layer acts like an umbrella for our indenene.”
    Toward Atomic Layer Electronics
    This breakthrough paves the way for applications involving highly sensitive semiconductor atomic layers. The manufacture of ultrathin electronic components requires them to be processed in air or other chemical environments. This has been made possible thanks to the discovery of this protective mechanism. The team in Würzburg is now focused on identifying more van der Waals materials that can serve as protective layers — and they already have a few prospects in mind. The snag is that despite graphene’s effective protection of atomic monolayers against environmental factors, its electrical conductivity poses a risk of short circuits. The Würzburg scientists are working on overcoming these challenges and creating the conditions for tomorrow’s atomic layer electronics. More

  • Researchers use AI, Google Street View to predict household energy costs on a large scale

    Low-income households in the United States are bearing an energy burden that is three times that of the average household, according to the U.S. Department of Energy.
    In total, more than 46 million U.S. households carry a significant energy burden — meaning they pay more than 6 percent of their gross income for basic energy expenses such as cooling and heating their homes.
    Passive design elements like natural ventilation can play a pivotal role in reducing energy consumption. By harnessing ambient energy sources like sunlight and wind, they can create a more comfortable environment at little or no cost. However, data on passive design is scarce, making it difficult to assess the energy savings on a large scale.
    To address that need, an interdisciplinary team of experts from the University of Notre Dame, in collaboration with faculty at the University of Maryland and the University of Utah, has found a way to use artificial intelligence to analyze a household’s passive design characteristics and predict its energy expenses with more than 74 percent accuracy.
    By combining their findings with demographic data including poverty levels, the researchers have created a comprehensive model for predicting energy burden across 1,402 census tracts and nearly 300,000 households in the Chicago metropolitan area. Their research was published this month in the journal Building and Environment.
    The results yield invaluable insights for policymakers and urban planners, allowing them to identify the neighborhoods that are most vulnerable and paving the way toward smart and sustainable cities, said Ming Hu, associate dean for research, scholarship and creative work in the School of Architecture.
    “When families cannot afford air conditioning or heat, it can lead to dire health risks,” Hu said. “And these risks are only exacerbated by climate change, which is expected to increase both the frequency and intensity of extreme temperature events. There is an urgent and real need to find low-cost, low-tech solutions to help reduce energy burden and to help families prepare for and adapt to our changing climate.”
    In addition to Hu, who is a concurrent associate professor in the College of Engineering, the Notre Dame research team includes Chaoli Wang, a professor of computer science and engineering; Siyuan Yao, a doctoral student in the Department of Computer Science and Engineering; Siavash Ghorbany, a doctoral student in the Department of Civil and Environmental Engineering and Earth Science; and Matthew Sisk, an associate professor of the practice in the Lucy Family Institute for Data and Society.

    Their research, which was funded by the Lucy Institute as part of the Health Equity Data Lab, focused on three of the most influential factors in passive design: the size of windows in the dwelling, the types of windows (operable or fixed) and the percentage of the building that has proper shading.
    Using a convolutional neural network, the team analyzed Google Street View images of residential buildings in Chicago and then applied different machine learning methods to find the best prediction model. Their results show that passive design characteristics are associated with average energy burden and are essential for prediction models.
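    A minimal sketch of this two-stage idea, written for illustration rather than taken from the paper: a convolutional network (assumed to run elsewhere) labels each facade with passive-design features, and a supervised regressor then maps those features, plus census demographics, to energy burden. The feature names, the synthetic data and the choice of gradient boosting are all assumptions for the example.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)

      # Hypothetical per-tract features: [window-to-wall ratio, share of operable
      # windows, shaded-facade fraction, poverty rate], as a CNN might extract them.
      X = rng.random((1000, 4))
      # Hypothetical target: energy burden, the share of gross income spent on energy.
      y = 3 + 8 * X[:, 3] - 2 * X[:, 2] + rng.normal(0, 0.5, 1000)

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
      model = GradientBoostingRegressor().fit(X_train, y_train)
      print("held-out R^2:", round(model.score(X_test, y_test), 2))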
    “The first step toward mitigating the energy burden for low-income families is to get a better understanding of the issue and to be able to measure and predict it,” Ghorbany said. “So, we asked, ‘What if we could use everyday tools and technologies like Google Street View, combined with the power of machine learning, to gather this information?’ We hope it will be a positive step toward energy justice in the United States.”
    The resulting model is easily scalable and far more efficient than previous methods of energy auditing, which required researchers to go building by building through an area.
    Over the next few months, the team will work with Notre Dame’s Center for Civic Innovation to evaluate residences in the local South Bend and Elkhart communities. Being able to use this model to quickly and efficiently get information to the organizations that can help local families is an exciting next step for this work, Sisk said.
    “When you have an increased energy burden, where is that money being taken away from? Is it being taken from educational opportunities or nutritious food? Is it then contributing to that population becoming more disenfranchised as time goes on?” Sisk said. “When we look at systemic issues like poverty, there is no one thing that will fix it. But when there’s a thread we can pull, when there are actionable steps that can start to make it a little bit better, that’s really powerful.”
    The researchers are also working toward including additional passive design characteristics in the analysis, such as insulation, cool roofs and green roofs. And eventually, they hope to scale the project up to evaluate and address energy burden disparities at the national level.

    For Hu, the project is emblematic of the University’s commitments to both sustainability and helping a world in need.
    “This is an issue of environmental justice. And this is what we do so well at Notre Dame — and what we should be doing,” she said. “We want to use advancements like AI and machine learning not just because they are cutting-edge technologies, but for the common good.” More

  • AI technique ‘decodes’ microscope images, overcoming fundamental limit

    Atomic force microscopy, or AFM, is a widely used technique that can quantitatively map material surfaces in three dimensions, but its accuracy is limited by the size of the microscope’s probe. A new AI technique overcomes this limitation and allows microscopes to resolve material features smaller than the probe’s tip.
    The deep learning algorithm developed by researchers at the University of Illinois Urbana-Champaign is trained to remove the effects of the probe’s width from AFM microscope images. As reported in the journal Nano Letters, the algorithm surpasses other methods in giving the first true three-dimensional surface profiles at resolutions below the width of the microscope probe tip.
    “Accurate surface height profiles are crucial to nanoelectronics development as well as scientific studies of material and biological systems, and AFM is a key technique that can measure profiles noninvasively,” said Yingjie Zhang, a U. of I. materials science & engineering professor and the project lead. “We’ve demonstrated how to be even more precise and see things that are even smaller, and we’ve shown how AI can be leveraged to overcome a seemingly insurmountable limitation.”
    Often, microscopy techniques can provide only two-dimensional images, essentially giving researchers aerial photographs of material surfaces. AFM provides full topographical maps that accurately show the height profiles of surface features. These three-dimensional images are obtained by moving a probe across the material’s surface and measuring its vertical deflection.
    If surface features approach the size of the probe’s tip — about 10 nanometers — then they cannot be resolved by the microscope because the probe becomes too large to “feel out” the features. Microscopists have been aware of this limitation for decades, but the U. of I. researchers are the first to give a deterministic solution.
    “We turned to AI and deep learning because we wanted to get the height profile — the exact roughness — without the inherent limitations of more conventional mathematical methods,” said Lalith Bonagiri, a graduate student in Zhang’s group and the study’s lead author.
    The researchers developed a deep learning algorithm with an encoder-decoder framework. It first “encodes” raw AFM images by decomposing them into abstract features. After the feature representation is manipulated to remove the undesired effects, it is then “decoded” back into a recognizable image.
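    The encoder-decoder idea can be sketched with a toy network. This is not the published architecture; the layer counts and sizes are placeholders chosen only to show the encode, manipulate, decode structure, and the input is a single-channel height map.

      import torch
      import torch.nn as nn

      class TipRemovalNet(nn.Module):
          """Toy encoder-decoder for AFM height maps (placeholder layer sizes)."""
          def __init__(self):
              super().__init__()
              self.encoder = nn.Sequential(                  # compress the image into features
                  nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
              )
              self.decoder = nn.Sequential(                  # reconstruct a corrected height map
                  nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                  nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
              )

          def forward(self, x):                              # x: (batch, 1, H, W) heights
              return self.decoder(self.encoder(x))

      net = TipRemovalNet()
      print(net(torch.randn(1, 1, 64, 64)).shape)            # torch.Size([1, 1, 64, 64])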

    To train the algorithm, the researchers generated artificial images of three-dimensional structures and simulated their AFM readouts. The algorithm was then constructed to transform the simulated AFM images with probe-size effects and extract the underlying features.
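    One standard way to mimic such readouts, used here purely as an illustration and not necessarily the authors’ exact procedure, is to treat the measured image as the morphological (grey) dilation of the true surface by the tip shape: at each lateral position the image records the height of the tip apex when the tip first touches the surface. A small sketch with SciPy:

      import numpy as np
      from scipy.ndimage import grey_dilation

      # Toy ground-truth surface: one hemispherical nanoparticle (heights in nm).
      x = np.arange(-30, 31)
      X, Y = np.meshgrid(x, x)
      surface = np.sqrt(np.clip(8.0**2 - X**2 - Y**2, 0, None))

      # Idealized hemispherical tip of 10 nm radius, apex at height zero.
      xt = np.arange(-10, 11)
      XT, YT = np.meshgrid(xt, xt)
      tip = np.sqrt(np.clip(10.0**2 - XT**2 - YT**2, 0, None)) - 10.0

      # Simulated AFM readout: dilation broadens features laterally but keeps peak heights.
      afm_image = grey_dilation(surface, structure=tip)
      print(surface.max(), afm_image.max())                # identical peak heights
      print((surface > 0).sum(), (afm_image > 0).sum())    # broadened footprint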
    “We actually had to do something nonstandard to achieve this,” Bonagiri said. “The first step of typical AI image processing is to rescale the brightness and contrast of the images against some standard to simplify comparisons. In our case, though, the absolute brightness and contrast is the part that’s meaningful, so we had to forgo that first step. That made the problem much more challenging.”
    To test their algorithm, the researchers synthesized gold and palladium nanoparticles with known dimensions on a silicon host. The algorithm successfully removed the probe tip effects and correctly identified the three-dimensional features of the nanoparticles.
    “We’ve given a proof-of-concept and shown how to use AI to significantly improve AFM images, but this work is only the beginning,” Zhang said. “As with all AI algorithms, we can improve it by training it on more and better data, but the path forward is clear.”
    The experiments were carried out in the Carl R. Woese Institute for Genomic Biology and the Materials Research Laboratory at the U. of I.
    Support was provided by the National Science Foundation and the Arnold and Mabel Beckman Foundation. More

  • New AI model could streamline operations in a robotic warehouse

    Hundreds of robots zip back and forth across the floor of a colossal robotic warehouse, grabbing items and delivering them to human workers for packing and shipping. Such warehouses are increasingly becoming part of the supply chain in many industries, from e-commerce to automotive production.
    However, getting 800 robots to and from their destinations efficiently while keeping them from crashing into each other is no easy task. It is such a complex problem that even the best path-finding algorithms struggle to keep up with the breakneck pace of e-commerce or manufacturing.
    In a sense, these robots are like cars trying to navigate a crowded city center. So, a group of MIT researchers who use AI to mitigate traffic congestion applied ideas from that domain to tackle this problem.
    They built a deep-learning model that encodes important information about the warehouse, including the robots, planned paths, tasks, and obstacles, and uses it to predict the best areas of the warehouse to decongest to improve overall efficiency.
    Their technique divides the warehouse robots into groups, so these smaller groups of robots can be decongested faster with traditional algorithms used to coordinate robots. In the end, their method decongests the robots nearly four times faster than a strong random search method.
    In addition to streamlining warehouse operations, this deep learning approach could be used in other complex planning tasks, like computer chip design or pipe routing in large buildings.
    “We devised a new neural network architecture that is actually suitable for real-time operations at the scale and complexity of these warehouses. It can encode hundreds of robots in terms of their trajectories, origins, destinations, and relationships with other robots, and it can do this in an efficient manner that reuses computation across groups of robots,” says Cathy Wu, the Gilbert W. Winslow Career Development Assistant Professor in Civil and Environmental Engineering (CEE), and a member of the Laboratory for Information and Decision Systems (LIDS) and the Institute for Data, Systems, and Society (IDSS).

    Wu, senior author of a paper on this technique, is joined by lead author Zhongxia Yan, a graduate student in electrical engineering and computer science. The work will be presented at the International Conference on Learning Representations.
    Robotic Tetris
    From a bird’s eye view, the floor of a robotic e-commerce warehouse looks a bit like a fast-paced game of “Tetris.”
    When a customer order comes in, a robot travels to an area of the warehouse, grabs the shelf that holds the requested item, and delivers it to a human operator who picks and packs the item. Hundreds of robots do this simultaneously, and if two robots’ paths conflict as they cross the massive warehouse, they might crash.
    Traditional search-based algorithms avoid potential crashes by keeping one robot on its course and replanning a trajectory for the other. But with so many robots and potential collisions, the problem quickly grows exponentially.
    “Because the warehouse is operating online, the robots are replanned about every 100 milliseconds. That means that every second, a robot is replanned 10 times. So, these operations need to be very fast,” Wu says.

    Because time is so critical during replanning, the MIT researchers use machine learning to focus the replanning on the most actionable areas of congestion — where there exists the most potential to reduce the total travel time of robots.
    Wu and Yan built a neural network architecture that considers smaller groups of robots at the same time. For instance, in a warehouse with 800 robots, the network might cut the warehouse floor into smaller groups that contain 40 robots each.
    Then, it predicts which group has the most potential to improve the overall solution if a search-based solver were used to coordinate trajectories of robots in that group.
    An iterative process, the overall algorithm picks the most promising robot group with the neural network, decongests the group with the search-based solver, then picks the next most promising group with the neural network, and so on.
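    The shape of that loop can be illustrated with a small, self-contained toy (not the MIT code): a stand-in for the neural network scores each group, the most promising group is handed to a stand-in for the search-based solver, and the process repeats.

      import random

      random.seed(0)
      groups = {g: random.uniform(0.0, 1.0) for g in range(20)}    # per-group congestion scores

      def neural_score(congestion):               # stand-in for the learned group scorer
          return congestion

      def search_based_decongest(congestion):     # stand-in for the search-based solver
          return congestion * 0.3                 # pretend replanning removes most of the delay

      for step in range(5):
          best = max(groups, key=lambda g: neural_score(groups[g]))   # pick most promising group
          groups[best] = search_based_decongest(groups[best])         # decongest only that group
          print(f"round {step}: replanned group {best}, remaining delay {sum(groups.values()):.2f}")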
    Considering relationships
    The neural network can reason about groups of robots efficiently because it captures complicated relationships that exist between individual robots. For example, even though one robot may be far away from another initially, their paths could still cross during their trips.
    The technique also streamlines computation by encoding constraints only once, rather than repeating the process for each subproblem. For instance, in a warehouse with 800 robots, decongesting a group of 40 robots requires holding the other 760 robots as constraints. Other approaches require reasoning about all 800 robots once per group in each iteration.
    Instead, the researchers’ approach only requires reasoning about the 800 robots once across all groups in each iteration.
    “The warehouse is one big setting, so a lot of these robot groups will have some shared aspects of the larger problem. We designed our architecture to make use of this common information,” Wu adds.
    They tested their technique in several simulated environments, including some set up like warehouses, some with random obstacles, and even maze-like settings that emulate building interiors.
    By identifying more effective groups to decongest, their learning-based approach decongests the warehouse up to four times faster than strong, non-learning-based approaches. Even when they factored in the additional computational overhead of running the neural network, their approach still solved the problem 3.5 times faster.
    In the future, the researchers want to derive simple, rule-based insights from their neural model, since the decisions of the neural network can be opaque and difficult to interpret. Simpler, rule-based methods could also be easier to implement and maintain in actual robotic warehouse settings.
    This work was supported by Amazon and the MIT Amazon Science Hub. More

  • Researchers look at environmental impacts of AI tools

    As artificial intelligence (AI) is increasingly used in radiology, it is essential to consider the environmental impact of AI tools, researchers caution in a focus article published today in Radiology, a journal of the Radiological Society of North America (RSNA).
    Health care and medical imaging contribute significantly to the greenhouse gas (GHG) emissions fueling global climate change. AI tools can improve both the practice and the sustainability of radiology: optimized imaging protocols shorten scan times, improved scheduling efficiency reduces patient travel, and decision-support tools reduce low-value imaging. But there is a downside to AI utilization.
    “Medical imaging generates a lot of greenhouse gas emissions, but we often don’t think about the environmental impact of associated data storage and AI tools,” said Kate Hanneman, M.D., M.P.H., vice chair of research and associate professor at the University of Toronto and deputy lead of sustainability at the Joint Department of Medical Imaging, Toronto General Hospital. “The development and deployment of AI models consume large amounts of energy, and the data storage needs in medical imaging and AI are growing exponentially.”
    Dr. Hanneman and a team of researchers looked at the benefits and downsides of incorporating AI tools into radiology. AI offers the potential to improve workflows, accelerate image acquisition, reduce costs and improve the patient experience. However, the energy required to develop AI tools and store the associated data contributes significantly to GHG emissions.
    “We need to do a balancing act, bridging to the positive effects while minimizing the negative impacts,” Dr. Hanneman said. “Improving patient outcomes is our ultimate goal, but we want to do that while using less energy and generating less waste.”
    Developing AI models requires large amounts of training data that health care institutions must store along with the billions of medical images generated annually. Many health systems use cloud storage, meaning the data is stored off-site and accessed electronically when needed.
    “Even though we call it cloud storage, data are physically housed in centers that typically require large amounts of energy to power and cool,” Dr. Hanneman said. “Recent estimates suggest that the total global GHG emissions from all data centers are greater than those of the airline industry, which is absolutely staggering.”
    The location of a data center has a massive impact on its sustainability: centers in cooler climates, or in areas where renewable energy sources are available, have a smaller footprint.

    To minimize the overall environmental impact of data storage, the researchers recommended sharing resources and, where possible, collaborating with other providers and partners to distribute the expended energy more broadly.
    To decrease GHG emissions from data storage and the AI model development process, the researchers also offered other suggestions. These included exploring computationally efficient AI algorithms, selecting hardware that requires less energy, using data compression techniques, removing redundant data, implementing tiered storage systems and partnering with providers that use renewable energy.
    “Departments that manage their cloud storage can take immediate action by choosing a sustainable partner,” she said.
    Dr. Hanneman said that although challenges and knowledge gaps remain, including limited data on radiology-specific GHG emissions, resource constraints and complex regulations, she hopes sustainability will become a quality metric in the decision-making process around AI and radiology.
    “Environmental costs should be considered along with financial costs in health care and medical imaging,” she said. “I believe AI can help us improve sustainability if we apply the tools judiciously. We just need to be mindful and aware of its energy usage and GHG emissions.” More

  • Diamonds are a chip’s best friend

    Besides being “a girl’s best friend,” diamonds have broad industrial applications, such as in solid-state electronics. New technologies aim to produce high-purity synthetic crystals that become excellent semiconductors when doped with impurities as electron donors or acceptors of other elements.
    These extra electrons — or holes — do not participate in atomic bonding but sometimes bind to excitons — quasi-particles consisting of an electron and an electron hole — in semiconductors and other condensed matter. Doping may cause physical changes, but how the exciton complex — a bound state of two positively-charged holes and one negatively-charged electron — manifests in diamonds doped with boron has remained unconfirmed. Two conflicting interpretations exist of the exciton’s structure.
    An international team of researchers led by Kyoto University has now determined the magnitude of the spin-orbit interaction in acceptor-bound excitons in a semiconductor.
    “We broke through the energy resolution limit of conventional luminescence measurements by directly observing the fine structure of bound excitons in boron-doped blue diamond, using optical absorption,” says team leader Nobuko Naka of KyotoU’s Graduate School of Science.
    “We hypothesized that, in an exciton, two positively charged holes are more strongly bound than an electron-and-hole pair,” adds first author Shinya Takahashi. “This acceptor-bound exciton structure yielded two triplets separated by a spin-orbit splitting of 14.3 meV, supporting the hypothesis.”
    Luminescence resulting from thermal excitation can be used to observe high-energy states, but this current measurement method broadens spectral lines and blurs ultra-fine splitting.
    Instead, Naka’s team cooled the diamond crystal to cryogenic temperatures, obtaining nine peaks on the deep-ultraviolet absorption spectrum, compared to the usual four using luminescence. In addition, the researchers developed an analytical model including the spin-orbit effect to predict the energy positions and absorption intensities.
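    A rough back-of-envelope note, not a claim from the paper: at room temperature the thermal energy k_B·T is about 26 meV, larger than the 14.3 meV spin-orbit splitting reported above, whereas at cryogenic temperatures of a few kelvin it drops below 1 meV, which helps explain why cooling the crystal lets fine structure of this size show up in absorption.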
    “In future studies, we are considering the possibility of measuring absorption under external fields, leading to further line splitting and validation due to changes in symmetry,” says Université Paris-Saclay’s Julien Barjon.
    “Our results provide useful insights into spin-orbit interactions in systems beyond solid-state materials, such as atomic and nuclear physics. A deeper understanding of materials may improve the performance of diamond devices, such as light-emitting diodes, quantum emitters, and radiation detectors,” notes Naka. More

  • Pythagoras was wrong: there are no universal musical harmonies, new study finds

    The tone and tuning of musical instruments have the power to manipulate our appreciation of harmony, new research shows. The findings challenge centuries of Western music theory and encourage greater experimentation with instruments from different cultures.
    According to the Ancient Greek philosopher Pythagoras, ‘consonance’ — a pleasant-sounding combination of notes — is produced by special relationships between simple numbers such as 3 and 4. More recently, scholars have tried to find psychological explanations, but these ‘integer ratios’ are still credited with making a chord sound beautiful, and deviation from them is thought to make music ‘dissonant’, unpleasant sounding.
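    To make those ratios concrete (an illustration, not part of the study): the Pythagorean intervals are frequency ratios such as 3:2 for a perfect fifth and 4:3 for a perfect fourth, and deviations from them are usually measured in cents, with 1200 cents to the octave.

      import math

      def cents(ratio):
          """Interval size in cents (1200 cents = one octave)."""
          return 1200 * math.log2(ratio)

      print(round(cents(3 / 2), 2))            # just perfect fifth: 701.96 cents
      print(round(cents(4 / 3), 2))            # just perfect fourth: 498.04 cents
      print(round(cents(2 ** (7 / 12)), 2))    # equal-tempered fifth: 700.0 cents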
    But researchers from Cambridge University, Princeton and the Max Planck Institute for Empirical Aesthetics have now discovered two key ways in which Pythagoras was wrong.
    Their study, published in Nature Communications, shows that in normal listening contexts, we do not actually prefer chords to be perfectly in these mathematical ratios.
    “We prefer slight amounts of deviation. We like a little imperfection because this gives life to the sounds, and that is attractive to us,” said co-author, Dr Peter Harrison, from Cambridge University’s Faculty of Music and Director of its Centre for Music and Science.
    The researchers also found that the role played by these mathematical relationships disappears when you consider certain musical instruments that are less familiar to Western musicians, audiences and scholars. These instruments tend to be bells, gongs, types of xylophones and other kinds of pitched percussion instruments. In particular, they studied the ‘bonang’, an instrument from the Javanese gamelan built from a collection of small gongs.
    “When we use instruments like the bonang, Pythagoras’s special numbers go out the window and we encounter entirely new patterns of consonance and dissonance,” Dr Harrison said.

    “The shape of some percussion instruments means that when you hit them, and they resonate, their frequency components don’t respect those traditional mathematical relationships. That’s when we find interesting things happening.”
    “Western research has focused so much on familiar orchestral instruments, but other musical cultures use instruments that, because of their shape and physics, are what we would call ‘inharmonic’.”
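    A rough textbook illustration of what “inharmonic” means here (not data from the study): a stretched string produces overtones at integer multiples of its fundamental, whereas the bending modes of an ideal free bar, a crude stand-in for a gong or metallophone key, fall at ratios close to 1 : 2.76 : 5.40 : 8.93. The sketch below uses the common (2n + 1)² approximation for the bar.

      import numpy as np

      n = np.arange(1, 6)
      string_partials = n.astype(float)            # ideal string: 1, 2, 3, 4, 5 x fundamental

      m = 2 * n + 1                                # free-bar modes scale roughly as (2n + 1)^2
      bar_partials = m.astype(float) ** 2 / m[0] ** 2

      print(string_partials)                       # [1. 2. 3. 4. 5.]
      print(bar_partials.round(2))                 # [ 1.    2.78  5.44  9.   13.44]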
    The researchers created an online laboratory in which over 4,000 people from the US and South Korea participated in 23 behavioural experiments. Participants were played chords and invited to give each a numeric pleasantness rating or to use a slider to adjust particular notes in a chord to make it sound more pleasant. The experiments produced over 235,000 human judgments.
    The experiments explored musical chords from different perspectives. Some zoomed in on particular musical intervals and asked participants to judge whether they preferred them perfectly tuned, slightly sharp or slightly flat. The researchers were surprised to find a significant preference for slight imperfection, or ‘inharmonicity’. Other experiments explored harmony perception with Western and non-Western musical instruments, including the bonang.
    Instinctive appreciation of new kinds of harmony
    The researchers found that the bonang’s consonances mapped neatly onto the particular musical scale used in the Indonesian culture from which it comes. These consonances cannot be replicated on a Western piano, for instance, because they would fall between the cracks of the scale traditionally used.

    “Our findings challenge the traditional idea that harmony can only be one way, that chords have to reflect these mathematical relationships. We show that there are many more kinds of harmony out there, and that there are good reasons why other cultures developed them,” Dr Harrison said.
    Importantly, the study suggests that its participants — not trained musicians and unfamiliar with Javanese music — were able to appreciate the new consonances of the bonang’s tones instinctively.
    “Music creation is all about exploring the creative possibilities of a given set of qualities, for example, finding out what kinds of melodies you can play on a flute, or what kinds of sounds you can make with your mouth,” Harrison said.
    “Our findings suggest that if you use different instruments, you can unlock a whole new harmonic language that people intuitively appreciate, they don’t need to study it to appreciate it. A lot of experimental music in the last 100 years of Western classical music has been quite hard for listeners because it involves highly abstract structures that are hard to enjoy. In contrast, psychological findings like ours can help stimulate new music that listeners intuitively enjoy.”
    Exciting opportunities for musicians and producers
    Dr Harrison hopes that the research will encourage musicians to try out unfamiliar instruments and see if they offer new harmonies and open up new creative possibilities.
    “Quite a lot of pop music now tries to marry Western harmony with local melodies from the Middle East, India, and other parts of the world. That can be more or less successful, but one problem is that notes can sound dissonant if you play them with Western instruments.
    “Musicians and producers might be able to make that marriage work better if they took account of our findings and considered changing the ‘timbre’, the tone quality, by using specially chosen real or synthesised instruments. Then they really might get the best of both worlds: harmony and local scale systems.”
    Harrison and his collaborators are exploring different kinds of instruments and follow-up studies to test a broader range of cultures. In particular, they would like to gain insights from musicians who use ‘inharmonic’ instruments to understand whether they have internalised concepts of harmony different from those of the Western participants in this study. More