More stories

  • Endless forms most beautiful: Why evolution favors symmetry

    From sunflowers to starfish, symmetry appears everywhere in biology. This isn’t just true for body plans — the molecular machines keeping our cells alive are also strikingly symmetric. But why? Does evolution have a built-in preference for symmetry?
    An international team of researchers believes so, and has combined ideas from biology, computer science and mathematics to explain why. As the team reports in PNAS, symmetric and other simple structures emerge so commonly because evolution has an overwhelming preference for simple “algorithms” — that is, simple instruction sets or recipes for producing a given structure.
    “Imagine having to tell a friend how to tile a floor using as few words as possible,” says Iain Johnston, a professor at the University of Bergen and author on the study. “You wouldn’t say: put diamonds here, long rectangles here, wide rectangles here. You’d say something like: put square tiles everywhere. And that simple, easy recipe gives a highly symmetric outcome.”
    The team used computational modeling to explore how this preference comes about in biology. They showed that many more possible genomes describe simple algorithms than more complex ones. As evolution searches over possible genomes, simple algorithms are more likely to be discovered — as are, in turn, the more symmetric structures that they produce. The scientists then connected this evolutionary picture to a deep result from the theoretical discipline of algorithmic information theory.
    “These intuitions can be formalized in the field of algorithmic information theory, which provides quantitative predictions for the bias towards descriptive simplicity,” says Ard Louis, professor at the University of Oxford and corresponding author on the study.
    The study’s key theoretical idea can be illustrated by a twist on a famous thought experiment in evolutionary biology, which pictures a room full of monkeys trying to write a book by typing randomly on a keyboard. Imagine the monkeys are instead trying to write a recipe. Each is far more likely to randomly hit the letters required to spell out a short, simple recipe than a long, complicated one. If we then follow any recipes the monkeys have produced — our metaphor for producing biological structures from genetic information — we will produce simple outcomes much more often than complicated ones.
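    The bias can be made concrete with a toy genotype-phenotype map. The short Python sketch below is purely illustrative (it is not the team's model, and the develop map is invented for this example): a random 16-bit "genome" is read as a tiling recipe, two short recipes (uniform and checkerboard) are each reachable from a quarter of all genomes, while a fully spelled-out tiling needs nearly the whole genome, so random sampling lands on the simple, symmetric outcomes far more often.

```python
import random
from collections import Counter

def develop(genome):
    """Toy genotype-to-phenotype map, invented for illustration.
    The first two bits pick a 'recipe'; only the last case needs
    the rest of the genome to spell out every tile explicitly."""
    head, body = genome[:2], genome[2:]
    if head == "00":
        return "uniform tiling"      # "put square tiles everywhere"
    if head == "01":
        return "checkerboard"        # one short symmetric rule
    return "explicit:" + body        # every tile listed one by one

random.seed(0)
counts = Counter()
for _ in range(100_000):
    genome = "".join(random.choice("01") for _ in range(16))
    counts[develop(genome)] += 1

# The two simple phenotypes absorb about half of all random genomes,
# while any particular fully specified pattern is hit only a few times.
for phenotype, n in counts.most_common(5):
    print(f"{phenotype:25s} {n}")
```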
    The scientists show that a wide range of biological structures and systems, from proteins to RNA and signaling networks, adopt algorithmically simple structures with probabilities as predicted by this theory. Going forward, they plan to investigate the predictions that their theory makes for biases in larger-scale developmental processes.
    Story Source:
    Materials provided by The University of Bergen.

  • Chemical reaction design goes virtual

    Researchers aim to streamline the time- and resource-intensive process of screening ligands during catalyst design by using virtual ligands.
    Researchers at the Institute for Chemical Reaction Design and Discovery at Hokkaido University have developed a virtual ligand-assisted (VLA) screening method, which could drastically reduce the amount of trial and error required in the lab during transition metal catalyst development. The method, published in the journal ACS Catalysis, may also lead to the discovery of unconventional catalyst designs outside the scope of chemists’ intuition.
    Ligands are molecules that are bonded to the central metal atom of a catalyst, and they affect the activity and selectivity of a catalyst. Finding the optimal ligand to catalyze a specific target reaction can be like finding a needle in a haystack. The VLA screening method provides a way to efficiently search that haystack, surveying a broad range of values for different properties to identify the features of ligands that should be most promising. This narrows down the search area for chemists in the lab and has the potential to greatly accelerate the reaction design process.
    This new work utilizes virtual ligands, which mimic the presence of real ligands; however, instead of being described by many individual constituent atoms — such as carbon or nitrogen — virtual ligands are described using only two metrics: their steric, or space-filling, properties and their electronic properties. Researchers developed approximations that describe each of these effects with a single parameter. Using this simplified description of a ligand enabled researchers to evaluate ligands in a computationally efficient way over a large range of values for these two effects. The result is a “contour map” that shows what combination of steric and electronic effects a ligand should have in order to best catalyze a specific reaction. Chemists can then focus on only testing real ligands that fit these criteria.
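    As a rough illustration of what such a screen produces, the Python sketch below scans a grid of the two virtual-ligand parameters and draws a contour map. It is only a sketch: the predicted_selectivity function is a made-up placeholder, whereas in the actual workflow each grid point would come from a quantum-chemical evaluation of the reaction pathway with those parameter values.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative parameter ranges for the two virtual-ligand descriptors
# (arbitrary units; the real study uses physically motivated values).
steric = np.linspace(0.0, 1.0, 50)       # space-filling effect
electronic = np.linspace(-1.0, 1.0, 50)  # electron donating/withdrawing effect
S, E = np.meshgrid(steric, electronic)

def predicted_selectivity(s, e):
    """Placeholder for the expensive reaction-pathway calculation:
    a smooth bump standing in for wherever selectivity happens to peak."""
    return np.exp(-((s - 0.7) ** 2 + (e - 0.3) ** 2) / 0.1)

Z = predicted_selectivity(S, E)

plt.contourf(S, E, Z, levels=15, cmap="viridis")
plt.colorbar(label="predicted selectivity (toy placeholder)")
plt.xlabel("steric parameter")
plt.ylabel("electronic parameter")
plt.title("Sketch of a virtual-ligand screening contour map")
plt.show()
```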
    Researchers used monodentate phosphorus (III) virtual ligands as a test group and verified their models for the electronic and steric properties of the virtual ligands against values calculated for corresponding real ligands.
    The VLA screening method was then employed to design ligands for a test reaction in which a CHO group and a hydrogen atom can be added to a double bond in two different possible configurations. The reaction pathway was evaluated for 20 virtual ligand cases (consisting of different assigned values for the electronic and steric parameters) to create a contour map that shows a visual trend for what types of ligands can be expected to result in a highly selective reaction.
    Computer models of real ligands were designed based on parameters extracted from the contour map and then evaluated computationally. The selectivity values predicted via the VLA screening method matched well with the values computed for the models of real ligands, showing the viability of the VLA screening method to provide guidance that aids in rational ligand design.
    Beyond saving valuable time and resources, corresponding author Satoshi Maeda anticipates the creation of powerful reaction prediction systems by combining the VLA screening method with other computational techniques.
    “Ligand screening is a pivotal process in the development of transition metal catalysis. As the VLA screening can be conducted in silico, it would save a lot of time and resources in the lab. We believe that this method not only streamlines finding an optimal ligand from a given library of ligands, but also stimulates researchers to explore the untapped chemical space of ligands,” commented corresponding author Satoshi Maeda. “Furthermore, we also expect that by combining this method with our reaction prediction technology using the Artificial Force Induced Reaction method, a new computer-driven discovery scheme of transition metal catalysis can be realized.”
    Story Source:
    Materials provided by Hokkaido University.

  • The next generation of robots will be shape-shifters

    Physicists have discovered a new way to coat soft robots in materials that allow them to move and function in a more purposeful way. The research, led by the UK’s University of Bath, is described today in Science Advances.
    Authors of the study believe their breakthrough modelling on ‘active matter’ could mark a turning point in the design of robots. With further development of the concept, it may be possible to determine the shape, movement and behaviour of a soft solid not by its natural elasticity but by human-controlled activity on its surface.
    The surface of an ordinary soft material always tends to contract, pulling the material toward a sphere. Think of the way water beads into droplets: the beading occurs because the surface of liquids and other soft materials naturally contracts to the smallest surface area possible — i.e. a sphere. But active matter can be designed to work against this tendency. An example of this in action would be a rubber ball wrapped in a layer of nano-robots, where the robots are programmed to work in unison to distort the ball into a new, pre-determined shape (say, a star).
    It is hoped that active matter will lead to a new generation of machines whose function will come from the bottom up. So, instead of being governed by a central controller (the way today’s robotic arms are controlled in factories), these new machines would be made from many individual active units that cooperate to determine the machine’s movement and function. This is akin to the workings of our own biological tissues, such as the fibres in heart muscle.
    Using this idea, scientists could design soft machines with arms made of flexible materials powered by robots embedded in their surface. They could also tailor the size and shape of drug delivery capsules by coating the surface of nanoparticles in a responsive, active material. This in turn could have a dramatic effect on how a drug interacts with cells in the body.
    Work on active matter challenges the assumption that the energetic cost of the surface of a liquid or soft solid must always be positive because a certain amount of energy is always necessary to create a surface.
    Dr Jack Binysh, study first author, said: “Active matter makes us look at the familiar rules of nature — rules like the fact that surface tension has to be positive — in a new light. Seeing what happens if we break these rules, and how we can harness the results, is an exciting place to be doing research.”
    Corresponding author Dr Anton Souslov added: “This study is an important proof of concept and has many useful implications. For instance, future technology could produce soft robots that are far squishier and better at picking up and manipulating delicate materials.”
    For the study, the researchers developed theory and simulations that described a 3D soft solid whose surface experiences active stresses. They found that these active stresses expand the surface of the material, pulling the solid underneath along with it, and causing a global shape change. The researchers found that the precise shape adopted by the solid could then be tailored by altering the elastic properties of the material.
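    The competition can be caricatured in one dimension: an active (negative) surface tension rewards extra surface area, the solid's elasticity resists stretching, and the balance between the two sets the size of the shape change. The short Python sketch below is just that caricature, with made-up parameter values; it is not the paper's 3D simulations.

```python
import numpy as np

# One-dimensional caricature of an active surface on a soft solid:
# a surface patch of reference area A0 stretches by a strain eps.
# An active (negative) surface tension gamma rewards extra area,
# while bulk elasticity with modulus k penalizes the stretch.
A0 = 1.0

def energy(eps, gamma, k):
    surface = gamma * A0 * (1.0 + eps)   # active surface term
    elastic = 0.5 * k * A0 * eps ** 2    # elastic penalty of the solid
    return surface + elastic

eps = np.linspace(0.0, 2.0, 2001)
for gamma, k in [(-0.5, 1.0), (-0.5, 0.25)]:
    best = eps[np.argmin(energy(eps, gamma, k))]
    print(f"gamma = {gamma:+.2f}, modulus k = {k:.2f} -> preferred strain ~ {best:.2f}")

# The minimum sits at eps* = -gamma / k: the softer the solid (smaller k),
# the larger the shape change driven by the active surface.
```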
    In the next phase of this work — which has already begun — the researchers will apply this general principle to design specific robots, such as soft arms or self-swimming materials. They will also look at collective behaviour — for example, what happens when you have many active solids, all packed together.
    This work was a collaboration between the Universities of Bath and Birmingham. It was funded by the Engineering and Physical Sciences Research Council (EPSRC) through New Investigator Award no. EP/T000961/1.
    Story Source:
    Materials provided by University of Bath.

  • Brain-based computing chips not just for AI anymore

    With the insertion of a little math, Sandia National Laboratories researchers have shown that neuromorphic computers, which synthetically replicate the brain’s logic, can solve more complex problems than those posed by artificial intelligence and may even earn a place in high-performance computing.
    The findings, detailed in a recent article in the journal Nature Electronics, show that neuromorphic simulations employing the statistical method called random walks can track X-rays passing through bone and soft tissue, disease passing through a population, information flowing through social networks and the movements of financial markets, among other uses, said Sandia theoretical neuroscientist and lead researcher James Bradley Aimone.
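    For readers unfamiliar with the method, the plain-Python sketch below shows the kind of random-walk calculation being described: particles step randomly through an absorbing slab, and the fraction that emerges estimates transmission. It is only a conventional Monte Carlo illustration of the statistical idea, with toy parameters; the Sandia work maps such walks onto spiking neuromorphic hardware rather than running them like this.

```python
import random

random.seed(1)

def transmit(thickness=10, absorb_p=0.05, n_particles=50_000):
    """Fraction of particles that random-walk all the way through an
    absorbing slab 'thickness' steps deep (toy parameters, for illustration)."""
    through = 0
    for _ in range(n_particles):
        depth = 0
        while 0 <= depth < thickness:
            if random.random() < absorb_p:   # particle absorbed inside the slab
                depth = -1                   # mark as lost and stop walking
                break
            depth += random.choice((-1, 1))  # unbiased random step
        if depth >= thickness:               # emerged from the far side
            through += 1
    return through / n_particles

print(f"Estimated transmission: {transmit():.4f}")
```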
    “Basically, we have shown that neuromorphic hardware can yield computational advantages relevant to many applications, not just artificial intelligence to which it’s obviously kin,” said Aimone. “Newly discovered applications range from radiation transport and molecular simulations to computational finance, biology modeling and particle physics.”
    In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said.
    The bold assertions should be of interest to the high-performance computing community because finding capabilities to solve statistical problems is of increasing concern, Aimone said.
    “These problems aren’t really well-suited for GPUs [graphics processing units], which is what future exascale systems are likely going to rely on,” Aimone said. “What’s exciting is that no one really has looked at neuromorphic computing for these types of applications before.”
    Sandia engineer and paper author Brian Franke said, “The natural randomness of the processes you list will make them inefficient when directly mapped onto vector processors like GPUs on next-generation computational efforts. Meanwhile, neuromorphic architectures are an intriguing and radically different alternative for particle simulation that may lead to a scalable and energy-efficient approach for solving problems of interest to us.”

  • Labeling key to success of software company innovations

    Companies in the software industry, where novel ideas are prized, use linguistic tactics to develop new labels for their innovations to stay ahead of competitors. Using language to signal that something is “new and different” is an important tool for success, University of California, Davis, research suggests.
    Category innovation during a study period from 1990 to 2002 included words and phrases such as “platform” and “supply chain management” — market categories that are now established.
    “There is an association between companies that use category innovation and their likelihood to IPO, suggesting category innovation is part of successful firm strategies,” said Elizabeth George Pontikes of the UC Davis Graduate School of Management, who is the author of the study.
    The article, “Category Innovation in the Software Industry, 1990-2002,” was published in Strategic Management Journal in January. Pontikes looked at more than 400 labels used in news releases about innovations by more than 4,000 different software firms over 12 years. Researchers also interviewed 12 executives and venture capitalists in the software industry.
    Category innovation, as defined in the study, is a practice that involves firms claiming a new category label to describe the market they are in. A firm may do this to differentiate from rivals, or to try to become a market leader or even a “category king.”
    One executive interviewed for the study described the “tag management” label, for example, as something that “wasn’t super innovative, but it was labeling it … so it was strategic the way we were thinking about it.”
    The research found that 75% of the labels analyzed had only one or two firms using them in their first two years, when it is traditionally difficult to determine whether an innovation even has a nascent market. Those labels do not gain traction until their second year, the research showed.
    Firms sometimes engage in category innovation by borrowing and recasting a little-known term, or do so unaware that another firm has already claimed the label, Pontikes said.
    Story Source:
    Materials provided by University of California – Davis. Original written by Karen Michele Nikos-Rose.

  • Magnetism helps electrons vanish in high-temp superconductors

    Superconductors — metals in which electricity flows without resistance — hold promise as the defining material of the near future, according to physicist Brad Ramshaw, and are already used in medical imaging machines, drug discovery research and quantum computers being built by Google and IBM.
    However, the super-low temperatures conventional superconductors need to function — a few degrees above absolute zero — make them too expensive for wide use.
    In their quest to find more useful superconductors, Ramshaw, the Dick & Dale Reis Johnson Assistant Professor of physics in the College of Arts and Sciences (A&S), and colleagues have discovered that magnetism is key to understanding the behavior of electrons in “high-temperature” superconductors. With this finding, they’ve solved a 30-year-old mystery surrounding this class of superconductors, which function at much higher temperatures, greater than 100 degrees above absolute zero. Their paper, “Fermi Surface Transformation at the Pseudogap Critical Point of a Cuprate Superconductor,” was published in Nature Physics on March 10.
    “We’d like to understand what makes these high-temperature superconductors work and engineer that property into some other material that is easier to adopt in technologies,” Ramshaw said.
    A central mystery to high-temperature superconductors is what happens with their electrons, Ramshaw said.
    “All metals have electrons, and when a metal becomes a superconductor, the electrons pair up with each other,” he said. “We measure something called the ‘Fermi surface,’ which you can think of as a map showing where all the electrons are in a metal.”
    To study how electrons pair up in high-temperature superconductors, researchers continuously change the number of electrons through a process known as chemical doping. In high-temperature superconductors, at a certain “critical point,” electrons seem to vanish from the Fermi surface map, Ramshaw said.

  • Physicists show how frequencies can easily be multiplied without special circuitry

    A new discovery by physicists at Martin Luther University Halle-Wittenberg (MLU) could make certain components in computers and smartphones obsolete. The team has succeeded in directly converting frequencies to higher ranges in a common magnetic material without the need for additional components. Frequency multiplication is a fundamental process in modern electronics. The team reports on its research in the latest issue of Science.
    Digital technologies and devices are already responsible for about ten percent of global electricity consumption, and the trend is rising sharply. “It is therefore necessary to develop more efficient components for information processing,” says Professor Georg Woltersdorf, a physicist from MLU.
    Non-linear electronic circuits are typically used to generate the high-frequency gigahertz signals needed to operate today’s devices. The team at MLU has now found a way to do this within a magnetic material, without the electronic components usually required. Instead, the magnetization is excited by a low-frequency megahertz source. Through the newly discovered effect, the material generates several frequency components, each of which is a multiple of the excitation frequency. These cover a range of six octaves and reach up to several gigahertz. “This is like hitting the lowest note on a piano while also hearing the corresponding harmonic tones of the higher octaves,” explains Woltersdorf.
    The surprising effect of frequency multiplication is explained by synchronized switching of the dynamic magnetization on a micron scale. “Different areas do not switch at the same time. Instead, they are triggered by adjacent areas just like in a falling row of dominoes,” explains first author Chris Körner from the Institute of Physics at MLU.
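    In signal-processing terms, a magnetization that snaps between two states in step with the drive looks like a square wave, and a square wave carries harmonics at multiples of the drive frequency. The Python sketch below illustrates only that generic point, with made-up values (a 1 MHz drive and an idealized two-state response); it is not the MLU team's micromagnetic model, whose measured spectrum is richer.

```python
import numpy as np

f_drive = 1e6                       # 1 MHz excitation (illustrative value)
fs = 1e9                            # 1 GHz sampling rate
t = np.arange(0, 200e-6, 1 / fs)    # 200 microseconds of signal

drive = np.sin(2 * np.pi * f_drive * t)
magnetization = np.sign(drive)      # idealized abrupt switching response

spectrum = np.abs(np.fft.rfft(magnetization))
freqs = np.fft.rfftfreq(len(t), d=1 / fs)

# The strongest spectral lines sit at odd multiples of the 1 MHz drive,
# the hallmark of a symmetric square-wave (switching) response.
for idx in sorted(np.argsort(spectrum)[-5:]):
    print(f"{freqs[idx] / 1e6:5.1f} MHz   relative amplitude "
          f"{spectrum[idx] / spectrum.max():.2f}")
```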
    The discovery could also help make digital technologies more energy efficient in the future, and it is important for new applications. Today’s microelectronics use electron charges as information carriers. A major disadvantage of this approach is that transporting electric charge releases heat and therefore requires a lot of energy. Spin electronics could provide a promising solution: in addition to the electron’s charge, it also uses its magnetic moment, or spin, whose properties open up the possibility of significantly improving energy efficiency. The newly discovered effect could enable space-saving and efficient frequency sources for spin electronics in the gigahertz range.
    The study was funded in part by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation).
    Story Source:
    Materials provided by Martin-Luther-Universität Halle-Wittenberg.

  • Using cell phone GNSS networks to monitor crustal deformation

    A paper published on February 9 in Earth, Planets and Space by Japanese Earth science researchers analyzed the potential of a dense Global Navigation Satellite System (GNSS) network, which is installed at cell phone base stations, to monitor crustal deformation as an early warning indicator of seismic activity. The results showed that data from a cell phone network can rival the precision of data from a government-run GNSS network, while providing more complete geographic coverage.
    Crustal deformation is monitored around plate boundaries, active faults, and volcanoes to assess the accumulation of strains that lead to significant seismic events. GNSS networks have been constructed worldwide in areas that are vulnerable to volcanoes and earthquakes, such as in Hawai’i, California, and Japan. Data from these networks can be analyzed in real time to serve in tsunami forecasting and earthquake early warning systems.
    Japan’s GNSS network (GEONET) is operated by the Geospatial Information Authority of Japan. While GEONET has been fundamental in earth science research, its layout of 20-25 kilometers on average between sites limits monitoring of crustal deformation for some areas. For example, magnitude 6-7 earthquakes on active faults in inland Japan have fault lengths of 20-40 kilometers; the GEONET site spacing is slightly insufficient to measure their deformation with suitable precision for use in predictive models.
    However, Japanese cell phone carriers have constructed GNSS networks to improve location information for applications such as automated driving. The new study examines the potential of a GNSS network built by the carrier SoftBank Corporation to play a role in monitoring crustal deformation. With 3300 sites in Japan, this private company oversees 2.5 times as many sites as the government-run GEONET system.
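    For a rough sense of what those site counts imply for spacing, the back-of-the-envelope Python sketch below treats the sites as roughly evenly spread over Japan's land area. The inputs are rounded assumptions rather than figures from the paper (a land area of about 378,000 square kilometers, and a GEONET count inferred from the 2.5-times figure), and the uniform-grid formula understates real spacing, which is why it comes out below GEONET's quoted 20-25 kilometers.

```python
import math

# Back-of-the-envelope estimate, not a calculation from the paper:
# if N sites are spread roughly evenly over an area A, the typical
# distance between neighbouring sites is about sqrt(A / N).
AREA_JAPAN_KM2 = 378_000           # rounded land area of Japan
GEONET_SITES = int(3300 / 2.5)     # ~1,320, implied by the "2.5 times" figure
SOFTBANK_SITES = 3300

def mean_spacing_km(n_sites, area_km2=AREA_JAPAN_KM2):
    return math.sqrt(area_km2 / n_sites)

print(f"GEONET alone:   ~{mean_spacing_km(GEONET_SITES):.0f} km between sites")
print(f"SoftBank alone: ~{mean_spacing_km(SOFTBANK_SITES):.0f} km between sites")
print(f"Combined:       ~{mean_spacing_km(GEONET_SITES + SOFTBANK_SITES):.0f} km between sites")
```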
    “By utilizing these observation networks, we aim to understand crustal deformation phenomena in higher resolution and to search for unknown phenomena that have not been found so far,” explained study author Yusaku Ohta, a geoscientist and assistant professor at the Graduate School of Science, Tohoku University.
    The study used raw GNSS data provided by SoftBank from its cell phone base stations to evaluate the data’s quality for monitoring crustal deformation. Two datasets were analyzed: one from a seismically quiet nine-day period in September 2020 in Japan’s Miyagi Prefecture, the other from a nine-day period in Fukushima Prefecture that included a magnitude 7.3 earthquake off the Fukushima coast on February 13, 2021.
    The study authors found that SoftBank’s dense GNSS network can monitor crustal deformation with reasonable precision. “We have shown that crustal deformation can be monitored with an unprecedentedly high spatial resolution by the original, very dense GNSS observation networks of cell phone carriers that are being deployed for the advancement of location-based services,” said earth scientist Mako Ohzono, associate professor at Hokkaido University.
    Looking ahead, they project that combining the SoftBank sites with the government-run GEONET sites could yield better spatial resolution for a more detailed fault model. In the study area of Fukushima Prefecture, combining the networks would give an average of one GNSS site per 5.7 kilometers. “It indicates that these private sector GNSS observation networks can play a complementary role to GNSS networks operated by public organizations,” said Ohta.
    The study paved the way for considering synergy between public and private GNSS networks as a resource for seismic monitoring in Japan and elsewhere. “The results are important for understanding earthquake phenomena and volcanic activity, which can contribute to disaster prevention and mitigation,” noted Ohzono.
    Story Source:
    Materials provided by Tohoku University.