More stories

  • The next generation of robots will be shape-shifters

    Physicists have discovered a new way to coat soft robots in materials that allow them to move and function in a more purposeful way. The research, led by the UK’s University of Bath, is described today in Science Advances.
    The authors of the study believe their breakthrough modelling of ‘active matter’ could mark a turning point in the design of robots. With further development of the concept, it may be possible to determine the shape, movement and behaviour of a soft solid not by its natural elasticity but by human-controlled activity on its surface.
    An ordinary soft material always tends to shrink into a sphere. Think of the way water beads into droplets: the beading occurs because the surface of a liquid or soft solid naturally contracts to the smallest possible area, and the shape with the smallest surface area for a given volume is a sphere. Active matter, however, can be designed to work against this tendency. One example would be a rubber ball wrapped in a layer of nano-robots, where the robots are programmed to work in unison to distort the ball into a new, pre-determined shape (say, a star).
    It is hoped that active matter will lead to a new generation of machines whose function will come from the bottom up. So, instead of being governed by a central controller (the way today’s robotic arms are controlled in factories), these new machines would be made from many individual active units that cooperate to determine the machine’s movement and function. This is akin to the workings of our own biological tissues, such as the fibres in heart muscle.
    Using this idea, scientists could design soft machines with arms made of flexible materials powered by robots embedded in their surface. They could also tailor the size and shape of drug delivery capsules by coating the surface of nanoparticles in a responsive, active material. This in turn could have a dramatic effect on how a drug interacts with cells in the body.
    Work on active matter challenges the assumption that the energetic cost of the surface of a liquid or soft solid must always be positive because a certain amount of energy is always necessary to create a surface.
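    In conventional terms, the energetic cost of a surface scales with its area. A schematic way to state the point (a generic textbook form, not the paper’s own formulation) is

        E_{\mathrm{surface}} = \gamma\, A, \qquad \gamma > 0,

    so an ordinary liquid or soft solid lowers its energy by minimising the area A, which is why it pulls itself toward a sphere. The active surface stresses studied here act like an effective negative \gamma, making it favourable for the surface to expand instead and drag the solid beneath it into new shapes.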
    Dr Jack Binysh, study first author, said: “Active matter makes us look at the familiar rules of nature — rules like the fact that surface tension has to be positive — in a new light. Seeing what happens if we break these rules, and how we can harness the results, is an exciting place to be doing research.”
    Corresponding author Dr Anton Souslov added: “This study is an important proof of concept and has many useful implications. For instance, future technology could produce soft robots that are far squishier and better at picking up and manipulating delicate materials.”
    For the study, the researchers developed theory and simulations that described a 3D soft solid whose surface experiences active stresses. They found that these active stresses expand the surface of the material, pulling the solid underneath along with it, and causing a global shape change. The researchers found that the precise shape adopted by the solid could then be tailored by altering the elastic properties of the material.
    In the next phase of this work — which has already begun — the researchers will apply this general principle to design specific robots, such as soft arms or self-swimming materials. They will also look at collective behaviour — for example, what happens when you have many active solids, all packed together.
    This work was a collaboration between the Universities of Bath and Birmingham. It was funded by the Engineering and Physical Sciences Research Council (EPSRC) through New Investigator Award no. EP/T000961/1.
    Story Source:
    Materials provided by the University of Bath.

  • Brain-based computing chips not just for AI anymore

    With the insertion of a little math, Sandia National Laboratories researchers have shown that neuromorphic computers, which synthetically replicate the brain’s logic, can solve more complex problems than those posed by artificial intelligence and may even earn a place in high-performance computing.
    The findings, detailed in a recent article in the journal Nature Electronics, show that neuromorphic simulations employing the statistical method called random walks can track X-rays passing through bone and soft tissue, disease passing through a population, information flowing through social networks and the movements of financial markets, among other uses, said Sandia theoretical neuroscientist and lead researcher James Bradley Aimone.
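    To make the random-walk idea concrete, here is a minimal Monte Carlo sketch in plain Python/NumPy (purely illustrative: the parameters are invented and the code has nothing to do with Sandia’s implementation). It estimates how many particles pass through a one-dimensional slab by letting each walker take exponentially distributed steps and randomly scatter or be absorbed; a neuromorphic version would map such walkers onto spiking hardware rather than a CPU loop.

        import numpy as np

        rng = np.random.default_rng(0)

        # Invented, illustrative parameters: a 1 cm slab with absorption and
        # scattering coefficients given in 1/cm.
        mu_a, mu_s = 0.3, 0.7              # absorption and scattering rates
        mu_t = mu_a + mu_s                 # total interaction rate
        thickness = 1.0                    # slab thickness (cm)
        n_particles = 100_000

        transmitted = 0
        for _ in range(n_particles):
            x, direction = 0.0, 1.0        # start at the front face, heading inward
            while True:
                x += direction * rng.exponential(1.0 / mu_t)   # random free path
                if x >= thickness:         # escaped through the back face
                    transmitted += 1
                    break
                if x <= 0.0:               # scattered back out of the front face
                    break
                if rng.random() < mu_a / mu_t:                 # absorbed
                    break
                direction = rng.choice([-1.0, 1.0])            # isotropic re-scatter

        print(f"estimated transmission: {transmitted / n_particles:.3f}")

    The same pattern, many independent walkers each taking cheap random steps, is what the article says can be tracked efficiently on neuromorphic chips.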
    “Basically, we have shown that neuromorphic hardware can yield computational advantages relevant to many applications, not just artificial intelligence to which it’s obviously kin,” said Aimone. “Newly discovered applications range from radiation transport and molecular simulations to computational finance, biology modeling and particle physics.”
    In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said.
    These bold assertions should be of interest to the high-performance computing community, where finding efficient ways to solve statistical problems is a growing concern, Aimone said.
    “These problems aren’t really well-suited for GPUs [graphics processing units], which is what future exascale systems are likely going to rely on,” Aimone said. “What’s exciting is that no one really has looked at neuromorphic computing for these types of applications before.”
    Sandia engineer and paper author Brian Franke said, “The natural randomness of the processes you list will make them inefficient when directly mapped onto vector processors like GPUs on next-generation computational efforts. Meanwhile, neuromorphic architectures are an intriguing and radically different alternative for particle simulation that may lead to a scalable and energy-efficient approach for solving problems of interest to us.”

  • Labeling key to success of software company innovations

    Companies in the software industry, where novel ideas are prized, use linguistic tactics to develop new labels for their innovations to stay ahead of competitors. Using language to signal that something is “new and different” is an important tool for success, University of California, Davis, research suggests.
    Category innovation during a study period from 1990 to 2002 included words and phrases such as “platform” and “supply chain management” — market categories that are now established.
    “There is an association between companies that use category innovation and their likelihood to IPO, suggesting category innovation is part of successful firm strategies,” said Elizabeth George Pontikes of the UC Davis Graduate School of Management, the author of the study.
    The article, “Category Innovation in the Software Industry, 1990-2002,” was published in Strategic Management Journal in January. Pontikes looked at more than 400 labels used in news releases about innovations by more than 4,000 different software firms over 12 years. Researchers also interviewed 12 executives and venture capitalists in the software industry.
    Category innovation, as defined in the study, is a practice that involves firms claiming a new category label to describe the market they are in. A firm may do this to differentiate from rivals, or to try to become a market leader or even a “category king.”
    One executive interviewed for the study described the “tag management” label, for example, as something that “wasn’t super innovative, but it was labeling it … so it was strategic the way we were thinking about it.”
    The research found that 75% of the labels, when they were new, had only one or two firms using them in the first two years, a period when it is traditionally difficult to determine whether an innovation even has a nascent market. Those labels do not gain traction until the second year of innovation, the research showed.
    Firms sometimes engage in category innovation by borrowing and recasting a little-known term, or they may be unaware that another firm has already claimed the label, Pontikes said.
    Story Source:
    Materials provided by the University of California, Davis. Original written by Karen Michele Nikos-Rose.

  • Magnetism helps electrons vanish in high-temp superconductors

    Superconductors — metals in which electricity flows without resistance — hold promise as the defining material of the near future, according to physicist Brad Ramshaw, and are already used in medical imaging machines, drug discovery research and quantum computers being built by Google and IBM.
    However, the super-low temperatures conventional superconductors need to function — a few degrees above absolute zero — make them too expensive for wide use.
    In their quest to find more useful superconductors, Ramshaw, the Dick & Dale Reis Johnson Assistant Professor of physics in the College of Arts and Sciences (A&S), and colleagues have discovered that magnetism is key to understanding the behavior of electrons in “high-temperature” superconductors. With this finding, they’ve solved a 30-year-old mystery surrounding this class of superconductors, which function at much higher temperatures, greater than 100 degrees above absolute zero. Their paper, “Fermi Surface Transformation at the Pseudogap Critical Point of a Cuprate Superconductor,” was published in Nature Physics on March 10.
    “We’d like to understand what makes these high-temperature superconductors work and engineer that property into some other material that is easier to adopt in technologies,” Ramshaw said.
    A central mystery to high-temperature superconductors is what happens with their electrons, Ramshaw said.
    “All metals have electrons, and when a metal becomes a superconductor, the electrons pair up with each other,” he said. “We measure something called the ‘Fermi surface,’ which you can think of as a map showing where all the electrons are in a metal.”
    To study how electrons pair up in high-temperature superconductors, researchers continuously change the number of electrons through a process known as chemical doping. In high-temperature superconductors, at a certain “critical point,” electrons seem to vanish from the Fermi surface map, Ramshaw said.

  • Physicists show how frequencies can easily be multiplied without special circuitry

    A new discovery by physicists at Martin Luther University Halle-Wittenberg (MLU) could make certain components in computers and smartphones obsolete. The team has succeeded in directly converting frequencies to higher ranges in a common magnetic material without the need for additional components. Frequency multiplication is a fundamental process in modern electronics. The team reports on its research in the latest issue of Science.
    Digital technologies and devices are already responsible for about ten percent of global electricity consumption, and the trend is rising sharply. “It is therefore necessary to develop more efficient components for information processing,” says Professor Georg Woltersdorf, a physicist from MLU.
    Non-linear electronic circuits are typically used to generate the high-frequency gigahertz signals needed to operate today’s devices. The team at MLU has now found a way to do this within a magnetic material without the electronic components that are usually used for this. Instead, the magnetization is excited by a low-frequency megahertz source. Using the newly discovered effect, the source generates several frequency components, each of which is a multiple of the excitation frequency. These cover a range of six octaves and reach up to several gigahertz. “This is like hitting the lowest note on a piano while also hearing the corresponding harmonic tones of the higher octaves,” explains Woltersdorf.
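    To put a number on “six octaves” (the drive frequency below is an assumed example, not a value reported by the team): if the generated components are integer multiples f_n = n f_0 of the excitation frequency f_0, then spanning six octaves means reaching

        f_{\mathrm{max}} = 2^{6} f_0 = 64\, f_0,

    so a hypothetical drive of f_0 = 40 MHz would yield components up to about 2.56 GHz, consistent with a megahertz source producing signals of several gigahertz.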
    The surprising effect of frequency multiplication is explained by synchronized switching of the dynamic magnetization on a micron scale. “Different areas do not switch at the same time. Instead, they are triggered by adjacent areas just like in a falling row of dominoes,” explains first author Chris Körner from the Institute of Physics at MLU.
    The discovery could also help make digital technologies more energy efficient in the future. It is also important for new applications. Today’s microelectronics use electron charges as information carriers. A major disadvantage of this method is that the electric charge transport releases heat and therefore requires a lot of energy. Spin electronics could provide a promising solution. In addition to using the electron’s charge, it also uses its magnetic moment, or so-called spin. Its properties open up the possibility of significantly improving energy efficiency. The newly discovered effect could enable space-saving and efficient frequency sources for spin electronics in the gigahertz range.
    The study was funded in part by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation).
    Story Source:
    Materials provided by Martin-Luther-Universität Halle-Wittenberg.

  • Using cell phone GNSS networks to monitor crustal deformation

    In a paper published February 9 in Earth, Planets and Space, Japanese earth science researchers analyzed the potential of a dense Global Navigation Satellite System (GNSS) network installed at cell phone base stations to monitor crustal deformation as an early warning indicator of seismic activity. The results showed that data from a cell phone network can rival the precision of data from a government-run GNSS network while providing more complete geographic coverage.
    Crustal deformation is monitored around plate boundaries, active faults, and volcanoes to assess the accumulation of strains that lead to significant seismic events. GNSS networks have been constructed worldwide in areas that are vulnerable to volcanoes and earthquakes, such as in Hawai’i, California, and Japan. Data from these networks can be analyzed in real time to serve in tsunami forecasting and earthquake early warning systems.
    Japan’s GNSS network (GEONET) is operated by the Geospatial Information Authority of Japan. While GEONET has been fundamental to earth science research, its average spacing of 20-25 kilometers between sites limits monitoring of crustal deformation in some areas. For example, magnitude 6-7 earthquakes on active faults in inland Japan have fault lengths of 20-40 kilometers; at that spacing, GEONET is not quite dense enough to measure their deformation with the precision needed for predictive models.
    However, Japanese cell phone carriers have constructed their own GNSS networks to improve locational information for purposes like automated driving. The new study examines the potential of a GNSS network built by the carrier SoftBank Corporation to play a role in monitoring crustal deformation. With 3,300 sites in Japan, this private company operates 2.5 times as many sites as the government-run GEONET system.
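    As a rough back-of-the-envelope check (the figures below are assumptions for illustration, not numbers from the study), the average spacing of N sites spread over an area A scales as the square root of A/N; a nominal GEONET count of about 1,300 stations is consistent with the article’s statement that 3,300 SoftBank sites is about 2.5 times the GEONET total.

        import math

        def mean_spacing_km(area_km2: float, n_sites: int) -> float:
            """Average site spacing if stations were spread uniformly over the area."""
            return math.sqrt(area_km2 / n_sites)

        # Assumed figures for illustration only: Japan's land area (~378,000 km^2)
        # and a nominal GEONET station count of ~1,300.
        print(f"GEONET alone:      {mean_spacing_km(378_000, 1_300):.1f} km")
        print(f"GEONET + SoftBank: {mean_spacing_km(378_000, 1_300 + 3_300):.1f} km")

    These crude nationwide averages are of the right order: roughly 17 kilometers for GEONET alone (the article quotes 20-25 kilometers between actual sites) and about 9 kilometers once the SoftBank sites are folded in.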
    “By utilizing these observation networks, we aim to understand crustal deformation phenomena in higher resolution and to search for unknown phenomena that have not been found so far,” explained study author Yusaku Ohta, a geoscientist and assistant professor at the Graduate School of Science, Tohoku University.
    The study used raw data provided by SoftBank from GNSS receivers at its cell phone base stations to evaluate the data’s quality for monitoring crustal deformation. Two datasets were analyzed: one from a seismically quiet nine-day period in September 2020 in Japan’s Miyagi Prefecture, the other from a nine-day period in Fukushima Prefecture that included a magnitude 7.3 earthquake off the Fukushima coast on February 13, 2021.
    The study authors found that SoftBank’s dense GNSS network can monitor crustal deformation with reasonable precision. “We have shown that crustal deformation can be monitored with an unprecedentedly high spatial resolution by the original, very dense GNSS observation networks of cell phone carriers that are being deployed for the advancement of location-based services,” said earth scientist Mako Ohzono, associate professor at Hokkaido University.
    Looking ahead, they project that combining the SoftBank sites with the government-run GEONET sites could yield better spatial resolution results for a more detailed fault model. In the study area of the Fukushima Prefecture, combining the networks would result in an average density of GNSS sites of one per 5.7 kilometers. “It indicates that these private sector GNSS observation networks can play a complementary role to GNSS networks operated by public organizations,” said Ohta.
    The study paved the way for considering synergy between public and private GNSS networks as a resource for seismic monitoring in Japan and elsewhere. “The results are important for understanding earthquake phenomena and volcanic activity, which can contribute to disaster prevention and mitigation,” noted Ohzono.
    Story Source:
    Materials provided by Tohoku University.

  • A cautionary tale of machine learning uncertainty

    A new analysis shows that researchers using machine learning methods could risk underestimating uncertainties in their final results.
    The Standard Model of particle physics offers a robust theoretical picture of the fundamental particles and most of the fundamental forces that compose the universe. All the same, there are several aspects of the universe, from the existence of dark matter to the oscillating nature of neutrinos, that the model cannot explain, suggesting that the mathematical descriptions it provides are incomplete. While experiments so far have been unable to identify significant deviations from the Standard Model, physicists hope that these gaps could start to appear as experimental techniques become increasingly sensitive.
    A key element of these improvements is the use of machine learning algorithms, which can automatically improve upon classical techniques by using higher-dimensional inputs and extracting patterns from many training examples. Yet in a new analysis published in EPJ C, Aishik Ghosh at the University of California, Irvine, and Benjamin Nachman at the Lawrence Berkeley National Laboratory in the USA show that researchers using machine learning methods could risk underestimating uncertainties in their final results.
    In this context, machine learning algorithms can be trained to identify particles and forces within the data collected by experiments such as high-energy collisions within particle accelerators — and to identify new particles, which don’t match up with the theoretical predictions of the Standard Model. To train machine learning algorithms, physicists typically use simulations of experimental data, which are based on advanced theoretical calculations. Afterwards, the algorithms can then classify particles in real experimental data.
    These training simulations may be incredibly accurate, but even so, they can only provide an approximation of what would really be observed in a real experiment. As a result, researchers need to estimate the possible differences between their simulations and true nature — giving rise to theoretical uncertainties. In turn, these differences can weaken or even bias a classifier algorithm’s ability to identify fundamental particles.
    Recently, physicists have increasingly begun to consider how machine learning approaches could be developed that are insensitive to these estimated theoretical uncertainties. The idea is to decorrelate the performance of the algorithms from imperfections in the simulations. If this could be done effectively, it would allow for algorithms whose uncertainties are far lower than those of traditional classifiers trained on the same simulations. But as Ghosh and Nachman argue, the estimation of theoretical uncertainties essentially involves well-motivated guesswork — making it crucial for researchers to be cautious about this insensitivity.
    In particular, the duo argues there is a real danger that these techniques will simply deceive the unsuspecting researcher by reducing only the estimate of the uncertainty, rather than the true uncertainty. A machine learning procedure that is insensitive to the estimated theory uncertainty may not be insensitive to the actual difference between nature and the approximations used to simulate the training data. This in turn could lead physicists to artificially underestimate their theory uncertainties if they aren’t careful. In high-energy particle collisions, for example, it may cause a classifier to incorrectly confirm the presence of certain fundamental particles.
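    A toy numerical illustration of the gap between an estimated and an actual shift (this is not the authors’ analysis; the data, shift sizes and model below are all invented): train a classifier on a “nominal” simulation, then compare its output on a “varied” simulation of the kind used to estimate theory uncertainty and on a third sample standing in for nature.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)

        def simulate(n, shift):
            """Toy 'simulation': one feature, signal sits above background.
            `shift` plays the role of an uncertain theory parameter."""
            bkg = rng.normal(0.0 + shift, 1.0, n)
            sig = rng.normal(2.0 + shift, 1.0, n)
            X = np.concatenate([bkg, sig]).reshape(-1, 1)
            y = np.concatenate([np.zeros(n), np.ones(n)])
            return X, y

        # Train a classifier on the nominal simulation (shift = 0).
        X_nom, y_nom = simulate(20_000, shift=0.0)
        clf = LogisticRegression().fit(X_nom, y_nom)

        def signal_fraction(shift):
            """Fraction of events the trained classifier labels as signal."""
            X, _ = simulate(20_000, shift)
            return clf.predict(X).mean()

        nominal = signal_fraction(0.0)
        varied = signal_fraction(0.5)    # the variation assumed when estimating uncertainty
        nature = signal_fraction(1.5)    # a larger, unanticipated difference from nature

        print(f"estimated uncertainty (nominal vs. varied):  {abs(varied - nominal):.3f}")
        print(f"actual shift (nominal vs. stand-in nature):  {abs(nature - nominal):.3f}")

    A decorrelation scheme tuned to the assumed 0-to-0.5 variation can drive the first number toward zero, but, as Ghosh and Nachman warn, that by itself does nothing to guarantee the second number shrinks with it.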
    In presenting this ‘cautionary tale’, Ghosh and Nachman hope that future assessments of the Standard Model which use machine learning will not be caught out by incorrectly shrinking uncertainty estimates. This could enable physicists to better ensure reliability in their results, even as experimental techniques become ever more sensitive. In turn, it could pave the way for experiments which finally reveal long-awaited gaps in the Standard Model’s predictions.
    Story Source:
    Materials provided by Springer.

  • On the hunt for ultra-thin materials using data mining

    Two-dimensional (2D) materials possess extraordinary properties. They usually consist of atomic layers that are only a few nanometers thick and are particularly good at conducting heat and electricity, for instance. To the astonishment of many scientists, it recently became known that 2D materials can also exist on the basis of certain metal oxides. These oxides are of great interest in areas such as nanoelectronics applications. A German-American research team, led by the Helmholtz-Zentrum Dresden-Rossendorf (HZDR), has now succeeded in predicting twenty-eight representatives of this new class of materials by using data-driven methods.
    There is a substantial difference between conventional 2D materials such as graphene and the novel materials that can be synthesized from metal oxides such as ilmenite and chromite. The latter do not form weak interactions — what are known as van der Waals forces — in their crystal structure, but instead form stronger ionic bonds that point in all directions. For this reason, only a few experiments have so far succeeded in detaching novel 2D materials from 3D material blocks. The results of the study can now lead to success in further experiments of this type. Using theoretical methods, the scientists predict which compounds are actually worthwhile for experimental research.
    “With our data driven method, we built upon the first available information from the initial experiments. From this information, we developed structural prototypes and then ran them through a huge materials database as a filter criterion,” explains the leader of the study, Dr. Rico Friedrich from the HZDR Institute of Ion Beam Physics and Materials Research. “The main challenge was figuring out why these materials form 2D systems so easily with particular oxides. From this information, we were able to develop a valid generalized search criterion and could systematically characterize the identified candidates according to their properties.”
    For this purpose, the researchers primarily applied what is known as “density functional theory,” a practical computational method for electronic structures that is widely used in quantum chemistry and in condensed matter physics. They collaborated with several German high-performance data centers for the necessary computing stages. A decisive factor was determining the exfoliation energy: this defines how much energy must be expended to remove a 2D layer from the surface of a material.
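    One common way to quantify this (conventions differ, and the paper’s exact definition may vary) is as an energy per unit area built from three DFT total energies:

        E_{\mathrm{exf}} \approx \frac{E_{\mathrm{layer}} + E_{\mathrm{rest}} - E_{\mathrm{bulk}}}{A},

    where E_{\mathrm{bulk}} is the energy of the intact material, E_{\mathrm{layer}} that of the detached 2D sheet, E_{\mathrm{rest}} that of what remains, and A the area of the newly exposed surface. Compounds with low values are the most promising candidates to hand over to the experimentalists.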
    Materials database with approximately 3.5 million entries
    The study also utilized the AFLOW materials database (Automatic Flow for Materials Discovery), which has been under development for more than twenty years by Prof. Stefano Curtarolo of Duke University (USA), who also contributed as an author of the study. AFLOW is regarded as one of the largest materials science databases and classifies approximately 3.5 million compounds with more than 700 million calculated material properties.
    Together with the associated software, the database ultimately not only provided the researchers with the chemical composition of twenty-eight 2D-capable materials but also enabled them to study their electronic, magnetic and topological properties, which proved remarkable. According to Rico Friedrich, their specific magnetic surface structures could make them particularly attractive for spintronic applications, such as data storage in computers and smartphones.
    “I’m certain that we can find additional 2D materials of this kind,” says the Dresden physicist, casting a glance into the future. “With enough candidates, perhaps even a dedicated database could be created entirely specialized in this new class of materials.” The HZDR scientists remain in close contact with colleagues from a subject-related collaborative research center (Sonderforschungsbereich) at the TU Dresden as well as with the leading research group for synthesizing novel 2D systems in the United States. Together with both partners, they plan to pursue further study of the most promising compounds.
    Story Source:
    Materials provided by Helmholtz-Zentrum Dresden-Rossendorf.