More stories

  • Using cell phone GNSS networks to monitor crustal deformation

    A paper published February 9 in Earth, Planets and Space by Japanese Earth science researchers analyzed the potential of a dense Global Navigation Satellite System (GNSS) network installed at cell phone base stations to monitor crustal deformation as an early warning indicator of seismic activity. The results showed that data from a cell phone network can rival the precision of data from a government-run GNSS network while providing more complete geographic coverage.
    Crustal deformation is monitored around plate boundaries, active faults, and volcanoes to assess the accumulation of strains that lead to significant seismic events. GNSS networks have been constructed worldwide in areas that are vulnerable to volcanoes and earthquakes, such as in Hawai’i, California, and Japan. Data from these networks can be analyzed in real time to serve in tsunami forecasting and earthquake early warning systems.
    Japan’s GNSS network (GEONET) is operated by the Geospatial Information Authority of Japan. While GEONET has been fundamental to Earth science research, its average spacing of 20-25 kilometers between sites limits monitoring of crustal deformation in some areas. For example, magnitude 6-7 earthquakes on active faults in inland Japan have fault lengths of 20-40 kilometers; the GEONET site spacing is not quite dense enough to measure their deformation with the precision needed for predictive models.
    However, Japanese cell phone carriers have constructed their own GNSS networks to improve location information for purposes such as automated driving. The new study examines the potential of a GNSS network built by the carrier SoftBank Corporation to play a role in monitoring crustal deformation. With 3,300 sites across Japan, this private network comprises 2.5 times as many sites as the government GEONET system.
    “By utilizing these observation networks, we aim to understand crustal deformation phenomena in higher resolution and to search for unknown phenomena that have not been found so far,” explained study author Yusaku Ohta, a geoscientist and assistant professor at the Graduate School of Science, Tohoku University.
    The study used raw GNSS data from SoftBank’s cell phone base stations to evaluate its quality for monitoring crustal deformation. Two datasets were analyzed: one from a seismically quiet nine-day period in September 2020 in Japan’s Miyagi Prefecture, the other from a nine-day period that included a magnitude 7.3 earthquake off the coast of Fukushima Prefecture on February 13, 2021.
    The study authors found that SoftBank’s dense GNSS network can monitor crustal deformation with reasonable precision. “We have shown that crustal deformation can be monitored with an unprecedentedly high spatial resolution by the original, very dense GNSS observation networks of cell phone carriers that are being deployed for the advancement of location-based services,” said earth scientist Mako Ohzono, associate professor at Hokkaido University.
    Looking ahead, they project that combining the SoftBank sites with the government-run GEONET sites could yield better spatial resolution for more detailed fault models. In the study area of Fukushima Prefecture, combining the networks would result in an average density of one GNSS site per 5.7 kilometers. “It indicates that these private sector GNSS observation networks can play a complementary role to GNSS networks operated by public organizations,” said Ohta.
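    As a rough illustration of where a figure like that comes from (this is not a calculation from the paper, and the numbers below are hypothetical), the average spacing of a roughly uniform network can be approximated as the square root of the covered area divided by the number of sites:

    ```python
    # Back-of-the-envelope sketch with illustrative numbers (not from the study):
    # mean site spacing of a roughly uniform network ~ sqrt(area / number of sites).
    import math

    def mean_spacing_km(area_km2: float, n_sites: int) -> float:
        """Approximate average distance between neighboring sites, in km."""
        return math.sqrt(area_km2 / n_sites)

    # Hypothetical example: ~425 combined sites spread over ~13,800 km^2
    # works out to roughly 5.7 km between sites.
    print(round(mean_spacing_km(13_800, 425), 1))
    ```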
    The study paved the way for considering synergy between public and private GNSS networks as a resource for seismic monitoring in Japan and elsewhere. “The results are important for understanding earthquake phenomena and volcanic activity, which can contribute to disaster prevention and mitigation,” noted Ohzono.
    Story Source:
    Materials provided by Tohoku University. Note: Content may be edited for style and length.

  • A cautionary tale of machine learning uncertainty

    A new analysis shows that researchers using machine learning methods could risk underestimating uncertainties in their final results.
    The Standard Model of particle physics offers a robust theoretical picture of the fundamental particles and most of the fundamental forces that compose the universe. All the same, there are several aspects of the universe, from the existence of dark matter to the oscillating nature of neutrinos, that the model can’t explain, suggesting that the mathematical descriptions it provides are incomplete. While experiments so far have been unable to identify significant deviations from the Standard Model, physicists hope that these gaps could start to appear as experimental techniques become increasingly sensitive.
    A key element of these improvements is the use of machine learning algorithms, which can automatically improve upon classical techniques by using higher-dimensional inputs and extracting patterns from many training examples. Yet in a new analysis published in EPJ C, Aishik Ghosh at the University of California, Irvine, and Benjamin Nachman at the Lawrence Berkeley National Laboratory, USA, show that researchers using machine learning methods could risk underestimating uncertainties in their final results.
    In this context, machine learning algorithms can be trained to identify particles and forces within the data collected by experiments such as high-energy collisions in particle accelerators, and to identify new particles that don’t match up with the theoretical predictions of the Standard Model. To train machine learning algorithms, physicists typically use simulations of experimental data, which are based on advanced theoretical calculations. The algorithms can then classify particles in real experimental data.
    These training simulations may be incredibly accurate, but they can still only approximate what would actually be observed in a real experiment. As a result, researchers need to estimate the possible differences between their simulations and true nature, giving rise to theoretical uncertainties. In turn, these differences can weaken or even bias a classifier algorithm’s ability to identify fundamental particles.
    Recently, physicists have increasingly begun to consider how machine learning approaches could be developed that are insensitive to these estimated theoretical uncertainties. The idea is to decorrelate the performance of these algorithms from imperfections in the simulations. If this could be done effectively, it would allow for algorithms whose uncertainties are far lower than those of traditional classifiers trained on the same simulations. But as Ghosh and Nachman argue, the estimation of theoretical uncertainties essentially involves well-motivated guesswork, making it crucial for researchers to be cautious about this insensitivity.
    In particular, the duo argues there is a real danger that these techniques will simply deceive the unsuspecting researcher by reducing only the estimate of the uncertainty, rather than the true uncertainty. A machine learning procedure that is insensitive to the estimated theory uncertainty may not be insensitive to the actual difference between nature and the approximations used to simulate the training data. This could lead physicists to artificially underestimate their theory uncertainties if they aren’t careful. In high-energy particle collisions, for example, it could cause a classifier to incorrectly confirm the presence of certain fundamental particles.
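    To make the failure mode concrete, here is a minimal, hypothetical sketch (not the authors’ code; the “signal,” “background,” variation and “nature” samples below are all invented). A classifier is made insensitive to one assumed simulation variation, so the estimated uncertainty looks tiny, while nature differs from the simulation in a different way and the true shift in performance remains:

    ```python
    # Toy illustration of the pitfall: insensitivity to the *estimated* variation
    # says nothing about the *actual* simulation-vs-nature difference.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20_000

    def sample(shift=0.0, width=1.0):
        """Toy 'signal' vs 'background' in two features; only x2 is mismodeled."""
        sig = np.column_stack([rng.normal(1.0, 1.0, n), rng.normal(1.0 + shift, width, n)])
        bkg = np.column_stack([rng.normal(-1.0, 1.0, n), rng.normal(-1.0 + shift, width, n)])
        return np.vstack([sig, bkg]), np.concatenate([np.ones(n), np.zeros(n)])

    X_nom, y_nom = sample()              # nominal simulation
    X_var, y_var = sample(shift=0.3)     # the estimated theory variation
    X_nat, y_nat = sample(width=1.5)     # "nature" differs in an unanticipated way

    # Crude "decorrelation": train on nominal plus the estimated variation so the
    # classifier becomes insensitive to the variation that was written down.
    clf = LogisticRegression(max_iter=1000).fit(
        np.vstack([X_nom, X_var]), np.concatenate([y_nom, y_var]))

    estimated_unc = abs(clf.score(X_nom, y_nom) - clf.score(X_var, y_var))  # looks small
    true_shift = abs(clf.score(X_nom, y_nom) - clf.score(X_nat, y_nat))     # can stay large
    print(f"estimated uncertainty: {estimated_unc:.3f}  true shift: {true_shift:.3f}")
    ```

    The first number says nothing about the second: the insensitivity is only with respect to the variation that was assumed, not with respect to nature.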
    In presenting this ‘cautionary tale’, Ghosh and Nachman hope that future assessments of the Standard Model which use machine learning will not be caught out by incorrectly shrinking uncertainty estimates. This could enable physicists to better ensure reliability in their results, even as experimental techniques become ever more sensitive. In turn, it could pave the way for experiments which finally reveal long-awaited gaps in the Standard Model’s predictions.
    Story Source:
    Materials provided by Springer. Note: Content may be edited for style and length.

  • On the hunt for ultra-thin materials using data mining

    Two-dimensional (2D) materials possess extraordinary properties. They usually consist of atomic layers that are only a few nanometers thick and are particularly good at conducting heat and electricity, for instance. To the astonishment of many scientists, it recently emerged that 2D materials can also be formed from certain metal oxides. These oxides are of great interest for applications such as nanoelectronics. A German-American research team, led by the Helmholtz-Zentrum Dresden-Rossendorf (HZDR), has now succeeded in predicting twenty-eight representatives of this new class of materials using data-driven methods.
    There is a substantial difference between conventional 2D materials such as graphene and the novel materials that can be synthesized from metal oxides such as ilmenite and chromite. The latter do not form weak interactions, known as van der Waals forces, in their crystal structure, but instead form stronger ionic bonds that point in all directions. For this reason, only a few experiments have so far succeeded in detaching novel 2D materials from 3D material blocks. The results of the study can now guide further experiments of this type: using theoretical methods, the scientists predict which compounds are actually worthwhile candidates for experimental research.
    “With our data-driven method, we built upon the first available information from the initial experiments. From this information, we developed structural prototypes and then ran them through a huge materials database as a filter criterion,” explains the leader of the study, Dr. Rico Friedrich from the HZDR Institute of Ion Beam Physics and Materials Research. “The main challenge was figuring out why these materials form 2D systems so easily with particular oxides. From this information, we were able to develop a valid generalized search criterion and could systematically characterize the identified candidates according to their properties.”
    For this purpose, the researchers primarily applied what is known as “density functional theory,” a practical computational method for electronic structures that is widely used in quantum chemistry and in condensed matter physics. For the necessary computing stages, they collaborated with several German high-performance computing centers. A decisive factor was determining the exfoliation energy, which defines how much energy must be expended to remove a 2D layer from the surface of a material.
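    As a rough sketch of that quantity (a conventional definition, not code from the study; the example values are invented), the exfoliation energy can be written as the total energy of an isolated layer minus the bulk energy per layer, normalized by the layer area:

    ```python
    # Illustrative only: a conventional way to define an exfoliation energy from
    # DFT total energies; the numbers in the example are invented.
    def exfoliation_energy(e_isolated_layer_eV: float, e_bulk_eV: float,
                           layers_in_bulk_cell: int, layer_area_A2: float) -> float:
        """Energy per unit area (eV per square angstrom) to peel one layer off the bulk."""
        e_bulk_per_layer = e_bulk_eV / layers_in_bulk_cell
        return (e_isolated_layer_eV - e_bulk_per_layer) / layer_area_A2

    # Hypothetical numbers: a low value flags a compound as a promising 2D candidate.
    print(f"{exfoliation_energy(-100.10, -401.00, 4, 25.0):.3f} eV/A^2")
    ```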
    Materials database with approximately 3.5 million entries
    The study also utilized the AFLOW materials database (Automatic Flow for Materials Discovery), which has been under development for more than twenty years by Prof. Stefano Curtarolo of Duke University (USA), who is also an author of the study. AFLOW is regarded as one of the largest materials science databases and classifies approximately 3.5 million compounds with more than 700 million calculated material properties.
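    Purely to illustrate the screening idea (this is not the AFLOW software or its API; the entries, field names, and threshold below are invented), the filtering step can be pictured as selecting database entries that match one of the structural prototypes and ranking them by their computed exfoliation energy:

    ```python
    # Hypothetical screening sketch; entries, fields and the cutoff are invented
    # for illustration and are not taken from AFLOW or the study.
    candidates = [
        {"formula": "FeTiO3",  "prototype": "ilmenite-like", "exfoliation_eV_A2": 0.011},
        {"formula": "FeCr2O4", "prototype": "spinel-like",   "exfoliation_eV_A2": 0.045},
        {"formula": "MgAl2O4", "prototype": "spinel-like",   "exfoliation_eV_A2": 0.120},
    ]

    wanted_prototypes = {"ilmenite-like", "spinel-like"}
    cutoff = 0.05  # invented threshold for "easily exfoliable"

    hits = sorted(
        (c for c in candidates
         if c["prototype"] in wanted_prototypes and c["exfoliation_eV_A2"] < cutoff),
        key=lambda c: c["exfoliation_eV_A2"],
    )
    for c in hits:
        print(c["formula"], c["exfoliation_eV_A2"])
    ```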
    Together with the associated software, the database ultimately not only provided the researchers with the chemical composition of twenty-eight 2D-capable materials, but also enabled them to study their properties, which are remarkable in electronic, magnetic, and topological respects. According to Rico Friedrich, their specific magnetic surface structures could make them particularly attractive for spintronic applications, such as data storage in computers and smartphones.
    “I’m certain that we can find additional 2D materials of this kind,” says the Dresden physicist, looking ahead. “With enough candidates, perhaps a dedicated database could even be created, specialized entirely in this new class of materials.” The HZDR scientists remain in close contact with colleagues from a thematically related collaborative research center (Sonderforschungsbereich) at TU Dresden as well as with the leading research group in the United States for synthesizing novel 2D systems. Together with both partners, they plan to pursue further study of the most promising compounds.
    Story Source:
    Materials provided by Helmholtz-Zentrum Dresden-Rossendorf. Note: Content may be edited for style and length.

  • A first step towards quantum algorithms: Minimizing the guesswork of a quantum ensemble

    Given the rapid pace at which technology is developing, it comes as no surprise that quantum technologies are expected to become commonplace within decades. A big part of ushering in this new age of quantum computing is developing a new understanding of both classical and quantum information and of how the two can be related to each other.
    Before classical information can be sent across quantum channels, it must first be encoded. This encoding is done by means of quantum ensembles: a quantum ensemble is a set of quantum states, each occurring with its own probability. To accurately receive the transmitted information, the receiver has to repeatedly ‘guess’ the state being sent. This gives rise to a cost function called ‘guesswork’: the average number of guesses required to correctly identify the state.
    The concept of guesswork has been studied at length in classical ensembles, but the subject is still new for quantum ensembles. Recently, a research team from Japan — consisting of Prof. Takeshi Koshiba of Waseda University, Michele Dall’Arno from Waseda University and Kyoto University, and Prof. Francesco Buscemi from Nagoya University — has derived analytical solutions to the guesswork problem subject to a finite set of conditions. “The guesswork problem is fundamental in many scientific areas in which machine learning techniques or artificial intelligence are used. Our results trailblaze an algorithmic aspect of the guesswork problem,” says Koshiba. Their findings are published in IEEE Transactions on Information Theory.
    To begin with, the researchers considered a common formalism of quantum circuits that relates the transmitted state of a quantum ensemble to the measurement performed by the receiver. They next introduced the probability distributions for both the quantum ensemble and the numberings obtained from the quantum measurement, and then established the guesswork function. The guesswork function maps any pair of ensemble and measurement to the expected number of guesses, averaging the guess number t over the probability that the tth guess is the correct one. Finally, they minimized the guesswork function over the possible measurements and used this result to derive analytical solutions to the guesswork problem subject to a finite set of conditions.
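    In symbols (a standard way of writing guesswork, not necessarily the paper’s exact notation), the classical guesswork of a distribution is the average position of the correct answer when outcomes are guessed in order of decreasing probability, and the quantum version is additionally minimized over measurement strategies:

    ```latex
    % Schematic definitions; the notation is generic, not copied from the paper.
    % Classical guesswork of a distribution p, guessing outcomes in decreasing
    % order of probability (\sigma sorts the outcomes accordingly):
    G(p) = \sum_{k=1}^{n} k \, p_{\sigma(k)}
    % Quantum guesswork of an ensemble: minimize the expected guess number over
    % all measurement strategies M available to the receiver.
    G = \min_{M} \sum_{t \ge 1} t \, \Pr[\text{the $t$-th guess is correct} \mid M]
    ```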
    These solutions included the explicit solution for a qubit ensemble with a uniform probability distribution. “Previously, results for analytical solutions have been known only for binary and symmetric ensembles. Our calculation for ensembles with a uniform probability distribution extends these,” explains Koshiba. The research team also calculated the solutions for a qubit regular polygonal ensemble and a qubit regular polyhedral ensemble.
    “Guesswork is a very basic scientific problem, but there is very little research on quantum guesswork and even less on the algorithmic implications of quantum guesswork. Our paper goes a little way towards filling that gap,” concludes Koshiba.
    While the consequences of these findings may not be immediately obvious, they are likely to have a major influence on quantum science in the future, in areas such as quantum chemistry for drug development and quantum software for quantum computing.
    Story Source:
    Materials provided by Waseda University. Note: Content may be edited for style and length.

  • Researchers use flat lenses to extend viewing distance for 3D display

    Researchers have demonstrated a prototype glasses-free 3D light field display system with a significantly extended viewing distance thanks to a newly developed flat lens. The system is an important step toward compact, realistic-looking 3D displays that could be used for televisions, portable electronics and table-top devices.
    Light field displays use a dense field of light rays to produce full-color real-time 3D videos that can be viewed without glasses. This approach to creating a 3D display allows several people to view the virtual scene at once, much like a real 3D object.
    “Most light field 3D displays have a limited viewing range, which causes the 3D virtual image to degrade as the observer moves farther away from the device,” said research team leader Wen Qiao from Soochow University. “The nanostructured flat lens we designed is just 100 microns thick and has a very large depth of focus, which enables a high-quality virtual 3D scene to be seen from farther away.”
    In Optica, an Optica Publishing Group journal, the researchers report that their prototype display exhibits high efficiency and high color fidelity over viewing distances from 24 cm to 90 cm. These characteristics all combine to create a more realistic viewing experience.
    “We developed this new technology in hopes of creating displays that could allow people to feel as if they were actually together during a video conference,” said Qiao. “With the continued development of nanotechnology, we envision that glasses-free 3D displays will become a normal part of everyday life and will change the way people interact with computers.”
    Creating multiple views
    Light field displays create realistic images by projecting different views, which allow the 3D scene to appear correct when viewed from different angles. The focal length of the lenses used to create these views is the limiting factor for viewing distance.

  • Roadmap for finding new functional porous materials

    A recent study has revealed how future structures of metal-organic polyhedra (MOPs) can be predicted and designed at the molecular level. The discovery of new structures holds tremendous promise for accessing advanced functional materials for energy and environmental applications. Although these cage-based porous materials are attracting attention as an emerging functional platform for numerous applications, their packing structures have remained hard to predict and seemingly impossible to control. There is therefore a high demand for a roadmap for discovering and rationally designing new MOP structures.
    A research team, led by Professor Wonyoung Choe in the Department of Chemistry at Ulsan National Institute of Science and Technology (UNIST), South Korea, has made a major leap forward in revealing how future structures of MOPs can be predicted and designed at the molecular level. Their findings are expected to create a new paradigm for accelerating materials development and application of MOPs.
    Metal-organic frameworks (MOFs), another well-known class of porous materials, developed rapidly before MOPs. MOFs share compositional similarities (i.e., metal clusters and organic ligands) with MOPs. However, the molecular building blocks of MOFs are connected in an extended manner, whereas in MOPs discrete cages consisting of metal clusters and organic ligands are packed together by weak interactions. Unlike MOPs, MOFs have been synthesized by the thousands since their first discovery, and they are becoming increasingly important materials in academia and industry alike. A major driving force behind the phenomenal success of MOFs is their predictable and designable structures, built from a rich choice of molecular building blocks: by considering the molecular geometry of the building blocks, possible structures can be predicted and designed.
    Until now, it was believed that strong bonds connecting the building blocks are necessary to construct structures in a predictable way. Since weak or non-directional interactions have often resulted in unpredictable structures, the rational design of MOPs has received far less attention. In this study, the research team discovered a special type of MOP for which the design principle can be applied to molecular packing systems despite the absence of strong bonds. Zirconium (Zr)-based MOPs are notable examples: the authors showed that multiple weaker bonds can play a role similar to that of strong bonds.
    Zr-based MOPs are an emerging class of MOPs notable for their excellent chemical stability. While Zr-MOPs are essentially cage-based compounds, features mainly found in MOFs, such as robust frameworks and permanent porosity, also appear in them. The authors say that these extraordinary dual features motivated them to further investigate the solid-state packing of Zr-MOPs. In this study, they not only provided a comprehensive survey of the existing structures but also predicted structures that have not yet been observed but are potentially accessible. A fundamental understanding of the nanoscale self-assembly of the cages provides opportunities to control packing structure, porosity, and properties. The authors expect that the unique dual features of Zr-MOPs can lead to many intriguing applications that are not accessible with typical MOPs or MOFs, and they encourage the search for other interesting classes of cage-based frameworks.
    “The emergence of new structures would provide a new opportunity to control their properties,” said Professor Wonyoung Choe. “Taking a different perspective on cage-based frameworks can lead to a new stage of functional porous materials.”
    The findings of this research have been published as a Perspective in Chem, a sister journal to Cell, on March 10, 2022. This study has been supported by the National Research Foundation (NRF) of Korea via the Mid-Career Researcher Program, Hydrogen Energy Innovation Technology Development Project, Science Research Center (SRC), and Global Ph.D. Fellowship (GPF), as well as Korea Environment Industry & Technology Institute (KEITI) through Public Technology Program based on Environmental Policy Program, funded by Korea Ministry of Environment (MOE).
    Story Source:
    Materials provided by Ulsan National Institute of Science and Technology (UNIST). Original written by JooHyeon Heo. Note: Content may be edited for style and length.

  • Robot that seems to convey emotion while reading

    Scientists from the Faculty of Engineering, Information and Systems at the University of Tsukuba devised a text message mediation robot that can help users control their anger when receiving upsetting news. This device may help improve social interactions as we move towards a world with increasingly digital communications.
    While a quick text message apology is a fast and easy way for friends to let us know they are going to be late for a planned meet-up, it often lacks the human element that would accompany an explanation given face-to-face, or even over the phone. The news is likely to be more upsetting when we cannot perceive the emotional weight behind our friends’ regret at making us wait.
    Now, researchers at the University of Tsukuba have built a handheld robot they call OMOY, which is equipped with a movable weight actuated by mechanical components inside its body. By shifting the internal weight, the robot can express simulated emotions. The robot was deployed as a mediator for reading text messages: a text with unwelcome or frustrating news could be followed by OMOY urging the user not to get upset, or even expressing sympathy. “With the medium of written digital communication, the lack of social feedback redirects focus from the sender and onto the content of the message itself,” author Professor Fumihide Tanaka says. The mediator robot was designed to suppress the user’s anger and other negative interpersonal motivations, such as thoughts of revenge, and instead foster forgiveness.
    The researchers tested 94 people with a message like “I’m sorry, I am late. The appointment slipped my mind. Can you wait another hour?” The team found that OMOY was able to reduce negative emotions. “The mediator robot can relay a frustrating message, followed by giving its own opinion. When this speech is accompanied by the appropriate weight shifts, we saw that the user would perceive the ‘intention’ of the robot to help them calm down,” Professor Tanaka says.
    The robot’s bodily expression, produced by weight shifts, did not require any specific external components such as arms or legs, implying that internal weight movements alone can reduce a user’s anger or other negative emotions without the use of rich body gestures or facial expressions.
    Story Source:
    Materials provided by University of Tsukuba. Note: Content may be edited for style and length. More

  • Video game-based therapy helps stroke patients

    After a stroke, patients may lose feeling in an arm or experience weakness and reduced movement that limits their ability to complete basic daily activities. Traditional rehabilitation therapy is very intensive and time-consuming, and it can be both expensive and inconvenient, especially for rural patients traveling long distances to in-person therapy appointments.
    That’s why a team of researchers, including one at the University of Missouri, utilized a motion-sensor video game, Recovery Rapids, to allow patients recovering from a stroke to improve their motor skills and affected arm movements at home while checking in periodically with a therapist via telehealth.
    The researchers found the game-based therapy led to improved outcomes similar to a highly regarded form of in-person therapy, known as constraint-induced therapy, while only requiring one-fifth of the therapist hours. This approach saves time and money while increasing convenience and safety as telehealth has boomed in popularity during the COVID-19 pandemic.
    “As an occupational therapist, I have seen patients from rural areas drive more than an hour to come to an in-person clinic three to four days a week, where the rehab is very intensive, taking three to four hours per session, and the therapist must be there the whole time,” said Rachel Proffitt, assistant professor in the MU School of Health Professions. “With this new at-home gaming approach, we are cutting costs for the patient and reducing time for the therapist while still improving convenience and overall health outcomes, so it’s a win-win. By saving time for the therapists, we can also now serve more patients and make a broader impact on our communities.”
    Traditional rehab home exercises tend to be very repetitive and monotonous, and patients rarely adhere to them. The Recovery Rapids game helps patients look forward to rehabilitation by completing various challenges in a fun, interactive environment, and the researchers found that the patients adhered well to their prescribed exercises.
    “The patient is virtually placed in a kayak, and as they go down the river, they perform arm motions simulating paddling, rowing, scooping up trash, swaying from side to side to steer, and reaching overhead to clear out spider webs and bats, so it’s making the exercises fun,” Proffitt said. “As they progress, the challenges get harder, and we conduct check-ins with the participants via telehealth to adjust goals, provide feedback and discuss the daily activities they want to resume as they improve.”
    Nearly 800,000 Americans have a stroke each year, according to the CDC, and two-thirds of stroke survivors report that they cannot use their affected limbs for normal daily activities, such as making a cup of coffee, cooking a meal or playing with their grandchildren.
    “I am passionate about helping patients get back to all the activities they love to do in their daily life,” Proffitt said. “Anything we can do as therapists to help in a creative way while saving time and money is the ultimate goal.”
    Story Source:
    Materials provided by University of Missouri-Columbia. Note: Content may be edited for style and length. More