More stories

  • Gunfire or plastic bag popping? Trained computer can tell the difference

    According to the Gun Violence Archive, there have been 296 mass shootings in the United States this year. Sadly, 2021 is on pace to be America’s deadliest year of gun violence in the last two decades.
    Discerning between a dangerous audio event like a gun firing and a non-life-threatening event, such as a plastic bag bursting, can mean the difference between life and death. It can also determine whether or not public safety workers need to be deployed. Humans and computers alike often confuse the sound of a plastic bag popping with that of a real gunshot.
    Over the past few years, there has been a degree of hesitation over the implementation of some of the well-known available acoustic gunshot detector systems since they can be costly and often unreliable.
    In an experimental study, researchers from Florida Atlantic University’s College of Engineering and Computer Science focused on the reliability of these detection systems as it relates to the false-positive rate. The ability of a model to correctly discern sounds, even in the subtlest of scenarios, is what separates a well-trained model from an inefficient one.
    Faced with the daunting task of accounting for every sound similar to a gunshot, the researchers created a new dataset of audio recordings of plastic bag explosions collected in a variety of environments and under varying conditions, such as plastic bag size and distance from the recording microphones. The audio clips ranged from 400 to 600 milliseconds in duration.
    The researchers also developed a baseline classification algorithm based on a convolutional neural network (CNN) to illustrate the relevance of this data collection effort. The new data were then used, together with a gunshot sound dataset, to train a CNN-based classification model to differentiate life-threatening gunshot events from non-life-threatening plastic bag explosion events.
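    The article does not spell out the network architecture or preprocessing, but a minimal sketch of the kind of CNN audio classifier such a pipeline might use is shown below (PyTorch; the 16 kHz sample rate, log-mel front end, layer sizes and class labels are illustrative assumptions, not the FAU model).
      # Hypothetical sketch: classify ~0.5 s audio clips as gunshot vs. plastic bag pop
      # from log-mel spectrograms. Architecture and settings are illustrative only.
      import torch
      import torch.nn as nn
      import torchaudio

      class GunshotVsBagCNN(nn.Module):
          def __init__(self, n_classes: int = 2):
              super().__init__()
              self.melspec = torchaudio.transforms.MelSpectrogram(sample_rate=16_000, n_mels=64)
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1),              # global pooling -> (batch, 64, 1, 1)
              )
              self.classifier = nn.Linear(64, n_classes)

          def forward(self, waveform: torch.Tensor) -> torch.Tensor:
              # waveform: (batch, samples), i.e. 400-600 ms clips at 16 kHz
              logmel = torch.log(self.melspec(waveform) + 1e-6).unsqueeze(1)
              return self.classifier(self.features(logmel).flatten(1))

      model = GunshotVsBagCNN()
      clips = torch.randn(2, 8000)                      # two 0.5 s clips of random noise
      print(model(clips).softmax(dim=1))                # per-class probabilities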

  • How organic neuromorphic electronics can think and act

    The processor is the brain of a computer — an often-quoted phrase. But processors work fundamentally differently than the human brain. Transistors perform logic operations by means of electronic signals. In contrast, the brain works with nerve cells, so-called neurons, which are connected via biological conductive paths, so-called synapses. At a higher level, this signaling is used by the brain to control the body and perceive the surrounding environment. The reaction of the body/brain system when certain stimuli are perceived — for example, via the eyes, ears or sense of touch — is triggered through a learning process. For example, children learn not to reach twice for a hot stove: one input stimulus leads to a learning process with a clear behavioral outcome.
    Scientists working with Paschalis Gkoupidenis, group leader in Paul Blom’s department at the Max Planck Institute for Polymer Research, have now applied this basic principle of learning through experience in a simplified form and steered a robot through a maze using a so-called organic neuromorphic circuit. The work was an extensive collaboration between the Universities of Eindhoven, Stanford, Brescia, Oxford and KAUST.
    “We wanted to use this simple setup to show how powerful such ‘organic neuromorphic devices’ can be in real-world conditions,” says Imke Krauhausen, a doctoral student in Gkoupidenis’ group and at TU Eindhoven (van de Burgt group), and first author of the scientific paper.
    To achieve navigation of the robot through the maze, the researchers fed the smart adaptive circuit with sensory signals coming from the environment. The path towards the exit is indicated visually at each maze intersection. Initially, the robot often misinterprets the visual signs, makes the wrong turning decisions at the intersections and loses its way. Whenever it follows a wrong, dead-end path, corrective stimuli discourage it from repeating that decision: when the robot hits a wall, for example, an electrical signal induced by a touch sensor attached to the robot is applied directly to the organic circuit. With each subsequent run of the experiment, the robot gradually learns to make the right turning decisions at the intersections, i.e., to avoid receiving corrective stimuli, and after a few trials it finds its way out of the maze. This learning process happens exclusively on the organic adaptive circuit.
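    The learning itself happens in the analogue organic circuit, but the feedback principle can be illustrated with a short software toy (the maze depth, the single “synaptic” weight and the potentiation step below are hypothetical, not the hardware’s actual dynamics):
      # Toy analogue of learning from corrective stimuli: a weight w sets how
      # reliably the robot follows the visual sign at an intersection; every wall
      # hit (corrective stimulus) potentiates w, like the conductance increase in
      # the organic device, until the robot exits without errors.
      import random

      random.seed(0)
      N_INTERSECTIONS = 6          # hypothetical maze depth
      w = 0.1                      # weak initial sign-following
      POTENTIATION = 0.25          # weight increase per corrective stimulus

      for run in range(1, 8):
          wall_hits = 0
          for _ in range(N_INTERSECTIONS):
              if random.random() >= w:                 # misreads the sign: dead end
                  wall_hits += 1
                  w = min(1.0, w + POTENTIATION)       # corrective stimulus potentiates w
          print(f"run {run}: wall hits = {wall_hits}, weight = {w:.2f}")
          if wall_hits == 0:
              print("robot exits the maze without corrective stimuli")
              break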
    “We were really glad to see that the robot can pass through the maze after some runs by learning on a simple organic circuit. We have shown here a first, very simple setup. In the distant future, however, we hope that organic neuromorphic devices could also be used for local and distributed computing/learning. This will open up entirely new possibilities for applications in real-world robotics, human-machine interfaces and point-of-care diagnostics. Novel platforms for rapid prototyping and education, at the intersection of materials science and robotics, are also expected to emerge,” Gkoupidenis says.
    Story Source:
    Materials provided by Max Planck Institute for Polymer Research. Note: Content may be edited for style and length.

  • Quantum algorithms bring ions to a standstill

    Laser beams can do more than just heat things up; they can cool them down too. That is nothing new for physicists who have devoted themselves to precision spectroscopy and the development of optical atomic clocks. What is new is the extremely low temperature that researchers at the QUEST Institute at the Physikalisch-Technische Bundesanstalt (PTB) have been able to reach with their highly charged ions: this type of ion had never before been cooled down as far as 200 µK. The team succeeded by combining its established methods, which include the laser cooling of coupled ions, with methods from the field of quantum computing. The application of quantum algorithms ensured that ions too dissimilar for traditional laser cooling to work effectively could be cooled down together after all. This brings an optical atomic clock with highly charged ions closer, and such a clock might have the potential to be even more accurate than existing optical atomic clocks. The results have been published in the current issue of Physical Review X.
    If you want to investigate particles — such as ions — extremely accurately (say, using precision spectroscopy or for measuring their frequency in an atomic clock), then you have to bring them as close as you can to a standstill. The most extreme standstill is the same as the lowest possible temperature — meaning you have to cool them down as efficiently as you can. One of the established high-tech cooling methods is so-called laser cooling. This method sees the particles being slowed down by lasers that have been skillfully arranged. Not every particle is suited to this method, however. That is why pairs of coupled ions have been used at the QUEST Institute for a long time in order to overcome this: One ion (called the “cooling ion” or the “logic ion”) is cooled by lasers; simultaneously, its coupled partner ion is also cooled and can then be investigated spectroscopically (hence, it is called the “spectroscopy ion”). But this method has previously always reached its limits when the two ions have differed by too much in their charge-to-mass ratios — that is, when they have been very different in mass and very differently charged. “But it is now these very ions that are particularly interesting for our research, for instance, for developing novel optical clocks,” explains QUEST physicist Steven King.
    As he and his team are naturally very experienced in applying the laws of quantum mechanics (coupled cooling is, after all, based on quantum laws), they have made use of the toolkit of the quantum computing researcher. Quantum algorithms — i.e., computer operations based on manipulating individual quanta — can not only be used to perform calculations faster than ever before with a quantum computer. They can also help to extract kinetic energy from the mismatched ion pair. During so-called algorithmic cooling, quantum operations are used to do just that: to transfer the energy from the barely coolable motion of the spectroscopy ion to the easily coolable motion of the logic ion.
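    The press release gives no pulse sequences, but the bookkeeping behind algorithmic cooling can be sketched with a toy calculation (mean phonon numbers only, with made-up swap efficiency and sideband-cooling floor; not a quantum simulation of the PTB experiment):
      # Toy illustration: each cycle swaps excitation from the poorly coolable
      # "spectroscopy" mode into the "logic" mode, which sideband cooling then
      # empties again. All numbers are illustrative.
      n_spec, n_logic = 20.0, 0.1      # mean phonon numbers after pre-cooling
      SIDEBAND_FLOOR = 0.05            # residual occupation after sideband cooling
      SWAP_EFFICIENCY = 0.9            # fraction of excitation moved per swap

      for cycle in range(1, 6):
          transferred = SWAP_EFFICIENCY * n_spec
          n_spec -= transferred        # spectroscopy mode loses motional energy
          n_logic += transferred       # logic mode absorbs it ...
          n_logic = SIDEBAND_FLOOR     # ... and is laser-cooled back down
          print(f"cycle {cycle}: n_spec = {n_spec:.3f}, n_logic = {n_logic:.2f}")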
    And they managed to do this extremely well: “We were able to extract so much energy from the pair of ions — consisting of a singly charged beryllium ion and a highly charged argon ion — that their temperature finally dropped to only 200 µK,” said Lukas Spieß, one of QUEST’s PhD students. Such an ensemble has never been so close to absolute zero (that is, so motionless). “What is more, we also observed an unprecedentedly low level of electric-field noise,” he added. This noise normally heats the ions once the cooling stops, but it turned out to be particularly low in their apparatus. Combining these two results means that the final major hurdle has now been overcome and an optical atomic clock based on highly charged ions can be built. Such an atomic clock could reach an uncertainty of less than 10⁻¹⁸; only the best optical atomic clocks in the world are currently able to reach this kind of performance. The findings are also of great significance for the development of quantum computers and for precision spectroscopy.
    Story Source:
    Materials provided by Physikalisch-Technische Bundesanstalt (PTB). Original written by Erika Schow. Note: Content may be edited for style and length.

  • New crystal structure for hydrogen compounds for high-temperature superconductivity

    Superconductivity is the disappearance of electrical resistance in certain materials below a certain temperature, known as the “transition temperature.” The phenomenon has tremendous implications for revolutionizing technology as we know it, enabling low-loss power transmission and electromagnets that can be sustained without a continuous electrical supply. However, superconductivity usually requires extremely low temperatures of around 30 K (the temperature of liquid nitrogen, in comparison, is 77 K) and, therefore, expensive cooling technology. To have a shot at realizing low-cost superconducting technology, superconductivity must be achieved at much higher transition temperatures.
    Materials scientists have had a breakthrough on this front with crystalline materials containing hydrogen, known as “metal hydrides.” These are compounds formed by a metal atom bonded with hydrogen that have been predicted and realized as suitable candidates for achieving even room-temperature superconductivity. However, they require extremely high pressures to do so, limiting their practical applications.
    In a new study published in Chemistry of Materials, a group of researchers led by Professor Ryo Maezono from Japan Advanced Institute of Science and Technology (JAIST) performed computer simulations to expand the search for high-temperature superconductors, looking for potential candidates among ternary hydrides (hydrogen combined with two other elements).
    “In ternary hydrides, the number of elements is increased from two to three. While this enormously increases the number of possible combinations and can make the problem of predicting suitable materials more difficult, it also increases our chances of coming across a potential high-temperature superconductor,” explains Prof. Maezono.
    Using the supercomputer at the university, the researchers examined possible crystal structures for (LaH6)(YH6)y compounds (y = 1–4), looking for configurations that would yield stable structures and thus allow their synthesis in the laboratory at high pressures. Starting from random structures, the simulations worked through various candidate arrangements, testing their stability at extremely high pressures of around 300 GPa.
    The simulations revealed clathrate structures of LaYH12 and LaY3H24 with Cmmm symmetry, consisting of LaH24 and YH24 cages stacked on top of each other, as viable candidates for high-temperature, high-pressure superconductors. “The longer stacking for Cmmm-LaY3H24 leads to a slightly increased transition temperature,” explains Prof. Maezono. Among the possible structures, the highest transition temperature (between 137.11 K and 145.31 K) was observed for LaY3H24. The researchers attributed the higher transition temperature to a high “density of states” and a high “phonon frequency,” two parameters that are used to assess superconductivity in materials.
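    Although the paper’s exact workflow is not given here, studies of this kind typically estimate the transition temperature from the electron-phonon coupling and a characteristic phonon frequency via the McMillan formula in its Allen-Dynes form; the sketch below uses illustrative numbers, not the values computed for LaY3H24.
      # McMillan / Allen-Dynes estimate of Tc (without strong-coupling corrections).
      # omega_log is the logarithmic-average phonon frequency expressed in kelvin,
      # lam the electron-phonon coupling (it grows with the density of states at the
      # Fermi level), mu_star the Coulomb pseudopotential. Inputs are illustrative.
      import math

      def allen_dynes_tc(omega_log_K: float, lam: float, mu_star: float = 0.1) -> float:
          return (omega_log_K / 1.2) * math.exp(
              -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam))
          )

      # Stiff hydrogen cages -> high phonon frequencies; high density of states -> large lam.
      print(f"{allen_dynes_tc(omega_log_K=1200.0, lam=2.0):.0f} K")   # roughly 170 K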
    These findings have excited the researchers, who optimistically anticipate the discovery of more such high-temperature superconductors. “It is quite possible to predict, using simulations, other new combinations of elements that would improve the desired properties further,” says Prof. Maezono.
    With potential new discoveries on the horizon, a practical superconductor-based technology may not be a pipe dream after all!
    Story Source:
    Materials provided by Japan Advanced Institute of Science and Technology. Note: Content may be edited for style and length.

  • From flashing fireflies to cheering crowds — Physicists unlock secret to synchronization

    Physicists from Trinity College Dublin have unlocked the secret that explains how large groups of individual “oscillators” — from flashing fireflies to cheering crowds, and from ticking clocks to clicking metronomes — tend to synchronise when in each other’s company.
    Their work, just published in the journal Physical Review Research, provides a mathematical basis for a phenomenon that has perplexed millions — their newly developed equations help explain how individual randomness seen in the natural world and in electrical and computer systems can give rise to synchronisation.
    We have long known that when one clock runs slightly faster than another, physically connecting them can make them tick in time. But making a large assembly of clocks synchronise in this way was thought to be much more difficult — or even impossible, if there are too many of them.
    The Trinity researchers’ work, however, shows that synchronisation can occur even in very large assemblies of clocks.
    Dr Paul Eastham, Naughton Associate Professor in Physics at Trinity, said:
    “The equations we have developed describe an assembly of laser-like devices — acting as our ‘oscillating clocks’ — and they essentially unlock the secret to synchronisation. These same equations describe many other kinds of oscillators, however, showing that synchronisation is more readily achieved in many systems than was previously thought.
    “Many things that exhibit repetitive behaviour can be considered clocks, from flashing fireflies and applauding crowds to electrical circuits, metronomes, and lasers. Independently they will oscillate at slightly different rates, but when they are formed into an assembly their mutual influences can overcome that variation.”
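    The Trinity equations themselves are not reproduced in this summary, but the effect the quote describes can be demonstrated with the textbook Kuramoto model of coupled oscillators (a standard stand-in, not the laser-network equations of the paper); the order parameter r climbs towards 1 as the randomly detuned oscillators lock together.
      # Kuramoto model: N oscillators with slightly different natural frequencies,
      # coupled through the mean field. Coupling K and frequency spread are arbitrary.
      import numpy as np

      rng = np.random.default_rng(1)
      N, K, dt, steps = 500, 2.0, 0.01, 3001
      omega = rng.normal(0.0, 0.5, N)               # slightly different natural rates
      theta = rng.uniform(0.0, 2.0 * np.pi, N)      # random initial phases

      for step in range(steps):
          mean_field = np.mean(np.exp(1j * theta))  # r * exp(i * psi)
          r, psi = np.abs(mean_field), np.angle(mean_field)
          theta += dt * (omega + K * r * np.sin(psi - theta))
          if step % 1000 == 0:
              print(f"t = {step * dt:5.1f}  order parameter r = {r:.2f}")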
    This new discovery has a suite of potential applications, including developing new types of computer technology that uses light signals to process information.
    The research was supported by the Irish Research Council and involved the Trinity Centre for High Performance Computing, which has been supported by Science Foundation Ireland.
    Story Source:
    Materials provided by Trinity College Dublin. Note: Content may be edited for style and length.

  • Computer-, smartphone-based treatments effective at reducing symptoms of depression

    Computer- and smartphone-based treatments appear to be effective in reducing symptoms of depression, and while it remains unclear whether they are as effective as face-to-face psychotherapy, they offer a promising alternative to address the growing mental health needs spawned by the COVID-19 pandemic, according to research published by the American Psychological Association.
    “The year 2020 marked 30 years since the first paper was published on a digital intervention for the treatment of depression. It also marked an unparalleled inflection point in the worldwide conversion of mental health services from face-to-face delivery to remote, digital solutions in response to the COVID-19 pandemic,” said lead author Isaac Moshe, MA, a doctoral candidate at the University of Helsinki. “Given the accelerated adoption of digital interventions, it is both timely and important to ask to what extent digital interventions are effective in the treatment of depression, whether they may provide viable alternatives to face-to-face psychotherapy beyond the lab and what are the key factors that moderate outcomes.”
    The research was published in the journal Psychological Bulletin.
    Digital interventions typically require patients to log in to a software program, website or app to read, watch, listen to and interact with content structured as a series of modules or lessons. Individuals often receive homework assignments relating to the modules and regularly complete digitally administered questionnaires relevant to their presenting problems. This allows clinicians to monitor patients’ progress and outcomes in cases where digital interventions include human support. Digital interventions are not the same as teletherapy, which has gotten much attention during the pandemic, according to Moshe. Teletherapy uses videoconferencing or telephone services to facilitate one-on-one psychotherapy.
    “Digital interventions have been proposed as a way of meeting the unmet demand for psychological treatment,” Moshe said. “As digital interventions are being increasingly adopted within both private and public health care systems, we set out to understand whether these treatments are as effective as traditional face-to-face therapy, to what extent human support has an impact on outcomes and whether the benefits found in lab settings transfer to real-world settings.”
    Researchers conducted a meta-analysis of 83 studies testing digital applications for treating depression, dating as far back as 1990 and involving more than 15,000 participants in total, 80% adults and 69.5% women. All of the studies were randomized controlled trials comparing a digital intervention treatment to either an inactive control (e.g., waitlist control or no treatment at all) or an active comparison condition (e.g., treatment as usual or face-to-face psychotherapy) and primarily focused on individuals with mild to moderate depression symptoms.
    Overall, researchers found that digital interventions improved depression symptoms over control conditions, but the effect was not as strong as that found in a similar meta-analysis of face-to-face psychotherapy. There were not enough studies in the current meta-analysis to directly compare digital interventions to face-to-face psychotherapy, and researchers found no studies comparing digital strategies with drug therapy.
    The digital treatments that involved a human component, whether in the form of feedback on assignments or technical assistance, were the most effective in reducing depression symptoms. This may be partially explained by the fact that a human component increased the likelihood that participants would complete the full intervention, and compliance with therapy is linked to better outcomes, according to Moshe.
    One finding that concerned Moshe was that only about half of participants actually completed the full treatment. That number was even lower (25%) in studies conducted in real-world health care settings compared with controlled laboratory experiments. This may help explain why treatments tested in real-world settings were less effective than those tested in laboratories.
    “The COVID-19 pandemic has had a major impact on mental health across the globe. Depression is predicted to be the leading cause of lost life years due to illness by 2030. At the same time, less than 1 in 5 people receive appropriate treatment, and less than 1 in 27 in low-income settings. A major reason for this is the lack of trained health care providers,” he said. “Overall, our findings from effectiveness studies suggest that digital interventions may have a valuable role to play as part of the treatment offering in routine care, especially when accompanied by some sort of human guidance.”

  • 'Human-like' brain helps robot out of a maze

    A maze is a popular device among psychologists to assess the learning capacity of mice or rats. But how about robots? Can they learn to successfully navigate the twists and turns of a labyrinth? Now, researchers at the Eindhoven University of Technology (TU/e) in the Netherlands and the Max Planck Institute for Polymer Research in Mainz, Germany, have proven they can. Their robot bases its decisions on the very system humans use to think and act: the brain. The study, which was published in Science Advances, paves the way to exciting new applications of neuromorphic devices in health and beyond.
    Machine learning and neural networks have become all the rage in recent years, and quite understandably so, considering their many successes in image recognition, medical diagnosis, e-commerce and many other fields. Still, this software-based approach to machine intelligence has its drawbacks, not least because it consumes so much power.
    Mimicking the human brain
    This power issue is one of the reasons that researchers have been trying to develop computers that are much more energy efficient. In the search for a solution, many are finding inspiration in the human brain, a thinking machine unrivalled in its low power consumption thanks to the way it combines memory and processing.
    Neurons in our brain communicate with one another through so-called synapses, which are strengthened each time information flows through them. It is this plasticity that ensures that humans remember and learn.
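    That plasticity rule can be written down in a few lines; the toy below (a single threshold neuron with three inputs, made-up firing rates and learning rate) is only meant to illustrate the strengthen-when-used principle, not the organic hardware.
      # Hebbian toy: connections that repeatedly carry signals while the neuron
      # fires get stronger; rarely used connections stay weak.
      import numpy as np

      rng = np.random.default_rng(0)
      w = np.full(3, 0.1)                  # three weak "synapses"
      LEARNING_RATE = 0.05

      for _ in range(200):
          x = (rng.random(3) < [0.8, 0.4, 0.05]).astype(float)   # inputs fire at different rates
          y = float(w @ x > 0.15)                                 # neuron fires above threshold
          w += LEARNING_RATE * y * x                              # strengthen co-active connections
          w = np.clip(w, 0.0, 1.0)

      print(np.round(w, 2))                # frequently used synapses end up strongest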
    “In our research, we have taken this model to develop a robot that is able to learn to move through a labyrinth,” explains Imke Krauhausen, PhD student at the department of Mechanical Engineering at TU/e and principal author of the paper.

  • Development of a versatile, accurate AI prediction technique even with a small number of experiments

    NIMS, Asahi Kasei, Mitsubishi Chemical, Mitsui Chemicals and Sumitomo Chemical have used the chemical materials open platform framework to develop an AI technique capable of increasing the accuracy of machine learning-based predictions of material properties (e.g., strength, brittleness) through efficient use of material structural data obtained from only a small number of experiments. This technique may expedite the development of various materials, including polymers.
    Materials informatics research exploits machine learning models to predict the physical properties of materials of interest based on compositional and processing parameters (e.g., temperature and pressure). This approach has accelerated materials development. When physical properties of materials are known to be strongly influenced by their post-processing microstructures, the model’s property prediction accuracy can be effectively improved by incorporating microstructure-related data (e.g., x-ray diffraction (XRD) and differential scanning calorimetry (DSC) data) into it. However, these types of data can only be obtained by actually analyzing processed materials. In addition to these analyses, improving prediction accuracy requires predetermined parameters (e.g., material compositions).
    This research group developed an AI technique capable of first selecting potentially promising material candidates for fabrication and then accurately predicting their physical properties using XRD, DSC and other measurement data obtained from only a small number of actually synthesized materials. This technique selects candidate materials using Bayesian optimization and other methods and repeats the AI-based selection process while incorporating measurement data into machine learning models. To verify the technique’s effectiveness, the group used it to predict the physical properties of polyolefins. As a result, this technique was found to improve the material property prediction accuracy of machine learning models with a smaller sample set of actually synthesized materials than methods in which candidate materials were randomly selected.
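    A minimal sketch of such a measure-and-refit loop is given below (a 1-D toy property, a Gaussian-process surrogate and an uncertainty-based pick; the real workflow uses Bayesian optimization over compositions and processing parameters with XRD/DSC descriptors, which are not modelled here).
      # Active-learning toy: the model proposes which candidate to synthesize next
      # (here, the point of largest predictive uncertainty), the new "measurement"
      # is added to the training data, and the model is refit. All settings and the
      # toy property function are illustrative.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def measure_property(x):
          """Stand-in for synthesizing a material and measuring a property."""
          return np.sin(3.0 * x) + 0.5 * x

      candidates = np.linspace(0.0, 3.0, 200).reshape(-1, 1)   # candidate grid
      X = candidates[[10, 190]]                                # two initial experiments
      y = measure_property(X).ravel()

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4)
      for round_ in range(8):
          gp.fit(X, y)
          mean, std = gp.predict(candidates, return_std=True)
          pick = int(np.argmax(std))                           # most informative candidate
          X = np.vstack([X, candidates[pick]])
          y = np.append(y, measure_property(candidates[pick, 0]))
          print(f"round {round_}: measured x = {candidates[pick, 0]:.2f}, "
                f"max std = {std.max():.3f}")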
    The use of this prediction accuracy improvement technique may enable a more thorough understanding of the relationship between materials’ structures and physical properties, which would facilitate investigation of fundamental causes of material properties and the formulation of more efficient materials development guidelines. Furthermore, this technique is expected to be applicable to the development of a wide range of materials in addition to polyolefins and other polymers, thereby promoting digital transformation (DX) in materials development.
    Story Source:
    Materials provided by National Institute for Materials Science, Japan. Note: Content may be edited for style and length.