More stories

  • From flashing fireflies to cheering crowds — Physicists unlock secret to synchronization

    Physicists from Trinity College Dublin have unlocked the secret that explains how large groups of individual “oscillators” — from flashing fireflies to cheering crowds, and from ticking clocks to clicking metronomes — tend to synchronise when in each other’s company.
    Their work, just published in the journal Physical Review Research, provides a mathematical basis for a phenomenon that has perplexed millions — their newly developed equations help explain how individual randomness seen in the natural world and in electrical and computer systems can give rise to synchronisation.
    We have long known that when one clock runs slightly faster than another, physically connecting them can make them tick in time. But making a large assembly of clocks synchronise in this way was thought to be much more difficult — or even impossible, if there are too many of them.
    The Trinity researchers’ work, however, explains that synchronisation can occur even in very large assemblies of clocks.
    Dr Paul Eastham, Naughton Associate Professor in Physics at Trinity, said:
    “The equations we have developed describe an assembly of laser-like devices — acting as our ‘oscillating clocks’ — and they essentially unlock the secret to synchronisation. These same equations describe many other kinds of oscillators, however, showing that synchronisation is more readily achieved in many systems than was previously thought.
    “Many things that exhibit repetitive behaviour can be considered clocks, from flashing fireflies and applauding crowds to electrical circuits, metronomes, and lasers. Independently they will oscillate at slightly different rates, but when they are formed into an assembly their mutual influences can overcome that variation.”
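    The team’s laser-oscillator equations are not reproduced here, but the qualitative effect they describe can be illustrated with the classic Kuramoto model of coupled oscillators. The following is a minimal Python sketch, purely for illustration and not the Trinity group’s model:

      import numpy as np

      # Minimal Kuramoto-model sketch (illustrative only, not the Trinity group's
      # laser equations): N oscillators with slightly different natural frequencies,
      # all coupled to one another with strength K. Above a critical coupling the
      # phases lock together and the order parameter r approaches 1.
      rng = np.random.default_rng(0)
      N, K, dt, steps = 1000, 2.0, 0.01, 10000
      omega = rng.normal(0.0, 1.0, N)          # natural frequencies (the "disorder")
      theta = rng.uniform(0.0, 2 * np.pi, N)   # random initial phases

      for _ in range(steps):
          z = np.exp(1j * theta).mean()        # complex order parameter r * exp(i*psi)
          r, psi = np.abs(z), np.angle(z)
          # Mean-field form of the all-to-all coupling:
          # (K/N) * sum_j sin(theta_j - theta_i) = K * r * sin(psi - theta_i)
          theta += (omega + K * r * np.sin(psi - theta)) * dt

      # r near 0 means incoherent ticking; r near 1 means a synchronised assembly.
      print(f"final order parameter r = {np.abs(np.exp(1j * theta).mean()):.2f}")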
    This new discovery has a suite of potential applications, including developing new types of computer technology that uses light signals to process information.
    The research was supported by the Irish Research Council and involved the Trinity Centre for High Performance Computing, which has been supported by Science Foundation Ireland.
    Story Source:
    Materials provided by Trinity College Dublin. Note: Content may be edited for style and length.

  • Computer-, smartphone-based treatments effective at reducing symptoms of depression

    Computer- and smartphone-based treatments appear to be effective in reducing symptoms of depression, and while it remains unclear whether they are as effective as face-to-face psychotherapy, they offer a promising alternative to address the growing mental health needs spawned by the COVID-19 pandemic, according to research published by the American Psychological Association.
    “The year 2020 marked 30 years since the first paper was published on a digital intervention for the treatment of depression. It also marked an unparalleled inflection point in the worldwide conversion of mental health services from face-to-face delivery to remote, digital solutions in response to the COVID-19 pandemic,” said lead author Isaac Moshe, MA, a doctoral candidate at the University of Helsinki. “Given the accelerated adoption of digital interventions, it is both timely and important to ask to what extent digital interventions are effective in the treatment of depression, whether they may provide viable alternatives to face-to-face psychotherapy beyond the lab and what are the key factors that moderate outcomes.”
    The research was published in the journal Psychological Bulletin.
    Digital interventions typically require patients to log in to a software program, website or app to read, watch, listen to and interact with content structured as a series of modules or lessons. Individuals often receive homework assignments relating to the modules and regularly complete digitally administered questionnaires relevant to their presenting problems. This allows clinicians to monitor patients’ progress and outcomes in cases where digital interventions include human support. Digital interventions are not the same as teletherapy, which has gotten much attention during the pandemic, according to Moshe. Teletherapy uses videoconferencing or telephone services to facilitate one-on-one psychotherapy.
    “Digital interventions have been proposed as a way of meeting the unmet demand for psychological treatment,” Moshe said. “As digital interventions are being increasingly adopted within both private and public health care systems, we set out to understand whether these treatments are as effective as traditional face-to-face therapy, to what extent human support has an impact on outcomes and whether the benefits found in lab settings transfer to real-world settings.”
    Researchers conducted a meta-analysis of 83 studies testing digital applications for treating depression, dating as far back as 1990 and involving more than 15,000 participants in total, 80% adults and 69.5% women. All of the studies were randomized controlled trials comparing a digital intervention treatment to either an inactive control (e.g., waitlist control or no treatment at all) or an active comparison condition (e.g., treatment as usual or face-to-face psychotherapy) and primarily focused on individuals with mild to moderate depression symptoms.
    Overall, researchers found that digital interventions improved depression symptoms over control conditions, but the effect was not as strong as that found in a similar meta-analysis of face-to-face psychotherapy. There were not enough studies in the current meta-analysis to directly compare digital interventions to face-to-face psychotherapy, and researchers found no studies comparing digital strategies with drug therapy.
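    The pooling step at the heart of a meta-analysis like this one can be sketched in a few lines. The Python illustration below uses a standard DerSimonian-Laird random-effects model with made-up effect sizes; it is not the authors’ analysis, and the numbers are not from the 83 studies:

      import numpy as np

      # Illustrative only: pooling per-study effect sizes (e.g., Hedges' g) with a
      # DerSimonian-Laird random-effects model, the standard approach for combining
      # trial results. The effect sizes and variances below are hypothetical.
      g = np.array([0.45, 0.30, 0.60, 0.25, 0.50])        # per-study effect sizes
      v = np.array([0.020, 0.015, 0.030, 0.010, 0.025])   # per-study variances

      w_fixed = 1.0 / v
      g_fixed = np.sum(w_fixed * g) / np.sum(w_fixed)      # fixed-effect estimate

      # Between-study heterogeneity (tau^2) via the DerSimonian-Laird estimator.
      Q = np.sum(w_fixed * (g - g_fixed) ** 2)
      df = len(g) - 1
      c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
      tau2 = max(0.0, (Q - df) / c)

      # Random-effects weights, pooled effect and its standard error.
      w_re = 1.0 / (v + tau2)
      g_pooled = np.sum(w_re * g) / np.sum(w_re)
      se_pooled = np.sqrt(1.0 / np.sum(w_re))
      print(f"pooled g = {g_pooled:.2f} +/- {1.96 * se_pooled:.2f} (95% CI)")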
    The digital treatments that involved a human component, whether in the form of feedback on assignments or technical assistance, were the most effective in reducing depression symptoms. This may be partially explained by the fact that a human component increased the likelihood that participants would complete the full intervention, and compliance with therapy is linked to better outcomes, according to Moshe.
    One finding that concerned Moshe was that only about half of participants actually completed the full treatment. That number was even lower (25%) in studies conducted in real-world health care settings compared with controlled laboratory experiments. This may help explain why treatments tested in real-world settings were less effective than those tested in laboratories.
    “The COVID-19 pandemic has had a major impact on mental health across the globe. Depression is predicted to be the leading cause of lost life years due to illness by 2030. At the same time, less than 1 in 5 people receive appropriate treatment, and less than 1 in 27 in low-income settings. A major reason for this is the lack of trained health care providers,” he said. “Overall, our findings from effectiveness studies suggest that digital interventions may have a valuable role to play as part of the treatment offering in routine care, especially when accompanied by some sort of human guidance.”

  • 'Human-like' brain helps robot out of a maze

    A maze is a popular device among psychologists to assess the learning capacity of mice or rats. But how about robots? Can they learn to successfully navigate the twists and turns of a labyrinth? Now, researchers at the Eindhoven University of Technology (TU/e) in the Netherlands and the Max Planck Institute for Polymer Research in Mainz, Germany, have proven they can. Their robot bases its decisions on the very system humans use to think and act: the brain. The study, which was published in Science Advances, paves the way to exciting new applications of neuromorphic devices in health and beyond.
    Machine learning and neural networks have become all the rage in recent years, and quite understandably so, considering their many successes in image recognition, medical diagnosis, e-commerce and many other fields. Still, this software-based approach to machine intelligence has its drawbacks, not least because it consumes so much power.
    Mimicking the human brain
    This power issue is one of the reasons researchers have been trying to develop computers that are much more energy efficient. To find a solution, many are taking inspiration from the human brain, a thinking machine unrivalled in its low power consumption thanks to the way it combines memory and processing.
    Neurons in our brain communicate with one another through so-called synapses, which are strengthened each time information flows through them. It is this plasticity that ensures that humans remember and learn.
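    As a loose illustration of that principle, the toy Hebbian-learning sketch below (plain Python, not the organic neuromorphic hardware used in the study) strengthens a connection whenever the units on both ends are active at the same time, which is how repeated use of a path could be reinforced:

      import numpy as np

      # Toy Hebbian plasticity: co-active units strengthen their connection,
      # so frequently used pathways end up with the largest weights.
      rng = np.random.default_rng(1)
      n_units = 4
      w = np.zeros((n_units, n_units))     # "synaptic" weights, initially untrained
      eta = 0.1                            # learning rate

      # Hypothetical activity patterns: each row is one time step,
      # each column one unit (1 = active, 0 = silent).
      activity = rng.integers(0, 2, size=(50, n_units))

      for x in activity:
          # Hebb's rule: dw_ij is proportional to x_i * x_j.
          w += eta * np.outer(x, x)
      np.fill_diagonal(w, 0.0)             # ignore self-connections

      print(np.round(w, 1))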
    “In our research, we have taken this model to develop a robot that is able to learn to move through a labyrinth,” explains Imke Krauhausen, PhD student in the Department of Mechanical Engineering at TU/e and principal author of the paper.

  • Development of a versatile, accurate AI prediction technique even with a small number of experiments

    NIMS, Asahi Kasei, Mitsubishi Chemical, Mitsui Chemicals and Sumitomo Chemical have used the chemical materials open platform framework to develop an AI technique capable of increasing the accuracy of machine learning-based predictions of material properties (e.g., strength, brittleness) through efficient use of material structural data obtained from only a small number of experiments. This technique may expedite the development of various materials, including polymers.
    Materials informatics research exploits machine learning models to predict the physical properties of materials of interest based on compositional and processing parameters (e.g., temperature and pressure). This approach has accelerated materials development. When physical properties of materials are known to be strongly influenced by their post-processing microstructures, the model’s property prediction accuracy can be effectively improved by incorporating microstructure-related data (e.g., x-ray diffraction (XRD) and differential scanning calorimetry (DSC) data) into it. However, these types of data can only be obtained by actually analyzing processed materials. In addition to these analyses, improving prediction accuracy requires predetermined parameters (e.g., material compositions).
    This research group developed an AI technique capable of first selecting potentially promising material candidates for fabrication and then accurately predicting their physical properties using XRD, DSC and other measurement data obtained from only a small number of actually synthesized materials. This technique selects candidate materials using Bayesian optimization and other methods and repeats the AI-based selection process while incorporating measurement data into machine learning models. To verify the technique’s effectiveness, the group used it to predict the physical properties of polyolefins. As a result, this technique was found to improve the material property prediction accuracy of machine learning models with a smaller sample set of actually synthesized materials than methods in which candidate materials were randomly selected.
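    In outline, such a loop alternates between proposing a candidate, measuring it and refitting the model. The sketch below illustrates the idea with a Gaussian-process surrogate and an upper-confidence-bound rule; it is a hypothetical toy example, not the consortium’s implementation, and the toy "property" function stands in for real synthesis plus XRD/DSC measurement:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      # Illustrative active-learning loop (not the consortium's code): a
      # Gaussian-process model proposes the next candidate to synthesize, its
      # "measured" property is added to the data, and the model is refit.
      rng = np.random.default_rng(42)

      def measure_property(x):
          """Hypothetical stand-in for synthesizing and measuring a material."""
          return np.sin(3 * x) + 0.5 * x + rng.normal(0, 0.05)

      candidates = np.linspace(0, 2, 200).reshape(-1, 1)   # candidate descriptors
      X = candidates[rng.choice(len(candidates), 3, replace=False)]
      y = np.array([measure_property(x[0]) for x in X])

      gp = GaussianProcessRegressor(ConstantKernel() * RBF(),
                                    alpha=1e-3, normalize_y=True)

      for _ in range(10):                                  # 10 "experiments"
          gp.fit(X, y)
          mu, sigma = gp.predict(candidates, return_std=True)
          # Upper-confidence-bound acquisition: favour high predicted values
          # but also regions where the model is still uncertain.
          ucb = mu + 1.5 * sigma
          x_next = candidates[np.argmax(ucb)].reshape(1, -1)
          y_next = measure_property(x_next[0, 0])
          X, y = np.vstack([X, x_next]), np.append(y, y_next)

      print(f"best measured property after {len(y)} samples: {y.max():.2f}")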
    The use of this prediction accuracy improvement technique may enable a more thorough understanding of the relationship between materials’ structures and physical properties, which would facilitate investigation of fundamental causes of material properties and the formulation of more efficient materials development guidelines. Furthermore, this technique is expected to be applicable to the development of a wide range of materials in addition to polyolefins and other polymers, thereby promoting digital transformation (DX) in materials development.
    Story Source:
    Materials provided by National Institute for Materials Science, Japan. Note: Content may be edited for style and length.

  • Resolving the puzzles of graphene superconductivity

    A single layer of carbon atoms arranged in a honeycomb lattice makes up the promising nanomaterial called graphene. Research on a setup of three sheets of graphene stacked on top of one another so that their lattices are aligned but shifted — forming rhombohedral trilayer graphene — revealed an unexpected state of superconductivity. In this state electrical resistance vanishes due to the quantum nature of the electrons. The discovery was published and debated in Nature, but its origins remained elusive. Now, Professor Maksym Serbyn and Postdoc Areg Ghazaryan from the Institute of Science and Technology (IST) Austria, in collaboration with Professor Erez Berg and Postdoc Tobias Holder from the Weizmann Institute of Science, Israel, have developed a theoretical framework of unconventional superconductivity that resolves the puzzles posed by the experimental data.
    The Puzzles and their Resolution
    Superconductivity relies on the pairing of free electrons in the material, despite the repulsion arising from their equal negative charges. In conventional superconductors, this pairing happens between electrons of opposite spin through vibrations of the crystal lattice. Spin is a quantum property of particles comparable, but not identical, to rotation. “Applied to trilayer graphene,” co-lead-author Ghazaryan points out, “we identified two puzzles that seem difficult to reconcile with conventional superconductivity.”
    First, above a threshold temperature of roughly -260 °C, electrical resistance should rise in equal steps with increasing temperature. However, in the experiments it remained constant up to -250 °C. Second, pairing between electrons of opposite spin implies a coupling that contradicts another experimentally observed feature, namely the presence of a nearby configuration with fully aligned spins, which we know as magnetism. “In the paper, we show that both observations are explainable,” group leader Maksym Serbyn summarizes, “if one assumes that an interaction between electrons provides the ‘glue’ that holds electrons together. This leads to unconventional superconductivity.”
    When one draws all possible states that electrons can have on a chart and then separates the occupied ones from the unoccupied ones with a line, this separation line is called a Fermi surface. Experimental data from graphene shows two Fermi surfaces, creating a ring-like shape. In their work, the researchers draw on a theory by Kohn and Luttinger from the 1960s and demonstrate that such circular Fermi surfaces favor a mechanism for superconductivity based only on electron interactions. They also suggest experimental setups to test their argument and offer routes towards raising the critical temperature below which superconductivity appears.
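    In standard textbook notation (this is a general definition, not a formula from the paper), a state with momentum $\mathbf{k}$ and energy $\epsilon(\mathbf{k})$ is occupied at zero temperature when it lies below the chemical potential $\mu$:

      $$ n(\mathbf{k}) = \Theta\bigl(\mu - \epsilon(\mathbf{k})\bigr), \qquad \text{Fermi surface: } \epsilon(\mathbf{k}) = \mu $$

    Here $\Theta$ is the unit step function, so the Fermi surface is exactly the boundary in momentum space between filled and empty states; in rhombohedral trilayer graphene that boundary consists of two such lines, giving the ring-like shape mentioned above.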
    The Benefits of Graphene Superconductivity
    While superconductivity has been observed in other bilayer and trilayer graphene structures, those materials must be specifically engineered and may be hard to control because of their low stability. Rhombohedral trilayer graphene, although rare, is naturally occurring. The proposed theoretical solution has the potential to shed light on long-standing problems in condensed matter physics and to open the way to potential applications of both superconductivity and graphene.
    Story Source:
    Materials provided by Institute of Science and Technology Austria. Note: Content may be edited for style and length.

  • AI models microprocessor performance in real-time

    Computer engineers at Duke University have developed a new AI method for accurately predicting the power consumption of any type of computer processor more than a trillion times per second while barely using any computational power itself. Dubbed APOLLO, the technique has been validated on real-world, high-performance microprocessors and could help improve the efficiency and inform the development of new microprocessors.
    The approach is detailed in a paper published at MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture, one of the top-tier conferences in computer architecture, where it was selected as the conference’s best publication.
    “This is an intensively studied problem that has traditionally relied on extra circuitry to address,” said Zhiyao Xie, first author of the paper and a PhD candidate in the laboratory of Yiran Chen, professor of electrical and computer engineering at Duke. “But our approach runs directly on the microprocessor in the background, which opens many new opportunities. I think that’s why people are excited about it.”
    In modern computer processors, cycles of computations are made on the order of 3 trillion times per second. Keeping track of the power consumed by such intensely fast transitions is important to maintain the entire chip’s performance and efficiency. If a processor draws too much power, it can overheat and cause damage. Sudden swings in power demand can cause internal electromagnetic complications that can slow the entire processor down.
    By implementing software that can predict and stop these undesirable extremes from happening, computer engineers can protect their hardware and increase its performance. But such schemes come at a cost. Keeping pace with modern microprocessors typically requires precious extra hardware and computational power.
    “APOLLO approaches an ideal power estimation algorithm that is both accurate and fast and can easily be built into a processing core at a low power cost,” Xie said. “And because it can be used in any type of processing unit, it could become a common component in future chip design.”
    The secret to APOLLO’s power comes from artificial intelligence. The algorithm developed by Xie and Chen uses AI to identify and select just 100 of a processor’s millions of signals that correlate most closely with its power consumption. It then builds a power consumption model from those 100 signals and monitors them to predict the entire chip’s performance in real time.
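    The general idea can be sketched in a few lines of Python. This is a deliberately simplified illustration, not the APOLLO algorithm from the paper: it keeps the signals most correlated with a synthetic power trace and fits a small linear model on just those signals.

      import numpy as np

      # Simplified illustration (not APOLLO itself): from many candidate per-cycle
      # signals, keep the k most correlated with measured power, then fit a
      # lightweight linear model on those signals only. All data here is synthetic.
      rng = np.random.default_rng(7)
      n_cycles, n_signals, k = 5000, 1000, 100

      signals = rng.integers(0, 2, size=(n_cycles, n_signals)).astype(float)
      true_w = np.zeros(n_signals)
      true_w[rng.choice(n_signals, 150, replace=False)] = rng.uniform(0.5, 2.0, 150)
      power = signals @ true_w + rng.normal(0, 0.5, n_cycles)   # "measured" power

      # 1) Rank signals by |correlation| with power and keep the top k.
      centered = signals - signals.mean(axis=0)
      corr = (centered * (power - power.mean())[:, None]).sum(axis=0)
      corr /= (centered.std(axis=0) * power.std() * n_cycles + 1e-12)
      top_k = np.argsort(np.abs(corr))[-k:]

      # 2) Fit a small linear power model on the selected signals only.
      X = np.hstack([signals[:, top_k], np.ones((n_cycles, 1))])  # add bias term
      coef, *_ = np.linalg.lstsq(X, power, rcond=None)
      pred = X @ coef
      r2 = 1 - np.var(power - pred) / np.var(power)
      print(f"R^2 using {k} of {n_signals} signals: {r2:.3f}")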
    Because this learning process is autonomous and data driven, it can be implemented on most any computer processor architecture — even those that have yet to be invented. And while it doesn’t require any human designer expertise to do its job, the algorithm could help human designers do theirs.
    “After the AI selects its 100 signals, you can look at the algorithm and see what they are,” Xie said. “A lot of the selections make intuitive sense, but even if they don’t, they can provide feedback to designers by informing them which processes are most strongly correlated with power consumption and performance.”
    The work is part of a collaboration with Arm Research, a computer engineering research organization that aims to analyze the disruptions impacting industry and create advanced solutions, many years ahead of deployment. With the help of Arm Research, APOLLO has already been validated on some of today’s highest performing processors. But according to the researchers, the algorithm still needs testing and comprehensive evaluations on many more platforms before it would be adopted by commercial computer manufacturers.
    “Arm Research works with and receives funding from some of the biggest names in the industry, like Intel and IBM, and predicting power consumption is one of their major priorities,” Chen added. “Projects like this offer our students an opportunity to work with these industry leaders, and these are the types of results that make them want to continue working with and hiring Duke graduates.”
    This work was conducted under the high-performance AClass CPU research program at Arm Research and was partially supported by the National Science Foundation (NSF-2106828, NSF-2112562) and the Semiconductor Research Corporation (SRC).
    Story Source:
    Materials provided by Duke University. Original written by Ken Kingery. Note: Content may be edited for style and length.

  • Doctoral student finds alternative cell option for organs-on-chips

    Organ-on-a-chip technology has provided a push to discover new drugs for a variety of rare and neglected diseases for which current models either don’t exist or lack precision. In particular, these platforms can incorporate a patient’s own cells, resulting in patient-specific discovery.
    As an example, even though sickle cell disease was first described in the early 1900s, the range of severity in the disease causes challenges when trying to treat patients. Since this disease is most prevalent among economically poor and underrepresented minorities, there has been a general lack of stimulus to discover new treatment strategies due to socioeconomic inequity, making it one of the most serious orphan conditions globally.
    Tanmay Mathur, doctoral student in Dr. Abhishek Jain’s lab in the Department of Biomedical Engineering at Texas A&M University, is developing personalized blood vessels to improve knowledge and derive treatments against the vascular dysfunction seen in sickle cell disease and other rare diseases of the blood and vessels.
    Current blood vessel models use induced pluripotent stem cells (IPSCs), which are derived from a patient’s endothelial cells. However, Mathur said these cells have limitations — they expire quickly and can’t be stored for long periods of time.
    Mathur’s research offers an alternative — blood outgrowth endothelial cells (BOECs), which can be isolated from a patient’s blood. All that is needed is 50 to 100 milliliters of blood.
    “The equipment and the reagents involved are also very cheap and available in most clinical settings,” Mathur said. “These cells are progenitor endothelial cells, meaning they have high proliferation, so if you keep giving them the food they want, within a month, we will have enough cells so that we can successfully keep on subculturing them forever.”
    However, the question is whether BOECs work like IPSCs in the context of organs-on-chips, the microdevices that allow researchers to create these blood vessel models. That’s a question Mathur recently answered in a paper published in the Journal of the American Heart Association.

  • Real-world study shows the potential of gait authentication to enhance smartphone security

    Real-world tests have shown that gait authentication could be a viable means of protecting smartphones and other mobile devices from cyber crime, according to new research.
    A study led by the University of Plymouth asked smartphone users to go about their daily activities while motion sensors within their mobile devices captured data about their stride patterns.
    The results showed the system was on average around 85% accurate in recognising an individual’s gait, with that figure rising to almost 90% when individuals were walking normally or walking fast.
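    As a rough illustration of how such a system can work (this is not the Plymouth team’s pipeline, and the data below are synthetic), one can segment accelerometer readings into windows, compute a few simple features per window, and train a classifier to recognise the walker:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      # Illustrative gait-recognition sketch with synthetic accelerometer data:
      # each user gets a slightly different cadence and step amplitude, and a
      # classifier learns to tell users apart from per-window features.
      rng = np.random.default_rng(3)
      fs, window_s, n_users, windows_per_user = 50, 2, 5, 200  # 50 Hz, 2 s windows

      def fake_walk(user, n):
          """Synthetic acceleration magnitude with user-specific gait parameters."""
          t = np.arange(n) / fs
          cadence = 1.6 + 0.1 * user        # steps per second differ per user
          amp = 1.0 + 0.2 * user
          return 9.8 + amp * np.sin(2 * np.pi * cadence * t) + rng.normal(0, 0.3, n)

      X, y = [], []
      for user in range(n_users):
          for _ in range(windows_per_user):
              w = fake_walk(user, fs * window_s)
              # Per-window features: mean, std, min, max, dominant frequency bin.
              spectrum = np.abs(np.fft.rfft(w - w.mean()))
              X.append([w.mean(), w.std(), w.min(), w.max(), spectrum.argmax()])
              y.append(user)

      X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y),
                                                test_size=0.3, random_state=0)
      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
      print(f"user-recognition accuracy on held-out windows: {clf.score(X_te, y_te):.2f}")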
    There are currently more than 6.3 billion smartphone users around the world, who use their devices to access a wide range of services and to store sensitive and confidential information.
    While authentication mechanisms — such as passwords, PINs and biometrics — exist, studies have shown the level of security and usability of such approaches varies considerably.
    Writing in Computers & Security, the researchers say the study illustrates that — within an appropriate framework — gait recognition could be a viable technique for protecting individuals and their data from potential crime.