More stories

  • New brain learning mechanism calls for revision of long-held neuroscience hypothesis

    The brain is a complex network containing billions of neurons. Each of these neurons communicates simultaneously with thousands of others via their synapses (links), and collects incoming signals through several extremely long, branched “arms,” called dendritic trees.
    For the last 70 years a core hypothesis of neuroscience has been that brain learning occurs by modifying the strength of the synapses, following the relative firing activity of their connecting neurons. This hypothesis has been the basis for machine and deep learning algorithms which increasingly affect almost all aspects of our lives. But after seven decades, this long-lasting hypothesis has now been called into question.
    In an article published today in Scientific Reports, researchers from Bar-Ilan University in Israel report that the brain learns in a completely different way than has been assumed since the 20th century. The new experimental observations suggest that learning is performed mainly in neuronal dendritic trees, where the trunk and branches of the tree modify their strength, rather than solely in the strength of the synapses (the dendritic “leaves”), as was previously thought. These observations also indicate that the neuron is actually a far more complex, dynamic and computational element than a binary unit that either fires or does not. A single neuron can realize deep learning algorithms that previously required a complex artificial network consisting of thousands of connected neurons and synapses.
    “We’ve shown that efficient learning on dendritic trees of a single neuron can artificially achieve success rates approaching unity for handwritten digit recognition. This finding paves the way for an efficient biologically inspired new type of AI hardware and algorithms,” said Prof. Ido Kanter, of Bar-Ilan’s Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the research. “This simplified learning mechanism represents a step towards a plausible biological realization of backpropagation algorithms, which are currently the central technique in AI,” added Shiri Hodassman, a PhD student and one of the key contributors to this work.
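    To make the idea concrete, here is a minimal, hypothetical sketch of what “learning on a dendritic tree” could look like in code: a single output unit whose branches are fixed nonlinear subunits, with only the branch strengths (and a bias) being trained. The data, the branch wiring and all dimensions below are our own stand-ins for illustration, not the architecture or dataset used in the study.

```python
# Toy sketch (our own illustration, not the Bar-Ilan model): a single unit whose
# dendritic branches are fixed nonlinear subunits; only branch strengths are learned.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a small classification task (inputs and labels are made up).
n_samples, n_inputs, n_branches = 1000, 64, 16
X = rng.normal(size=(n_samples, n_inputs))
y = (X[:, :8].sum(axis=1) > 0).astype(float)

# Each branch receives a random subset of inputs through fixed "synaptic" weights.
subsets = np.array_split(rng.permutation(n_inputs), n_branches)
syn = [rng.normal(size=len(s)) for s in subsets]

def branch_activity(X):
    # Nonlinear response of each dendritic branch; shape (n_samples, n_branches).
    return np.tanh(np.column_stack([X[:, s] @ w for s, w in zip(subsets, syn)]))

H = branch_activity(X)
branch_strength = np.zeros(n_branches)   # the quantities being "learned"
bias = 0.0
lr = 0.5

for _ in range(500):                     # plain gradient descent on the logistic loss
    p = 1.0 / (1.0 + np.exp(-(H @ branch_strength + bias)))
    branch_strength -= lr * H.T @ (p - y) / n_samples
    bias -= lr * (p - y).mean()

accuracy = ((H @ branch_strength + bias > 0) == (y == 1)).mean()
print(f"training accuracy of the single dendritic unit: {accuracy:.2f}")
```

    Even in this stripped-down form, all of the learning is concentrated in a handful of branch strengths rather than in thousands of individual synaptic weights, which is the flavor of mechanism the study points to.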
    The efficient learning on dendritic trees is based on Kanter and his research team’s experimental evidence for sub-dendritic adaptation using neuronal cultures, together with other anisotropic properties of neurons, like different spike waveforms, refractory periods and maximal transmission rates.
    The brain’s clock rate is a billion times slower than that of existing parallel GPUs, yet the brain achieves comparable success rates in many perceptual tasks.
    The new demonstration of efficient learning on dendritic trees calls for new approaches in brain research, as well as for the generation of counterpart hardware aiming to implement advanced AI algorithms. If one can implement slow brain dynamics on ultrafast computers, the sky is the limit.
    Story Source:
    Materials provided by Bar-Ilan University. Note: Content may be edited for style and length.

  • Ionic liquid-based reservoir computing: The key to efficient and flexible edge computing

    Physical reservoir computing (PRC), which relies on the transient response of physical systems, is an attractive machine learning framework that can perform high-speed processing of time-series signals at low power. However, PRC systems have low tunability, limiting the range of signals they can process. Now, researchers from Japan present ionic liquids as an easily tunable physical reservoir device that can be optimized to process signals over a broad range of timescales simply by changing their viscosity.
    Artificial Intelligence (AI) is fast becoming ubiquitous in modern society and will see broader implementation in the coming years. In applications involving sensors and internet-of-things devices, the norm is often edge AI, a technology in which the computing and analysis are performed close to the user (where the data is collected) rather than far away on a centralized server. This is because edge AI has low power requirements as well as high-speed data processing capabilities, traits that are particularly desirable for processing time-series data in real time.
    In this regard, physical reservoir computing (PRC), which relies on the transient dynamics of physical systems, can greatly simplify the computing paradigm of edge AI. This is because PRC can store and transform analog signals into a form that edge AI can efficiently work with and analyze. However, the dynamics of solid PRC systems are characterized by specific timescales that are not easily tunable and are usually too fast for most physical signals. This mismatch in timescales and the low controllability of these systems make PRC largely unsuitable for real-time processing of signals in living environments.
    To address this issue, a research team from Japan involving Professor Kentaro Kinoshita and Sang-Gyu Koh, a PhD student, from the Tokyo University of Science, and senior researchers Dr. Hiroyuki Akinaga, Dr. Hisashi Shima, and Dr. Yasuhisa Naitoh from the National Institute of Advanced Industrial Science and Technology, proposed, in a new study published in Scientific Reports, the use of liquid PRC systems instead. “Replacing conventional solid reservoirs with liquid ones should lead to AI devices that can directly learn at the time scales of environmentally generated signals, such as voice and vibrations, in real time,” explains Prof. Kinoshita. “Ionic liquids are stable molten salts that are made up entirely of free-roaming electrical charges. The dielectric relaxation of the ionic liquid, or how its charges rearrange in response to an electric signal, could be used as a reservoir and holds much promise for edge AI physical computing.”
    In their study, the team designed a PRC system with an ionic liquid (IL) of an organic salt, 1-alkyl-3-methylimidazolium bis(trifluoromethane sulfonyl)imide ([Rmim+][TFSI-], where R = ethyl (e), butyl (b), hexyl (h), or octyl (o)), whose cationic part (the positively charged ion) can be easily varied with the length of a chosen alkyl chain. They fabricated gold gap electrodes and filled the gaps with the IL. “We found that the timescale of the reservoir, while complex in nature, can be directly controlled by the viscosity of the IL, which depends on the length of the cationic alkyl chain. Changing the alkyl group in organic salts is easy to do, and presents us with a controllable, designable system for a range of signal lifetimes, allowing a broad range of computing applications in the future,” says Prof. Kinoshita. By adjusting the alkyl chain length between 2 and 8 units, the researchers achieved characteristic response times ranging between 1 and 20 ms, with longer alkyl side chains leading to longer response times and tunable AI learning performance of the devices.
    The tunability of the system was demonstrated using an AI image identification task. The AI was presented with a handwritten image as input, represented by rectangular voltage pulses of 1 ms width. By increasing the side chain length, the team made the transient dynamics approach those of the target signal, with the discrimination rate improving for longer chain lengths. This is because, compared to [emim+][TFSI-], in which the current relaxed to its final value in about 1 ms, an IL with a longer side chain and, in turn, a longer relaxation time retained the history of the time-series data better, improving identification accuracy. When the longest side chain of 8 units was used, the discrimination rate reached a peak value of 90.2%.
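    As a rough numerical caricature of this timescale-matching argument (our own simplification, not a model of the fabricated device), the sketch below treats the ionic-liquid reservoir as a single first-order relaxation whose time constant tau stands in for the viscosity-dependent relaxation time, drives it with 1 ms rectangular pulses, and trains a linear readout to reproduce a short moving average of the input. Of the three settings tried, the relaxation time comparable to the few-millisecond structure of the target gives the lowest error.

```python
# Minimal reservoir-computing caricature (ours, not the paper's device model):
# a leaky integrator with time constant tau plays the role of the ionic liquid.
import numpy as np

rng = np.random.default_rng(1)
dt, pulse_width, n_bits = 0.05e-3, 1e-3, 2000   # 0.05 ms step, 1 ms input pulses
steps = round(pulse_width / dt)

bits = rng.integers(0, 2, size=n_bits).astype(float)
u = np.repeat(bits, steps)                       # rectangular voltage pulse train
# Target: causal moving average of the current and two previous pulses.
target = np.repeat(np.convolve(bits, np.ones(3) / 3)[:n_bits], steps)

for tau in (0.2e-3, 3e-3, 50e-3):                # short / matched / long relaxation
    x, states = 0.0, np.empty_like(u)
    for t, u_t in enumerate(u):
        x += dt / tau * (u_t - x)                # dx/dt = (u - x) / tau
        states[t] = x
    # Linear readout (scale + offset) fitted on the first half, tested on the second.
    half = len(u) // 2
    A = np.column_stack([states, np.ones_like(states)])
    w, *_ = np.linalg.lstsq(A[:half], target[:half], rcond=None)
    rmse = np.sqrt(np.mean((A[half:] @ w - target[half:]) ** 2))
    print(f"tau = {tau * 1e3:5.2f} ms  ->  test RMSE {rmse:.3f}")
```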
    These findings are encouraging as they clearly show that the proposed PRC system based on the dielectric relaxation at an electrode-ionic liquid interface can be suitably tuned according to the input signals by simply changing the IL’s viscosity. This could pave the way for edge AI devices that can accurately learn the various signals produced in the living environment in real time.
    Story Source:
    Materials provided by Tokyo University of Science. Note: Content may be edited for style and length.

  • In Einstein's footsteps and beyond

    In physics, as in life, it’s always good to look at things from different perspectives.
    Since the beginning of quantum physics, how light moves and interacts with matter around it has mostly been described and understood mathematically through the lens of its energy. In 1900, Max Planck used energy to explain how light is emitted by heated objects, a seminal study in the foundation of quantum mechanics. In 1905, Albert Einstein used energy when he introduced the concept of the photon.
    But light has another, equally important quality known as momentum. And, as it turns out, when you take momentum away, light starts behaving in really interesting ways.
    An international team of physicists led by Michaël Lobet, a research associate at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), and Eric Mazur, the Balkanski Professor of Physics and Applied Physics at SEAS, is re-examining the foundations of quantum physics from the perspective of momentum and exploring what happens when the momentum of light is reduced to zero.
    The research is published in Light: Science & Applications, a Nature Portfolio journal.
    Any object with mass and velocity has momentum — from atoms to bullets to asteroids — and momentum can be transferred from one object to another. A gun recoils when a bullet is fired because the momentum of the bullet is transferred to the gun. At the microscopic scale, an atom recoils when it emits light because of the momentum acquired by the photon. Atomic recoil, first described by Einstein when he was writing the quantum theory of radiation, is a fundamental phenomenon which governs light emission.
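    As a back-of-the-envelope illustration of that recoil (our numbers, not taken from the paper), a photon of wavelength λ carries momentum p = h/λ, and momentum conservation sets the recoil velocity of the emitting atom:

```latex
% Illustrative numbers (ours): photon momentum and atomic recoil for Rb-87 at 780 nm
p_{\text{photon}} = \frac{h}{\lambda}
  \approx \frac{6.63\times10^{-34}\ \mathrm{J\,s}}{780\times10^{-9}\ \mathrm{m}}
  \approx 8.5\times10^{-28}\ \mathrm{kg\,m/s},
\qquad
v_{\text{recoil}} = \frac{p_{\text{photon}}}{m_{\text{Rb}}}
  \approx \frac{8.5\times10^{-28}}{1.44\times10^{-25}\ \mathrm{kg}}
  \approx 6\ \mathrm{mm/s}.
```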

  • New device: Tallest height of any known jumper, engineered or biological

    A mechanical jumper developed by UC Santa Barbara engineering professor Elliot Hawkes and collaborators is capable of achieving the tallest height — roughly 100 feet (30 meters) — of any jumper to date, engineered or biological. The feat represents a fresh approach to the design of jumping devices and advances the understanding of jumping as a form of locomotion.
    “The motivation came from a scientific question,” said Hawkes, who as a roboticist seeks to understand the many possible methods for a machine to be able to navigate its environment. “We wanted to understand what the limits were on engineered jumpers.” While there are centuries’ worth of studies on biological jumpers (that would be us in the animal kingdom), and decades’ worth of research on mostly bio-inspired mechanical jumpers, he said, the two lines of inquiry have been kept somewhat separate.
    “There hadn’t really been a study that compares and contrasts the two and how their limits are different — whether engineered jumpers are really limited to the same laws that biological jumpers are,” Hawkes said.
    Their research is published in the journal Nature.
    Big Spring, Tiny Motor
    Biological systems have long served as the first and best models for locomotion, and that has been especially true for jumping, defined by the researchers as a “movement created by forces applied to the ground by the jumper, while maintaining a constant mass.” Many engineered jumpers have focused on duplicating the designs provided by evolution, and to great effect.
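    For a sense of scale (a back-of-the-envelope estimate of ours, assuming a hypothetical 30-gram device and neglecting air drag), reaching a 30-meter apex fixes the minimum launch energy and launch speed:

```latex
% Minimum launch energy and speed for h = 30 m, neglecting drag (hypothetical m = 30 g)
E \ge m g h = 0.03\ \mathrm{kg} \times 9.8\ \mathrm{m/s^2} \times 30\ \mathrm{m} \approx 8.8\ \mathrm{J},
\qquad
v_{\text{launch}} = \sqrt{2 g h} \approx 24\ \mathrm{m/s}.
```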

  • Plug-and-play organ-on-a-chip can be customized to the patient

    Engineered tissues have become a critical component for modeling diseases and testing the efficacy and safety of drugs in a human context. A major challenge for researchers has been how to model body functions and systemic diseases with multiple engineered tissues that can physiologically communicate — just like they do in the body. However, it is essential to provide each engineered tissue with its own environment so that the specific tissue phenotypes can be maintained for weeks to months, as required for biological and biomedical studies. Making the challenge even more complex is the necessity of linking the tissue modules together to facilitate their physiological communication, which is required for modeling conditions that involve more than one organ system, without sacrificing the individual engineered tissue environments.
    Novel plug-and-play multi-organ chip, customized to the patient
    Up to now, no one has been able to meet both conditions. Today, a team of researchers from Columbia Engineering and Columbia University Irving Medical Center reports that it has developed a model of human physiology in the form of a multi-organ chip consisting of engineered human heart, bone, liver, and skin tissues that are linked by vascular flow with circulating immune cells, allowing recapitulation of interdependent organ functions. The researchers have essentially created a plug-and-play multi-organ chip, the size of a microscope slide, that can be customized to the patient. Because disease progression and responses to treatment vary greatly from one person to another, such a chip will eventually enable personalized optimization of therapy for each patient. The study is the cover story of the April 2022 issue of Nature Biomedical Engineering.
    “This is a huge achievement for us — we’ve spent ten years running hundreds of experiments, exploring innumerable great ideas, and building many prototypes, and now at last we’ve developed this platform that successfully captures the biology of organ interactions in the body,” said the project leader Gordana Vunjak-Novakovic, University Professor and the Mikati Foundation Professor of Biomedical Engineering, Medical Sciences, and Dental Medicine.
    Inspired by the human body
    Taking inspiration from how the human body works, the team has built a human tissue-chip system in which they linked matured heart, liver, bone, and skin tissue modules by recirculating vascular flow, allowing the interdependent organs to communicate just as they do in the human body. The researchers chose these tissues because they have distinctly different embryonic origins and structural and functional properties, and because they are adversely affected by cancer treatment drugs, presenting a rigorous test of the proposed approach.

  • Physicists embark on a hunt for a long-sought quantum glow

    For “Star Wars” fans, the streaking stars seen from the cockpit of the Millennium Falcon as it jumps to hyperspace are a canonical image. But what would a pilot actually see if she could accelerate in an instant through the vacuum of space? According to a prediction known as the Unruh effect, she would more likely see a warm glow.
    Since the 1970s when it was first proposed, the Unruh effect has eluded detection, mainly because the probability of seeing the effect is infinitesimally small, requiring either enormous accelerations or vast amounts of observation time. But researchers at MIT and the University of Waterloo believe they have found a way to significantly increase the probability of observing the Unruh effect, which they detail in a study appearing in Physical Review Letters.
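    The scale of the challenge follows from the standard Unruh temperature relation (a textbook formula, quoted here only for context, not a result of the new paper):

```latex
% Textbook Unruh temperature, added for scale
T_{\text{Unruh}} = \frac{\hbar a}{2\pi c k_B}
\qquad\Rightarrow\qquad
a \approx 2.5\times10^{20}\ \mathrm{m/s^2}\ \text{per kelvin of apparent glow}.
```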
    Rather than observe the effect spontaneously as others have attempted in the past, the team proposes stimulating the phenomenon, in a very particular way that enhances the Unruh effect while suppressing other competing effects. The researchers liken their idea to throwing an invisibility cloak over other conventional phenomena, which should then reveal the much less obvious Unruh effect.
    If it can be realized in a practical experiment, this new stimulated approach, with an added layer of invisibility (or “acceleration-induced transparency,” as described in the paper) could vastly increase the probability of observing the Unruh effect. Instead of waiting longer than the age of the universe for an accelerating particle to produce a warm glow as the Unruh effect predicts, the team’s approach would shave that wait time down to a few hours.
    “Now at least we know there is a chance in our lifetimes where we might actually see this effect,” says study co-author Vivishek Sudhir, assistant professor of mechanical engineering at MIT, who is designing an experiment to catch the effect based on the group’s theory. “It’s a hard experiment, and there’s no guarantee that we’d be able to do it, but this idea is our nearest hope.”
    The study’s co-authors also include Barbara Šoda and Achim Kempf of the University of Waterloo.

  • AI may detect earliest signs of pancreatic cancer

    An artificial intelligence (AI) tool developed by Cedars-Sinai investigators accurately predicted who would develop pancreatic cancer based on what their CT scan images looked like years prior to being diagnosed with the disease. The findings, which may help prevent death through early detection of one of the most challenging cancers to treat, are published in the journal Cancer Biomarkers.
    “This AI tool was able to capture and quantify very subtle, early signs of pancreatic ductal adenocarcinoma in CT scans years before occurrence of the disease. These are signs that the human eye would never be able to discern,” said Debiao Li, PhD, director of the Biomedical Imaging Research Institute, professor of Biomedical Sciences and Imaging at Cedars-Sinai, and senior and corresponding author of the study. Li is also the Karl Storz Chair in Minimally Invasive Surgery in Honor of George Berci, MD.
    Pancreatic ductal adenocarcinoma is not only the most common type of pancreatic cancer, but it’s also the most deadly. Less than 10% of people diagnosed with the disease live more than five years after being diagnosed or starting treatment. But recent studies have reported that finding the cancer early can increase survival rates by as much as 50%. There currently is no easy way to find pancreatic cancer early, however.
    People with this type of cancer may experience symptoms such as general abdominal pain or unexplained weight loss, but these symptoms are often ignored or overlooked as signs of the cancer since they are common in many health conditions.
    “There are no unique symptoms that can provide an early diagnosis for pancreatic ductal adenocarcinoma,” said Stephen J. Pandol, MD, director of Basic and Translational Pancreas Research and program director of the Gastroenterology Fellowship Program at Cedars-Sinai, and another author of the study. “This AI tool may eventually be used to detect early disease in people undergoing CT scans for abdominal pain or other issues.”
    The investigators reviewed electronic medical records to identify people who were diagnosed with the cancer within the last 15 years and who underwent CT scans six months to three years prior to their diagnosis. These CT images were considered normal at the time they were taken. The team identified 36 patients who met these criteria, the majority of whom had CT scans done in the ER because of abdominal pain.
    The AI tool was trained to analyze these pre-diagnostic CT images from people with pancreatic cancer and compare them with CT images from 36 people who didn’t develop the cancer. The investigators reported that the model was 86% accurate in identifying people who would eventually be found to have pancreatic cancer and those who would not develop the cancer.
    The AI model picked up on variations on the surface of the pancreas between people with cancer and healthy controls. These textural differences could be the result of molecular changes that occur during the development of pancreatic cancer.
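    For readers curious what such a pipeline typically looks like, the sketch below is a generic radiomics-style texture classifier evaluated with cross-validation. The “features” are random placeholders standing in for real texture measurements over a segmented pancreas region, and the model is only an illustration of the general approach, not the one developed at Cedars-Sinai.

```python
# Generic radiomics-style sketch (not the Cedars-Sinai model): texture features
# from a segmented pancreas region would be classified and cross-validated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_cases, n_controls, n_features = 36, 36, 20              # cohort sizes as in the article
X = rng.normal(size=(n_cases + n_controls, n_features))   # placeholder "texture features"
y = np.r_[np.ones(n_cases), np.zeros(n_controls)]
X[y == 1, :5] += 0.8                                      # pretend a few features differ with disease

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)                 # small cohorts make cross-validation essential
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```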
    “Our hope is this tool could catch the cancer early enough to make it possible for more people to have their tumor completely removed through surgery,” said Touseef Ahmad Qureshi, PhD, a scientist at Cedars-Sinai and first author of the study.
    The investigators are currently collecting data from thousands of patients at healthcare sites throughout the U.S. to continue to study the AI tool’s prediction capability.
    Funding: The study was funded by the Board of Counselors of Cedars-Sinai Medical Center, the Cedars-Sinai Samuel Oschin Comprehensive Cancer Institute and the National Institutes of Health under award number R01 CA260955.
    Story Source:
    Materials provided by Cedars-Sinai Medical Center. Note: Content may be edited for style and length.

  • COVID-19 lockdown measures affect air pollution from cities differently

    The COVID-19 pandemic and the public response to it created large shifts in how people travel. In some areas, the resulting changes in travel appear to have had little effect on air pollution, and some cities have seen worse air quality than ever.
    In Chaos, by AIP Publishing, researchers in China created a network model drawn from the traffic index and air quality index of 21 cities across six regions in their country to quantify how traffic emissions from one city affect another. They saw the COVID-19 lockdowns as a rare research opportunity and wanted to leverage the lockdown data to better explain the relationship between traffic and air pollution.
    “Air pollution is a typical ‘commons governance’ issue,” said author Jingfang Fan. “The impact of the pandemic has led cities to implement different traffic restriction policies, one after another, which naturally forms a controlled experiment to reveal their relationship.”
    To address this question, they turned to a weighted climate network framework, modeling each city as a node using pre-pandemic data from 2019 and data from 2020. They added a two-layer network that incorporated different regions, lockdown stages, and outbreak levels.
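    The sketch below illustrates, in a deliberately simplified way, how a weighted city network of this kind can be built from time series: each city is a node, and each directed edge weight is the strongest lagged correlation between one city’s traffic index and another city’s air quality index. The city list, the synthetic data and the correlation measure are our own placeholders, not the construction used in the Chaos paper.

```python
# Simplified illustration (ours, not the paper's method): weighted, directed
# city network from lagged correlations between traffic and air-quality series.
import numpy as np

rng = np.random.default_rng(0)
cities = ["Beijing", "Tianjin", "Shijiazhuang", "Shanghai", "Chengdu", "Chongqing"]
n_days = 120

traffic = {c: rng.normal(size=n_days) for c in cities}
aqi = {c: rng.normal(size=n_days) for c in cities}
aqi["Tianjin"][2:] += 0.6 * traffic["Beijing"][:-2]   # inject a fake two-day-lag influence

def max_lagged_corr(x, y, max_lag=5):
    """Largest-magnitude correlation between x(t) and y(t + lag) over non-negative lags."""
    best = 0.0
    for lag in range(max_lag + 1):
        a, b = (x, y) if lag == 0 else (x[:-lag], y[lag:])
        r = np.corrcoef(a, b)[0, 1]
        if abs(r) > abs(best):
            best = r
    return best

# W[i, j] = inferred influence of city i's traffic on city j's air quality.
W = np.array([[0.0 if i == j else max_lagged_corr(traffic[ci], aqi[cj])
               for j, cj in enumerate(cities)]
              for i, ci in enumerate(cities)])

i, j = np.unravel_index(np.abs(W).argmax(), W.shape)
print(f"strongest inferred link: {cities[i]} -> {cities[j]}, weight {W[i, j]:.2f}")
```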
    Surrounding traffic conditions influenced air quality in Beijing-Tianjin-Hebei, the Chengdu-Chongqing Economic Circle, and central China after the outbreak. Pollution tended to peak in cities as they made initial progress in containing the virus.
    During this time, pollution in Beijing-Tianjin-Hebei and central China lessened over time. Beijing-Tianjin-Hebei, however, saw another spike as control measures for outbound traffic from Wuhan and Hubei were lifted.
    “Air pollution in big cities, such as Beijing and Shanghai, is more affected by other cities,” said author Saini Yang. “This is contrary to what we generally think, that air pollution in big cities is mainly caused by their own conditions, including traffic congestion.”
    Author Weiping Wang hopes the team’s work inspires other interdisciplinary teams to find new ways of approaching problems in environmental science. They will look to improve their model with a higher degree of detail for traffic emissions.
    “Our discovery is that in order to reduce air pollution, it is not only necessary to reduce a city’s own traffic and increase green travel, but also to have the joint efforts of surrounding cities,” said author Na Ying. “Everyone is important in the governance of the commons.”
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.