Our brains are incredibly adaptive. Every day, we form new memories, acquire new knowledge, or refine existing skills. This stands in marked contrast to our current computers, which typically only perform pre-programmed actions. At the core of our adaptability lies synaptic plasticity. Synapses are the connection points between neurons; they can change in different ways depending on how they are used. This synaptic plasticity is an important research topic in neuroscience, as it is central to learning processes and memory. To better understand these brain processes and to build adaptive machines, researchers in the fields of neuroscience and artificial intelligence (AI) are creating models of the mechanisms underlying them. Such models of learning and plasticity help us understand biological information processing and should also enable machines to learn faster.
Algorithms mimic biological evolution
Working in the European Human Brain Project, researchers at the Institute of Physiology at the University of Bern have now developed a new approach based on so-called evolutionary algorithms. These computer programs search for solutions to problems by mimicking processes of biological evolution such as natural selection. Biological fitness, which describes the degree to which an organism is adapted to its environment, thus serves as a model for these algorithms: the "fitness" of a candidate solution is how well it solves the underlying problem.
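To make this concrete, here is a minimal sketch of an evolutionary search in Python. It is an illustration under simplified assumptions, not the algorithm used in the study: each candidate solution is a small vector of parameters for a hypothetical plasticity rule, the hidden target and fitness function are invented for the example, and fitness simply rewards candidates that land close to that target.

```python
# Minimal evolutionary-algorithm sketch (illustrative assumptions only):
# candidates are parameter vectors for a hypothetical plasticity rule,
# and fitness is higher the closer they come to a hidden "good" setting.
import random

POPULATION_SIZE = 20
N_PARAMS = 3
TARGET = [0.5, -0.2, 0.8]  # stand-in for parameters that solve the task well


def fitness(candidate):
    # Higher is better: negative squared distance to the hidden target.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))


def mutate(candidate, scale=0.1):
    # Small random changes play the role of mutation.
    return [c + random.gauss(0.0, scale) for c in candidate]


# Start from a random population of candidate parameter vectors.
population = [[random.uniform(-1, 1) for _ in range(N_PARAMS)]
              for _ in range(POPULATION_SIZE)]

for generation in range(50):
    # Selection: keep the fittest half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POPULATION_SIZE // 2]
    # Reproduction with variation: survivors produce mutated offspring.
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(population, key=fitness)
print("best candidate:", [round(p, 3) for p in best],
      "fitness:", round(fitness(best), 4))
```

Over successive generations, selection and mutation steadily improve the population, which is the same loop the E2L approach applies to candidate plasticity rules.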
Amazing creativity
The newly developed approach is referred to as the "evolving-to-learn" (E2L) approach or "becoming adaptive." The research team led by Dr. Mihai Petrovici of the Institute of Physiology at the University of Bern and the Kirchhoff Institute for Physics at the University of Heidelberg confronted the evolutionary algorithms with three typical learning scenarios, sketched below. In the first, the computer had to detect a repeating pattern in a continuous stream of input without receiving feedback about its performance. In the second scenario, the computer received virtual rewards when behaving in a particular desired manner. Finally, in the third scenario of "guided learning," the computer was told precisely how much its behavior deviated from the desired one.
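The sketch below illustrates the kind of feedback available in each scenario using generic textbook-style weight updates (a Hebbian rule, a reward-modulated rule, and an error-driven rule). These example rules and all numbers are assumptions for illustration; they are not the plasticity rules evolved in the study.

```python
# Illustrative weight updates for the three learning scenarios
# (generic textbook rules, not the rules discovered by E2L).
import numpy as np

rng = np.random.default_rng(0)
eta = 0.01                          # learning rate
w = rng.normal(0.0, 0.1, size=5)    # synaptic weights onto one neuron
pre = rng.random(5)                 # presynaptic activity
post = float(w @ pre)               # postsynaptic activity (linear neuron)

# 1) No performance feedback: a purely correlation-based (Hebbian) update.
w_unsupervised = w + eta * post * pre

# 2) Reward-based: the same correlation, gated by a scalar reward signal.
reward = 1.0                        # virtual reward for desired behavior
w_reward = w + eta * reward * post * pre

# 3) Guided learning: the update is driven by the deviation from a target.
target = 0.7
error = target - post
w_supervised = w + eta * error * pre

print(w_unsupervised, w_reward, w_supervised, sep="\n")
```

The three scenarios differ only in what signal is available to shape the synaptic update, which is exactly the search space the evolutionary algorithms explore.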
"In all these scenarios, the evolutionary algorithms were able to discover mechanisms of synaptic plasticity, and thereby successfully solve a new task," says Dr. Jakob Jordan, corresponding and co-first author from the Institute of Physiology at the University of Bern. In doing so, the algorithms showed amazing creativity: "For example, the algorithm found a new plasticity model in which signals we defined are combined to form a new signal. In fact, we observe that networks using this new signal learn faster than with previously known rules," emphasizes Dr. Maximilian Schmidt from the RIKEN Center for Brain Science in Tokyo, co-first author of the study. The results were published in the journal eLife.
"We see E2L as a promising approach to gain deep insights into biological learning principles and accelerate progress towards powerful artificial learning machines," says Mihai Petrovici. "We hope it will accelerate research on synaptic plasticity in the nervous system," concludes Jakob Jordan. The findings will provide new insights into how healthy and diseased brains work. They may also pave the way for the development of intelligent machines that can better adapt to the needs of their users.
Story Source:
Materials provided by the University of Bern.