Connective issue: AI learns by doing more with less
Brains have evolved to do more with less. A tiny insect brain, for example, has fewer than a million neurons yet produces a remarkable diversity of behaviors while using far less energy than current AI systems. These tiny brains serve as models for computing systems that are becoming increasingly sophisticated, now that billions of silicon neurons can be implemented in hardware.
The secret to achieving that energy efficiency lies in the silicon neurons’ ability to learn to communicate and form networks, as shown by new research from the lab of Shantanu Chakrabartty, the Clifford W. Murphy Professor in the Preston M. Green Department of Electrical & Systems Engineering at Washington University in St. Louis’ McKelvey School of Engineering.
The results were published July 28, 2021, in the journal Frontiers in Neuroscience.
For several years, Chakrabartty’s research group has studied dynamical-systems approaches to closing the neuron-to-network performance gap, aiming to provide a blueprint for AI systems that are as energy efficient as biological ones.
Previous work from his group showed that in a computational system, spiking neurons create perturbations that allow each neuron to “know” which others are spiking and which are responding. It’s as if the neurons were all embedded in a rubber sheet formed by energy constraints; a single ripple, caused by a spike, creates a wave that affects them all. Like all physical processes, systems of silicon neurons tend to self-optimize toward their least-energetic states, while also being affected by the other neurons in the network. Together, these constraints form a kind of secondary communication network, in which additional information is conveyed through the dynamic but synchronized topology of the spikes. It’s like the rubber sheet vibrating in a synchronized rhythm in response to multiple spikes.
In the latest result, Chakrabartty and doctoral student Ahana Gangopadhyay showed how the neurons learn to pick the most energy-efficient perturbations and wave patterns in the rubber sheet. They show that if the learning is guided by sparsity (using less energy), it is as if each neuron adjusts the electrical stiffness of the rubber sheet so that the entire network vibrates in the most energy-efficient way. Each neuron does this using only local information, which makes communication more efficient. Communication between the neurons then becomes an emergent phenomenon, guided by the need to optimize energy use.
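The general idea of network behavior emerging from energy minimization under a sparsity constraint can be illustrated with a toy sketch. This is not the authors’ actual neuron model; it is a minimal stand-in in which each neuron performs coordinate descent on a shared quadratic energy with an L1 sparsity penalty, updating itself using only its own input and the weighted activity of its neighbors. The coupling matrix `Q`, input `b`, and sparsity weight `lam` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: n neurons coupled through a symmetric positive-definite
# matrix Q (the "rubber sheet" of mutual energy constraints).
n = 8
A = rng.normal(size=(n, n))
Q = A @ A.T / n + np.eye(n)   # coupling matrix (positive definite)
b = rng.normal(size=n)        # external input to each neuron
lam = 0.5                     # sparsity weight: higher -> fewer active neurons

def energy(v):
    """Quadratic network energy plus an L1 sparsity penalty."""
    return 0.5 * v @ Q @ v - b @ v + lam * np.abs(v).sum()

def soft_threshold(x, t):
    """Shrink x toward zero by t; the L1 penalty's proximal step."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Coordinate descent: each neuron updates using only "local" quantities,
# i.e. its own input and the weighted activity of its neighbors. The
# shared energy monotonically decreases, and the L1 term drives many
# neurons exactly to zero (sparse, low-energy activity).
v = np.zeros(n)
for _ in range(200):
    for i in range(n):
        # residual input to neuron i from everything except itself
        r = b[i] - Q[i] @ v + Q[i, i] * v[i]
        v[i] = soft_threshold(r, lam) / Q[i, i]

active = np.flatnonzero(np.abs(v) > 1e-9)
print("final energy:", energy(v))
print("active neurons:", active)
```

Because every update only ever lowers the shared energy, the network settles into a sparse activity pattern without any neuron needing a global view, loosely mirroring the article’s point that efficient communication emerges from local energy optimization.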
