Quantifying abstraction in neural networks to increase understanding of human brain processing
Biological and artificial neural systems learn and store information as memories, making generalizations from limited data and forming abstract representations.
Such networks can sustain dynamic internal representations and abstract complex transformations from them for prediction and processing. For example, predicting the trajectory of a moving object, learning the grammar of a language, or shifting the pitch of a song all require abstracting information from inputs.
Both brains and artificial neural networks are capable of abstracting representations from data, but the exact underlying mechanisms are poorly understood.
Smith et al. present a quantifiable, measurable form of abstraction in a simple neural system.
“Abstraction is a process that both humans and computers utilize in complex analyses or everyday functions, such as making predictions or object invariance,” said author Jason Kim. “This work has the potential to shed light on how both human brains and artificial neural networks process information.”
The researchers trained an artificial 1,000-neuron network to abstract continuous dynamical memories from discrete examples of memories, and found that the network could encode representations beyond those it was trained on. The findings demonstrate that, through training, abstraction can extend the network's representations along an additional dimension.
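The network type named in the paper's title, a reservoir computer, can be sketched as a minimal echo-state network: a fixed, random recurrent network is driven by a chaotic signal, and only a linear readout is trained. The sketch below is a generic illustration of that technique, not the authors' implementation; the Lorenz system as the chaotic memory, the reduced network size (300 neurons rather than the study's 1,000, for speed), and all hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 300  # reservoir neurons (assumption; the study used 1,000)

def lorenz_series(steps, dt=0.01):
    """Integrate the Lorenz system (a standard chaotic attractor) by Euler steps."""
    x = np.array([1.0, 1.0, 1.0])
    out = np.empty((steps, 3))
    for t in range(steps):
        dx = np.array([10.0 * (x[1] - x[0]),
                       x[0] * (28.0 - x[2]) - x[1],
                       x[0] * x[1] - (8.0 / 3.0) * x[2]])
        x = x + dt * dx
        out[t] = x
    return out

data = lorenz_series(3000)
u = data / np.abs(data).max(axis=0)  # normalize inputs to roughly [-1, 1]

# Fixed random weights: input projection and recurrent reservoir.
# Only the readout W_out is ever trained.
W_in = rng.uniform(-0.5, 0.5, size=(N, 3))
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius below 1

# Drive the reservoir with the chaotic series and record its states.
r = np.zeros(N)
states = np.empty((len(u) - 1, N))
for t in range(len(u) - 1):
    r = np.tanh(W @ r + W_in @ u[t])
    states[t] = r

# Train the linear readout by ridge regression to predict the next input,
# discarding an initial washout so transients do not contaminate the fit.
washout = 200
X, Y = states[washout:], u[washout + 1:]
beta = 1e-6
W_out = np.linalg.solve(X.T @ X + beta * np.eye(N), X.T @ Y).T

pred = X @ W_out.T
err = float(np.sqrt(np.mean((pred - Y) ** 2)))
print(f"one-step prediction RMSE: {err:.4f}")
```

Because the recurrent weights stay fixed, training reduces to a single linear regression, which is what makes reservoir computers a convenient setting for analyzing how a network's internal dynamics store and abstract memories.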
“This research was important to better understand a process that both biological and artificial neural networks need to perform, but is not well understood,” said author Lindsay Smith. “We have derived a simple, yet powerful, mechanism of what abstraction looks like in a simple neural network, which has the potential to be applied to more complex neural networks.”
This abstraction capability could contribute to the future of machine learning and artificial intelligence, giving algorithms both implicit and explicit capacity to form generalizations from data sets.
Source: “Learning continuous chaotic attractors with a reservoir computer,” by Lindsay M. Smith, Jason Z. Kim, Zhixin Lu, and Dani S. Bassett. Chaos (2021). The article can be accessed at https://doi.org/10.1063/5.0075572.