
Quantifying abstraction in neural networks to increase understanding of human brain processing

JAN 07, 2022
Training simple neural networks to form abstract representations is valuable for neuroscience, machine learning, and artificial intelligence.

Biological and artificial neural systems learn and store information as memories, making generalizations from limited data and forming abstract representations.

Such networks can sustain dynamic internal representations and abstract complex transformations from them for prediction and processing. Predicting the trajectory of a moving object, learning the grammar of a language, or transposing the pitch of a song all require the ability to abstract information from inputs.

Both brains and artificial neural networks are capable of abstracting representations from data, but the exact underlying mechanisms are poorly understood.

Smith et al. present a quantifiable, measurable form of abstraction in a simple neural system.

“Abstraction is a process that both humans and computers utilize in complex analyses or everyday functions, such as making predictions or object invariance,” said author Jason Kim. “This work has the potential to shed light on how both human brains and artificial neural networks process information.”

The researchers trained an artificial 1000-neuron network to abstract continuous dynamical memories from a few discrete examples of memories, finding that the network could encode representations it was never explicitly trained on. The findings demonstrate that training can generate abstraction along an additional dimension of representation.
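The system described in the source paper is a reservoir computer: a fixed random recurrent network in which only a linear readout is trained. The following is a minimal toy sketch of that scheme on a simple signal, not the authors' code; the network size, scaling constants, and the sine-wave task are all illustrative assumptions.

```python
# Toy echo-state-network (reservoir computer) sketch: a fixed random
# recurrent reservoir is driven by an input signal, and only the linear
# readout weights W_out are trained (here by ridge regression) to
# predict the next step of the signal. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
N = 200                                             # reservoir neurons
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # spectral radius < 1
W_in = rng.uniform(-0.5, 0.5, N)                    # input weights

def run(u):
    """Drive the reservoir with input sequence u; return the state history."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

t = np.arange(3000) * 0.05
u = np.sin(t)                                       # training signal
X = run(u[:-1])                                     # states driven by u(t)
Y = u[1:]                                           # target: u(t + 1)

# Train only the readout, via ridge regression.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)
pred = X @ W_out
err = np.sqrt(np.mean((pred[500:] - Y[500:]) ** 2))  # RMS error, skipping transient
```

In the paper's setting, the reservoir is instead trained on discrete examples of chaotic attractors, and the learned readout turns out to encode a continuous family of attractors between and beyond the examples.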

“This research was important to better understand a process that both biological and artificial neural networks need to perform, but is not well understood,” said author Lindsay Smith. “We have derived a simple, yet powerful, mechanism of what abstraction looks like in a simple neural network, which has the potential to be applied to more complex neural networks.”

This abstraction capability could contribute to the future of machine learning and artificial intelligence by helping algorithms form generalizations from limited data sets.

Source: “Learning continuous chaotic attractors with a reservoir computer,” by Lindsay M. Smith, Jason Z. Kim, Zhixin Lu, and Dani S. Bassett. Chaos (2021). The article can be accessed at https://doi.org/10.1063/5.0075572.
