Addressing the memory bottleneck with in-memory computing
Traditional computers are built on the von Neumann architecture, in which data are stored in a memory unit separate from the computing core. While this design allows specialized components for each function, it also creates a bottleneck that worsens as technology advances. To run a program, the computing core must first fetch data and instructions from memory, so overall computing time is limited by the time spent moving information to and from the memory unit, which undercuts many advances in processing speed.
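To make the bottleneck concrete, the back-of-the-envelope sketch below (in Python) estimates how long a matrix-vector product spends on arithmetic versus on moving data between memory and the computing core. The compute rate, memory bandwidth, and problem size are illustrative assumptions, not figures from the paper.

# Rough, illustrative model of the von Neumann bottleneck.
# All numbers are assumptions chosen for illustration, not measurements.

FLOPS = 1e12          # assumed peak compute rate of the core (operations/s)
BANDWIDTH = 100e9     # assumed memory bandwidth (bytes/s)
BYTES_PER_VALUE = 4   # 32-bit values

def matvec_times(n: int) -> tuple[float, float]:
    """Estimated compute vs. data-transfer time for an n x n matrix-vector product."""
    ops = 2 * n * n                                   # one multiply and one add per matrix entry
    bytes_moved = (n * n + 2 * n) * BYTES_PER_VALUE   # matrix plus input and output vectors
    return ops / FLOPS, bytes_moved / BANDWIDTH

compute_t, transfer_t = matvec_times(4096)
print(f"compute: {compute_t:.2e} s, data movement: {transfer_t:.2e} s")
# With these assumptions, moving the data takes far longer than the arithmetic itself.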
Mannocci et al. described recent advances in the field of in-memory computing, which is one promising solution to the memory bottleneck. They discussed the status of the field and outlined existing challenges and directions for future research.
“As a researcher, I know how important it is to read detailed reviews of the state of the art,” said author Daniele Ielmini. “My goal was to serve the research community with a detailed report and possibly attract more researchers to work on this exciting topic.”
In-memory computing seeks to relieve the memory bottleneck by moving some computing work into the memory unit itself, from simple operations such as vector multiplication and addition to advanced tasks like matrix inversion and eigenvector extraction. This could enable more energy-efficient machine learning systems. However, incorporating this functionality will require redesigning existing architectures and more concrete ideas about how to implement it.
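As a rough illustration of the idea (not taken from the paper), the following Python sketch mimics how an idealized resistive crossbar could perform a matrix-vector multiplication in place: matrix entries are stored as device conductances, the input vector is applied as read voltages, and the column currents give the result in a single step via Ohm's and Kirchhoff's laws. The conductance and voltage values are arbitrary assumptions, and device non-idealities such as noise, drift, and limited precision are ignored.

import numpy as np

# Idealized analog in-memory matrix-vector multiplication on a crossbar array.
# Assumed, illustrative values; real devices add noise, drift, and quantization.
rng = np.random.default_rng(0)

G = rng.uniform(1e-6, 1e-4, size=(4, 4))   # device conductances (siemens) storing the matrix
v = rng.uniform(0.0, 0.2, size=4)          # input vector applied as read voltages (volts)

# Each column current sums G[j, k] * v[j] over the rows: the multiply-accumulate
# happens in the memory array itself, with no data shuttled to a separate core.
i_out = G.T @ v

# Digital reference computation for comparison.
assert np.allclose(i_out, np.dot(G.T, v))
print(i_out)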
“An important research direction is on the application side, to identify the applications that might best benefit from in-memory computing in terms of acceleration, energy consumption, and cost,” said Ielmini. “Machine learning and deep learning are the areas that might benefit most.”
Source: “In-memory computing with emerging memory devices: status and outlook,” by Piergiulio Mannocci, Matteo Farronato, Nicola Lepri, Lorenzo Cattaneo, Artem Glukhov, Zhong Sun, and Daniele Ielmini, APL Machine Learning (2023). The article can be accessed at https://doi.org/10.1063/5.0136403.