Open Access
Resistive Memory‐Based In‐Memory Computing: From Device and Large‐Scale Integration System Perspectives
Author(s) -
Yan Bonan,
Li Bing,
Qiao Ximing,
Xue Cheng-Xin,
Chang Meng-Fan,
Chen Yiran,
Li Hai Helen
Publication year - 2019
Publication title - Advanced Intelligent Systems
Language(s) - English
Resource type - Journals
ISSN - 2640-4567
DOI - 10.1002/aisy.201900068
Subject(s) - resistive random access memory, von Neumann architecture, in-memory processing, matrix multiplication, neuromorphic engineering, computer architecture, memory architecture, memory cell, semiconductor memory, artificial neural network, parallel computing, computer hardware, embedded system, transistor, electrical engineering, artificial intelligence, computer science, engineering
In‐memory computing is a computing scheme that integrates data storage and arithmetic computation. Resistive random access memory (RRAM) arrays with innovative peripheral circuitry can perform vector‐matrix multiplication beyond basic Boolean logic. With such memory–computation duality, RRAM‐based in‐memory computing offers an efficient hardware solution for neural networks and related applications that depend on matrix multiplication. Herein, recent developments in nanoscale RRAM devices and the parallel progress at the circuit and microarchitecture layers are discussed. Emphasis is placed on the RRAM device properties and characteristics that make these devices well suited for implementing analog synapses and neurons. For large‐scale integration, 3D‐stackable RRAM and on‐chip training are introduced. The circuit design and system organization of RRAM‐based in‐memory computing are essential to breaking the von Neumann bottleneck. These outcomes illuminate the way toward large‐scale implementation of ultra‐low‐power, dense neural network accelerators.
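
As a minimal illustration of the vector‐matrix multiplication described in the abstract, the sketch below simulates an idealized RRAM crossbar in Python. It assumes ideal, linear devices: cell conductances encode the weight matrix, applied row voltages encode the input vector, and the summed column currents give the product in one analog step. All array sizes and values here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Idealized RRAM crossbar (illustrative sketch, not the paper's design):
# each cell's conductance G[i, j] stores one weight. Driving the rows with
# read voltages V and summing currents along each column performs
# I = G^T @ V, i.e., an analog vector-matrix multiplication.

rng = np.random.default_rng(0)

G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # cell conductances (S), 4 rows x 3 columns
V = rng.uniform(0.0, 0.2, size=4)          # read voltages applied to the rows (V)

# Kirchhoff's current law: column j collects sum_i V[i] * G[i, j]
I = V @ G                                   # column currents (A) = VMM result

print(I)
```

In a physical array, nonidealities such as wire resistance, device variation, and limited conductance precision perturb this ideal result, which is one motivation for the peripheral-circuit and architecture techniques the paper surveys.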
