Hardware Implementation of Energy Efficient Deep Learning Neural Network Based on Nanoscale Flash Computing Array
Author(s) -
Xiang Yachen,
Huang Peng,
Han Runze,
Zhou Zheng,
Shu Qingming,
Su Zhiqiang,
Hu Hong,
Liu Lu,
Liu Yongbo,
Liu Xiaoyan,
Kang Jinfeng
Publication year - 2019
Publication title - Advanced Materials Technologies
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.184
H-Index - 42
ISSN - 2365-709X
DOI - 10.1002/admt.201800720
Subject(s) - MNIST database , computer science , massively parallel , deep learning , pooling , artificial neural network , flash memory , in-memory processing , computer hardware , convolutional neural network , computer architecture , artificial intelligence , computer engineering , parallel computing
Deep neural networks (DNNs) provide efficient approaches to processing the growing volume of unstructured data, such as images, audio, and video. To improve the computing power and energy efficiency of data processing in DNNs, a universal and reconfigurable computing paradigm, with a hardware implementation scheme covering the convolution, pooling, and fully connected layers, is developed based on nanoscale flash computing arrays, which can be fabricated at scale. By precisely tuning the threshold voltage, the fabricated 65 nm flash cells exhibit 16 storage levels (four bits) per cell. To confirm the feasibility of the computing paradigm, an exemplary five-layer DNN is simulated based on measured data from NOR-type flash memory; it achieves 97.8% recognition accuracy on the Modified National Institute of Standards and Technology (MNIST) handwritten digit database at a speed of 4.2 × 10^5 fps with a 104 MHz operating frequency. With its low energy consumption and chip cost, the proposed paradigm shows great promise for future energy-efficient, massively parallel data processing in DNNs.
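The core ideas in the abstract — quantizing weights onto 16 discrete cell states and performing a layer's vector-matrix multiply in the array (input voltages on word lines, weights stored as cell states, currents summed along bit lines) — can be sketched numerically. This is a minimal illustration, not the authors' implementation; the function names, the uniform level spacing, and the noise-free analog sum are all simplifying assumptions.

```python
import numpy as np

def quantize_to_levels(weights, n_levels=16):
    """Snap continuous weights onto n_levels evenly spaced states,
    mimicking the 16 threshold-voltage levels (4 bits) per flash cell.
    Uniform spacing is an assumption; real cells need per-level tuning."""
    w_min, w_max = weights.min(), weights.max()
    step = (w_max - w_min) / (n_levels - 1)
    return np.round((weights - w_min) / step) * step + w_min

def flash_vmm(inputs, quantized_weights):
    """Idealized in-array vector-matrix multiply: inputs act as word-line
    voltages, quantized weights as cell states, and the bit-line current
    is the sum of per-cell contributions (here, an exact dot product)."""
    return inputs @ quantized_weights

# Quantize a small random weight matrix and run one "layer" through it.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # hypothetical 4-input, 3-output layer
Wq = quantize_to_levels(W)        # at most 16 distinct values remain
x = rng.standard_normal(4)        # input "voltages"
y = flash_vmm(x, Wq)              # bit-line "currents", shape (3,)
```

With 16 levels the worst-case quantization error per weight is half a level step, which is why the simulated network in the abstract can retain high MNIST accuracy despite the 4-bit weight precision.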