Open Access
Research on Image Super-resolution Reconstruction Method Based on Improved SRCNN
Author(s) -
Zhongcai Huo,
Zhongdong Wu,
Weifu Xu
Publication year - 2019
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1345/2/022008
Subject(s) - computer science, convolutional neural network, artificial intelligence, image (mathematics), magnification, convolution (computer science), computer vision, image quality, artificial neural network, resolution (logic), pattern recognition (psychology), algorithm
Existing convolutional neural network based image super-resolution algorithms suffer from blurred image texture, which leaves room for improvement. In this paper, we first analyze the factors that affect reconstructed image quality, then use the parametric rectified linear unit (PReLU) to address the over-compression problem in the original network. Combining existing network models and image processing algorithms, we adjust the parameters of the neural network: the 9-1-5 network model of the Super-Resolution Convolutional Neural Network (SRCNN) is modified to a three-layer 5-3-5 model, the number of convolution kernels in the first and second layers is adjusted to 32, and the magnification factor is increased from three to four. Finally, simulation experiments are carried out on the DIV2K dataset from NTIRE 2017. Experimental results show that the proposed algorithm achieves good super-resolution performance, with clear improvements in both subjective visual effect and objective evaluation indices. Image super-resolution technology is in great demand in fields such as video surveillance and virtual scene restoration, and it remains a promising research direction.
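The modified architecture described in the abstract can be sketched as follows. This is a minimal illustration based only on the details given above (5-3-5 kernel sizes, 32 kernels in the first two layers, PReLU activations, and a 4x magnification applied by pre-upscaling the input as in the original SRCNN); the framework (PyTorch), class name, and layer arrangement are assumptions, not the authors' released code.

import torch
import torch.nn as nn

class ImprovedSRCNN(nn.Module):
    """Sketch of the improved SRCNN described in the abstract:
    5-3-5 kernel sizes, 32 kernels in the first two layers, and
    PReLU activations. As in the original SRCNN, it refines an
    image that has already been upscaled (e.g. bicubic, 4x)."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=5, padding=2),  # patch extraction
            nn.PReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),        # non-linear mapping
            nn.PReLU(),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x)

# Usage example: a low-resolution luminance patch is bicubically
# upscaled by 4x and then refined by the network.
if __name__ == "__main__":
    lr = torch.rand(1, 1, 24, 24)
    upscaled = nn.functional.interpolate(lr, scale_factor=4, mode="bicubic")
    sr = ImprovedSRCNN()(upscaled)
    print(sr.shape)  # torch.Size([1, 1, 96, 96])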
