Defocus and Motion Blur Detection with Deep Contextual Features
Author(s) -
Beomseok Kim,
Hyeongseok Son,
Seong-Jin Park,
Sunghyun Cho,
Seungyong Lee
Publication year - 2018
Publication title -
Computer Graphics Forum
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.578
H-Index - 120
eISSN - 1467-8659
pISSN - 0167-7055
DOI - 10.1111/cgf.13567
Subject(s) - motion blur , deblurring , computer science , artificial intelligence , computer vision , convolutional neural network , encoder , image restoration , image (mathematics) , pattern recognition (psychology) , image processing , operating system
Abstract - We propose a novel approach for detecting two kinds of partial blur, defocus and motion blur, by training a deep convolutional neural network. Existing blur detection methods concentrate on designing low‐level features, but those features have difficulty in detecting blur in homogeneous regions without enough textures or edges. To handle such regions, we propose a deep encoder‐decoder network with long residual skip‐connections and multi‐scale reconstruction loss functions to exploit high‐level contextual features as well as low‐level structural features. Another difficulty in partial blur detection is that there are no available datasets with images having both defocus and motion blur together, as most existing approaches concentrate only on either defocus or motion blur. To resolve this issue, we construct a synthetic dataset that consists of complex scenes with both types of blur. Experimental results show that our approach effectively detects and classifies blur, outperforming other state‐of‐the‐art methods. Our method can be used for various applications, such as photo editing, blur magnification, and deblurring.
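The two architectural ideas named in the abstract, a long residual skip-connection from the encoder input to the decoder output, and a reconstruction loss summed over multiple scales, can be illustrated with a deliberately tiny numpy sketch. This is a hypothetical toy, not the authors' network: the real model is a deep convolutional encoder-decoder, whereas here "encoding" is just 2x2 average pooling and "decoding" is nearest-neighbor upsampling, chosen only to make the data flow visible.

```python
import numpy as np

def downsample(x):
    """Halve spatial resolution via 2x2 average pooling."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Double spatial resolution via nearest-neighbor repetition."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def encoder_decoder(x):
    """Toy encoder-decoder: compress, restore, then add the input back
    through a long residual skip-connection (the 'long skip' idea)."""
    code = downsample(x)      # encoder: lose fine detail, keep context
    recon = upsample(code)    # decoder: restore original resolution
    return recon + x          # long residual skip from input to output

def multi_scale_loss(pred, target, scales=3):
    """Multi-scale reconstruction loss: sum of mean-squared errors
    computed at progressively coarser resolutions, so both fine
    structure and coarse context contribute to the objective."""
    loss = 0.0
    for _ in range(scales):
        loss += np.mean((pred - target) ** 2)
        pred, target = downsample(pred), downsample(target)
    return loss
```

The coarser terms of the loss penalize errors over large regions, which mirrors the abstract's point that homogeneous, texture-free areas need high-level contextual supervision rather than edge-level cues alone.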
