
Blind text images deblurring based on a generative adversarial network
Author(s) -
Qi Qing,
Guo Jichang
Publication year - 2019
Publication title -
IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2018.6697
Subject(s) - deblurring , computer science , sharpening , kernel (algebra) , generator (circuit theory) , artificial intelligence , image (mathematics) , generative grammar , prior probability , pattern recognition (psychology) , generative adversarial network , function (biology) , image restoration , mathematics , image processing , bayesian probability , power (physics) , physics , combinatorics , quantum mechanics , evolutionary biology , biology
Recently, text image deblurring has achieved notable progress. Unlike previous methods that rely on hand‐crafted priors or assume a specific blur kernel, the authors treat text deblurring as a semantic generation task that can be solved with a generative adversarial network. Structure is an essential property of text images; the authors therefore propose a structural loss function and a detail loss function to regularise the recovery of text images. Furthermore, drawing on the coarse‐to‐fine strategy, they present a multi‐scale generator that sharpens the generated text images. The model is capable of generating realistic latent images of photographic quality. Extensive experiments on synthetic and real‐world blurry images show that the proposed network is comparable to state‐of‐the‐art methods.
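The abstract does not give the exact form of the structural and detail losses, so the following is only a minimal sketch of how such terms are commonly realised: a gradient-domain L1 term to preserve stroke structure, a pixel-wise L1 term for fine detail, and a weighted sum with an adversarial term supplied by a discriminator. All function names and weights here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gradient_maps(img):
    # Horizontal and vertical finite differences approximate the
    # edge/stroke structure that a structural loss aims to preserve.
    gx = img[:, 1:] - img[:, :-1]
    gy = img[1:, :] - img[:-1, :]
    return gx, gy

def structural_loss(pred, target):
    # L1 distance between gradient maps of the generated and sharp images.
    pgx, pgy = gradient_maps(pred)
    tgx, tgy = gradient_maps(target)
    return np.abs(pgx - tgx).mean() + np.abs(pgy - tgy).mean()

def detail_loss(pred, target):
    # Pixel-wise L1 loss penalising residual differences in fine detail.
    return np.abs(pred - target).mean()

def generator_loss(pred, target, adv_term, w_struct=1.0, w_detail=1.0, w_adv=0.01):
    # Weighted sum; adv_term would come from the discriminator's output.
    # The weights are placeholders, not values reported in the paper.
    return (w_struct * structural_loss(pred, target)
            + w_detail * detail_loss(pred, target)
            + w_adv * adv_term)
```

A blurred image flattens gradients, so the structural term responds to lost edges even when average intensity is unchanged; this is the usual motivation for combining a gradient-domain term with a plain pixel loss.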