Which is the better inpainted image? Learning without subjective annotation
Author(s) -
Mariko Isogawa,
Dan Mikami,
Kosuke Takahashi,
Hideaki Kimata
Publication year - 2017
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.31.5
Subject(s) - computer science, annotation, artificial intelligence, image (mathematics), computer vision
This paper proposes a learning-based quality evaluation framework for inpainted results that requires no subjectively annotated training data. Image inpainting, which removes unwanted regions in images and restores them, is widely acknowledged as a task whose results are difficult to evaluate objectively. Existing learning-based image quality assessment (IQA) methods for inpainting therefore require subjectively annotated data for training. However, subjective annotation is costly, and subjects' judgments can vary from person to person depending on their judgment criteria. To overcome these difficulties, the proposed framework trains on simulated failure results of inpainted images whose subjective qualities are controlled. This approach enables the preference order between pairs of inpainted images to be estimated successfully even though the task is highly subjective. To demonstrate the effectiveness of our approach, we test our algorithm on various datasets and show that it outperforms state-of-the-art IQA methods for inpainting.
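To make the core idea concrete, the following is a minimal sketch, in PyTorch, of how a pairwise preference model could be trained on simulated inpainting failures whose relative quality is known by construction, so no human labels are needed. The network architecture, the scoring head, and the margin ranking loss used here are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Minimal sketch: pairwise quality ranking trained on simulated failures.
# All names and design choices here are hypothetical; the paper's actual
# features, architecture, and loss may differ.
import torch
import torch.nn as nn

class QualityScorer(nn.Module):
    """Maps an inpainted image to a scalar quality score."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, better, worse, margin=1.0):
    """One ranking step. `better` is a mildly degraded (simulated) result
    and `worse` a more strongly degraded one, so the preference label is
    known by construction -- no subjective annotation is required."""
    loss_fn = nn.MarginRankingLoss(margin=margin)
    s_better = model(better)
    s_worse = model(worse)
    target = torch.ones_like(s_better)  # require s_better > s_worse
    loss = loss_fn(s_better, s_worse, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time, such a model would score each of two candidate inpainted images and prefer the one with the higher score, which matches the pairwise preference-ordering setting described in the abstract.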