Open Access
SHAPELEARNER: TOWARDS SHAPE-BASED VISUAL KNOWLEDGE HARVESTING
Author(s) - Zheng Wang, Ti Liang
Publication year - 2016
Publication title - The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.264
H-Index - 71
eISSN - 1682-1777
pISSN - 1682-1750
DOI - 10.5194/isprsarchives-xli-b3-789-2016
Subject(s) - computer science, artificial intelligence, computer vision, pattern recognition, segmentation, WordNet
The explosion of images on the Web has led to a number of efforts to organize images semantically and compile collections of visual knowledge. While there has been enormous progress on categorizing entire images or bounding boxes, only a few studies have targeted fine-grained image understanding at the level of specific shape contours. For example, given an image of a cat, we would like a system to not merely recognize the existence of a cat, but also to distinguish between the cat’s legs, head, tail, and so on. In this paper, we present ShapeLearner, a system that acquires such visual knowledge about object shapes and their parts. ShapeLearner jointly learns this knowledge from sets of segmented images. The space of label and segmentation hypotheses is pruned and then evaluated using Integer Linear Programming. ShapeLearner places the resulting knowledge in a semantic taxonomy based on WordNet and is able to exploit this hierarchy in order to analyze new kinds of objects that it has not observed before. We conduct experiments using a variety of shape classes from several representative categories and demonstrate the accuracy and robustness of our method.
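
To give a flavour of how a hypothesis space of (segment, part-label) candidates can be evaluated with Integer Linear Programming, the sketch below uses the PuLP library to pick a mutually consistent, highest-scoring subset of hypotheses. The scores, segment names, and overlap constraints are hypothetical placeholders; the abstract does not specify ShapeLearner's actual ILP formulation, so this is only a minimal illustration of the general technique.

```python
# Minimal ILP sketch (PuLP): select a consistent set of (segment, part-label)
# hypotheses with maximum total score. All inputs below are illustrative
# placeholders, not ShapeLearner's actual formulation.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

# Candidate (segment, part-label) hypotheses with assumed confidence scores.
hypotheses = {
    ("seg1", "head"): 0.9,
    ("seg1", "tail"): 0.2,
    ("seg2", "leg"): 0.7,
    ("seg3", "leg"): 0.6,
    ("seg3", "tail"): 0.5,
}
# Pairs of segments that overlap and therefore cannot both be selected (assumed).
overlapping = [("seg2", "seg3")]

prob = LpProblem("part_label_selection", LpMaximize)
x = {h: LpVariable(f"x_{h[0]}_{h[1]}", cat=LpBinary) for h in hypotheses}

# Objective: maximize the total score of the selected hypotheses.
prob += lpSum(score * x[h] for h, score in hypotheses.items())

# Each segment receives at most one part label.
for seg in {seg for seg, _ in hypotheses}:
    prob += lpSum(x[h] for h in hypotheses if h[0] == seg) <= 1

# Overlapping segments are mutually exclusive.
for a, b in overlapping:
    prob += lpSum(x[h] for h in hypotheses if h[0] in (a, b)) <= 1

prob.solve()
selected = [h for h in hypotheses if x[h].value() == 1]
print(selected)  # e.g. [('seg1', 'head'), ('seg2', 'leg')]
```

In this toy instance the solver keeps the "head" label for seg1 and the "leg" label for seg2, since seg2 and seg3 are declared mutually exclusive and seg2's score is higher. A real system would derive the scores from learned shape models and add constraints reflecting part co-occurrence and the WordNet-based taxonomy.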