Open Access
Familiar configuration enables figure/ground assignment in natural scenes
Author(s) -
Xiaofeng Ren,
Charless C. Fowlkes,
Jitendra Malik
Publication year - 2010
Publication title - Journal of Vision
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.126
H-Index - 113
ISSN - 1534-7362
DOI - 10.1167/5.8.344
Subject(s) - shape context , artificial intelligence , figure–ground , computer science , pattern recognition (psychology) , similarity (geometry) , computer vision , classifier , perception
VSS05 Abstract

Figure/ground organization is a step of perceptual organization that assigns a contour to one of the two abutting regions. Peterson et al. showed that familiar configurations of contours, such as outlines of recognizable objects, provide a powerful cue that can dominate traditional figure/ground cues such as symmetry. In this work we: (1) provide an operationalization of "familiar configuration" in terms of prototypical local shapes, without requiring global object recognition; and (2) show that a classifier based on this cue works well on images of natural scenes.

Shape context [Belongie, Malik & Puzicha, ICCV 2001; Berg & Malik, CVPR 2001] is a shape descriptor that summarizes the local arrangement of edges, relative to a center point, in a log-polar fashion. We cluster a large set of these descriptors to construct a small list of prototypical shape configurations, or "shapemes" (analogous to phonemes). Shapemes capture important local structures such as convexity and parallelism. For each point along a contour, we measure the similarity of its local shape descriptor to each shapeme. These measurements are combined using a logistic regression classifier to predict the figure/ground label.

We test this on the Berkeley Figure/Ground Dataset, which consists of 200 natural images with human-marked figure/ground labels. By averaging the classifier outputs over all points on each contour, we obtain an accuracy of 72% (chance is 50%). This compares favorably to the traditional figure/ground cues used in [Fowlkes et al. 2003]. Enforcing consistency constraints at junctions further increases the accuracy to 79%, making it a promising model of figure/ground organization.
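The pipeline the abstract describes — cluster local shape descriptors into "shapemes", measure each point's similarity to every shapeme, then classify figure/ground with logistic regression — can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the descriptor dimensionality, number of shapemes, RBF similarity, and gradient-descent training are all stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: descriptors for contour points (real shape
# contexts are log-polar edge histograms), with figure/ground labels.
# Two loose clusters stand in for distinct prototypical local shapes.
n, d, k = 400, 12, 4                      # points, descriptor dim, shapemes
centers = rng.normal(size=(2, d))
labels = rng.integers(0, 2, size=n)       # 1 = figure side, 0 = ground side
X = centers[labels] + 0.3 * rng.normal(size=(n, d))

# Step 1: cluster descriptors into prototypical shapes ("shapemes")
# with a few iterations of plain k-means.
shapemes = X[rng.choice(n, k, replace=False)].copy()
for _ in range(20):
    dists = np.linalg.norm(X[:, None, :] - shapemes[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    for j in range(k):
        if np.any(assign == j):
            shapemes[j] = X[assign == j].mean(axis=0)

# Step 2: per-point features = similarity to each shapeme
# (an RBF kernel is one assumed choice of similarity).
dists = np.linalg.norm(X[:, None, :] - shapemes[None, :, :], axis=2)
F = np.exp(-dists ** 2)                   # n x k similarity features

# Step 3: logistic regression on shapeme similarities, trained by
# batch gradient descent, predicting the figure/ground label.
w, b = np.zeros(k), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    w -= 1.0 * (F.T @ (p - labels) / n)
    b -= 1.0 * (p - labels).mean()

pred = (1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5).astype(int)
accuracy = (pred == labels).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In the paper's setting, per-point predictions are additionally averaged along each contour and reconciled at junctions, which is what lifts accuracy from 72% to 79%; this sketch stops at the per-point classifier.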
