Data‐Driven Automatic Cropping Using Semantic Composition Search
Author(s) - Samii A., Měch R., Lin Z.
Publication year - 2015
Publication title - Computer Graphics Forum
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.578
H-Index - 120
eISSN - 1467-8659
pISSN - 0167-7055
DOI - 10.1111/cgf.12465
Subject(s) - computer science , artificial intelligence , computer vision , pattern recognition , hough transform , cropping , composition , image , object , database , geometry
We present a data‐driven method for automatically cropping photographs to be well‐composed and aesthetically pleasing. Our method matches the composition of an amateur's photograph to an expert's using point correspondences. The correspondences are based on a novel high‐level local descriptor we term the ‘Object Context’. Object Context is an extension of Shape Context: it is a descriptor encoding which objects and scene elements surround a given point. By searching a database of expertly composed images, we can find a crop window which makes an amateur's photograph closely match the composition of a database exemplar. We cull irrelevant matches in the database efficiently using a global descriptor which encodes the objects in the scene. For images with similar content in the database, we efficiently search the space of possible crops using generalized Hough voting. When comparing the result of our algorithm to expert crops, our crop windows overlap the expert crops by 83.6%. We also perform a user study which shows that our crops compare favourably to expert humans' crops.
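The abstract describes a three-stage pipeline: a local ‘Object Context’ descriptor built over semantic labels, a global scene descriptor used to cull dissimilar database exemplars, and generalized Hough voting over crop placements derived from point correspondences. The sketch below is an illustrative reading of that pipeline, not the authors' implementation: the log-polar binning, the number of labels, the nearest-neighbour descriptor matching, and the translation-only vote space are all assumptions made for the example.

```python
"""Illustrative sketch (not the paper's code) of an Object-Context-style
descriptor, a global culling descriptor, and Hough voting over crop offsets.
All bin counts, label counts, and distance choices are assumptions."""
import numpy as np

N_LABELS = 8                  # assumed number of object/scene classes
N_RADIAL, N_ANGULAR = 3, 8    # assumed log-polar binning, as in Shape Context


def object_context(label_map, y, x):
    """Histogram of semantic labels around (y, x), binned log-polar."""
    H, W = label_map.shape
    ys, xs = np.mgrid[0:H, 0:W]
    dy, dx = ys - y, xs - x
    r = np.hypot(dy, dx) + 1e-6
    theta = np.arctan2(dy, dx)
    r_bin = np.clip((np.log1p(r) / np.log1p(max(H, W)) * N_RADIAL).astype(int),
                    0, N_RADIAL - 1)
    a_bin = ((theta + np.pi) / (2 * np.pi) * N_ANGULAR).astype(int) % N_ANGULAR
    desc = np.zeros((N_RADIAL, N_ANGULAR, N_LABELS))
    np.add.at(desc, (r_bin, a_bin, label_map), 1.0)   # accumulate label counts
    return (desc / desc.sum()).ravel()


def global_descriptor(label_map):
    """Scene-level label histogram used to cull dissimilar exemplars."""
    return np.bincount(label_map.ravel(), minlength=N_LABELS) / label_map.size


def hough_crop(src_pts, src_desc, exm_pts, exm_desc, crop_hw):
    """Each descriptor match votes for the translation that aligns the
    amateur point with its exemplar counterpart; return the winning window."""
    votes = {}
    for p, d in zip(src_pts, src_desc):
        j = np.argmin(((exm_desc - d) ** 2).sum(axis=1))  # nearest exemplar point
        offset = tuple(np.round(np.asarray(p) - np.asarray(exm_pts[j])).astype(int))
        votes[offset] = votes.get(offset, 0) + 1
    dy, dx = max(votes, key=votes.get)
    return (dy, dx, dy + crop_hw[0], dx + crop_hw[1])   # (y0, x0, y1, x1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, N_LABELS, size=(120, 160))   # fake semantic label map
    pts = [(30, 40), (60, 100), (90, 20)]
    descs = np.stack([object_context(labels, y, x) for y, x in pts])
    # Degenerate self-match: the winning offset is (0, 0).
    print(hough_crop(pts, descs, pts, descs, crop_hw=(80, 120)))
```

In the paper the vote space presumably also covers crop scale and is accumulated over many exemplars; restricting it here to a fixed-size window and a single exemplar keeps the example compact.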
