Open Access
A Videography Analysis Framework for Video Retrieval and Summarization
Author(s) -
Kang Li,
Sangmin Oh,
A. G. Amitha Perera,
Yun Fu
Publication year - 2012
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.26.126
Subject(s) - videography, automatic summarization, computer science, artificial intelligence, computer vision, motion (physics), focus (optics), physics, advertising, optics, business
Abstract - In this work, we focus on developing features and approaches to represent and analyze videography styles in unconstrained videos. By unconstrained videos, we mean typical consumer videos with significant content complexity and diverse editing artifacts, mostly of long duration. Our approach constructs a videography dictionary, which is used to represent each video clip as a series of varying videography words. In addition to conventional features such as camera motion and foreground object motion, two novel features, motion correlation and scale information, are introduced to characterize videography. We then show that unique videography signatures of different events can be identified automatically using statistical analysis methods. For practical applications, we explore the use of videography analysis for content-based video retrieval and video summarization. We compare our approaches with other methods on a large unconstrained video dataset and demonstrate that our approach benefits video analysis.
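
The following is a minimal sketch, not the authors' implementation, of the videography-dictionary idea described above: per-segment videography descriptors are clustered into "videography words", and each clip is then encoded as a sequence of word indices plus a word histogram usable for retrieval or summarization. The feature layout, dictionary size, and helper names (`segment_features`, `encode_clip`) are illustrative assumptions.

```python
# Hedged sketch of a videography dictionary via k-means clustering.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def segment_features(n_segments):
    """Hypothetical per-segment descriptor: [camera motion (2D),
    foreground object motion (2D), motion correlation (1D), scale (1D)]."""
    return rng.normal(size=(n_segments, 6))

# Training pool: segments drawn from many unconstrained videos (synthetic here).
train_segments = np.vstack(
    [segment_features(rng.integers(20, 60)) for _ in range(50)]
)

# Build the videography dictionary by clustering segment descriptors.
n_words = 16  # assumed dictionary size
dictionary = KMeans(n_clusters=n_words, n_init=10, random_state=0)
dictionary.fit(train_segments)

def encode_clip(features):
    """Represent a clip as a sequence of videography words plus a
    normalized word histogram (a simple signature for retrieval)."""
    words = dictionary.predict(features)
    hist = np.bincount(words, minlength=n_words).astype(float)
    return words, hist / hist.sum()

# Example: encode one query clip.
query_words, query_hist = encode_clip(segment_features(40))
print("word sequence (first 10):", query_words[:10])
print("word histogram:", np.round(query_hist, 2))
```

Under this sketch, clips could be compared for retrieval by histogram distance, or summarized by selecting segments that cover the most frequent videography words; the paper's statistical analysis of event-specific videography signatures would operate on such word representations.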
