Automatically estimating number of scenes for rushes summarization
Author(s) -
Koji Yamasaki,
Koichi Shinoda,
Sadaoki Furui
Publication year - 2008
Publication title -
Tokyo Tech Research Repository (Tokyo Institute of Technology)
Language(s) - English
Resource type - Conference proceedings
DOI - 10.1145/1463563.1463587
Subject(s) - automatic summarization , computer science , computer vision , pattern recognition , Gaussian mixture model , model selection , video shot segmentation
This paper describes our video summarization system, which uses a model selection technique to estimate the optimal number of scenes for a summary. It uses the minimum description length (MDL) as a model selection criterion and carries out a two-stage estimation: first, we estimate the number of scenes in each shot, and then we estimate the number of scenes in the whole video clip. We model a set of scenes with a Gaussian mixture model, where each mixture component is assumed to represent one scene. Our system was evaluated in the TRECVID 2008 rushes summarization task, where the test video set consisted of unedited material provided by the BBC. Our scores were about the same as the average of all participants on the eight evaluation measures.
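The core idea of the abstract — choosing the number of Gaussian mixture components, each standing for one scene, by minimizing a description-length criterion — can be illustrated with a minimal sketch. This is not the authors' implementation: it uses scikit-learn's `GaussianMixture` and the closely related BIC score as a stand-in for MDL (both penalize model complexity in essentially the same way), and the feature array and `max_k` bound are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def estimate_num_scenes(features, max_k=6, seed=0):
    """Pick the GMM component count that minimizes BIC.

    BIC is used here as a stand-in for the paper's MDL criterion.
    `features` is an (n_frames, dim) array of per-frame descriptors
    (a hypothetical input; the paper's actual features are not shown here).
    """
    best_k, best_bic = 1, np.inf
    for k in range(1, max_k + 1):
        gmm = GaussianMixture(n_components=k, random_state=seed).fit(features)
        bic = gmm.bic(features)
        if bic < best_bic:
            best_k, best_bic = k, bic
    return best_k

# Toy data: three well-separated 2-D clusters standing in for three scenes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.5, size=(100, 2)) for m in (0.0, 10.0, 20.0)])
print(estimate_num_scenes(X))
```

In the paper's two-stage scheme, a routine like this would run once per shot and then again over the whole clip; the sketch shows only the single model-selection step.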