Multi-step flow fusion: towards accurate and dense correspondences in long video shots
Author(s) -
Tomás Crivelli,
Pierre-Henri Conze,
Philippe Robert,
Matthieu Fradet,
Patrick Pérez
Publication year - 2012
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.26.107
Subject(s) - computer science , computer vision , artificial intelligence , computer graphics (images) , optical flow , fusion , mathematics
Abstract - The aim of this work is to estimate dense displacement fields over long video shots. Put in sequence, they are useful for representing point trajectories, but also for propagating (pulling) information from a reference frame to the rest of the video. Highly elaborate optical flow estimation algorithms are available and have previously been applied to dense point tracking by simple accumulation, though with unavoidable position drift. Direct long-term point matching, on the other hand, is more robust to such drift but very sensitive to ambiguous correspondences. Why not combine the benefits of both approaches? Following this idea, we develop a multi-step flow fusion method that optimally generates dense long-term displacement fields by first merging several candidate estimated paths and then filtering the tracks in the spatio-temporal domain. Our approach handles small and large displacements with improved accuracy and is able to recover a trajectory after temporary occlusions. The method is especially useful for video editing applications: we address graphic element insertion and video volume segmentation, and provide a number of quantitative comparisons against state-of-the-art approaches on ground-truth data.
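The core idea can be illustrated compactly. The Python sketch below is not the authors' implementation: the `flows` dictionary layout, the function names, and the purely data-driven per-pixel selection are assumptions for illustration. It concatenates pre-computed multi-step optical flow fields into several candidate reference-to-target displacement fields and fuses them by keeping, at each pixel, the candidate with the lowest colour matching cost; the paper's actual fusion additionally involves spatio-temporal regularisation and occlusion handling, which are omitted here.

```python
# Minimal sketch of multi-step flow concatenation and per-pixel fusion.
# Assumptions (not from the paper): flows[(i, j)] holds a pre-computed
# (H, W, 2) flow field from frame i to frame j; images are float arrays
# of shape (H, W, C); nearest-neighbour sampling is used for brevity.

import numpy as np

def warp_displacement(disp_ref_to_i, flow_i_to_j):
    """Concatenate a ref->i displacement field with an i->j flow field."""
    h, w, _ = disp_ref_to_i.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xi = np.clip(np.round(xs + disp_ref_to_i[..., 0]).astype(int), 0, w - 1)
    yi = np.clip(np.round(ys + disp_ref_to_i[..., 1]).astype(int), 0, h - 1)
    return disp_ref_to_i + flow_i_to_j[yi, xi]

def candidate_fields(flows, ref, target, step_sequences):
    """Build one ref->target candidate displacement field per step sequence,
    e.g. step_sequences = [[1, 1, 1, 1], [2, 2], [4]] for target = ref + 4."""
    candidates = []
    for steps in step_sequences:
        assert sum(steps) == target - ref
        disp = np.zeros(flows[(ref, ref + steps[0])].shape)
        frame = ref
        for s in steps:
            disp = warp_displacement(disp, flows[(frame, frame + s)])
            frame += s
        candidates.append(disp)
    return candidates

def fuse(candidates, img_ref, img_target):
    """Keep, at each pixel, the candidate with the lowest colour cost."""
    h, w, _ = img_ref.shape
    ys, xs = np.mgrid[0:h, 0:w]
    costs = []
    for disp in candidates:
        xt = np.clip(np.round(xs + disp[..., 0]).astype(int), 0, w - 1)
        yt = np.clip(np.round(ys + disp[..., 1]).astype(int), 0, h - 1)
        costs.append(np.abs(img_target[yt, xt] - img_ref).sum(axis=-1))
    best = np.argmin(np.stack(costs), axis=0)      # (H, W) winning index
    stacked = np.stack(candidates)                 # (K, H, W, 2)
    return np.take_along_axis(stacked, best[None, ..., None], axis=0)[0]
```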