An Efficient Method for Automatic Video Annotation and Retrieval in Visual Sensor Networks
Author(s) - Jiangfan Feng, Wenwen Zhou
Publication year - 2014
Publication title - International Journal of Distributed Sensor Networks
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.324
H-Index - 53
eISSN - 1550-1477
pISSN - 1550-1329
DOI - 10.1155/2014/832512
Subject(s) - discriminative model, computer science, annotation, artificial intelligence, classifier (UML), neural coding, constraint (computer aided design), semantic gap, pattern recognition (psychology), coding (social sciences), machine learning, video retrieval, image retrieval, image (mathematics), mechanical engineering, statistics, mathematics, engineering
Automatic video annotation has become an important problem in visual sensor networks because of the semantic gap between low-level visual features and high-level semantics. Although it has been studied extensively, the semantic representation of visual information remains poorly understood. To address the pattern-classification problem in video annotation, this paper proposes a discriminative constraint that drives the sparse representation coefficients toward discriminative solutions. We study a general discriminative dictionary learning method that is independent of the specific dictionary and classifier learning algorithms, and we further introduce a tightly coupled discriminative sparse coding model. Experimental results show that the proposed method achieves annotation performance beyond what existing schemes can offer.
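The abstract builds on sparse-representation-based classification: a sample is sparsely coded over a dictionary whose atoms carry class labels, and the class with the smallest reconstruction residual wins. The sketch below is a minimal, generic illustration of that idea, not the paper's tightly coupled model; the function names (`ista`, `src_classify`) and the ISTA solver are assumptions chosen for clarity.

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=200):
    # Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 via the
    # Iterative Shrinkage-Thresholding Algorithm (ISTA).
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

def src_classify(D, labels, x, lam=0.1):
    # Assign x to the class whose atoms best reconstruct it
    # using only that class's portion of the sparse code.
    a = ista(D, x, lam)
    best, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        res = np.linalg.norm(x - D[:, mask] @ a[mask])
        if res < best_res:
            best, best_res = c, res
    return best
```

A discriminative dictionary learning scheme like the one the paper studies would additionally adapt the atoms of `D` during training so that the per-class residual gaps widen, rather than using a fixed dictionary as above.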
Accelerating Research
Robert Robinson Avenue,
Oxford Science Park, Oxford
OX4 4GP, United Kingdom