Open Access
Learning Unsupervised Visual Representations using 3D Convolutional Autoencoder with Temporal Contrastive Modeling for Video Retrieval
Author(s) -
Vidit Kumar,
Vikas Tripathi,
Bhaskar Pant
Publication year - 2022
Publication title -
international journal of mathematical, engineering and management sciences
Language(s) - English
Resource type - Journals
ISSN - 2455-7749
DOI - 10.33889/ijmems.2022.7.2.018
Subject(s) - computer science, autoencoder, feature learning, artificial intelligence, convolutional neural network, deep learning, unsupervised learning, representation learning, pattern recognition, machine learning, multi-task learning
The rapid growth of tag-free user-generated videos on the Internet, recorded surgical videos, and surveillance videos has created a pressing need for effective content-based video retrieval systems. Earlier approaches to video representation relied on hand-crafted features, which rarely performed well on video retrieval tasks. Deep learning methods have since demonstrated their effectiveness on both image and video tasks, but at the cost of creating massive labeled datasets. An economical alternative is therefore to learn representations from freely available unlabeled web videos. Most recently developed methods of this kind solve a single pretext task using a 2D or 3D convolutional network. In contrast, this paper designs and studies a 3D convolutional autoencoder (3D-CAE) for video representation learning, which requires no labels. Building on it, the paper proposes a new unsupervised video feature learning method based on jointly learning past and future prediction with the 3D-CAE and temporal contrastive learning. Experiments are conducted on the UCF-101 and HMDB-51 datasets, where the proposed approach achieves better retrieval performance than the state-of-the-art. In an ablation study, an action recognition task is performed by fine-tuning the unsupervised pre-trained model, where it outperforms other methods, further confirming the method's ability to learn the underlying features. Such an unsupervised representation learning approach could also benefit the medical domain, where creating large labeled datasets is expensive.
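
To make the idea in the abstract concrete, the snippet below is a minimal, hypothetical PyTorch sketch of a 3D convolutional autoencoder trained with a reconstruction loss plus an InfoNCE-style temporal contrastive loss between a "past" clip and a "future" clip of the same video. It is not the authors' implementation: the layer sizes, clip dimensions, the names Conv3DAutoencoder and temporal_contrastive_loss, and the exact loss formulation are illustrative assumptions based only on the abstract.

```python
# Hedged sketch: a 3D convolutional autoencoder with a temporal contrastive
# (InfoNCE-style) loss. All architectural choices here are illustrative
# assumptions, not the paper's exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Conv3DAutoencoder(nn.Module):
    """Encodes a video clip (B, C, T, H, W) to a latent code and reconstructs it."""

    def __init__(self, in_channels=3, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, stride=2, padding=1),  # halve T, H, W
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),  # global spatio-temporal pooling
        )
        self.to_latent = nn.Linear(64, latent_dim)
        # Assumes 8-frame 16x16 clips, so the bottleneck feature map is (64, 2, 4, 4).
        self.decoder_input = nn.Linear(latent_dim, 64 * 2 * 4 * 4)
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(32, in_channels, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, clip):
        feat = self.encoder(clip).flatten(1)          # (B, 64)
        z = self.to_latent(feat)                      # (B, latent_dim)
        dec = self.decoder_input(z).view(-1, 64, 2, 4, 4)
        recon = self.decoder(dec)                     # (B, C, 8, 16, 16) reconstruction
        return z, recon


def temporal_contrastive_loss(z_past, z_future, temperature=0.1):
    """InfoNCE-style loss: past/future codes of the same video are positives,
    codes from other videos in the batch are negatives."""
    z_past = F.normalize(z_past, dim=1)
    z_future = F.normalize(z_future, dim=1)
    logits = z_past @ z_future.t() / temperature      # (B, B) similarity matrix
    targets = torch.arange(z_past.size(0), device=z_past.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    model = Conv3DAutoencoder()
    # Two 8-frame 16x16 clips per video: a "past" clip and a "future" clip.
    past = torch.rand(4, 3, 8, 16, 16)
    future = torch.rand(4, 3, 8, 16, 16)
    z_p, recon_p = model(past)
    z_f, recon_f = model(future)
    loss = (F.mse_loss(recon_p, past) + F.mse_loss(recon_f, future)
            + temporal_contrastive_loss(z_p, z_f))
    loss.backward()
    print(f"combined loss: {loss.item():.4f}")
```

In this sketch, clips drawn from the same video serve as positive pairs while clips from other videos in the batch serve as negatives; the paper's actual clip sampling strategy, network depth, and loss weighting may differ.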
