Open Access
Video Object Segmentation based on improved OSVOS
Author(s) -
Shizhan Hong,
Tieyong Cao,
Shengkai Xiang,
Zheng Fang,
Xiaotong Deng,
Lei Xiang
Publication year - 2019
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1314/1/012196
Subject(s) - computer science , artificial intelligence , segmentation , convolution (computer science) , object (grammar) , convolutional neural network , computer vision , feature (linguistics) , pattern recognition (psychology) , fuse (electrical) , layer (electronics) , image segmentation , task (project management) , artificial neural network , philosophy , linguistics , chemistry , management , organic chemistry , electrical engineering , economics , engineering
Video object segmentation is a mainstream branch of current image processing research. Moving deep learning from fully supervised to unsupervised settings is a key problem that researchers are trying to solve. Within this progression, One-Shot Video Object Segmentation (OSVOS) successfully tackles the task of semi-supervised video object segmentation: it transfers the general semantic information learned on ImageNet to the foreground segmentation task, and then learns the mapping for the single annotated object in the sequence. In this paper, based on the concept of OSVOS, an improved neural network structure with dilated convolution, multi-scale convolution fusion and skip layers is proposed. Dilated convolution enlarges the receptive field; multi-scale convolution obtains feature maps at several scales and fuses them; skip layers transfer feature information from lower layers to upper layers. All of these help to improve the final accuracy. The experimental results show that all indicators have improved.
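
The abstract does not give layer details, so the following is only a minimal sketch of the three ideas it names (dilated convolution, multi-scale fusion, skip layers), written in PyTorch. The channel counts, dilation rates, and two-level encoder are assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch (not the paper's code) of dilated convolution, multi-scale
# fusion, and a skip connection; all sizes and rates are assumed for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleDilatedBlock(nn.Module):
    """Parallel dilated 3x3 convolutions whose multi-scale outputs are fused."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            # padding=d keeps the spatial size while dilation=d enlarges the receptive field
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        # 1x1 convolution fuses the concatenated multi-scale feature maps
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [F.relu(branch(x)) for branch in self.branches]
        return F.relu(self.fuse(torch.cat(feats, dim=1)))

class SkipSegmentationNet(nn.Module):
    """Toy encoder-decoder: a skip layer carries low-level features to the upper layer."""
    def __init__(self):
        super().__init__()
        self.enc1 = MultiScaleDilatedBlock(3, 16)
        self.enc2 = MultiScaleDilatedBlock(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.dec = nn.Conv2d(32 + 16, 16, kernel_size=3, padding=1)  # fuses skip + upsampled
        self.head = nn.Conv2d(16, 1, kernel_size=1)  # foreground-mask logits

    def forward(self, x):
        low = self.enc1(x)                         # low-level features (skip source)
        high = self.enc2(self.pool(low))           # deeper, coarser features
        up = F.interpolate(high, size=low.shape[2:], mode="bilinear",
                           align_corners=False)    # back to the skip resolution
        fused = F.relu(self.dec(torch.cat([up, low], dim=1)))  # skip connection
        return self.head(fused)

if __name__ == "__main__":
    net = SkipSegmentationNet()
    logits = net(torch.randn(1, 3, 64, 64))
    print(logits.shape)  # torch.Size([1, 1, 64, 64]): one-channel foreground mask
```

In this sketch the skip connection concatenates the encoder's low-level feature map with the upsampled deep features before the final prediction, which is one common way to pass low-layer information to upper layers.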
