Open Access
Multi‐scale object detection by bottom‐up feature pyramid network
Author(s) - Boya Zhao, Baojun Zhao, Linbo Tang, Chen Wu
Publication year - 2019
Publication title - The Journal of Engineering
Language(s) - English
Resource type - Journals
ISSN - 2051-3305
DOI - 10.1049/joe.2019.0314
Subject(s) - computer science , artificial intelligence , computer vision , object detection , convolutional neural network , pattern recognition , feature (linguistics) , pyramid (geometry) , scale (ratio) , backbone network
Deep neural networks have developed rapidly and achieved great success in many important fields, such as smart surveillance, autonomous driving, and face recognition. However, detecting objects at multiple scales and aspect ratios remains a key problem. In this study, the authors propose a bottom‐up feature pyramid network that combines multi‐scale feature representation with multi‐aspect‐ratio anchor generation. First, the multi‐scale feature representation is formed by a set of fully convolutional layers concatenated after the backbone network. Second, to link the multi‐scale features, a deconvolutional layer is inserted after each multi‐scale feature map. Third, to handle objects with different aspect ratios, anchors of six shapes are generated on each multi‐scale feature map. The proposed method is evaluated on the PASCAL visual object detection dataset and reaches an accuracy of 80.5%.
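The deconvolutional linking step relies on a strided (transposed) convolution restoring a coarser pyramid level to the spatial size of the next‐finer one so the two can be fused. A minimal sketch of that size arithmetic, with kernel sizes, strides, and the 64×64 starting resolution chosen purely for illustration (the paper's actual layer configuration is not given in this abstract):

```python
# Illustrative only: layer hyper-parameters below are assumptions,
# not the configuration used in the paper.

def conv_out(size, kernel=3, stride=2, pad=1):
    """Spatial size after a strided convolution."""
    return (size + 2 * pad - kernel) // stride + 1

def deconv_out(size, kernel=4, stride=2, pad=1):
    """Spatial size after a transposed (de)convolution."""
    return (size - 1) * stride - 2 * pad + kernel

# A toy bottom-up pyramid grown from a 64x64 backbone output by
# appending three extra stride-2 convolutional layers.
sizes = [64]
for _ in range(3):
    sizes.append(conv_out(sizes[-1]))
print(sizes)  # [64, 32, 16, 8]

# Each deconvolution brings a coarser map back to the finer level's
# size, so adjacent pyramid levels can be fused element-wise.
for coarse, fine in zip(sizes[1:], sizes[:-1]):
    assert deconv_out(coarse) == fine
```

With these (assumed) settings every stride‐2 convolution halves the resolution and the matching deconvolution exactly doubles it back, which is what makes the bottom‐up levels linkable.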
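The abstract states that six anchor shapes are generated per multi‐scale feature map but does not list them; a common way to obtain six shapes is two scales crossed with three aspect ratios. The sketch below assumes that scheme, and the base size, scales, and ratios are hypothetical:

```python
# Hypothetical anchor generator: six (w, h) shapes per feature-map cell,
# taken here as 2 scales x 3 aspect ratios. The actual six shapes used
# in the paper are not specified in the abstract.
from itertools import product

def make_anchors(base=32, scales=(1.0, 1.5), ratios=(0.5, 1.0, 2.0)):
    """Return (w, h) pairs with ratio = h / w, preserving area per scale."""
    anchors = []
    for s, r in product(scales, ratios):
        area = (base * s) ** 2
        w = (area / r) ** 0.5
        h = w * r
        anchors.append((w, h))
    return anchors

print(len(make_anchors()))  # 6 shapes per cell
```

Keeping the area fixed per scale means each ratio stretches the box without changing how much of the image it covers, which is the usual convention for multi‐aspect‐ratio anchors.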
