
Combining Low-Level Image Features with Features from A Simple Convolutional Neural Network
Author(s) - Özge Öztimur Karadağ, Özlem Erdaş
Publication year - 2019
Publication title - Akıllı Sistemler ve Uygulamaları Dergisi (Journal of Intelligent Systems with Applications)
Language(s) - English
Resource type - Journals
ISSN - 2667-6893
DOI - 10.54856/jiswa.201912083
Subject(s) - computer science, artificial intelligence, convolutional neural network, deep learning, pattern recognition (psychology), classifier (UML), image processing, artificial neural network, image (mathematics), contextual image classification, feature (linguistics), machine learning, philosophy, linguistics
In traditional image processing approaches, low-level image features are first extracted and then passed to a classifier or recognizer for further processing. While traditional techniques follow this step-by-step approach, the majority of recent studies prefer layered architectures that both extract features and perform the classification or recognition task. These architectures are referred to as deep learning techniques, and they are applicable when a sufficient amount of labeled data is available and the minimum system requirements are met. Nevertheless, in many cases either the data is insufficient or the system resources are inadequate. In this study, we investigated how an effective visual representation can still be obtained by combining low-level visual features with features from a simple deep learning model. As a result, the combined features achieved an accuracy of 0.80 on the image data set, while the low-level features and the deep learning features alone achieved 0.70 and 0.74, respectively.
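As an illustration of how such a combination might be wired up, the sketch below concatenates a hand-crafted low-level descriptor (a per-channel color histogram) with the output of a small convolutional feature extractor and feeds the joint vector to a classifier. The network architecture, the histogram descriptor, and the SVM classifier are assumptions made for illustration only; the exact pipeline used in the paper may differ.

```python
# Minimal sketch: combine low-level features with features from a simple CNN.
# All layer sizes, feature choices, and the classifier are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

class SmallCNN(nn.Module):
    """A simple convolutional feature extractor (architecture is assumed)."""
    def __init__(self, feature_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global average pooling
        )
        self.fc = nn.Linear(32, feature_dim)

    def forward(self, x):
        h = self.conv(x).flatten(1)
        return self.fc(h)

def color_histogram(image, bins=8):
    """Low-level feature: per-channel color histogram (one common choice)."""
    hist = [np.histogram(image[..., c], bins=bins, range=(0.0, 1.0))[0]
            for c in range(image.shape[-1])]
    return np.concatenate(hist).astype(np.float32)

def combined_features(images, cnn):
    """Concatenate low-level and CNN features for each image."""
    cnn.eval()
    feats = []
    with torch.no_grad():
        for img in images:                    # img: HxWx3 float array in [0, 1]
            low = color_histogram(img)
            x = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).float()
            deep = cnn(x).squeeze(0).numpy()
            feats.append(np.concatenate([low, deep]))
    return np.stack(feats)

# Usage with random stand-in data (replace with the real labeled image set).
images = [np.random.rand(32, 32, 3).astype(np.float32) for _ in range(20)]
labels = np.random.randint(0, 2, size=20)
X = combined_features(images, SmallCNN())
clf = SVC().fit(X, labels)                    # any downstream classifier works here
```

The key design point reflected here is that the two feature sources are kept independent and simply concatenated before classification, so either branch can be swapped out without retraining the other.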