Open Access
Analysis of produce recognition system with taxonomist's knowledge using computer vision and different classifiers
Author(s) -
Chaw Jun Kit,
Mokji Musa
Publication year - 2017
Publication title -
IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2016.0381
Subject(s) - computer science , artificial intelligence , cognitive neuroscience of visual object recognition , barcode , pattern recognition (psychology) , process (computing) , machine learning , object (grammar) , bridging (networking) , operating system , computer network
Supermarkets nowadays are equipped with barcode scanners to speed up the checkout process. Nevertheless, most agricultural products cannot be pre-packaged and must therefore be weighed. A produce recognition system based on computer vision could help supermarket cashiers price these weighed products. This work proposes a hybrid approach of object classification and attribute classification for the produce recognition system, involving the cooperation and integration of statistical approaches and semantic models. Attribute learning was integrated into the produce recognition system because it has emerged as a promising paradigm for bridging the semantic gap and assisting object recognition in many fields of study. This can tackle the problems that arise when little training data is available, i.e. fewer than 10 samples per class. The experiments show that the correct classification rates of the hybrid approach were 60.55, 75.37 and 86.42% with 2, 4 and 8 training examples, respectively, higher than those of the individual classifiers. The hybrid approach also achieved well-balanced specificity, sensitivity and F1 scores for each produce type.
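The abstract evaluates each produce type by its specificity, sensitivity and F1 score. A minimal sketch of how these per-class metrics are computed in a one-vs-rest fashion from true and predicted labels (the helper name and the toy produce labels are illustrative, not taken from the paper):

```python
# Hypothetical sketch: per-class sensitivity, specificity and F1,
# the metrics the paper reports for each produce type.
# The toy labels below are made-up data, not results from the paper.

def per_class_metrics(y_true, y_pred, cls):
    """One-vs-rest confusion counts and metrics for class `cls`."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p != cls)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # a.k.a. recall
    specificity = tn / (tn + fp) if tn + fp else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return sensitivity, specificity, f1

y_true = ["apple", "apple", "banana", "banana", "orange", "orange"]
y_pred = ["apple", "banana", "banana", "banana", "orange", "apple"]
print(per_class_metrics(y_true, y_pred, "banana"))  # (1.0, 0.75, ~0.8)
```

A "well-balanced" result, in these terms, means no single class trades high sensitivity for poor specificity or a depressed F1.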
