
A Comparison of vision-based and CNN-based detector for fish monitoring in complex environment
Author(s) -
Yijun Ling,
Phooi Yee Lau
Publication year - 2021
Publication title -
ECTI Transactions on Computer and Information Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.132
H-Index - 2
ISSN - 2286-9131
DOI - 10.37936/ecti-cit.2021152.240265
Subject(s) - convolutional neural network, computer science, aquaculture, artificial intelligence, overfishing, fish (Actinopterygii), task (project management), scale invariant feature transform, computer vision, underwater, fishery, false positive paradox, pattern recognition (psychology), feature extraction, geography, engineering, biology, systems engineering, archaeology
Aquaculture farming can help mitigate the environmental impact of overfishing by meeting seafood demand with farmed fish. However, maintaining large-scale farms can be challenging, even with underwater cameras affixed to farm cages, because hours of footage must be sifted through, a laborious task when performed manually. A vision-based system could therefore be deployed to automatically extract useful information from video footage. This work proposes to address these problems by deploying: 1) the extended UTAR Aquaculture Farm Fish Monitoring System framework (UFFMS), a handcrafted method, and 2) the Faster Region-based Convolutional Neural Network (Faster R-CNN), a CNN-based method, for fish detection. Both methods extract information about fish from video footage. Experimental results show that Faster R-CNN performs better than the extended UFFMS on well-lit footage. However, the accuracy of Faster R-CNN drops drastically for poorly lit footage, averaging 28.57%, despite still achieving perfect precision scores.
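The sketch below illustrates, at a high level, what the CNN-based detection step could look like: running a Faster R-CNN detector frame by frame over cage footage and keeping detections above a confidence threshold. This is not the authors' implementation; the COCO-pretrained weights, the score threshold, and the video filename are illustrative assumptions, whereas the paper's detector would be trained or fine-tuned on annotated fish footage.

```python
# Minimal sketch (not the paper's code): frame-by-frame Faster R-CNN detection
# over aquaculture video, assuming torchvision's COCO-pretrained model as a
# stand-in for a fish-trained detector.
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(pretrained=True)  # assumption: generic pretrained weights
model.eval()

cap = cv2.VideoCapture("farm_cage_footage.mp4")   # hypothetical video file
SCORE_THRESHOLD = 0.5                             # assumed confidence cutoff

with torch.no_grad():
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV gives BGR uint8 frames; convert to an RGB float tensor in [0, 1]
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
        detections = model([tensor])[0]
        # Draw boxes for detections above the threshold
        for box, score in zip(detections["boxes"], detections["scores"]):
            if score >= SCORE_THRESHOLD:
                x1, y1, x2, y2 = box.int().tolist()
                cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.imshow("detections", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

cap.release()
cv2.destroyAllWindows()
```

In practice, per-frame detection counts or boxes like these are what a monitoring pipeline would aggregate to filter useful segments from hours of footage, which is the automation goal the abstract describes.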