Open Access
Automated detection of feeding strikes by larval fish using continuous high-speed digital video: a novel method to extract quantitative data from fast, sparse kinematic events
Author(s) -
Eyal Shamur,
Miri Zilka,
Tal Hassner,
Victor China,
Alex Liberzon,
Roi Holzman
Publication year - 2016
Publication title -
Journal of Experimental Biology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.367
H-Index - 185
eISSN - 1477-9145
pISSN - 0022-0949
DOI - 10.1242/jeb.133751
Subject(s) - videography, computer science, kinematics, artificial intelligence, foraging, computer vision, digital video, video recording, pattern recognition (psychology), machine learning, ecology, biology, physics, computer graphics (images), classical mechanics, advertising, business, telecommunications, transmission (telecommunications)
Using videography to extract quantitative data on animal movement and kinematics constitutes a major tool in biomechanics and behavioral ecology. Advanced recording technologies now enable acquisition of long video sequences encompassing sparse and unpredictable events. Although such events may be ecologically important, analysis of sparse data can be extremely time-consuming and potentially biased; data quality is often strongly dependent on the training level of the observer and subject to contamination by observer-dependent biases. These constraints often limit our ability to study animal performance and fitness. Using long videos of foraging fish larvae, we provide a framework for the automated detection of prey acquisition strikes, a behavior that is infrequent yet critical for larval survival. We compared the performance of four video descriptors and their combinations against manually identified feeding events. For our data, the best single descriptor provided a classification accuracy of 77-95% and detection accuracy of 88-98%, depending on fish species and size. Using a combination of descriptors improved the accuracy of classification by ∼2%, but did not improve detection accuracy. Our results indicate that the effort required by an expert to manually label videos can be greatly reduced to examining only the potential feeding detections in order to filter false detections. Thus, using automated descriptors reduces the amount of manual work needed to identify events of interest from weeks to hours, enabling the assembly of an unbiased large dataset of ecologically relevant behaviors.
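The abstract describes the core evaluation loop: compute a video descriptor over a long recording, threshold it to flag candidate strike events, and score the detections against manually labeled events. The paper's actual descriptors and classifiers are not reproduced here; the following is a minimal sketch of that workflow using a crude frame-difference motion-energy descriptor on synthetic data. All names (`motion_energy`, `detect_events`), the threshold value, and the synthetic "video" are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def motion_energy(frames):
    # Mean absolute frame-to-frame difference: a crude stand-in
    # for the richer video descriptors compared in the paper.
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.mean(axis=(1, 2))  # one score per frame transition

def detect_events(scores, threshold):
    # Flag frame transitions whose motion score exceeds the threshold.
    return scores > threshold

# Synthetic "video": 100 frames of 32x32 noise with one brief event.
rng = np.random.default_rng(0)
frames = rng.normal(0.0, 0.05, size=(100, 32, 32))
frames[40:45] += 1.0  # simulated strike: sudden intensity change

scores = motion_energy(frames)
detections = detect_events(scores, threshold=0.5)

# Hypothetical manual annotation of the 99 frame transitions:
# the descriptor spikes entering (39->40) and leaving (44->45) the event.
labels = np.zeros(99, dtype=bool)
labels[[39, 44]] = True

# Frame-wise agreement with the manual labels, analogous to the
# classification accuracy reported in the abstract.
accuracy = (detections == labels).mean()
```

In practice, as the abstract notes, such a detector is tuned to over-detect slightly, so an expert only needs to review the flagged candidates and discard false positives rather than scan the entire recording.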
