
Efficient Frame Extraction: A Novel Approach Through Frame Similarity and Surgical Tool Tracking for Video Segmentation
Author(s) -
Huu Phong Nguyen,
Shekhar Madhav Khairnar,
Sofia Garces Palacios,
Amr Al-Abbas,
Melissa E. Hogg,
Amer H. Zureikat,
Patricio M. Polanco,
Herbert Zeh,
Ganesh Sankaranarayanan
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3573264
Subject(s) - Aerospace; Bioengineering; Communication, Networking and Broadcast Technologies; Components, Circuits, Devices and Systems; Computing and Processing; Engineered Materials, Dielectrics and Plasmas; Engineering Profession; Fields, Waves and Electromagnetics; General Topics for Engineers; Geoscience; Nuclear Engineering; Photonics and Electrooptics; Power, Energy and Industry Applications; Robotics and Control Systems; Signal Processing and Analysis; Transportation
Abstract -
The interest in leveraging Artificial Intelligence (AI) to automate the analysis of surgical procedures has surged in recent years. Videos are one of the primary means of recording surgical procedures and conducting subsequent analyses, such as performance assessment. However, operative videos tend to be notably lengthy compared to videos in other fields, spanning from thirty minutes to several hours, which makes it challenging for AI models to learn from them effectively. Given the foreseeable increase in the volume of such videos, innovative techniques are needed to address this issue. In this article, we propose a novel technique called Kinematics Adaptive Frame Recognition (KAFR) that efficiently eliminates redundant frames to reduce dataset size and computation time while retaining informative frames to improve accuracy. Specifically, we compute the similarity between consecutive frames by tracking the movement of surgical tools. Our approach consists of three phases: (i) Tracking phase: a YOLOv8 model detects the tools present in the scene; (ii) Similarity phase: similarities between consecutive frames are computed by estimating the variation in the spatial positions and velocities of the tools; (iii) Classification phase: an X3D CNN is trained to classify surgical phases. We evaluate the effectiveness of our approach on datasets obtained through retrospective reviews of cases at two referral centers. The newly annotated Gastrojejunostomy (GJ) dataset covers procedures performed between 2017 and 2021, while the previously annotated Pancreaticojejunostomy (PJ) dataset spans 2011 to 2022 at the same centers. In the GJ dataset, each robotic GJ video is segmented into six distinct phases. By adaptively selecting relevant frames, we achieve a tenfold reduction in the number of frames while improving accuracy by 4.32% (from 0.749 to 0.7814) and the F1 score by 0.16%. On the PJ dataset, KAFR achieves a fivefold reduction of data with a 2.05% accuracy improvement (from 0.8801 to 0.8982) and a 2.54% increase in F1 score (from 0.8534 to 0.8751). We also compare our approach with state-of-the-art approaches to highlight its competitiveness in terms of performance and efficiency. Although we examined our approach on the GJ and PJ datasets for phase segmentation, it could also be applied to broader, more general datasets. Furthermore, KAFR can serve as a supplement to existing approaches, reducing redundant data while retaining key information, making it a valuable addition to other AI models.
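
To make the similarity phase concrete, below is a minimal Python sketch of KAFR-style adaptive frame selection. It assumes per-frame tool detections (e.g., YOLOv8 bounding boxes reduced to centroids) with the same tools matched across consecutive frames; the particular motion score combining positional and velocity change, the function names (motion_score, select_frames), and the threshold value are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of KAFR-style adaptive frame selection (illustrative only).
# Assumes each frame's tool detections are reduced to bounding-box centroids
# and that the same tools are matched across consecutive frames.
import numpy as np

def motion_score(prev_pts, curr_pts, prev_vel):
    """Combine mean tool displacement with mean change in tool velocity.

    prev_pts, curr_pts: (num_tools, 2) centroid arrays for frames t-1 and t.
    prev_vel: (num_tools, 2) displacements between frames t-2 and t-1.
    """
    disp = curr_pts - prev_pts                                    # per-tool velocity at t
    pos_change = np.linalg.norm(disp, axis=1).mean()              # spatial variation
    vel_change = np.linalg.norm(disp - prev_vel, axis=1).mean()   # velocity variation
    return pos_change + vel_change, disp

def select_frames(centroids, threshold=5.0):
    """Keep frames whose tool motion exceeds `threshold` (pixels); drop near-duplicates."""
    kept = [0]                                  # always keep the first frame
    prev_vel = np.zeros_like(centroids[0])
    for t in range(1, len(centroids)):
        score, prev_vel = motion_score(centroids[t - 1], centroids[t], prev_vel)
        if score >= threshold:                  # low score => redundant frame
            kept.append(t)
    return kept

# Tiny usage example: two synthetic tools over five frames.
frames = [np.array([[10.0, 10.0], [50.0, 50.0]])
          + t * np.array([[0.1, 0.0], [4.0, 3.0]]) for t in range(5)]
print(select_frames(frames, threshold=3.0))     # [0, 1]: motion then stabilizes
```

In this sketch, frames in which the tools barely move or keep a steady velocity score low and are dropped; under these assumptions, the threshold would be the knob that trades frame reduction (e.g., the tenfold and fivefold reductions reported above) against retained information.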