Open Access
A weakly supervised learning approach for surgical instrument segmentation from laparoscopic video sequences
Author(s) - Zixin Yang, Richard Simon, Cristian A. Linte
Publication year - 2022
Publication title - Proceedings of SPIE
Language(s) - English
Resource type - Conference proceedings
SCImago Journal Rank - 0.192
H-Index - 176
pISSN - 0277-786X
DOI - 10.1117/12.2610778
Subject(s) - segmentation, computer science, artificial intelligence, ground truth, computer vision, image segmentation, dice, supervised learning, graph, pattern recognition (psychology), artificial neural network, mathematics, geometry, theoretical computer science
Fully supervised approaches to surgical instrument segmentation from video typically require a time-consuming process of generating accurate ground truth segmentation masks. We propose an alternative way of labeling surgical instruments for binary segmentation that begins with rough, scribble-like annotations of the instruments drawn with a disc-shaped brush. We then present a framework that first applies a graph-model-based method to generate initial segmentation labels from the user-annotated paint-brush scribbles, and then trains a deep learning model on these noisy initial labels. Experiments on the 2017 MICCAI EndoVis Robotic Instrument Segmentation Challenge dataset show that the proposed framework achieves a 76.82% IoU and an 85.70% Dice score on binary instrument segmentation. On these metrics, the proposed method outperforms other weakly supervised techniques and performs close to fully supervised networks, while eliminating the need for ground truth segmentation masks.
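The IoU and Dice scores quoted above are standard overlap metrics for binary masks. As a minimal sketch (generic NumPy code, not the authors' evaluation pipeline), they can be computed like this:

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, gt: np.ndarray):
    """Compute binary IoU and Dice between a predicted mask and ground truth.

    Both inputs are arrays of the same shape; nonzero values count as
    foreground (instrument), zero as background.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    # Convention: two empty masks are a perfect match.
    iou = intersection / union if union else 1.0
    dice = 2 * intersection / total if total else 1.0
    return iou, dice
```

Note that Dice is always at least as large as IoU (Dice = 2·IoU/(1+IoU)), which is consistent with the reported 85.70% Dice versus 76.82% IoU.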
