
Clinical Translation of the LevelCheck Decision Support Algorithm for Target Localization in Spine Surgery
Author(s) -
Amir Manbachi,
Tharindu De Silva,
Ali Uneri,
M. Jacobson,
J. Goerres,
Michael D. Ketcha,
Runze Han,
Nafi Aygün,
David A. Thompson,
Xiaobu Ye,
Sebastian Vogt,
Gerhard Kleinszig,
Camilo A. Molina,
Rajiv R. Iyer,
Tomás Garzón-Muvdi,
Michael R. Raber,
Mari L. Groves,
Jean Paul Wolinsky,
Jeffrey H. Siewerdsen
Publication year - 2018
Publication title -
Annals of Biomedical Engineering
Language(s) - English
Resource type - Journals
eISSN - 1573-9686
pISSN - 0090-6964
DOI - 10.1007/s10439-018-2099-2
Subject(s) - computer science , algorithm , artificial intelligence , medicine , physical medicine and rehabilitation , bioinformatics
Recent work has yielded a method for automatically labeling vertebrae in intraoperative radiographs as an assistant to manual level counting. The method, called LevelCheck, previously demonstrated promise in phantom and retrospective studies. This study aimed to: (1) analyze the effect of LevelCheck on the accuracy and confidence of localization in two modes: (a) Independent Check, in which labels are displayed after the surgeon's decision, and (b) Active Assistant, in which labels are presented before the surgeon's decision; and (2) assess the feasibility and utility of LevelCheck in the operating room. Two studies were conducted: a laboratory study investigating the two workflow implementations in a simulated operating environment, in which 5 surgeons reviewed 62 cases selected from a dataset of radiographs that posed a challenge to vertebral localization; and a clinical study involving 20 patients undergoing spine surgery. In Study 1, the median localization error rate without assistance was 30.4% (IQR = 5.2%), reflecting the challenging nature of the cases. LevelCheck reduced the median error rate to 2.4% in both the Independent Check and Active Assistant modes (p < 0.01), and surgeons reported that LevelCheck increased their confidence in 91% of cases. In Study 2, LevelCheck labeled the target level accurately in all 20 cases, with algorithm runtime ranging from 17 to 72 s in its current implementation. The algorithm was shown to be feasible and accurate and to improve confidence during surgery.
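The summary statistics reported above (median error rate with interquartile range) can be computed for any set of per-case localization errors. A minimal sketch using the Python standard library, with hypothetical error values for illustration only (the study's per-case data are not reproduced here):

```python
import statistics

def summarize(errors):
    """Return (median, interquartile range) of a list of error rates (%)."""
    q1, _, q3 = statistics.quantiles(errors, n=4)  # quartile cut points
    return statistics.median(errors), q3 - q1

# Hypothetical per-case localization-error rates (%), not the study's data.
unassisted = [35.0, 28.0, 31.0, 30.4, 25.2, 33.0, 29.0]
assisted = [2.4, 1.8, 3.0, 2.4, 2.0, 2.9, 2.4]

med_u, iqr_u = summarize(unassisted)
med_a, iqr_a = summarize(assisted)
print(f"Unassisted: median {med_u:.1f}%, IQR {iqr_u:.1f}%")
print(f"Assisted:   median {med_a:.1f}%, IQR {iqr_a:.1f}%")
```

A paired nonparametric test (e.g., Wilcoxon signed-rank, as is typical for such paired error data) over the per-case unassisted vs. assisted errors would yield the kind of significance result (p < 0.01) reported in the abstract.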