SPARC: Deep Learning for Stomach Segmentation with Contrast Enhanced Magnetic Resonance Imaging of the Gastrointestinal Tract
Author(s) - Wang Xiaokai, Lu Kun-Han, Choi Minkyu, Cao Jiayue, Jaffey Deborah, Powley Terry, Liu Zhongming
Publication year - 2020
Publication title - The FASEB Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.709
H-Index - 277
eISSN - 1530-6860
pISSN - 0892-6638
DOI - 10.1096/fasebj.2020.34.s1.03483
Subject(s) - segmentation , artificial intelligence , computer science , gastrointestinal tract , magnetic resonance imaging , deep learning , pipeline (software) , pattern recognition (psychology) , convolutional neural network , stomach , image processing , image segmentation , contrast (vision) , computer vision , pixel , image (mathematics) , medicine , radiology , gastroenterology , programming language
Contrast‐enhanced magnetic resonance imaging of the gastrointestinal tract (gMRI) enables non‐invasive assessment of gastric emptying and motility and provides imaging‐based evaluation of pharmacological or bioelectrical treatment of gastric disorders. For both purposes, robust image processing is required to segment the gastrointestinal tract, separate its compartments, capture its dynamics, and quantify its physiology or pathology. Because of the complex anatomy and dynamics of the gastrointestinal tract, conventional image processing methods generalize poorly across animals or species and remain semi‐automatic, time‐consuming, and reliant on manual correction. Here, we report a deep learning algorithm to segment the gastrointestinal tract from 3‐dimensional (3D) contrast‐enhanced gMRI. Specifically, we constructed a 3D U‐Net with an encoder‐decoder architecture in which convolutional layers were connected in a U shape. The encoder took the gMRI images as input and learned a representation of them; the decoder used the learned representation to generate a pixel‐wise classification that segmented the stomach and small intestines. To train and validate the U‐Net, we used gMRI images from 54 rats segmented with a semi‐automatic processing pipeline described in a prior paper. We used 1,047 3D volumes from 48 rats to train the model and 139 volumes from 6 rats to validate it. We found that the model extracted meaningful features and achieved accurate pixel‐wise classification on new image sets. The segmentation results were more accurate, consistent, and generalizable than those of previous algorithms. Quantitative evaluation showed that the segmentation was highly accurate for the stomach (DICE = 0.982 ± 0.011) and reasonable for the small intestines (DICE = 0.918 ± 0.042). Importantly, the algorithm reduced the processing time from >2 minutes to <1 second and is therefore expected to be well suited for real‐time assessment. In conclusion, our model delivers high segmentation accuracy and fast processing. It offers advantages over conventional methods and holds promise for enabling real‐time gMRI‐based assessment of gastrointestinal physiology. Future studies are warranted to use this technique to guide neuromodulation or pharmacological treatment of gastric disorders, e.g., gastroparesis.
Support or Funding Information - National Institutes of Health's SPARC (Stimulating Peripheral Activity to Relieve Conditions) program (OT2OD023847)
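The abstract describes the network only at a high level (an encoder-decoder 3D U-Net producing pixel-wise class labels). As an illustration of that general architecture, the sketch below implements a minimal 3D U-Net in PyTorch; the depth, channel widths, three-class output (background, stomach, small intestine), and specific layers are assumptions for illustration and not the authors' actual configuration.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3D convolutions with ReLU, as in a typical U-Net stage
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    """Minimal 3D U-Net: encoder-decoder with skip connections and voxel-wise class logits."""
    def __init__(self, in_channels=1, num_classes=3, base=16):
        super().__init__()
        self.enc1 = conv_block(in_channels, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv3d(base, num_classes, kernel_size=1)  # voxel-wise class logits

    def forward(self, x):
        # x: (N, in_channels, D, H, W), with D, H, W divisible by 4
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection from encoder level 2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection from encoder level 1
        return self.head(d1)  # (N, num_classes, D, H, W)
```

The reported DICE values quantify overlap between predicted and reference masks. For reference, the following NumPy sketch computes the Dice similarity coefficient for a single binary mask; per-structure scores (stomach, small intestines) would be obtained from the corresponding binary masks of each structure.

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity between two binary masks (1.0 = perfect overlap)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom
```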