Object‐Oriented Segmentation of Cell Nuclei in Fluorescence Microscopy Images
Author(s) -
Can Fahrettin Koyuncu,
Rengul Cetin-Atalay,
Cigdem Gunduz-Demir
Publication year - 2018
Publication title -
Cytometry Part A
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.316
H-Index - 90
eISSN - 1552-4930
pISSN - 1552-4922
DOI - 10.1002/cyto.a.23594
Subject(s) - pixel , segmentation , artificial intelligence , boundary , computer vision , object , computer science , morphological gradient , pattern recognition , nucleus , image segmentation , microscopy , scale space segmentation , optics , microbiology and biotechnology , machine learning
Cell nucleus segmentation remains an open and challenging problem, especially for nuclei in cell clumps. Splitting a cell clump would be straightforward if the gradients of boundary pixels in‐between the nuclei were always higher than those of the other pixels. However, imperfections may exist: intensity inhomogeneities within a nucleus may lead to spurious boundaries being defined, whereas insufficient intensity differences at the border of overlapping nuclei may cause some true boundary pixels to be missed. On the other hand, these imperfections are typically observed at the pixel level, causing local changes in pixel values without changing the semantics on a large scale. In response to these issues, this article introduces a new nucleus segmentation method that relies on using gradient information not at the pixel level but at the object level. To this end, it proposes to decompose an image into smaller homogeneous subregions, define edge‐objects at four different orientations to encode the gradient information at the object level, and devise a merging algorithm, in which the edge‐objects vote for subregion pairs along their orientations and the pairs are iteratively merged if they get sufficient votes from multiple orientations. Our experiments on fluorescence microscopy images reveal that this high‐level representation and the design of a merging algorithm using edge‐objects (gradients at the object level) improve the segmentation results.
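The voting-and-merging step described in the abstract can be illustrated with a short sketch. The code below is a simplified, hypothetical rendering of that idea, not the authors' implementation: the subregion decomposition and edge-object extraction are assumed to have been computed elsewhere, and all names (vote_and_merge, min_orientations, the four orientation offsets) are placeholders chosen for illustration.

```python
import numpy as np
from collections import defaultdict


def _find(parent, x):
    # Path-compressing find for a small union-find over subregion labels.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x


def vote_and_merge(subregion_labels, edge_objects, min_orientations=2):
    """Merge adjacent subregion pairs that collect votes from edge-objects
    in at least `min_orientations` distinct orientations (hypothetical
    simplification of the voting rule sketched in the abstract).

    subregion_labels : 2D int array, one subregion id per pixel.
    edge_objects     : iterable of (orientation, coords) tuples, where
                       orientation is 0..3 (four gradient directions) and
                       coords is an (N, 2) array of (row, col) positions.
    """
    labels = subregion_labels.copy()
    offsets = {0: (0, 1), 1: (1, 0), 2: (1, 1), 3: (1, -1)}
    h, w = labels.shape
    changed = True
    while changed:
        changed = False
        # Each edge-object casts a vote, along its own orientation, for the
        # pair of subregions it lies between.
        votes = defaultdict(set)  # (label_a, label_b) -> voting orientations
        for orientation, coords in edge_objects:
            dy, dx = offsets[orientation]
            for y, x in coords:
                y2, x2 = y + dy, x + dx
                if 0 <= y2 < h and 0 <= x2 < w and labels[y, x] != labels[y2, x2]:
                    a, b = sorted((labels[y, x], labels[y2, x2]))
                    votes[(a, b)].add(orientation)
        # Merge pairs supported by enough distinct orientations; union-find
        # keeps transitive merges consistent within one pass.
        parent = {l: l for l in np.unique(labels)}
        for (a, b), orients in votes.items():
            if len(orients) >= min_orientations:
                ra, rb = _find(parent, a), _find(parent, b)
                if ra != rb:
                    parent[rb] = ra
                    changed = True
        labels = np.vectorize(lambda l: _find(parent, l))(labels)
    return labels
```

The iteration repeats until no pair gathers enough multi-orientation support, mirroring the iterative merging described in the abstract; how subregions and edge-objects are actually constructed follows the paper itself and is outside this sketch.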