Open Access
AEDCN-Net: Accurate and Efficient Deep Convolutional Neural Network Model for Medical Image Segmentation
Author(s) - Bekhzod Olimov, Seok-Joo Koh, Jeonghong Kim
Publication year - 2021
Publication title - IEEE Access
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.587
H-Index - 127
ISSN - 2169-3536
DOI - 10.1109/access.2021.3128607
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
Image segmentation has improved significantly since the emergence of deep learning (DL) methods. In particular, deep convolutional neural networks (DCNNs) have enabled DL-based segmentation models to achieve state-of-the-art performance in fields critical to human beings, such as medicine. However, existing state-of-the-art methods often rely on computationally expensive operations to achieve high accuracy, whereas lightweight networks often fail to deliver precise medical image segmentation. Therefore, this study proposes an accurate and efficient DCNN model (AEDCN-Net) based on an elaborate preprocessing step and a resource-efficient model architecture. The AEDCN-Net exploits bottleneck, atrous, and asymmetric convolution-based residual skip connections in the encoding path, which reduce the number of trainable parameters and floating-point operations (FLOPs) while learning feature representations with a larger receptive field. The decoding path employs nearest-neighbor upsampling instead of the computationally expensive transpose convolution operation, which requires a large number of trainable parameters. The proposed method attains superior performance in both computational time and accuracy compared to existing state-of-the-art methods. Benchmarking on four real-life medical image datasets shows that the AEDCN-Net converges faster than computationally expensive state-of-the-art models while using significantly fewer trainable parameters and FLOPs, resulting in a considerable speed-up during inference. Moreover, the proposed method achieves better accuracy on several evaluation metrics than existing lightweight and efficient methods.
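The following is a minimal PyTorch sketch of the two ideas highlighted in the abstract: an encoder block that combines a 1x1 bottleneck, an atrous (dilated) convolution, and an asymmetric 3x1/1x3 convolution pair inside a residual skip connection, and a decoder step that upsamples with parameter-free nearest-neighbor interpolation instead of a transpose convolution. This is not the authors' implementation; the channel widths, bottleneck reduction factor, dilation rate, and layer ordering are assumptions chosen only to illustrate the general technique.

```python
import torch
import torch.nn as nn


class EfficientEncoderBlock(nn.Module):
    """Residual block using bottleneck, atrous, and asymmetric convolutions (illustrative)."""

    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        mid = channels // 4  # bottleneck width (assumed reduction factor)
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1, bias=False),  # 1x1 bottleneck
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, padding=dilation,  # atrous 3x3 enlarges receptive field
                      dilation=dilation, bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=(3, 1), padding=(1, 0), bias=False),  # asymmetric 3x1
            nn.Conv2d(mid, mid, kernel_size=(1, 3), padding=(0, 1), bias=False),  # asymmetric 1x3
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1, bias=False),  # expand back to input width
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(x + self.body(x))  # residual skip connection


class NearestNeighborDecoderStep(nn.Module):
    """2x nearest-neighbor upsampling (no trainable parameters) plus a light refinement conv."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.refine = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.refine(self.up(x))


if __name__ == "__main__":
    x = torch.randn(1, 64, 128, 128)               # e.g., a 128x128 feature map
    enc = EfficientEncoderBlock(64)(x)             # same spatial size, larger receptive field
    dec = NearestNeighborDecoderStep(64, 32)(enc)  # doubled spatial resolution
    print(enc.shape, dec.shape)                    # (1, 64, 128, 128) (1, 32, 256, 256)
```

The intended trade-off: the bottleneck and asymmetric convolutions cut trainable parameters and FLOPs, the dilation enlarges the receptive field without extra cost, and the nearest-neighbor upsampling avoids the parameters a transpose convolution would add in the decoder.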
