Open Access
PathCNN: interpretable convolutional neural networks for survival prediction and pathway analysis applied to glioblastoma
Author(s) -
Jung Hun Oh,
Wookjin Choi,
Euiseong Ko,
Mingon Kang,
Allen Tannenbaum,
Joseph O. Deasy
Publication year - 2021
Publication title -
Bioinformatics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 3.599
H-Index - 390
eISSN - 1367-4811
pISSN - 1367-4803
DOI - 10.1093/bioinformatics/btab285
Subject(s) - interpretability , computer science , convolutional neural network , glioblastoma , artificial intelligence , source code , machine learning , key (lock) , visualization , identification (biology) , software , artificial neural network , data mining , pattern recognition (psychology) , biology , botany , computer security , cancer research , programming language , operating system
Convolutional neural networks (CNNs) have achieved great success in the areas of image processing and computer vision, handling grid-structured inputs and efficiently capturing local dependencies through multiple levels of abstraction. However, a lack of interpretability remains a key barrier to the adoption of deep neural networks, particularly in predictive modeling of disease outcomes. Moreover, because biological array data are generally represented in a non-grid structured format, CNNs cannot be applied directly.
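One common way around the grid-structure barrier, loosely following the pathway-image idea suggested by the title, is to summarize each pathway's gene-level measurements into a fixed number of principal components and arrange the results as rows of a per-patient "image" (one channel per omics type). The sketch below is an illustrative assumption, not the authors' code: all array sizes, omics names, and the PCA-per-pathway layout are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multi-omics data: each omics type is a samples x genes
# matrix; each pathway is a subset of gene indices.
n_samples, n_genes, n_pathways, n_pcs = 100, 500, 30, 5
omics = {name: rng.standard_normal((n_samples, n_genes))
         for name in ("mrna", "cnv", "methylation")}
pathways = [rng.choice(n_genes, size=20, replace=False)
            for _ in range(n_pathways)]

def pca_scores(X, k):
    """Project samples onto the top-k principal components of X."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                      # (n_samples, k)

# Grid-structured "pathway image" per patient:
# rows = pathways, columns = PCs, channels = omics types.
images = np.stack(
    [np.stack([pca_scores(X[:, genes], n_pcs) for genes in pathways],
              axis=1)
     for X in omics.values()],
    axis=-1)
print(images.shape)                           # (100, 30, 5, 3)
```

The resulting 4-D array has the height x width x channels layout a standard 2-D CNN expects, which is what makes convolution (and per-pathway attribution for interpretability) applicable to otherwise non-grid omics data.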
