Intelligent and effective informatic deconvolution of “Big Data” and its future impact on the quantitative nature of neurodegenerative disease therapy
Author(s) - Maudsley Stuart, Devanarayan Viswanath, Martin Bronwen, Geerts Hugo
Publication year - 2018
Publication title - Alzheimer's & Dementia
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 6.713
H-Index - 118
eISSN - 1552-5279
pISSN - 1552-5260
DOI - 10.1016/j.jalz.2018.01.014
Subject(s) - big data , repurposing , computer science , drug repositioning , data science , drug development , disease , mechanism (biology) , drug discovery , artificial intelligence , drug , data mining , bioinformatics , medicine , biology , ecology , philosophy , epistemology , pathology , psychiatry
Biomedical data sets are becoming increasingly large, and a plethora of high‐dimensionality data sets (“Big Data”) are now freely accessible for neurodegenerative diseases such as Alzheimer's disease. It is thus important that new informatic analysis platforms are developed that allow the organization and interrogation of these Big Data resources to be turned into a rational and actionable basis for advanced therapeutic development. This will entail the generation of systems and tools that allow cross‐platform correlation between data sets of distinct types, for example, transcriptomic, proteomic, and metabolomic. Here, we provide a comprehensive overview of the latest strategies, including latent semantic analytics, topological data investigation, and deep learning techniques, that will drive the future development of diagnostic and therapeutic applications for Alzheimer's disease. We contend that diverse informatic “Big Data” platforms should be synergistically designed with more advanced chemical/drug and cellular/tissue‐based phenotypic analytical predictive models to assist in either de novo drug design or effective drug repurposing.
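As a purely illustrative sketch (not taken from the article), the kind of cross‐platform correlation described above can be approximated with a latent‐semantic‐style truncated SVD over stacked omics matrices measured on the same samples: features from different platforms that load strongly on the same latent component become candidate cross‐platform correlates. All data shapes, variable names, and the random placeholder data below are hypothetical assumptions for demonstration only.

```python
# Minimal sketch (assumption, not the article's method): latent semantic
# analysis via truncated SVD to relate two omics layers from the same samples.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: rows = samples, columns = features.
n_samples = 50
transcriptomic = rng.normal(size=(n_samples, 200))  # e.g., gene expression levels
proteomic = rng.normal(size=(n_samples, 80))        # e.g., protein abundances

# Stack the two data types into one sample-by-feature matrix and center it.
combined = np.hstack([transcriptomic, proteomic])
combined -= combined.mean(axis=0)

# Truncated SVD: keep the top-k latent ("semantic") components.
k = 5
U, s, Vt = np.linalg.svd(combined, full_matrices=False)
sample_scores = U[:, :k] * s[:k]   # samples embedded in the shared latent space
feature_loadings = Vt[:k, :].T     # transcripts and proteins in that same space

# Features from different platforms with large loadings on the same component
# are candidate cross-platform correlates for follow-up analysis.
top = np.argsort(np.abs(feature_loadings[:, 0]))[-10:][::-1]
print("Top-loading feature indices on component 1:", top)
```

In practice the same latent space would be inspected component by component, and the loadings cross-referenced against pathway or drug-target annotations to guide de novo design or repurposing hypotheses.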
