Open Access
Pretraining model for biological sequence data
Author(s) -
Bosheng Song,
Zimeng Li,
Xuan Lin,
Jianmin Wang,
Tian Wang,
Xiquan Fu
Publication year - 2021
Publication title -
briefings in functional genomics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.22
H-Index - 67
eISSN - 2041-2647
pISSN - 2041-2649
DOI - 10.1093/bfgp/elab025
Subject(s) - biological data , sequence (biology) , computer science , artificial intelligence , biological database , sequence labeling , biology , machine learning , bioinformatics , genetics , management , economics , task (project management)
With the development of high-throughput sequencing technology, biological sequence data reflecting life information have become increasingly accessible. Particularly against the background of the COVID-19 pandemic, biological sequence data play an important role in detecting diseases, analyzing disease mechanisms and discovering specific drugs. In recent years, pretraining models that emerged in natural language processing have attracted widespread attention in many research fields, not only because they decrease training cost but also because they improve performance on downstream tasks. Pretraining models are used to embed biological sequences and extract features from large biological sequence corpora in order to comprehensively understand the biological sequence data. In this survey, we provide a broad review of pretraining models for biological sequence data. We first introduce biological sequences and the corresponding datasets, including brief descriptions and accessible links. Subsequently, we systematically summarize popular pretraining models for biological sequences in four categories: CNN, word2vec, LSTM and Transformer. Then, we present some applications of the proposed pretraining models on downstream tasks to explain the role of pretraining models. Next, we provide a novel pretraining scheme for protein sequences and a multitask benchmark for protein pretraining models. Finally, we discuss the challenges and future directions in pretraining models for biological sequences.
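To make the word2vec-style category concrete, the sketch below (illustrative only, not code from the survey) shows the common preprocessing step such models apply to biological sequences: splitting a DNA string into overlapping k-mer "words" so that a sequence can be treated like a sentence for embedding training. The function name `kmer_tokenize` and the parameters `k` and `stride` are hypothetical choices for this example.

```python
def kmer_tokenize(seq, k=3, stride=1):
    """Split a biological sequence into overlapping k-mer 'words'.

    A k-mer vocabulary lets word2vec-style pretraining models learn
    embeddings for short subsequences instead of single characters.
    """
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]

# Example: tokenize a short DNA fragment into 3-mers.
tokens = kmer_tokenize("ATGCGTAC", k=3)
print(tokens)  # ['ATG', 'TGC', 'GCG', 'CGT', 'GTA', 'TAC']
```

The resulting token list can be fed to any off-the-shelf word-embedding trainer; CNN-, LSTM- and Transformer-based pretraining models typically consume the same k-mer (or per-residue) tokenization as input.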
