Open Access
Combining Multi-task Learning with Transfer Learning for Biomedical Named Entity Recognition
Author(s) -
Tahir Mehmood,
Alfonso Gerevini,
Alberto Lavelli,
Ivan Serina
Publication year - 2020
Publication title -
Procedia Computer Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.334
H-Index - 76
ISSN - 1877-0509
DOI - 10.1016/j.procs.2020.09.080
Subject(s) - computer science , multi-task learning , transfer of learning , sequence labeling , conditional random field , artificial intelligence , machine learning , named entity recognition , natural language processing
Multi-task learning approaches have shown significant improvements in different fields by training several related tasks simultaneously. A multi-task model learns features common to the different tasks through its shared layers. However, multi-task learning can suffer performance degradation with respect to single-task learning on some natural language processing tasks, specifically on sequence labelling problems. To tackle this limitation, we formulate a simple but effective approach that combines multi-task learning with transfer learning. We use a simple model comprising a bidirectional long short-term memory (BiLSTM) network and a conditional random field (CRF). With this simple model, we achieve a higher F1-score than our single-task and multi-task models, as well as state-of-the-art multi-task models.
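To illustrate the parameter-sharing idea behind the approach, the following is a minimal NumPy sketch, not the authors' implementation: the actual model uses a BiLSTM-CRF, while here a single shared projection stands in for the shared encoder layers, the task heads are plain softmax classifiers rather than CRFs, and all dimensions and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedEncoder:
    """Stand-in for the shared BiLSTM layers: one weight matrix reused by all tasks."""
    def __init__(self, d_in, d_hid):
        self.W = rng.normal(0.0, 0.1, (d_in, d_hid))

    def __call__(self, x):
        # x: (tokens, d_in) token embeddings -> (tokens, d_hid) shared features
        return np.tanh(x @ self.W)

class TaskHead:
    """Stand-in for a task-specific output layer (a CRF in the actual model)."""
    def __init__(self, d_hid, n_labels):
        self.W = rng.normal(0.0, 0.1, (d_hid, n_labels))

    def __call__(self, h):
        z = h @ self.W
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)  # per-token label distribution

# Multi-task setup: two NER corpora share the encoder, each has its own head
# (label-set sizes are hypothetical).
encoder = SharedEncoder(d_in=50, d_hid=32)
head_a = TaskHead(32, 5)
head_b = TaskHead(32, 9)

x = rng.normal(size=(7, 50))      # a sentence of 7 tokens, 50-dim embeddings
probs_a = head_a(encoder(x))      # predictions for task A
probs_b = head_b(encoder(x))      # predictions for task B

# Transfer-learning step: initialise a fresh single-task model from the
# multi-task encoder's weights, then fine-tune it on the target corpus.
target_encoder = SharedEncoder(d_in=50, d_hid=32)
target_encoder.W = encoder.W.copy()  # transferred parameters
```

The key point the sketch shows is that the two tasks update one set of encoder weights, and those weights then serve as the initialisation for the target task's model instead of a random start.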