Open Access
Consolidation of Subtasks for Target Task in Pipelined NLP Model
Author(s) - Son JeongWoo, Yoon Heegeun, Park SeongBae, Cho Keeseong, Ryu Won
Publication year - 2014
Publication title - ETRI Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.295
H-Index - 46
eISSN - 2233-7326
pISSN - 1225-6463
DOI - 10.4218/etrij.14.2214.0035
Subject(s) - computer science, task (project management), artificial intelligence, chunking (psychology), backpropagation, consolidation (business), natural language processing, language model, speech recognition, machine learning, artificial neural network, engineering, accounting, systems engineering, business
Most natural language processing tasks depend on the outputs of other tasks and therefore involve those tasks as subtasks. The main problem with this kind of pipelined model is that subtasks trained on their own data are not guaranteed to be optimal for the final target task, since they are never optimized with respect to it. As a solution to this problem, this paper proposes the consolidation of subtasks for a target task (CST²). In CST², all parameters of a target task and its subtasks are optimized to fulfill the objective of the target task, and the optimized parameters are found through a backpropagation algorithm. In experiments in which text chunking is the target task and part-of-speech tagging is its subtask, CST² outperforms a traditional pipelined text chunker. The experimental results demonstrate the effectiveness of optimizing subtasks with respect to the target task.
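
To make the idea concrete, the following is a minimal sketch of end-to-end optimization of a pipelined model in PyTorch. It is not the paper's actual CST² architecture: the GRU encoders, soft POS distributions passed downstream, and all dimensions here are illustrative assumptions. What it does show is the core mechanism the abstract describes: a single optimizer over the parameters of both the subtask (POS tagging) and the target task (chunking), with the subtask updated by the gradient of the target-task loss rather than by its own objective.

# Sketch only: stand-in architecture, not the paper's CST^2 model.
import torch
import torch.nn as nn

VOCAB, N_POS, N_CHUNK, EMB, HID = 1000, 12, 5, 32, 64  # toy sizes (assumed)

class PosSubtask(nn.Module):
    """Subtask: produces a soft POS distribution per token."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, N_POS)

    def forward(self, tokens):
        h, _ = self.rnn(self.emb(tokens))
        return torch.softmax(self.out(h), dim=-1)  # differentiable POS tags

class ChunkTarget(nn.Module):
    """Target task: consumes token embeddings plus the subtask's output."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB + N_POS, HID, batch_first=True)
        self.out = nn.Linear(HID, N_CHUNK)

    def forward(self, tokens, pos_probs):
        x = torch.cat([self.emb(tokens), pos_probs], dim=-1)
        h, _ = self.rnn(x)
        return self.out(h)  # chunk-label logits

pos, chunker = PosSubtask(), ChunkTarget()
# One optimizer over *all* parameters: the POS subtask is updated by the
# gradient of the chunking loss, not by a separate POS-tagging objective.
opt = torch.optim.Adam(list(pos.parameters()) + list(chunker.parameters()))
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, VOCAB, (8, 20))        # toy batch of token ids
chunk_gold = torch.randint(0, N_CHUNK, (8, 20))  # toy gold chunk labels

logits = chunker(tokens, pos(tokens))            # full pipeline, end to end
loss = loss_fn(logits.reshape(-1, N_CHUNK), chunk_gold.reshape(-1))
loss.backward()                                  # backprop through both tasks
opt.step()

Contrast this with the traditional pipeline the paper uses as a baseline: there, the POS tagger would be trained separately on POS-labeled data and then frozen, so its parameters could not adapt to what the chunker actually needs.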
