Open Access
Revisiting Embedding Features for Simple Semi-supervised Learning
Author(s) -
Jiang Guo,
Wanxiang Che,
Haifeng Wang,
Ting Liu
Publication year - 2014
Language(s) - English
Resource type - Conference proceedings
DOI - 10.3115/v1/d14-1012
Subject(s) - embedding, word embedding, computer science, word (group theory), artificial intelligence, simple (philosophy), task (project management), machine learning, supervised learning, pattern recognition (psychology), natural language processing, mathematics, artificial neural network, geometry, management, economics, philosophy, epistemology
Recent work has shown success in using continuous word embeddings learned from unlabeled data as features to improve supervised NLP systems, which is regarded as a simple semi-supervised learning mechanism. However, fundamental questions about how to effectively incorporate word embedding features within the framework of linear models remain. In this study, we investigate and analyze three different approaches, including a newly proposed distributional prototype approach, for utilizing the embedding features. The presented approaches can be integrated into most of the classical linear models in NLP. Experiments on the task of named entity recognition show that each of the proposed approaches can better utilize the word embedding features, among which the distributional prototype approach performs best. Moreover, the combination of the approaches provides additive improvements, outperforming the dense and continuous embedding features by nearly 2 points of F1 score.
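The abstract does not spell out how a distributional prototype approach turns embeddings into features for a linear model. A minimal sketch of the general idea, assuming the common recipe of firing a binary feature for each prototype word whose embedding is sufficiently similar to the current word's embedding (the function and threshold below are illustrative, not the paper's exact formulation):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def prototype_features(word, embeddings, prototypes, threshold=0.5):
    """Return binary (string-valued) features for a linear model:
    one feature fires per prototype word whose embedding is close
    enough to the target word's embedding."""
    w = embeddings[word]
    feats = []
    for proto in prototypes:
        if proto in embeddings and cosine(w, embeddings[proto]) >= threshold:
            feats.append(f"PROTO={proto}")
    return feats

# Toy 3-d embeddings for illustration only.
emb = {
    "london": np.array([0.9, 0.1, 0.0]),
    "paris":  np.array([0.85, 0.15, 0.05]),
    "run":    np.array([0.0, 0.2, 0.9]),
}
protos = ["london"]  # hypothetical prototype word for a LOCATION-like label

print(prototype_features("paris", emb, protos))  # similar word: feature fires
print(prototype_features("run", emb, protos))    # dissimilar word: no feature
```

Because the resulting features are discrete indicator strings rather than dense real values, they plug directly into classical feature-based linear models such as a CRF or structured perceptron for NER.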
