Open Access
Representation of Words in Natural Language Processing: A Survey
Author(s) -
Y. Losieva
Publication year - 2019
Publication title -
Vìsnik Kiïvsʹkogo nacìonalʹnogo unìversitetu ìmenì Tarasa Ševčenka. Serìâ fìziko-matematičnì nauki (Bulletin of Taras Shevchenko National University of Kyiv. Series: Physics and Mathematics)
Language(s) - English
Resource type - Journals
eISSN - 2218-2055
pISSN - 1812-5409
DOI - 10.17721/1812-5409.2019/2.10
Subject(s) - computer science, natural language processing, computational linguistics, artificial intelligence, word representation, word embeddings, artificial neural network, transformer, linguistics
The article surveys state-of-the-art vector representations of words in natural language processing. Three main types of word representation are described: static word embeddings, representations produced by deep neural networks, and dynamic (contextual) word embeddings that depend on the surrounding text. This is a highly relevant and in-demand area in natural language processing, computational linguistics, and artificial intelligence in general. Several models for the vector representation of words (word embeddings) are considered, from the simplest (such as representations that record the occurrence of words within a document, or models that learn the relationship between pairs of words) to multilayer neural networks and deep bidirectional transformers for language understanding; the models are presented chronologically, in order of their appearance. For each model, the improvements over its predecessors are described, along with its advantages and disadvantages and the cases or tasks for which it is better suited.
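The abstract names three families of representations but no concrete tooling. As a rough illustration only, the sketch below contrasts them using scikit-learn, gensim, and the Hugging Face transformers library; these library choices, the example corpus, and the model name "bert-base-uncased" are assumptions for demonstration, not taken from the article.

```python
# Illustrative sketch (not from the surveyed article) of the three families
# of word representations: count-based, static, and contextual embeddings.
from sklearn.feature_extraction.text import CountVectorizer
from gensim.models import Word2Vec
from transformers import AutoTokenizer, AutoModel
import torch

corpus = ["the bank raised interest rates",
          "she sat on the river bank"]

# 1. Count-based representation: records the occurrence of words within a
#    document (a document-term matrix).
bow = CountVectorizer().fit_transform(corpus)

# 2. Static word embedding: one fixed vector per word, learned from
#    (target, context) word pairs via skip-gram; "bank" gets the same
#    vector in both sentences.
w2v = Word2Vec([s.split() for s in corpus], vector_size=50, min_count=1)
static_bank = w2v.wv["bank"]

# 3. Dynamic (contextual) embedding from a deep bidirectional transformer
#    (BERT); "bank" gets a different vector in each sentence.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
bank_id = tok.convert_tokens_to_ids("bank")
with torch.no_grad():
    for sent in corpus:
        enc = tok(sent, return_tensors="pt")
        out = model(**enc)
        idx = enc.input_ids[0].tolist().index(bank_id)
        contextual_bank = out.last_hidden_state[0, idx]  # differs per sentence
```

The sketch mirrors the survey's progression: the count-based vector ignores word order and meaning, the static embedding captures distributional similarity but conflates word senses, and the contextual embedding disambiguates "bank" based on the whole sentence.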
