Automatic Assessment Method of Oral English Based on Multimodality
Author(s) - Chen Xiao-yan
Publication year - 2022
Publication title - Scientific Programming
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.269
H-Index - 36
eISSN - 1875-919X
pISSN - 1058-9244
DOI - 10.1155/2022/4774677
Subject(s) - computer science, fluency, artificial intelligence, the internet, natural language processing, multimodality, coding (social sciences), word2vec, multimedia, speech recognition, world wide web, linguistics, philosophy, statistics, mathematics, embedding
With the rapid development of Internet technology and educational informatization, an ever-growing amount of spoken-language material is available online. How to adapt to learners' changing abilities and provide them with personalized learning materials has therefore become an important problem in educational technology. To address the inefficiency of existing automatic assessment of spoken English, a multimodal automatic assessment method is proposed. A Word2Vec model is used to extract text features; the speech and text representations are then fed into GRU temporal networks, and an encoder fuses the two modalities to realize multimodal automatic assessment of spoken English. Simulation results show that the proposed multimodal model outperforms traditional automatic assessment models of spoken English in fluency, emotional expression, and sense of rhythm, and can better help learners improve their spoken English.
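The pipeline sketched in the abstract (per-modality temporal encoding followed by fusion and scoring) can be illustrated with a minimal NumPy mock-up. Everything below is an assumption for illustration: the feature dimensions, the random stand-in weights and inputs, the concatenation-based fusion, and the three-way score head (fluency, emotional expression, rhythm) are hypothetical, not the paper's actual architecture or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: update gate z, reset gate r, candidate state."""
    def __init__(self, input_dim, hidden_dim):
        s = 1.0 / np.sqrt(hidden_dim)
        self.Wz = rng.uniform(-s, s, (hidden_dim, input_dim))
        self.Uz = rng.uniform(-s, s, (hidden_dim, hidden_dim))
        self.Wr = rng.uniform(-s, s, (hidden_dim, input_dim))
        self.Ur = rng.uniform(-s, s, (hidden_dim, hidden_dim))
        self.Wh = rng.uniform(-s, s, (hidden_dim, input_dim))
        self.Uh = rng.uniform(-s, s, (hidden_dim, hidden_dim))

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)          # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h)          # reset gate
        h_cand = np.tanh(self.Wh @ x + self.Uh @ (r * h))
        return (1 - z) * h + z * h_cand

def encode(cell, sequence, hidden_dim):
    """Run the GRU over a sequence and keep the final hidden state."""
    h = np.zeros(hidden_dim)
    for x in sequence:
        h = cell.step(x, h)
    return h

# Toy inputs: 20 frames of 13-dim acoustic features (e.g. MFCC-like)
# and 8 tokens of 50-dim Word2Vec-style embeddings; values are random
# stand-ins, not real speech or text features.
audio = rng.normal(size=(20, 13))
text = rng.normal(size=(8, 50))

H = 16
audio_enc, text_enc = GRUCell(13, H), GRUCell(50, H)

# Fusion by concatenating the two final hidden states (an assumed
# simplification of the paper's encoder-based fusion).
fused = np.concatenate([encode(audio_enc, audio, H),
                        encode(text_enc, text, H)])

# Linear head mapping the fused vector to three assessment scores
# (fluency, emotional expression, rhythm), squashed into (0, 1).
W_out = rng.uniform(-0.1, 0.1, (3, 2 * H))
scores = sigmoid(W_out @ fused)
print(scores.shape)  # (3,)
```

In a real system the random weights would be trained end-to-end and the fusion would be a learned encoder rather than plain concatenation; the sketch only shows how the two modality streams meet before scoring.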