Open Access
Automated Scoring of Speaking Tasks in the Test of English-for-Teaching (TEFT™)
Author(s) - Zechner Klaus, Chen Lei, Davis Larry, Evanini Keelan, Lee Chong Min, Leong Chee Wee, Wang Xinhao, Yoon SuYoun
Publication year - 2015
Publication title - ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/ets2.12080
Subject(s) - computer science, natural language processing, artificial intelligence, active listening, reading, modalities, linguistics, communication, psychology
This research report summarizes the research and development effort devoted to creating scoring models for automatically scoring spoken item responses from a pilot administration of the Test of English-for-Teaching (TEFT™) within the ELTeach™ framework. The test contains items for all four language modalities: reading, listening, writing, and speaking. This report addresses only the speaking items, which elicit responses ranging from highly predictable to semipredictable speech from nonnative English teachers or teacher candidates. We describe the components of the automated scoring system: an automatic speech recognition (ASR) system, a set of filtering models that flag nonscorable responses, linguistic measures relating to the various construct subdimensions, and multiple linear regression scoring models for each item type. The system simulates a hybrid scoring workflow in which responses flagged as potentially nonscorable by any component of the filtering model are routed to a human rater, while all other responses are scored automatically.
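The sketch below illustrates the hybrid routing and scoring flow described in the abstract. It is a minimal illustration only: the filtering thresholds, item types, feature names (fluency, pronunciation, content), and regression weights are hypothetical placeholders, not values from the report.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Response:
    audio_seconds: float      # duration of the recorded response
    asr_confidence: float     # mean word confidence from the ASR system
    features: dict            # linguistic measures for construct subdimensions

def is_nonscorable(resp: Response) -> bool:
    """Toy filtering model: flag empty/too-short audio or very low ASR confidence."""
    return resp.audio_seconds < 1.0 or resp.asr_confidence < 0.30

# Illustrative per-item-type multiple linear regression models
# (intercept plus feature weights); values are made up for the example.
SCORING_MODELS = {
    "highly_predictable": {"intercept": 1.2,
                           "weights": {"fluency": 0.8, "pronunciation": 1.1}},
    "semipredictable":    {"intercept": 0.9,
                           "weights": {"fluency": 0.7, "pronunciation": 0.6, "content": 1.0}},
}

def score(resp: Response, item_type: str) -> Optional[float]:
    """Return an automated score, or None to route the response to a human rater."""
    if is_nonscorable(resp):
        return None  # hybrid workflow: flagged responses go to a human rater
    model = SCORING_MODELS[item_type]
    return model["intercept"] + sum(
        w * resp.features.get(name, 0.0) for name, w in model["weights"].items()
    )

if __name__ == "__main__":
    r = Response(audio_seconds=42.0, asr_confidence=0.85,
                 features={"fluency": 3.1, "pronunciation": 2.8, "content": 3.4})
    print(score(r, "semipredictable"))  # automated score for a scorable response
```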
