
A Proof‐of‐Concept Study on Scoring Oral Presentation Videos in Higher Education
Author(s) -
Gary Feng,
Jilliam Joe,
Christopher Kitchen,
Liyang Mao,
Katrina Crotts Roohr,
Lei Chen
Publication year - 2019
Publication title -
ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/ets2.12256
Subject(s) - inter-rater reliability, computer science, psychology, natural language processing, speech recognition, rating scale, developmental psychology
This proof‐of‐concept study examined the feasibility of a new scoring procedure designed to reduce the time needed to score a video‐based public speaking assessment task. Instead of scoring each video in its entirety, the performance was evaluated on content‐related (e.g., speech organization, word choice) and delivery‐related (e.g., vocal expression, nonverbal behaviors) dimensions. Content‐related dimensions were scored from speech transcripts, while delivery dimensions were scored using a video thin‐slicing technique, in which scores were assigned from brief vignettes of a video rather than the complete performance. Initial feasibility data were collected from four novice raters scoring 10 video performances. Results indicated that, for delivery dimensions, four 10‐second thin slices yielded interrater consistency reliability comparable to that of full‐video scoring, and additional slices produced only small improvements. For transcript‐based scoring, raters were consistent with one another, but their scores correlated weakly with criterion scores, likely because of the difference in modality (video vs. text). Video thin‐slicing appears to be a promising scoring technique for the relevant constructs. Further testing of a combination of audio and transcript is recommended for scoring content‐related constructs.
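
The report does not include its sampling or reliability code; the following is a minimal illustrative sketch in Python, assuming random non-overlapping placement of the 10-second slices and Cronbach's alpha (with raters treated as items) as the interrater consistency index. The function names (sample_thin_slices, cronbach_alpha) and the example scores are hypothetical, not taken from the study.

```python
import random


def sample_thin_slices(duration_s, n_slices=4, slice_len_s=10, seed=0):
    """Pick non-overlapping (start, end) times for thin slices within a video."""
    rng = random.Random(seed)
    starts = []
    while len(starts) < n_slices:
        candidate = rng.uniform(0, duration_s - slice_len_s)
        # Reject candidates that would overlap an already chosen slice.
        if all(abs(candidate - s) >= slice_len_s for s in starts):
            starts.append(candidate)
    return sorted((round(s, 1), round(s + slice_len_s, 1)) for s in starts)


def cronbach_alpha(ratings):
    """Interrater consistency; ratings[r][v] is rater r's score for video v."""
    k = len(ratings)                                   # number of raters
    n = len(ratings[0])                                # number of videos
    totals = [sum(r[v] for r in ratings) for v in range(n)]

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    rater_var_sum = sum(var(r) for r in ratings)
    return (k / (k - 1)) * (1 - rater_var_sum / var(totals))


# Example: four 10-second slices from a 5-minute video, and the consistency of
# four hypothetical raters scoring ten videos on a single delivery dimension.
slices = sample_thin_slices(duration_s=300)
scores = [
    [3, 4, 2, 5, 3, 4, 2, 3, 4, 5],
    [3, 3, 2, 4, 3, 4, 3, 3, 4, 4],
    [4, 4, 3, 5, 2, 4, 2, 3, 5, 5],
    [3, 4, 2, 4, 3, 3, 2, 4, 4, 5],
]
print(slices)
print(round(cronbach_alpha(scores), 2))
```

In this sketch, comparing the alpha obtained from thin-slice scores against the alpha from full-video scores would mirror the study's feasibility question of whether brief vignettes support scoring consistency similar to complete performances.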