Linguistic analysis of extended examination answers: Differences between on‐screen and paper‐based, high‐ and low‐scoring answers
Author(s) - Melody Charman
Publication year - 2014
Publication title - British Journal of Educational Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.79
H-Index - 95
eISSN - 1467-8535
pISSN - 0007-1013
DOI - 10.1111/bjet.12100
Abstract - This small-scale pilot study aimed to establish how the mode of response in an examination affects candidates' performance on items that require an extended answer. The sample comprised 46 17-year-old students from two classes (one in a state secondary school and one in a state sixth-form college), who sat a mock A-level English Literature examination. The analysis compared writing produced on screen and on paper to try to uncover any systematic differences between the two modes of delivery. The study considered the linguistic features of the texts produced in each mode, the marks achieved and the views of the participants regarding the use of computers in essay-based examinations. The study found that the response mode had a small effect on the length of essay produced, in that students using a computer wrote more, and on the type of language used, in that students writing on paper used denser but less varied language. There was very little effect on the marks achieved. Participants expressed a variety of concerns about computer-based examinations, such as noisy keyboards, assessment of spelling, and unfairness towards those who are less comfortable with the technology.
