Towards greater precision in latent construct measurement: What's the Rasch?
Author(s) - Teo, Timothy
Publication year - 2011
Publication title - British Journal of Educational Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.79
H-Index - 95
eISSN - 1467-8535
pISSN - 0007-1013
DOI - 10.1111/j.1467-8535.2011.01213.x
Subject(s) - rasch model , construct , citation , library science , mathematics education , computer science , sociology , psychology , mathematics , statistics
Measuring attitudes and perceptions is an integral part of empirical studies in educational technology research. Examples include measuring students' attitudes towards computers (Teo & Lee, 2008), adolescents' perceptions of educational robots (Liu, 2010), and attitude as a construct in a theoretical model (Teo, 2009). In virtually all studies that measure attitude or perception in the field of educational technology, self‐report instruments are used. Researchers often make two mistakes when using self‐reports: (1) assuming that items are always on an interval scale (ie, that response categories are equidistant) and (2) assuming that response options are always on an equivalent scale (ie, that participants' responses indicate similar levels). For example, on a 5‐point Likert scale, researchers often treat the difference between responses of "1" and "2" as equivalent to the difference between "4" and "5." Similarly, they assume that an endorsement of "5" means the same thing for all respondents. When asked to respond to the statement "I like to use the computer," however, respondents may be indicating different levels of liking despite all choosing the highest category of "5."
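The abstract does not reproduce the model itself, but the point about interval scales rests on the logit transformation that is central to Rasch measurement. The short Python sketch below is illustrative only (the logit function and the example proportions 0.5/0.6 and 0.8/0.9 are assumptions, not values from the paper): it shows that equal differences in raw, ordinal scores need not correspond to equal distances on the latent interval (logit) scale.

```python
import math

def logit(p):
    """Map a proportion (e.g. share of the maximum raw score) to a logit."""
    return math.log(p / (1 - p))

# Hypothetical illustration: two equal raw-score gaps of 0.1 span
# different distances on the logit (interval) scale, which is why
# treating ordinal Likert scores as interval data can mislead.
for lo, hi in [(0.5, 0.6), (0.8, 0.9)]:
    print(f"{lo:.1f} -> {hi:.1f}: {logit(hi) - logit(lo):.3f} logits")
```

Running this prints roughly 0.405 logits for the 0.5 to 0.6 gap but 0.811 logits for the 0.8 to 0.9 gap, even though both raw differences are 0.1.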
