The short‐term stability of student ratings of instruction in medical school
Author(s) - WEST R. F.
Publication year - 1988
Publication title - Medical Education
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.776
H-Index - 138
eISSN - 1365-2923
pISSN - 0308-0110
DOI - 10.1111/j.1365-2923.1988.tb00419.x
Subject(s) - consistency , likert scale , medical education , psychology , mathematics education , course evaluation , higher education , medicine
Summary. The purpose of this study was to assess the degree of consistency in student ratings of teacher effectiveness during the first year of medical school. Student ratings of teaching effectiveness represent a commonly used source of information that enters into the academic decision‐making process. In medical school, student evaluations often represent a major source of information used in promotion and tenure decisions. It is essential that the precision of such ratings be ascertained so that decision‐makers know how much confidence to place in this source of information on teaching effectiveness. In this study, each member of a first‐year medical school class was randomly assigned a two‐digit identification number at the beginning of the spring semester, 1986. As the semester progressed, students were asked to evaluate each full‐time teacher in three major courses. Multiple instructors were utilized in each course (n = 10). Each teacher was evaluated immediately after lectures during the first (T1) and second (T2) halves of the course. Students evaluated the teacher a third time (T3) as part of the end‐of‐semester overall course evaluation. The teachers were evaluated on a short eight‐item Likert‐type scale that identified several key indicators of effective teaching. Students attached their anonymous identification numbers to individual ratings so that their responses could be matched in the analysis. The results indicate that medical students are only moderately consistent in their evaluations of teachers. This inconsistency varied by course and by instructors within courses. Student perceptions of teaching effectiveness changed over the course of the semester, although the extent and direction of change varied by course. The results highlight a major difficulty in using end‐of‐semester student evaluations as a factor in academic decision‐making. Student ratings of effectiveness may vary considerably from one time period to another and may be influenced by many different factors. Decision‐makers should exercise caution in interpreting one‐shot student evaluations of instructional effectiveness.
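The abstract does not specify the statistical procedure used, but the design it describes (matching the same students' anonymous ratings of the same instructor at T1, T2 and T3) lends itself to a simple test-retest consistency check. The sketch below is purely illustrative, not the authors' analysis: the column names, example data, and the use of a Pearson correlation are assumptions made for demonstration.

```python
# Illustrative sketch only; the paper does not publish its analysis code.
# Computes a test-retest consistency check for one hypothetical instructor:
# the Pearson correlation between matched student ratings at two time points
# (T1 vs T2), paired via the anonymous student identification number.
import numpy as np
import pandas as pd

# Hypothetical matched ratings: one row per student, values are the mean of
# the eight Likert-type items at each time point.
ratings = pd.DataFrame({
    "student_id": [17, 23, 41, 58, 62, 74, 89, 95],
    "t1_mean":    [4.1, 3.5, 4.8, 2.9, 3.7, 4.4, 3.2, 4.0],
    "t2_mean":    [3.8, 3.6, 4.5, 3.4, 3.1, 4.6, 3.0, 4.2],
})

# Consistency: correlation of the same students' ratings of the same
# instructor across the two halves of the course.
r = np.corrcoef(ratings["t1_mean"], ratings["t2_mean"])[0, 1]
print(f"T1-T2 consistency (Pearson r): {r:.2f}")

# Mean shift indicates whether perceptions drifted over the semester.
print(f"Mean change T2 - T1: {(ratings['t2_mean'] - ratings['t1_mean']).mean():+.2f}")
```

A high correlation with a near-zero mean shift would support treating an end-of-semester rating as stable; a low correlation or a large shift would illustrate the abstract's caution about one-shot evaluations.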