Open Access
The Effect of Evaluation on Teacher Performance
Author(s) - Eric Taylor, John H. Tyler
Publication year - 2012
Publication title - American Economic Review
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 16.936
H-Index - 297
eISSN - 1944-7981
pISSN - 0002-8282
DOI - 10.1257/aer.102.7.3628
Abstract - [...] observable teacher characteristics like graduate education and experience (beyond the first few years) are not typically correlated with increased productivity. Many researchers and policymakers have suggested that, under these conditions, the only way to adjust the teacher distribution for the better is to gather information on individual productivity through evaluation and then dismiss low performers. This paper offers evidence that evaluation can shift the teacher effectiveness distribution through a different mechanism: by improving teacher skill, effort, or both in ways that persist in the long run. We study a sample of mid-career math teachers in the Cincinnati Public Schools (CPS) who were assigned to evaluation in a manner that permits a quasi-experimental analysis. All teachers in our sample were evaluated by a year-long classroom observation–based program, the treatment, between 2003–2004 and 2009–2010; the timing of each teacher's specific evaluation year was determined years earlier by a district planning process. To this setting we add measures of student achievement, which were not part of the evaluation, and use the within-teacher over-time variation to compare teacher performance before, during, and after their evaluation year. We find that teachers are more productive during the school year when they are being evaluated, but even more productive in the years after evaluation. A student taught by a teacher after that teacher has been through the Cincinnati evaluation will score about 10 percent of a standard deviation higher in math than a similar student taught by the same teacher before the teacher was evaluated. Under our identification strategy, these estimates may be biased by patterns of student assignment that favor previously evaluated teachers, or by preexisting positive [...]
