How to Improve your Impact Factor: Questioning the Quantification of Academic Quality
Author(s) - Paul Smeyers, Nicholas C. Burbules
Publication year - 2011
Publication title - Journal of Philosophy of Education
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.501
H-Index - 41
eISSN - 1467-9752
pISSN - 0309-8249
DOI - 10.1111/j.1467-9752.2011.00787.x
Subject(s) - impact factor , scholarship , quality (philosophy) , unintended consequences , publishing , bibliometrics , higher education , sociology , epistemology , philosophy
A broad‐scale quantification of scholarly quality is under way. This trend has fundamental implications for the future of academic publishing and employment. In this essay we raise questions about these burgeoning practices, particularly how they affect philosophy of education and similar sub‐disciplines. First, we detail how an ‘impact factor’ is calculated and scrutinise the various meanings that can be attached to it. Second, we examine how impact factors are used to make various ‘high stakes’ academic decisions, such as hiring and promotion, the funding of research projects and how much money is allocated to a particular field. By focusing on a particular practice, we outline problems with the application of the metric more generally. Finally, we offer some general observations about the unintended consequences and other problems arising from the widespread use of this metric, including attempts to ‘game the system’. We argue that the use of impact factors increasingly shapes the topics and issues scholars write on, their choice of methodology, and their choice of publication venues for their work. Technical measures and mechanisms tend to ‘colonise’ the qualitative and professional judgments that must also be part of the process of evaluation, and for which bibliometrics alone cannot offer a substitute.
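For reference, the two-year journal impact factor that the essay interrogates is standardly defined as follows (this is the general Journal Citation Reports definition, given here as background; it is not quoted from the article itself):

\[
\mathrm{IF}_{Y} \;=\; \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}}
\]

where \(C_{Y}(y)\) is the number of citations received in year \(Y\) to items the journal published in year \(y\), and \(N_{y}\) is the number of citable items the journal published in year \(y\). As a purely hypothetical example, a journal with 80 citable items across 2009 and 2010 that receives 120 citations to those items in 2011 would have a 2011 impact factor of \(120/80 = 1.5\).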