On the function of university rankings
Author(s) - Lutz Bornmann
Publication year - 2014
Publication title - Journal of the Association for Information Science and Technology
Language(s) - English
eISSN - 2330-1643
pISSN - 2330-1635
DOI - 10.1002/asi.23019
Dear Sir,

Modern science is an evaluating and evaluated science. The quality of research cannot be guaranteed without evaluation. According to the founder of the modern sociology of science, Robert K. Merton (1973), one of the norms of science is “organized scepticism.” From the 17th century onward, research was assessed almost exclusively through the peer review process; since the 1980s and 1990s, however, indicator-based evaluations have been carried out, or multistage methods used, for the evaluation of research and teaching (Daniel, Mittag, & Bornmann, 2007). The first international university ranking (the so-called Shanghai ranking) was published in 2003. It was followed by further large-scale, indicator-based assessments of universities, published either as a ranking (individual institutions ranked according to certain criteria) or as a rating (individual institutions assessed according to certain criteria). The importance of rankings and ratings nowadays is evident from the fact that the lack of a German university among the best 20 or 50 universities in international rankings was one of the most important reasons for creating the Excellence Initiative in Germany (Hazelkorn, 2011).

Although it is often claimed that the indicator-based assessment in rankings or ratings can be used by university managements for a meaningful analysis of the strengths and weaknesses of their institutions, rankings or ratings primarily provide (a) information on the performance of universities for students and junior scientists; (b) a comparative assessment of universities at the national and international levels; and (c) an account of the universities, which are being given more and more autonomy (Hazelkorn, 2012). This is shown by a survey of university managements reported by Hazelkorn (2011). After students and parents, politicians are considered to be the group most influenced by rankings. Studies have found a correlation between “the quality of campus facilities and the ability to attract (international) students” (Hazelkorn, 2011, p. 103). University managements assume that “high rankings can boost an institution’s ‘competitive position in relationship to government’” (Hazelkorn, 2011, p. 91). Governments want independent and objective information on where the research of a country, and that of its individual research institutions, stands overall. The transparency created by the numbers also has the desired side effect of stimulating competition among the institutions (for research-related and staff funding) (Hazelkorn, 2012), and an increase in the performance of the institutions is to be expected.

Universities and nonuniversity research institutions have hardly any need of rankings or ratings for their strategic decisions or for the internal optimization of their performance. For this purpose, multistage evaluations (usually based on informed peer review) are carried out at the institutions, organized either by the institution itself or by evaluation agencies (Bornmann, Mittag, & Daniel, 2006; Daniel et al., 2007). The Max Planck Institutes of the Max Planck Society, for example, have Scientific Advisory Boards; the TU Darmstadt and the University of Zurich even have their own evaluation offices. Lower Saxony’s universities are evaluated by the Central Evaluation and Accreditation Agency (ZEvA) and universities in northern Germany by the Association of North German Universities (Bornmann et al., 2006).
Because these evaluations are very labor and time intensive, are hardly practicable for a large number of research institutions, and can be carried out effectively only in an atmosphere of absolute discretion, they are not suitable for a large-scale comparison of research institutions. Rankings or ratings have become established for this purpose instead. They address primarily the general public, not academics or university managements. Rankings or ratings are thus important to universities not as an analysis of strengths and weaknesses that can be used internally, but as a demonstration of performance to external parties (the students and junior scientists of the future, or politicians). Although there are a number of “dos and don’ts” for designing rankings or ratings, there will probably never be one that does justice to the heterogeneity of the institutions covered and produces a valid image of the performance of all institutions.