Open Access
Metrics and the Scientific Literature: Deciding What to Read
Author(s) -
DiBartola S.P.,
Hinchcliff K.W.
Publication year - 2017
Publication title -
Journal of Veterinary Internal Medicine
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.356
H-Index - 103
eISSN - 1939-1676
pISSN - 0891-6640
DOI - 10.1111/jvim.14732
Subject(s) - medicine, medline, medical physics, law, political science
With the ready availability of scientific articles online, many of which are open access and require no subscription or pay-per-view fee, current information about science and medicine has become easier to access than ever before. In the past 14 years, the number of new articles appearing in PubMed has more than doubled, from 593,740 in 2003 to 1,255,875 in 2016. This explosion of information, and the revolution in how it is distributed, has made it more challenging than ever for scientists and clinicians to keep up with research activity in their areas of endeavor. Investigators must filter all of this information to identify the highest quality and most relevant articles and journals for the limited time they have to read the literature. Currently, readers can evaluate the available scientific literature using three different methods: citation metrics, usage metrics, and alternative metrics (so-called altmetrics).

One of the most time-honored quality indicators of the scientific literature is the impact factor, a citation metric first proposed by linguist Eugene Garfield in 1955 and developed in the 1960s to compare the quality of one journal with another in a given field. Thus, impact factor is a journal-level rather than an article-level metric. It is calculated as the number of citations in the literature of the current year (the census year) to papers published in a journal in the preceding 2 years (the target period), divided by the number of citable items published in the journal during those 2 years. For example, if a journal published 100 articles in 2014–2015 and 150 citations were made to those articles in 2016, the journal’s 2016 impact factor would be 1.5. The usefulness of the impact factor depends on the accuracy of the citation counts used in its calculation. The citation data used to calculate impact factor are derived from the Web of Science database, a subscription-based citation indexing service operated by Clarivate Analytics. The 2015 impact factor of the Journal of Veterinary Internal Medicine was 1.821, and the Journal ranked 19th of 138 journals in the Veterinary Sciences category of Journal Citation Reports.

For many years, impact factor has been the “gold standard” for assessing quality in the scientific literature, and it has been used in many ways, some of which likely were not intended when it was first developed and for which it is not well suited. It has been used by scientists and clinicians to decide which journals to read and where to submit their work, and by academic administrators to assess the quality of faculty members’ research, their funding potential, and their suitability for promotion and tenure.

Since the 1980s, however, the supremacy of the impact factor has been called into question for various reasons. One major concern is that impact factor is a lagging indicator: citations to published articles accrue slowly. For example, it may take a year from submission of a manuscript until its publication in a traditional print journal, and another 1–2 years before citations to the article start to appear in the literature. Such a time frame simply is not fast enough in today’s internet-driven world.
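To make the impact factor arithmetic described above concrete, the short Python sketch below divides census-year citations by citable items from the two preceding years. The function name is hypothetical; only the 150-citation, 100-article figures are taken from the worked example in the text.

```python
def impact_factor(citations_in_census_year, citable_items_in_target_period):
    """Citations received in the census year to articles published in the
    two preceding years, divided by the number of citable items published
    in those two years."""
    return citations_in_census_year / citable_items_in_target_period

# Worked example from the text: 100 articles published in 2014-2015
# drew 150 citations in 2016, giving a 2016 impact factor of 1.5.
print(impact_factor(150, 100))  # 1.5
```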
Furthermore, impact factor is not a direct measure of quality, and journals in different disciplines, or even within a given discipline, cannot necessarily be compared with one another. For example, the rate of publication typically is lower in the humanities than in the sciences, and niche journals in a given discipline typically are cited less frequently than are general journals. Less frequent publication of articles by authors in some fields and less frequent citation of niche journals can adversely affect impact factor regardless of journal quality.

Impact factors also are subject to gaming by authors, editors, and publishers. A journal that publishes large numbers of review articles may receive a higher impact factor than one that publishes only original research, because review articles tend to be heavily cited. Self-citation by authors, and encouragement by journal editors for authors to cite other papers previously published in their journals, also can affect impact factor. Citation stacking is another method of gaming that involves reciprocal citation between colluding journals in an attempt to boost the impact factors of both journals without resorting to self-citation.

Other journal-level metrics calculated in Journal Citation Reports include the immediacy index, eigenfactor score, and article influence score. The immediacy index is the average number of times an article is cited in the year it was published and reflects how quickly articles appearing in a given journal are cited in the literature. The eigenfactor score, developed by Jevin West and Carl Bergstrom, is an indicator of the importance of a given journal to the scientific community. Journals are rated according to the number of citations received, but citations are weighted such that citations from more highly ranked journals contribute more than do citations from lower ranked journals. The eigenfactor score is influenced by the size (i.e., number of articles published per year) of the journal, such that it doubles when journal size doubles. The article influence score reflects the average influence of a given journal’s articles over the first 5 years after publication. It is derived from the eigenfactor score and is a ratio of the journal’s citation influence to the size of the journal’s article contribution over a 5-year period.

Google Scholar is a free citation index operated by Google. It covers not only journals but also books, theses, and other items deemed to be academic in nature. Several journal-level metrics are provided by Google Scholar, including the H5 Index, a variation on the h-index. The h-index was proposed by Jorge Hirsch in 2005 as a means to determine the scientific productivity and impact of individual scientists, but its use has been extended to groups of scientists, as well as to individual
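Two of the metrics described above, the immediacy index and the h-index, reduce to simple calculations. The sketch below illustrates them under the standard definitions (same-year citations per article published that year, and Hirsch’s largest h such that h papers each have at least h citations); the function names and sample citation counts are illustrative only and are not drawn from the editorial.

```python
def immediacy_index(same_year_citations, articles_published):
    """Average number of times an article is cited in the year it was published."""
    return same_year_citations / articles_published

def h_index(citation_counts):
    """Hirsch's h-index: the largest h such that h papers each have
    at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Illustrative data only: 120 articles published in a year drew
# 40 same-year citations; one author's papers have the listed counts.
print(immediacy_index(40, 120))            # ~0.33
print(h_index([10, 8, 5, 4, 3, 2, 1]))     # 4
```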
