Breaking Down SJR Scores: A Guide to Understanding Academic Journal Performance


What is SJR?
The SJR (Scimago Journal Rank) is a metric that measures the prestige and impact of scientific journals. It is based on the concept of prestige transfer via citation links. Developed by the Scimago Lab, the metric ranks journals based on the citations received by their articles and the SJR scores of the citing journals. The SJR metric considers not only the total number of citations but also the quality of the citing journals, since the subject field, quality, and reputation of a citing journal directly affect the weight its citations carry.
A higher SJR score indicates that a journal has received more citations from other prestigious journals, signifying a higher level of influence and impact within the scientific community. However, the Scimago Journal Rank is just one of many metrics utilised to evaluate the quality and impact of scientific journals, and it should be considered alongside other measures such as the impact factor, h-index, and expert opinion when assessing the significance of a journal.
Why should you utilise SJR?
The Scimago Journal Rank is a public resource, meaning no subscription is needed to access and view any journal’s rank or score. SJR covers all disciplines, taking into account the aspects of a journal relevant to its subject area. Moreover, the rankings are adjusted to account for differences in citation behaviour between disciplines. It can be argued that SJR is a well-rounded metric; here are some key benefits of utilising it:
- Evaluate journal quality
SJR provides a quantitative measure of the prestige and impact of scientific journals. The score considers both the number of citations received by a journal and the quality of the citing journal. By utilising SJR, you can easily assess the relative importance and influence of different journals within a discipline.
- Identify influential journals
SJR scores journals based on their impact and visibility within the scientific community. The score can identify the most influential journals in your area of research, allowing you to target your publications to maximise their impact and reach.
- Compare journals within a field
SJR provides a comprehensive comparison of different journals within a discipline. You can assess the standing and ranks of journals based on their SJR scores and determine which ones are more widely recognised by the scientific community.
- Benchmark research output
SJR also provides rankings at national and institutional levels. It can assist in benchmarking the research output of different countries or institutions, enabling you to assess their scientific productivity.
- Stay updated on scientific trends
By regularly consulting SJR, you can keep track of the evolving landscape of scientific journals, including emerging journals, new research areas, and trends within your field of interest.
How is SJR calculated?

The SJR (Scimago Journal Rank) is calculated using a methodology that counts the citations a journal receives and weights each one by the prestige of the citing journal. The steps involved in calculating the SJR score are:
- Collection of data: The methodology is initiated by collecting data on citations from Scopus, which is a comprehensive bibliographic database of scientific literature.
- Weighting citations: Each citation received by an article within the journal is weighted based on the importance of the citing journal. The methodology considers the SJR of the citing journal as an indicator of its prestige. Higher-ranked journals contribute more to the SJR score of the journal being evaluated.
- Normalisation: To account for differences in citation practices between fields of study, the SJR algorithm implements a normalisation process. This process adjusts variations in citation patterns and citation potential across different disciplines.
- Prestige of the citing journals: Journals that receive citations from more prestigious and influential journals are given higher weight in the calculation.
- Journal self-citations: Self-citations, which are citations made by a journal to its own articles, are limited in the SJR calculation so that they cannot dominate the score. This ensures that self-referencing does not inflate a journal's SJR score.
- Iterative calculation: The Scimago Journal Rank is calculated iteratively, taking into account the rank scores of the citing journals. This iterative process helps adjust the scores and establish a relative ranking of journals within specific subject categories.
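The iterative step above is conceptually similar to a PageRank-style calculation: prestige flows along citation links, weighted by the rank of the citing journal and normalised by how many citations each journal gives out. Below is a minimal Python sketch of that idea; the three-journal citation matrix, the damping factor and the iteration count are illustrative assumptions, not Scimago's actual data or parameters.

```python
def iterative_rank(citations, damping=0.85, iterations=50):
    """citations[i][j] = number of citations journal i gives to journal j.
    Self-citations (i == j) are skipped, echoing the SJR methodology."""
    n = len(citations)
    ranks = [1.0 / n] * n  # start with equal prestige
    for _ in range(iterations):
        new_ranks = []
        for j in range(n):
            inflow = 0.0
            for i in range(n):
                if i == j:
                    continue  # ignore self-citations
                # Normalise by journal i's total outgoing citations
                out_total = sum(c for k, c in enumerate(citations[i]) if k != i)
                if out_total:
                    inflow += ranks[i] * citations[i][j] / out_total
            new_ranks.append((1 - damping) / n + damping * inflow)
        ranks = new_ranks
    return ranks

# Hypothetical citation counts between three journals
citations = [
    [5, 10, 2],  # journal 0 cites journal 1 ten times, journal 2 twice
    [8, 3, 4],
    [1, 6, 0],
]
print(iterative_rank(citations))
```

In this toy example, journal 1 ends up with the highest rank: it receives the bulk of the citations given out by the other two journals, so their prestige flows to it.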
What are the limitations of SJR?
While the SJR (Scimago Journal Rank) metric is widely used and provides valuable insights regarding the impact of scientific journals, it is important to consider its limitations. Some of the limitations of the SJR metric are:
- Subjectivity of Journal Rankings
The rankings provided by SJR are based on algorithms that consider citation data and the prestige of citing journals. However, the determination of prestige is subjective and can vary across research communities and disciplines. The selection of journals included in the Scopus database can also introduce biases into the rankings.
- Limited Coverage
SJR relies on the Scopus database for citation data, which may not include all journals across all disciplines. Certain fields or niche journals may be underrepresented in the database, leading to an incomplete representation of the research landscape.
- Focus on Citations
SJR heavily relies on citation data as the primary focus of a journal's impact. While citations can be a significant unit of measurement, they do not capture other aspects of a journal's quality, such as editorial standards, scientific rigour, or societal impact. The metric does not assess factors like the published research's novelty, originality, or practical applicability.
- Time Lag
SJR scores are updated annually, which means there can be a time lag in reflecting the most recent developments and impact of journals. This delay may not capture the immediate influence of newly published research.
- Field Normalisation Challenges
While the Scimago Journal Rank attempts to normalise citations across different fields, variations in citation practices and publishing patterns can still introduce biases. Certain disciplines may have higher citation rates due to their nature or popularity, leading to potential imbalances in the rankings.
- Limited Transparency
The specific details of the algorithm used to calculate SJR scores, including the weighting and normalisation methods, are proprietary information and not publicly disclosed. This lack of transparency can make it difficult to fully understand and critique the metric.
What is the difference between Scimago Journal Rank and Journal Impact Factor?
The Journal Impact Factor is a measure of the frequency with which the average article in a journal has been cited in a particular year. It is used to gauge the importance or rank of a journal by counting how often its articles are cited. The calculation is based on a two-year period: the number of citations received in a given year by articles published in the previous two years is divided by the number of citable articles published in those two years.
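That two-year calculation is simple arithmetic; the sketch below uses invented numbers for a hypothetical journal.

```python
def journal_impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """JIF for year Y = citations received in Y by articles published in
    Y-1 and Y-2, divided by the citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 600 citations in 2024 to articles from 2022-2023,
# which together comprised 200 citable articles.
print(journal_impact_factor(600, 200))  # → 3.0
```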
The main difference between SJR and the Journal Impact Factor is that the Scimago Journal Rank measures prestige, while the Journal Impact Factor measures raw citation frequency. Both metrics use citations to determine a journal’s score, but they rely on different databases: SJR draws on Scopus, while the Journal Impact Factor draws on Journal Citation Reports (JCR). Additionally, SJR scores are normalised so that journals can be compared across disciplines, whereas the Journal Impact Factor is not; using that metric, you can only meaningfully compare journals within a single discipline.
In conclusion, the Scimago Journal Rank is a valuable metric for gauging the prestige of a specific journal. This article explored why, as a researcher, you should be utilising SJR and how it is calculated, while highlighting the metric's limitations. To help you gain a better understanding, the article also included a brief comparison between the Scimago Journal Rank and the Journal Impact Factor. While SJR is arguably a well-rounded metric, it should not be the only method of analysis and should be considered alongside other metrics and expert opinions when drawing final conclusions about a specific journal.

Qualitative vs. Quantitative Research: How to Use Each Appropriately and Depict Research Results
What is qualitative and quantitative research?
Before beginning their research, a researcher needs to establish whether their results will be quantitative or qualitative. Qualitative research observes subjective matter that can’t be measured with numbers or units, usually answering the questions “how” or “why”. This type of data is usually derived from exploratory sources such as journal entries, semi-structured interviews, videos, and photographs. On the other hand, quantitative research is numeric and objective, and usually answers the questions “when” or “where”. This data is derived from controlled environments like surveys, structured interviews, and traditional experimental designs. Quantitative data is meant to find objective information.

What are the main differences between qualitative and quantitative research?
The main factor differentiating qualitative and quantitative data is the sources the data is gathered from, as this affects the format of the results.

Sources of qualitative data:
- Participants’ recollection of events
- Focus groups
- Observing ethnographic studies
- Semi-structured interviews
- Questionnaires with open-ended questions

Sources of quantitative data:
- Polls, surveys and experiments
- Databases of records and information
- Analysis of other research to identify patterns
- Questionnaires with close-ended questions
- Structured interviews

When to use qualitative and quantitative research?
When conducting a study, knowing how the results will be depicted drives the methodology and overall approach. To understand whether qualitative or quantitative research results are best suited to your current project, we take a deeper dive into the advantages and disadvantages of each.
Qualitative research

Advantages:
- Allows researchers to understand “human experience” that cannot be quantified
- Has fewer limitations; out-of-the-box answers, opinions and beliefs are included in data gathering and analysis
- Researchers can utilise personal instinct and subjective experience to identify and extract information
- Easier to derive and conduct, as researchers can adapt to any changes to optimise results

Disadvantages:
- Responses can be biased, as participants may opt for answers that seem desirable
- Qualitative studies usually have small sample sizes; this impacts the reliability of the study, as it cannot be generalised to wider demographics
- Researchers and others who read the study can have interpretation bias, as the information is subjective and open to interpretation

Quantitative research

Advantages:
- Usually observes a large sample, ensuring a broad percentage is taken into consideration and reflected
- Produces precise results that can be widely interpreted
- Minimises research bias through the collection and representation of objective information
- A data-driven research method that supports measures of effectiveness, comparisons and further analysis

Disadvantages:
- Does not derive “meaningful” and in-depth responses; only precise figures are included in findings
- Quantitative studies are expensive to conduct, as they require a large sample
- When designing a quantitative study, it is important to pay extra attention to all factors within the study, as a small fault can largely impact all results

How to effectively analyse qualitative and quantitative data?
Since the data collection methods for qualitative and quantitative studies differ, so do the analysis and organisation of the gathered information. In this section, we dive into a step-by-step guide to effectively analyse both types of data and derive accurate findings and results.
Analysing qualitative data

Types of qualitative data analysis:
- Content analysis: Identifies patterns derived from text. This is done by categorising information into themes, concepts and keywords.
- Narrative analysis: Observes the manner in which people tell stories and the specific language they use to describe their narrative experience.
- Discourse analysis: Used to understand political, cultural and power dynamics. This method specifically focuses on the manner in which individuals express themselves in social contexts.
- Thematic analysis: Used to understand the meaning behind the words participants use. This can be deduced by observing repeated themes in text.
- Grounded theory: Mostly used when very little is known about a case or phenomenon. The grounded theory is an “origin” theory, and other cases and experiences are examined in comparison to it.

Steps to analyse qualitative data
Once your data has been collected, it is important to code and categorise the information so you can easily identify its source. After organising the information, you will need to correlate it logically and derive valuable insights. Once the correlations are solid, you will need to choose how to depict the information. In qualitative data, researchers usually provide transcripts from interviews and visual evidence from various sources.

Analysing quantitative data

Types of quantitative data analysis:
- Descriptive analysis: Focuses on summarising the collected data and describing its attributes. This is when mean, median, mode, frequency or distribution is calculated.
- Inferential analysis: Allows researchers to draw conclusions from the gathered statistics, analyse the relationships between variables and make predictions; this includes cross-tabulation, t-tests and factor analysis.

Steps to analyse quantitative data
Once the data has been collected, you will need to “clean” the data.
This essentially means that you’ll need to find any duplications, errors or omissions and remove them, ensuring the data is accurate and clear before analysis. You will then need to decide whether to analyse the data using descriptive or inferential analysis, depending on the gathered data set and the findings you’d like to depict. Finally, you’ll need to visualise the data using charts and graphs to easily communicate the information in your research paper.

Conduct your research on Zendy today
This blog thoroughly covered qualitative and quantitative data and took you through how to analyse, depict and utilise each type appropriately. Continue your research into different types of studies on Zendy today; search and read through millions of studies, research papers and experiments now.
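As a concrete illustration of the cleaning and descriptive-analysis steps above, here is a minimal Python sketch using only the standard library; the survey responses are invented, with `None` standing in for an omitted answer.

```python
from statistics import mean, median, mode

# Invented survey ratings; None marks an omitted answer
raw_responses = [7, 7, 5, None, 9, 7, 5, 9, 9, 7]

# Step 1: "clean" the data by removing omissions (errors and duplicate
# records would be handled the same way)
cleaned = [r for r in raw_responses if r is not None]

# Step 2: descriptive analysis — summarise the data's attributes
print("mean:", round(mean(cleaned), 2))
print("median:", median(cleaned))
print("mode:", mode(cleaned))
```

From here, the summary figures can be visualised as charts or graphs for the research paper, or fed into an inferential test if predictions are needed.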

What is a DOI? Strengths, Limitations & Components
DOI is short for Digital Object Identifier. It is a unique alphanumeric sequence assigned to digital objects and is used to identify intellectual property on the internet. DOIs are usually assigned to scholarly articles, datasets, books, videos and even pieces of software.

Understanding DOIs
The digital object identifier is a unique number made up of a prefix and a suffix, separated by a forward slash. For example: 10.1000/182. The sequence always begins with 10. The prefix is a unique number of four or more digits assigned to an organisation, and the suffix is assigned by the publisher, as it is designed to be flexible with publisher identification standards.

Where can I find a DOI?
In most scholarly articles, the DOI should be on the cover page. If the DOI isn't included in the article, you may search for it on CrossRef.org using the "Search Metadata" function.

How can I use the digital object identifier to find the article it refers to?
If the DOI starts with http:// or https://, pasting it into your web browser will take you to the article. You can turn any DOI starting with 10 into a URL by adding https://doi.org/ before the DOI. For example, 10.3352/jeehp.2013.10.3 becomes https://doi.org/10.3352/jeehp.2013.10.3
If you're off campus when you do this, you'll need to use this URL prefix in front of the DOI to gain access to UIC's full-text journal subscriptions: https://proxy.cc.uic.edu/login?url=https://doi.org/ . For example: https://proxy.cc.uic.edu/login?url=http://doi.org/10.3352/jeehp.2013.10.3

Strengths of Digital Object Identifier
- Permanent identification: The digital object identifier provides a permanent link to digital content, making sure it remains accessible even if the URL or metadata is updated.
- Citations: It uniquely identifies research papers, which facilitates accurate referencing and citing.
- Interoperability: DOIs are widely recognised and can be utilised across different platforms, databases and systems.
- Tracking and metrics: DOIs provide key information like publication date, authors, keywords and more. This can be used to track usage metrics, measure impact and improve discoverability.
- Integration with services: DOIs are integrated with various tools like reference managers, academic search engines and digital libraries. These mediums enhance the visibility and accessibility of research material with DOIs.

Limitations of Digital Object Identifier
- Cost: Digital object identifiers can be costly for smaller organisations or individual researchers. While some services offer free registration for certain content, there may be fees associated with others, particularly for maintenance and updates.
- Accessibility: There may still be barriers to access for individual researchers or organisations in regions with limited resources. Ensuring equitable access to DOI services and content remains a challenge.
- Content preservation: While DOIs provide persistent links to digital content, they do not guarantee the preservation or long-term accessibility of that content. Ensuring the preservation of digital objects linked to DOIs requires additional effort and infrastructure beyond the system itself.
- Granularity: DOIs are assigned to individual digital objects, such as articles, datasets or books. However, there may be cases where more granular identification is required, such as specific sections within a larger work or versions of a dataset. Addressing these granularity issues within the digital object identifier system can be complex.

Conduct your research on Zendy today
Now that you’ve gained a better understanding of how DOIs work and their impact on the world of research, you can begin your search and find your next academic discovery on Zendy! Our advanced search allows you to input DOI, ISSN, ISBN, publication, author, date, keyword and title. Give it a go on Zendy now.
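The prefix/suffix structure and the doi.org resolution described earlier can be sketched in a few lines of Python; the helper names here are illustrative, not part of any DOI library.

```python
def split_doi(doi):
    """Split a bare DOI into its registrant prefix and publisher suffix."""
    prefix, suffix = doi.split("/", 1)
    return prefix, suffix

def doi_to_url(doi):
    """Turn a bare DOI into a resolvable doi.org URL."""
    if doi.startswith(("http://", "https://")):
        return doi  # already a URL
    return "https://doi.org/" + doi

print(split_doi("10.1000/182"))               # → ('10.1000', '182')
print(doi_to_url("10.3352/jeehp.2013.10.3"))  # → https://doi.org/10.3352/jeehp.2013.10.3
```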

Decolonising and diversifying academia: Interview with Nahil Nasr, the Community Engagement Manager at F.O.R.M.
This January, the Forum of Open Research MENA hosted its first community development activity of 2024. The “Decolonising Open Science Symposium: Dismantling Global Hierarchies of Knowledge” addressed the influence of Western prominence on knowledge distribution and research, highlighting how these ideologies and standards impact the Arab region. Within the landscape of research, conversations and collaborations not only address inequalities but also break barriers to accessibility. In this blog, we interviewed Nahil Nasr, the community engagement manager at the Forum of Open Research MENA. At the symposium, Nahil touched on the role open science plays in building stronger foundations for diverse research consumption and the biases that exist in the research landscape. We take a deeper dive into this conversation.

How does F.O.R.M. facilitate conversations around decolonising academia?
FORM is a community-based organisation that centres its attention on the Arab region. That means prioritising Arab voices in academia to develop a regionally and culturally relevant model of Open Science to implement across the board. While we do, of course, work with organisations that are based in the Global North, we try to be transparent when it comes to power dynamics, and recognise that we are only as strong as our community.

What role does open science play in elevating research outside Western Europe?
Open Science has the potential to really build an even playing field for researchers in the Global South because of its financially and digitally accessible model. In its best form, Open Science should allow researchers from the Global South to publish their work without limitations of cost or geography.
The problem is that Open Science publishing is not always functioning in its most optimum form, and things like APCs, metric frameworks, and language hierarchies (English being a dominant language across the research landscape) can still limit researchers in the same ways that traditional academic publishing models do.

What are some biases that exist in the open science landscape?
A major bias that comes out of the Open Science landscape, especially when it comes to the Global South, is the assumption that Open Science research is bad research. There’s this assumption that if research isn’t published in perfect English, or focuses on a very niche subject that’s really only relevant to specific local contexts, then the research is either low quality or irrelevant. This is especially because of how research is prioritised by its value these days, and this is one of the many places where commodification enters the conversation as a major issue. Oftentimes, major funding is only allocated to research that is deemed important by multinational corporations or prestigious research institutions in the Global North, who in effect set the agenda of what is necessary to study and what isn’t. These topics are usually prioritised based on the needs of those entities and their contexts, and completely ignore the localised needs of researchers in the Global South, who then don’t have access to that same funding.

Please explain how absolute objectivity is a colonial ideology
This is a really interesting ideology to ponder in decolonial discourse, because it seems very out there to say that there’s no such thing as objective truth, especially in a world run by scientific innovation. The idea of objectivity may seem clear-cut, but it goes back to the idea of intellectual dominance and colonialism.
There was an ideological hierarchy set by colonial powers that placed their “truth” as the only “truth”, and took objectivity to mean that their truth is the only one with any substance or value. Many indigenous knowledge systems question this idea of absolute objectivity, because it is often rooted in inherently colonial, patriarchal, and violent understandings of nature, human experience, and society. I was first introduced to this philosophy through postcolonial gender theory, where researchers like Vandana Shiva questioned the very idea of scientific knowledge as we know it today as something that was forced on us as the only virtuous fact, but is sometimes actually the most harmful opinion.

What is the direct impact of colonisation on knowledge production today?
The impact of colonisation on knowledge production today can be found in a plethora of arenas. While colonisation as we once knew it is not nearly as prominent as it was in the 19th and 20th centuries, neo-imperial and neo-colonial ideologies still hold a strong grip on the majority of the world’s systems. You can see legacies of it in how we think about scientific studies, methodologies, or even the metrics that we use to classify ‘good’ and ‘bad’ research. It informs how we think about credibility, and determines who gets to speak the loudest and whose voice gets silenced. It marginalises researchers who use indigenous knowledge methodologies (often rooted in intuition and connection to land and spirit) and prioritises the voices of liberal scientists who believe in objective fact rooted in numbers and rationality. Overall, it prioritises knowledge produced and disseminated by Western organisations and researchers, which then has an impact on Western communities and leaves the global majority out of the conversation.

Watch the webinar here