Top 4 Journals Classification Systems Every Researcher Should Know
If you’ve ever tried to figure out which journal is the best fit for your research, or wondered how journals classification works, you’ve probably come across terms like Quartiles (Q1, Q2, Q3, Q4), the h-index, the Impact Factor (IF), and Source Normalised Impact per Paper (SNIP). These metrics might sound technical, but they are simply tools to measure how much attention a journal’s research gets. Here’s a straightforward explanation of what they mean and how they work.
Quartiles in Journals Classification: Ranking by Performance
The system of dividing journals into four quartiles, Q1, Q2, Q3, and Q4, was created to make it easier to compare their quality and impact within a specific field. This idea became popular through Scopus and Journal Citation Reports (JCR) databases, which rank journals based on metrics like citations. The concept builds on the work of Eugene Garfield, who introduced the Impact Factor, offering a way to see how journals stand up against others. Quartiles break things down further: Q1 represents the top 25% of journals in a category, while Q4 includes those at the lower end. It's a straightforward way to help researchers determine which journals are most influential in their areas of study.
- Q1: Top 25% of journals in the field (highest-ranked).
- Q2: 25-50% (mid-high-ranked).
- Q3: 50-75% (mid-low-ranked).
- Q4: Bottom 25% (lowest-ranked).
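The quartile split above can be sketched in a few lines of Python. The journal names and scores below are invented for illustration; real quartiles are computed per subject category from Scopus or JCR citation metrics.

```python
# Hypothetical sketch of quartile ranking. Journal names and scores are
# invented; real rankings use field-specific citation metrics.

def assign_quartiles(scores):
    """Rank journals by metric score (descending) and split into Q1-Q4."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    n = len(ranked)
    # A journal at 0-based rank position p falls in quartile p*4//n + 1.
    return {name: f"Q{pos * 4 // n + 1}" for pos, name in enumerate(ranked)}

example = {"Journal A": 9.1, "Journal B": 4.3, "Journal C": 2.0, "Journal D": 0.7}
print(assign_quartiles(example))
# {'Journal A': 'Q1', 'Journal B': 'Q2', 'Journal C': 'Q3', 'Journal D': 'Q4'}
```

With four journals, each quartile holds exactly one; with a larger category, each quartile holds roughly a quarter of the ranked list.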

However, publishing in a Q3 or Q4 journal is not necessarily a disadvantage. While these journals may not be as well-known, they still play an important role in scientific research. Some of their benefits include:
- Affordability: These journals are easier for researchers to access, especially for those on a tight budget.
- Focused Topics: They tend to cover more specific, niche areas of study, making them great for in-depth exploration of certain subjects.
- Great for New Researchers: Q3 and Q4 journals can be a good place for new researchers to publish their first paper and gain experience in the publishing world.
- Ideal for Basic Research: They’re a great option for research that focuses on the fundamentals of a field.
Finally, publishing your article in a Q3 or Q4 journal doesn’t mean it lacks value or won’t make an impact. If your work presents new findings that address a real problem, it can still attract attention, even when published in a lower-ranked journal.
h-index: A Balance of Quantity and Quality
The h-index score is an important factor in journal classification. It looks at the number of articles a journal has published and how often those articles are cited. It balances quantity (how many articles a journal publishes) with quality (how many of its articles are referenced).
For example, if a journal has an h-index of 15, it means it has published at least 15 articles that have each been cited at least 15 times. It’s a simple way to measure a journal’s influence without focusing too much on one super-cited article or a bunch of rarely cited ones.
How h-index works:
Let’s say a journal has published 4 articles. Sort them from most to least cited:
- The 1st article has 24 citations – at least 1 citation.
- The 2nd article has 10 citations – at least 2 citations.
- The 3rd article has 5 citations – at least 3 citations.
- The 4th article falls short of 4 citations.
In this case, the journal has three articles that each have at least three citations. The fourth article doesn’t hit the mark, so the h-index stops at 3.
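The counting rule above can be written as a short function. The fourth article’s exact citation count isn’t given in the text ("falls short of 4"), so 2 is assumed here; any value below 4 yields the same result.

```python
# Sketch of the h-index counting rule. The fourth article's count (2) is
# an assumption; anything below 4 gives the same answer.

def h_index(citations):
    """Largest h such that h articles each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the article at this rank still clears the bar
        else:
            break
    return h

print(h_index([10, 24, 5, 2]))  # 3
```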

This metric can help researchers, professionals, and institutions decide if a journal publishes research that gets noticed and cited by the academic community. It’s not the full picture, but it’s a useful starting point for understanding the journal’s influence.
Impact Factor: Citation Average
The Impact Factor (IF) is a number that shows how often a journal’s articles are cited on average. It helps you understand how much attention the journal’s research gets from other scholars, and it also plays a role in journals classification.
How it works:
To calculate the IF for a given year, count how many times articles the journal published in the previous two years were cited during that year. Then divide that by the total number of articles the journal published in those two years. This gives you an average citation count per article.
Example:
Let’s say we want to figure out the IF for Journal A in 2023:
- In 2021 and 2022, Journal A published 50 articles.
- In 2023, those articles were cited 200 times in total.
- You take the total citations (200) and divide it by the total number of articles (50): 200 ÷ 50 = 4
So, Journal A has an Impact Factor of 4, meaning its articles were cited, on average, four times each. A higher Impact Factor often places journals higher in classification, but keep in mind that it’s not the full story. Some specialised journals may have lower Impact Factors even though they’re highly respected in their niche.
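The two-step calculation from the example can be written out directly; this sketch simply encodes the division described above, using the article’s own numbers.

```python
# The two-year Impact Factor from the worked example: citations in 2023
# to Journal A's 2021-2022 articles, divided by the article count.

def impact_factor(citations_to_recent_articles, articles_published):
    """Citations this year to the previous two years' articles, divided
    by the number of articles published in those two years."""
    return citations_to_recent_articles / articles_published

print(impact_factor(200, 50))  # 4.0 -> Journal A's IF is 4
```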

SNIP: Fair Comparisons Across Fields
SNIP (Source Normalised Impact per Paper) is a valuable metric in journals classification because it goes one step further. It measures contextual citation impact and takes into account the fact that different research fields have different citation habits. For instance, medical papers often get cited a lot, while mathematics papers don’t, even if they’re equally important in their fields.
SNIP adjusts the average citations a journal receives based on these differences, making it easier to compare journals across disciplines.
Example:
- Journal A publishes in a low-citation field like social sciences and averages 3 citations per article. Adjusted for its field, its SNIP might be 1.6.
- Journal B publishes in a high-citation field like biomedicine and has an average of 8 citations per article. After adjustment, its SNIP might be 1.2.
SNIP makes sure journals in fields with fewer citations still get the recognition they deserve.
What it tells you:
SNIP is especially useful for journal classification because it levels the playing field between disciplines. A higher SNIP score suggests that a journal’s articles are cited more often than expected for its field. It’s a helpful tool for comparing journals, but it’s just one of many ways to evaluate a journal’s influence or importance.
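As a toy illustration of the normalisation idea: divide a journal’s raw citations per paper by the typical citation rate ("citation potential") of its field. The real CWTS formula behind SNIP is more involved; the field rates below are invented so the outputs match the example numbers above.

```python
# Simplified sketch of the idea behind SNIP, NOT the official CWTS
# formula. Field citation potentials are invented for illustration.

def snip_like(citations_per_paper, field_citation_potential):
    """Field-normalised citations per paper, rounded to one decimal."""
    return round(citations_per_paper / field_citation_potential, 1)

print(snip_like(3, 1.875))  # social sciences journal -> 1.6
print(snip_like(8, 6.67))   # biomedical journal     -> 1.2
```

Even though Journal B attracts more raw citations, normalisation shows Journal A performs better relative to its field.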
Below is a concise summary table of the four journal classification systems, followed by key considerations:
Journal ranking system comparison
| System | Purpose | Calculation | Key Insights |
|---|---|---|---|
| Quartiles (Q1-Q4) | Ranks journals by performance within a field (e.g., biology, engineering). | Journals divided into four equal groups based on citation metrics (e.g., Impact Factor): • Q1: Top 25% • Q2: 25-50% • Q3: 50-75% • Q4: Bottom 25%. | • Q1/Q2 = high prestige. • Q3/Q4 = affordable, niche-focused, beginner-friendly. • Lower quartiles ≠ low-value research. |
| h-index | Measures journal influence by balancing article productivity and citations. | A journal has index h if it has published h articles each cited ≥ h times. Example: h-index = 15 means at least 15 articles cited ≥ 15 times each. | • Avoids over-reliance on single highly cited papers. • Useful for gauging consistent impact. |
| Impact Factor (IF) | Indicates average citation attention per article. | IF = (Citations in year Y to articles from Y-1 and Y-2) ÷ (Articles published in Y-1 and Y-2). Example: 200 citations ÷ 50 articles = IF 4. | • Higher IF = higher ranking. • Field-dependent: STEM > humanities. • Less meaningful for niche fields. |
| SNIP | Compares journals fairly across fields by normalizing citation practices. | Adjusts raw citations per paper by field’s typical citation density. Example: 3 citations in social sciences (SNIP=1.6) vs. 8 in biomedicine (SNIP=1.2). | • Levels comparison between high/low-citation fields. • SNIP >1 = above-field-average impact. |
Key Considerations for All Systems
- No single metric tells the whole story – A journal may rank highly in one system but lower in another.
- Field-specific biases – Citation rates differ by discipline (e.g., mathematics vs. medicine); raw metrics like IF reflect these differences, while SNIP adjusts for them.
- Beyond rankings – Lower-quartile/niche journals offer unique advantages (accessibility, specialization).
- Research goals matter – Choose a journal based on audience fit, not just classification.


From Curator to Digital Navigator: Evolving Roles for Modern Librarians
With the growing integration of digital technologies in academia, librarians are becoming facilitators of discovery. They play a vital role in helping students and researchers find credible information, use digital tools effectively, and develop essential research skills. At Zendy, we believe this shift represents a new chapter for librarians, one where they act as mentors, digital strategists, and AI collaborators.
Zendy’s AI-powered research assistant, ZAIA, is one example of how librarians can enhance their work using technology. Librarians can utilise ZAIA to assist users in clarifying research questions, discovering relevant papers more efficiently, and understanding complex academic concepts in simpler terms. This partnership between human expertise and AI efficiency allows librarians to focus more on supporting critical thinking rather than manual searching.
According to our latest survey, AI in Education for Students and Researchers: 2025 Trends and Statistics, over 70% of students now rely on AI for research. Librarians are adapting to this shift by integrating these technologies into their services, offering guidance on ethical AI use, research accuracy, and digital literacy.
However, this evolution also comes with challenges. Librarians must ensure users understand how to evaluate AI-generated content, check for biases, and verify sources. The focus is moving beyond access to information; it’s now about ensuring that information is used responsibly and critically.
To support this changing role, here are some tools and practices modern librarians can integrate into their workflows:
- AI-Enhanced Discovery: Using tools like ZAIA to help researchers refine queries and find relevant studies faster.
- Research Data Management: Organising, preserving, and curating datasets for long-term academic use.
- Ethical AI and Digital Literacy Training: Teaching researchers how to verify AI outputs, evaluate bias, and maintain academic integrity.
- Collaborative Digital Spaces: Facilitating research communication through online repositories and discussion platforms.
In conclusion, librarians today are more than curators; they are digital navigators shaping how knowledge is accessed, evaluated, and shared. As technology continues to evolve, so will their role in guiding researchers and students through the expanding world of digital information.

Strategic AI Skills Every Librarian Must Develop
In 2026, librarians who understand how AI works will be better equipped to support students and researchers, organise collections, and help patrons find reliable information faster. Developing a few key AI skills can make everyday tasks easier and open up new ways to serve your community.
Why AI Skills Matter for Librarians
AI tools that recommend books, manage citations, or answer basic questions are becoming more common. Learning how these tools work helps librarians:
- Offer smarter, faster search results.
- Improve cataloguing accuracy.
- Provide better guidance to researchers and students.
Remember, AI isn’t replacing professional judgment; it’s supporting it.
Core AI Literacy Foundations
Before diving into specific tools, it helps to understand some basic ideas behind AI.
- Machine Learning Basics: Machine learning means teaching a computer to recognise patterns in data. In a library setting, this could mean analysing borrowing habits to suggest new titles or resources.
- Natural Language Processing (NLP): NLP is what allows a chatbot or search tool to understand and respond to human language. It’s how virtual assistants can answer questions like “What are some journals about public health policy?”
Quick Terms to Know:
- Algorithm: A set of steps an AI follows to make a decision.
- Training Data: The information used to “teach” an AI system.
- Neural Network: A type of computer model inspired by how the brain processes information.
- Bias: When data or systems produce unfair or unbalanced results.
Metadata Enrichment With AI
Cataloguing is one of the areas where AI makes a noticeable difference.
- Automated Tagging: AI tools can read through titles and abstracts to suggest keywords or subject headings.
- Knowledge Graphs: These connect related materials, for example, linking a book on climate change with recent journal articles on the same topic.
- Bias Checking: Some systems can flag outdated or biased terminology in subject classifications.
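As a toy illustration of the automated-tagging idea, the sketch below suggests candidate keywords for a record by counting word frequency in its title and abstract. Production systems use trained NLP models; the stop-word list and sample abstract here are minimal assumptions for the sketch.

```python
# Toy automated tagging: rank candidate keywords by frequency.
# Real cataloguing tools use trained NLP models, not raw word counts.
import re
from collections import Counter

# Minimal stop-word list, assumed for this sketch.
STOP_WORDS = {"the", "a", "an", "of", "and", "in", "on", "for", "to",
              "with", "is", "how", "helps"}

def suggest_keywords(text, top_n=3):
    """Return the top_n most frequent non-stop-words as candidate tags."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(top_n)]

abstract = ("Climate change and library collections: how climate data "
            "helps libraries plan climate-resilient collections.")
print(suggest_keywords(abstract))  # 'climate' and 'collections' rank first
```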
Generative Prompt Skills
Knowing how to “talk” to AI tools is a skill in itself. The clearer your request, the better the result. Try experimenting with prompts like these:
- Research Prompt: “List three recent studies on community reading programs and summarise their findings.”
- Teaching Prompt: “Write a short activity plan for a workshop on evaluating online information sources.”
- Summary Prompt: “Give me a brief overview of this article’s key arguments and methods.”
Adjusting tone or adding detail can change the outcome. It’s about learning how to guide the tool rather than letting it guess.
Ethical Data Practices
AI tools can be useful, but they also raise questions about privacy and fairness. Librarians have always cared deeply about protecting patron information, and that remains true with AI.
- Keep personal data anonymous wherever possible.
- Review AI outputs for signs of bias or misinformation.
- Encourage clear policies around how data is stored and used.
Ethical AI is part of a librarian’s duty to maintain trust and fairness.
Automating Everyday Tasks
AI can take over some of the small, routine jobs that fill up a librarian’s day.
- Circulation: Systems can send overdue reminders automatically or manage renewals.
- Chatbots: Basic questions like “What are the library hours?” can be handled instantly.
- Collection Management: AI can spot patterns in borrowing data to suggest which books to keep, reorder, or retire.
Building Your Learning Path
Getting comfortable with AI doesn’t have to mean earning a new degree. Start small:
- Take short online courses or micro-certifications in AI literacy.
- Join librarian groups or online forums where people share practical tips.
- Block out one hour a week to try out a new tool or attend a webinar.
A little consistent learning goes a long way.
Making AI Affordable
Many smaller libraries worry about cost, but not every tool is expensive.
- Free Tools: Some open-access AI platforms, like Zendy, offer affordable access to research databases and AI-powered features.
- Shared Purchases: Partnering with other libraries to share licenses can cut costs.
- Cloud Services: Pay-as-you-go plans mean you only pay for what you actually use.
There’s usually a way to experiment with AI without stretching the budget.
Showing Impact
Once AI tools are in use, it’s important to show their value. Track things like:
- Time saved on cataloguing or circulation tasks.
- Patron feedback on new services.
- How often AI tools are used compared to manual systems.
Numbers matter, but so do stories. Sharing examples, like a student who found research faster thanks to a new search feature, can make your case even stronger. And remember, the future of librarianship is about using AI tools in libraries thoughtfully to keep libraries relevant, reliable, and welcoming spaces for everyone.

Key Considerations for Training Library Teams on New Research Technologies
The integration of Generative AI into academic life appears to be a significant moment for university libraries. As trusted guides in the information ecosystem, librarians are positioned to help researchers explore this new terrain, but this transition requires developing a fresh set of skills. Training your library team on AI-powered research tools could move beyond technical instruction to focus on critical thinking, ethical understanding, and human judgment. Here is a proposed framework for a training program, organised by the new competencies your team might need to explore.
Foundational: Understanding Access and Use
This initial module establishes a baseline understanding of the technology itself.
- Accessing the Platform: Teach the technical steps for using the institution's approved AI tools, including authentication, subscription models, and any specific interfaces (e.g., vendor-integrated AI features in academic databases, institutional LLMs, etc.).
- Core Mechanics: Explain what a Generative AI platform (like a Large Language Model) is and, crucially, what it is not. Cover foundational concepts like:
  - Training Data: The corpus of text the model learned from, which shapes what it knows and limits how current its answers can be.
  - Prompting Basics: Introduce basic prompt engineering, the art of crafting effective, clear queries to get useful outputs.
  - Hallucinations: Directly address the concept of "hallucinations", or factually incorrect/fabricated outputs and citations, and emphasise the need for human verification.
Conceptual: Critical Evaluation and Information Management
This module focuses on the librarian's core competency: evaluating information in a new context.
- Locating and Organising: Train staff on how to use AI tools for practical, time-saving tasks, such as:
  - Generating keywords for better traditional database searches.
  - Summarising long articles to quickly grasp the core argument.
  - Identifying common themes across a set of resources.
- Evaluating Information: This is perhaps the most critical skill. Teach a new layer of critical information literacy:
  - Source Verification: Always cross-check AI-generated citations, summaries, and facts against reliable, academic sources (library databases, peer-reviewed journals).
  - Bias Identification: Examine AI outputs for subtle biases, especially those related to algorithmic bias in the training data, and discuss how to mitigate this when consulting with researchers.
- Using and Repurposing: Demonstrate how AI-generated material should be treated: as a raw output that must be heavily edited, critiqued, and cited, not as a final product.
Social: Communicating with AI as an Interlocutor
The quality of AI output is often dependent on the user’s conversational ability. This module suggests treating the AI platform as a possible partner in a dialogue.
- Advanced Prompt Engineering: Move beyond basic queries to teach techniques for generating nuanced, high-quality results:
  - Assigning the AI a role (such as a 'sceptical editor' or 'historical analyst') to potentially shape a more nuanced response.
  - Practising iterative conversation, where librarians refine an output by providing feedback and further instructions, treating the interaction as an ongoing intellectual exchange.
- Shared Understanding: Practise using the platform to help users frame their research questions more effectively. Librarians can guide researchers in using the AI to clarify a vague topic or map out a conceptual framework, turning the tool into a catalyst for deeper thought rather than a final answer generator.
Socio-Emotional Awareness: Recognising Impact and Building Confidence
This module addresses the human factor, building resilience and confidence.
- Recognising the Impact of Emotions: Acknowledge the possibility of emotional responses, such as uncertainty about shifting professional roles or discomfort with rapid technological change, and facilitate a safe space for dialogue.
- Knowing Strengths and Weaknesses: Reinforce the unique, human-centric value of the librarian: critical thinking, contextualising information, ethical judgment, and deep disciplinary knowledge, skills that AI cannot replicate. The AI could be seen as a means to automate lower-level tasks, allowing librarians to focus on high-value consultation.
- Developing Confidence: Implement hands-on, low-stakes practice sessions using real-world research scenarios. Confidence grows from successful interaction, not just theoretical knowledge. Encourage experimentation and a "fail-forward" mentality.
Ethical: Acting Ethically as a Digital Citizen
Ethical use is the cornerstone of responsible AI adoption in academia. Librarians must be the primary educators on responsible usage.
- Transparency and Disclosure: Discuss the importance of transparency when utilising AI. Review institutional and journal guidelines that may require students and faculty to disclose how and when AI was used in their work, and offer guidance on how to properly cite these tools.
- Data Privacy and Security: Review the potential risks associated with uploading unpublished, proprietary, or personally identifiable information (PII) to public AI services. Establish and enforce clear library policies on what data should never be shared with external tools.
- Copyright and Intellectual Property (IP): Discuss the murky legal landscape of AI-generated content and IP. Emphasise that AI models are often trained on copyrighted material and that users are responsible for ensuring their outputs do not infringe on existing copyrights. Advocate for using library-licensed, trusted-source AI tools whenever possible.
- Combating Misinformation: Position the librarian as the essential arbiter against the spread of AI-generated misinformation. Training should include spotting common AI red flags, teaching users how to think sceptically, and promoting the library’s curated, authoritative resources as the gold standard.