
Top 4 Journal Classification Systems Every Researcher Should Know

Dec 13, 2024 | 16 Mins Read

If you’ve ever tried to figure out which journal is the best fit for your research, or wondered how journal classification is carried out, you’ve probably come across terms like quartiles (Q1, Q2, Q3, Q4), h-index, Impact Factor (IF), and Source Normalised Impact per Paper (SNIP). These metrics might sound technical, but they are simply tools to measure how much attention a journal’s research gets. Here’s a straightforward explanation of what they mean and how they work.

Quartiles in Journal Classification: Ranking by Performance

The system of dividing journals into four quartiles, Q1, Q2, Q3, and Q4, was created to make it easier to compare their quality and impact within a specific field. The idea became popular through the Scopus and Journal Citation Reports (JCR) databases, which rank journals using citation-based metrics, and it builds on the work of Eugene Garfield, who introduced the Impact Factor as a way to see how journals stand up against one another. Quartiles break things down further: Q1 represents the top 25% of journals in a category, while Q4 includes those at the lower end. It’s a straightforward way to help researchers determine which journals are most influential in their areas of study.

  • Q1: Top 25% of journals in the field (highest-ranked).
  • Q2: 25-50% (mid-high-ranked).
  • Q3: 50-75% (mid-low-ranked).
  • Q4: Bottom 25% (lowest-ranked).
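
To make the banding concrete, here is a minimal sketch in Python of how quartile labels could be assigned from a citation metric. The journal names and metric values are hypothetical, and real Scopus and JCR rankings handle ties and percentile boundaries more carefully than this:

```python
def assign_quartiles(metric_by_journal):
    """Assign Q1-Q4 labels to journals ranked by a citation metric.

    `metric_by_journal` maps journal name to a metric value
    (e.g., Impact Factor); higher values rank higher.
    """
    ranked = sorted(metric_by_journal, key=metric_by_journal.get, reverse=True)
    n = len(ranked)
    labels = {}
    for rank, name in enumerate(ranked):  # rank 0 = top journal
        # Fractional position in the field, mapped onto four equal bands:
        # the top 25% get Q1, the bottom 25% get Q4.
        labels[name] = f"Q{int(rank / n * 4) + 1}"
    return labels

# Hypothetical journals in one field, with made-up metric values:
field = {"Journal A": 6.2, "Journal B": 4.1, "Journal C": 1.8, "Journal D": 0.9}
print(assign_quartiles(field))
# {'Journal A': 'Q1', 'Journal B': 'Q2', 'Journal C': 'Q3', 'Journal D': 'Q4'}
```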

However, publishing in a Q3 or Q4 journal is not necessarily a disadvantage. While these journals may not be as well-known, they still play an important role in scientific research. Some of their benefits include:

  • Affordability: These journals are easier for researchers to access, especially for those on a tight budget.
  • Focused Topics: They tend to cover more specific, niche areas of study, making them great for in-depth exploration of certain subjects.
  • Great for New Researchers: Q3 and Q4 journals can be a good place for new researchers to publish their first paper and gain experience in the publishing world.
  • Ideal for Basic Research: They’re a great option for research that focuses on fundamental science.

Finally, publishing your article in a Q3 or Q4 journal doesn’t mean it lacks value or won’t make an impact. If your work presents new findings that address a real problem, it can still attract attention, even when published in a lower-ranked journal.

h-index: A Balance of Quantity and Quality

The h-index score is an important factor in journal classification. It looks at the number of articles a journal has published and how often those articles are cited, balancing quantity (how many articles a journal publishes) with quality (how often those articles are referenced).

For example, if a journal has an h-index of 15, it means 15 of its articles have each been cited at least 15 times. It’s a simple way to measure a journal’s influence without focusing too much on one super-cited article or a bunch of rarely cited ones.

How h-index works:

Let’s say a journal has published 4 articles. Rank them from most to least cited, then compare each article’s rank with its citation count:

  • The 1st article has 24 citations – at least 1 citation.
  • The 2nd article has 10 citations – at least 2 citations.
  • The 3rd article has 5 citations – at least 3 citations.
  • The 4th article falls short of 4 citations.

In this case, the journal has three articles that each have at least three citations. The fourth article doesn’t hit the mark, so the h-index stops at 3.
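
Here is the same procedure as a minimal Python sketch. The citation counts mirror the example above, with the fourth article assumed to have 3 citations (the original only says it falls short of 4):

```python
def h_index(citations):
    """Compute the h-index from a list of per-article citation counts."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this article still has at least `rank` citations
        else:
            break  # every later article has even fewer citations
    return h

# The four articles from the example: 24, 10, 5, and (assumed) 3 citations.
print(h_index([24, 10, 5, 3]))  # prints 3
```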


This metric can help researchers, professionals, and institutions decide if a journal publishes research that gets noticed and cited by the academic community. It’s not the full picture, but it’s a useful starting point for understanding the journal’s influence.

Impact Factor: Citation Average

The Impact Factor (IF) is a number that shows how often a journal’s articles are cited, on average, over the past two years. It helps you understand how much attention the journal’s research gets from other scholars, and it also plays a role in journal classification.

How it works:

To calculate a journal’s IF for a given year, count how many times the articles it published in the previous two years were cited during that year. Then divide that count by the total number of articles the journal published in those two years. This gives you an average citation count per article.

Example:

Let’s say we want to figure out the IF for Journal A in 2023:

  • In 2021 and 2022, Journal A published 50 articles.
  • In 2023, those articles were cited 200 times in total.
  • You take the total citations (200) and divide it by the total number of articles (50): 200 ÷ 50 = 4

So, Journal A has an Impact Factor of 4, meaning its articles were cited, on average, four times each. A higher Impact Factor often places journals higher in classification, but keep in mind that it’s not the full story. Some specialised journals may have lower Impact Factors even though they’re highly respected in their niche.
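
Since the IF is just an average, the whole calculation fits in a few lines of Python; the figures below are the hypothetical Journal A numbers from the example:

```python
def impact_factor(citations, articles):
    """Two-year Impact Factor: citations received in year Y by articles
    published in years Y-1 and Y-2, divided by the number of those articles."""
    return citations / articles

# Journal A: 200 citations in 2023 to the 50 articles published in 2021-2022.
print(impact_factor(citations=200, articles=50))  # prints 4.0
```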


SNIP: Fair Comparisons Across Fields

SNIP (Source Normalised Impact per Paper) is a valuable metric in journal classification because it goes a step further: it measures contextual citation impact, taking into account the fact that different research fields have different citation habits. For instance, medical papers often get cited a lot, while mathematics papers don’t, even when they’re equally important in their fields.

SNIP adjusts the average citations a journal receives based on these differences, making it easier to compare journals across disciplines.

Example:

  • Journal A publishes in a low-citation field like social sciences and averages 3 citations per article. Adjusted for its field, its SNIP might be 1.6.
  • Journal B publishes in a high-citation field like biomedicine and has an average of 8 citations per article. After adjustment, its SNIP might be 1.2.

SNIP makes sure journals in fields with fewer citations still get the recognition they deserve.
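
In simplified terms, SNIP divides a journal’s citations per paper by its field’s typical citation density. The Python sketch below illustrates only that idea: the field “citation potential” values are invented so the outputs match the example, whereas the real SNIP, computed by CWTS, derives them from the referencing behaviour of the papers citing the journal:

```python
def snip(citations_per_paper, field_citation_potential):
    """Simplified SNIP: raw citations per paper, normalised by how
    densely papers in the journal's field tend to cite each other."""
    return citations_per_paper / field_citation_potential

# Hypothetical field citation potentials chosen to reproduce the example:
print(round(snip(3, 1.875), 1))  # social sciences journal -> 1.6
print(round(snip(8, 6.67), 1))   # biomedical journal      -> 1.2
```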

What it tells you:

SNIP is especially useful for journal classification because it levels the playing field between disciplines. A higher SNIP score suggests that a journal’s articles are cited more often than expected for its field. It’s a helpful tool for comparing journals, but it’s just one of many ways to evaluate a journal’s influence or importance.

Below is a concise summary of the four journal classification systems, followed by key considerations:

Journal ranking system comparison

Quartiles (Q1–Q4)
  • Purpose: Ranks journals by performance within a field (e.g., biology, engineering).
  • Calculation: Journals are divided into four equal groups based on citation metrics (e.g., Impact Factor): Q1 = top 25%, Q2 = 25–50%, Q3 = 50–75%, Q4 = bottom 25%.
  • Key insights: Q1/Q2 = high prestige; Q3/Q4 = affordable, niche-focused, beginner-friendly; lower quartiles ≠ low-value research.

h-index
  • Purpose: Measures journal influence by balancing article productivity and citations.
  • Calculation: A journal has index h if it has published h articles each cited ≥ h times (e.g., an h-index of 15 means 15 articles cited ≥ 15 times each).
  • Key insights: Avoids over-reliance on a single highly cited paper; useful for gauging consistent impact.

Impact Factor (IF)
  • Purpose: Indicates average citation attention per article.
  • Calculation: IF = (citations in year Y to articles from Y−1 and Y−2) ÷ (articles published in Y−1 and Y−2). Example: 200 citations ÷ 50 articles = IF of 4.
  • Key insights: A higher IF generally means a higher ranking; field-dependent (STEM fields typically outscore the humanities); less meaningful for niche fields.

SNIP
  • Purpose: Compares journals fairly across fields by normalising citation practices.
  • Calculation: Adjusts raw citations per paper by the field’s typical citation density. Example: 3 citations per paper in social sciences (SNIP = 1.6) vs. 8 in biomedicine (SNIP = 1.2).
  • Key insights: Levels comparison between high- and low-citation fields; SNIP > 1 indicates above-field-average impact.

Key Considerations for All Systems

  1. No single metric tells the whole story – A journal may rank highly in one system but lower in another.
  2. Field-specific biases – Citation habits differ across disciplines (e.g., mathematics vs. medicine); raw metrics like the IF reflect those differences, while SNIP adjusts for them.
  3. Beyond rankings – Lower-quartile/niche journals offer unique advantages (accessibility, specialization).
  4. Research goals matter – Choose a journal based on audience fit, not just classification.