Making Scholarly Research Accessible for Independent Researchers in 2025

Many researchers work outside of universities or formal institutions. These independent researchers often rely on public access to scholarly research to study, write, or contribute to their fields.
However, access to scholarly research is not equal. Most academic journals are behind paywalls, which means users must pay to read them unless they are affiliated with an institution that pays for access.
In this blog, we’ll explore the structure of academic publishing and how it affects independent researchers. We’ll break down the current challenges, the systems in place, and recent developments designed to improve research accessibility.
Why Research Accessibility Matters
Research accessibility refers to how easily someone can read, use, and build upon academic studies. For independent researchers, access is often limited because they lack university or library credentials required to unlock paywalled content.
A large portion of scholarly research remains behind subscription paywalls. Many journal articles cost between $30 and $50 each, and full journal subscriptions can reach thousands of dollars per year.
These costs create a divide between researchers affiliated with institutions and those working independently. Independent researchers may be excluded from current findings, which restricts their ability to contribute to academic conversations.
Without equal access, knowledge development becomes uneven. Some communities and individuals are left out, creating a gap in who can participate in scientific and scholarly work.
Understanding Open Access Models
Open access (OA) refers to academic research that anyone can read online without paying. There are different types of open access, and each works in a specific way.
1. Gold Open Access to Scholarly Research
Gold open access means that the final version of a research article is freely available on the publisher's website. The author or their funder usually pays a fee to make the article open.
Researchers can find gold open access content in fully open access journals listed in the Directory of Open Access Journals (DOAJ). These journals allow anyone to read and download the scholarly research directly from the publisher.
- Reader benefit: Immediate access to the final, formatted version of articles
- Limitation: Authors often pay fees ranging from $500-$3000 to publish
2. Green Open Access
Green open access is when authors share a version of their article in a free online repository. This version may be a preprint (before peer review) or a postprint (after peer review but before journal formatting).
Repositories host these papers: arXiv.org covers many disciplines, while bioRxiv.org specialises in biology. These platforms do not require any affiliation to access the content.
- Reader benefit: Free access to research content, often before formal publication
- Limitation: The version available might not be the final published version
3. Diamond Open Access
Diamond open access journals make articles freely available to read and do not charge authors any fees to publish. Neither readers nor authors pay. One good example of diamond open access is KnE Publishing, an open access publishing service by Knowledge E that provides high-quality publishing services to support the development and advancement of diamond open access journals, with a particular focus on increasing the visibility and accessibility of scholarly research.
This model is often supported by academic institutions or non-profit organisations. The Free Journal Network lists many of these journals.
- Reader benefit: Completely free access with no barriers
- Author benefit: No publication fees to share research
| Open Access Model | Who Pays | Where to Find | Version Available |
| --- | --- | --- | --- |
| Gold | Authors/funders | Publisher websites | Final published version |
| Green | No one (usually) | Repositories | Preprint or postprint |
| Diamond | Institutions/grants | Publisher websites | Final published version |
Practical Tools For Independent Researchers
Independent researchers need affordable ways to find and use scholarly research. Several tools make this process easier.
AI Summarisers
AI summarisers extract the main points from academic papers. These AI tools help researchers quickly understand if a paper is relevant to their work without reading the entire document.
Zendy's AI summarisation tool identifies key findings, methods, and conclusions from scholarly research papers. This saves time when reviewing large amounts of literature.
- Time-saving: Condenses hours of reading into minutes
- Comprehension aid: Helps readers understand complex academic language
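Zendy's summarisation pipeline is not public, but the simplest form of the underlying idea, extractive summarisation, is easy to sketch. The example below is a deliberately naive, hypothetical illustration (sentences scored by word frequency), not Zendy's actual method:

```python
import re
from collections import Counter

def summarise(text: str, max_sentences: int = 2) -> str:
    """Naive extractive summary: score each sentence by the frequency
    of the words it contains, then keep the top scorers in their
    original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    # Rank sentence indices by total word-frequency score, descending
    scored = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w] for w in re.findall(r"[a-z']+", sentences[i].lower())),
    )
    keep = sorted(scored[:max_sentences])  # restore reading order
    return " ".join(sentences[i] for i in keep)
```

Real tools use language models rather than word counts, but the workflow is the same: score the content, keep the most representative parts, and let the reader decide whether the full paper is worth their time.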
Literature Discovery Tools
Discovery tools help researchers find academic papers and locate free versions when available.
Google Scholar indexes scholarly research and sometimes links to free versions.
Zendy uses AI to recommend relevant papers based on your interests.
Browser extensions like Unpaywall and Open Access Button automatically find legal, free versions of paywalled articles.
- Broader search: Searches across multiple journals and repositories at once
- Free alternatives: Identifies open access versions of paywalled content
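Unpaywall also exposes a free REST API that tools like these build on. Below is a minimal sketch of looking up a DOI; the field names (`is_oa`, `best_oa_location`, `url_for_pdf`) follow Unpaywall's published response schema, but treat the exact shape as an assumption to verify against the current API documentation:

```python
import json
import urllib.request

# Unpaywall asks callers to identify themselves with an email address
UNPAYWALL_API = "https://api.unpaywall.org/v2/{doi}?email={email}"

def best_oa_url(record: dict):
    """Pick a link to a free, legal copy from a parsed Unpaywall
    record, or return None when no open access version is known."""
    if not record.get("is_oa"):
        return None
    location = record.get("best_oa_location") or {}
    # Prefer a direct PDF link, fall back to the landing page
    return location.get("url_for_pdf") or location.get("url")

def lookup(doi: str, email: str):
    """Query the Unpaywall API for a DOI (network call)."""
    url = UNPAYWALL_API.format(doi=doi, email=email)
    with urllib.request.urlopen(url) as resp:
        return best_oa_url(json.load(resp))
```

This is essentially what the browser extensions do on every page visit: one lookup per DOI, returning a legal open access copy when one exists.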
Scholarly Research Reference Manager Tools
Reference manager tools help organise research papers and create citations. These tools are essential for independent researchers writing their own papers.
Zotero is a free, open-source reference manager that saves papers, creates citations, and integrates with word processors. Mendeley offers similar features with some social networking elements.
- Organisation: Keeps research papers in one searchable library
- Citation help: Automatically formats citations in different styles
Policy Shifts Empowering Independent Scholars
Recent policy changes are increasing the amount of research that is freely available to everyone. These changes help independent researchers access more content without institutional subscriptions.
Plan S requires that research funded by certain organisations be published with open access. This means more high-quality scholarly research is becoming freely available to read.
Many funding agencies now require researchers to share their findings openly. The National Institutes of Health in the US and UK Research and Innovation have policies requiring funded research to be publicly accessible.
Authors are also finding ways to keep their rights to share their work. Rights retention strategies allow researchers to post copies of their articles in public repositories even when publishing in traditional journals.
The trend toward open science continues to grow. More institutions are adopting policies that make research outputs—including data, software, and educational materials—freely available by default.
Ensuring Accessibility For All Researchers
Accessibility in scholarly research goes beyond open access. It also means making content usable for people with disabilities and those using different devices or internet connections.
Universal Design Principles
Universal design makes scholarly research usable by as many people as possible. This includes clear structure, readable text, and compatibility with assistive tools.
Well-designed articles use proper headings, include descriptions for images, and create documents that work with screen readers. These features help all users navigate and understand the content more easily.
Examples of accessible design in scholarly research:
- Structured headings that create a logical outline
- Alternative text for images and diagrams
- Tables with proper headers and simple layouts
- PDF files with proper tagging for screen readers
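Some of these checks can be automated. As a small illustration using only Python's standard library, here is a hypothetical script that flags images missing alternative text in an HTML article:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects <img> tags that lack a non-empty alt attribute,
    one of the most common accessibility gaps in web articles."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing_alt.append(attrs.get("src", "(no src)"))

def images_missing_alt(html: str) -> list:
    """Return the src of every image with missing or empty alt text."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing_alt
```

A screen reader encounters the same gap this script detects: an image with no alt text is simply silent for the listener.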
Assistive Technology Compatibility
Assistive technologies help people with disabilities access digital content. Research platforms and documents need to work well with these tools.
Screen readers convert text to speech for people who are blind or have low vision. Text enlargement tools and colour contrast adjusters help people with different visual needs.
When looking for accessible research content:
- PDF accessibility: Look for tagged PDFs that work with screen readers
- HTML versions: Often more accessible than PDFs for assistive technologies
- Plain text options: Simple format that works with most assistive tools
If you need a more accessible version of any scholarly research, you can contact the publisher directly. Many journals now provide alternative formats upon request.
New Innovations in Research Access
The landscape of scholarly access continues to evolve with new models and technologies making research more available to independent scholars.
AI-powered research assistants are changing how people interact with academic literature. These tools can summarise articles, extract key information, and help researchers find connections between papers.
Digital libraries like Zendy are creating alternatives to traditional subscription models. With AI assistants like ZAIA (Zendy's AI assistant for researchers), these platforms not only partner with publishers to offer access to both open and paywalled content at affordable rates for individual researchers, but also enhance the research experience through AI support.
The future of scholarly research access looks increasingly open and innovative. New technologies and business models continue to break down barriers between knowledge and those who seek it.
FAQs about Accessing Scholarly Research
How can independent researchers find free academic articles legally?
Independent researchers can use open access repositories like PubMed Central and preprint servers like arXiv. Public libraries sometimes offer access to academic databases, and contacting authors directly often results in them sharing their papers.
What makes scholarly research accessible to people with disabilities?
Accessible scholarly research uses proper document structure with headings, provides alternative text for images, creates tables that screen readers can navigate, and offers formats compatible with assistive technologies. Articles in HTML format are typically more accessible than PDFs, and properly tagged PDFs are more accessible than untagged ones.
How do researchers evaluate the quality of open access journals?
Researchers can check if an open access journal is listed in the Directory of Open Access Journals (DOAJ), look for clear peer review policies, verify the journal's impact factor, and research the editorial board members. Quality open access journals maintain the same rigorous standards as traditional subscription journals.
What AI tool helps independent researchers conduct a literature review?
ZAIA, Zendy's AI research assistant, helps independent researchers conduct efficient literature reviews by automatically summarising academic papers, extracting key findings, and identifying connections between related studies. Researchers can also use reference managers like Zotero or Mendeley to organise papers and create citations. Literature mapping tools like VOSviewer help visualise research networks and identify influential papers. For comprehensive literature reviews, ZAIA can recommend relevant papers based on your research interests, saving hours of manual searching across multiple databases.

Digital Information Literacy Guidelines for Academic Libraries
Information literacy is the skill of finding, evaluating, and using information effectively. Data literacy is the skill of understanding numbers and datasets: reading charts, checking how data was collected, and spotting mistakes. Critical thinking is the skill of analysing information, questioning assumptions, and making sound judgments. With so many digital tools today, students and researchers need all three skills, not just to find information, but also to make sense of it and communicate it clearly.
Why Academic Libraries Should Offer Literacy Programs
Let’s face it: research can be overwhelming. Over 5 million research papers are published every year. This information overload means researchers spend 25-30% of their time finding and reviewing academic literature, according to the International Study: Perceptions and Behavior of Researchers. Predatory journals, low-quality datasets, and confusing search results can make learning stressful.
Libraries are more than book storage; they’re a place to build practical skills. Programs that teach information and data literacy help students think critically, save time, and feel more confident with research.
Key Skills Students, Researchers, and Librarians Need
Finding and Using Scholarly Content
Knowing how to search a database efficiently is a big deal. Students should learn how to use filters, Boolean logic, subject headings and, of course, intelligent search. They should also know the difference between journal articles, conference papers, and open-access resources.
Evaluating Sources and Data
Not all information is equal. Programs should teach students how to check if sources are reliable, understand peer review, and spot bias in datasets. A few practical techniques, like cross-checking sources or looking for data provenance, can make research much stronger.
Managing Information Ethically
Citing sources properly, avoiding plagiarism, and respecting copyright are essentials.
Tools like Zotero or Mendeley help keep references organised, so students spend less time managing files and more time on research.
Sharing Findings Clearly
Communicating is sharing, and sharing is caring. It’s one thing to collect information; it’s another to communicate it. Infographics, slides, and storytelling techniques can make research more memorable. Ultimately, clear communication ensures that the work they’ve done can be understood, used, and appreciated by others.
Frameworks That Guide Literacy Programs
- ACRL Framework: Provides six key concepts for teaching information literacy.
- EU DigComp / DigCompEdu: Covers digital skills for students and educators.
- Data Literacy Project: Helps students understand how to work with datasets, complementing traditional research skills.
These frameworks help librarians structure programs so students get consistent, practical guidance.
Steps to Build a Digital Literacy Program
1. Audit Campus Needs: Talk to students and faculty, see what resources exist, and find gaps.
2. Set Learning Goals: Decide what students should be able to do at the end, and make goals measurable.
3. Select Content and Tools: Choose databases, software, and datasets that fit the library’s budget and tech setup.
4. Create Short, Modular Lessons: Break skills into manageable pieces that build on each other.
5. Launch and Improve: Introduce the program, gather feedback, and adjust lessons based on what works and what doesn’t.
Teaching Strategies and Online Tools
Flipped and Embedded Instruction
Students watch a short video about search techniques at home, then practice in class. A librarian might join a research methods class, helping students build search strings live. Pre-class quizzes on topics like peer review versus predatory journals prepare students for hands-on exercises.
Short Videos and Tutorials
Quick videos (2–5 minutes) can teach one skill at a time, like citation management, evaluating sources, or basic data visualisation.
Include captions, transcripts, and small practice exercises to reinforce learning.
AI Summaries and Chatbots
AI tools can summarise articles, suggest keywords, highlight main points, and even draft bibliographies. But they aren’t perfect: they can make mistakes, miss nuances, or misread complex tables. Human oversight is still important.
Free Resources and Open Datasets
Students can practice with free databases and datasets like DOAJ, arXiv, Kaggle, or Zenodo. Using open-access resources keeps programs affordable while providing real-world examples.
Checking if Students Are Learning
- Before and After Assessments: Simple quizzes or tasks to see how skills improve.
- Performance Rubrics: Compare beginner, developing, and advanced levels in searching, evaluating, and presenting data.
- Analytics: Track which videos or tools students use most to improve future lessons.
Working With Faculty
- Embedded Workshops: Librarians teach skills directly tied to assignments.
- Joint Assignments: Faculty design research projects that naturally teach literacy skills.
- Faculty Training: Show instructors how to integrate digital literacy into their courses.
Tackling Challenges
- Staff Training: Librarians may need extra help with data tools. Peer mentoring and workshops work well.
- Limited Budgets: Open access tools, collaborative licensing, and free platforms help make programs feasible.
- Distance Learners: Make videos and tutorials accessible anytime, and account for different time zones and internet access.
Looking Ahead
AI, open science, and global collaboration are changing research. AI can personalise learning, but it still needs oversight. Open science and the FAIR data principles (a set of guidelines for making research data Findable, Accessible, Interoperable, and Reusable by both humans and machines) encourage transparency and reproducibility. Libraries can also connect with international partners to share resources and best practices.
FAQs
How long does a program take to launch?
Basic services can start in six months; full programs usually take 1–2 years.
Do humanities students need data skills?
Yes, though the focus is more on qualitative analysis and digital humanities tools.
Where can libraries find free datasets?
Government repositories, Kaggle, Zenodo, and university archives.
Can small libraries succeed without data specialists?
Yes, faculty collaboration and online resources can cover most needs.

From Boolean to Intelligent Search: A Librarian’s Guide to Smarter Information Retrieval
As a librarian, you’ve always been the person people turn to when they need help finding answers. But the way we search for information is changing fast. Databases are growing, new tools keep appearing, and students expect instant results. This guide looks at how AI can help you make sense of it all, and what the real benefit of AI for libraries looks like.
From Boolean to Intelligent Search
Traditional search is still part of everyday library work. It depends on logic and structure: keywords, operators, and carefully built queries. But AI adds something new. It doesn’t just look for words; it tries to understand what someone means.
If a researcher searches for “climate change effects on migration”, an AI-powered tool doesn’t just pull results with those exact words. It also looks for studies about environmental displacement, regional challenges, and social impacts. This means you can spend less time teaching people how to “speak database” and more time helping them understand the research they find.
The Evolution of Library Search
Traditional search engines focus on matching keywords, which often leads to long lists of results. With AI, search tools can now read queries in natural language, just the way people ask questions, and still find accurate, relevant material. Natural language processing (NLP) and machine learning (ML) make it possible for search systems to connect related ideas, even when the exact words aren’t used. Features like semantic search and vector databases help AI recognise patterns and suggest other useful directions for exploration.
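The gap between exact keyword matching and semantic matching can be shown with a toy example. The embedding vectors below are hand-written stand-ins for what a real NLP model (such as a sentence transformer) would produce; only the comparison logic is representative:

```python
import math

def boolean_match(query_terms, document):
    """Classic Boolean AND: every term must appear verbatim."""
    doc = document.lower()
    return all(term.lower() in doc for term in query_terms)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Hypothetical embeddings -- hand-written for illustration only.
vectors = {
    "climate change effects on migration": [0.90, 0.70, 0.10],
    "environmental displacement of coastal populations": [0.85, 0.75, 0.15],
    "boolean operators in database searching": [0.05, 0.10, 0.95],
}

query = "climate change effects on migration"
doc = "environmental displacement of coastal populations"
print(boolean_match(["climate", "migration"], doc))   # no shared keywords
print(cosine(vectors[query], vectors[doc]))           # yet semantically close
```

A Boolean search misses the displacement paper entirely because no query term appears in its title, while the vector comparison ranks it as highly relevant. That is the shift semantic search brings to discovery interfaces.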
Examples of AI Tools Librarians Can Use
| Tool / Platform | What It Does | Why It Helps Librarians |
| --- | --- | --- |
| Zendy | A platform that combines literature discovery, AI summaries, keyphrase highlighting, and PDF analysis | Helps librarians and researchers access, read, and understand academic papers more easily |
| Consensus | An AI-powered academic search engine that summarises findings from peer-reviewed studies | Helps with literature reviews and citation management |
| Ex Libris Primo | Uses AI to support discovery and manage metadata | Improves record accuracy and helps users find what they need faster |
| Meilisearch | A fast, flexible search engine that uses NLP | Makes it easier to search large databases efficiently |
The Ethics of Intelligent Search
Algorithms influence what users see and what they might miss. That’s why your role is so important. You can help users question why certain results appear on top, encourage critical thinking, and remind them that algorithms are not neutral. Digital literacy today isn’t just about knowing how to search; it’s about understanding how the search works.
In Conclusion
AI tools for librarians are becoming easier to use and more helpful every day. Some platforms now include features like summarisation, citation analysis, and even plans to highlight retracted papers, something Zendy is working toward. Trying out these tools can make your work smoother: faster reference responses, smarter cataloguing, and better guidance for researchers who often feel lost in the flood of information. AI isn’t replacing your expertise; it’s helping you use it in new ways. And that’s what makes this moment exciting for librarians everywhere.

Why Does AI Like ChatGPT Still Quote Retracted Papers?
AI models like ChatGPT are trained on massive datasets collected at specific moments in time, which means they lack awareness of papers retracted after their training cutoff. When a scientific paper gets retracted, whether due to errors, fraud, or ethical violations, most AI systems continue referencing it as if nothing happened. This creates a troubling scenario where researchers using AI assistants might unknowingly build their work on discredited foundations.
In other words: retracted papers are the academic world's way of saying "we got this wrong, please disregard." Yet the AI tools designed to help us navigate research faster often can't tell the difference between solid science and work that's been officially debunked.
ChatGPT and other assistants tested
Recent studies examined how popular AI research tools handle retracted papers, and the results were concerning. Researchers tested ChatGPT, Google's Gemini, and similar language models by asking them about known retracted papers. In many cases, they not only failed to flag the retractions but actively praised the withdrawn studies. One investigation found that ChatGPT referenced retracted cancer imaging research without any warning to users, presenting the flawed findings as credible. The problem extends beyond chatbots to AI-powered literature review tools that researchers increasingly rely on for efficiency.
Common failure scenarios
The risks show up across different domains, each with its own consequences:
- Medical guidance: Healthcare professionals consulting AI for clinical information might receive recommendations based on studies withdrawn for data fabrication or patient safety concerns
- Literature reviews: Academic researchers face citation issues when AI assistants suggest retracted papers, damaging credibility and delaying peer review
- Policy decisions: Institutional leaders making evidence-based choices might rely on AI-summarised research without realising the underlying studies have been retracted
A doctor asking about treatment protocols could unknowingly follow advice rooted in discredited research. Meanwhile, detecting retracted citations manually across hundreds of references proves nearly impossible for most researchers.
How Often Retractions Slip Into AI Training Data
The scale of retracted papers entering AI systems is larger than most people realise. Crossref, the scholarly metadata registry that tracks digital object identifiers (DOIs) for academic publications, reports thousands of retraction notices annually. Yet many AI models were trained on datasets harvested years ago, capturing papers before retraction notices appeared.
Here's where timing becomes critical. A paper published in 2020 and included in an AI training dataset that same year might get retracted in 2023. If the model hasn't been retrained with updated data, it remains oblivious to the retraction. Some popular language models go years between major training updates, meaning their knowledge of the research landscape grows increasingly outdated.
Lag between retraction and model update
Training Large Language Models requires enormous computational resources and time, which explains why most AI companies don't continuously update their systems.
Even when retraining occurs, the process of identifying and removing retracted papers from massive datasets presents technical challenges that many organisations haven't prioritised solving. The result is a growing gap between the current state of scientific knowledge and what AI assistants "know."
You might think AI systems could simply check retraction databases in real-time before responding, but most don't. Instead, they generate responses based solely on their static training data, unaware that some information has been invalidated.
Risks of Citing Retracted Papers in Practice
The consequences of AI-recommended retracted papers extend beyond embarrassment. When flawed research influences decisions, the ripple effects can be substantial and long-lasting.
Clinical decision errors
Healthcare providers increasingly turn to AI tools for quick access to medical literature, especially when facing unfamiliar conditions or emerging treatments. If an AI assistant recommends a retracted study on drug efficacy or surgical techniques, clinicians might implement approaches that have been proven harmful or ineffective. The 2020 hydroxychloroquine controversy illustrated how quickly questionable research spreads. Imagine that dynamic accelerated by AI systems that can't distinguish between valid and retracted papers.
Policy and funding implications
Government agencies and research institutions often use AI tools to synthesise large bodies of literature when making funding decisions or setting research priorities. Basing these high-stakes choices on retracted work wastes resources and potentially misdirects entire fields of inquiry. A withdrawn climate study or economic analysis could influence policy for years before anyone discovers the AI-assisted review included discredited research.
Academic reputation damage
For individual researchers, citing retracted papers carries professional consequences.
Journals may reject manuscripts, tenure committees question research rigour, and collaborators lose confidence. While honest mistakes happen, the frequency of such errors increases when researchers rely on AI tools that lack retraction awareness, and the responsibility still falls on the researcher, not the AI.
Why Language Models Miss Retraction Signals
The technical architecture of most AI research assistants makes them inherently vulnerable to the retraction problem. Understanding why helps explain what solutions might actually work.
Corpus quality controls lacking
AI models learn from their training corpus, the massive collection of text they analyse during development. Most organisations building these models prioritise breadth over curation, scraping academic databases, preprint servers, and publisher websites without rigorous quality checks. The assumption is that more data produces better models, but this approach treats all papers equally regardless of retraction status.
Even when training data includes retraction notices, the AI might not recognise them as signals to discount the paper's content. A retraction notice is just another piece of text unless the model has been specifically trained to understand its significance.
Sparse or inconsistent metadata
Publishers handle retractions differently, creating inconsistencies that confuse automated systems:
- Some journals add "RETRACTED" to article titles
- Others publish separate retraction notices
- A few quietly remove papers entirely
This lack of standardisation means AI systems trained to recognise one retraction format might miss others completely. Metadata, the structured information describing each paper, often fails to consistently flag retraction status across databases. A paper retracted in PubMed might still appear without warning in other indexes that AI training pipelines access.
Hallucination and overconfidence
AI hallucination occurs when models generate plausible-sounding but false information, and it exacerbates the retraction problem. Even if a model has no information about a topic, it might confidently fabricate citations or misremember details from its training data. This overconfidence means AI assistants rarely express uncertainty about the papers they recommend, leaving users with no indication that additional verification is needed.
Real-Time Retraction Data Sources Researchers Should Trust
While AI tools struggle with retractions, several authoritative databases exist for manual verification. Researchers concerned about citation integrity can cross-reference their sources against these resources.
Retraction Watch Database
Retraction Watch operates as an independent watchdog, tracking retractions across all academic disciplines and publishers. Their freely accessible database includes detailed explanations of why papers were withdrawn, from honest error to fraud. The organisation's blog also provides context about patterns in retractions and systemic issues in scholarly publishing.
Crossref metadata service
Crossref maintains the infrastructure that assigns DOIs to scholarly works, and publishers report retractions through this system. While coverage depends on publishers properly flagging retractions, Crossref offers a comprehensive view across multiple disciplines and publication types. Their API allows developers to build tools that automatically check retraction status, a capability that forward-thinking platforms are beginning to implement.
PubMed retracted publication tag
For medical and life sciences research, PubMed provides reliable retraction flagging with daily updates. The National Library of Medicine maintains this database with rigorous quality control, ensuring retracted papers receive prominent warning labels.
However, this coverage is limited to biomedical literature, leaving researchers in other fields without equivalent resources.
| Database | Coverage | Update Speed | Access |
| --- | --- | --- | --- |
| Retraction Watch | All disciplines | Real-time | Free |
| Crossref | Publisher-reported | Variable | Free API |
| PubMed | Medical/life sciences | Daily | Free |
Responsible AI Starts with Licensing
When AI systems access research papers, articles, or datasets, authors and publishers have legal and ethical rights that need protection. Ignoring these rights can undermine the sustainability of the research ecosystem and diminish trust between researchers and technology providers.
One of the biggest reasons AI tools get it wrong is that they often cite retracted papers as if they’re still valid. When an article is retracted, for example because the peer review process was not conducted properly or failed to meet established standards, most AI systems don’t know; the paper simply remains part of their training data. This is where licensing plays a crucial role. Licensed data ensures that AI systems are connected to the right sources, continuously updated with accurate, publisher-verified information. It’s the foundation for what platforms like Zendy aim to achieve: making sure the content is clean and trustworthy.
Licensing ensures that content is used responsibly. Proper agreements between AI companies and copyright holders allow AI systems to access material legally while providing attribution and, when appropriate, compensation. This is especially important when AI tools generate insights or summaries that are distributed at scale, potentially creating value for commercial platforms without benefiting the sources of the content.
In conclusion, consent-driven licensing helps build trust. Publishers and authors can choose whether and how their work is incorporated into AI systems, ensuring that content is included only when rights are respected.
Advanced AI platforms, such as Zendy, can even track which licensed sources contributed to a particular output, providing accountability and a foundation for equitable revenue sharing.
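As a closing illustration, the kind of automated retraction check described above can be sketched against Crossref's public REST API. This is a hedged sketch, not a definitive implementation: the `filter=updates:` query and the `update-to` field reflect Crossref's documented metadata for update notices, but verify both against the current API docs before relying on them:

```python
import json
import urllib.request

CROSSREF_API = "https://api.crossref.org/works"

def retraction_query_url(doi: str) -> str:
    """Build a Crossref query for update notices (retractions,
    corrections, errata) that point at the given DOI. The `updates`
    filter name is an assumption to verify against Crossref's docs."""
    return f"{CROSSREF_API}?filter=updates:{doi}"

def find_retractions(response: dict) -> list:
    """Scan a parsed Crossref response and return the DOIs of any
    retraction notices it contains."""
    notices = []
    for item in response.get("message", {}).get("items", []):
        for update in item.get("update-to", []):
            if update.get("type") == "retraction":
                notices.append(item.get("DOI", ""))
    return notices

def check_doi(doi: str) -> list:
    """Fetch and scan update notices for a DOI (network call)."""
    with urllib.request.urlopen(retraction_query_url(doi)) as resp:
        return find_retractions(json.load(resp))
```

A reference manager or AI assistant that ran a check like this before citing a paper would catch most publisher-reported retractions, which is exactly the gap in static training data this article describes.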
Address
John Eccles House, Robert Robinson Avenue,
Oxford Science Park, Oxford
OX4 4GP, United Kingdom