Open Access
In-Context Learning in Large Language Models (LLMs): Mechanisms, Capabilities, and Implications for Advanced Knowledge Representation and Reasoning
Author(s) - Azza Mohamed Basiouni, Mohamed El Rashid, Khaled Shaalan
Publication year - 2025
Publication title - IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3575303
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
The rapid growth of Large Language Models (LLMs) and their in-context learning (ICL) capabilities has profoundly changed paradigms in artificial intelligence (AI) and natural language processing. Notable models, such as OpenAI’s GPT series, have demonstrated unprecedented advances in language comprehension and adaptability, dynamically responding to new tasks presented via contextual prompts. This study presents a detailed survey of recent advances in theoretical research on LLMs and ICL. The search was conducted across several scholarly databases, including Google Scholar, arXiv, IEEE Xplore, ACM Digital Library, and SpringerLink, covering publications from January 2019 to March 2024. We investigate how LLMs encode and use knowledge via ICL, the evolving reasoning skills that result from this process, and the considerable impact of prompt design on LLM reasoning performance, particularly in symbolic reasoning tasks. Furthermore, we examine the theoretical frameworks that explain or challenge LLM behaviors in ICL contexts and address the significance of these findings for the development of complex knowledge representation and reasoning systems. Using a systematic methodology consistent with accepted research criteria, this review synthesizes significant observations, highlights existing gaps and obstacles, and discusses implications for future research and practice. Our goal is to connect theoretical ideas with practical advances in artificial intelligence, ultimately contributing to the ongoing discussion about the capabilities and applications of LLMs in knowledge representation and reasoning.
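To make the ICL setup concrete, the sketch below shows how few-shot demonstrations are typically assembled into a single prompt so that a frozen model can infer a task purely from context, with no weight updates. The helper name, the formatting convention, and the symbolic task (last-letter concatenation) are illustrative assumptions for this sketch, not details drawn from the surveyed paper.

```python
def build_icl_prompt(demonstrations, query):
    """Format (input, output) demonstration pairs plus a new query into a
    few-shot prompt. In the standard ICL setting, the model completes the
    final 'Output:' slot by generalizing from the in-context examples."""
    blocks = [f"Input: {x}\nOutput: {y}" for x, y in demonstrations]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

# An illustrative symbolic reasoning task: concatenate the last letter
# of each word in the input phrase.
demos = [
    ("hello world", "od"),
    ("machine learning", "eg"),
]
prompt = build_icl_prompt(demos, "large language")
print(prompt)
```

In practice this prompt string would be sent to an LLM; varying the number, order, and format of the demonstrations is exactly the kind of prompt-design choice whose effect on reasoning performance the survey discusses.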
