Open Access
How LLMs are Shaping the Future of Virtual Reality
Author(s) - Sueda Ozkaya, Santiago Berrezueta-Guzman, Stefan Wagner
Publication year - 2025
Publication title - IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3631594
Subject(s) - aerospace; bioengineering; communication, networking and broadcast technologies; components, circuits, devices and systems; computing and processing; engineered materials, dielectrics and plasmas; engineering profession; fields, waves and electromagnetics; general topics for engineers; geoscience; nuclear engineering; photonics and electrooptics; power, energy and industry applications; robotics and control systems; signal processing and analysis; transportation
Integrating Large Language Models (LLMs) into Virtual Reality (VR) games marks a paradigm shift in the design of immersive, adaptive, and intelligent digital experiences. This paper presents a scoping review of recent research at the intersection of LLMs and VR, aiming to map current work, identify key applications, and highlight open challenges. We examine how LLMs transform non-player character (NPC) interactions, narrative generation, intelligent game mastering, personalization, and accessibility. Drawing from an analysis of 66 peer-reviewed studies published between 2018 and 2025, we outline major application domains, ranging from emotionally intelligent NPCs and procedurally generated storytelling to AI-driven adaptive systems and inclusive gameplay interfaces. We also discuss critical challenges facing this convergence, including real-time performance constraints, memory limitations, ethical risks, and scalability barriers. The findings suggest that while LLMs significantly enhance realism, creativity, and engagement in VR environments, their effective deployment requires robust design strategies that integrate multimodal interaction, hybrid AI architectures, and ethical safeguards. The paper outlines future research directions in multimodal AI, affective computing, reinforcement learning, and open-source development, aiming to guide the responsible and inclusive advancement of intelligent VR systems.
