Open Access
MUXAS-VR: A Multi-dimensional User Experience Assessment System for Virtual Reality
Author(s) - Peerawat Pannattee, Yosuke Fukuchi, Nobuyuki Nishiuchi
Publication year - 2025
Publication title - IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3573382
Subject(s) - aerospace; bioengineering; communication, networking and broadcast technologies; components, circuits, devices and systems; computing and processing; engineered materials, dielectrics and plasmas; engineering profession; fields, waves and electromagnetics; general topics for engineers; geoscience; nuclear engineering; photonics and electrooptics; power, energy and industry applications; robotics and control systems; signal processing and analysis; transportation
Virtual Reality (VR) technology offers immersive experiences across various domains, but assessing user experience (UX) remains a challenge. Traditional methods, such as questionnaires, are time-consuming and fail to capture the complexity of user interactions, while automated approaches that leverage sensors and deep learning (DL) show promise but have several architectural limitations. Existing frameworks often rely on data from contact-based physiological sensors, tend to overlook attributes of the visual stimuli, use simplistic multimodal fusion techniques, and focus on a single UX dimension, all of which reduce their ability to provide a comprehensive and accurate assessment of UX. This study proposes MUXAS-VR, a DL-based framework that overcomes these limitations by leveraging multimodal behavioral cues captured seamlessly by the sensors embedded in commercially available VR devices, eliminating the need for contact-based physiological sensors. MUXAS-VR integrates individual behavioral data with visual attributes, such as motion dynamics and visual complexity, to account for both individual interactions and visual influences. Its Multimodal Fusion Module (MFM) selectively combines data from multiple modalities to enhance feature representation and predictive performance. Unlike conventional systems, MUXAS-VR evaluates multiple UX dimensions, namely cybersickness (CS), sense of presence (SOP), and overall satisfaction (OS), to provide a comprehensive view of VR UX. To validate its effectiveness, we propose a rigorous evaluation strategy covering three scenarios (Missing Content, New Content, and New User) to ensure generalizability and practical applicability. Experimental results showed that multimodal behavioral cues significantly improved predictive performance, that combining these cues with visual attributes provided complementary information, and that the MFM outperformed simpler fusion methods by capturing complex interdependencies across modalities. These comprehensive evaluations across diverse scenarios demonstrate MUXAS-VR's usefulness in predicting multiple UX dimensions.
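
The sketch below is a minimal, hypothetical PyTorch illustration of the kind of gated multimodal fusion with multi-dimensional UX prediction described in the abstract: per-modality encoders for behavioral cues and visual attributes, a learned gate that selectively combines them, and separate regression heads for CS, SOP, and OS. All class names, layer sizes, and the specific gating mechanism are illustrative assumptions, not the architecture reported in the paper.

```python
# Hypothetical sketch of gated multimodal fusion with three UX prediction heads
# (cybersickness, sense of presence, overall satisfaction). Dimensions and the
# gating scheme are illustrative assumptions, not the paper's actual MFM.
import torch
import torch.nn as nn

class GatedFusionUXModel(nn.Module):
    def __init__(self, behavior_dim=64, visual_dim=32, hidden_dim=128):
        super().__init__()
        # Per-modality encoders: behavioral cues and visual attributes.
        self.behavior_enc = nn.Sequential(nn.Linear(behavior_dim, hidden_dim), nn.ReLU())
        self.visual_enc = nn.Sequential(nn.Linear(visual_dim, hidden_dim), nn.ReLU())
        # Gate learns how strongly each modality contributes to the fused vector.
        self.gate = nn.Sequential(nn.Linear(hidden_dim * 2, hidden_dim), nn.Sigmoid())
        # One regression head per UX dimension: CS, SOP, OS.
        self.heads = nn.ModuleDict({
            name: nn.Linear(hidden_dim, 1) for name in ("cs", "sop", "os")
        })

    def forward(self, behavior, visual):
        b = self.behavior_enc(behavior)
        v = self.visual_enc(visual)
        g = self.gate(torch.cat([b, v], dim=-1))  # element-wise modality weighting
        fused = g * b + (1.0 - g) * v             # selectively combined representation
        return {name: head(fused) for name, head in self.heads.items()}

# Example forward pass on a dummy batch of 8 samples.
model = GatedFusionUXModel()
scores = model(torch.randn(8, 64), torch.randn(8, 32))
print({k: v.shape for k, v in scores.items()})  # each head outputs shape (8, 1)
```

In this toy setup, sharing a single fused representation across the three heads lets the model exploit correlations between the UX dimensions, while the gate offers a simple stand-in for the selective combination that the abstract attributes to the MFM.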
