Context‐Aware Mixed Reality: A Learning‐Based Framework for Semantic‐Level Interaction
Author(s) - Chen L., Tang W., John N. W., Wan T. R., Zhang J. J.
Publication year - 2020
Publication title - Computer Graphics Forum
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.578
H-Index - 120
eISSN - 1467-8659
pISSN - 0167-7055
DOI - 10.1111/cgf.13887
Subject(s) - computer science , human–computer interaction , context , mixed reality , object recognition , augmented reality , semantic data model , semantic computing , virtual reality , semantic web , artificial intelligence
Mixed reality (MR) is a powerful interactive technology for new types of user experience. We present a semantic‐based interactive MR framework that goes beyond current geometry‐based approaches, offering a step change in generating high‐level context‐aware interactions. Our key insight is that by building semantic understanding into MR, we can develop a system that not only greatly enhances the user experience through object‐specific behaviours, but also paves the way for solving complex interaction design challenges. Our proposed framework generates semantic properties of the real‐world environment through a dense scene reconstruction and deep image understanding scheme. We demonstrate our approach by developing a material‐aware prototype system for context‐aware physical interactions between real and virtual objects. Quantitative and qualitative evaluation results show that the framework delivers accurate and consistent semantic information in an interactive MR environment, providing effective real‐time semantic‐level interactions.
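To illustrate the kind of material‐aware, object‐specific behaviour the abstract describes, the sketch below shows one plausible way a material label predicted for a real surface could be mapped to physics parameters governing collisions with virtual objects. This is a minimal illustrative sketch, not the authors' implementation: the material names, numeric values, and function names (MaterialProperties, resolve_contact) are hypothetical.

    # Minimal sketch: per-surface material labels (as a deep image
    # understanding stage might predict them) are mapped to physics
    # parameters that drive context-aware real/virtual interactions.
    # All names and values here are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class MaterialProperties:
        friction: float      # Coulomb friction coefficient of the surface
        restitution: float   # "bounciness" of collisions against it
        sound_event: str     # audio cue to trigger on impact

    # Hypothetical lookup table: material label -> physical behaviour.
    MATERIAL_TABLE = {
        "wood":   MaterialProperties(friction=0.45, restitution=0.35, sound_event="thud"),
        "metal":  MaterialProperties(friction=0.25, restitution=0.60, sound_event="clang"),
        "fabric": MaterialProperties(friction=0.80, restitution=0.10, sound_event="muffled"),
    }
    DEFAULT = MaterialProperties(friction=0.50, restitution=0.30, sound_event="generic")

    def resolve_contact(material_label: str, impact_speed: float) -> dict:
        """Return per-contact parameters for a virtual object striking a real
        surface whose material was inferred by the semantic stage."""
        props = MATERIAL_TABLE.get(material_label, DEFAULT)
        return {
            "friction": props.friction,
            # Rebound speed scales with the material's restitution coefficient.
            "rebound_speed": impact_speed * props.restitution,
            "sound_event": props.sound_event,
        }

    if __name__ == "__main__":
        # A virtual ball dropped onto a surface the segmentation labelled "metal".
        print(resolve_contact("metal", impact_speed=2.0))

In a full system, the lookup would be driven by the per-pixel or per-surface labels attached to the reconstructed scene, so the same virtual object bounces, slides, and sounds differently depending on what real material it touches.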