Open Access
Three-Dimensional Image Reconstruction for Virtual Talent Training Scene
Author(s) -
Tanbo Zhu,
Wang Die,
Yuhua Li,
Wenjie Dong
Publication year - 2021
Publication title - Traitement du Signal
Language(s) - English
Resource type - Journals
eISSN - 1958-5608
pISSN - 0765-0019
DOI - 10.18280/ts.380615
Subject(s) - training (meteorology) , artificial intelligence , computer science , computer vision , virtual reality , image (mathematics) , segmentation , virtual training , perception , limit (mathematics) , mathematics , geography , mathematical analysis , neuroscience , meteorology , biology
In real training, the training conditions are often undesirable, and the use of equipment is severely limited. These problems can be solved by virtual practical training, which breaks the limits of space and lowers the training cost while ensuring training quality. However, the existing methods perform poorly in image reconstruction, because they fail to consider the fact that the environmental perception of the actual scene is strongly regular by nature. Therefore, this paper investigates three-dimensional (3D) image reconstruction for the virtual talent training scene. Specifically, a fusion network model was designed, and the deep-seated correlation between target detection and semantic segmentation was explored for images shot in two-dimensional (2D) scenes, in order to enhance the extraction of image features. Next, the vertical and horizontal parallaxes of the scene were solved, and the depth-based virtual talent training scene was reconstructed in three dimensions, based on the continuity of scene depth. Finally, the proposed algorithm was proved effective through experiments.
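For context on the depth-based reconstruction step described above, the sketch below shows the standard disparity-to-depth conversion and pinhole back-projection that such a pipeline typically builds on. This is a minimal illustration only: the function names, camera parameters, and toy values are assumptions for demonstration, not the authors' implementation or their fusion network.

```python
import numpy as np

def depth_from_disparity(disparity, focal_length, baseline, eps=1e-6):
    """Convert a horizontal disparity map (in pixels) to a depth map.

    Uses the standard stereo relation depth = f * B / d; eps guards
    against division by zero where disparity is missing.
    """
    return (focal_length * baseline) / np.maximum(disparity, eps)

def backproject_to_3d(depth, fx, fy, cx, cy):
    """Back-project a depth map into 3D points with a pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

if __name__ == "__main__":
    # Toy 4x4 disparity map with hypothetical camera parameters.
    disparity = np.full((4, 4), 8.0)
    depth = depth_from_disparity(disparity, focal_length=700.0, baseline=0.12)
    points = backproject_to_3d(depth, fx=700.0, fy=700.0, cx=2.0, cy=2.0)
    print(points.shape)  # (16, 3): one 3D point per pixel
```

In practice, the continuity of scene depth mentioned in the abstract would further constrain or smooth the recovered depth map before back-projection; that step is specific to the paper's method and is not reproduced here.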
