Open Access
Research on visual‐tactile cross‐modality based on generative adversarial network
Author(s) -
Li Yaoyao,
Zhao Huailin,
Liu Huaping,
Lu Shan,
Hou Yueyang
Publication year - 2021
Publication title -
cognitive computation and systems
Language(s) - English
Resource type - Journals
ISSN - 2517-7567
DOI - 10.1049/ccs2.12008
Subject(s) - computer science , computer vision , artificial intelligence , modality (human–computer interaction) , human–computer interaction , generative models
Aiming at assistive technology for the blind, a generative adversarial network model is proposed to transform the visual modality into the tactile one. First, two key representations bridging vision and touch are identified: the texture image of an object and the audio signal that drives vibrotactile feedback; the task is thus essentially one of generating audio from images. The authors propose a cross-modal network framework that generates the corresponding vibrotactile signal from a texture image. More importantly, the network is end-to-end: it eliminates the traditional intermediate step of converting the texture image into a spectrogram image and performs the transformation from visual to tactile directly. A quantitative evaluation system is also proposed to assess the performance of the network model. The experimental results show that the network can convert visual information into tactile signals; the proposed method is shown to be superior to the existing method of generating vibrotactile signals indirectly, and the applicability of the model is verified.
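The end-to-end idea described in the abstract, mapping a texture image straight to a vibrotactile waveform without a spectrogram intermediate, can be sketched as a tiny generator/discriminator pair. This is a minimal NumPy illustration, not the paper's architecture: the image size (64×64), latent width (128), waveform length (1024), and single-layer encoder/decoder are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative dimensions (not taken from the paper):
# 64x64 grayscale texture patch in, 1024-sample vibrotactile waveform out.
IMG, LATENT, WAVE = 64 * 64, 128, 1024

# Generator: texture image -> latent code -> waveform, end-to-end
# (no intermediate spectrogram image is ever produced).
W_enc = rng.normal(0.0, 0.01, (IMG, LATENT))
W_dec = rng.normal(0.0, 0.01, (LATENT, WAVE))

def generator(img_flat):
    z = np.tanh(img_flat @ W_enc)   # visual encoder
    return np.tanh(z @ W_dec)       # audio decoder, samples in [-1, 1]

# Discriminator: waveform -> probability that it is a real vibrotactile signal.
W_disc = rng.normal(0.0, 0.01, (WAVE, 1))

def discriminator(wave):
    return 1.0 / (1.0 + np.exp(-(wave @ W_disc)))  # sigmoid score

texture = rng.random(IMG)           # stand-in for a flattened texture image
fake_wave = generator(texture)
score = discriminator(fake_wave)

print(fake_wave.shape)              # (1024,)
```

In an adversarial training loop, the generator would be updated to raise `score` on its fake waveforms while the discriminator is updated to lower it, with real image/vibrotactile pairs supplying the positive examples; only the untrained forward pass is shown here.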
