Mining a multimodal corpus of doctor’s training for virtual patient’s feedbacks
Author(s) -
Chris Porhet,
Magalie Ochs,
Jorane Saubesty,
Grégoire de Montcheuil,
Roxane Bertrand
Publication year - 2017
Publication title -
HAL (Centre pour la Communication Scientifique Directe)
Language(s) - English
Resource type - Conference proceedings
DOI - 10.1145/3136755.3136816
Subject(s) - computer science , natural language processing , virtual patient , artificial intelligence , virtual reality , human–computer interaction , multimedia , medical education , medicine
Doctors should be trained not only to perform medical or surgical acts but also to develop communication skills for their interactions with patients. For instance, the way doctors deliver bad news has a significant impact on the therapeutic process. To facilitate doctors' training in breaking bad news, we aim to develop a virtual patient able to interact in a multimodal way with doctors announcing an undesirable event. One of the key elements in creating an engaging interaction is the feedback behavior of the virtual character. To model the virtual patient's feedback in the context of breaking bad news, we have analyzed a corpus of real doctors' training sessions. The verbal and nonverbal signals of both the doctors and the patients have been annotated. To identify the types of feedback and the elements that may elicit a feedback, we have explored the corpus using sequence mining methods. Rules extracted from the corpus enable us to determine when a virtual patient should express which feedback as a doctor announces bad news.
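The rule-extraction step described above can be illustrated with a minimal sequence-mining sketch. The labels, data layout, and thresholds below are hypothetical (the paper's actual annotation scheme and mining method are not reproduced here): each session is a time-ordered list of doctor signals followed by any elicited patient feedback, and a rule (signal sequence → feedback) is kept when its support and confidence exceed chosen thresholds.

```python
from collections import Counter

# Hypothetical annotated interaction traces: each is a time-ordered list of
# labels, with doctor signals preceding an optional patient feedback label.
# These labels are illustrative, not taken from the actual corpus.
sessions = [
    ["doctor:gaze_at_patient", "doctor:bad_news_utterance", "patient:head_nod"],
    ["doctor:pause", "doctor:bad_news_utterance", "patient:verbal_feedback"],
    ["doctor:gaze_at_patient", "doctor:bad_news_utterance", "patient:head_nod"],
    ["doctor:pause", "doctor:gaze_away"],
]

def mine_rules(sessions, min_support=2, min_confidence=0.5):
    """Count doctor-signal sequences and the patient feedback that follows,
    then keep rules (signal sequence -> feedback) meeting both thresholds."""
    antecedents = Counter()  # how often each signal sequence occurs
    rules = Counter()        # how often it is followed by a given feedback
    for trace in sessions:
        doctor_signals = tuple(s for s in trace if s.startswith("doctor:"))
        feedbacks = [s for s in trace if s.startswith("patient:")]
        # Count every contiguous suffix of the doctor's signal sequence,
        # so both long and short antecedents are candidate rule triggers.
        for start in range(len(doctor_signals)):
            seq = doctor_signals[start:]
            antecedents[seq] += 1
            for fb in feedbacks:
                rules[(seq, fb)] += 1
    results = []
    for (seq, fb), count in rules.items():
        confidence = count / antecedents[seq]
        if count >= min_support and confidence >= min_confidence:
            results.append((seq, fb, count, confidence))
    return results

for seq, fb, support, conf in mine_rules(sessions):
    print(" + ".join(seq), "->", fb, f"(support={support}, confidence={conf:.2f})")
```

On this toy data the sketch keeps two rules, e.g. that a bad-news utterance (alone or preceded by gaze at the patient) tends to elicit a head nod; in a driving loop for a virtual patient, such rules would be matched against the doctor's most recent signals to trigger the corresponding feedback.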
