Open Access
Cross-Domain Few-Shot Micro-Expression Recognition Incorporating Action Units
Author(s) - Yi Dai, Ling Feng
Publication year - 2021
Publication title - IEEE Access
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.587
H-Index - 127
ISSN - 2169-3536
DOI - 10.1109/access.2021.3120542
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
A micro-expression, unlike an ordinary facial expression, is an involuntary, spontaneous, and subtle facial movement that reveals true emotions a person intends to conceal. Because it typically occurs within a fraction of a second (less than 1/2 second) and at low action intensity, capturing micro-expressions among facial movements in a video is difficult. Moreover, when a micro-expression recognition system works in cold-start conditions, it must recognize novel classes of micro-expressions in a new scenario while lacking sufficient labeled samples, and inconsistency in micro-expression labeling criteria makes it difficult to reuse existing labeled datasets across scenarios. To tackle these challenges, we present a micro-expression recognizer that, on one hand, leverages knowledge of facial action units (AUs) to enhance facial representations and, on the other hand, performs cross-domain few-shot learning to transfer knowledge acquired from other domains with different labeling protocols and feature distributions, overcoming the scarcity of labeled samples in the cold-start scenario. In particular, we draw inspiration from the correlation between micro-expressions and AUs, and design an action unit module that extracts subtle AU-related features from videos. We then fuse the AU-related features with general features extracted from optical-flow facial images. Through fine-tuning, we transfer knowledge from datasets in different domains to the target domain.
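The fusion step described above (AU-related features combined with general optical-flow features before classification) can be sketched minimally. The feature dimensions, the placeholder extractors, and the concatenation-plus-linear-head structure below are illustrative assumptions, not the authors' exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_optical_flow_features(video, dim=128):
    # Placeholder for a backbone over optical-flow facial images (illustrative).
    return rng.standard_normal(dim)

def extract_au_features(video, dim=32):
    # Placeholder for the action unit module's subtle AU-related features.
    return rng.standard_normal(dim)

def fuse_and_classify(video, weights, bias):
    # Fuse the two feature streams by concatenation, then apply a linear head.
    flow = extract_optical_flow_features(video)   # shape: (128,)
    au = extract_au_features(video)               # shape: (32,)
    fused = np.concatenate([flow, au])            # shape: (160,)
    logits = weights @ fused + bias               # shape: (num_classes,)
    return int(np.argmax(logits))

num_classes = 3  # e.g. negative / positive / surprise, as in MEGC-style protocols
W = rng.standard_normal((num_classes, 160))
b = np.zeros(num_classes)
pred = fuse_and_classify(None, W, b)  # a class index in {0, 1, 2}
```

In a few-shot transfer setting, such a head would be fine-tuned on the small labeled target-domain sample while the feature extractors carry over knowledge from the source domains.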
Experimental results on two datasets show that: (1) the proposed recognizer can effectively learn to recognize new categories of micro-expressions in different domains from very few labeled samples, achieving a UF1 score of 0.544 on the CASME dataset and outperforming the state-of-the-art methods by 0.089; (2) the recognizer is more competitive when distinguishing micro-expression videos across more categories; and (3) the action unit module improves recognition performance by 0.072 and 0.047 on CASME and SMIC, respectively.
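The UF1 score reported above is the unweighted F1 commonly used in micro-expression benchmarks: the per-class F1 scores are averaged with equal weight, so minority classes count as much as majority ones. A minimal sketch, assuming that standard definition:

```python
def uf1(y_true, y_pred, classes):
    # Unweighted F1: average of per-class F1 scores, each class weighted equally.
    f1_scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        f1_scores.append(2 * tp / denom if denom else 0.0)
    return sum(f1_scores) / len(f1_scores)
```

For example, `uf1([0, 0, 1, 2], [0, 1, 1, 2], [0, 1, 2])` averages per-class F1 scores of 2/3, 2/3, and 1.0, giving 7/9.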
