Authors: JIALI BIAN, XUE MEI, YU XUE, LIANG WU, YAO DING
Abstract: Temporal segmentation of facial expression sequences is important for understanding and analyzing human facial expressions. It is challenging, however, to find a metric that distinguishes among different expressions given the complexity of facial muscle movements, and to cope with uncontrolled environmental factors in the real world. This paper presents a two-step unsupervised segmentation method, consisting of a rough segmentation stage and a fine segmentation stage, that computes optimal segmentation positions in video sequences so that different facial expressions can be separated. The proposed method also localizes facial expression patches to aid the recognition and extraction of specific features. In the rough segmentation stage, a facial sequence is divided into distinct facial behaviors based on the similarity between frames; in the fine segmentation stage, the similarity between segments is computed to obtain the optimal segmentation positions. The proposed method has been evaluated on the MMI dataset and on real videos. Experimental results indicate that it performs better than other state-of-the-art methods.
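To make the rough-then-fine idea described in the abstract concrete, the following is a minimal, hypothetical Python sketch, not the authors' implementation: consecutive-frame similarity is thresholded to propose candidate boundaries (rough stage), and adjacent segments whose mean features are highly similar are then merged (fine stage). The feature representation, the use of cosine similarity, and the threshold values are assumptions made purely for illustration.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors (illustrative metric).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rough_segment(frames, split_thresh=0.9):
    # Rough stage: split wherever consecutive-frame similarity drops
    # below split_thresh, yielding candidate expression boundaries.
    boundaries = [0]
    for t in range(1, len(frames)):
        if cosine_sim(frames[t - 1], frames[t]) < split_thresh:
            boundaries.append(t)
    boundaries.append(len(frames))
    return [(boundaries[i], boundaries[i + 1]) for i in range(len(boundaries) - 1)]

def fine_segment(frames, segments, merge_thresh=0.95):
    # Fine stage: merge adjacent segments whose mean features are highly
    # similar, keeping only boundaries between distinct facial behaviors.
    merged = [segments[0]]
    for start, end in segments[1:]:
        prev_start, prev_end = merged[-1]
        prev_mean = np.mean(frames[prev_start:prev_end], axis=0)
        curr_mean = np.mean(frames[start:end], axis=0)
        if cosine_sim(prev_mean, curr_mean) > merge_thresh:
            merged[-1] = (prev_start, end)  # drop the spurious boundary
        else:
            merged.append((start, end))
    return merged

# Usage: frames is an (N, D) array of per-frame facial features.
frames = np.random.rand(100, 64)
print(fine_segment(frames, rough_segment(frames)))
```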
Keywords: Clustering, temporal segmentation, similarity calculation, facial image analysis