Human and machine recognition of dynamic and static facial expressions: prototypicality, ambiguity, and complexity
Issue Date
2023-09-12
Author
Kim, Hyunwoo
Küster, Dennis
Girard, Jeffrey M.
Krumhuber, Eva G.
Publisher
Frontiers Media
Type
Article
Article Version
Scholarly/refereed, publisher version
Rights
Copyright © 2023 Kim, Küster, Girard and Krumhuber.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Abstract
A growing body of research suggests that movement aids facial expression recognition. However, less is known about the conditions under which the dynamic advantage occurs. The aim of this research was to test emotion recognition in static and dynamic facial expressions, thereby exploring the role of three featural parameters (prototypicality, ambiguity, and complexity) in human and machine analysis. In two studies, facial expression videos and corresponding images depicting the peak of the target and non-target emotion were presented to human observers and the machine classifier (FACET). Results revealed higher recognition rates for dynamic stimuli compared to non-target images. This benefit disappeared for target-emotion images, which were recognised as well as (or better than) videos, and were more prototypical, less ambiguous, and more complex in appearance than non-target images. While prototypicality and ambiguity exerted more predictive power in machine performance, complexity was more indicative of human emotion recognition. Interestingly, recognition performance by the machine was found to be superior to that of humans for both target and non-target images. Together, the findings point towards a compensatory role of dynamic information, particularly when static-based stimuli lack relevant features of the target emotion. Implications for research using automatic facial expression analysis (AFEA) are discussed.
Citation
Kim H, Küster D, Girard JM, Krumhuber EG. Human and machine recognition of dynamic and static facial expressions: prototypicality, ambiguity, and complexity. Front Psychol. 2023 Sep 12;14:1221081. doi: 10.3389/fpsyg.2023.1221081. PMID: 37794914; PMCID: PMC10546417
Items in KU ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.