Show simple item record

dc.contributor.author: Kim, Hyunwoo
dc.contributor.author: Küster, Dennis
dc.contributor.author: Girard, Jeffrey M.
dc.contributor.author: Krumhuber, Eva G.
dc.date.accessioned: 2024-06-10T17:05:17Z
dc.date.available: 2024-06-10T17:05:17Z
dc.date.issued: 2023-09-12
dc.identifier.citation: Kim H, Küster D, Girard JM, Krumhuber EG. Human and machine recognition of dynamic and static facial expressions: prototypicality, ambiguity, and complexity. Front Psychol. 2023 Sep 12;14:1221081. doi: 10.3389/fpsyg.2023.1221081. PMID: 37794914; PMCID: PMC10546417
dc.identifier.uri: https://hdl.handle.net/1808/35116
dc.description.abstract: A growing body of research suggests that movement aids facial expression recognition. However, less is known about the conditions under which the dynamic advantage occurs. The aim of this research was to test emotion recognition in static and dynamic facial expressions, thereby exploring the role of three featural parameters (prototypicality, ambiguity, and complexity) in human and machine analysis. In two studies, facial expression videos and corresponding images depicting the peak of the target and non-target emotion were presented to human observers and the machine classifier (FACET). Results revealed higher recognition rates for dynamic stimuli compared to non-target images. This benefit disappeared in the context of target-emotion images, which were similarly well (or even better) recognised than videos, and were more prototypical, less ambiguous, and more complex in appearance than non-target images. While prototypicality and ambiguity exerted more predictive power in machine performance, complexity was more indicative of human emotion recognition. Interestingly, recognition performance by the machine was found to be superior to that of humans for both target and non-target images. Together, the findings point towards a compensatory role of dynamic information, particularly when static stimuli lack relevant features of the target emotion. Implications for research using automatic facial expression analysis (AFEA) are discussed.
dc.publisher: Frontiers Media
dc.rights: Copyright © 2023 Kim, Küster, Girard and Krumhuber. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
dc.rights.uri: https://www.ncbi.nlm.nih.gov/pmc/about/copyright/
dc.subject: Emotion facial expression
dc.subject: Dynamic
dc.subject: Movement
dc.subject: Prototypicality
dc.subject: Ambiguity
dc.title: Human and machine recognition of dynamic and static facial expressions: prototypicality, ambiguity, and complexity
dc.type: Article
kusw.kuauthor: Girard, Jeffrey M.
kusw.kudepartment: Psychology
dc.identifier.doi: 10.3389/fpsyg.2023.1221081
kusw.oaversion: Scholarly/refereed, publisher version
kusw.oapolicy: This item meets KU Open Access policy criteria.
dc.identifier.pmid: 37794914
dc.rights.accessrights: openAccess
