Human and machine recognition of dynamic and static facial expressions: prototypicality, ambiguity, and complexity
dc.contributor.author | Kim, Hyunwoo | |
dc.contributor.author | Küster, Dennis | |
dc.contributor.author | Girard, Jeffrey M. | |
dc.contributor.author | Krumhuber, Eva G. | |
dc.date.accessioned | 2024-06-10T17:05:17Z | |
dc.date.available | 2024-06-10T17:05:17Z | |
dc.date.issued | 2023-09-12 | |
dc.identifier.citation | Kim H, Küster D, Girard JM, Krumhuber EG. Human and machine recognition of dynamic and static facial expressions: prototypicality, ambiguity, and complexity. Front Psychol. 2023 Sep 12;14:1221081. doi: 10.3389/fpsyg.2023.1221081. PMID: 37794914; PMCID: PMC10546417 | en_US |
dc.identifier.uri | https://hdl.handle.net/1808/35116 | |
dc.description.abstract | A growing body of research suggests that movement aids facial expression recognition. However, less is known about the conditions under which the dynamic advantage occurs. The aim of this research was to test emotion recognition in static and dynamic facial expressions, thereby exploring the role of three featural parameters (prototypicality, ambiguity, and complexity) in human and machine analysis. In two studies, facial expression videos and corresponding images depicting the peak of the target and non-target emotion were presented to human observers and the machine classifier (FACET). Results revealed higher recognition rates for dynamic stimuli compared to non-target images. Such benefit disappeared in the context of target-emotion images which were similarly well (or even better) recognised than videos, and more prototypical, less ambiguous, and more complex in appearance than non-target images. While prototypicality and ambiguity exerted more predictive power in machine performance, complexity was more indicative of human emotion recognition. Interestingly, recognition performance by the machine was found to be superior to humans for both target and non-target images. Together, the findings point towards a compensatory role of dynamic information, particularly when static-based stimuli lack relevant features of the target emotion. Implications for research using automatic facial expression analysis (AFEA) are discussed. | en_US |
dc.publisher | Frontiers Media | en_US |
dc.rights | Copyright © 2023 Kim, Küster, Girard and Krumhuber. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. | en_US |
dc.rights.uri | https://www.ncbi.nlm.nih.gov/pmc/about/copyright/ | en_US |
dc.subject | Emotional facial expression | en_US |
dc.subject | Dynamic | en_US |
dc.subject | Movement | en_US |
dc.subject | Prototypicality | en_US |
dc.subject | Ambiguity | en_US |
dc.title | Human and machine recognition of dynamic and static facial expressions: prototypicality, ambiguity, and complexity | en_US |
dc.type | Article | en_US |
kusw.kuauthor | Girard, Jeffrey M. | |
kusw.kudepartment | Psychology | en_US |
dc.identifier.doi | 10.3389/fpsyg.2023.1221081 | en_US |
kusw.oaversion | Scholarly/refereed, publisher version | en_US |
kusw.oapolicy | This item meets KU Open Access policy criteria. | en_US |
dc.identifier.pmid | 37794914 | en_US |
dc.rights.accessrights | openAccess | en_US |