ADFAC: Automatic detection of facial articulatory features

Issue Date
2020-07-22
Author
Garg, Saurabh
Hamarneh, Ghassan
Jongman, Allard
Sereno, Joan A.
Wang, Yue
Publisher
Elsevier
Type
Article
Article Version
Scholarly/refereed, publisher version
Rights
© 2020 The Authors. Published by Elsevier B.V.
Abstract
Using computer-vision and image processing techniques, we aim to identify specific visual cues induced by facial movements made during monosyllabic speech production. The method is named ADFAC: Automatic Detection of Facial Articulatory Cues. Four facial points of interest were detected automatically to represent head, eyebrow and lip movements: the nose tip (a proxy for head movement), the medial point of the left eyebrow, and the midpoints of the upper and lower lips. The detected points were then automatically tracked in the subsequent video frames. Features such as the distance, velocity, and acceleration of local facial movements with respect to each speaker's resting face were extracted from the positional profiles of each tracked point. A variant of random forest is proposed to determine which facial features are significant in classifying speech sound categories. The method takes both video and audio as input and extracts features from any video with a plain or simple background. The method is implemented in MATLAB and the scripts are made available on GitHub for easy access.
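To illustrate the kind of kinematic features the abstract describes, the following minimal MATLAB sketch (not the authors' released scripts; the synthetic trajectory, frame rate, and resting-face convention are assumptions made here for illustration) computes distance, velocity, and acceleration for one tracked point relative to its resting position:

% Minimal sketch, not the authors' released code: distance, velocity and
% acceleration of one tracked facial point (e.g. the lower-lip midpoint)
% relative to the resting-face position, on a synthetic trajectory.
fps  = 30;                              % assumed frame rate (Hz)
dt   = 1 / fps;
t    = (0:99)' * dt;
traj = [120 + 2*sin(2*pi*3*t), 200 + 8*sin(2*pi*3*t)];   % N-by-2 [x y] positions in pixels

rest = traj(1, :);                      % resting face taken from the first frame
dist = vecnorm(traj - rest, 2, 2);      % Euclidean distance to rest, per frame
vel  = [0; diff(dist) / dt];            % frame-to-frame velocity
acc  = [0; diff(vel)  / dt];            % frame-to-frame acceleration

% Simple per-token summaries of the kind the abstract describes
feat = [max(dist), max(abs(vel)), max(abs(acc)), trapz(t, dist)];
disp(feat)

In practice such positional profiles would come from the automatically tracked nose-tip, eyebrow and lip points rather than a synthetic curve.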
• Using innovative computer-vision and image processing techniques to automatically detect and track keypoints on the face during speech production in videos, allowing more natural articulation than previous sensor-based approaches.
• Measuring multi-dimensional and dynamic facial movements by extracting time-related, distance-related and kinematics-related features in speech production.
• Adopting a novel random forest classification approach to determine and rank the significance of facial features toward accurate speech sound categorization (a generic sketch of this ranking step follows below).
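The paper proposes its own variant of random forest for ranking feature significance; as a generic illustration only, the sketch below uses MATLAB's standard TreeBagger (Statistics and Machine Learning Toolbox) with out-of-bag permuted-predictor importance on synthetic data, so the feature matrix, labels, and tree count are all assumptions:

% Generic illustration, not the authors' random-forest variant: rank
% synthetic facial features by out-of-bag permuted-predictor importance.
rng(1);
X = [randn(50, 4) + 1; randn(50, 4)];                    % 100 tokens x 4 facial features
y = [repmat({'tense'}, 50, 1); repmat({'lax'}, 50, 1)];  % hypothetical sound categories

rf = TreeBagger(200, X, y, ...
    'Method', 'classification', ...
    'OOBPredictorImportance', 'on');

importance = rf.OOBPermutedPredictorDeltaError;  % one importance score per feature column
[~, order] = sort(importance, 'descend');        % most informative features first
disp(order)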
Description
This work is licensed under a Creative Commons Attribution 4.0 International License.
Citation
Garg, S., Hamarneh, G., Jongman, A., Sereno, J. A., & Wang, Y. (2020). ADFAC: Automatic detection of facial articulatory features. MethodsX, 7, 101006. https://doi.org/10.1016/j.mex.2020.101006