
Discriminative Few Shot Learning of Facial Dynamics in Interview Videos for Autism Trait Classification

Zhang, Na and Ruan, Mindi and Wang, Shuo and Paul, Lynn and Li, Xin (2022) Discriminative Few Shot Learning of Facial Dynamics in Interview Videos for Autism Trait Classification. IEEE Transactions on Affective Computing. ISSN 1949-3045. doi:10.1109/TAFFC.2022.3178946. (In Press)

PDF (Accepted Version). See Usage Policy.




Autism is a prevalent neurodevelopmental disorder characterized by impairments in social and communicative behaviors. Possible connections between autism and facial expression recognition have recently been studied in the literature. However, most work is based on facial images or short videos; few studies target Autism Diagnostic Observation Schedule (ADOS) videos, owing to their complexity (e.g., interaction between interviewer and interviewee) and length (they usually last for hours). In this paper, we attempt to fill this gap by developing a novel discriminative few-shot learning method to analyze hour-long video data and by exploring the fusion of facial dynamics for the trait classification of ASD. Leveraging well-established computer vision tools, from spatio-temporal feature extraction and marginal Fisher analysis to few-shot learning and scene-level fusion, we have constructed a three-category system that classifies an individual as Autism, Autism Spectrum, or Non-Spectrum. For the first time, we show that certain interview scenes carry more discriminative information for ASD trait classification than others. Experimental results demonstrate the potential of the proposed automatic ASD trait classification system (reaching 91.72% accuracy on the Caltech ADOS video dataset), and extensive ablation studies confirm the benefits of the few-shot learning and scene-level fusion strategies.
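The abstract's pipeline (few-shot classification of per-scene facial-dynamics features, followed by scene-level fusion) can be illustrated with a minimal sketch. This is not the authors' implementation: the nearest-prototype classifier, the fusion weights, and all function and variable names are illustrative assumptions, and generic feature vectors stand in for the paper's MFA-reduced spatio-temporal features.

```python
import numpy as np

def prototype_scores(query, support, support_labels, n_classes=3):
    """Few-shot nearest-prototype scores: negative Euclidean distance from
    the query to each class mean in the (already reduced) feature space."""
    protos = np.stack([support[support_labels == c].mean(axis=0)
                       for c in range(n_classes)])
    return -np.linalg.norm(protos - query, axis=1)  # higher = closer

def fuse_scenes(scene_feats, support, support_labels, scene_weights=None):
    """Scene-level fusion: weighted average of per-scene class scores.
    Non-uniform weights could emphasize the more discriminative interview
    scenes that the paper identifies."""
    scores = np.stack([prototype_scores(f, support, support_labels)
                       for f in scene_feats])
    w = np.ones(len(scene_feats)) if scene_weights is None \
        else np.asarray(scene_weights, dtype=float)
    fused = (w[:, None] * scores).sum(axis=0) / w.sum()
    # 0 = Autism, 1 = Autism Spectrum, 2 = Non-Spectrum (label order assumed)
    return int(np.argmax(fused))
```

With a small labeled support set per class, each hour-long interview contributes several scene-level feature vectors, and the fused score decides the final three-way label.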

Item Type: Article
ORCID:
Wang, Shuo: 0000-0003-2562-0225
Paul, Lynn: 0000-0002-3128-8313
Additional Information: © 2021 IEEE. This research was supported by an NSF CAREER Award (BCS-1945230), an Air Force Young Investigator Program Award (FA9550-21-1-0088), a Dana Foundation Clinical Neuroscience Award, and an ORAU Ralph E. Powe Junior Faculty Enhancement Award (to SW), and by NSF grants (IIS-1908215 and IIS-2114644) and a WV Higher Education Policy Commission grant (HEPC.dsr.18.5) (to XL). Thanks to Drs. Ralph Adolphs and Umit Keles for providing the Caltech dataset of ADOS interview videos.
Funding Agency: Grant Number
NSF: BCS-1945230
Air Force Office of Scientific Research (AFOSR): FA9550-21-1-0088
Dana Foundation: UNSPECIFIED
Oak Ridge Associated Universities: UNSPECIFIED
NSF: IIS-1908215
NSF: IIS-2114644
West Virginia Higher Education Policy Commission: HEPC.dsr.18.5
Subject Keywords: Autism Spectrum Disorder (ASD), Autism trait classification, facial dynamic features, marginal Fisher analysis (MFA), few-shot learning (FSL), scene-level fusion
Record Number: CaltechAUTHORS:20220602-273896100
Persistent URL:
Official Citation: N. Zhang, M. Ruan, S. Wang, L. Paul and X. Li, "Discriminative Few Shot Learning of Facial Dynamics in Interview Videos for Autism Trait Classification," in IEEE Transactions on Affective Computing, doi: 10.1109/TAFFC.2022.3178946
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 115006
Deposited By: Tony Diaz
Deposited On: 02 Jun 2022 23:03
Last Modified: 02 Jun 2022 23:03
