1
Banos O, Comas-González Z, Medina J, Polo-Rodríguez A, Gil D, Peral J, Amador S, Villalonga C. Sensing technologies and machine learning methods for emotion recognition in autism: Systematic review. Int J Med Inform 2024; 187:105469. [PMID: 38723429 DOI: 10.1016/j.ijmedinf.2024.105469]
Abstract
BACKGROUND Human Emotion Recognition (HER) has been a popular field of study in recent years. Despite the great progress made so far, relatively little attention has been paid to the use of HER in autism. People with autism are known to face problems with daily social communication and with the prototypical interpretation of emotional responses, which are most frequently conveyed via facial expressions. This poses significant practical challenges to the application of regular HER systems, which are normally developed for and by neurotypical people. OBJECTIVE This study reviews the literature on the use of HER systems in autism, particularly with respect to sensing technologies and machine learning methods, in order to identify existing barriers and possible future directions. METHODS We conducted a systematic review of articles published between January 2011 and June 2023 according to the 2020 PRISMA guidelines. Manuscripts were identified by searching the Web of Science and Scopus databases. Manuscripts were included when they related to emotion recognition, used sensors and machine learning techniques, and involved children, young people, or adults with autism. RESULTS The search yielded 346 articles. A total of 65 publications met the eligibility criteria and were included in the review. CONCLUSIONS Studies predominantly used facial expression techniques as the emotion recognition method. Consequently, video cameras were the most widely used devices across studies, although a growing trend towards the use of physiological sensors has been observed recently. Happiness, sadness, anger, fear, disgust, and surprise were the emotions most frequently addressed. Classical supervised machine learning techniques were primarily used, at the expense of unsupervised approaches or more recent deep learning models. Studies focused on autism in a broad sense, while limited efforts have been directed towards more specific disorders of the spectrum. Privacy and security issues were seldom addressed, and if so, at a rather insufficient level of detail.
Affiliation(s)
- Oresti Banos: Department of Computer Engineering, Automation and Robotics, University of Granada, Granada, Spain.
- Zhoe Comas-González: Department of Computer Engineering, Automation and Robotics, University of Granada, Granada, Spain; Department of Computer Science and Electronics, Universidad de la Costa, Barranquilla, Colombia.
- Javier Medina: Department of Computer Engineering, Automation and Robotics, University of Granada, Granada, Spain.
- Aurora Polo-Rodríguez: Department of Computer Engineering, Automation and Robotics, University of Granada, Granada, Spain; Department of Computer Science, University of Jaén, Jaén, Spain.
- David Gil: Department of Computer Technology and Computation, University of Alicante, Alicante, Spain.
- Jesús Peral: Department of Software and Computing Systems, University of Alicante, Alicante, Spain.
- Sandra Amador: Department of Computer Technology and Computation, University of Alicante, Alicante, Spain.
- Claudia Villalonga: Department of Computer Engineering, Automation and Robotics, University of Granada, Granada, Spain.
2
Pandya S, Jain S, Verma J. A comprehensive analysis towards exploring the promises of AI-related approaches in autism research. Comput Biol Med 2024; 168:107801. [PMID: 38064848 DOI: 10.1016/j.compbiomed.2023.107801]
Abstract
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition that presents challenges in communication, social interaction, repetitive behaviour, and limited interests. Detecting ASD at an early stage is crucial for timely interventions and an improved quality of life. In recent times, Artificial Intelligence (AI) has been increasingly used in ASD research. The rise in ASD diagnoses is due to the growing number of ASD cases and the recognition of the importance of early detection, which leads to better symptom management. This study explores the potential of AI in identifying early indicators of autism, aligning with the United Nations Sustainable Development Goals (SDGs) of Good Health and Well-being (Goal 3) and Peace, Justice, and Strong Institutions (Goal 16). The paper aims to provide a comprehensive overview of the current state of the art in AI-based autism classification by reviewing publications from the last decade. It covers various modalities, such as eye gaze, facial expression, motor skills, MRI/fMRI, and EEG, as well as multi-modal approaches, primarily grouped into behavioural and biological markers. The paper presents a timeline spanning from the history of ASD to recent developments in the field of AI. Additionally, the paper provides a detailed category-wise analysis of AI-based applications in ASD, with a diagrammatic summarization to convey a holistic view of the different modalities. It also reports on the successes and challenges of applying AI to ASD detection and lists publicly available datasets. The paper concludes with future directions, providing a complete and systematic overview for researchers in the field of ASD.
Affiliation(s)
- Shivani Pandya: Department of Computer Science and Engineering, Nirma University, Ahmedabad, Gujarat 382481, India.
- Swati Jain: Department of Computer Science and Engineering, Nirma University, Ahmedabad, Gujarat 382481, India.
- Jaiprakash Verma: Department of Computer Science and Engineering, Nirma University, Ahmedabad, Gujarat 382481, India.
3
Li Y, Huang WC, Song PH. A face image classification method of autistic children based on the two-phase transfer learning. Front Psychol 2023; 14:1226470. [PMID: 37720633 PMCID: PMC10501480 DOI: 10.3389/fpsyg.2023.1226470]
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental disorder that seriously affects children's daily lives. Screening potentially autistic children before a professional diagnosis supports early detection and early intervention. Autistic children have facial features that differ somewhat from those of non-autistic children, so potentially autistic children can be screened by taking facial images and analyzing them on a mobile phone. The area under the curve (AUC) is a more robust metric than accuracy for evaluating the performance of a two-class classification model, and the AUC of the mobile-oriented deep learning models in existing research can be further improved. Moreover, the input image size used in prior work is large, which is not well suited to a mobile phone. A deep transfer learning method is proposed in this research, which can use smaller images and improves on the AUC of existing studies. The proposed transfer method uses a two-phase transfer learning mode and a multi-classifier integration mode. For MobileNetV2 and MobileNetV3-Large, which are suitable for mobile phones, the two-phase transfer learning mode is used to improve their classification performance, and the multi-classifier integration mode is then used to combine them and further improve the classification performance. A multi-classifier integration calculation method is also proposed to derive the final classification result from the outputs of the participating models. The experimental results show that, compared with one-phase transfer learning, two-phase transfer learning can significantly improve the classification performance of MobileNetV2 and MobileNetV3-Large, and the classification performance of the integrated classifier is better than that of any participating classifier. The accuracy of the integrated classifier in this research is 90.5%, and the AUC is 96.32%, which is 3.51 percentage points higher than the AUC (92.81%) reported in previous studies.
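As a concrete illustration of the two-phase idea described above, the minimal sketch below fine-tunes MobileNetV2 for a binary face-image classifier in two stages (train a new head, then fine-tune the whole backbone at a low learning rate) and tracks AUC. It is a hedged sketch only: the input size, hyperparameters, dataset objects (train_ds, val_ds), and the exact content of the paper's two transfer phases and its multi-classifier integration rule are assumptions, not the authors' implementation.

```python
# Minimal sketch of two-phase fine-tuning with MobileNetV2 (Keras); all settings are
# illustrative assumptions, not the configuration used in the cited study.
import tensorflow as tf

IMG_SIZE = (160, 160)  # assumed smaller input size for mobile use

def build_model():
    base = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
    inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
    x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
    x = base(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # P(autistic face)
    return tf.keras.Model(inputs, outputs), base

model, base = build_model()

# Phase 1: freeze the ImageNet backbone and train only the new classification head.
base.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # hypothetical datasets

# Phase 2: unfreeze the backbone and fine-tune end to end at a much lower learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Integration sketch: a second model built the same way on MobileNetV3-Large could be
# combined with this one, e.g. by averaging the two predicted probabilities.
```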
Affiliation(s)
- Ying Li: Guangxi Key Laboratory of Human-machine Interaction and Intelligent Decision, School of Logistics Management and Engineering, Nanning Normal University, Nanning, China.
- Wen-Cong Huang: Department of Sports and Health, Guangxi College for Preschool Education, Nanning, China.
- Pei-Hua Song: Guangxi Key Laboratory of Human-machine Interaction and Intelligent Decision, School of Logistics Management and Engineering, Nanning Normal University, Nanning, China.
4
Washington P, Wall DP. A Review of and Roadmap for Data Science and Machine Learning for the Neuropsychiatric Phenotype of Autism. Annu Rev Biomed Data Sci 2023; 6:211-228. [PMID: 37137169 PMCID: PMC11093217 DOI: 10.1146/annurev-biodatasci-020722-125454]
Abstract
Autism spectrum disorder (autism) is a neurodevelopmental delay that affects at least 1 in 44 children. Like many neurological disorder phenotypes, the diagnostic features are observable, can be tracked over time, and can be managed or even eliminated through proper therapy and treatments. However, there are major bottlenecks in the diagnostic, therapeutic, and longitudinal tracking pipelines for autism and related neurodevelopmental delays, creating an opportunity for novel data science solutions to augment and transform existing workflows and provide increased access to services for affected families. Several efforts previously conducted by a multitude of research labs have spawned great progress toward improved digital diagnostics and digital therapies for children with autism. We review the literature on digital health methods for autism behavior quantification and beneficial therapies using data science. We describe both case-control studies and classification systems for digital phenotyping. We then discuss digital diagnostics and therapeutics that integrate machine learning models of autism-related behaviors, including the factors that must be addressed for translational use. Finally, we describe ongoing challenges and potential opportunities for the field of autism data science. Given the heterogeneous nature of autism and the complexities of the relevant behaviors, this review contains insights that are relevant to neurological behavior analysis and digital psychiatry more broadly.
Affiliation(s)
- Peter Washington: Department of Information and Computer Sciences, University of Hawai'i at Mānoa, Honolulu, Hawai'i, USA.
- Dennis P Wall: Departments of Pediatrics (Systems Medicine), Biomedical Data Science, and Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, California, USA.
5
Babu PRK, Di Martino JM, Chang Z, Perochon S, Carpenter KLH, Compton S, Espinosa S, Dawson G, Sapiro G. Exploring Complexity of Facial Dynamics in Autism Spectrum Disorder. IEEE Trans Affect Comput 2023; 14:919-930. [PMID: 37266390 PMCID: PMC10231874 DOI: 10.1109/taffc.2021.3113876]
Abstract
Atypical facial expression is one of the early symptoms of autism spectrum disorder (ASD), characterized by reduced regularity and lack of coordination of facial movements. Automatic quantification of these behaviors can offer novel biomarkers for screening, diagnosis, and treatment monitoring of ASD. In this work, 40 toddlers with ASD and 396 typically developing toddlers were shown developmentally appropriate and engaging movies presented on a smart tablet during a well-child pediatric visit. The movies consisted of social and non-social dynamic scenes designed to evoke certain behavioral and affective responses. The front-facing camera of the tablet was used to capture the toddlers' faces. The dynamics of facial landmarks were then automatically computed using computer vision algorithms. Subsequently, the complexity of the landmark dynamics was estimated for the eyebrow and mouth regions using multiscale entropy. Compared to typically developing toddlers, toddlers with ASD showed higher complexity (i.e., less predictability) in these landmark dynamics. This complexity in facial dynamics contained novel information not captured by traditional facial affect analyses. These results suggest that computer vision analysis of facial landmark movements is a promising approach for detecting and quantifying early behavioral symptoms associated with ASD.
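For readers unfamiliar with the entropy measure named above, the sketch below computes multiscale entropy for a single landmark trajectory by coarse-graining the series and applying sample entropy at each scale. It is a plain NumPy illustration; the embedding dimension, tolerance, and scales are assumed values rather than the parameters used in the study.

```python
# Hedged sketch: multiscale (sample) entropy of a 1-D facial-landmark time series.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy with embedding dimension m and tolerance r (Chebyshev distance)."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def match_count(dim):
        # Count template pairs of length `dim` whose maximum absolute difference <= r.
        emb = np.array([x[i:i + dim] for i in range(n - dim + 1)])
        count = 0
        for i in range(len(emb) - 1):
            dist = np.max(np.abs(emb[i + 1:] - emb[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b, a = match_count(m), match_count(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

def multiscale_entropy(x, scales=range(1, 6), m=2):
    x = np.asarray(x, dtype=float)
    r = 0.2 * x.std()  # tolerance fixed from the original (scale-1) series
    values = []
    for tau in scales:
        # Coarse-grain: average consecutive non-overlapping windows of length tau.
        n = len(x) // tau
        coarse = x[:n * tau].reshape(n, tau).mean(axis=1)
        values.append(sample_entropy(coarse, m=m, r=r))
    return values

# Stand-in for a landmark trajectory; higher values mean less predictable dynamics.
rng = np.random.default_rng(0)
print(multiscale_entropy(np.cumsum(rng.normal(size=600))))
```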
Affiliation(s)
- J Matias Di Martino: Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA.
- Zhuoqing Chang: Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA.
- Sam Perochon: Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA.
- Kimberly L H Carpenter: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University, Durham, NC, USA.
- Scott Compton: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University, Durham, NC, USA.
- Steven Espinosa: Office of Information Technology, Duke University, Durham, NC, USA.
- Geraldine Dawson: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University, Durham, NC, USA.
- Guillermo Sapiro: Department of Electrical and Computer Engineering, Biomedical Engineering, Mathematics, and Computer Sciences, Duke University, Durham, NC, USA.
6
Lahiri R, Nasir M, Kumar M, Kim SH, Bishop S, Lord C, Narayanan S. Interpersonal synchrony across vocal and lexical modalities in interactions involving children with autism spectrum disorder. JASA Express Lett 2022; 2:095202. [PMID: 36097603 PMCID: PMC9462442 DOI: 10.1121/10.0013421]
Abstract
Quantifying behavioral synchrony can inform clinical diagnosis, long-term monitoring, and individualised interventions in neurodevelopmental disorders characterized by deficits in communication and social interaction, such as autism spectrum disorder. In this work, three different objective measures of interpersonal synchrony are evaluated across vocal and linguistic communication modalities. For vocal prosodic and spectral features, dynamic time warping distance and squared cosine distance of (feature-wise) complexity are used, and for lexical features, word mover's distance is applied to capture behavioral synchrony. It is shown that these interpersonal vocal and linguistic synchrony measures capture complementary information that helps in characterizing overall behavioral patterns.
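As a concrete illustration of the first synchrony measure named above, the sketch below computes a plain dynamic time warping (DTW) distance between two univariate prosody tracks, e.g. per-frame pitch from the child's and the interlocutor's turns. The paper's feature extraction, normalisation, complexity measure, and word mover's distance are not reproduced; the arrays are placeholders.

```python
# Hedged sketch: DTW distance between two prosodic feature sequences (smaller = tighter alignment).
import numpy as np

def dtw_distance(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

child_f0 = [210, 215, 230, 250, 240, 220]        # placeholder pitch contour (Hz)
adult_f0 = [190, 200, 220, 245, 235, 225, 215]   # placeholder pitch contour (Hz)
print(dtw_distance(child_f0, adult_f0))
```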
Affiliation(s)
- Rimita Lahiri: Signal Analysis and Interpretation Laboratory, University of Southern California, Los Angeles, California 90089, USA.
- Md Nasir: Microsoft Artificial Intelligence for Good Research Lab, Redmond, Washington 98052, USA.
- Manoj Kumar: Amazon Alexa Artificial Intelligence, Cambridge, Massachusetts 02142, USA.
- So Hyun Kim: Center for Autism and the Developing Brain, Weill Cornell Medicine, New York, New York 10065, USA.
- Somer Bishop: Department of Psychiatry, University of California, San Francisco, California 94143, USA.
- Catherine Lord: Semel Institute of Neuroscience and Human Behavior, University of California, Los Angeles, California 90024, USA.
- Shrikanth Narayanan: Signal Analysis and Interpretation Laboratory, University of Southern California, Los Angeles, California 90089, USA.
7
Liu J, Wang Z, Xu K, Ji B, Zhang G, Wang Y, Deng J, Xu Q, Xu X, Liu H. Early Screening of Autism in Toddlers via Response-To-Instructions Protocol. IEEE Trans Cybern 2022; 52:3914-3924. [PMID: 32966227 DOI: 10.1109/tcyb.2020.3017866]
Abstract
Early screening of autism spectrum disorder (ASD) is crucial, since early intervention has been shown to significantly improve functional social behavior in toddlers. This article attempts to bootstrap the response-to-instructions (RTI) protocol with vision-based solutions in order to assist professional clinicians with automated autism diagnosis. The correlation between detected objects and toddlers' emotional features, such as gaze, is constructed to analyze their autistic symptoms. Twenty toddlers between 16 and 32 months of age, 15 of whom were diagnosed with ASD, participated in this study. The RTI method is validated against human codings, and group differences between ASD and typically developing (TD) toddlers are analyzed. The results suggest that the agreement between clinical diagnosis and the RTI method reaches 95% for all 20 subjects, which indicates that vision-based solutions are highly feasible for automated autism diagnosis.
8
Whelpley CE, May CP. Seeing is Disliking: Evidence of Bias Against Individuals with Autism Spectrum Disorder in Traditional Job Interviews. J Autism Dev Disord 2022; 53:1363-1374. [PMID: 35294714 DOI: 10.1007/s10803-022-05432-2]
Abstract
Job interviews are an integral component of the hiring process in most fields. Our research examines job interview performance of those with autism spectrum disorder (ASD) compared to neurotypical (NT) individuals. ASD and NT individuals were taped engaging in mock job interviews. Candidates were rated on a variety of dimensions by respondents who either watched the interview videos or read the interview transcripts and were naïve to the neurodiversity of the interviewees. NT candidates outperformed ASD candidates in the video condition, but in the absence of visual and social cues (transcript condition), individuals with ASD outperformed NT candidates. Our findings suggest that social style significantly influences hiring decisions in traditional job interviews and may bias evaluators against otherwise qualified candidates.
Affiliation(s)
- Christopher E Whelpley: Department of Management and Entrepreneurship, School of Business, Virginia Commonwealth University, 301 West Main Street, Richmond, VA 23284-4000, USA.
- Cynthia P May: Department of Psychology, College of Charleston, 66 George St., Charleston, SC 29424, USA.
9
Zhang K, Yuan Y, Chen J, Wang G, Chen Q, Luo M. Eye Tracking Research on the Influence of Spatial Frequency and Inversion Effect on Facial Expression Processing in Children with Autism Spectrum Disorder. Brain Sci 2022; 12:283. [PMID: 35204046 PMCID: PMC8870542 DOI: 10.3390/brainsci12020283]
Abstract
Facial expression processing mainly depends on whether the facial features related to expressions can be fully acquired, and whether appropriate processing strategies can be adopted according to different conditions. Children with autism spectrum disorder (ASD) have difficulty accurately recognizing facial expressions and responding appropriately, which is regarded as an important cause of their social disorders. This study used eye tracking technology to explore the internal processing mechanism of facial expressions in children with ASD under the influence of spatial frequency and inversion effects, with the aim of improving their social difficulties. The facial expression recognition rate and eye tracking characteristics of children with ASD and typically developing (TD) children on the facial areas of interest were recorded and analyzed. The results of the multi-factor mixed-design experiment showed that the facial expression recognition rate of children with ASD under various conditions was significantly lower than that of TD children. TD children had more visual attention to the eyes area. However, children with ASD preferred the features of the mouth area, and lacked visual attention to and processing of the eyes area. When the face was inverted, TD children showed the inversion effect under all three spatial frequency conditions, which was manifested as a significant decrease in expression recognition rate. However, children with ASD only showed the inversion effect under the LSF condition, indicating that they mainly used a featural processing method and had the capacity for configural processing under the LSF condition. The eye tracking results showed that when the face was inverted or facial feature information was weakened, both children with ASD and TD children adjusted their facial expression processing strategies accordingly, increasing the visual attention to and information processing of their preferred areas. The fixation counts and fixation duration of TD children on the eyes area increased significantly, while the fixation duration of children with ASD on the mouth area increased significantly. The results of this study provide theoretical and practical support for facial expression intervention in children with ASD.
Affiliation(s)
- Kun Zhang: National Engineering Research Center for E-Learning, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China; National Engineering Laboratory for Educational Big Data, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China.
- Yishuang Yuan: National Engineering Research Center for E-Learning, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China; National Engineering Laboratory for Educational Big Data, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China.
- Jingying Chen (corresponding author): National Engineering Research Center for E-Learning, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China; National Engineering Laboratory for Educational Big Data, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China.
- Guangshuai Wang: School of Computer Science, Wuhan University, Wuhan 430072, China.
- Qian Chen: National Engineering Research Center for E-Learning, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China; National Engineering Laboratory for Educational Big Data, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China.
- Meijuan Luo: National Engineering Research Center for E-Learning, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China; National Engineering Laboratory for Educational Big Data, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China.
10
Ghosh S, Guha T. Towards Autism Screening through Emotion-guided Eye Gaze Response. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:820-823. [PMID: 34891416 DOI: 10.1109/embc46164.2021.9630888]
Abstract
Individuals with Autism Spectrum Disorder (ASD) are known to have significantly limited social interaction abilities, which are often manifested in different non-verbal cues of communication such as facial expression and atypical eye gaze response. While prior works have leveraged the role of pupil response for screening ASD, limited work has been carried out on the influence of emotion stimuli on pupil response for ASD screening. We, in this paper, design, develop, and evaluate a lightweight LSTM (Long Short-Term Memory) model that captures pupil responses (pupil diameter, fixation duration, and fixation location) based on social interaction with a virtual agent and detects ASD sessions from short interactions. Our findings demonstrate that all the pupil responses vary significantly in the ASD sessions in response to the different emotion (angry, happy, neutral) stimuli applied. These findings support ASD screening with an average accuracy of 77%, and the accuracy improves further (>80%) for the angry and happy emotion stimuli.
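The sketch below shows a small Keras LSTM over per-frame gaze and pupil features of the kind listed above (pupil diameter, fixation duration, fixation location), producing a per-session probability of ASD. The feature layout, sequence length, and layer sizes are illustrative assumptions rather than the authors' architecture.

```python
# Hedged sketch: lightweight LSTM classifier over zero-padded gaze/pupil sequences.
import tensorflow as tf

SEQ_LEN, N_FEATURES = 50, 4  # assumed: pupil diameter, fixation duration, fixation x, y

model = tf.keras.Sequential([
    tf.keras.layers.Masking(mask_value=0.0, input_shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(32),                        # small recurrent encoder
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of an ASD session
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=30, batch_size=16)
```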
11
Berry M, Brown S. The dynamic mask: Facial correlates of character portrayal in professional actors. Q J Exp Psychol (Hove) 2021; 75:936-953. [PMID: 34499014 PMCID: PMC8958566 DOI: 10.1177/17470218211047935]
Abstract
Actors make modifications to their face, voice, and body to match standard gestural conceptions of the fictional characters they are portraying during stage performances. However, the gestural manifestations of acting have not been quantified experimentally, least of all in group-level analyses. To quantify the facial correlates of character portrayal in professional actors for the first time, we had 24 actors portray a contrastive series of nine stock characters (e.g., king, bully, lover) that were organised according to a predictive scheme based on the two statistically independent personality dimensions of assertiveness (i.e., the tendency to satisfy personal concerns) and cooperativeness (i.e., the tendency to satisfy others’ concerns). We used three-dimensional motion capture to examine changes in facial dimensions, with an emphasis on the relative expansion/contraction of four facial segments related to the brow, eyebrows, lips, and jaw, respectively. The results demonstrated that expansions in both upper- and lower-facial segments were related to increases in the levels of character cooperativeness, but not assertiveness. These findings demonstrate that actors reliably manipulate their facial features in a contrastive manner to differentiate characters based on their underlying personality traits.
Affiliation(s)
- Matthew Berry: Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada.
- Steven Brown: Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada.
12
Wang G, Wang Z, Jiang K, Huang B, He Z, Hu R. Silicone mask face anti-spoofing detection based on visual saliency and facial motion. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.06.033]
13
Using 2D video-based pose estimation for automated prediction of autism spectrum disorders in young children. Sci Rep 2021; 11:15069. [PMID: 34301963 PMCID: PMC8302646 DOI: 10.1038/s41598-021-94378-z]
Abstract
Clinical research in autism has recently witnessed promising digital phenotyping results, mainly focused on single feature extraction, such as gaze, head turn on name-calling, or visual tracking of a moving object. The main drawback of these studies is the focus on relatively isolated behaviors elicited by largely controlled prompts. We recognize that, while the diagnostic process relies on indexing specific behaviors, ASD also comes with broad impairments that often transcend single behavioral acts. For instance, atypical nonverbal behaviors manifest through global patterns of atypical postures and movements, and fewer gestures, often decoupled from visual contact, facial affect, and speech. Here, we tested the hypothesis that a deep neural network trained on the non-verbal aspects of social interaction can effectively differentiate between children with ASD and their typically developing peers. Our model achieves an accuracy of 80.9% (F1 score: 0.818; precision: 0.784; recall: 0.854), with the prediction probability positively correlated with the overall level of autism symptoms in the social affect and restricted and repetitive behaviors domains. Given the non-invasive and affordable nature of computer vision, our approach carries reasonable promise that reliable machine-learning-based ASD screening may become a reality in the not-too-distant future.
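For reference, the accuracy, precision, recall, and F1 figures quoted above are related in the usual way (F1 = 2PR/(P+R), and 2 x 0.784 x 0.854 / (0.784 + 0.854) ≈ 0.818, matching the reported value). The sketch below computes the same metrics from per-child predictions with scikit-learn; the labels are made up, not the study's data.

```python
# Hedged sketch: screening metrics from per-child predictions (placeholder labels).
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]  # 1 = ASD, 0 = typically developing
y_pred = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]  # hypothetical model outputs

precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="binary")
print(accuracy_score(y_true, y_pred), precision, recall, f1)
```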
14
Webster PJ, Wang S, Li X. Review: Posed vs. Genuine Facial Emotion Recognition and Expression in Autism and Implications for Intervention. Front Psychol 2021; 12:653112. [PMID: 34305720 PMCID: PMC8300960 DOI: 10.3389/fpsyg.2021.653112]
Abstract
Different styles of social interaction are one of the core characteristics of autism spectrum disorder (ASD). Social differences among individuals with ASD often include difficulty in discerning the emotions of neurotypical people based on their facial expressions. This review first covers the rich body of literature studying differences in facial emotion recognition (FER) in those with ASD, including behavioral studies and neurological findings. In particular, we highlight subtle emotion recognition and various factors related to inconsistent findings in behavioral studies of FER in ASD. Then, we discuss the dual problem of FER – namely facial emotion expression (FEE) or the production of facial expressions of emotion. Despite being less studied, social interaction involves both the ability to recognize emotions and to produce appropriate facial expressions. How others perceive facial expressions of emotion in those with ASD has remained an under-researched area. Finally, we propose a method for teaching FER [FER teaching hierarchy (FERTH)] based on recent research investigating FER in ASD, considering the use of posed vs. genuine emotions and static vs. dynamic stimuli. We also propose two possible teaching approaches: (1) a standard method of teaching progressively from simple drawings and cartoon characters to more complex audio-visual video clips of genuine human expressions of emotion with context clues or (2) teaching in a field of images that includes posed and genuine emotions to improve generalizability before progressing to more complex audio-visual stimuli. Lastly, we advocate for autism interventionists to use FER stimuli developed primarily for research purposes to facilitate the incorporation of well-controlled stimuli to teach FER and bridge the gap between intervention and research in this area.
Affiliation(s)
- Paula J Webster: Department of Chemical and Biomedical Engineering, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV, United States.
- Shuo Wang: Department of Chemical and Biomedical Engineering, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV, United States.
- Xin Li: Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV, United States.
15
Tunçgenç B, Pacheco C, Rochowiak R, Nicholas R, Rengarajan S, Zou E, Messenger B, Vidal R, Mostofsky SH. Computerized Assessment of Motor Imitation as a Scalable Method for Distinguishing Children With Autism. Biol Psychiatry Cogn Neurosci Neuroimaging 2021; 6:321-328. [PMID: 33229247 PMCID: PMC7943651 DOI: 10.1016/j.bpsc.2020.09.001]
Abstract
BACKGROUND Imitation deficits are prevalent in autism spectrum conditions (ASCs) and are associated with core autistic traits. Imitating others' actions is central to the development of social skills in typically developing populations, as it facilitates social learning and bond formation. We present a Computerized Assessment of Motor Imitation (CAMI) using a brief (1-min), highly engaging video game task. METHODS Using Kinect Xbox motion tracking technology, we recorded 48 children (27 with ASCs, 21 typically developing) as they imitated a model's dance movements. We implemented an algorithm based on metric learning and dynamic time warping that automatically detects and evaluates the important joints and returns a score considering spatial position and timing differences between the child and the model. To establish construct validity and reliability, we compared imitation performance measured by the CAMI method to the more traditional human observation coding (HOC) method across repeated trials and two different movement sequences. RESULTS Results revealed poorer imitation in children with ASCs than in typically developing children (ps < .005), with poorer imitation being associated with increased core autism symptoms. While strong correlations between the CAMI and HOC methods (rs = .69-.87) confirmed the CAMI's construct validity, CAMI scores classified the children into diagnostic groups better than the HOC scores (accuracyCAMI = 87.2%, accuracyHOC = 74.4%). Finally, by comparing repeated movement trials, we demonstrated high test-retest reliability of CAMI (rs = .73-.86). CONCLUSIONS Findings support the CAMI as an objective, highly scalable, directly interpretable method for assessing motor imitation differences, providing a promising biomarker for defining biologically meaningful ASC subtypes and guiding intervention.
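A minimal sketch of the dynamic time warping part of such a scoring pipeline is shown below: it aligns the child's and the model's joint-position sequences and maps the normalised alignment cost to a bounded imitation score. The joint selection, metric learning, and score calibration used in CAMI are not reproduced; the array shapes and the exponential score mapping are assumptions.

```python
# Hedged sketch: DTW-based imitation score between child and model movement sequences.
import numpy as np

def dtw_cost(child, model):
    """Length-normalised DTW cost between two (frames x joint-coordinates) arrays."""
    child, model = np.asarray(child, dtype=float), np.asarray(model, dtype=float)
    n, m = len(child), len(model)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(child[i - 1] - model[j - 1])  # per-frame joint distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)

def imitation_score(child, model, scale=1.0):
    # Map cost to (0, 1]; 1 means a perfect spatial and temporal match.
    return float(np.exp(-dtw_cost(child, model) / scale))

rng = np.random.default_rng(1)
model_seq = rng.normal(size=(60, 6))  # 60 frames x 3 joints with (x, y) coordinates
child_seq = model_seq + rng.normal(scale=0.1, size=model_seq.shape)
print(imitation_score(child_seq, model_seq))
```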
Affiliation(s)
- Bahar Tunçgenç: Center for Neurodevelopment and Imaging Research, Kennedy Krieger Institute, Baltimore, Maryland; School of Psychology, University of Nottingham, Nottingham, United Kingdom.
- Carolina Pacheco: Mathematical Institute for Data Science, Johns Hopkins University, Baltimore, Maryland; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland.
- Rebecca Rochowiak: Center for Neurodevelopment and Imaging Research, Kennedy Krieger Institute, Baltimore, Maryland.
- Rosemary Nicholas: Sir Peter Mansfield Imaging Centre, University of Nottingham, Nottingham, United Kingdom.
- Sundararaman Rengarajan: Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and University of California San Diego, San Diego, California.
- Erin Zou: Center for Neurodevelopment and Imaging Research, Kennedy Krieger Institute, Baltimore, Maryland.
- Brice Messenger: Center for Neurodevelopment and Imaging Research, Kennedy Krieger Institute, Baltimore, Maryland.
- René Vidal: Mathematical Institute for Data Science, Johns Hopkins University, Baltimore, Maryland; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland.
- Stewart H Mostofsky: Center for Neurodevelopment and Imaging Research, Kennedy Krieger Institute, Baltimore, Maryland; Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, Maryland; Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, Maryland.
16
Carpenter KLH, Hashemi J, Campbell K, Lippmann SJ, Baker JP, Egger HL, Espinosa S, Vermeer S, Sapiro G, Dawson G. Digital Behavioral Phenotyping Detects Atypical Pattern of Facial Expression in Toddlers with Autism. Autism Res 2021; 14:488-499. [PMID: 32924332 PMCID: PMC7920907 DOI: 10.1002/aur.2391]
Abstract
Commonly used screening tools for autism spectrum disorder (ASD) generally rely on subjective caregiver questionnaires. While behavioral observation is more objective, it is also expensive, time-consuming, and requires significant expertise to perform. As such, there remains a critical need to develop feasible, scalable, and reliable tools that can characterize ASD risk behaviors. This study assessed the utility of a tablet-based behavioral assessment for eliciting and detecting one type of risk behavior, namely, patterns of facial expression, in 104 toddlers (ASD N = 22) and evaluated whether such patterns differentiated toddlers with and without ASD. The assessment consisted of the child sitting on his/her caregiver's lap and watching brief movies shown on a smart tablet while the embedded camera recorded the child's facial expressions. Computer vision analysis (CVA) automatically detected and tracked facial landmarks, which were used to estimate head position and facial expressions (Positive, Neutral, All Other). Using CVA, specific points throughout the movies were identified that reliably differentiate between children with and without ASD based on their patterns of facial movement and expressions (area under the curves for individual movies ranging from 0.62 to 0.73). During these instances, children with ASD more frequently displayed Neutral expressions compared to children without ASD, who had more All Other expressions. The frequency of All Other expressions was driven by non-ASD children more often displaying raised eyebrows and an open mouth, characteristic of engagement/interest. Preliminary results suggest computational coding of facial movements and expressions via a tablet-based assessment can detect differences in affective expression, one of the early, core features of ASD. LAY SUMMARY: This study tested the use of a tablet in the behavioral assessment of young children with autism. Children watched a series of developmentally appropriate movies and their facial expressions were recorded using the camera embedded in the tablet. Results suggest that computational assessments of facial expressions may be useful in early detection of symptoms of autism.
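The sketch below shows one generic way to obtain per-frame facial landmarks from such a recorded video using MediaPipe Face Mesh; it is not the computer vision pipeline used in the study, and the video filename and any downstream head-pose or expression features are assumptions.

```python
# Hedged sketch: per-frame facial landmark extraction with OpenCV + MediaPipe Face Mesh.
import cv2
import mediapipe as mp

cap = cv2.VideoCapture("toddler_session.mp4")  # hypothetical tablet recording
landmarks_per_frame = []
with mp.solutions.face_mesh.FaceMesh(static_image_mode=False, max_num_faces=1) as mesh:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            face = results.multi_face_landmarks[0]
            # Normalised (x, y, z) coordinates for each mesh point in this frame.
            landmarks_per_frame.append([(p.x, p.y, p.z) for p in face.landmark])
cap.release()
print(len(landmarks_per_frame), "frames with a detected face")
```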
Affiliation(s)
- Kimberly L H Carpenter: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, North Carolina, USA.
- Jordan Hashemi: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, North Carolina, USA; Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA.
- Kathleen Campbell: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, North Carolina, USA; Department of Pediatrics, University of Utah, Salt Lake City, Utah, USA.
- Steven J Lippmann: Department of Population Health Sciences, Duke University School of Medicine, Durham, North Carolina, USA.
- Jeffrey P Baker: Department of Pediatrics, Duke University School of Medicine, Durham, North Carolina, USA.
- Helen L Egger: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, North Carolina, USA; NYU Langone Child Study Center, New York University, New York, New York, USA.
- Steven Espinosa: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, North Carolina, USA; Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA.
- Saritha Vermeer: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, North Carolina, USA.
- Guillermo Sapiro: Departments of Biomedical Engineering, Computer Science, and Mathematics, Duke University, Durham, North Carolina, USA.
- Geraldine Dawson: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, North Carolina, USA; Duke Institute for Brain Sciences, Duke University, Durham, North Carolina, USA.
17
Valle CL, Chenausky K, Tager-Flusberg H. How do minimally verbal children and adolescents with autism spectrum disorder use communicative gestures to complement their spoken language abilities? Autism Dev Lang Impair 2021; 6:23969415211035065. [PMID: 35155817 PMCID: PMC8837194 DOI: 10.1177/23969415211035065]
Abstract
BACKGROUND AND AIMS Prior work has examined how children and adolescents with autism spectrum disorder who are minimally verbal use their spoken language abilities during interactions with others. However, social communication includes other aspects beyond speech. To our knowledge, no studies have examined how minimally verbal children and adolescents with autism spectrum disorder are using their gestural communication during social interactions. Such work can provide important insights into how gestures may complement their spoken language abilities. METHODS Fifty minimally verbal children and adolescents with autism spectrum disorder participated (M age = 12.41 years; 38 males). Gestural communication was coded from the Autism Diagnostic Observation Schedule. Children (n = 25) and adolescents (n = 25) were compared on their production of gestures, gesture-speech combinations, and communicative functions. Communicative functions were also assessed by the type of communication modality: gesture, speech, and gesture-speech to examine the range of communicative functions across different modalities of communication. To explore the role gestures may play the relation between speech utterances and gestural production was investigated. RESULTS Analyses revealed that (1) minimally verbal children and adolescents with autism spectrum disorder did not differ in their total number of gestures. The most frequently produced gesture across children and adolescents was a reach gesture, followed by a point gesture (deictic gesture), and then conventional gestures. However, adolescents produced more gesture-speech combinations (reinforcing gesture-speech combinations) and displayed a wider range of communicative functions. (2) Overlap was found in the types of communicative functions expressed across different communication modalities. However, requests were conveyed via gesture more frequently compared to speech or gesture-speech. In contrast, dis/agree/acknowledging and responding to a question posed by the conversational partner was expressed more frequently via speech compared to gesture or gesture-speech. (3) The total number of gestures was negatively associated with total speech utterances after controlling for chronological age, receptive communication ability, and nonverbal IQ. CONCLUSIONS Adolescents may be employing different communication strategies to maintain the conversational exchange and to further clarify the message they want to convey to the conversational partner. Although overlap occurred in communicative functions across gesture, speech, and gesture-speech, nuanced differences emerged in how often they were expressed across different modalities of communication. Given their speech production abilities, gestures may play a compensatory role for some individuals with autism spectrum disorder who are minimally verbal. IMPLICATIONS Findings underscore the importance of assessing multiple modalities of communication to provide a fuller picture of their social communication abilities. Our results identified specific communicative strengths and areas for growth that can be targeted and expanded upon within gesture and speech to optimize social communication development.
Affiliation(s)
- Chelsea La Valle: Department of Psychological & Brain Sciences, Boston University, Center for Autism Research Excellence, 100 Cummington Mall, Boston, MA 02215, USA.
- Helen Tager-Flusberg: Department of Psychological and Brain Sciences, Boston University, Center for Autism Research Excellence, Boston, MA, USA.
18
Briot K, Pizano A, Bouvard M, Amestoy A. New Technologies as Promising Tools for Assessing Facial Emotion Expressions Impairments in ASD: A Systematic Review. Front Psychiatry 2021; 12:634756. [PMID: 34025469 PMCID: PMC8131507 DOI: 10.3389/fpsyt.2021.634756]
Abstract
The abilities to recognize and express emotions through facial expressions are essential for successful social interactions. Facial Emotion Recognition (FER) and Facial Emotion Expressions (FEEs), both of which seem to be impaired in Autism Spectrum Disorders (ASD) and contribute to socio-communicative difficulties, are part of the diagnostic criteria for ASD. Only a few studies have focused on FEE processing, and the rare behavioral studies of FEEs in ASD have yielded mixed results. Here, we review studies comparing the production of FEEs between participants with ASD and non-ASD control subjects, with a particular focus on the use of automatic facial expression analysis software. A systematic literature search in accordance with the PRISMA statement identified 20 reports published up to August 2020 concerning the use of new technologies to evaluate both spontaneous and voluntary FEEs in participants with ASD. Overall, the results highlight the importance of considering socio-demographic factors and psychiatric co-morbidities, which may explain the previous inconsistent findings, particularly regarding quantitative data on spontaneous facial expressions. There is also reported evidence for an inadequacy of FEEs in individuals with ASD in relation to the expected emotion, with a lower quality and coordination of facial muscular movements. Spatial and kinematic approaches to characterizing the synchrony, symmetry, and complexity of facial muscle movements thus offer clues for identifying and exploring promising new diagnostic targets. These findings support the hypothesis that there may be mismatches between mental representations and the production of FEEs themselves in ASD. Such considerations are in line with a Facial Feedback Hypothesis deficit in ASD as part of the Broken Mirror Theory, with the results suggesting impairments of neural sensory-motor systems involved in processing emotional information and ensuring embodied representations of emotions, which are the basis of human empathy. In conclusion, new technologies are promising tools for evaluating the production of FEEs in individuals with ASD, and controlled studies involving larger samples of patients, in which possible confounding factors are considered, should be conducted in order to better understand and address the difficulties in global emotional processing in ASD.
Affiliation(s)
- Kellen Briot: Medical Sciences Department, University of Bordeaux, Bordeaux, France; Pôle Universitaire de Psychiatrie de l'Enfant et de l'Adolescent, Centre Hospitalier Charles-Perrens, Bordeaux, France; Aquitaine Institute for Cognitive and Integrative Neuroscience (INCIA), UMR 5287, CNRS, Bordeaux, France.
- Adrien Pizano: Medical Sciences Department, University of Bordeaux, Bordeaux, France; Pôle Universitaire de Psychiatrie de l'Enfant et de l'Adolescent, Centre Hospitalier Charles-Perrens, Bordeaux, France; Aquitaine Institute for Cognitive and Integrative Neuroscience (INCIA), UMR 5287, CNRS, Bordeaux, France.
- Manuel Bouvard: Medical Sciences Department, University of Bordeaux, Bordeaux, France; Pôle Universitaire de Psychiatrie de l'Enfant et de l'Adolescent, Centre Hospitalier Charles-Perrens, Bordeaux, France; Aquitaine Institute for Cognitive and Integrative Neuroscience (INCIA), UMR 5287, CNRS, Bordeaux, France.
- Anouck Amestoy: Medical Sciences Department, University of Bordeaux, Bordeaux, France; Pôle Universitaire de Psychiatrie de l'Enfant et de l'Adolescent, Centre Hospitalier Charles-Perrens, Bordeaux, France; Aquitaine Institute for Cognitive and Integrative Neuroscience (INCIA), UMR 5287, CNRS, Bordeaux, France.
19
Hashemi J, Dawson G, Carpenter KLH, Campbell K, Qiu Q, Espinosa S, Marsan S, Baker JP, Egger HL, Sapiro G. Computer Vision Analysis for Quantification of Autism Risk Behaviors. IEEE Trans Affect Comput 2021; 12:215-226. [PMID: 35401938 PMCID: PMC8993160 DOI: 10.1109/taffc.2018.2868196]
Abstract
Observational behavior analysis plays a key role for the discovery and evaluation of risk markers for many neurodevelopmental disorders. Research on autism spectrum disorder (ASD) suggests that behavioral risk markers can be observed at 12 months of age or earlier, with diagnosis possible at 18 months. To date, these studies and evaluations involving observational analysis tend to rely heavily on clinical practitioners and specialists who have undergone intensive training to be able to reliably administer carefully designed behavioural-eliciting tasks, code the resulting behaviors, and interpret such behaviors. These methods are therefore extremely expensive, time-intensive, and are not easily scalable for large population or longitudinal observational analysis. We developed a self-contained, closed-loop, mobile application with movie stimuli designed to engage the child's attention and elicit specific behavioral and social responses, which are recorded with a mobile device camera and then analyzed via computer vision algorithms. Here, in addition to presenting this paradigm, we validate the system to measure engagement, name-call responses, and emotional responses of toddlers with and without ASD who were presented with the application. Additionally, we show examples of how the proposed framework can further risk marker research with fine-grained quantification of behaviors. The results suggest these objective and automatic methods can be considered to aid behavioral analysis, and can be suited for objective automatic analysis for future studies.
Affiliation(s)
- Jordan Hashemi: Department of Electrical and Computer Engineering, Duke University, Durham, NC.
- Geraldine Dawson: Department of Psychiatry and Behavioral Sciences, Duke Center for Autism and Brain Development, and the Duke Institute for Brain Sciences, Durham, NC.
- Kimberly L H Carpenter: Department of Psychiatry and Behavioral Sciences, Duke Center for Autism and Brain Development, and the Duke Institute for Brain Sciences, Durham, NC.
- Qiang Qiu: Department of Electrical and Computer Engineering, Duke University, Durham, NC.
- Steven Espinosa: Department of Electrical and Computer Engineering, Duke University, Durham, NC.
- Samuel Marsan: Department of Psychiatry and Behavioral Sciences, Durham, NC.
- Helen L Egger: Department of Child and Adolescent Psychiatry, NYU Langone Health, New York, NY; she performed this work while at Duke University.
- Guillermo Sapiro: Department of Electrical and Computer Engineering, Duke University, Durham, NC.
20
Tang C, Zheng W, Zong Y, Qiu N, Lu C, Zhang X, Ke X, Guan C. Automatic Identification of High-Risk Autism Spectrum Disorder: A Feasibility Study Using Video and Audio Data Under the Still-Face Paradigm. IEEE Trans Neural Syst Rehabil Eng 2020; 28:2401-2410. [PMID: 32991285 DOI: 10.1109/tnsre.2020.3027756]
Abstract
It is reported that the symptoms of autism spectrum disorder (ASD) can be improved by effective early interventions, which creates an urgent need for large-scale early identification of ASD. Until now, the screening of ASD has relied on child psychiatrists collecting medical histories and conducting behavioral observations with the help of psychological assessment tools. Such screening measures inevitably have some disadvantages, including strong subjectivity, reliance on experts, and low efficiency. With the development of computer science, it is possible to realize computer-aided screening for ASD and alleviate the disadvantages of manual evaluation. In this study, we propose a behavior-based automated screening method to identify high-risk ASD (HR-ASD) in babies aged 8-24 months. The still-face paradigm (SFP) was used to elicit the baby's spontaneous social behavior through a face-to-face interaction, in which the mother was required to maintain a normal interaction to amuse her baby for 2 minutes (a baseline episode) and then suddenly switch to a no-reaction, no-expression state for 1 minute (a still-face episode). Multiple cues derived from the baby's social stress response during the latter episode, including head movements, facial expressions, and vocal characteristics, were statistically compared between the HR-ASD and typically developing (TD) groups. An automated identification model for HR-ASD was constructed based on these multi-cue features and a support vector machine (SVM) classifier; moreover, its screening performance was satisfactory, as accuracy, specificity, and sensitivity all exceeded 90% on the cases included in this study. The experimental results suggest its feasibility for the early screening of HR-ASD.
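A minimal sketch of the classification stage is given below: an RBF-kernel SVM over one multi-cue feature vector per infant (head movement, facial expression, and vocal statistics), evaluated with cross-validation. Feature extraction and the reported >90% figures are not reproduced; the feature matrix and labels are random placeholders.

```python
# Hedged sketch: SVM over multi-cue still-face features (placeholder data).
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((40, 24))         # 40 infants x 24 head/face/voice features
y = rng.integers(0, 2, size=40)  # 1 = high-risk ASD, 0 = typically developing

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print(cross_val_score(clf, X, y, cv=5).mean())  # mean cross-validated accuracy
```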
21
de Belen RAJ, Bednarz T, Sowmya A, Del Favero D. Computer vision in autism spectrum disorder research: a systematic review of published studies from 2009 to 2019. Transl Psychiatry 2020; 10:333. [PMID: 32999273 PMCID: PMC7528087 DOI: 10.1038/s41398-020-01015-w]
Abstract
The current state of computer vision methods applied to autism spectrum disorder (ASD) research has not been well established. Increasing evidence suggests that computer vision techniques have a strong impact on autism research. The primary objective of this systematic review is to examine how computer vision analysis has been useful in ASD diagnosis, therapy and autism research in general. A systematic review of publications indexed on PubMed, IEEE Xplore and ACM Digital Library was conducted from 2009 to 2019. Search terms included ['autis*' AND ('computer vision' OR 'behavio* imaging' OR 'behavio* analysis' OR 'affective computing')]. Results are reported according to the PRISMA statement. A total of 94 studies are included in the analysis. Eligible papers are categorised based on the potential biological/behavioural markers quantified in each study. Then, the different computer vision approaches employed in the included papers are described. Different publicly available datasets are also reviewed in order to rapidly familiarise researchers with datasets applicable to their field and to accelerate both new behavioural and technological work on autism research. Finally, future research directions are outlined. The findings in this review suggest that computer vision analysis is useful for the quantification of behavioural/biological markers, which can further lead to a more objective analysis in autism research.
Affiliation(s)
- Tomasz Bednarz
- School of Art & Design, University of New South Wales, Sydney, NSW, Australia
- Arcot Sowmya
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia
- Dennis Del Favero
- School of Art & Design, University of New South Wales, Sydney, NSW, Australia
22
Abstract
Purpose
The purpose of this study is to investigate the social and affective aspects of communication in school-age children with high-functioning autism (HFA) and school-age children with Williams syndrome (WS) using a micro-analytic approach. Social communication is important for success at home, school, work and in the community. Lacking the ability to effectively process and convey information can lead to deficits in social communication. Individuals with HFA and individuals with WS often have significant impairments in social communication that impact their relationships with others. Currently, little is known about how school-age children use and integrate verbal and non-verbal behaviors in the context of a social interaction.
Design/methodology/approach
A micro-analytic coding scheme was devised to reveal which channels children use to convey information. Language, eye gaze behaviors and facial expressions of the child were coded during this dyadic social interaction. These behaviors were coded throughout the entire interview, as well as when the child was the speaker and when the child was the listener.
Findings
Language results continued to pose problems for the HFA and WS groups compared to their typically developing (TD) peers. For non-verbal communicative behaviors, a qualitative difference in the use of eye gaze was found between the HFA and WS groups. For facial expression, the WS and TD groups produced more facial expressions than the HFA group.
Research limitations/implications
No differences were observed in the HFA group when playing different roles in a conversation, suggesting they are not as sensitive to the social rules of a conversation as their peers. Insights from this study add knowledge toward understanding social-communicative development in school-age children.
Originality/value
In this study, two non-verbal behaviors will be assessed in multiple contexts: the entire biographical interview, when the child is the speaker and when the child is the listener. These social and expressive measures give an indication of how expressive school-age children are and provide information on their attention, affective state and communication skills when conversing with an adult. Insights from this study will add knowledge toward understanding social-communicative development in school-age children.
23
Yang Y. A preliminary evaluation of still face images by deep learning: A potential screening test for childhood developmental disabilities. Med Hypotheses 2020; 144:109978. [PMID: 32540607 DOI: 10.1016/j.mehy.2020.109978] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2020] [Revised: 05/22/2020] [Accepted: 06/05/2020] [Indexed: 12/01/2022]
Abstract
Most developmental disorders are defined by their clinical symptoms, and many disorders share common features. The main objective of this research is to evaluate still facial images as a potential screening test for childhood developmental disabilities that is free of the biases of subjective judgment by human observers. Via supervised machine learning, a convolutional neural network (CNN) classifier was built using 908 facial images, half of which were photos of children labeled with "autism", a label that may include some developmental disorders with autism-like features. Face images were then generated for the two categories of photos. The most important finding of this research is that face images labeled "autism" and those of normal controls populate two quite distinct manifolds. Different patterns were found around the eyes and mouth in the photos generated by deep learning for the two categories of faces. This shows that supervised machine learning can obtain facial features that could possibly be applicable to improving early screening for childhood developmental disabilities by facial expression. A simple computer-based screening test of still face images may prove to be a useful adjunct in many clinical settings.
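As a point of reference only, a compact CNN for a binary face-image classification of this kind might look like the sketch below; the directory layout, image size, and training settings are assumptions rather than details from the paper.

```python
# Minimal binary CNN sketch for still face images (illustrative assumptions only:
# "faces/train" holds one sub-folder per class, e.g. autism_labeled/ and control/).
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "faces/train", image_size=(128, 128), batch_size=32, label_mode="binary")

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),       # P(label = "autism")
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```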
Affiliation(s)
- You Yang
- Department of Developmental and Behavioral Pediatrics, Shanghai Children's Medical Center, Shanghai Jiaotong University School of Medicine, Shanghai 200127, PR China.
24
Lai PT, Ng R, Bellugi U. Parental report of cognitive and social-emotionality traits in school-age children with autism and Williams syndrome. International Journal of Developmental Disabilities 2020; 68:309-316. [PMID: 35603004 PMCID: PMC9122353 DOI: 10.1080/20473869.2020.1765296] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/03/2019] [Revised: 03/28/2020] [Accepted: 04/30/2020] [Indexed: 06/15/2023]
Abstract
The majority of research examining children with Autism Spectrum Disorder (ASD) and Williams Syndrome (WS) focuses on the social domain, while few studies have examined cognitive style and emotionality. Accordingly, the current study assessed the day-to-day cognitive and behavioral functioning of school-age children with ASD, WS, and neurotypical development (ND) through caregiver-report inventories to further delineate commonalities and disparities in cognitive and social-emotional traits. Two caregiver-report inventories, the Children's Behavior Questionnaire and the Multidimensional Personality Questionnaire, were employed to assess the day-to-day functioning of children ages 7-14 years. Participants included 64 caregivers of children: 25 were caregivers of children with high-functioning autism (HFA), 14 of children with WS, and 25 of children with ND. Multivariate analysis of covariance was computed to assess between-group differences for each subscale within a questionnaire, with age and full-scale IQ as covariates. For cognitive traits, group differences were observed across two categories, while seven were present within the social-emotional categories. The majority of the group effects reflected differences in social-emotional traits between the ND group and both neurodevelopmental groups, while limited distinctions were found between the two clinical groups. This brief report provides additional evidence that HFA and WS may show similarities in cognitive traits but more divergent social-emotional tendencies, despite controlling for age and intellect. The study highlights the large social-emotional differences that support prior phenotypic descriptions of both neurodevelopmental groups. Future research in these domains is needed to determine focused interventions to address social impairment.
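For orientation, a multivariate analysis of covariance over questionnaire subscales with age and IQ as covariates can be set up as in the hedged sketch below; the subscale and column names are invented for illustration and are not taken from the study.

```python
# Hedged MANCOVA-style sketch: several caregiver-report subscales modeled jointly
# as a function of group, adjusting for age and full-scale IQ. Columns are assumptions.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("caregiver_report_scores.csv")   # hypothetical: group, age, fsiq, subscales
m = MANOVA.from_formula(
    "shyness + inhibitory_control + social_closeness ~ group + age + fsiq", data=df)
print(m.mv_test())
```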
Affiliation(s)
- Philip T. Lai
- Joint Doctoral Program in Language and Communicative Disorders, School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
- Joint Doctoral Program in Language and Communicative Disorders, Center for Research in Language, University of California San Diego, La Jolla, CA, USA
- Laboratory for Cognitive Neuroscience, Salk Institute for Biological Sciences, La Jolla, CA, USA
- Rowena Ng
- Laboratory for Cognitive Neuroscience, Salk Institute for Biological Sciences, La Jolla, CA, USA
- Institute of Child Development, University of Minnesota Twin Cities, Minneapolis, MN, USA
- Ursula Bellugi
- Laboratory for Cognitive Neuroscience, Salk Institute for Biological Sciences, La Jolla, CA, USA
25
Grossard C, Dapogny A, Cohen D, Bernheim S, Juillet E, Hamel F, Hun S, Bourgeois J, Pellerin H, Serret S, Bailly K, Chaby L. Children with autism spectrum disorder produce more ambiguous and less socially meaningful facial expressions: an experimental study using random forest classifiers. Mol Autism 2020; 11:5. [PMID: 31956394 PMCID: PMC6958757 DOI: 10.1186/s13229-020-0312-2] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2019] [Accepted: 01/01/2020] [Indexed: 01/19/2023] Open
Abstract
Background Computer vision combined with human annotation could offer a novel method for exploring facial expression (FE) dynamics in children with autism spectrum disorder (ASD). Methods We recruited 157 children with typical development (TD) and 36 children with ASD in Paris and Nice to perform two experimental tasks to produce FEs with emotional valence. FEs were explored through expert ratings and random forest (RF) classifiers. To do so, we located a set of 49 facial landmarks in the task videos, generated a set of geometric and appearance features, and used RF classifiers to explore how children with ASD differed from TD children when producing FEs. Results Using multivariate models including other factors known to predict FEs (age, gender, intellectual quotient, emotion subtype, cultural background), ratings from expert raters showed that children with ASD had more difficulty producing FEs than TD children. In addition, when we explored how RF classifiers performed, we found that classification tasks, except for those for sadness, were highly accurate and that RF classifiers needed more facial landmarks to achieve the best classification for children with ASD. Confusion matrices showed that when RF classifiers were tested in children with ASD, anger was often confounded with happiness. Limitations The sample size of the group of children with ASD was smaller than that of the group of TD children. By using several control calculations, we tried to compensate for this limitation. Conclusion Children with ASD have more difficulty producing socially meaningful FEs. The computer vision methods we used to explore FE dynamics also highlight that the production of FEs in children with ASD carries more ambiguity.
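A hedged sketch of the random-forest step described here, with per-group confusion matrices, is shown below; the landmark-derived feature table and its column names are assumptions, not the authors' released code.

```python
# Illustrative random-forest classification of produced facial expressions from
# landmark-derived geometric features, with confusion matrices split by group.
# The CSV layout ("geom_*" features, "emotion", "group") is an assumption.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

df = pd.read_csv("fe_landmark_features.csv")       # hypothetical: one row per task video
X = df.filter(like="geom_").to_numpy()             # e.g., pairwise landmark distances
y = df["emotion"].to_numpy()                       # e.g., joy / anger / sadness / neutral
group = df["group"].to_numpy()                     # "ASD" or "TD"

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
for g in ("TD", "ASD"):
    mask = g_te == g
    print(g, "confusion matrix:\n", confusion_matrix(y_te[mask], rf.predict(X_te[mask])))
```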
Affiliation(s)
- Charline Grossard
- Service de Psychiatrie de l'Enfant et de l'Adolescent, GH Pitié-Salpêtrière Charles Foix, APHP.6, Paris, France; Institut des Systèmes Intelligents et de Robotique, Sorbonne Université, ISIR CNRS UMR 7222, Paris, France
- Arnaud Dapogny
- Institut des Systèmes Intelligents et de Robotique, Sorbonne Université, ISIR CNRS UMR 7222, Paris, France
- David Cohen
- Service de Psychiatrie de l'Enfant et de l'Adolescent, GH Pitié-Salpêtrière Charles Foix, APHP.6, Paris, France; Institut des Systèmes Intelligents et de Robotique, Sorbonne Université, ISIR CNRS UMR 7222, Paris, France
- Sacha Bernheim
- Institut des Systèmes Intelligents et de Robotique, Sorbonne Université, ISIR CNRS UMR 7222, Paris, France
- Estelle Juillet
- Service de Psychiatrie de l'Enfant et de l'Adolescent, GH Pitié-Salpêtrière Charles Foix, APHP.6, Paris, France
- Fanny Hamel
- Service de Psychiatrie de l'Enfant et de l'Adolescent, GH Pitié-Salpêtrière Charles Foix, APHP.6, Paris, France
- Hugues Pellerin
- Service de Psychiatrie de l'Enfant et de l'Adolescent, GH Pitié-Salpêtrière Charles Foix, APHP.6, Paris, France
- Kevin Bailly
- Institut des Systèmes Intelligents et de Robotique, Sorbonne Université, ISIR CNRS UMR 7222, Paris, France
- Laurence Chaby
- Service de Psychiatrie de l'Enfant et de l'Adolescent, GH Pitié-Salpêtrière Charles Foix, APHP.6, Paris, France; Institut des Systèmes Intelligents et de Robotique, Sorbonne Université, ISIR CNRS UMR 7222, Paris, France; Institut de Psychologie, Université de Paris, 92100 Boulogne-Billancourt, France
26
Sorensen T, Zane E, Feng T, Narayanan S, Grossman R. Cross-Modal Coordination of Face-Directed Gaze and Emotional Speech Production in School-Aged Children and Adolescents with ASD. Sci Rep 2019; 9:18301. [PMID: 31797950 PMCID: PMC6892887 DOI: 10.1038/s41598-019-54587-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2019] [Accepted: 11/14/2019] [Indexed: 11/10/2022] Open
Abstract
Autism spectrum disorder involves persistent difficulties in social communication. Although these difficulties affect both verbal and nonverbal communication, there are no quantitative behavioral studies to date investigating the cross-modal coordination of verbal and nonverbal communication in autism. The objective of the present study was to characterize the dynamic relation between speech production and facial expression in children with autism and to establish how face-directed gaze modulates this cross-modal coordination. In a dynamic mimicry task, experiment participants watched and repeated neutral and emotional spoken sentences with accompanying facial expressions. Analysis of audio and motion capture data quantified cross-modal coordination between simultaneous speech production and facial expression. Whereas neurotypical children produced emotional sentences with strong cross-modal coordination and produced neutral sentences with weak cross-modal coordination, autistic children produced similar levels of cross-modal coordination for both neutral and emotional sentences. An eyetracking analysis revealed that cross-modal coordination of speech production and facial expression was greater when the neurotypical child spent more time looking at the face, but weaker when the autistic child spent more time looking at the face. In sum, social communication difficulties in autism spectrum disorder may involve deficits in cross-modal coordination. This finding may inform how autistic individuals are perceived in their daily conversations.
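One simple way to operationalize such cross-modal coordination, offered here only as an illustrative sketch with synthetic data rather than the study's actual motion-capture analysis, is to correlate the speech amplitude envelope with frame-by-frame facial marker motion and then relate that score to the proportion of face-directed gaze.

```python
# Toy sketch: cross-modal coordination as the correlation between per-frame speech
# energy and per-frame facial marker motion; gaze_on_face would come from eyetracking.
import numpy as np

def coordination_score(audio_envelope: np.ndarray, marker_motion: np.ndarray) -> float:
    """Pearson correlation between time-aligned speech energy and face motion."""
    n = min(len(audio_envelope), len(marker_motion))
    return float(np.corrcoef(audio_envelope[:n], marker_motion[:n])[0, 1])

rng = np.random.default_rng(0)                               # synthetic stand-in data
envelope = np.abs(rng.normal(size=500))                      # per-frame speech energy
motion = 0.5 * envelope + rng.normal(scale=0.5, size=500)    # per-frame marker speed
gaze_on_face = 0.62                                          # proportion of frames on the face

print(f"cross-modal coordination r = {coordination_score(envelope, motion):.2f}, "
      f"face-directed gaze = {gaze_on_face:.0%}")
```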
Affiliation(s)
- Tanner Sorensen
- Signal Analysis and Interpretation Laboratory, Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, 90089, USA
- Emily Zane
- Department of Communication Sciences and Disorders, Emerson College, Boston, MA, 02116, USA
- Tiantian Feng
- Signal Analysis and Interpretation Laboratory, Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, 90089, USA
- Shrikanth Narayanan
- Signal Analysis and Interpretation Laboratory, Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, 90089, USA
- Ruth Grossman
- Department of Communication Sciences and Disorders, Emerson College, Boston, MA, 02116, USA
27
Computational Analysis of Deep Visual Data for Quantifying Facial Expression Production. Applied Sciences (Basel) 2019. [DOI: 10.3390/app9214542] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The computational analysis of facial expressions is an emerging research topic that could overcome the limitations of human perception and provide quick and objective outcomes in the assessment of neurodevelopmental disorders (e.g., Autism Spectrum Disorders, ASD). Unfortunately, there have been only a few attempts to quantify facial expression production, and most of the scientific literature addresses the easier task of recognizing whether a facial expression is present or not. Some attempts to face this challenging task exist, but they do not provide a comprehensive study based on the comparison between human and automatic outcomes in quantifying children's ability to produce basic emotions. Furthermore, these works do not exploit the latest solutions in computer vision and machine learning. Finally, they generally focus only on a homogeneous (in terms of cognitive capabilities) group of individuals. To fill this gap, in this paper advanced computer vision and machine learning strategies are integrated into a framework aimed at computationally analyzing how both ASD and typically developing children produce facial expressions. The framework locates and tracks a number of landmarks (virtual electromyography sensors) with the aim of monitoring the facial muscle movements involved in facial expression production. The output of these virtual sensors is then fused to model the individual ability to produce facial expressions. The gathered computational outcomes have been correlated with the evaluations provided by psychologists, and evidence is given that the proposed framework can be effectively exploited to analyze in depth the emotional competence of children with ASD in producing facial expressions.
28
Grossman RB, Mertens J, Zane E. Perceptions of self and other: Social judgments and gaze patterns to videos of adolescents with and without autism spectrum disorder. Autism: The International Journal of Research and Practice 2019; 23:846-857. [PMID: 30014714 PMCID: PMC6403013 DOI: 10.1177/1362361318788071] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Neurotypical adults often form negative first impressions of individuals with autism spectrum disorder and are less interested in engaging with them socially. In contrast, individuals with autism spectrum disorder actively seek out the company of others who share their diagnosis. It is not clear, however, whether individuals with autism spectrum disorder form more positive first impressions of autistic peers when diagnosis is not explicitly shared. We asked adolescents with and without autism spectrum disorder to watch brief video clips of adolescents with and without autism spectrum disorder and answer questions about their impressions of the individuals in the videos. Questions were related to participants' perceptions of the social skills of the individuals in the video, as well as their own willingness to interact with that person. We also measured gaze patterns to the faces, eyes, and mouths of adolescents in the video stimuli. Both participant groups spent less time gazing at videos of autistic adolescents. Regardless of diagnostic group, all participants provided more negative judgments of autistic than neurotypical adolescents in the videos. These data indicate that, without being explicitly informed of a shared diagnosis, adolescents with autism spectrum disorder form negative first impressions of autistic adolescents that are similar to, or lower than, those formed by neurotypical peers.
Affiliation(s)
- Ruth B Grossman
- Emerson College, USA
- University of Massachusetts Medical School, USA
29
Zane E, Yang Z, Pozzan L, Guha T, Narayanan S, Grossman RB. Motion-Capture Patterns of Voluntarily Mimicked Dynamic Facial Expressions in Children and Adolescents With and Without ASD. J Autism Dev Disord 2019; 49:1062-1079. [PMID: 30406914 DOI: 10.1007/s10803-018-3811-7] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
Research shows that neurotypical individuals struggle to interpret the emotional facial expressions of people with Autism Spectrum Disorder (ASD). The current study uses motion-capture to objectively quantify differences between the movement patterns of emotional facial expressions of individuals with and without ASD. Participants volitionally mimicked emotional expressions while wearing facial markers. Recorded marker movement was grouped by expression valence and intensity. We used Growth Curve Analysis to test whether movement patterns were predictable by expression type and participant group. Results show significant interactions between expression type and group, and little effect of emotion valence on ASD expressions. Together, the results support perceptions that the expressions of individuals with ASD are different from, and more ambiguous than, those of neurotypical individuals.
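Growth Curve Analysis of this kind is typically run as a mixed-effects model with polynomial time terms; the sketch below is a hedged approximation using statsmodels, with an invented long-format table (participant, group, expression, time, displacement) rather than the study's data.

```python
# Hedged growth-curve-style sketch: marker displacement over the time course of an
# expression, modeled with linear and quadratic time terms crossed with group and
# expression type, and a random intercept per participant. Columns are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("marker_trajectories_long.csv")   # hypothetical long-format data
df["t1"] = df["time"] - df["time"].mean()          # centered linear time term
df["t2"] = df["t1"] ** 2 - (df["t1"] ** 2).mean()  # (roughly) orthogonal quadratic term

model = smf.mixedlm("displacement ~ (t1 + t2) * group * expression",
                    data=df, groups=df["participant"])
print(model.fit().summary())
```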
Affiliation(s)
- Emily Zane
- FACE Lab at Emerson College, Boston, MA, USA; Department of Communication Disorders and Sciences, SUNY Fredonia, Thompson Hall, Rm. E127, Fredonia, NY, 14963, USA
- Zhaojun Yang
- Signal Analysis and Interpretation Laboratory (SAIL) at USC, 3740 McClintock Avenue, Los Angeles, CA, 90089, USA
- Lucia Pozzan
- VUI, Incorporated, 15 Broad Street, Boston, MA, 02109, USA
- Tanaya Guha
- Signal Analysis and Interpretation Laboratory (SAIL) at USC, 3740 McClintock Avenue, Los Angeles, CA, 90089, USA; Department of Computer Science, University of Warwick, Computer Science Building, Rm. CS3.36, Coventry, CV4 7AL, UK
- Shrikanth Narayanan
- Signal Analysis and Interpretation Laboratory (SAIL) at USC, 3740 McClintock Avenue, Los Angeles, CA, 90089, USA
- Ruth Bergida Grossman
- FACE Lab at Emerson College, Boston, MA, USA; Communication Sciences and Disorders at Emerson College, UBank, Rm. 803, Boston, MA, 02116, USA; UMMS Shriver Center, Boston, MA, USA
30
Leo M, Carcagnì P, Distante C, Spagnolo P, Mazzeo PL, Rosato AC, Petrocchi S, Pellegrino C, Levante A, De Lumè F, Lecciso F. Computational Assessment of Facial Expression Production in ASD Children. Sensors (Basel, Switzerland) 2018; 18:E3993. [PMID: 30453518 PMCID: PMC6263710 DOI: 10.3390/s18113993] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/04/2018] [Revised: 11/09/2018] [Accepted: 11/14/2018] [Indexed: 12/01/2022]
Abstract
In this paper, a computational approach is proposed and put into practice to assess the capability of children diagnosed with Autism Spectrum Disorder (ASD) to produce facial expressions. The proposed approach is based on computer vision components working on sequences of images acquired by an off-the-shelf camera in unconstrained conditions. Action unit intensities are estimated by analyzing local appearance, and then both temporal and geometrical relationships, learned by Convolutional Neural Networks, are exploited to regularize the gathered estimates. To cope with stereotyped movements and to highlight even subtle voluntary movements of facial muscles, a personalized and contextual statistical model of the non-emotional face is formulated and used as a reference. Experimental results demonstrate how the proposed pipeline can improve the analysis of facial expressions produced by children with ASD. A comparison of the system's outputs with evaluations performed by psychologists on the same group of children with ASD shows how the quantitative analysis of children's abilities helps to go beyond traditional qualitative ASD assessment/diagnosis protocols, whose outcomes are affected by human limitations in observing and interpreting multi-cue behaviors such as facial expressions.
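The "personalized, contextual model of the non-emotional face" can be pictured, in a simplified and purely illustrative form, as z-scoring each child's action-unit intensities against that child's own neutral-frame statistics; the table layout below is an assumption, not the paper's pipeline.

```python
# Hedged sketch of a personalized neutral baseline: each child's action-unit (AU)
# intensities in task frames are z-scored against that child's own neutral frames,
# so subtle voluntary movements stand out from idiosyncratic baseline activity.
# Hypothetical columns: child_id, is_neutral, AU01..AU26.
import pandas as pd

frames = pd.read_csv("au_intensities.csv")
au_cols = [c for c in frames.columns if c.startswith("AU")]

neutral = frames[frames["is_neutral"] == 1]
mu = neutral.groupby("child_id")[au_cols].mean()   # per-child neutral mean
sd = neutral.groupby("child_id")[au_cols].std()    # per-child neutral spread

task = frames[frames["is_neutral"] == 0].copy()
for c in au_cols:
    task[c + "_z"] = (task[c] - task["child_id"].map(mu[c])) / task["child_id"].map(sd[c])

print(task.filter(like="_z").describe())
```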
Affiliation(s)
- Marco Leo
- Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy, via Monteroni, 73100 Lecce, Italy.
- Pierluigi Carcagnì
- Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy, via Monteroni, 73100 Lecce, Italy.
- Cosimo Distante
- Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy, via Monteroni, 73100 Lecce, Italy.
- Paolo Spagnolo
- Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy, via Monteroni, 73100 Lecce, Italy.
- Pier Luigi Mazzeo
- Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy, via Monteroni, 73100 Lecce, Italy.
- Serena Petrocchi
- USI, Institute of Communication and Health, Via Buffi 6, 6900 Lugano, Switzerland.
- Annalisa Levante
- Dipartimento di Storia, Società e Studi sull'Uomo, University of Salento, Studium 2000-Edificio 5-Via di Valesio, 73100 Lecce, Italy.
- Filomena De Lumè
- Dipartimento di Storia, Società e Studi sull'Uomo, University of Salento, Studium 2000-Edificio 5-Via di Valesio, 73100 Lecce, Italy.
- Flavia Lecciso
- Dipartimento di Storia, Società e Studi sull'Uomo, University of Salento, Studium 2000-Edificio 5-Via di Valesio, 73100 Lecce, Italy.
31
Oral-Motor and Lexical Diversity During Naturalistic Conversations in Adults with Autism Spectrum Disorder. Proceedings of the Conference. Association for Computational Linguistics. North American Chapter. Meeting 2018; 2018:147-157. [PMID: 33073267 DOI: 10.18653/v1/w18-0616] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental condition characterized by impaired social communication and the presence of restricted, repetitive patterns of behaviors and interests. Prior research suggests that restricted patterns of behavior in ASD may be cross-domain phenomena that are evident in a variety of modalities. Computational studies of language in ASD provide support for the existence of an underlying dimension of restriction that emerges during a conversation. Similar evidence exists for restricted patterns of facial movement. Using tools from computational linguistics, computer vision, and information theory, this study tests whether cognitive-motor restriction can be detected across multiple behavioral domains in adults with ASD during a naturalistic conversation. Our methods identify restricted behavioral patterns, as measured by entropy in word use and mouth movement. Results suggest that adults with ASD produce significantly less diverse mouth movements and words than neurotypical adults, with an increased reliance on repeated patterns in both domains. The diversity values of the two domains are not significantly correlated, suggesting that they provide complementary information.
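The entropy-based diversity measure referred to here can be illustrated with a few lines of code; the sketch below computes Shannon entropy over word tokens on toy sentences (the same formula applies to discretized mouth-movement states), and is not the study's implementation.

```python
# Toy sketch of lexical diversity as Shannon entropy over word tokens; lower entropy
# means a more repetitive (restricted) distribution. Example sentences are invented.
from collections import Counter
import math

def shannon_entropy(tokens):
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

repetitive = "yes yes I like trains trains trains yes".split()
varied = "well I mostly take the bus but sometimes the train".split()
print("repetitive:", round(shannon_entropy(repetitive), 2), "bits")
print("varied:    ", round(shannon_entropy(varied), 2), "bits")
```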
32
Sasson NJ, Faso DJ, Nugent J, Lovell S, Kennedy DP, Grossman RB. Neurotypical Peers are Less Willing to Interact with Those with Autism based on Thin Slice Judgments. Sci Rep 2017; 7:40700. [PMID: 28145411 PMCID: PMC5286449 DOI: 10.1038/srep40700] [Citation(s) in RCA: 206] [Impact Index Per Article: 29.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2016] [Accepted: 12/07/2016] [Indexed: 12/03/2022] Open
Abstract
Individuals with autism spectrum disorder (ASD), including those who otherwise require less support, face severe difficulties in everyday social interactions. Research in this area has primarily focused on identifying the cognitive and neurological differences that contribute to these social impairments, but social interaction by definition involves more than one person and social difficulties may arise not just from people with ASD themselves, but also from the perceptions, judgments, and social decisions made by those around them. Here, across three studies, we find that first impressions of individuals with ASD made from thin slices of real-world social behavior by typically-developing observers are not only far less favorable across a range of trait judgments compared to controls, but also are associated with reduced intentions to pursue social interaction. These patterns are remarkably robust, occur within seconds, do not change with increased exposure, and persist across both child and adult age groups. However, these biases disappear when impressions are based on conversational content lacking audio-visual cues, suggesting that style, not substance, drives negative impressions of ASD. Collectively, these findings advocate for a broader perspective of social difficulties in ASD that considers both the individual’s impairments and the biases of potential social partners.
Affiliation(s)
- Noah J Sasson
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, GR41, 800 W Campbell Road, Richardson, TX, 75080-3021, USA
- Daniel J Faso
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, GR41, 800 W Campbell Road, Richardson, TX, 75080-3021, USA
- Jack Nugent
- Department of Psychological and Brain Sciences, Indiana University, 1101 E. 10th Street, Bloomington, IN 47405, USA
- Sarah Lovell
- Department of Communication Sciences and Disorders, Emerson College, 120 Boylston Street, Boston, MA 02116, USA
- Daniel P Kennedy
- Department of Psychological and Brain Sciences, Indiana University, 1101 E. 10th Street, Bloomington, IN 47405, USA
- Ruth B Grossman
- Department of Communication Sciences and Disorders, Emerson College, 120 Boylston Street, Boston, MA 02116, USA