1. Ran R, Liang W, Deng S, Fan X, Shi K, Wang T, Dong S, Hu Q, Liu C. Risk assessment and automatic identification of autistic children based on appearance. Sci Rep 2024; 14:29074. PMID: 39580522; PMCID: PMC11585624; DOI: 10.1038/s41598-024-80459-2.
Abstract
The diagnosis of Autism Spectrum Disorder (ASD) relies mainly on diagnostic scales and evaluation by professional clinicians, which can be subjective, time-consuming, and costly. This research introduces a novel, relatively objective, fast, and cost-effective assessment and automatic identification approach for autistic children based on their appearance. A custom social interaction scenario was first developed, and a facial data set (ACFD) was then collected from 187 children: 92 with ASD and 95 typically developing (TD). Using computer vision techniques, appearance features were extracted, including facial appearing time, eye-gaze concentration, response time to name calls, and emotional expression ability. These features were then combined, and machine learning methods were used to classify the children; notably, a Bayes classifier achieved a remarkable accuracy of 94.1%. The experimental results show that the extracted visual appearance features reflect typical symptoms of ASD, and that the automatic recognition method can provide auxiliary diagnostic or data support for clinicians.
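The classification step this abstract describes (combined appearance features fed to a Bayes classifier) can be sketched with a hand-rolled Gaussian naive Bayes. The feature vectors, labels, and classifier details below are illustrative assumptions, not the authors' actual pipeline or data:

```python
import math

def fit_gaussian_nb(X, y):
    """Estimate per-class feature means, variances, and priors."""
    model = {}
    for label in set(y):
        rows = [x for x, lbl in zip(X, y) if lbl == label]
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        varis = [max(sum((v - m) ** 2 for v in col) / n, 1e-9)
                 for col, m in zip(zip(*rows), means)]
        model[label] = (means, varis, n / len(y))
    return model

def predict(model, x):
    """Pick the class with the highest Gaussian log posterior."""
    best, best_lp = None, -math.inf
    for label, (means, varis, prior) in model.items():
        lp = math.log(prior)
        for v, m, s2 in zip(x, means, varis):
            lp += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical per-child feature vectors:
# [face-appearing time, gaze concentration, name-call response time, emotion score]
X = [[0.9, 0.8, 1.0, 0.7], [0.85, 0.9, 1.2, 0.8],   # TD-like
     [0.5, 0.3, 4.0, 0.2], [0.4, 0.25, 5.0, 0.3]]   # ASD-like
y = ["TD", "TD", "ASD", "ASD"]

model = fit_gaussian_nb(X, y)
print(predict(model, [0.45, 0.3, 4.5, 0.25]))  # → ASD
```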
Affiliation(s)
- Ruisheng Ran, Wei Liang, Shan Deng, Kai Shi, Ting Wang, Shuhong Dong, Qianwei Hu, Chenyi Liu: College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
- Xin Fan: Department of Child Health Care, Chongqing Health Center for Women and Children, Chongqing 400010, China
2. Zaharieva MS, Salvadori EA, Messinger DS, Visser I, Colonnesi C. Automated facial expression measurement in a longitudinal sample of 4- and 8-month-olds: Baby FaceReader 9 and manual coding of affective expressions. Behav Res Methods 2024; 56:5709-5731. PMID: 38273072; PMCID: PMC11335827; DOI: 10.3758/s13428-023-02301-3.
Abstract
Facial expressions are among the earliest behaviors infants use to express emotional states, and are crucial to preverbal social interaction. Manual coding of infant facial expressions, however, is laborious and poses limitations to replicability. Recent developments in computer vision have advanced automated facial expression analyses in adults, providing reproducible results at lower time investment. Baby FaceReader 9 is commercially available software for automated measurement of infant facial expressions, but has received little validation. We compared Baby FaceReader 9 output to manual micro-coding of positive, negative, or neutral facial expressions in a longitudinal dataset of 58 infants at 4 and 8 months of age during naturalistic face-to-face interactions with the mother, father, and an unfamiliar adult. Baby FaceReader 9's global emotional valence formula yielded reasonable classification accuracy (AUC = .81) for discriminating manually coded positive from negative/neutral facial expressions; however, the discrimination of negative from neutral facial expressions was not reliable (AUC = .58). Automatically detected a priori action unit (AU) configurations for distinguishing positive from negative facial expressions based on existing literature were also not reliable. A parsimonious approach using only automatically detected smiling (AU12) yielded good performance for discriminating positive from negative/neutral facial expressions (AUC = .86). Likewise, automatically detected brow lowering (AU3+AU4) reliably distinguished neutral from negative facial expressions (AUC = .79). These results provide initial support for the use of selected automatically detected individual facial actions to index positive and negative affect in young infants, but shed doubt on the accuracy of complex a priori formulas.
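The AUC values reported above can be illustrated with the rank-based (Mann-Whitney) formulation of ROC AUC: the probability that a randomly drawn positive example scores higher than a randomly drawn negative one. The AU12 smile intensities below are invented for illustration:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC = P(random positive scores above random negative),
    counting ties as one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical AU12 (smile) intensities for manually coded frames
positive_frames = [0.9, 0.8, 0.7, 0.6, 0.4]   # coded positive affect
neg_or_neutral  = [0.5, 0.3, 0.2, 0.2, 0.1]   # coded negative/neutral

print(round(roc_auc(positive_frames, neg_or_neutral), 2))  # → 0.96
```

An AUC of .5 would mean the score carries no discriminative information, which is why the paper treats AUC = .58 as unreliable.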
Affiliation(s)
- Martina S Zaharieva: Department of Developmental Psychology, Faculty of Social and Behavioural Sciences, University of Amsterdam, Nieuwe Achtergracht 129b, 1001 NK, Amsterdam, The Netherlands; Developmental Psychopathology Unit, Research Institute of Child Development and Education, University of Amsterdam; Yield Research Priority Area, University of Amsterdam
- Eliala A Salvadori: Developmental Psychopathology Unit, Research Institute of Child Development and Education, University of Amsterdam; Yield Research Priority Area, University of Amsterdam
- Daniel S Messinger: Departments of Psychology, Pediatrics, Music Engineering, and Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
- Ingmar Visser: Department of Developmental Psychology, Faculty of Social and Behavioural Sciences, University of Amsterdam; Yield Research Priority Area, University of Amsterdam
- Cristina Colonnesi: Developmental Psychopathology Unit, Research Institute of Child Development and Education, University of Amsterdam; Yield Research Priority Area, University of Amsterdam
3. Li X, Ma X, Li Y. [A review of studies on visual behavior analysis aided diagnosis of autism spectrum disorders]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi (Journal of Biomedical Engineering) 2023; 40:812-819. Article in Chinese. PMID: 37666774; PMCID: PMC10477381; DOI: 10.7507/1001-5515.202204038.
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by deficits in social communication and repetitive behaviors. With the rapid development of computer vision, visual behavior analysis as an aid to ASD diagnosis has received increasing attention. This paper reviews research on visual behavior analysis for aiding ASD diagnosis. First, the core symptoms and clinical diagnostic criteria of ASD are briefly introduced. Second, the interaction scenes are classified and described according to clinical diagnostic criteria. The existing relevant datasets are then discussed. Finally, we analyze and compare the advantages and disadvantages of visual behavior analysis methods for ASD diagnosis in different interactive scenarios, summarize the challenges in this research field, and outline the prospects of related research to promote the clinical application of visual behavior analysis in ASD diagnosis.
Affiliation(s)
- Xiang Li, Xin Ma, Yibin Li: School of Control Science and Engineering, Shandong University, Jinan 250061, P. R. China
4. Varma M, Washington P, Chrisman B, Kline A, Leblanc E, Paskov K, Stockham N, Jung JY, Sun MW, Wall DP. Identification of Social Engagement Indicators Associated With Autism Spectrum Disorder Using a Game-Based Mobile App: Comparative Study of Gaze Fixation and Visual Scanning Methods. J Med Internet Res 2022; 24:e31830. PMID: 35166683; PMCID: PMC8889483; DOI: 10.2196/31830.
Abstract
BACKGROUND: Autism spectrum disorder (ASD) is a widespread neurodevelopmental condition with a range of potential causes and symptoms. Standard diagnostic mechanisms for ASD, which involve lengthy parent questionnaires and clinical observation, often result in long waits for results. Recent advances in computer vision and mobile technology hold potential for speeding up the diagnostic process by enabling computational analysis of behavioral and social impairments from home videos; such techniques can improve objectivity and contribute quantitatively to the diagnosis. OBJECTIVE: In this work, we evaluate whether home videos collected from a game-based mobile app can provide diagnostic insights into ASD. To the best of our knowledge, this is the first study attempting to identify potential social indicators of ASD from mobile phone videos without the use of eye-tracking hardware, manual annotations, or structured scenarios and clinical environments. METHODS: We used a mobile health app to collect over 11 hours of video footage depicting 95 children engaged in gameplay in a natural home environment. Using automated data set annotations, we analyzed two social indicators previously shown to differ between children with ASD and their neurotypical (NT) peers: (1) gaze fixation patterns, the regions of an individual's visual focus, and (2) visual scanning methods, the ways in which individuals scan their surrounding environment. We compared the gaze fixation and visual scanning methods used by children during a 90-second gameplay video to identify statistically significant differences between the two cohorts; we then trained a long short-term memory (LSTM) neural network to determine whether gaze indicators could be predictive of ASD. RESULTS: Gaze fixation patterns differ between the two cohorts; specifically, we identified one statistically significant region of fixation (P<.001). We also demonstrate unique visual scanning patterns in individuals with ASD compared with NT children (P<.001). A deep learning model trained on coarse gaze fixation annotations demonstrates mild predictive power in identifying ASD. CONCLUSIONS: Our study demonstrates that heterogeneous video data sets collected from mobile devices hold potential for quantifying visual patterns and providing insights into ASD. We show the importance of automated labeling techniques in generating large-scale data sets while preserving participant privacy, and we demonstrate that specific social engagement indicators associated with ASD can be identified and characterized using such data.
Affiliation(s)
- Maya Varma: Department of Computer Science, Stanford University, Stanford, CA, United States
- Peter Washington, Brianna Chrisman: Department of Bioengineering, Stanford University, Stanford, CA, United States
- Aaron Kline, Emilie Leblanc, Jae-Yoon Jung, Dennis P Wall: Department of Pediatrics and Biomedical Data Science, Stanford University, Stanford, CA, United States
- Kelley Paskov, Min Woo Sun: Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Nate Stockham: Department of Neuroscience, Stanford University, Stanford, CA, United States
5. Automatic Assessment of Motor Impairments in Autism Spectrum Disorders: A Systematic Review. Cognit Comput 2022. DOI: 10.1007/s12559-021-09940-8.
6. Alvari G, Coviello L, Furlanello C. EYE-C: Eye-Contact Robust Detection and Analysis during Unconstrained Child-Therapist Interactions in the Clinical Setting of Autism Spectrum Disorders. Brain Sci 2021; 11:1555. PMID: 34942856; PMCID: PMC8699076; DOI: 10.3390/brainsci11121555.
Abstract
The high level of heterogeneity in Autism Spectrum Disorder (ASD) and the lack of systematic measurements complicate predicting outcomes of early intervention and the identification of better-tailored treatment programs. Computational phenotyping may assist therapists in monitoring child behavior through quantitative measures and personalizing the intervention based on individual characteristics; still, real-world behavioral analysis is an ongoing challenge. For this purpose, we designed EYE-C, a system based on OpenPose and Gaze360 for fine-grained analysis of eye-contact episodes in unconstrained therapist-child interactions via a single video camera. The model was validated on video data varying in resolution and setting, achieving promising performance. We further tested EYE-C on a clinical sample of 62 preschoolers with ASD for spectrum stratification based on eye-contact features and age. By unsupervised clustering, three distinct sub-groups were identified, differentiated by eye-contact dynamics and a specific clinical phenotype. Overall, this study highlights the potential of Artificial Intelligence in categorizing atypical behavior and providing translational solutions that might assist clinical practice.
Affiliation(s)
- Gianpaolo Alvari: Department of Psychology and Cognitive Sciences, University of Trento, Corso Bettini 84, 38068 Rovereto, Italy; DSH Research Unit, Bruno Kessler Foundation, Via Sommarive 8, 38123 Trento, Italy
- Luca Coviello: University of Trento, 38122 Trento, Italy; Enogis, Via al Maso Visintainer 8, 38122 Trento, Italy
- Cesare Furlanello: HK3 Lab, Piazza Manifatture 1, 38068 Rovereto, Italy; Orobix Life, Via Camozzi 145, 24121 Bergamo, Italy
7. Using 2D video-based pose estimation for automated prediction of autism spectrum disorders in young children. Sci Rep 2021; 11:15069. PMID: 34301963; PMCID: PMC8302646; DOI: 10.1038/s41598-021-94378-z.
Abstract
Clinical research in autism has recently witnessed promising digital phenotyping results, mainly focused on single-feature extraction, such as gaze, head turn on name-calling, or visual tracking of a moving object. The main drawback of these studies is their focus on relatively isolated behaviors elicited by largely controlled prompts. While the diagnostic process does index such specific behaviors, ASD also involves broad impairments that often transcend single behavioral acts: atypical nonverbal behavior manifests in global patterns of atypical postures and movements, and in fewer gestures, often decoupled from visual contact, facial affect, and speech. Here, we tested the hypothesis that a deep neural network trained on the nonverbal aspects of social interaction can effectively differentiate between children with ASD and their typically developing peers. Our model achieves an accuracy of 80.9% (F1 score: 0.818; precision: 0.784; recall: 0.854), with the prediction probability positively correlated with the overall level of autism symptoms in the social affect and restricted and repetitive behaviors domains. Given the non-invasive and affordable nature of computer vision, our approach holds reasonable promise that reliable machine-learning-based ASD screening may become a reality in the near future.
8. Harrison LA, Kats A, Kilroy E, Butera C, Jayashankar A, Keles U, Aziz-Zadeh L. Motor and sensory features successfully decode autism spectrum disorder and combine with the original RDoC framework to boost diagnostic classification. Sci Rep 2021; 11:7839. PMID: 33837251; PMCID: PMC8035204; DOI: 10.1038/s41598-021-87455-w.
Abstract
Sensory processing and motor coordination atypicalities are not commonly identified as primary characteristics of autism spectrum disorder (ASD), nor are they well captured in the NIMH's original Research Domain Criteria (RDoC) framework. Here, motor and sensory features performed similarly to RDoC features in support vector classification of 30 ASD youth against 33 typically developing controls. Combining sensory with RDoC features boosted classification performance, achieving a Matthews Correlation Coefficient (MCC) of 0.949 and balanced accuracy (BAcc) of 0.971 (p = 0.00020, calculated against a permuted null distribution). Sensory features alone successfully classified ASD (MCC = 0.565, BAcc = 0.773, p = 0.0222) against a clinically relevant control group of 26 youth with Developmental Coordination Disorder (DCD) and were in fact required to decode against DCD above chance. These findings highlight the importance of sensory and motor features to the ASD phenotype and their relevance to the RDoC framework.
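For readers unfamiliar with the metrics quoted above, both MCC and balanced accuracy follow directly from a 2x2 confusion matrix. The confusion counts below are hypothetical, chosen only for illustration:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from a 2x2 confusion matrix."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

def balanced_accuracy(tp, tn, fp, fn):
    """Mean of sensitivity (TP rate) and specificity (TN rate)."""
    return 0.5 * (tp / (tp + fn) + tn / (tn + fp))

# Hypothetical counts for a 30-vs-33 two-group classification
print(round(mcc(tp=29, tn=32, fp=1, fn=1), 3))                # → 0.936
print(round(balanced_accuracy(tp=29, tn=32, fp=1, fn=1), 3))  # → 0.968
```

Unlike plain accuracy, both metrics remain informative when the two groups have unequal sizes, which is why they are used here.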
Affiliation(s)
- Laura A Harrison, Emily Kilroy, Christiana Butera, Aditya Jayashankar, Lisa Aziz-Zadeh: USC Chan Division of Occupational Science and Occupational Therapy, University of Southern California, Los Angeles, CA, USA; Brain and Creativity Institute, University of Southern California, Los Angeles, CA, USA
- Anastasiya Kats: Brain and Creativity Institute, University of Southern California, Los Angeles, CA, USA
- Umit Keles: Division of Humanities and Social Sciences, California Institute of Technology, Pasadena, CA, USA
9. Hashemi J, Dawson G, Carpenter KLH, Campbell K, Qiu Q, Espinosa S, Marsan S, Baker JP, Egger HL, Sapiro G. Computer Vision Analysis for Quantification of Autism Risk Behaviors. IEEE Trans Affect Comput 2021; 12:215-226. PMID: 35401938; PMCID: PMC8993160; DOI: 10.1109/taffc.2018.2868196.
Abstract
Observational behavior analysis plays a key role in the discovery and evaluation of risk markers for many neurodevelopmental disorders. Research on autism spectrum disorder (ASD) suggests that behavioral risk markers can be observed at 12 months of age or earlier, with diagnosis possible at 18 months. To date, studies and evaluations involving observational analysis have relied heavily on clinical practitioners and specialists who have undergone intensive training to reliably administer carefully designed behavior-eliciting tasks, code the resulting behaviors, and interpret them. These methods are therefore extremely expensive and time-intensive, and they do not scale easily to large-population or longitudinal observational analysis. We developed a self-contained, closed-loop mobile application with movie stimuli designed to engage the child's attention and elicit specific behavioral and social responses, which are recorded with a mobile device camera and then analyzed via computer vision algorithms. Here, in addition to presenting this paradigm, we validate the system's ability to measure engagement, name-call responses, and emotional responses of toddlers with and without ASD who were presented with the application. Additionally, we show examples of how the proposed framework can further risk-marker research with fine-grained quantification of behaviors. The results suggest that these objective and automatic methods can aid behavioral analysis and are suitable for objective automatic analysis in future studies.
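The name-call response measurement mentioned above can be sketched as a latency computation over per-frame detector output. The function, frame labels, and frame rate below are hypothetical illustrations, not the paper's implementation:

```python
def name_call_latency(frame_labels, stimulus_frame, fps=30.0):
    """Seconds from the name-call prompt to the first frame in which
    the child is detected facing the camera; None if no response.
    frame_labels: per-frame booleans (True = facing camera)."""
    for i in range(stimulus_frame, len(frame_labels)):
        if frame_labels[i]:
            return (i - stimulus_frame) / fps
    return None

# Hypothetical detector output: the child turns 45 frames (1.5 s)
# after the prompt, which occurs at frame 55
labels = [False] * 100 + [True] * 20
print(name_call_latency(labels, stimulus_frame=55))  # → 1.5
```

Aggregating such latencies (and the fraction of trials with no response) across prompts gives the kind of fine-grained quantification the abstract describes.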
Affiliation(s)
- Jordan Hashemi, Qiang Qiu, Steven Espinosa, Guillermo Sapiro: Department of Electrical and Computer Engineering, Duke University, Durham, NC
- Geraldine Dawson, Kimberly L H Carpenter: Department of Psychiatry and Behavioral Sciences, Duke Center for Autism and Brain Development, and the Duke Institute for Brain Sciences, Durham, NC
- Samuel Marsan: Department of Psychiatry and Behavioral Sciences, Durham, NC
- Helen L Egger: Department of Child and Adolescent Psychiatry, NYU Langone Health, New York, NY (work performed while at Duke University)
10. Chong E, Clark-Whitney E, Southerland A, Stubbs E, Miller C, Ajodan EL, Silverman MR, Lord C, Rozga A, Jones RM, Rehg JM. Detection of eye contact with deep neural networks is as accurate as human experts. Nat Commun 2020; 11:6386. PMID: 33318484; PMCID: PMC7736573; DOI: 10.1038/s41467-020-19712-x.
Abstract
Eye contact is one of the most fundamental means of social communication used by humans. Quantification of eye contact is valuable as part of the analysis of social roles and communication skills, and for clinical screening. Estimating a subject's looking direction is a challenging task, but eye contact can be effectively captured by a wearable point-of-view camera, which provides a unique viewpoint. While moments of eye contact from this viewpoint can be hand-coded, such a process tends to be laborious and subjective. In this work, we develop a deep neural network model to automatically detect eye contact in egocentric video; it is the first to achieve accuracy equivalent to that of human experts. We train a deep convolutional network on a dataset of 4,339,879 annotated images from 103 subjects with diverse demographic backgrounds, 57 of whom have a diagnosis of Autism Spectrum Disorder. The network achieves overall precision of 0.936 and recall of 0.943 on 18 validation subjects, on par with 10 trained human coders, who achieved mean precision of 0.918 and recall of 0.946. Our method will be instrumental in gaze behavior analysis by serving as a scalable, objective, and accessible tool for clinicians and researchers.
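The precision and recall figures quoted above follow directly from frame-level confusion counts; the counts in this sketch are invented for illustration:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical eye-contact frame counts for one validation subject
p, r = precision_recall(tp=900, fp=60, fn=55)
print(round(p, 3), round(r, 3))
```

High precision means few spurious eye-contact detections; high recall means few missed moments of true eye contact.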
Affiliation(s)
- Eunji Chong, Audrey Southerland, Elizabeth Stubbs, Chanel Miller, Agata Rozga, James M Rehg: School of Interactive Computing, Georgia Institute of Technology, Atlanta, USA
- Elysha Clark-Whitney, Eliana L Ajodan, Melanie R Silverman, Rebecca M Jones: Center for Autism and the Developing Brain, Weill Cornell Medicine, New York, USA
- Catherine Lord: School of Medicine, University of California, Los Angeles, USA
11. de Belen RAJ, Bednarz T, Sowmya A, Del Favero D. Computer vision in autism spectrum disorder research: a systematic review of published studies from 2009 to 2019. Transl Psychiatry 2020; 10:333. PMID: 32999273; PMCID: PMC7528087; DOI: 10.1038/s41398-020-01015-w.
Abstract
The current state of computer vision methods applied to autism spectrum disorder (ASD) research has not been well established. Increasing evidence suggests that computer vision techniques have a strong impact on autism research. The primary objective of this systematic review is to examine how computer vision analysis has been useful in ASD diagnosis, therapy and autism research in general. A systematic review of publications indexed on PubMed, IEEE Xplore and ACM Digital Library was conducted from 2009 to 2019. Search terms included ['autis*' AND ('computer vision' OR 'behavio* imaging' OR 'behavio* analysis' OR 'affective computing')]. Results are reported according to PRISMA statement. A total of 94 studies are included in the analysis. Eligible papers are categorised based on the potential biological/behavioural markers quantified in each study. Then, different computer vision approaches that were employed in the included papers are described. Different publicly available datasets are also reviewed in order to rapidly familiarise researchers with datasets applicable to their field and to accelerate both new behavioural and technological work on autism research. Finally, future research directions are outlined. The findings in this review suggest that computer vision analysis is useful for the quantification of behavioural/biological markers which can further lead to a more objective analysis in autism research.
Affiliation(s)
- Tomasz Bednarz, Dennis Del Favero: School of Art & Design, University of New South Wales, Sydney, NSW, Australia
- Arcot Sowmya: School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia
12. Tenenbaum EJ, Carpenter KLH, Sabatos-DeVito M, Hashemi J, Vermeer S, Sapiro G, Dawson G. A Six-Minute Measure of Vocalizations in Toddlers with Autism Spectrum Disorder. Autism Res 2020; 13:1373-1382. PMID: 32212384; PMCID: PMC7881362; DOI: 10.1002/aur.2293.
Abstract
To improve early identification of autism spectrum disorder (ASD), we need objective, reliable, and accessible measures. To that end, a previous study demonstrated that a tablet-based application (app) that assessed several autism risk behaviors distinguished between toddlers with ASD and non-ASD toddlers. Using vocal data collected during this study, we investigated whether vocalizations uttered during administration of this app can distinguish among toddlers aged 16-31 months with typical development (TD), language or developmental delay (DLD), and ASD. Participant's visual and vocal responses were recorded using the camera and microphone in a tablet while toddlers watched movies designed to elicit behaviors associated with risk for ASD. Vocalizations were then coded offline. Results showed that (a) children with ASD and DLD were less likely to produce words during app administration than TD participants; (b) the ratio of syllabic vocalizations to all vocalizations was higher among TD than ASD or DLD participants; and (c) the rates of nonsyllabic vocalizations were higher in the ASD group than in either the TD or DLD groups. Those producing more nonsyllabic vocalizations were 24 times more likely to be diagnosed with ASD. These results lend support to previous findings that early vocalizations might be useful in identifying risk for ASD in toddlers and demonstrate the feasibility of using a scalable tablet-based app for assessing vocalizations in the context of a routine pediatric visit. LAY SUMMARY: Although parents often report symptoms of autism spectrum disorder (ASD) in infancy, we are not yet reliably diagnosing ASD until much later in development. A previous study tested a tablet-based application (app) that recorded behaviors we know are associated with ASD to help identify children at risk for the disorder. Here we measured how children vocalize while they watched the movies presented on the tablet. 
Children with ASD were less likely to produce words, less likely to produce speechlike sounds, and more likely to produce atypical sounds while watching these movies. These measures, combined with other behaviors measured by the app, might help identify which children should be evaluated for ASD. Autism Res 2020, 13: 1373-1382. © 2020 International Society for Autism Research, Wiley Periodicals, Inc.
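The three vocal measures described above (word production, the syllabic-to-total ratio, and the rate of nonsyllabic vocalizations) reduce to simple counts once each vocalization has been coded offline. A minimal sketch with hypothetical category names and made-up data, not the study's actual coding scheme:

```python
from collections import Counter

def vocalization_features(labels, minutes):
    """Summarize a session of offline-coded vocalizations.

    labels: one code per vocalization; the category names here
            ("word", "syllabic", "nonsyllabic") are illustrative.
    minutes: session length (the measure above used a ~6-minute session).
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return {
        # did the child produce any words at all?
        "produced_word": counts["word"] > 0,
        # ratio of syllabic vocalizations to all vocalizations
        "syllabic_ratio": counts["syllabic"] / total if total else 0.0,
        # rate of atypical nonsyllabic sounds, per minute
        "nonsyllabic_rate": counts["nonsyllabic"] / minutes,
    }

# toy coded sequence, not real data
feats = vocalization_features(
    ["syllabic", "nonsyllabic", "word", "syllabic", "nonsyllabic"], minutes=6)
```

Such per-child features could then feed a classifier, as in the group comparisons the study reports.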
Affiliation(s)
- Elena J Tenenbaum: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University Medical Center, Durham, North Carolina, USA
- Kimberly L H Carpenter: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University Medical Center, Durham, North Carolina, USA
- Maura Sabatos-DeVito: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University Medical Center, Durham, North Carolina, USA
- Jordan Hashemi: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University Medical Center, Durham, North Carolina, USA; Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA
- Saritha Vermeer: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University Medical Center, Durham, North Carolina, USA
- Guillermo Sapiro: Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA; Department of Biomedical Engineering, Duke University, Durham, North Carolina, USA; Department of Computer Science, Duke University, Durham, North Carolina, USA; Department of Mathematics, Duke University, Durham, North Carolina, USA
- Geraldine Dawson: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University Medical Center, Durham, North Carolina, USA

13
Washington P, Park N, Srivastava P, Voss C, Kline A, Varma M, Tariq Q, Kalantarian H, Schwartz J, Patnaik R, Chrisman B, Stockham N, Paskov K, Haber N, Wall DP. Data-Driven Diagnostics and the Potential of Mobile Artificial Intelligence for Digital Therapeutic Phenotyping in Computational Psychiatry. Biol Psychiatry Cogn Neurosci Neuroimaging 2020; 5:759-769. PMID: 32085921; PMCID: PMC7292741; DOI: 10.1016/j.bpsc.2019.11.015.
Abstract
Data science and digital technologies have the potential to transform diagnostic classification. Digital technologies enable the collection of big data, and advances in machine learning and artificial intelligence enable scalable, rapid, and automated classification of medical conditions. In this review, we summarize and categorize various data-driven methods for diagnostic classification. In particular, we focus on autism as an example of a challenging disorder due to its highly heterogeneous nature. We begin by describing the frontier of data science methods for the neuropsychiatry of autism. We discuss early signs of autism as defined by existing pen-and-paper-based diagnostic instruments and describe data-driven feature selection techniques for determining the behaviors that are most salient for distinguishing children with autism from neurologically typical children. We then describe data-driven detection techniques, particularly computer vision and eye tracking, that provide a means of quantifying behavioral differences between cases and controls. We also describe methods of preserving the privacy of collected videos and prior efforts of incorporating humans in the diagnostic loop. Finally, we summarize existing digital therapeutic interventions that allow for data capture and longitudinal outcome tracking as the diagnosis moves along a positive trajectory. Digital phenotyping of autism is paving the way for quantitative psychiatry more broadly and will set the stage for more scalable, accessible, and precise diagnostic techniques in the field.
Affiliation(s)
- Peter Washington: Department of Bioengineering, Stanford University, Stanford, California
- Natalie Park: Department of Biological Sciences, Columbia University, New York, New York
- Parishkrita Srivastava: Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, California
- Catalin Voss: Department of Computer Science, Stanford University, Stanford, California
- Aaron Kline: Department of Pediatrics (Systems Medicine), Stanford University, Stanford, California; Department of Biomedical Data Science, Stanford University, Stanford, California
- Maya Varma: Department of Computer Science, Stanford University, Stanford, California
- Qandeel Tariq: Department of Pediatrics (Systems Medicine), Stanford University, Stanford, California; Department of Biomedical Data Science, Stanford University, Stanford, California
- Haik Kalantarian: Department of Pediatrics (Systems Medicine), Stanford University, Stanford, California; Department of Biomedical Data Science, Stanford University, Stanford, California
- Jessey Schwartz: Department of Pediatrics (Systems Medicine), Stanford University, Stanford, California; Department of Biomedical Data Science, Stanford University, Stanford, California
- Ritik Patnaik: Department of Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Brianna Chrisman: Department of Bioengineering, Stanford University, Stanford, California
- Kelley Paskov: Department of Biomedical Data Science, Stanford University, Stanford, California
- Nick Haber: School of Education, Stanford University, Stanford, California
- Dennis P Wall: Department of Pediatrics (Systems Medicine), Stanford University, Stanford, California; Department of Biomedical Data Science, Stanford University, Stanford, California; Department of Psychiatry and Behavioral Sciences (by courtesy), Stanford University, Stanford, California

14
Nebeker C. mHealth Research Applied to Regulated and Unregulated Behavioral Health Sciences. J Law Med Ethics 2020; 48:49-59. PMID: 32342758; DOI: 10.1177/1073110520917029.
Abstract
Behavioral scientists are developing new methods and frameworks that leverage mobile health technologies to optimize individual-level behavior change. Pervasive sensors and mobile apps allow researchers to passively observe human behaviors "in the wild," 24/7, which supports delivery of personalized interventions in the real-world environment. These technologies contain an extensive array of sensors: applications can continuously record user location, contextualize environmental conditions through barometers, thermometers, and ambient light sensors, and capture audio and video of the user and their surroundings through integrated high-definition cameras and microphones. These tools are a game changer in behavioral health research and, not surprisingly, introduce the new ethical, regulatory, legal, and social implications described in this article.
Affiliation(s)
- Camille Nebeker, Ed.D., M.S.: associate professor in the University of California, San Diego Department of Family Medicine and Public Health, with a primary appointment in Behavioral Medicine and a secondary appointment in Global Health

15
Analysis of Facial Information for Healthcare Applications: A Survey on Computer Vision-Based Approaches. Information 2020. DOI: 10.3390/info11030128.
Abstract
This paper gives an overview of cutting-edge approaches to facial cue analysis in the healthcare area. The document is not limited to global face analysis; it also covers methods targeting local cues (e.g., the eyes). A research taxonomy is introduced by dividing the face into its main features: eyes, mouth, muscles, skin, and shape. For each facial feature, the paper details the computer vision tasks that analyze it and the related healthcare goals that those tasks could serve.
16
Harlow J, Weibel N, Al Kotob R, Chan V, Bloss C, Linares-Orozco R, Takemoto M, Nebeker C. Using Participatory Design to Inform the Connected and Open Research Ethics (CORE) Commons. Sci Eng Ethics 2020; 26:183-203. PMID: 30725245; DOI: 10.1007/s11948-019-00086-3.
Abstract
Mobile health (mHealth) research involving pervasive sensors, mobile apps, and other novel data collection tools and methods presents new ethical, legal, and social challenges specific to informed consent, data management, and bystander rights. To address these challenges, a participatory design approach was deployed whereby stakeholders contributed to the development of a web-based commons supporting the mHealth research community, including researchers and ethics board members. The CORE (Connected and Open Research Ethics) platform now features a community forum, a resource library, and a network of nearly 600 global members. The utility of the participatory design process was evaluated by analyzing activities carried out over an 8-month design phase consisting of 86 distinct events, including iterative design deliberations and social media engagement. This article describes how participatory design yielded 55 new features directly mapped to community needs and discusses their relationship to user engagement, as demonstrated by a steady increase in CORE member activity and followers on Twitter.
Affiliation(s)
- John Harlow: School for the Future of Innovation in Society, Arizona State University, PO Box 875603, Tempe, AZ, 85287-5603, USA
- Nadir Weibel: Department of Computer Science and Engineering, University of California San Diego, 9500 Gilman Dr #0404, La Jolla, CA, 92093, USA
- Rasheed Al Kotob: Department of Nano Engineering, University of California San Diego, 9500 Gilman Dr #0448, La Jolla, CA, 92093, USA
- Vincent Chan: Department of Computer Science and Engineering, University of California San Diego, 9500 Gilman Dr #0404, La Jolla, CA, 92093, USA
- Cinnamon Bloss: Department of Family Medicine and Public Health, University of California San Diego, 9500 Gilman Dr #0811, La Jolla, CA, 92093, USA
- Rubi Linares-Orozco: Office of Regulatory Compliance, University of California San Diego, 9500 Gilman Dr, La Jolla, CA, 92093, USA
- Michelle Takemoto: Department of Family Medicine and Public Health, University of California San Diego, 9500 Gilman Dr #0811, La Jolla, CA, 92093, USA
- Camille Nebeker: Department of Family Medicine and Public Health, University of California San Diego, 9500 Gilman Dr #0811, La Jolla, CA, 92093, USA

17
Dawson G, Sapiro G. Potential for Digital Behavioral Measurement Tools to Transform the Detection and Diagnosis of Autism Spectrum Disorder. JAMA Pediatr 2019; 173:305-306. PMID: 30715131; PMCID: PMC7112503; DOI: 10.1001/jamapediatrics.2018.5269.
Affiliation(s)
- Geraldine Dawson: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University, Durham, North Carolina
- Guillermo Sapiro: Department of Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina

18
Campbell K, Carpenter KLH, Hashemi J, Espinosa S, Marsan S, Borg JS, Chang Z, Qiu Q, Vermeer S, Adler E, Tepper M, Egger HL, Baker JP, Sapiro G, Dawson G. Computer vision analysis captures atypical attention in toddlers with autism. Autism 2019; 23:619-628. PMID: 29595333; PMCID: PMC6119515; DOI: 10.1177/1362361318766247.
Abstract
To demonstrate the capability of computer vision analysis to detect atypical orienting and attention behaviors in toddlers with autism spectrum disorder. One hundred four toddlers aged 16-31 months (mean = 22 months) participated in this study. Twenty-two of the toddlers had autism spectrum disorder and 82 had typical development or developmental delay. Toddlers watched video stimuli on a tablet while the built-in camera recorded their head movement. Computer vision analysis measured participants' attention and orienting in response to name calls. Reliability of the computer vision analysis algorithm was tested against a human rater. Differences in behavior were analyzed between the autism spectrum disorder group and the comparison group. Reliability between computer vision analysis and human coding for orienting to name was excellent (intraclass correlation coefficient 0.84, 95% confidence interval 0.67-0.91). Only 8% of toddlers with autism spectrum disorder oriented to name calling on >1 trial, compared to 63% of toddlers in the comparison group (p = 0.002). Mean latency to orient was significantly longer for toddlers with autism spectrum disorder (2.02 vs 1.06 s, p = 0.04). Sensitivity for autism spectrum disorder of atypical orienting was 96% and specificity was 38%. Older toddlers with autism spectrum disorder showed less attention to the videos overall (p = 0.03). Automated coding offers a reliable, quantitative method for detecting atypical social orienting and reduced sustained attention in toddlers with autism spectrum disorder.
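The latency measure reported above can be illustrated with a toy reconstruction. This is not the study's actual algorithm; the yaw threshold, response window, and all names are assumptions. Given a per-frame head-yaw series from a face tracker and the frame of the name call, latency is the time until yaw first deviates past a turn threshold:

```python
def latency_to_orient(yaw_deg, fps, call_frame, turn_deg=30.0, max_s=4.0):
    """Seconds from a name call to the first head turn past turn_deg,
    or None if no orienting occurs within max_s (a missed trial).

    yaw_deg: per-frame head yaw (degrees) from a face tracker,
             with 0 meaning the child faces the screen.
    """
    baseline = yaw_deg[call_frame]
    last = min(len(yaw_deg), call_frame + int(max_s * fps))
    for f in range(call_frame, last):
        if abs(yaw_deg[f] - baseline) >= turn_deg:
            return (f - call_frame) / fps
    return None

# synthetic 30 fps trace: still for 1 s, then a steady turn after the call
yaw = [0.0] * 30 + [i * 1.0 for i in range(31)]
lat = latency_to_orient(yaw, fps=30, call_frame=30)
```

Trials returning None would correspond to the "did not orient" outcome the abstract tabulates.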
Affiliation(s)
- Kathleen Campbell: Duke University School of Medicine, Department of Psychiatry and Behavioral Sciences
- Kimberly LH Carpenter: Duke University School of Medicine, Department of Psychiatry and Behavioral Sciences
- Jordan Hashemi: Duke University, Department of Electrical and Computer Engineering
- Steven Espinosa: Duke University, Department of Electrical and Computer Engineering
- Samuel Marsan: Duke University School of Medicine, Department of Psychiatry and Behavioral Sciences
- Zhuoqing Chang: Duke University, Department of Electrical and Computer Engineering
- Qiang Qiu: Duke University, Department of Electrical and Computer Engineering
- Saritha Vermeer: Duke University School of Medicine, Department of Psychiatry and Behavioral Sciences
- Mariano Tepper: Duke University, Department of Electrical and Computer Engineering
- Helen L Egger: Duke University School of Medicine, Department of Psychiatry and Behavioral Sciences
- Guillermo Sapiro: Duke University, Department of Electrical and Computer Engineering; Department of Computer Science, Department of Biomedical Engineering, Department of Mathematics
- Geraldine Dawson: Duke University School of Medicine, Department of Psychiatry and Behavioral Sciences

19
Sapiro G, Hashemi J, Dawson G. Computer vision and behavioral phenotyping: an autism case study. Curr Opin Biomed Eng 2019; 9:14-20. PMID: 37786644; PMCID: PMC10544819; DOI: 10.1016/j.cobme.2018.12.002.
Abstract
Despite significant recent advances in molecular genetics and neuroscience, behavioral ratings based on clinical observations are still the gold standard for screening, diagnosing, and assessing outcomes in neurodevelopmental disorders, including autism spectrum disorder. Such behavioral ratings are subjective, require significant clinician expertise and training, typically do not capture data from the children in their natural environments such as homes or schools, and are not scalable for large population screening, low-income communities, or longitudinal monitoring, all of which are critical for outcome evaluation in multisite studies and for understanding and evaluating symptoms in the general population. The development of computational approaches to standardized objective behavioral assessment is, thus, a significant unmet need in autism spectrum disorder in particular and developmental and neurodegenerative disorders in general. Here, we discuss how computer vision, and machine learning, can develop scalable low-cost mobile health methods for automatically and consistently assessing existing biomarkers, from eye tracking to movement patterns and affect, while also providing tools and big data for novel discovery.
Affiliation(s)
- Guillermo Sapiro: Electrical and Computer Engineering, Computer Sciences, Biomedical Engineering, and Math, Duke University, Durham, NC, 27707, United States
- Jordan Hashemi: Electrical and Computer Engineering, Duke University, Durham, NC, 27707, United States
- Geraldine Dawson: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Science, Duke University, Durham, NC, 27707, United States

20
Del Coco M, Leo M, Carcagni P, Fama F, Spadaro L, Ruta L, Pioggia G, Distante C. Study of Mechanisms of Social Interaction Stimulation in Autism Spectrum Disorder by Assisted Humanoid Robot. IEEE Trans Cogn Dev Syst 2018. DOI: 10.1109/tcds.2017.2783684.
21
Dawson G, Campbell K, Hashemi J, Lippmann SJ, Smith V, Carpenter K, Egger H, Espinosa S, Vermeer S, Baker J, Sapiro G. Atypical postural control can be detected via computer vision analysis in toddlers with autism spectrum disorder. Sci Rep 2018; 8:17008. PMID: 30451886; PMCID: PMC6242931; DOI: 10.1038/s41598-018-35215-8.
Abstract
Evidence suggests that differences in motor function are an early feature of autism spectrum disorder (ASD). One aspect of motor ability that develops during childhood is postural control, reflected in the ability to maintain a steady head and body position without excessive sway. Observational studies have documented differences in postural control in older children with ASD. The present study used computer vision analysis to assess midline head postural control, as reflected in the rate of spontaneous head movements during states of active attention, in 104 toddlers between 16 and 31 months of age (mean = 22 months), 22 of whom were diagnosed with ASD. Time-series data revealed robust group differences in the rate of head movements while the toddlers watched movies depicting social and nonsocial stimuli. Toddlers with ASD exhibited a significantly higher rate of head movement as compared to non-ASD toddlers, suggesting difficulties in maintaining midline position of the head while engaging attentional systems. The use of digital phenotyping approaches, such as computer vision analysis, to quantify variation in early motor behaviors will allow for more precise, objective, and quantitative characterization of early motor signatures and potentially provide new automated methods for early autism risk identification.
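A rate-of-head-movement measure like the one above can be sketched from tracked head positions. The pixel threshold, function name, and displacement heuristic here are illustrative assumptions, not the paper's method:

```python
import math

def head_movement_rate(positions, fps, min_px=5.0):
    """Head movements per minute: counts frame-to-frame displacements of
    the head center exceeding min_px, a crude proxy for spontaneous moves.

    positions: (x, y) head-center coordinates in pixels, one per frame.
    """
    moves = sum(
        1 for (x0, y0), (x1, y1) in zip(positions, positions[1:])
        if math.hypot(x1 - x0, y1 - y0) > min_px
    )
    seconds = (len(positions) - 1) / fps
    return 60.0 * moves / seconds if seconds > 0 else 0.0

# a perfectly still head yields rate 0; one abrupt 20 px shift yields a
# nonzero rate over the same 2-second window
still = head_movement_rate([(100.0, 100.0)] * 60, fps=30)
shift = head_movement_rate([(100.0, 100.0)] * 30 + [(120.0, 100.0)] * 30, fps=30)
```

Comparing such per-child rates between groups is the kind of time-series analysis the abstract describes.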
Affiliation(s)
- Geraldine Dawson: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University, Durham, North Carolina, USA
- Jordan Hashemi: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University, Durham, North Carolina, USA; Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA
- Steven J Lippmann: Department of Population Health Sciences, Duke University, Durham, North Carolina, USA
- Valerie Smith: Department of Population Health Sciences, Duke University, Durham, North Carolina, USA
- Kimberly Carpenter: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University, Durham, North Carolina, USA
- Helen Egger: NYU Langone Child Study Center, New York University, New York, New York, USA
- Steven Espinosa: Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA
- Saritha Vermeer: Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University, Durham, North Carolina, USA
- Jeffrey Baker: Department of Pediatrics, Duke University, Durham, NC, USA
- Guillermo Sapiro: Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA; Departments of Biomedical Engineering, Computer Science, and Mathematics, Duke University, Durham, NC, USA

22
Egger HL, Dawson G, Hashemi J, Carpenter KLH, Espinosa S, Campbell K, Brotkin S, Schaich-Borg J, Qiu Q, Tepper M, Baker JP, Bloomfield RA, Sapiro G. Automatic emotion and attention analysis of young children at home: a ResearchKit autism feasibility study. NPJ Digit Med 2018; 1:20. PMID: 31304303; PMCID: PMC6550157; DOI: 10.1038/s41746-018-0024-6.
Abstract
Current tools for objectively measuring young children's observed behaviors are expensive and time-consuming, and require extensive training and professional administration. The lack of scalable, reliable, and validated tools impacts access to evidence-based knowledge and limits our capacity to collect population-level data in non-clinical settings. To address this gap, we developed mobile technology to collect videos of young children while they watched movies designed to elicit autism-related behaviors, and then used automatic behavioral coding of these videos to quantify children's emotions and behaviors. We present results from our iPhone study Autism & Beyond, built on ResearchKit's open-source platform. The entire study, from an e-Consent process to stimuli presentation and data collection, was conducted within an iPhone-based app available in the Apple Store. Over 1 year, 1756 families with children aged 12-72 months participated in the study, completing 5618 caregiver-reported surveys and uploading 4441 videos recorded in the child's natural settings. Usable data were collected on 87.6% of the uploaded videos. Automatic coding identified significant differences in emotion and attention by age, sex, and autism risk status. This study demonstrates the acceptability of an app-based tool to caregivers, their willingness to upload videos of their children, the feasibility of caregiver-collected data in the home, and the application of automatic behavioral encoding to quantify emotion and attention variables that are clinically meaningful and may be refined to screen children for autism and developmental disorders outside of clinical settings. This technology has the potential to transform how we screen and monitor children's development.
Affiliation(s)
- Helen L. Egger: Department of Psychiatry and Behavioral Sciences, Duke Health, Durham, USA; present address: Department of Child and Adolescent Psychiatry, NYU Langone Health, Adjunct at Duke Health, Durham, USA
- Geraldine Dawson: Department of Psychiatry and Behavioral Sciences, Duke Health, Durham, USA; Department of Psychiatry and Behavioral Sciences, Duke Center for Autism and Brain Development, Duke Institute for Brain Sciences, Durham, USA
- Jordan Hashemi: Department of Electrical and Computer Engineering, Duke University, Durham, USA
- Kimberly L. H. Carpenter: Department of Psychiatry and Behavioral Sciences, Duke Health, Durham, USA; Department of Psychiatry and Behavioral Sciences, Duke Center for Autism and Brain Development, Duke Institute for Brain Sciences, Durham, USA
- Steven Espinosa: Department of Electrical and Computer Engineering, Duke University, Durham, USA
- Kathleen Campbell: Department of Psychiatry and Behavioral Sciences, Duke Health, Durham, USA
- Samuel Brotkin: Department of Psychiatry and Behavioral Sciences, Duke Health, Durham, USA
- Jana Schaich-Borg: Department of Psychiatry and Behavioral Sciences, Duke Health, Durham, USA
- Qiang Qiu: Department of Electrical and Computer Engineering, Duke University, Durham, USA
- Mariano Tepper: Department of Electrical and Computer Engineering, Duke University, Durham, USA
- Richard A. Bloomfield: Department of Pediatrics, Duke Health, Durham, USA; present address: Apple, Inc., Cupertino, USA
- Guillermo Sapiro: Department of Electrical and Computer Engineering, Duke University, Durham, USA; Department of Biomedical Engineering, Department of Computer Sciences, Department of Mathematics, Duke University, Durham, USA

23
Samad MD, Diawara N, Bobzien JL, Harrington JW, Witherow MA, Iftekharuddin KM. A Feasibility Study of Autism Behavioral Markers in Spontaneous Facial, Visual, and Hand Movement Response Data. IEEE Trans Neural Syst Rehabil Eng 2018; 26:353-361. PMID: 29432106; DOI: 10.1109/tnsre.2017.2768482.
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental disability with atypical traits in behavioral and physiological responses. These atypical traits may be too subtle and subjective to measure reliably through tedious visual scoring. Alternatively, the use of intrusive sensors to measure psychophysical responses in individuals with ASD may cause inhibition and bias. This paper proposes a novel experimental protocol for non-intrusive sensing and analysis of facial expression, visual scanning, and eye-hand coordination to investigate behavioral markers for ASD. An institutional review board-approved pilot study was conducted to collect response data from two groups of subjects (ASD and control) while they engaged in tasks of visualization, recognition, and manipulation. For the first time in the ASD literature, the facial action coding system was used to classify spontaneous facial responses. Statistical analyses reveal a significantly (p < 0.01) higher prevalence of smile expression in the group with ASD, with eye gaze significantly averted (p < 0.05) from the face in the visual stimuli. This uncontrolled manifestation of smiling without proper visual engagement suggests impairment in reciprocal social communication, e.g., social smiling. The group with ASD also shows poor correlation between eye-gaze and hand-movement data, suggesting deficits in motor coordination while performing a dynamic manipulation task. Simultaneous sensing and analysis of multimodal response data may provide useful quantitative insights into ASD and facilitate early detection of symptoms for effective intervention planning.
24
Thevenot J, Lopez MB, Hadid A. A Survey on Computer Vision for Assistive Medical Diagnosis From Faces. IEEE J Biomed Health Inform 2017; 22:1497-1511. PMID: 28991753; DOI: 10.1109/jbhi.2017.2754861.
Abstract
Automatic medical diagnosis is an emerging center of interest in computer vision, as it provides unobtrusive, objective information on a patient's condition. The face, as a mirror of health status, can reveal symptomatic indications of specific diseases. Thus, the detection of facial abnormalities or atypical features is of utmost importance for medical diagnostics. This survey aims to give an overview of recent developments in medical diagnostics from facial images based on computer vision methods. Various approaches have been considered to assess facial symptoms and to eventually provide further help to practitioners. However, the developed tools are still seldom used in clinical practice, since their reliability remains a concern due to the lack of clinical validation of the methodologies and their inadequate applicability. Nonetheless, efforts are being made to provide robust solutions suitable for healthcare environments, by dealing with practical issues such as real-time assessment or patient positioning. This survey provides an updated collection of the most relevant and innovative solutions in facial image analysis. The findings show that, with the help of computer vision methods, over 30 medical conditions can be preliminarily diagnosed from the automatic detection of some of their symptoms. Furthermore, future perspectives, such as the need for interdisciplinary collaboration and for collecting publicly available databases, are highlighted.