1
Matt JE, Rizzo DM, Javed A, Eppstein MJ, Manukyan V, Gramling C, Dewoolkar AM, Gramling R. An Acoustical and Lexical Machine-Learning Pipeline to Identify Connectional Silences. J Palliat Med 2023; 26:1627-1633. PMID: 37440175. DOI: 10.1089/jpm.2023.0087. Open Access.
Abstract
Context: Developing scalable methods for conversation analytics is essential for health care communication science and quality improvement. Purpose: To assess the feasibility of automating the identification of a conversational feature, Connectional Silence, which is associated with important patient outcomes. Methods: Using audio recordings from the Palliative Care Communication Research Initiative cohort study, we developed and tested an automated measurement pipeline comprising three machine-learning (ML) tools: a random forest algorithm and a custom convolutional neural network that operate in parallel on audio recordings, followed by a natural language processing algorithm that uses brief excerpts of automated speech-to-text transcripts. Results: Our ML pipeline identified Connectional Silence with an overall sensitivity of 84% and specificity of 92%. For Emotional and Invitational subtypes, we observed sensitivities of 68% and 67%, and specificities of 95% and 97%, respectively. Conclusion: These findings support the capacity for coordinated and complementary ML methods to fully automate the identification of Connectional Silence in natural hospital-based clinical conversations.
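The parallel acoustic-plus-lexical design described in this abstract can be sketched schematically. The following is an illustrative sketch and not the authors' pipeline: the function names, thresholds, and model scores are hypothetical, and the lexical subtype stage is omitted; the evaluation helper simply computes the sensitivity and specificity metrics the abstract reports.

```python
# Illustrative sketch of a two-stage detection pipeline: two acoustic models
# (e.g., a random forest and a CNN) score audio segments in parallel, a
# candidate silence is kept when either model fires, and standard
# confusion-matrix rates evaluate the result against human-coded labels.
# All names and thresholds here are hypothetical.

def combine_acoustic_scores(rf_scores, cnn_scores, threshold=0.5):
    """Flag a segment when either parallel acoustic model exceeds threshold."""
    return [max(rf, cnn) >= threshold for rf, cnn in zip(rf_scores, cnn_scores)]

def sensitivity_specificity(predicted, actual):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    tn = sum(not p and not a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(not p and a for p, a in zip(predicted, actual))
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return sens, spec

# Toy example: four segments scored by both acoustic models.
rf = [0.9, 0.2, 0.1, 0.6]
cnn = [0.1, 0.7, 0.2, 0.4]
flags = combine_acoustic_scores(rf, cnn)   # [True, True, False, True]
truth = [True, False, False, True]         # human-coded ground truth
sens, spec = sensitivity_specificity(flags, truth)
```

Taking the maximum of the two model scores is one simple way to run parallel detectors; the published pipeline may combine its models differently.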
Affiliation(s)
- Jeremy E Matt
- Graduate Program in Complex Systems and Data Science, College of Engineering and Mathematical Sciences, University of Vermont, Burlington, Vermont, USA
- Donna M Rizzo
- Department of Civil and Environmental Engineering, University of Vermont, Burlington, Vermont, USA
- Ali Javed
- Division of Cardiovascular Medicine, Department of Medicine, Stanford University School of Medicine, Stanford University, Stanford, California, USA
- Margaret J Eppstein
- Department of Computer Science, University of Vermont, Burlington, Vermont, USA
- Cailin Gramling
- Graduate Program in Complex Systems and Data Science, College of Engineering and Mathematical Sciences, University of Vermont, Burlington, Vermont, USA
- Advik Mandar Dewoolkar
- Department of Electrical and Biomedical Engineering, University of Vermont, Burlington, Vermont, USA
- Robert Gramling
- Department of Family Medicine, University of Vermont, Burlington, Vermont, USA
2
Cunningham PB, Gilmore J, Naar S, Preston SD, Eubanks CF, Hubig NC, McClendon J, Ghosh S, Ryan-Pettes S. Opening the Black Box of Family-Based Treatments: An Artificial Intelligence Framework to Examine Therapeutic Alliance and Therapist Empathy. Clin Child Fam Psychol Rev 2023; 26:975-993. PMID: 37676364. PMCID: PMC10845126. DOI: 10.1007/s10567-023-00451-6.
Abstract
The evidence-based treatment (EBT) movement has primarily focused on core intervention content or treatment fidelity and has largely ignored the practitioner skills needed to manage interpersonal process issues that emerge during treatment, especially with difficult-to-treat adolescents (delinquent, substance-using, medically non-adherent) and those of color. A chief complaint of "real world" practitioners about manualized treatments is the lack of correspondence between following a manual and managing the microsocial interpersonal processes (e.g., negative affect) that arise in treating "real world" clients. Although family-based EBTs share core similarities (e.g., a focus on family interactions, an emphasis on practitioner engagement and family involvement), most of these treatments lack an evidence base regarding the common implementation and treatment process problems practitioners experience in delivering particular models, especially in mid-treatment, when demands on families to change their behavior are greatest; this gap characterizes the field as a whole. Failure to effectively address common interpersonal processes with difficult-to-treat families likely undermines treatment fidelity, sustained use of EBTs, and treatment outcomes, and contributes to treatment dropout and nonadherence. Recent advancements in wearables, sensing technologies, multivariate time-series analyses, and machine learning allow scientists to make significant advances in the study of psychotherapy processes by looking "under the skin" of the provider-client interpersonal interactions that define therapeutic alliance, empathy, and empathic accuracy, and to assess the predictive validity of these therapy processes (therapeutic alliance, therapist empathy) for treatment outcome.
Moreover, assessment of these processes can be extended to develop procedures for training providers to manage difficult interpersonal processes while maintaining a physiological profile consistent with astute psychotherapeutic skill. This paper argues for opening the "black box" of therapy to advance the science of evidence-based psychotherapy by examining the clinical interior of evidence-based treatments in order to develop the next generation of audit-and-feedback (i.e., systematic review of professional performance) supervision systems.
Affiliation(s)
- Phillippe B Cunningham
- Division of Global and Community Health, Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, 176 Croghan Spur Rd. Ste. 104, Charleston, SC, 29407, USA
- Jordon Gilmore
- Department of Bioengineering, Clemson University, 401-3 Rhodes Research Center, Clemson, SC, USA
- Sylvie Naar
- Center for Translational Behavioral Science, Florida State University, 2010 Levy Avenue Building B, Suite B0266, Tallahassee, FL, USA
- Stephanie D Preston
- Department of Psychology, University of Michigan, 530 Church Street, Ann Arbor, MI, 48109, USA
- Catherine F Eubanks
- Gordon F. Derner School of Psychology, Adelphi University, One South Avenue, Garden City, NY, USA
- Nina Christina Hubig
- School of Computing, Clemson University, 1240 Supply Street, Charleston, SC, 29405, USA
- Jerome McClendon
- Department of Automotive Engineering, Clemson University, 4 Research Drive, Greenville, SC, USA
- Samiran Ghosh
- Department of Biostatistics and Data Science & Coordinating Center for Clinical Trials (CCCT), University of Texas School of Public Health, University of Texas Health Sciences, RAS W-928, 1200 Pressler Street, Houston, TX, 77030, USA
- Stacy Ryan-Pettes
- Department of Psychology and Neuroscience, Baylor University, One Bear Place #97334, Waco, TX, 76798, USA
3
Malgaroli M, Hull TD, Zech JM, Althoff T. Natural language processing for mental health interventions: a systematic review and research framework. Transl Psychiatry 2023; 13:309. PMID: 37798296. PMCID: PMC10556019. DOI: 10.1038/s41398-023-02592-2. Open Access.
Abstract
Neuropsychiatric disorders pose a high societal cost, but their treatment is hindered by a lack of objective outcomes and fidelity metrics. AI technologies, and specifically Natural Language Processing (NLP), have emerged as tools to study mental health interventions (MHI) at the level of their constituent conversations. However, NLP's potential to address clinical and research challenges remains unclear. We therefore conducted a pre-registered systematic review of NLP-MHI studies using PRISMA guidelines (osf.io/s52jh) to evaluate their models and clinical applications, and to identify biases and gaps. Candidate studies (n = 19,756), including peer-reviewed AI conference manuscripts, were collected up to January 2023 through PubMed, PsycINFO, Scopus, Google Scholar, and ArXiv. A total of 102 articles were included to investigate their computational characteristics (NLP algorithms, audio features, machine learning pipelines, outcome metrics), clinical characteristics (clinical ground truths, study samples, clinical focus), and limitations. Results indicate a rapid growth of NLP-MHI studies since 2019, characterized by increased sample sizes and use of large language models. Digital health platforms were the largest providers of MHI data. Ground truth for supervised learning models was based on clinician ratings (n = 31), patient self-report (n = 29), and annotations by raters (n = 26). Text-based features contributed more to model accuracy than audio markers. Patients' clinical presentation (n = 34), response to intervention (n = 11), intervention monitoring (n = 20), providers' characteristics (n = 12), relational dynamics (n = 14), and data preparation (n = 4) were the commonly investigated clinical categories. Limitations of the reviewed studies included lack of linguistic diversity, limited reproducibility, and population bias. A research framework (NLPxMHI) is developed and validated to assist computational and clinical researchers in addressing the remaining gaps in applying NLP to MHI, with the goal of improving clinical utility, data access, and fairness.
Affiliation(s)
- Matteo Malgaroli
- Department of Psychiatry, New York University Grossman School of Medicine, New York, NY, 10016, USA
- James M Zech
- Talkspace, New York, NY, 10025, USA
- Department of Psychology, Florida State University, Tallahassee, FL, 32306, USA
- Tim Althoff
- Department of Computer Science, University of Washington, Seattle, WA, 98195, USA