1
Bişkin OT, Candemir C, Gonul AS, Selver MA. Diverse Task Classification from Activation Patterns of Functional Neuro-Images Using Feature Fusion Module. Sensors (Basel) 2023; 23:3382. [PMID: 37050440] [PMCID: PMC10098749] [DOI: 10.3390/s23073382] [Received: 01/24/2023] [Revised: 03/08/2023] [Accepted: 03/20/2023]
Abstract
One of the emerging fields in functional magnetic resonance imaging (fMRI) is the decoding of different stimulations. The underlying idea is to reveal the hidden representative signal patterns of various fMRI tasks for achieving high task-classification performance. Unfortunately, when multiple tasks are processed, performance remains limited due to several challenges, which are rarely addressed since the majority of state-of-the-art studies cover a single neuronal activity task. Accordingly, the first contribution of this study is the collection and release of a rigorously acquired dataset, which contains cognitive, behavioral, and affective fMRI tasks together with resting state. After a comprehensive analysis of the pitfalls of existing systems on this new dataset, we propose an automatic multitask classification (MTC) strategy using a feature fusion module (FFM). FFM aims to create a unique signature for each task by combining deep features with time-frequency representations. We show that FFM creates a feature space that is superior for representing task characteristics compared to using either feature type individually. Finally, for MTC, we test a diverse set of deep models and analyze their complementarity. Our results reveal higher classification accuracy than benchmark methods. Both the dataset and the code are accessible to researchers for further development.
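The abstract describes the fusion idea but not the exact FFM architecture. The following is a minimal hypothetical sketch of the general pattern (combining a deep embedding with a time-frequency summary of a signal into one signature vector); the windowed-FFT transform, the per-branch L2 normalization, and all names and dimensions here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def time_frequency_features(signal, win=32, hop=16):
    """Magnitude spectrogram via windowed FFT, averaged over time.

    A stand-in for the time-frequency branch; the paper's exact
    transform is not specified in the abstract."""
    frames = [signal[i:i + win] for i in range(0, len(signal) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(np.stack(frames), axis=1))  # (n_frames, win//2 + 1)
    return spec.mean(axis=0)  # one summary value per frequency bin

def fuse_features(deep_feat, tf_feat):
    """Fuse the two branches by L2-normalizing each and concatenating,
    so neither branch dominates the joint task signature."""
    d = deep_feat / (np.linalg.norm(deep_feat) + 1e-8)
    t = tf_feat / (np.linalg.norm(tf_feat) + 1e-8)
    return np.concatenate([d, t])

rng = np.random.default_rng(0)
bold = rng.standard_normal(256)   # toy stand-in for an fMRI time course
deep = rng.standard_normal(64)    # placeholder deep-network embedding
signature = fuse_features(deep, time_frequency_features(bold))
print(signature.shape)            # fused signature: (64 + 17,)
```

A downstream classifier would then be trained on such fused signatures rather than on either branch alone.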
Affiliation(s)
- Osman Tayfun Bişkin
- Department of Electrical and Electronics Engineering, Burdur Mehmet Akif Ersoy University, Burdur 15030, Turkey
- Cemre Candemir
- International Computer Institute, Ege University, Izmir 35100, Turkey
- Standardization of Computational Anatomy Techniques, SoCAT Lab, Ege University, Izmir 35100, Turkey
- Ali Saffet Gonul
- Standardization of Computational Anatomy Techniques, SoCAT Lab, Ege University, Izmir 35100, Turkey
- Department of Psychiatry, Medical Faculty, Ege University, Izmir 35100, Turkey
- Mustafa Alper Selver
- Department of Electrical and Electronics Engineering and Izmir Health Technologies Development and Accelerator (BioIzmir), Dokuz Eylul University, Izmir 35160, Turkey
2
Creation, Analysis and Evaluation of AnnoMI, a Dataset of Expert-Annotated Counselling Dialogues. Future Internet 2023. [DOI: 10.3390/fi15030110]
Abstract
Research on the analysis of counselling conversations through natural language processing methods has seen remarkable growth in recent years. However, the potential of this field has been greatly limited by the lack of access to publicly available therapy dialogues, especially those with expert annotations. This barrier was recently lowered by the release of AnnoMI, the first publicly and freely available conversation dataset of 133 faithfully transcribed and expert-annotated demonstrations of high- and low-quality motivational interviewing (MI), an effective therapy strategy that evokes client motivation for positive change. In this work, we introduce new expert-annotated utterance attributes to AnnoMI and describe the entire data collection process in more detail, including dialogue source selection, transcription, annotation, and post-processing. Based on the expert annotations on key MI aspects, we carry out thorough analyses of AnnoMI with respect to counselling-related properties at the utterance, conversation, and corpus levels. Furthermore, we introduce utterance-level prediction tasks with potential real-world impact and build baseline models. Finally, we examine the performance of the models on dialogues of different topics and probe the generalisability of the models to unseen topics.
3
Developing an Implementation Model for ADHD Intervention in Community Clinics: Leveraging Artificial Intelligence and Digital Technology. Cognitive and Behavioral Practice 2023. [DOI: 10.1016/j.cbpra.2023.02.001]
4
Janssens M, Meslec N, Leenders RTAJ. Collective intelligence in teams: Contextualizing collective intelligent behavior over time. Front Psychol 2022; 13:989572. [DOI: 10.3389/fpsyg.2022.989572] [Received: 07/08/2022] [Accepted: 10/04/2022]
Abstract
Collective intelligence (CI) in organizational teams has been predominantly understood and explained in terms of the quality of the outcomes that the team produces. This manuscript aims to extend the understanding of CI in teams by disentangling the core of actual collectively intelligent team behavior as it unfolds over time during a collaboration period. We posit that outcomes do support the presence of CI, but that collective intelligence itself resides in the interaction processes within the team. Teams behave collectively intelligently when their collective behaviors during the collaboration period are in line with the requirements of the (cognitive) tasks the team is assigned and of the (changing) environment. This perspective results in a challenging but promising research agenda armed with new research questions that call for unraveling fine-grained longitudinal interaction processes over time. We conclude by exploring methodological considerations that assist researchers in aligning concept and methodology. In sum, this manuscript proposes a more direct, thorough, and nuanced understanding of collective intelligence in teams, achieved by disentangling micro-level team behaviors over the course of a collaboration period. With this in mind, the field of CI will gain a more fine-grained understanding of what really happens at which point in time, i.e., when teams behave more or less intelligently. Additionally, once we understand collectively intelligent processes in teams, we can organize targeted interventions to improve or maintain collective intelligence in teams.
5
Creed TA, Salama L, Slevin R, Tanana M, Imel Z, Narayanan S, Atkins DC. Enhancing the quality of cognitive behavioral therapy in community mental health through artificial intelligence generated fidelity feedback (Project AFFECT): a study protocol. BMC Health Serv Res 2022; 22:1177. [PMID: 36127689] [PMCID: PMC9487132] [DOI: 10.1186/s12913-022-08519-9] [Received: 06/30/2022] [Accepted: 09/02/2022]
Abstract
Background Each year, millions of Americans receive evidence-based psychotherapies (EBPs) like cognitive behavioral therapy (CBT) for the treatment of mental and behavioral health problems. Yet, at present, there is no scalable method for evaluating the quality of psychotherapy services, leaving EBP quality and effectiveness largely unmeasured and unknown. Project AFFECT will develop and evaluate an AI-based software system to automatically estimate CBT fidelity from a recording of a CBT session. Project AFFECT is an NIMH-funded research partnership between the Penn Collaborative for CBT and Implementation Science and Lyssn.io, Inc. ("Lyssn"), a start-up developing objective, scalable, and cost-efficient AI-based technologies to support training, supervision, and quality assurance of EBPs. Lyssn provides HIPAA-compliant, cloud-based software for secure recording, sharing, and reviewing of therapy sessions, which includes AI-generated metrics for CBT. The proposed tool will build from and be integrated into this core platform. Methods Phase I will work from an existing software prototype to develop a LyssnCBT user interface geared to the needs of community mental health (CMH) agencies. Core activities include a user-centered design focus group and interviews with community mental health therapists, supervisors, and administrators to inform the design and development of LyssnCBT. LyssnCBT will be evaluated for usability and implementation readiness in a final stage of Phase I. Phase II will conduct a stepped-wedge, hybrid implementation-effectiveness randomized trial (N = 1,875 clients) to evaluate the effectiveness of LyssnCBT in improving therapist CBT skills and client outcomes and reducing client drop-out. Analyses will also examine the hypothesized mechanism of action underlying LyssnCBT.
Discussion Successful execution will provide automated, scalable CBT fidelity feedback for the first time, supporting high-quality training, supervision, and quality assurance, and providing a core technology foundation that could support the quality delivery of a range of EBPs in the future. Trial registration ClinicalTrials.gov; NCT05340738; approved 4/21/2022.
Collapse
Affiliation(s)
- Torrey A Creed
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA; Lyssn.io, Inc., Seattle, USA
- Leah Salama
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA
- Zac Imel
- Lyssn.io, Inc., Seattle, USA; Department of Educational Psychology, University of Utah, Salt Lake City, USA
- Shrikanth Narayanan
- Lyssn.io, Inc., Seattle, USA; Viterbi School of Engineering, University of Southern California, Los Angeles, USA
- David C Atkins
- Lyssn.io, Inc., Seattle, USA; Department of Psychiatry and Behavioral Sciences, University of Washington School of Medicine, Seattle, USA
6
Hasan M, Jefferson N, Hain T, Dawson J. Automatic detection of behavioural codes in team interactions. Comput Speech Lang 2022. [DOI: 10.1016/j.csl.2021.101339]
7
Flemotomos N, Martinez VR, Chen Z, Creed TA, Atkins DC, Narayanan S. Automated quality assessment of cognitive behavioral therapy sessions through highly contextualized language representations. PLoS One 2021; 16:e0258639. [PMID: 34679105] [PMCID: PMC8535177] [DOI: 10.1371/journal.pone.0258639] [Received: 02/24/2021] [Accepted: 10/02/2021]
Abstract
During a psychotherapy session, the counselor typically adopts techniques that are codified along specific dimensions (e.g., 'displays warmth and confidence', or 'attempts to set up collaboration') to facilitate the evaluation of the session. Those constructs, traditionally scored by trained human raters, reflect the complex nature of psychotherapy and depend heavily on the context of the interaction. Recent advances in deep contextualized language models offer an avenue for accurate in-domain linguistic representations, which can lead to robust recognition and scoring of such psychotherapy-relevant behavioral constructs and support quality assurance and supervision. In this work, we propose a BERT-based model for automatic behavioral scoring of a specific type of psychotherapy, Cognitive Behavioral Therapy (CBT), where prior work is limited to frequency-based language features and/or short text excerpts that do not capture the unique elements involved in a spontaneous long conversational interaction. The model focuses on the classification of therapy sessions with respect to the overall score achieved on the widely used Cognitive Therapy Rating Scale (CTRS), but is trained in a multi-task manner in order to achieve higher interpretability. BERT-based representations are further augmented with available therapy metadata, providing relevant non-linguistic context and leading to consistent performance improvements. We train and evaluate our models on a set of 1,118 real-world therapy sessions, recorded and automatically transcribed. Our best model achieves an F1 score of 72.61% on the binary classification task of low vs. high total CTRS.
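The overall recipe the abstract describes, pooled language features concatenated with session metadata and scored by one head per CTRS item trained jointly, can be sketched as follows. This is a hypothetical illustration only: the item names, dimensions, and random linear heads are placeholders standing in for trained BERT representations and the paper's actual multi-task model.

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative subset of CTRS items; the full scale has more dimensions.
CTRS_ITEMS = ["agenda", "feedback", "collaboration", "guided_discovery"]

def multitask_scores(session_embedding, metadata, weights):
    """Multi-task head sketch: a pooled session-level language embedding
    is concatenated with non-linguistic metadata, then one linear head
    per CTRS item scores the session. Joint training of the per-item
    heads is what yields the interpretability the abstract mentions."""
    x = np.concatenate([session_embedding, metadata])
    return {item: float(w @ x) for item, w in zip(CTRS_ITEMS, weights)}

emb = rng.standard_normal(768)        # stand-in for a pooled BERT vector
meta = np.array([1.0, 0.0, 23.5])     # e.g., format flags, session length (assumed)
W = rng.standard_normal((len(CTRS_ITEMS), emb.size + meta.size))
scores = multitask_scores(emb, meta, W)
total = sum(scores.values())          # total CTRS drives the low/high label
```

In the paper's setting the low-vs-high decision would come from thresholding the (trained) total score; here `total` is random and serves only to show the data flow.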
Affiliation(s)
- Nikolaos Flemotomos
- Signal Analysis and Interpretation Lab, University of Southern California, Los Angeles, CA, United States of America
- Victor R. Martinez
- Signal Analysis and Interpretation Lab, University of Southern California, Los Angeles, CA, United States of America
- Zhuohao Chen
- Signal Analysis and Interpretation Lab, University of Southern California, Los Angeles, CA, United States of America
- Torrey A. Creed
- Department of Psychiatry, University of Pennsylvania, Philadelphia, PA, United States of America
- David C. Atkins
- Department of Psychiatry and Behavioral Sciences, University of Washington, Seattle, WA, United States of America
- Shrikanth Narayanan
- Signal Analysis and Interpretation Lab, University of Southern California, Los Angeles, CA, United States of America
8
Creed TA, Kuo PB, Oziel R, Reich D, Thomas M, O'Connor S, Imel ZE, Hirsch T, Narayanan S, Atkins DC. Knowledge and Attitudes Toward an Artificial Intelligence-Based Fidelity Measurement in Community Cognitive Behavioral Therapy Supervision. Administration and Policy in Mental Health and Mental Health Services Research 2021; 49:343-356. [PMID: 34537885] [PMCID: PMC8930782] [DOI: 10.1007/s10488-021-01167-x] [Accepted: 09/07/2021]
Abstract
To capitalize on investments in evidence-based practices, technology is needed to scale up fidelity assessment and supervision. Stakeholder feedback may facilitate adoption of such tools. This evaluation gathered stakeholder feedback and preferences to explore the feasibility of implementing an automated fidelity-scoring supervision tool in community mental health settings. A partially mixed, sequential research method design was used, including focus group discussions with community mental health therapists (n = 18) and clinical leadership (n = 12) to explore typical supervision practices, followed by discussion of an automated fidelity feedback tool embedded in a cloud-based supervision platform. Interpretation of qualitative findings was enhanced through quantitative measures of participants' use of technology and perceptions of acceptability, appropriateness, and feasibility of the tool. Initial perceptions of acceptability, appropriateness, and feasibility of automated fidelity tools were positive and increased after introduction of the automated tool. Standard supervision was described as collaboratively guided and focused on clinical content, self-care, and documentation. Participants highlighted the tool's utility for supervision, training, and professional growth, but questioned its ability to evaluate rapport, cultural responsiveness, and non-verbal communication. Concerns were raised about privacy and the impact of low scores on therapist confidence. Desired features included intervention labeling and transparency about how scores related to session content. Opportunities for asynchronous, remote, and targeted supervision were particularly valued. Stakeholder feedback suggests that automated fidelity measurement could augment supervision practices. Future research should examine the relations among the use of such supervision tools, clinician skill, and client outcomes.
Affiliation(s)
- Torrey A Creed
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Penn Collaborative for CBT & Implementation Science, 3535 Market Street, Suite 3046, Philadelphia, PA, 19104, USA
- Patty B Kuo
- Department of Educational Psychology, University of Utah, Salt Lake City, UT, USA
- Rebecca Oziel
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Penn Collaborative for CBT & Implementation Science, 3535 Market Street, Suite 3046, Philadelphia, PA, 19104, USA
- Danielle Reich
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Penn Collaborative for CBT & Implementation Science, 3535 Market Street, Suite 3046, Philadelphia, PA, 19104, USA
- Sydne O'Connor
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Penn Collaborative for CBT & Implementation Science, 3535 Market Street, Suite 3046, Philadelphia, PA, 19104, USA
- Zac E Imel
- Department of Educational Psychology, University of Utah, Salt Lake City, UT, USA
- Tad Hirsch
- Northeastern University, Boston, MA, USA
- Shrikanth Narayanan
- Signal Analysis and Interpretation Lab, University of Southern California, Los Angeles, CA, USA
- David C Atkins
- Department of Psychiatry and Behavioral Sciences, University of Washington, Seattle, WA, USA
9
Towards End-2-end Learning for Predicting Behavior Codes from Spoken Utterances in Psychotherapy Conversations. Proceedings of the Conference. Association for Computational Linguistics. Meeting 2020; 2020:3797-3803. [PMID: 36751434] [PMCID: PMC9901279] [DOI: 10.18653/v1/2020.acl-main.351]
Abstract
Spoken language understanding tasks usually rely on pipelines involving complex processing blocks such as voice activity detection, speaker diarization, and automatic speech recognition (ASR). We propose a novel framework for predicting utterance-level labels directly from speech features, removing the dependency on first generating transcripts and enabling transcription-free behavioral coding. Our classifier uses a pretrained Speech-2-Vector encoder as a bottleneck to generate word-level representations from speech features. This pretrained encoder learns to encode the speech features for a word using an objective similar to Word2Vec. Our proposed approach uses only speech features and word segmentation information to predict utterance-level target labels. We show that our model achieves results competitive with state-of-the-art approaches that use transcribed text for the task of predicting psychotherapy-relevant behavior codes.
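The data flow described above, acoustic frames segmented into words, each word mapped to an embedding by a frozen encoder, and an utterance label predicted from the pooled word vectors, can be sketched as below. Everything here is an illustrative assumption: the toy linear encoder stands in for the pretrained Speech-2-Vector bottleneck, and the random linear head stands in for the trained behavior-code classifier.

```python
import numpy as np

rng = np.random.default_rng(2)

def word_vectors_from_speech(frames, word_bounds, encoder):
    """Stand-in for the Speech-2-Vector bottleneck: each word's acoustic
    frames are mean-pooled and projected to a word-level embedding,
    using only word segmentation (frame spans), never a transcript."""
    return np.stack([encoder(frames[a:b].mean(axis=0)) for a, b in word_bounds])

def classify_utterance(word_vecs, head_w):
    """Utterance label = argmax of a linear head over the mean word vector."""
    logits = head_w @ word_vecs.mean(axis=0)
    return int(np.argmax(logits))

frames = rng.standard_normal((120, 40))    # 120 acoustic frames x 40 features
bounds = [(0, 30), (30, 70), (70, 120)]    # word segmentation: 3 words
proj = rng.standard_normal((64, 40))
encoder = lambda pooled: proj @ pooled     # toy frozen word encoder
head = rng.standard_normal((3, 64))        # 3 hypothetical behavior codes
label = classify_utterance(word_vectors_from_speech(frames, bounds, encoder), head)
```

Because no transcript is produced at any point, the ASR stage of the usual pipeline drops out entirely; only the segmenter and the frozen encoder remain upstream of the classifier.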