1
Connaghan KP, Eshghi M, Haenssler AE, Green JR, Wang J, Scheier Z, Keegan M, Clark A, Onnela JP, Burke KM, Berry JD. A Preliminary Investigation of Acoustic Features for Remote Monitoring of Respiration in ALS. Muscle Nerve 2025. [PMID: 40365751 DOI: 10.1002/mus.28435]
Abstract
INTRODUCTION/AIMS There is a substantial need to establish reliable approaches for low-burden at-home monitoring of respiratory function for people with amyotrophic lateral sclerosis (PALS). This preliminary study assessed the potential of acoustic features extracted from a smartphone passage reading task to serve as clinically meaningful outcome measures reflecting instrumental and self-reported respiratory function measures. METHODS Thirty-six PALS completed an in-clinic slow vital capacity (SVC) task, followed by at-home completion of surveys and audio recording of a reading passage using a smartphone application. Speaking rate and pause features were extracted offline. Correlation analysis evaluated the relationship between the acoustic features and both instrumental (SVC) and self-reported (respiratory subscale of the self-entry version of the ALS Functional Rating Scale-Revised; ALSFRS-RSE) measures of respiratory function. Receiver operating characteristic (ROC) analysis with area under the curve (AUC) evaluated the utility of acoustic features for classifying participants with and without respiratory involvement. RESULTS SVC and respiratory self-ratings were significantly correlated with pause, but not rate, measures. Percent pause time was the acoustic feature most strongly correlated with both SVC (r = -0.62) and ALSFRS-RSE respiratory subscale ratings (r = -0.43). ROC analysis revealed that percent pause time classified participants presenting with respiratory involvement based on instrumentation (SVC < 70% predicted [AUC = 0.70]; SVC < 50% predicted [AUC = 0.88]) and self-ratings when using a respiratory ALSFRS-RSE score cut-off of < 11 (AUC = 0.78), but not < 12 (AUC = 0.61). DISCUSSION Percent pause time, extracted from a smartphone-recorded passage reading, offers a promising index for remote assessment and monitoring of respiratory function in PALS.
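The percent pause time metric above is straightforward once speech/silence segmentation is available. A minimal sketch, assuming intervals have already been labeled and using a hypothetical 150 ms minimum pause duration (the paper's own extraction pipeline and threshold are not reproduced here):

```python
def percent_pause_time(intervals, min_pause_s=0.15):
    """Percentage of total recording time spent in pause.

    `intervals` is a list of (label, duration_s) pairs, label being
    "speech" or "silence".  Silences shorter than `min_pause_s` are
    treated as articulatory gaps (e.g., stop closures), not pauses.
    """
    total = sum(dur for _, dur in intervals)
    pauses = sum(dur for label, dur in intervals
                 if label == "silence" and dur >= min_pause_s)
    return 100.0 * pauses / total

# A 9.1 s reading with one 1.0 s pause and one sub-threshold 0.1 s gap:
segments = [("speech", 4.0), ("silence", 1.0), ("speech", 4.0), ("silence", 0.1)]
```

Here `percent_pause_time(segments)` counts only the 1.0 s silence, giving roughly 11% pause time.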
Affiliation(s)
- Kathryn P Connaghan
- Speech and Social Interaction Lab, MGH Institute of Health Professions, Boston, Massachusetts, USA
- Marziye Eshghi
- Speech, Physiology, and Neurobiology of Aging and Dementia Lab, MGH Institute of Health Professions, Boston, Massachusetts, USA
- Athinoula A. Martinos Centre for Biomedical Imaging, Boston, Massachusetts, USA
- Department of Radiology, MGH, Harvard Medical School, Boston, Massachusetts, USA
- Abigail E Haenssler
- Speech and Social Interaction Lab, MGH Institute of Health Professions, Boston, Massachusetts, USA
- Speech and Feeding Disorders Lab, MGH Institute of Health Professions, Boston, Massachusetts, USA
- Jordan R Green
- Speech and Feeding Disorders Lab, MGH Institute of Health Professions, Boston, Massachusetts, USA
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Boston, Massachusetts, USA
- Joycelyn Wang
- Speech and Social Interaction Lab, MGH Institute of Health Professions, Boston, Massachusetts, USA
- Zoe Scheier
- Healey Center for ALS, Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Mackenzie Keegan
- Healey Center for ALS, Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Alison Clark
- Healey Center for ALS, Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Jukka-Pekka Onnela
- Department of Biostatistics, Harvard University, Boston, Massachusetts, USA
- Katherine M Burke
- Healey Center for ALS, Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- MGH Institute of Health Professions, Boston, Massachusetts, USA
- James D Berry
- Healey Center for ALS, Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
2
Dahl KL, Balz MA, Cádiz MD, Stepp CE. How to Efficiently Measure the Intelligibility of People With Parkinson's Disease. American Journal of Speech-Language Pathology 2025; 34:70-84. [PMID: 39475678 PMCID: PMC11745308 DOI: 10.1044/2024_ajslp-24-00080]
Abstract
PURPOSE The purpose of this study was to determine the most efficient approaches to measuring the intelligibility of people with Parkinson's disease (PD) when considering the estimation method, listener experience, number of listeners, number of sentences, and the ways these factors may interact. METHOD Speech-language pathologists (SLPs) and inexperienced listeners estimated the intelligibility of people with and without PD using orthographic transcription or a visual analog scale (VAS). Intelligibility estimates were based on 11 Speech Intelligibility Test sentences. We simulated all combinations of listeners and sentences to compare intelligibility estimates based on fewer listeners and sentences to a speaker-specific benchmark estimate based on the mean intelligibility across all sentences and listeners. RESULTS Intelligibility estimates were closer to the benchmark (i.e., more accurate) when more listeners and sentences were included in the estimation process for transcription- and VAS-based estimates and for SLPs and inexperienced listeners. Differences between the benchmark and subset-based intelligibility estimates were, in some cases, smaller than the minimally detectable change in intelligibility for people with PD. CONCLUSIONS The intelligibility of people with PD can be measured more efficiently by reducing the number of listeners and/or sentences, up to a point, while maintaining the ability to detect change in this outcome. Clinicians and researchers may prioritize either fewer listeners or fewer sentences, depending on the specific constraints of their work setting. However, consideration must be given to listener experience and estimation method, as the effect of reducing the number of listeners and sentences varied with these factors.
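The simulation strategy described — comparing estimates from reduced listener-by-sentence subsets against an all-data benchmark — can be sketched as follows. This is an illustration only; the study's own scoring and sampling details are not reproduced:

```python
import itertools
import statistics

def benchmark(scores):
    """Speaker benchmark: mean intelligibility over all listeners and sentences.

    `scores[i][j]` is listener i's estimate for sentence j."""
    return statistics.mean(s for row in scores for s in row)

def subset_estimates(scores, n_listeners, n_sentences):
    """Every estimate obtainable from n_listeners x n_sentences subsets."""
    n_l, n_s = len(scores), len(scores[0])
    estimates = []
    for listeners in itertools.combinations(range(n_l), n_listeners):
        for sentences in itertools.combinations(range(n_s), n_sentences):
            estimates.append(statistics.mean(
                scores[i][j] for i in listeners for j in sentences))
    return estimates
```

The accuracy of a given subset size can then be summarized as the mean absolute deviation of `subset_estimates` from `benchmark`, and compared against a minimally detectable change.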
Affiliation(s)
- Kimberly L. Dahl
- Department of Speech, Language, and Hearing Sciences, Boston University, MA
- Magdalen A. Balz
- Department of Speech, Language, and Hearing Sciences, Boston University, MA
- Department of Gerontology, University of Massachusetts, Boston
- Manuel Díaz Cádiz
- Department of Speech, Language, and Hearing Sciences, Boston University, MA
- Cara E. Stepp
- Department of Speech, Language, and Hearing Sciences, Boston University, MA
- Department of Gerontology, University of Massachusetts, Boston
- Department of Biomedical Engineering, Boston University, MA
- Department of Otolaryngology–Head and Neck Surgery, Boston University School of Medicine, MA
3
Tobin J, Nelson P, MacDonald B, Heywood R, Cave R, Seaver K, Desjardins A, Jiang PP, Green JR. Automatic Speech Recognition of Conversational Speech in Individuals With Disordered Speech. Journal of Speech, Language, and Hearing Research 2024; 67:4176-4185. [PMID: 38963790 DOI: 10.1044/2024_jslhr-24-00045]
Abstract
PURPOSE This study examines the effectiveness of automatic speech recognition (ASR) for individuals with speech disorders, addressing the gap in performance between read and conversational ASR. We analyze the factors influencing this disparity and the effect of speech mode-specific training on ASR accuracy. METHOD Recordings of read and conversational speech from 27 individuals with various speech disorders were analyzed using both (a) one speaker-independent ASR system trained and optimized for typical speech and (b) multiple ASR models that were personalized to the speech of the participants with disordered speech. Word error rates were calculated for each speech model, read versus conversational, and subject. Linear mixed-effects models were used to assess the impact of speech mode and disorder severity on ASR accuracy. We investigated nine variables, classified as technical, linguistic, or speech impairment factors, for their potential influence on the performance gap. RESULTS We found a significant performance gap between read and conversational speech in both personalized and unadapted ASR models. Speech impairment severity notably impacted recognition accuracy in unadapted models for both speech modes and in personalized models for read speech. Linguistic attributes of utterances were the most influential on accuracy, though atypical speech characteristics also played a role. Including conversational speech samples in model training notably improved recognition accuracy. CONCLUSIONS We observed a significant performance gap in ASR accuracy between read and conversational speech for individuals with speech disorders. This gap was largely due to the linguistic complexity and unique characteristics of speech disorders in conversational speech. Training personalized ASR models using conversational speech significantly improved recognition accuracy, demonstrating the importance of domain-specific training and highlighting the need for further research into ASR systems capable of handling disordered conversational speech effectively.
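Word error rate, the accuracy measure used in this study, is the standard word-level edit distance normalized by reference length. A self-contained sketch of the conventional definition (not the authors' implementation):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with the usual edit-distance dynamic program over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[-1][-1] / len(ref)
```

For example, one substitution in a three-word reference yields a WER of 1/3.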
Affiliation(s)
- Jordan R Green
- MGH Institute of Health Professions, Boston, MA
- Harvard University, Cambridge, MA
4
Darling-White M, Sisk CN. A Preliminary Investigation of Within-Word Silent Intervals Produced by Children With and Without Neurodevelopmental Disorders. American Journal of Speech-Language Pathology 2024; 33:1-18. [PMID: 38963752 PMCID: PMC11427737 DOI: 10.1044/2024_ajslp-23-00183]
Abstract
PURPOSE The categorization of silent intervals during speech production is necessary for accurate measurement of articulation rate and pauses. The primary purpose of this preliminary study was to examine the within-word silent interval associated with the stop closure in word-final stop consonants produced by children with and without neurodevelopmental disorders. METHOD Seven children diagnosed with either cerebral palsy or Down syndrome (i.e., children with neurodevelopmental disorders) and eight typically developing children produced a reading passage. Participants were between the ages of 11 and 16 years. Fifty-eight words from the reading passage were identified as having word-final stop consonants. The closure duration of the word-final stop consonant was calculated both as an absolute duration and as percent pause time. The articulation rate of the entire passage was calculated. The number of closure durations that met or exceeded the minimum duration threshold to be considered a pause (150 ms) was examined descriptively. RESULTS Children with neurodevelopmental disorders produced significantly longer closure durations and significantly slower articulation rates than typically developing children. Children with neurodevelopmental disorders produced closure durations that met or exceeded the minimum duration threshold of a pause, but typically developing children, generally, did not. CONCLUSION These data indicate the need to examine the location of silent intervals that meet the minimum duration threshold of a pause and correct for articulatory events during the measurement of articulation rate and pauses in children with neurodevelopmental disorders.
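The 150 ms pause criterion can be applied directly to measured closure durations. A minimal sketch of the descriptive count reported above (durations in milliseconds are assumed; the authors' analysis scripts are not published here):

```python
PAUSE_THRESHOLD_MS = 150  # minimum silent interval conventionally scored as a pause

def pause_like_closures(closure_durations_ms):
    """Stop-closure durations long enough that a duration-only criterion
    would miscount them as pauses rather than articulatory events."""
    return [d for d in closure_durations_ms if d >= PAUSE_THRESHOLD_MS]
```

For a child whose measured closures were 90, 150, 200, and 40 ms, two of the four would be miscounted as pauses by a duration-only criterion.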
Affiliation(s)
- Meghan Darling-White
- Department of Speech, Language, and Hearing Sciences, University of Arizona, Tucson
- Christine N. Sisk
- Department of Speech, Language, and Hearing Sciences, University of Arizona, Tucson
5
Sullivan L, Martin E, Allison KM. Effects of SPEAK OUT! & LOUD Crowd on Functional Speech Measures in Parkinson's Disease. American Journal of Speech-Language Pathology 2024; 33:1930-1951. [PMID: 38838243 DOI: 10.1044/2024_ajslp-23-00321]
Abstract
PURPOSE This study investigated the effects of the SPEAK OUT! & LOUD Crowd therapy program on speaking rate, percent pause time, intelligibility, naturalness, and communicative participation in individuals with Parkinson's disease (PD). METHOD Six adults with PD completed 12 individual SPEAK OUT! sessions across four consecutive weeks followed by group-based LOUD Crowd sessions for five consecutive weeks. Most therapy sessions were conducted via telehealth, with two participants completing the SPEAK OUT! portion in person. Speech samples were recorded at six time points: three baseline time points prior to SPEAK OUT!, two post-SPEAK OUT! time points, and one post-LOUD Crowd time point. Acoustic measures of speaking rate and percent pause time and listener ratings of speech intelligibility and naturalness were obtained for each time point. Participant self-ratings of communicative participation were also collected at pre- and posttreatment time points. RESULTS Results showed significant improvement in communicative participation scores at a group level following completion of the SPEAK OUT! & LOUD Crowd treatment program. Two participants showed a significant decrease in speaking rate and increase in percent pause time following treatment. Changes in intelligibility and naturalness were not statistically significant. CONCLUSIONS These findings provide preliminary support for the effectiveness of the SPEAK OUT! & LOUD Crowd treatment program in improving communicative participation for people with mild-to-moderate hypokinetic dysarthria secondary to PD. This study is also the first to demonstrate positive effects of this treatment program for people receiving the therapy via telehealth.
Affiliation(s)
- Lauren Sullivan
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA
- Elizabeth Martin
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA
- Kristen M Allison
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA
6
Kasper E, Temp AGM, Köckritz V, Meier L, Machts J, Vielhaber S, Hermann A, Prudlo J. Verbal expressive language minimally affected in non-demented people living with amyotrophic lateral sclerosis. Amyotroph Lateral Scler Frontotemporal Degener 2024; 25:308-316. [PMID: 38306019 DOI: 10.1080/21678421.2024.2307512]
Abstract
Objective: Language dysfunction is one of the most common cognitive impairments in amyotrophic lateral sclerosis (ALS). Although discourse capacities are essential for daily functioning, verbal expressive language has not been widely investigated in ALS. The existing research suggests that discourse impairments are prevalent. This study investigates verbal expressive language in people living with ALS (plwALS) in contrast to healthy controls (HC). Methods: 64 plwALS and 49 age-, gender- and education-matched healthy controls were asked to describe the Cookie Theft Picture Task. The recordings were analyzed for discourse productivity, discourse content, syntactic complexity, speech fluency and verb processing. We applied the Bayesian hypothesis-testing framework, incorporating the effects of dysarthria, cognitive impairment status (CIS), and premorbid crystallized verbal IQ. Results: Compared to HC, plwALS showed only a single impairment: speech dysfluency. Discourse productivity, discourse content, syntactic complexity and verb processing were not impaired. Cognition and dysarthria exceeded the influence of verbal IQ for total words spoken and content density. Cognition alone seemed to explain dysfluency. Body-agent verbs were produced at even higher rates than other verb types. For the remaining outcomes, verbal IQ was the most decisive factor. Conclusions: In contrast to existing research, our data demonstrate no discernible impairment in verbal expressive language in ALS. What proved decisive was accounting for the influence of dysarthria, cognitive impairment status, and verbal IQ on spontaneous verbal expressive language. Minor impairments in verbal expressive language appear to be influenced more by executive dysfunction and dysarthria than by language impairment.
Affiliation(s)
- Elisabeth Kasper
- Department of Neurology, University Medical Centre, Rostock, Germany
- DZNE site Rostock, German Centre for Neurodegenerative Diseases (DZNE), Rostock, Germany
- Anna G M Temp
- DZNE site Rostock, German Centre for Neurodegenerative Diseases (DZNE), Rostock, Germany
- Neurozentrum, Berufsgenossenschaftliches Klinikum Hamburg, Germany
- Verena Köckritz
- DZNE site Rostock, German Centre for Neurodegenerative Diseases (DZNE), Rostock, Germany
- Lisa Meier
- Department of Neurology, University Medical Centre, Rostock, Germany
- Judith Machts
- Department of Neurology, Otto-von-Guericke University, Magdeburg, Germany
- German Centre for Neurodegenerative Diseases (DZNE), Magdeburg, Germany
- Stefan Vielhaber
- Department of Neurology, Otto-von-Guericke University, Magdeburg, Germany
- German Centre for Neurodegenerative Diseases (DZNE), Magdeburg, Germany
- Andreas Hermann
- Department of Neurology, University Medical Centre, Rostock, Germany
- Translational Neurodegeneration Section "Albrecht Kossel", University Medical Centre, Rostock, Germany
- Johannes Prudlo
- Department of Neurology, University Medical Centre, Rostock, Germany
- DZNE site Rostock, German Centre for Neurodegenerative Diseases (DZNE), Rostock, Germany
7
Sonkaya ZZ, Özturk B, Sonkaya R, Taskiran E, Karadas Ö. Using Objective Speech Analysis Techniques for the Clinical Diagnosis and Assessment of Speech Disorders in Patients with Multiple Sclerosis. Brain Sci 2024; 14:384. [PMID: 38672033 PMCID: PMC11047916 DOI: 10.3390/brainsci14040384]
Abstract
Multiple sclerosis (MS) is a chronic neurodegenerative disease of the central nervous system (CNS). It generally affects motor, sensory, cerebellar, cognitive, and language functions. Identifying MS speech disorders with quantitative methods is expected to contribute significantly to the diagnosis and follow-up of MS patients. This study aimed to investigate the speech disorders of MS via objective speech analysis techniques. The study was conducted on 20 patients diagnosed with MS according to the 2017 McDonald criteria and 20 healthy volunteers without any speech or voice pathology. Speech data obtained from patients and healthy individuals were analyzed with the Praat speech analysis program, and classification algorithms were tested to determine the most effective classifier for separating the speech features specific to MS. The K-nearest neighbor (K-NN) algorithm was found to be the most successful classifier (95%) in distinguishing the pathological speech of MS patients from that of healthy individuals. The findings of our study can be considered preliminary data for characterizing the voice of MS patients.
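The classification step can be illustrated with a minimal K-nearest-neighbor sketch. The 2-D feature vectors below are hypothetical placeholders for the acoustic measures (e.g., Praat-derived voice features) the study actually used:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority-vote label of the k training points nearest to `query`.

    `train` is a list of (feature_vector, label) pairs; Euclidean distance."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical 2-D feature vectors (e.g., jitter, shimmer), two classes:
train = [((0.2, 0.1), "control"), ((0.3, 0.2), "control"),
         ((0.9, 0.8), "MS"), ((1.0, 0.9), "MS"), ((0.8, 1.0), "MS")]
```

In practice the classifier would be evaluated with cross-validation, which is how accuracy figures such as the 95% reported above are typically obtained.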
Affiliation(s)
- Zeynep Z. Sonkaya
- Department of Experimental Linguistics, Ankara University, 06590 Ankara, Turkey
- Bilgin Özturk
- Department of Neurology, Gülhane Medicine Faculty, Health Science University, 06010 Ankara, Turkey
- Rıza Sonkaya
- Department of Neurology, Gülhane Medicine Faculty, Health Science University, 06010 Ankara, Turkey
- Esra Taskiran
- Department of Neurology, Antalya Training and Research Hospital, 07100 Antalya, Turkey
- Ömer Karadas
- Department of Neurology, Gülhane Medicine Faculty, Health Science University, 06010 Ankara, Turkey
8
Avetissian T, Formosa F, Badel A, Delnavaz A, Voix J. A Novel Piezoelectric Energy Harvester for Earcanal Dynamic Motion Exploitation Using a Bistable Resonator Cycled by Coupled Hydraulic Valves Made of Collapsed Flexible Tubes. Micromachines 2024; 15:415. [PMID: 38542662 PMCID: PMC10972077 DOI: 10.3390/mi15030415]
Abstract
Scavenging energy from the earcanal's dynamic motion during jaw movements may be a practical way to enhance the battery autonomy of hearing aids. The main challenge is optimizing the amount of energy extracted while working with soft human tissues and the earcanal's restricted volume. This paper proposes a new energy harvester concept: a liquid-filled earplug which transfers energy outside the earcanal to a generator. The latter is composed of a hydraulic amplifier, two hydraulic cylinders that actuate a bistable resonator to raise the source frequency while driving an amplified piezoelectric transducer to generate electricity. The cycling of the resonator is achieved using two innovative flexible hydraulic valves based on the buckling of flexible tubes. A multiphysics-coupled model is established to determine the system operation requirements and to evaluate its theoretical performances. This model exhibits a theoretical energy conversion efficiency of 85%. The electromechanical performance of the resonator coupled to the piezoelectric transducer and the hydraulic behavior of the valves are experimentally investigated. The global model was updated using the experimental data to improve its predictability toward further optimization of the design. Moreover, the energy losses are identified to enhance the entire proposed design and improve the experimental energy conversion efficiency to 26%.
Affiliation(s)
- Tigran Avetissian
- Université du Québec-École de Technologie Supérieure, Montréal, QC H3C 1K3, Canada
- Fabien Formosa
- Laboratoire SYMME, Université Savoie Mont Blanc, 74940 Annecy, France
- Adrien Badel
- Laboratoire SYMME, Université Savoie Mont Blanc, 74940 Annecy, France
- Aidin Delnavaz
- Université du Québec-École de Technologie Supérieure, Montréal, QC H3C 1K3, Canada
- Jérémie Voix
- Université du Québec-École de Technologie Supérieure, Montréal, QC H3C 1K3, Canada
9
Rowe HP, Stipancic KL, Campbell TF, Yunusova Y, Green JR. The association between longitudinal declines in speech sound accuracy and speech intelligibility in speakers with amyotrophic lateral sclerosis. Clinical Linguistics & Phonetics 2024; 38:227-248. [PMID: 37122073 PMCID: PMC10613582 DOI: 10.1080/02699206.2023.2202297]
Abstract
The purpose of this study was to examine how neurodegeneration secondary to amyotrophic lateral sclerosis (ALS) impacts speech sound accuracy over time and how speech sound accuracy, in turn, is related to speech intelligibility. Twenty-one participants with ALS read the Bamboo Passage over multiple data collection sessions across several months. Phonemic and orthographic transcriptions were completed for all speech samples. The percentage of phonemes accurately produced was calculated across each phoneme, sound class (i.e. consonants versus vowels), and distinctive feature (i.e. features involved in Manner of Articulation, Place of Articulation, Laryngeal Voicing, Tongue Height, and Tongue Advancement). Intelligibility was determined by calculating the percentage of words correctly transcribed orthographically by naive listeners. Linear mixed effects models were conducted to assess the decline of each distinctive feature over time and its impact on intelligibility. The results demonstrated that overall phonemic production accuracy had a nonlinear relationship with speech intelligibility and that a subset of features (i.e. those dependent on precise lingual and labial constriction and/or extensive lingual and labial movement) were more important for intelligibility and were more impacted over time than other features. Furthermore, findings revealed that consonants were more strongly associated with intelligibility than vowels, but consonants did not significantly differ from vowels in their decline over time. These findings have the potential to (1) strengthen mechanistic understanding of the physiological constraints imposed by neuronal degeneration on speech production and (2) inform the timing and selection of treatment and assessment targets for individuals with ALS.
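Both outcome measures above reduce to percent-correct scores. A toy sketch, using a naive position-by-position comparison rather than the sequence alignment a real phonemic analysis would require:

```python
def percent_correct(produced, target):
    """Percent of target units reproduced, compared position by position.

    (Real phonemic scoring aligns the two transcriptions before comparing;
    this simplified sketch assumes equal-length, pre-aligned sequences.)
    """
    matches = sum(p == t for p, t in zip(produced, target))
    return 100.0 * matches / len(target)

# Target /b æ m b u/ produced with the second stop devoiced to /p/:
accuracy = percent_correct(["b", "æ", "m", "p", "u"], ["b", "æ", "m", "b", "u"])
```

Here `accuracy` is 80.0: four of five target phonemes were produced accurately.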
Affiliation(s)
- Hannah P Rowe
- Department of Rehabilitation Sciences, MGH Institute of Health Professions, Boston, Massachusetts, USA
- Kaila L Stipancic
- Department of Communicative Disorders and Sciences, The State University of New York, Buffalo, New York, USA
- Thomas F Campbell
- Callier Center for Communication Disorders, University of Texas, Dallas, Texas, USA
- Yana Yunusova
- Department of Speech-Language Pathology and Rehabilitation Sciences Institute, University of Toronto, Toronto, Ontario, Canada
- Hurvitz Brain Sciences Research Program, Sunnybrook Research Institute, Toronto, Ontario, Canada
- KITE Research Center, Toronto Rehabilitation Institute, Toronto, Ontario, Canada
- Jordan R Green
- Department of Rehabilitation Sciences, MGH Institute of Health Professions, Boston, Massachusetts, USA
10
Gutz SE, Maffei MF, Green JR. Feedback From Automatic Speech Recognition to Elicit Clear Speech in Healthy Speakers. American Journal of Speech-Language Pathology 2023; 32:2940-2959. [PMID: 37824377 PMCID: PMC10721250 DOI: 10.1044/2023_ajslp-23-00030]
Abstract
PURPOSE This study assessed the effectiveness of feedback generated by automatic speech recognition (ASR) for eliciting clear speech from young, healthy individuals. As a preliminary step toward exploring a novel method for eliciting clear speech in patients with dysarthria, we investigated the effects of ASR feedback in healthy controls. If successful, ASR feedback has the potential to facilitate independent, at-home clear speech practice. METHOD Twenty-three healthy control speakers (ages 23-40 years) read sentences aloud in three speaking modes: Habitual, Clear (over-enunciated), and in response to ASR feedback (ASR). In the ASR condition, we used Mozilla DeepSpeech to transcribe speech samples and provide participants with a value indicating the accuracy of the ASR's transcription. For speakers who achieved sufficiently high ASR accuracy, noise was added to their speech at a participant-specific signal-to-noise ratio to ensure that each participant had to over-enunciate to achieve high ASR accuracy. RESULTS Compared to habitual speech, speech produced in the ASR and Clear conditions was clearer, as rated by speech-language pathologists, and more intelligible, per speech-language pathologist transcriptions. Speech in the Clear and ASR conditions aligned on several acoustic measures, particularly those associated with increased vowel distinctiveness and decreased speaking rate. However, ASR accuracy, intelligibility, and clarity were each correlated with different speech features, which may have implications for how people change their speech for ASR feedback. CONCLUSIONS ASR successfully elicited outcomes similar to clear speech in healthy speakers. Future work should investigate its efficacy in eliciting clear speech in people with dysarthria.
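The participant-specific noise step can be sketched as scaling white noise so the mix hits a target signal-to-noise ratio. Plain sample lists stand in for real audio here; the study's actual mixing code is not reproduced:

```python
import math
import random

def add_noise_at_snr(signal, snr_db, rng=None):
    """Return signal + white noise scaled so the mix has the requested SNR (dB)."""
    rng = rng or random.Random(0)
    p_signal = sum(s * s for s in signal) / len(signal)
    noise = [rng.gauss(0.0, 1.0) for _ in signal]
    p_noise = sum(n * n for n in noise) / len(noise)
    # Choose `scale` so 10*log10(p_signal / (scale**2 * p_noise)) == snr_db.
    scale = math.sqrt(p_signal / (p_noise * 10 ** (snr_db / 10)))
    return [s + scale * n for s, n in zip(signal, noise)]
```

Lowering `snr_db` per participant makes the ASR task harder, which is the lever the study used to force over-enunciation.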
Affiliation(s)
- Sarah E. Gutz
- Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA
- Marc F. Maffei
- Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA
- Jordan R. Green
- Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA
11
Stipancic KL, Wilding G, Tjaden K. Lexical Characteristics of the Speech Intelligibility Test: Effects on Transcription Intelligibility for Speakers With Multiple Sclerosis and Parkinson's Disease. Journal of Speech, Language, and Hearing Research 2023; 66:3115-3131. [PMID: 36931064 PMCID: PMC10555462 DOI: 10.1044/2023_jslhr-22-00279]
Abstract
PURPOSE Lexical characteristics of speech stimuli can significantly impact intelligibility. However, lexical characteristics of the widely used Speech Intelligibility Test (SIT) are unknown. We aimed to (a) define variation in neighborhood density, word frequency, grammatical word class, and type-token ratio across a large corpus of SIT sentences and tests and (b) determine the relationship of lexical characteristics to speech intelligibility in speakers with multiple sclerosis (MS), Parkinson's disease (PD), and neurologically healthy controls. METHOD Using an extant database of 92 speakers (32 controls, 30 speakers with MS, and 30 speakers with PD), percent correct intelligibility scores were obtained for the SIT. Neighborhood density, word frequency, word class, and type-token ratio were calculated and summed for each of the 11 sentences of each SIT test. The distribution of each characteristic across SIT sentences and tests was examined. Linear mixed-effects models were performed to assess the relationship between intelligibility and the lexical characteristics. RESULTS There was large variability in the distribution of lexical characteristics across this large corpus of SIT sentences and tests. Modeling revealed a relationship between intelligibility and the lexical characteristics, with word frequency and word class significantly contributing to the model. CONCLUSIONS Three primary findings emerged: (a) There was considerable variability in lexical characteristics both within and across the large corpus of SIT tests; (b) there was not a robust association between intelligibility and the lexical characteristics; and (c) findings from a study demonstrating an effect of neighborhood density and word frequency on intelligibility were replicated. Clinical and research implications of the findings are discussed, and three exemplar SIT tests systematically controlling for neighborhood density and word frequency are provided.
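Of the four lexical characteristics examined, type-token ratio is the simplest to compute directly from a sentence. A minimal sketch (whitespace tokenization is an assumption; the study's tokenization rules are not specified here):

```python
def type_token_ratio(sentence):
    """Lexical diversity: distinct word forms divided by total words."""
    words = sentence.lower().split()
    return len(set(words)) / len(words)
```

For example, "The cat saw the dog" has five tokens but only four types ("the" repeats), giving a ratio of 0.8; the other characteristics (neighborhood density, word frequency, word class) require external lexical databases.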
Affiliation(s)
- Kaila L. Stipancic
- Department of Communicative Disorders and Sciences, University at Buffalo, The State University of New York
- Gregory Wilding
- Department of Biostatistics, University at Buffalo, The State University of New York
- Kris Tjaden
- Department of Communicative Disorders and Sciences, University at Buffalo, The State University of New York

12
Jaddoh A, Loizides F, Rana O. Interaction between people with dysarthria and speech recognition systems: A review. Assist Technol 2023; 35:330-338. [PMID: 35435810; DOI: 10.1080/10400435.2022.2061085]
Abstract
In recent years, rapid advancements have taken place in automatic speech recognition (ASR) systems and devices. Although ASR technologies have advanced, the accessibility of these novel interaction systems is underreported, and they may present difficulties for people with speech impairments. In this article, we attempt to identify gaps in current research on the interaction between people with dysarthria and ASR systems and devices. We cover the period from 2011, when Siri (the first and the leading commercial voice assistant) was launched, to 2020. The review employs an interaction framework in which each element (user, input, system, and output) contributes to the interaction process. To select the articles for review, we conducted a search of scientific databases and academic journals. A total of 36 studies met the inclusion criteria, which included use of the word error rate (WER) as a measurement for evaluating ASR systems. This review determines that challenges in interacting with ASR systems persist even in light of the most recent commercial technologies. Further, understanding of the entire interaction process remains limited; thus, to improve this interaction, the recent progress of ASR systems must be elucidated.
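The word error rate used as an inclusion criterion above is conventionally computed as word-level Levenshtein distance (substitutions + deletions + insertions) divided by the reference length. A minimal sketch; the example sentences are illustrative, not drawn from the reviewed studies.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level Levenshtein distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[-1][-1] / len(ref)

# One substitution out of three reference words -> WER = 1/3
print(word_error_rate("please call stella", "please call steve"))
```

Note that WER can exceed 1.0 when the recognizer inserts many spurious words, which matters when scoring severely dysarthric speech.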
Affiliation(s)
- Aisha Jaddoh
- School for Computer Science and Informatics, Cardiff University, Cardiff, UK
- Fernando Loizides
- School for Computer Science and Informatics, Cardiff University, Cardiff, UK
- Omer Rana
- School for Computer Science and Informatics, Cardiff University, Cardiff, UK

13
|
Donohue C, Gray LT, Anderson A, DiBiase L, Wymer JP, Plowman EK. Profiles of Dysarthria and Dysphagia in Individuals With Amyotrophic Lateral Sclerosis. J Speech Lang Hear Res 2023; 66:154-162. [PMID: 36525626; PMCID: PMC10023186; DOI: 10.1044/2022_jslhr-22-00312]
Abstract
PURPOSE While dysarthria and dysphagia are known bulbar manifestations of amyotrophic lateral sclerosis (ALS), the relative prevalence of speech and swallowing impairments and whether these bulbar symptoms emerge at the same time point or progress at similar rates is not yet clear. We, therefore, sought to determine the relative prevalence of speech and swallowing impairments in a cohort of individuals with ALS and to determine the impact of disease duration, severity, and onset type on bulbar impairments. METHOD Eighty-eight individuals with a confirmed diagnosis of ALS completed the ALS Functional Rating Scale-Revised (ALSFRS-R), underwent videofluoroscopy (VF), and completed the Sentence Intelligibility Test (SIT) during a single visit. Demographic variables including disease duration and onset type were also obtained from participants. Duplicate, independent, and blinded ratings were completed using the Dynamic Imaging Grade of Swallowing Toxicity (DIGEST) scale and SIT to index dysphagia (DIGEST ≥ 1) and dysarthria (< 96% intelligible and/or < 150 words per minute) status. Descriptive statistics, Pearson chi-squared tests, independent-samples t tests, and odds ratios were performed. RESULTS Dysphagia and dysarthria were instrumentally confirmed in 68% and 78% of individuals with ALS, respectively. Dysarthria and dysphagia were associated (p = .01), and bulbar impairment profile distributions in rank order included (a) dysphagia - dysarthria (59%, n = 52), (b) no dysphagia - dysarthria (19%, n = 17), (c) no dysphagia - no dysarthria (13%, n = 11), and (d) dysphagia - no dysarthria (9%, n = 8). Participants with dysphagia or dysarthria demonstrated 4.2 times higher odds of exhibiting a bulbar impairment in the other domain than participants with normal speech and swallowing (95% CI [1.5, 12.2]). There were no differences in ALSFRS-R total scores or disease duration across bulbar impairment profiles (p > .05). ALSFRS-R bulbar subscale scores were significantly lower in individuals with dysphagia versus no dysphagia (8.4 vs. 10.4, p < .0001) and dysarthria versus no dysarthria (8.5 vs. 10.9, p < .0001). Dysphagia and onset type (p = .003) and dysarthria and onset type were associated (p < .0001). CONCLUSIONS Over half of the individuals with ALS in this study demonstrated both dysphagia and dysarthria. Of those with only one bulbar impairment, speech was twice as likely to be the first bulbar symptom to degrade. Future studies are needed to confirm these findings and determine the longitudinal progression of bulbar impairments in this patient population.
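The reported odds ratio follows directly from the four profile counts in the abstract (52, 17, 11, and 8) arranged as a 2x2 table, with the confidence interval from the standard log-odds-ratio (Woolf) approximation. This sketch reproduces both reported values from those counts.

```python
import math

# 2x2 table from the reported bulbar-impairment profile counts:
#                  dysarthria   no dysarthria
# dysphagia            52              8
# no dysphagia         17             11
a, b, c, d = 52, 8, 17, 11

odds_ratio = (a * d) / (b * c)                      # cross-product ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
# -> OR = 4.2, 95% CI [1.5, 12.2], matching the abstract
```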
Affiliation(s)
- Cara Donohue
- Aerodigestive Research Core Laboratory, University of Florida, Gainesville
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville
- Breathing Research and Therapeutics Center, University of Florida, Gainesville
- Lauren Tabor Gray
- Aerodigestive Research Core Laboratory, University of Florida, Gainesville
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville
- Breathing Research and Therapeutics Center, University of Florida, Gainesville
- Center of Collaborative Research, NOVA Southeastern University, Fort Lauderdale, FL
- Amber Anderson
- Aerodigestive Research Core Laboratory, University of Florida, Gainesville
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville
- Lauren DiBiase
- Aerodigestive Research Core Laboratory, University of Florida, Gainesville
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville
- James P. Wymer
- Department of Neurology, University of Florida, Gainesville
- Emily K. Plowman
- Aerodigestive Research Core Laboratory, University of Florida, Gainesville
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville
- Breathing Research and Therapeutics Center, University of Florida, Gainesville
- Department of Neurology, University of Florida, Gainesville
- Department of Surgery, University of Florida, Gainesville

14
Decreased Speech Comprehension and Increased Vocal Efforts Among Healthcare Providers Using N95 Mask. Indian J Otolaryngol Head Neck Surg 2022; 75:159-164. [PMID: 36532232; PMCID: PMC9734841; DOI: 10.1007/s12070-022-03218-7]
Abstract
Aim: N95 masks are recommended for the healthcare providers (HCPs) taking care of patients with coronavirus disease 2019. However, the use of these masks hampers communication. We aimed to evaluate the effect of N95 masks on speech comprehension among listeners and vocal efforts (VEs) of the HCPs. Materials and Methods: This prospective study involved 50 HCPs. We used a single observer with normal hearing to assess the difficulty in comprehension, while VE was estimated in HCPs. The speech reception threshold (SRT), speech discrimination score (SDS), and VEs were evaluated initially without an N95 mask and then again with the HCPs wearing an N95 mask. Results: The use of masks resulted in a statistically significant increase in mean SRT [4.25 (1.65) dB] and VE [2.6 (0.69)], with a simultaneous decrease in mean SDS [19.2 (8.77)] (all p-values < 0.0001). Moreover, demographic parameters including age, sex, and profession were not associated with change in SRT, SDS, and VE (all p-values > 0.05). Conclusion: Although the use of N95 masks protects HCPs against viral infection, it results in decreased speech comprehension and increased VEs. Moreover, these issues are universal among HCPs and are applicable to the general public as well.
15
Gutz SE, Rowe HP, Tilton-Bolowsky VE, Green JR. Speaking with a KN95 face mask: a within-subjects study on speaker adaptation and strategies to improve intelligibility. Cogn Res Princ Implic 2022; 7:73. [PMID: 35907167; PMCID: PMC9339031; DOI: 10.1186/s41235-022-00423-4]
Abstract
Mask-wearing during the COVID-19 pandemic has prompted a growing interest in the functional impact of masks on speech and communication. Prior work has shown that masks dampen sound, impede visual communication cues, and reduce intelligibility. However, more work is needed to understand how speakers change their speech while wearing a mask and to identify strategies to overcome the impact of wearing a mask. Data were collected from 19 healthy adults during a single in-person session. We investigated the effects of wearing a KN95 mask on speech intelligibility, as judged by two speech-language pathologists, examined speech kinematics and acoustics associated with mask-wearing, and explored KN95 acoustic filtering. We then considered the efficacy of three speaking strategies to improve speech intelligibility: Loud, Clear, and Slow speech. To inform speaker strategy recommendations, we related findings to self-reported speaker effort. Results indicated that healthy speakers could compensate for the presence of a mask and achieve normal speech intelligibility. Additionally, we showed that speaking loudly or clearly, and to a lesser extent slowly, improved speech intelligibility. However, using these strategies may require increased physical and cognitive effort and should be used only when necessary. These results can inform recommendations for speakers wearing masks, particularly those with communication disorders (e.g., dysarthria) who may struggle to adapt to a mask but can respond to explicit instructions. Such recommendations may further help non-native speakers and those communicating in a noisy environment or with listeners with hearing loss.
Affiliation(s)
- Sarah E. Gutz
- Program in Speech and Hearing Bioscience and Technology, Harvard Medical School, Boston, MA USA
- Hannah P. Rowe
- Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Building 79/96, 2nd floor, 13th Street, Boston, MA 02129 USA
- Victoria E. Tilton-Bolowsky
- Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Building 79/96, 2nd floor, 13th Street, Boston, MA 02129 USA
- Jordan R. Green
- Program in Speech and Hearing Bioscience and Technology, Harvard Medical School, Boston, MA USA
- Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Building 79/96, 2nd floor, 13th Street, Boston, MA 02129 USA

16
Deep learning applications in telerehabilitation speech therapy scenarios. Comput Biol Med 2022; 148:105864. [PMID: 35853398; DOI: 10.1016/j.compbiomed.2022.105864]
Abstract
Nowadays, many application scenarios benefit from automatic speech recognition (ASR) technology. Within the field of speech therapy, ASR is in some cases exploited in the treatment of dysarthria with the aim of supporting articulation output. However, in the presence of atypical speech, standard ASR approaches do not provide reliable voice recognition results, due to two main issues: (i) the extreme intra- and inter-speaker variability of speech in the presence of impairments such as dysarthria; and (ii) the absence of dedicated corpora containing voice samples from users with a speech disability with which to train a state-of-the-art speech model, particularly in non-English languages. In this paper, we focus on isolated word recognition for native Italian speakers with dysarthria, and we exploit an existing mobile app to collect audio data from users with speech disorders while they perform articulation exercises for speech therapy purposes. Using these data, a convolutional neural network was trained to spot a small number of keywords within atypical speech, according to a speaker-dependent method. Finally, we discuss the benefits of the trained ASR system in tailored telerehabilitation contexts intended for patients with dysarthria who can follow treatment plans under the supervision of remote speech-language pathologists.
17
Gutz SE, Stipancic KL, Yunusova Y, Berry JD, Green JR. Validity of Off-the-Shelf Automatic Speech Recognition for Assessing Speech Intelligibility and Speech Severity in Speakers With Amyotrophic Lateral Sclerosis. J Speech Lang Hear Res 2022; 65:2128-2143. [PMID: 35623334; PMCID: PMC9567308; DOI: 10.1044/2022_jslhr-21-00589]
Abstract
PURPOSE There is increasing interest in using automatic speech recognition (ASR) systems to evaluate impairment severity or speech intelligibility in speakers with dysarthria. We assessed the clinical validity of one currently available off-the-shelf (OTS) ASR system (i.e., a Google Cloud ASR API) for indexing sentence-level speech intelligibility and impairment severity in individuals with amyotrophic lateral sclerosis (ALS), and we provided guidance for potential users of such systems in research and the clinic. METHOD Using speech samples collected from 52 individuals with ALS and 20 healthy control speakers, we compared word recognition rate (WRR) from the commercially available Google Cloud ASR API (Machine WRR) to clinician-provided judgments of impairment severity, as well as sentence intelligibility (Human WRR). We assessed the internal reliability of Machine and Human WRR by comparing the standard deviation of WRR across sentences to the minimally detectable change (MDC), a clinical benchmark that indicates whether results are within measurement error. We also evaluated Machine and Human WRR diagnostic accuracy for classifying speakers into clinically established categories. RESULTS Human WRR achieved better accuracy than Machine WRR when indexing speech severity, and, although related, Human and Machine WRR were not strongly correlated. When the speech signal was mixed with noise (noise-augmented ASR) to reduce a ceiling effect, Machine WRR performance improved. Internal reliability metrics were worse for Machine than Human WRR, particularly for typical and mildly impaired severity groups, although sentence length significantly impacted both Machine and Human WRRs. CONCLUSIONS Results indicated that the OTS ASR system was inadequate for early detection of speech impairment and grading overall speech severity. While Machine and Human WRR were correlated, ASR should not be used as a one-to-one proxy for transcription speech intelligibility or clinician severity ratings. Overall, findings suggested that the tested OTS ASR system, Google Cloud ASR, has limited utility for grading clinical speech impairment in speakers with ALS.
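The noise-augmented ASR condition above mixes noise into the speech signal before recognition to pull typical speakers off the ceiling. The study's exact augmentation procedure is not specified here; a generic sketch of mixing noise at a chosen signal-to-noise ratio (with a synthetic stand-in for the speech waveform) might look like:

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`, then add it."""
    noise = noise[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10 ** (snr_db / 10))  # desired noise power
    return speech + noise * np.sqrt(target_p_noise / p_noise)

rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # stand-in signal, 1 s at 16 kHz
noisy = mix_at_snr(speech, rng.standard_normal(16000), snr_db=10.0)
```

The mixed signal would then be submitted to the ASR system in place of the clean recording.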
Affiliation(s)
- Sarah E. Gutz
- Program in Speech and Hearing Bioscience and Technology, Harvard Medical School, Boston, MA
- Kaila L. Stipancic
- Department of Communicative Disorders and Sciences, University at Buffalo, NY
- Yana Yunusova
- Department of Speech-Language Pathology, University of Toronto, Ontario, Canada
- Hurvitz Brain Sciences Program, Sunnybrook Research Institute, Toronto, Ontario, Canada
- Toronto Rehabilitation Institute, University Health Network, Ontario, Canada
- James D. Berry
- Sean M. Healey and AMG Center for ALS, Massachusetts General Hospital, Boston
- Jordan R. Green
- Program in Speech and Hearing Bioscience and Technology, Harvard Medical School, Boston, MA
- Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA

18
Lévêque N, Slis A, Lancia L, Bruneteau G, Fougeron C. Acoustic Change Over Time in Spastic and/or Flaccid Dysarthria in Motor Neuron Diseases. J Speech Lang Hear Res 2022; 65:1767-1783. [PMID: 35412848; DOI: 10.1044/2022_jslhr-21-00434]
Abstract
PURPOSE This study aims to investigate acoustic change over time as a biomarker to differentiate among mixed spastic-flaccid dysarthria associated with amyotrophic lateral sclerosis (ALS), spastic dysarthria associated with primary lateral sclerosis (PLS), and flaccid dysarthria associated with spinal and bulbar muscular atrophy (SBMA), and to explore how these acoustic parameters are affected by dysarthria severity. METHOD Thirty-three ALS patients with mixed flaccid-spastic dysarthria, 17 PLS patients with pure spastic dysarthria, 18 SBMA patients with pure flaccid dysarthria, and 70 controls, all French speakers, were included in the study. Speakers produced vowel-glide sequences targeting different vocal tract shape changes. The mean and coefficient of variation of the total squared change of mel frequency cepstral coefficients were used to capture the degree and variability of acoustic changes linked to vocal tract modifications over time. Differences in duration of acoustic events were also measured. RESULTS All pathological groups showed significantly less acoustic change compared to controls, reflecting less acoustic contrast in sequences. Spastic and mixed spastic-flaccid dysarthric speakers showed smaller acoustic changes and slower sequence production compared to flaccid dysarthria. For dysarthria subtypes associated with a spastic component, reduced degree of acoustic change was also associated with dysarthria severity. CONCLUSIONS The acoustic parameters partially differentiated among the dysarthria subtypes in relation to motor neuron diseases. While similar acoustic patterns were found in spastic-flaccid and spastic dysarthria, crucial differences were found between these two subtypes relating to variability. The acoustic patterns were much more variable in ALS. This method forms a promising clinical tool as a diagnostic marker of articulatory impairment, even at a mild stage of dysarthria progression in all subtypes.
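The "total squared change" measure above can be sketched as the frame-to-frame squared difference of an MFCC matrix, summed over coefficients, with the mean capturing the degree of acoustic change and the coefficient of variation its variability. The exact formulation is assumed here, and the MFCC matrix below is synthetic rather than extracted from audio.

```python
import numpy as np

def total_squared_change(mfcc: np.ndarray):
    """mfcc: array of shape (n_coeffs, n_frames).
    Returns (mean, coefficient of variation) of the per-step squared change,
    where each step's change is summed over all coefficients."""
    tsc = np.sum(np.diff(mfcc, axis=1) ** 2, axis=0)  # one value per frame transition
    return tsc.mean(), tsc.std() / tsc.mean()

rng = np.random.default_rng(1)
mfcc = rng.standard_normal((13, 200))  # stand-in for MFCCs of a vowel-glide sequence
mean_tsc, cv_tsc = total_squared_change(mfcc)
print(mean_tsc, cv_tsc)
```

A larger mean indicates bigger spectral excursions between frames (more articulatory contrast); a larger coefficient of variation indicates less consistent sequence production over time.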
Affiliation(s)
- Nathalie Lévêque
- Laboratoire de Phonétique et de Phonologie, UMR 7018, CNRS/University Sorbonne-Nouvelle, Paris, France
- Assistance Publique - Hôpitaux de Paris, Department of Neurology, Hôpital Pitié-Salpêtrière, ALS Reference Center, Paris, France
- Anneke Slis
- Laboratoire de Phonétique et de Phonologie, UMR 7018, CNRS/University Sorbonne-Nouvelle, Paris, France
- Leonardo Lancia
- Laboratoire de Phonétique et de Phonologie, UMR 7018, CNRS/University Sorbonne-Nouvelle, Paris, France
- Gaëlle Bruneteau
- Assistance Publique - Hôpitaux de Paris, Department of Neurology, Hôpital Pitié-Salpêtrière, ALS Reference Center, Paris, France
- Cécile Fougeron
- Laboratoire de Phonétique et de Phonologie, UMR 7018, CNRS/University Sorbonne-Nouvelle, Paris, France

19
Vieira FG, Venugopalan S, Premasiri AS, McNally M, Jansen A, McCloskey K, Brenner MP, Perrin S. A machine-learning based objective measure for ALS disease severity. NPJ Digit Med 2022; 5:45. [PMID: 35396385; PMCID: PMC8993812; DOI: 10.1038/s41746-022-00588-8]
Abstract
Amyotrophic Lateral Sclerosis (ALS) disease severity is usually measured using the subjective, questionnaire-based revised ALS Functional Rating Scale (ALSFRS-R). Objective measures of disease severity would be powerful tools for evaluating real-world drug effectiveness, efficacy in clinical trials, and for identifying participants for cohort studies. We developed a machine learning (ML) based objective measure for ALS disease severity based on voice samples and accelerometer measurements from a four-year longitudinal dataset. 584 people living with ALS consented and carried out prescribed speaking and limb-based tasks. 542 participants contributed 5814 voice recordings, and 350 contributed 13,009 accelerometer samples, while simultaneously measuring ALSFRS-R scores. Using these data, we trained ML models to predict bulbar-related and limb-related ALSFRS-R scores. On the test set (n = 109 participants) the voice models achieved a multiclass AUC of 0.86 (95% CI, 0.85-0.88) on speech ALSFRS-R prediction, whereas the accelerometer models achieved a median multiclass AUC of 0.73 on 6 limb-related functions. The correlations across functions observed in self-reported ALSFRS-R scores were preserved in ML-derived scores. We used these models and self-reported ALSFRS-R scores to evaluate the real-world effects of edaravone, a drug approved for use in ALS. In the cohort of 54 test participants who received edaravone as part of their usual care, the ML-derived scores were consistent with the self-reported ALSFRS-R scores. At the individual level, the continuous ML-derived score can capture gradual changes that are absent in the integer ALSFRS-R scores. This demonstrates the value of these tools for assessing disease severity and, potentially, drug effects.
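The multiclass AUCs reported above are typically built from binary AUCs (often averaged one-vs-rest across ALSFRS-R score levels). A binary AUC can be computed directly as the Mann-Whitney probability that a randomly chosen positive case outranks a randomly chosen negative one; the specific averaging scheme used in the paper is not stated in this abstract. A minimal sketch with made-up scores:

```python
def binary_auc(scores, labels):
    """AUC as P(score of a positive > score of a negative), ties counted as 0.5
    (the Mann-Whitney U formulation of the ROC area)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly separated toy scores -> AUC = 1.0; chance-level ranking gives 0.5
print(binary_auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))
```

This rank-based view explains why a continuous model score can show an AUC well above chance even when individual predictions are miscalibrated.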
Affiliation(s)
- Maeve McNally
- ALS Therapy Development Institute, Watertown, MA, USA
- Aren Jansen
- Google Research, Google, Mountain View, CA, USA
- Michael P Brenner
- Google Research, Google, Mountain View, CA, USA
- School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Steven Perrin
- ALS Therapy Development Institute, Watertown, MA, USA
- Eledon Pharmaceuticals, Irvine, CA, USA

20
van Brenk F, Stipancic K, Kain A, Tjaden K. Intelligibility Across a Reading Passage: The Effect of Dysarthria and Cued Speaking Styles. Am J Speech Lang Pathol 2022; 31:390-408. [PMID: 34982941; PMCID: PMC9135029; DOI: 10.1044/2021_ajslp-21-00151]
Abstract
OBJECTIVE Reading a passage out loud is a commonly used task in the perceptual assessment of dysarthria. The extent to which perceptual characteristics remain unchanged or stable over the time course of a passage is largely unknown. This study investigated crowdsourced visual analogue scale (VAS) judgments of intelligibility across a reading passage as a function of cued speaking styles commonly used in treatment to maximize intelligibility. PATIENTS AND METHOD The Hunter passage was read aloud in habitual, slow, loud, and clear speaking styles by 16 speakers with Parkinson's disease (PD), 30 speakers with multiple sclerosis (MS), and 32 control speakers. VAS judgments of intelligibility from three fragments representing the beginning, middle, and end of the reading passage were obtained from 540 crowdsourced online listeners. RESULTS Overall passage intelligibility was reduced for the two clinical groups relative to the control group. All speaker groups exhibited intelligibility variation across the reading passage, with trends of increased intelligibility toward the end of the reading passage. For control speakers and speakers with PD, patterns of intelligibility variation across passage reading did not differ with speaking style. For the MS group, intelligibility variation across the passage was dependent on speaking style. CONCLUSIONS The presence of intelligibility variation within a reading passage warrants careful selection of speech materials in research and clinical practice. Results further indicate that the crowdsourced VAS rating paradigm is useful to document intelligibility in a reading passage for different cued speaking styles commonly used in treatment for dysarthria.
Affiliation(s)
- Frits van Brenk
- Department of Communicative Disorders and Sciences, University at Buffalo, NY
- Utrecht Institute of Linguistics OTS, Utrecht University, the Netherlands
- Kaila Stipancic
- Department of Communicative Disorders and Sciences, University at Buffalo, NY
- Alexander Kain
- Department of Pediatrics, Oregon Health & Science University, Portland
- Kris Tjaden
- Department of Communicative Disorders and Sciences, University at Buffalo, NY

21
Lehner K, Ziegler W. Indicators of Communication Limitation in Dysarthria and Their Relation to Auditory-Perceptual Speech Symptoms: Construct Validity of the KommPaS Web App. J Speech Lang Hear Res 2022; 65:22-42. [PMID: 34890213; DOI: 10.1044/2021_jslhr-21-00215]
Abstract
PURPOSE Despite extensive research into communication-related parameters in dysarthria, such as intelligibility, naturalness, and perceived listener effort, the existing evidence has not yet been translated into a clinically applicable, comprehensive, and valid diagnostic tool. This study addresses Communication-Related Parameters in Speech Disorders (KommPaS), a new web-based diagnostic instrument for measuring indices of communication limitation in individuals with dysarthria through online crowdsourcing. More specifically, it answers questions about the construct validity of KommPaS. In the first part, the interrelationships of the KommPaS variables intelligibility, naturalness, perceived listener effort, and speech rate were explored in order to draw a comprehensive picture of a patient's limitations and avoid the collection of redundant information. Second, the influences of motor speech symptoms on the KommPaS variables were studied in order to delineate the structural relationships between two complementary diagnostic perspectives. METHOD One hundred persons with dysarthria of different etiologies and varying degrees of severity were examined with KommPaS to obtain layperson-based data on communication-level parameters, and with the Bogenhausen Dysarthria Scale (BoDyS) to obtain expert-based, function-level data on dysarthria symptoms. The internal structure of the KommPaS variables and their dependence on the BoDyS variables were analyzed using structural equation modeling. RESULTS Despite high multicollinearity, all KommPaS variables were shown to provide complementary diagnostic information, and their mutual interconnections were delineated in a path graph model. Regarding the influence of the BoDyS scales on the KommPaS variables, separate linear regression models revealed plausible predictor sets. A complete path model of KommPaS and BoDyS variables was developed to map the complex interplay between variables at the functional and the communication levels of dysarthria assessment. CONCLUSION In validating a new clinical tool for the diagnostics of communication limitations in dysarthria, this study is the first to draw a comprehensive picture of how auditory-perceptual characteristics of dysarthria interact at the levels of expert-based functional and layperson-based communicative assessments.
Affiliation(s)
- Katharina Lehner
- Clinical Neuropsychology Research Group, Institute for Phonetics and Speech Processing, Ludwig-Maximilians-University Munich, Germany
- Wolfram Ziegler
- Clinical Neuropsychology Research Group, Institute for Phonetics and Speech Processing, Ludwig-Maximilians-University Munich, Germany

22
Stipancic KL, Palmer KM, Rowe HP, Yunusova Y, Berry JD, Green JR. "You Say Severe, I Say Mild": Toward an Empirical Classification of Dysarthria Severity. J Speech Lang Hear Res 2021; 64:4718-4735. [PMID: 34762814; PMCID: PMC9150682; DOI: 10.1044/2021_jslhr-21-00197]
Abstract
PURPOSE The main purpose of this study was to create an empirical classification system for speech severity in patients with dysarthria secondary to amyotrophic lateral sclerosis (ALS) by exploring the reliability and validity of speech-language pathologists' (SLPs') ratings of dysarthric speech. METHOD Ten SLPs listened to speech samples from 52 speakers with ALS and 20 healthy control speakers. SLPs were asked to rate the speech severity of the speakers using five response options: normal, mild, moderate, severe, and profound. Four severity-surrogate measures were also calculated: SLPs transcribed the speech samples for the calculation of speech intelligibility and rated the effort it took to understand the speakers on a visual analog scale. In addition, speaking rate and intelligible speaking rate were calculated for each speaker. Intrarater and interrater reliability were calculated for each measure. We explored the validity of clinician-based severity ratings by comparing them to the severity-surrogate measures. Receiver operating characteristic (ROC) curves were conducted to create optimal cutoff points for defining dysarthria severity categories. RESULTS Intrarater and interrater reliability for the clinician-based severity ratings were excellent and were comparable to reliability for the severity-surrogate measures explored. Clinician severity ratings were strongly associated with all severity-surrogate measures, suggesting strong construct validity. We also provided a range of values for each severity-surrogate measure within each severity category based on the cutoff points obtained from the ROC analyses. CONCLUSIONS Clinician severity ratings of dysarthric speech are reliable and valid. We discuss the underlying challenges that arise when selecting a stratification measure and offer recommendations for a classification scheme when stratifying patients and research participants into speech severity categories.
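ROC-derived cutoff points like those described above are commonly chosen by maximizing Youden's J (sensitivity + specificity - 1) over candidate thresholds; whether this study used that exact criterion is not stated in the abstract, so the following is a hedged sketch with made-up scores and labels.

```python
def youden_cutoff(scores, labels):
    """Return (threshold, J) maximizing Youden's J = sensitivity + specificity - 1,
    a common rule for turning a continuous measure into severity categories."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        sens = sum(s >= t for s in pos) / len(pos)  # positives at/above threshold
        spec = sum(s < t for s in neg) / len(neg)   # negatives below threshold
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

scores = [0.2, 0.3, 0.35, 0.6, 0.7, 0.9]  # hypothetical severity-surrogate values
labels = [0, 0, 1, 0, 1, 1]               # 1 = more severe category
print(youden_cutoff(scores, labels))      # best threshold here is 0.35
```

Repeating this one boundary at a time (normal vs. mild, mild vs. moderate, and so on) yields the kind of per-category cutoff ranges the study reports.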
Affiliation(s)
- Kaila L. Stipancic, Department of Communicative Disorders and Sciences, University at Buffalo, NY
- Kira M. Palmer, Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA
- Hannah P. Rowe, Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA
- Yana Yunusova, Department of Speech-Language Pathology, University of Toronto, Ontario, Canada
- James D. Berry, Sean M. Healey and AMG Center for ALS, Massachusetts General Hospital, Boston
- Jordan R. Green, Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA
23
Lee J, Madhavan A, Krajewski E, Lingenfelter S. Assessment of dysarthria and dysphagia in patients with amyotrophic lateral sclerosis: Review of the current evidence. Muscle Nerve 2021; 64:520-531. [PMID: 34296769] [DOI: 10.1002/mus.27361]
Abstract
Bulbar dysfunction is a common presentation of amyotrophic lateral sclerosis (ALS) and significantly impacts quality of life of people with ALS (PALS). The current paper reviews measurements of dysarthria and dysphagia specific to ALS to identify efficient and valid assessment measures. Using such assessment measures will lead to improved management of bulbar dysfunction in ALS. Measures reviewed for dysarthria in PALS are organized into three categories: acoustic, kinematic, and strength. A set of criteria is used to evaluate the effectiveness of the measures' identification of speech impairments, measurement of functional verbal communication, and clinical applicability. Assessments reviewed for dysphagia in PALS are organized into six categories: patient-reported outcomes, dietary intake, pulmonary function and airway defense capacity, bulbar function, dysphagia/aspiration screens, and instrumental evaluations. Measurements that have good potential for clinical use are highlighted in both topic areas. Additionally, areas of improvement for clinical practice and research are identified and discussed. In general, no single speech measure fulfilled all the criteria, although a few measures were identified as potential diagnostic tools. Similarly, few objective measures that were validated and replicated with large sample sizes were found for diagnosis of dysphagia in PALS. Importantly, clinical applicability was found to be limited; thus, a collaborative team focused on implementation science would be helpful to improve the clinical uptake of assessments. Overall, the review highlights the need for further development of clinically viable and efficient measurements that use a multidisciplinary approach.
Affiliation(s)
- Jimin Lee, Department of Communication Sciences and Disorders, The Pennsylvania State University, University Park, Pennsylvania, USA
- Aarthi Madhavan, Department of Communication Sciences and Disorders, The Pennsylvania State University, University Park, Pennsylvania, USA
- Elizabeth Krajewski, Department of Communication Sciences and Disorders, The Pennsylvania State University, University Park, Pennsylvania, USA
- Sydney Lingenfelter, Department of Communication Sciences and Disorders, The Pennsylvania State University, University Park, Pennsylvania, USA
24
Lehner K, Ziegler W. The Impact of Lexical and Articulatory Factors in the Automatic Selection of Test Materials for a Web-Based Assessment of Intelligibility in Dysarthria. J Speech Lang Hear Res 2021; 64:2196-2212. [PMID: 33647214] [DOI: 10.1044/2020_jslhr-20-00267]
Abstract
Purpose The clinical assessment of intelligibility must be based on a large repository and extensive variation of test materials, to render test stimuli unpredictable and thereby avoid expectancies and familiarity effects in the listeners. At the same time, it is essential that test materials are systematically controlled for factors influencing intelligibility. This study investigated the impact of lexical and articulatory characteristics of quasirandomly selected target words on intelligibility in a large sample of dysarthric speakers under clinical examination conditions. Method Using the clinical assessment tool KommPaS, a total of 2,700 sentence-embedded target words, quasirandomly drawn from a large corpus, were spoken by a group of 100 dysarthric patients and later transcribed by listeners recruited via online crowdsourcing. Transcription accuracy was analyzed for influences of lexical frequency, phonological neighborhood structure, articulatory complexity, lexical familiarity, word class, stimulus length, and embedding position. Classification and regression analyses were performed using random forests and generalized linear mixed models. Results Across all degrees of severity, target words with higher frequency, fewer and less frequent phonological neighbors, higher articulatory complexity, and higher lexical familiarity received significantly higher intelligibility scores. In addition, target words were more challenging sentence-initially than in medial or final position. Stimulus length had mixed effects; word length and word class had no effect. Conclusions In a large-scale clinical examination of intelligibility in speakers with dysarthria, several well-established influences of lexical and articulatory parameters could be replicated, and the roles of new factors were discussed. This study provides clues about how experimental rigor can be combined with clinical requirements in the diagnostics of communication impairment in patients with dysarthria.
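The crowdsourced scoring step described above (listeners transcribe sentence-embedded target words; accuracy is tallied per word) can be sketched as follows. The normalization rules and example data are assumptions for illustration only, not the KommPaS scoring pipeline.

```python
# Illustrative sketch of word-level transcription scoring for crowdsourced
# intelligibility testing: each listener response is normalized (case and
# punctuation removed) and compared with the target word; accuracy is the
# proportion of listeners who transcribed the target correctly.

import string

def normalize(word):
    """Lowercase and strip punctuation so 'Fenster.' matches 'fenster'."""
    return word.lower().strip(string.punctuation + string.whitespace)

def transcription_accuracy(target, responses):
    """Fraction of listener responses that match the target word."""
    hits = sum(1 for r in responses if normalize(r) == normalize(target))
    return hits / len(responses)

# Hypothetical target word and four crowdsourced transcriptions.
acc = transcription_accuracy("Fenster", ["fenster", "Fenster.", "Felder", "fester"])
```

Per-word accuracies of this kind would then feed the random-forest and mixed-model analyses of lexical and articulatory predictors that the study reports.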
Affiliation(s)
- Katharina Lehner, Clinical Neuropsychology Research Group, Institute of Phonetics and Speech Processing, Ludwig Maximilians University, Munich, Germany
- Wolfram Ziegler, Clinical Neuropsychology Research Group, Institute of Phonetics and Speech Processing, Ludwig Maximilians University, Munich, Germany
25
Stipancic KL, Yunusova Y, Campbell TF, Wang J, Berry JD, Green JR. Two Distinct Clinical Phenotypes of Bulbar Motor Impairment in Amyotrophic Lateral Sclerosis. Front Neurol 2021; 12:664713. [PMID: 34220673] [PMCID: PMC8244731] [DOI: 10.3389/fneur.2021.664713]
Abstract
Objective: Understanding clinical variants of motor neuron diseases such as amyotrophic lateral sclerosis (ALS) is critical for discovering disease mechanisms and across-patient differences in therapeutic response. The current work describes two clinical subgroups of patients with ALS that, despite similar levels of bulbar motor involvement, have disparate clinical and functional speech presentations. Methods: Participants included 47 healthy control speakers and 126 speakers with ALS. Participants with ALS were stratified into three clinical subgroups (i.e., bulbar asymptomatic, bulbar symptomatic high speech function, and bulbar symptomatic low speech function) based on clinical metrics of bulbar motor impairment. Acoustic and lip kinematic analytics were derived from each participant's recordings of reading samples and a rapid syllable repetition task. Group differences were reported on clinical scales of ALS and bulbar motor severity and on multiple speech measures. Results: The high and low speech-function subgroups were found to be similar on many of the dependent measures explored. However, these two groups were differentiated on the basis of an acoustic measure used as a proxy for tongue movement. Conclusion: This study supports the hypothesis that high and low speech-function subgroups do not differ solely in overall severity, but rather, constitute two distinct bulbar motor phenotypes. The findings suggest that the low speech-function group exhibited more global involvement of the bulbar muscles than the high speech-function group that had relatively intact lingual function. This work has implications for clinical measures used to grade bulbar motor involvement, suggesting that a single bulbar measure is inadequate for capturing differences among phenotypes.
Affiliation(s)
- Kaila L Stipancic, Speech and Feeding Disorders Lab, Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA, United States; UB Motor Speech Disorders Lab, Department of Communicative Disorders and Sciences, University at Buffalo, Buffalo, NY, United States
- Yana Yunusova, Speech Production Lab, Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Thomas F Campbell, Speech, Language, Cognition, and Communication Lab, Department of Communication Sciences and Disorders, University of Texas at Dallas, Dallas, TX, United States
- Jun Wang, Speech Disorders and Technology Lab, Department of Communication Sciences and Disorders, University of Texas at Austin, Austin, TX, United States
- James D Berry, Sean M. Healey and AMG Center for ALS, Massachusetts General Hospital, Boston, MA, United States
- Jordan R Green, Speech and Feeding Disorders Lab, Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA, United States
26
Ma EPM, Tse MMS, Momenian M, Pu D, Chen FF. The Effects of Dysphonic Voice on Speech Intelligibility in Cantonese-Speaking Adults. J Speech Lang Hear Res 2021; 64:16-29. [PMID: 33306439] [DOI: 10.1044/2020_jslhr-19-00190]
Abstract
Purpose This study aims to investigate the effects of dysphonic voice on speech intelligibility in Cantonese-speaking adults. Method Speech recordings from three speakers with dysphonia secondary to phonotrauma and three speakers with healthy voices were presented to 30 healthy listeners (15 men and 15 women; M age = 22.7 years) under six listening conditions: five noise conditions (signal-to-noise ratio [SNR] -10, SNR -5, SNR 0, SNR +5, and SNR +10) and a quiet condition. The speech recordings were composed of sentences with five different lengths: five syllables, eight syllables, 10 syllables, 12 syllables, and 15 syllables. The effects of speaker's voice quality, background noise condition, and sentence length on speech intelligibility were examined. Speech intelligibility scores were calculated based on the listener's correct judgment of the number of syllables heard as a percentage of the total syllables in each stimulus. Results Dysphonic voices, as compared to healthy voices, were significantly more affected by background noise. Speech presented with dysphonic voices was significantly less intelligible than speech presented with healthy voices under unfavorable SNR conditions (SNR -10, SNR -5, and SNR 0 conditions). However, there was insufficient evidence to suggest an effect of sentence length on intelligibility, regardless of the speaker's voice quality or the level of background noise. Conclusions This study provides empirical data on the impacts of dysphonic voice on speech intelligibility in Cantonese speakers. The findings highlight the importance of educating the public about the impacts of voice quality and background noise on speech intelligibility and the potential of compensatory strategies that specifically address these barriers. Supplemental Material https://doi.org/10.23641/asha.13335926.
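The scoring rule in this abstract (correctly judged syllables as a percentage of total stimulus syllables) is simple enough to sketch directly; the per-sentence counts below are invented examples, not study data.

```python
# Illustrative sketch of syllable-count intelligibility scoring:
# intelligibility = correctly judged syllables / total syllables * 100.

def intelligibility_percent(correct_syllables, total_syllables):
    """Percent of stimulus syllables the listener judged correctly."""
    return 100.0 * correct_syllables / total_syllables

# Pooled score over hypothetical stimuli of the five lengths used
# in the study (5, 8, 10, 12, and 15 syllables): (correct, total) pairs.
stimuli = [(4, 5), (7, 8), (8, 10), (9, 12), (12, 15)]
pooled = intelligibility_percent(
    sum(c for c, _ in stimuli),
    sum(t for _, t in stimuli),
)
```

Pooling over total syllables (rather than averaging per-sentence percentages) weights longer stimuli proportionally, which matches a syllables-out-of-total scoring rule.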
Affiliation(s)
- Estella P-M Ma, Voice Research Laboratory, Faculty of Education, The University of Hong Kong
- Mandy M-S Tse, Voice Research Laboratory, Faculty of Education, The University of Hong Kong
- Mohammad Momenian, Laboratory for Communication Science, Faculty of Education, The University of Hong Kong
- Dai Pu, Department of Surgery, LKS Faculty of Medicine, The University of Hong Kong
- Felix F Chen, Department of Electrical and Electronic Engineering, South University of Science and Technology of China, Shenzhen
27
Bottalico P, Murgia S, Puglisi GE, Astolfi A, Kirk KI. Effect of masks on speech intelligibility in auralized classrooms. J Acoust Soc Am 2020; 148:2878. [PMID: 33261397] [PMCID: PMC7857496] [DOI: 10.1121/10.0002450]
Abstract
This study explored the effects of wearing face masks on classroom communication. The effects of three different types of face masks (fabric, surgical, and N95 masks) on speech intelligibility (SI) presented to college students in auralized classrooms were evaluated. To simulate realistic classroom conditions, speech stimuli were presented in the presence of speech-shaped noise with a signal-to-noise ratio of +3 dB under two different reverberation times (0.4 s and 3.1 s). The use of fabric masks yielded a significantly greater reduction in SI compared to the other masks. Therefore, surgical masks or N95 masks are recommended in teaching environments.
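Setting a fixed +3 dB signal-to-noise ratio, as in the listening conditions above, amounts to rescaling the noise so that 10·log10(P_signal / P_noise) hits the target before mixing. The sketch below illustrates that scaling with toy waveforms; it is an assumption-laden illustration, not the study's auralization procedure, and the signals are synthetic stand-ins rather than speech-shaped noise or real recordings.

```python
# Illustrative sketch of mixing a signal with noise at a target SNR (dB):
# the noise is rescaled so 20*log10(rms_signal / rms_noise) = snr_db.

import math

def rms(x):
    """Root-mean-square amplitude of a sample sequence."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def mix_at_snr(signal, noise, snr_db):
    """Scale noise to the requested SNR relative to signal, then add."""
    gain = rms(signal) / (rms(noise) * 10 ** (snr_db / 20))
    scaled = [gain * v for v in noise]
    return [s + n for s, n in zip(signal, scaled)], scaled

# Toy stand-ins: a 440 Hz tone at 8 kHz and deterministic pseudo-noise.
signal = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
noise = [((t * 1103515245 + 12345) % 65536 / 32768 - 1) for t in range(8000)]
mixed, scaled = mix_at_snr(signal, noise, 3.0)

# Sanity check: the achieved SNR should equal the requested 3 dB.
snr_check = 20 * math.log10(rms(signal) / rms(scaled))
```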
Affiliation(s)
- Pasquale Bottalico, Department of Speech and Hearing Science, University of Illinois, Urbana-Champaign, Champaign, Illinois 61820, USA
- Silvia Murgia, Department of Speech and Hearing Science, University of Illinois, Urbana-Champaign, Champaign, Illinois 61820, USA
- Karen Iler Kirk, Department of Speech and Hearing Science, University of Illinois, Urbana-Champaign, Champaign, Illinois 61820, USA
28
Chiaramonte R, Bonfiglio M. Acoustic analysis of voice in bulbar amyotrophic lateral sclerosis: a systematic review and meta-analysis of studies. Logoped Phoniatr Vocol 2019; 45:151-163. [DOI: 10.1080/14015439.2019.1687748]
Affiliation(s)
- Rita Chiaramonte, Department of Physical Medicine and Rehabilitation, University of Catania, Catania, Italy
- Marco Bonfiglio, Department for Health Activities, ASP Siracusa, Siracusa, Italy