1
Awan SN, Bahr R, Watts S, Boyer M, Budinsky R, Bensoussan Y. Validity of Acoustic Measures Obtained Using Various Recording Methods Including Smartphones With and Without Headset Microphones. J Speech Lang Hear Res 2024; 67:1712-1730. PMID: 38749007. DOI: 10.1044/2024_jslhr-23-00759.
Abstract
PURPOSE: The goal of this study was to assess various recording methods, including combinations of high- versus low-cost microphones, recording interfaces, and smartphones, in terms of their ability to produce commonly used time- and spectral-based voice measurements.
METHOD: Twenty-four vowel samples representing a diversity of voice quality deviations and severities from a wide age range of male and female speakers were played via a head-and-thorax model and recorded using a high-cost, research-standard GRAS 40AF (GRAS Sound & Vibration) microphone and amplification system. Additional recordings were made using various combinations of headset microphones (AKG C555 L [AKG Acoustics GmbH], Shure SM35-XLR [Shure Incorporated], AVID AE-36 [AVID Products, Inc.]) and audio interfaces (Focusrite Scarlett 2i2 [Focusrite Audio Engineering Ltd.] and PC, Focusrite and smartphone, smartphone via a TRRS adapter), as well as smartphones used directly (Apple iPhone 13 Pro, Google Pixel 6) via their built-in microphones. The effect of background noise from four different room conditions was also evaluated. Vowel samples were analyzed for measures of fundamental frequency, perturbation, cepstral peak prominence, and spectral tilt (low vs. high spectral ratio).
RESULTS: A wide variety of recording methods, including smartphones with and without a low-cost headset microphone, can effectively track the wide range of acoustic characteristics in a diverse set of typical and disordered voice samples. Although significant differences in acoustic measures of voice may be observed, the presence of extremely strong correlations (rs > .90) with the recording standard implies a strong linear relationship between the results of different methods that may be used to predict and adjust any observed differences in measurement results.
CONCLUSION: Because handheld smartphone distance and positioning may be highly variable in actual clinical recording situations, a smartphone plus a low-cost headset microphone is recommended as an affordable recording method that controls mouth-to-microphone distance and positioning and leaves both hands free to manipulate the smartphone device.
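The strong linear relationships reported above (rs > .90) suggest that readings from a consumer device could be mapped onto the reference-microphone scale with a simple linear calibration. A minimal sketch in pure Python, with entirely hypothetical cepstral peak prominence (CPP) values standing in for real paired measurements:

```python
def fit_linear(x, y):
    """Ordinary least-squares fit y ≈ a*x + b (pure Python)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical paired CPP values (dB): reference microphone vs. smartphone.
reference_cpp = [4.1, 6.3, 8.0, 9.7, 11.2, 12.5]
smartphone_cpp = [3.0, 5.1, 6.9, 8.5, 10.1, 11.3]

a, b = fit_linear(smartphone_cpp, reference_cpp)

def calibrate(cpp_from_phone):
    """Map a smartphone CPP reading onto the reference-microphone scale."""
    return a * cpp_from_phone + b
```

Once fitted on paired recordings, `calibrate` adjusts any subsequent smartphone measurement toward the reference standard, which is exactly the kind of prediction-and-adjustment the authors describe.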
Affiliation(s)
- Shaheen N Awan: School of Communication Sciences and Disorders, University of Central Florida, Orlando
- Ruth Bahr: Department of Communication Sciences & Disorders, University of South Florida, Tampa
- Stephanie Watts: Department of Otolaryngology - Head & Neck Surgery, Morsani College of Medicine, University of South Florida, Tampa
- Micah Boyer: Department of Otolaryngology - Head & Neck Surgery, Morsani College of Medicine, University of South Florida, Tampa
- Robert Budinsky: Department of Communication Sciences & Disorders, University of South Florida, Tampa
- Yael Bensoussan: Department of Otolaryngology - Head & Neck Surgery, Morsani College of Medicine, University of South Florida, Tampa
2
Oreskovic J, Kaufman J, Fossat Y. Impact of Audio Data Compression on Feature Extraction for Vocal Biomarker Detection: Validation Study. JMIR Biomed Eng 2024; 9:e56246. PMID: 38875677. PMCID: PMC11058552. DOI: 10.2196/56246.
Abstract
BACKGROUND: Vocal biomarkers, derived from acoustic analysis of vocal characteristics, offer noninvasive avenues for medical screening, diagnostics, and monitoring. Previous research demonstrated the feasibility of predicting type 2 diabetes mellitus through acoustic analysis of smartphone-recorded speech. Building upon this work, this study explores the impact of audio data compression on acoustic vocal biomarker development, which is critical for broader applicability in health care.
OBJECTIVE: To analyze how common audio compression algorithms (MP3, M4A, and WMA), applied by 3 different conversion tools at 2 bitrates, affect features crucial for vocal biomarker detection.
METHODS: Uncompressed voice samples were converted into MP3, M4A, and WMA formats at 2 bitrates (320 and 128 kbps) with MediaHuman (MH) Audio Converter, WonderShare (WS) UniConverter, and FFmpeg. The data set comprised smartphone recordings from 505 participants, totaling 17,298 audio files. Participants recorded a fixed English sentence up to 6 times daily for up to 14 days. Feature extraction, including pitch, jitter, intensity, and Mel-frequency cepstral coefficients (MFCCs), was conducted using Python and Parselmouth. The Wilcoxon signed rank test with Bonferroni correction for multiple comparisons was used for statistical analysis.
RESULTS: Of the 36,970 audio files initially recorded from 505 participants, 17,298 recordings met the fixed-sentence criteria after screening. Differences between the audio conversion tools (MH, WS, and FFmpeg) were notable, affecting compression outcomes such as constant versus variable bitrates. Wilcoxon signed rank P values below the Bonferroni-corrected significance level indicated significant alterations due to compression, and the impacts of compression were feature specific across formats and bitrates. MH-converted files exhibited greater resilience than WS-converted files. Bitrate also influenced feature stability, with 38 cases affected uniquely by a single bitrate. Notably, voice features showed greater stability than MFCCs across conversion methods.
CONCLUSIONS: Compression effects were feature specific, with MH and FFmpeg showing greater resilience. Some features were consistently affected, emphasizing the importance of understanding feature resilience for diagnostic applications. For implementing vocal biomarkers in health care, identifying features that remain consistent through compression for data storage or transmission is valuable. Future research could broaden the scope to include additional features, real-time compression algorithms, and various recording methods. This study enhances understanding of audio compression's influence on voice features and MFCCs, underscoring the significance of feature stability when working with compressed audio data.
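The per-feature testing described above reduces to a small decision rule: each feature's Wilcoxon p-value is compared against alpha divided by the number of comparisons. A sketch with illustrative, made-up p-values (not the study's results):

```python
def bonferroni_flags(p_values, alpha=0.05):
    """Flag features whose compression-induced change is significant
    after Bonferroni correction for multiple comparisons."""
    corrected_alpha = alpha / len(p_values)
    return {name: p < corrected_alpha for name, p in p_values.items()}

# Hypothetical Wilcoxon signed-rank p-values: original vs. compressed features.
p_values = {
    "pitch_mean": 0.20,
    "jitter_local": 0.0004,
    "intensity_mean": 0.03,
    "mfcc_1": 1e-6,
}

flags = bonferroni_flags(p_values)  # corrected alpha = 0.05 / 4 = 0.0125
```

Note that `intensity_mean` (p = .03) would pass an uncorrected .05 threshold but is not flagged after correction, which is the point of controlling the family-wise error rate across many features.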
3
Kim JM, Kim MS, Choi SY, Ryu JS. Prediction of dysphagia aspiration through machine learning-based analysis of patients' postprandial voices. J Neuroeng Rehabil 2024; 21:43. PMID: 38555417. PMCID: PMC10981344. DOI: 10.1186/s12984-024-01329-6.
Abstract
BACKGROUND: Conventional diagnostic methods for dysphagia have limitations such as long wait times, radiation risks, and restricted evaluation, so voice-based diagnostic and monitoring technologies are needed. Based on the hypothesis that weakened muscle strength and the presence of aspiration affect vocal characteristics, this single-center, prospective study aimed to develop a machine-learning algorithm for predicting dysphagia status (normal or aspiration) by analyzing postprandial voice, limiting intake to 3 cc.
METHODS: Conducted from September 2021 to February 2023 at Seoul National University Bundang Hospital, the study included 198 participants aged 40 or older: 128 without suspected dysphagia and 70 with dysphagia-aspiration. Voice data from participants were collected and used to develop dysphagia prediction models using a Multi-Layer Perceptron (MLP) with MobileNet V3. Male-only, female-only, and combined models were constructed using 10-fold cross-validation. Through the inference process, the model probabilistically categorizes a new patient's voice as either normal or indicating possible aspiration.
RESULTS: The pre-trained models (mn40_as and mn30_as) outperformed the non-pre-trained models (mn4.0 and mn3.0). The best-performing model, mn30_as (pre-trained), demonstrated the following average AUC across 10 folds: combined model 0.8361 (95% CI 0.7667-0.9056; max 0.9541), male model 0.8010 (95% CI 0.6589-0.9432; max 1.000), and female model 0.7572 (95% CI 0.6578-0.8567; max 0.9779). For the female model, however, a slightly higher result was observed with mn4.0, which scored 0.7679 (95% CI 0.6426-0.8931; max 0.9722). The other models (pre-trained mn40_as; non-pre-trained mn4.0 and mn3.0) also achieved performance above 0.7 in most cases, and the highest fold-level performance for most models was approximately 0.9. ('mn' in the model names refers to MobileNet; the following number indicates the 'width_mult' parameter.)
CONCLUSIONS: This study used mel-spectrogram analysis and a MobileNetV3 model to predict dysphagia aspiration. The findings highlight the potential of voice analysis for dysphagia screening, diagnosis, and monitoring, aiming for non-invasive, safer, and more effective interventions.
TRIAL REGISTRATION: Approved by the IRB (No. B-2109-707-303) and registered on clinicaltrials.gov (ID: NCT05149976).
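A common way to summarize 10-fold results like those above is the mean AUC with a normal-approximation 95% confidence interval across folds (this is one standard summary; the paper does not state which CI method it used, and the fold scores below are illustrative, not the study's values):

```python
import math

def mean_ci(values, z=1.96):
    """Mean and normal-approximation 95% CI for per-fold AUC scores."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))  # sample SD
    half = z * sd / math.sqrt(n)  # half-width of the interval
    return mean, mean - half, mean + half

# Hypothetical per-fold AUCs from a 10-fold cross-validation run.
fold_aucs = [0.78, 0.81, 0.84, 0.80, 0.95, 0.79, 0.83, 0.86, 0.77, 0.93]
mean_auc, lo, hi = mean_ci(fold_aucs)
```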
Affiliation(s)
- Jung-Min Kim: Department of Health Science and Technology, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea; Department of Rehabilitation Medicine, Seoul National University Bundang Hospital, Seongnam, South Korea
- Min-Seop Kim: Department of Multimedia Engineering, Dongguk University, Seoul, South Korea
- Sun-Young Choi: Department of Rehabilitation Medicine, Seoul National University Bundang Hospital, Seongnam, South Korea
- Ju Seok Ryu: Department of Rehabilitation Medicine, Seoul National University Bundang Hospital, Seongnam, South Korea; Seoul National University College of Medicine, 82 Gumi-Ro 173 Beon-Gil, Bundang-Gu, Seongnam-Si, Gyeonggi-Do, 13620, South Korea
4
Cavalcanti JC, Eriksson A, Barbosa PA. Multiparametric Analysis of Speaking Fundamental Frequency in Genetically Related Speakers Using Different Speech Materials: Some Forensic Implications. J Voice 2024; 38:243.e11-243.e29. PMID: 34629229. DOI: 10.1016/j.jvoice.2021.08.013.
Abstract
OBJECTIVES: To assess the speaker-discriminatory potential of a set of fundamental frequency (f0) estimates in comparisons within identical twin pairs and across all speakers.
PARTICIPANTS: Twenty Brazilian Portuguese speakers of the same dialect, namely 10 male identical twin pairs aged 19-35, were recruited.
METHOD: Participants were recorded directly through professional microphones while taking part in a spontaneous dialogue over mobile phones. Acoustic measurements were performed on connected speech samples and on lengthened vowels (at least 160 ms long) produced during spontaneous speech.
RESULTS: The f0 baseline, central tendency, and extreme values were the most discriminatory estimates in both intra-twin-pair and cross-pair comparisons, and these also displayed the largest effect sizes. Overall, only three identical twin pairs were found to differ statistically in their f0 patterns in connected speech, but not in lengthened-vowel-based f0 metrics. Estimates of f0 variation and modulation were the least discriminatory across speakers, which may reflect the influence of speaking style and dialect on dynamic f0 patterns. Regarding system performance, the base value of f0 (f0 baseline) was the most reliable metric, displaying the lowest equal error rate (EER).
CONCLUSIONS: Although identical twins were very closely related in their f0 patterns, some pairs could still be differentiated acoustically, though only in connected speech. These findings reinforce the relevance of long-term f0 metrics for speaker comparison, with particular consideration of the f0 baseline. Furthermore, f0 differences across subjects appeared more pronounced in connected speech than in lengthened vowels.
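The equal error rate used above is the operating point at which the false accept rate (different-speaker trials accepted) equals the false reject rate (same-speaker trials rejected). A sketch with hypothetical similarity scores from an f0-based comparison system:

```python
def equal_error_rate(genuine, impostor):
    """Approximate the EER by sweeping a decision threshold over all scores.
    Scores at or above the threshold are accepted as 'same speaker'."""
    best = (1.0, None)  # (|FAR - FRR| gap, eer estimate)
    for t in sorted(set(genuine + impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
        gap = abs(far - frr)
        if gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

# Hypothetical similarity scores (higher = more similar speakers).
genuine_scores = [0.9, 0.8, 0.85, 0.7, 0.95]   # same-speaker trials
impostor_scores = [0.3, 0.5, 0.6, 0.75, 0.2]   # different-speaker trials

eer = equal_error_rate(genuine_scores, impostor_scores)
```

A lower EER means the score distributions of genuine and impostor trials overlap less, which is why the paper treats it as a reliability criterion for each f0 metric.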
Affiliation(s)
- Julio Cesar Cavalcanti: Department of Linguistics, Stockholm University, Stockholm, Sweden; Institute of Language Studies, Campinas State University, Campinas, São Paulo, Brazil
- Anders Eriksson: Department of Linguistics, Stockholm University, Stockholm, Sweden
- Plinio A Barbosa: Institute of Language Studies, Campinas State University, Campinas, São Paulo, Brazil
5
Calà F, Frassineti L, Sforza E, Onesimo R, D'Alatri L, Manfredi C, Lanata A, Zampino G. Artificial Intelligence Procedure for the Screening of Genetic Syndromes Based on Voice Characteristics. Bioengineering (Basel) 2023; 10:1375. PMID: 38135966. PMCID: PMC10741055. DOI: 10.3390/bioengineering10121375.
Abstract
Perceptual and statistical evidence has highlighted voice characteristics of individuals affected by genetic syndromes that differ from those of normophonic subjects. This paper proposes a procedure for systematically collecting such pathological voices and developing AI-based automated tools to support differential diagnosis. Guidelines on the most appropriate recording devices, vocal tasks, and acoustical parameters are provided to simplify and speed up the procedure and make it homogeneous and reproducible. The procedure was applied to a group of 56 subjects affected by Costello syndrome (CS), Down syndrome (DS), Noonan syndrome (NS), and Smith-Magenis syndrome (SMS). The database was divided into three groups: pediatric subjects (PS; individuals < 12 years of age), female adults (FA), and male adults (MA). In line with previous literature, the Kruskal-Wallis test and post hoc analysis with the Dunn-Bonferroni test revealed several significant differences in acoustical features, not only between healthy subjects and patients but also between syndromes within the PS, FA, and MA groups. Machine learning yielded a k-nearest-neighbors classifier with 86% accuracy for the PS group, a support vector machine (SVM) model with 77% accuracy for the FA group, and an SVM model with 84% accuracy for the MA group. These preliminary results suggest that the proposed method, based on acoustical analysis and AI, could be useful for effective, non-invasive automatic characterization of genetic syndromes. Clinicians could also benefit in the case of genetic syndromes that are extremely rare or that present multiple variants and facial phenotypes.
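The k-nearest-neighbors classifier reported for the pediatric group can be illustrated with a toy version: classify a voice by majority vote of its nearest labelled neighbors in acoustic-feature space. The (jitter, shimmer) feature values and labels below are invented for illustration, not taken from the study:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify a query feature vector by majority vote of its k nearest
    training samples, ranked by Euclidean distance."""
    ranked = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical (jitter %, shimmer %) features per labelled voice sample.
train = [
    ((0.5, 3.0), "control"),
    ((0.6, 3.2), "control"),
    ((0.4, 2.8), "control"),
    ((1.8, 7.5), "syndrome"),
    ((2.0, 8.0), "syndrome"),
    ((1.6, 7.0), "syndrome"),
]

label = knn_predict(train, (1.9, 7.8))
```

In practice the feature vectors would hold many acoustical parameters and would be normalized before distance computation, but the voting logic is the same.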
Affiliation(s)
- Federico Calà: Department of Information Engineering, University of Florence, 50139 Florence, Italy
- Lorenzo Frassineti: Department of Information Engineering, University of Florence, 50139 Florence, Italy; Department of Information Engineering, Università degli Studi di Pisa, 56122 Pisa, Italy
- Elisabetta Sforza: Department of Life Sciences and Public Health, Faculty of Medicine and Surgery, Catholic University of Sacred Heart, 00168 Rome, Italy
- Roberta Onesimo: Centre for Rare Diseases and Transition, Department of Woman and Child Health and Public Health, Fondazione Policlinico Universitario A. Gemelli IRCCS, 00168 Rome, Italy
- Lucia D'Alatri: Unit for Ear, Nose and Throat Medicine, Department of Neuroscience, Sensory Organs and Chest, Fondazione Policlinico Universitario A. Gemelli IRCCS, 00168 Rome, Italy
- Claudia Manfredi: Department of Information Engineering, University of Florence, 50139 Florence, Italy
- Antonio Lanata: Department of Information Engineering, University of Florence, 50139 Florence, Italy
- Giuseppe Zampino: Department of Life Sciences and Public Health, Faculty of Medicine and Surgery, Catholic University of Sacred Heart, 00168 Rome, Italy; Centre for Rare Diseases and Transition, Department of Woman and Child Health and Public Health, Fondazione Policlinico Universitario A. Gemelli IRCCS, 00168 Rome, Italy; European Reference Network for Rare Malformation Syndromes, Intellectual and Other Neurodevelopmental Disorders (ERN ITHACA)
6
Llico AF, Shanley SN, Friedman AD, Bamford LM, Roberts RM, McKenna VS. Comparison Between Custom Smartphone Acoustic Processing Algorithms and Praat in Healthy and Disordered Voices. J Voice 2023:S0892-1997(23)00241-2. PMID: 37690854. DOI: 10.1016/j.jvoice.2023.07.032.
Abstract
OBJECTIVE: To understand the relationship between temporal and spectral-based acoustic measures derived using Praat and custom smartphone algorithms across patients with a wide range of vocal pathologies.
METHODS: Voice samples were collected from 56 adults (11 vocally healthy, 45 dysphonic; aged 18-80 years) performing three speech tasks: (a) sustained vowel, (b) maximum phonation, and (c) the second and third sentences of the Rainbow Passage. Data were analyzed to extract mean fundamental frequency (fo), maximum phonation time (MPT), and cepstral peak prominence (CPP) using Praat and the custom smartphone algorithms. Linear regression models were calculated with and without outliers to determine relationships.
RESULTS: Statistically significant relationships were found between the smartphone algorithms and Praat for all three measures (r2 = 0.68-0.95 with outliers; r2 = 0.80-0.98 without outliers). An offset was found between CPP measures, with Praat values consistently lower than those computed by the smartphone app. Outlying data were identified and described; recordings from speakers with high levels of clinician-perceived dysphonia produced smartphone algorithm errors.
CONCLUSIONS: These results suggest that the proposed algorithms can provide measurements comparable to clinically derived values. However, clinicians should use caution when analyzing severely dysphonic voices, as the current algorithms show reduced accuracy for mean fo and MPT for these voice types.
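The r2 values reported above, with and without outliers, come from ordinary least-squares fits between paired measurements. A self-contained sketch with hypothetical paired CPP values, where the last pair plays the role of a severely dysphonic outlier:

```python
def r_squared(x, y):
    """Coefficient of determination for a least-squares line through (x, y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Hypothetical paired CPP values (dB): Praat vs. smartphone app.
# The last pair is a deliberate outlier (algorithm error on a severe voice).
praat = [5.0, 6.1, 7.2, 8.0, 9.1, 3.0]
app   = [6.2, 7.2, 8.4, 9.1, 10.3, 9.5]

r2_all = r_squared(app, praat)
r2_trimmed = r_squared(app[:-1], praat[:-1])  # outlier removed
```

Removing the outlier raises r2 sharply, mirroring the study's with- versus without-outliers comparison.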
Affiliation(s)
- Andres F Llico: Department of Biomedical Engineering, University of Cincinnati, Cincinnati, Ohio
- Savannah N Shanley: Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, Ohio
- Aaron D Friedman: Department of Otolaryngology-Head and Neck Surgery, University of Cincinnati, Cincinnati, Ohio
- Leigh M Bamford: Department of Electrical and Computer Engineering, University of Cincinnati, Cincinnati, Ohio
- Rachel M Roberts: Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, Ohio
- Victoria S McKenna: Department of Biomedical Engineering; Department of Communication Sciences and Disorders; Department of Otolaryngology-Head and Neck Surgery, University of Cincinnati, Cincinnati, Ohio
7
McKenna VS, Roberts RM, Friedman AD, Shanley SN, Llico AF. Impact of naturalistic smartphone positioning on acoustic measures of voice. J Acoust Soc Am 2023; 154:323-333. PMID: 37450331. DOI: 10.1121/10.0020176.
Abstract
Smartphone technology has been used for at-home health monitoring, but few applications (apps) are available for tracking acoustic measures of voice in people with chronic voice problems. Current apps restrict the range of smartphone positions to ones that are unnatural and non-interactive. We therefore aimed to understand how more natural smartphone positions affect the accuracy of acoustic measures in comparison to clinically acquired and derived measures. Fifty-six adults (11 vocally healthy, 45 voice disordered; aged 18-80 years) completed voice recordings while holding their smartphones in four different positions (e.g., held as if reading from the phone, held up to the ear), while a head-mounted high-quality microphone attached to a handheld acoustic recorder simultaneously captured the voice recordings. Comparisons revealed that mean fundamental frequency (Hz), maximum phonation time (s), and cepstral peak prominence (CPP; dB) were not affected by phone position; however, CPP was significantly lower on smartphone recordings than on handheld-recorder recordings. Spectral measures (low-to-high spectral ratio, harmonics-to-noise ratio) were affected by both phone position and recording device. These results indicate that more natural phone positions can be used to capture specific voice measures, but not all measures are directly comparable to clinically derived values.
Affiliation(s)
- Victoria S McKenna: Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, Ohio 45267, USA
- Rachel M Roberts: Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, Ohio 45267, USA
- Aaron D Friedman: Department of Otolaryngology-Head and Neck Surgery, University of Cincinnati, Cincinnati, Ohio 45267, USA
- Savannah N Shanley: Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, Ohio 45267, USA
- Andres F Llico: Department of Biomedical Engineering, University of Cincinnati, Cincinnati, Ohio 45221, USA
8
The Effects of Vocal Loading and Steam Inhalation on Acoustic, Aerodynamic and Vocal Tract Discomfort Measures in Adults. J Voice 2022. DOI: 10.1016/j.jvoice.2022.09.027.
9
Awan SN, Shaikh MA, Desjardins M, Feinstein H, Abbott KV. The Effect of Microphone Frequency Response on Spectral and Cepstral Measures of Voice: An Examination of Low-Cost Electret Headset Microphones. Am J Speech Lang Pathol 2022; 31:959-973. PMID: 35050724. PMCID: PMC9150670. DOI: 10.1044/2021_ajslp-21-00156.
Abstract
PURPOSE: To establish the frequency response of a selection of low-cost headset microphones that could be given to subjects for remote voice recordings, and to examine the effect of microphone type and frequency response on key acoustic measures of voice quality obtained from speech and vowel samples.
METHOD: The frequency responses of three low-cost headset microphones were evaluated using pink noise generated via a head-and-torso model. Each headset microphone was then used to record a series of speech and vowel samples prerecorded from 24 speakers who represented a diversity of sex, age, fundamental frequency (fo), and voice quality types. Recordings were analyzed for smoothed cepstral peak prominence (CPP; dB), low versus high spectral ratio (L/H ratio; dB), CPP fo (Hz), and the cepstral spectral index of dysphonia (CSID).
RESULTS: The frequency response of the microphones under test had nonsignificant effects on CPP and CPP fo, significant effects on the CSID in speech contexts, and strong, significant effects on the measure of spectral tilt (L/H ratio). However, correlations between the various headset microphones and a reference precision microphone were excellent (rs > .90).
CONCLUSIONS: All headset microphones under test were capable of tracking a wide range of diversity in the voice signal. Although higher quality microphones with demonstrated specifications are recommended for typical research and clinical purposes, low-cost electret microphones may provide valid measures of voice, specifically when the same microphone and signal chain are used for evaluating pre- versus post-treatment change or for intergroup comparisons.
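The L/H spectral ratio examined above is the ratio, in dB, of spectral energy below versus above a cutoff frequency, so a microphone's frequency-response tilt shifts it directly. A brute-force sketch using a plain DFT; the 4 kHz cutoff and the demo tones are illustrative assumptions, not taken from the study:

```python
import math

def lh_spectral_ratio(samples, sample_rate, cutoff_hz=4000.0):
    """Low-to-high spectral ratio (dB): energy below vs. above a cutoff,
    computed from a plain O(n^2) DFT magnitude spectrum."""
    n = len(samples)
    low = high = 0.0
    for k in range(1, n // 2):  # skip DC; positive frequencies only
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if k * sample_rate / n < cutoff_hz:
            low += power
        else:
            high += power
    return 10 * math.log10(low / high)

# Demo: a strong 200 Hz tone plus a weak 6 kHz tone at a 16 kHz sample rate.
fs, n = 16000, 160
signal = [math.sin(2 * math.pi * 200 * t / fs)
          + 0.1 * math.sin(2 * math.pi * 6000 * t / fs) for t in range(n)]
ratio_db = lh_spectral_ratio(signal, fs)
```

With a 10:1 amplitude ratio between the low and high tones, the expected value is about 20 dB (a 100:1 power ratio); a microphone that attenuates high frequencies would inflate this number, which is the sensitivity the study measures.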
Affiliation(s)
- Shaheen N. Awan: Department of Communication Sciences and Disorders, University of South Florida, Tampa
- Mohsin A. Shaikh: Department of Communication Sciences and Disorders, Bloomsburg University of Pennsylvania
- Maude Desjardins: Department of Communication Sciences & Disorders, University of Delaware, Newark
- Hagar Feinstein: Department of Communication Sciences & Disorders, University of Delaware, Newark