1. Tobin J, Nelson P, MacDonald B, Heywood R, Cave R, Seaver K, Desjardins A, Jiang PP, Green JR. Automatic Speech Recognition of Conversational Speech in Individuals With Disordered Speech. Journal of Speech, Language, and Hearing Research 2024; 67:4176-4185. [PMID: 38963790] [DOI: 10.1044/2024_jslhr-24-00045]
Abstract
PURPOSE This study examines the effectiveness of automatic speech recognition (ASR) for individuals with speech disorders, addressing the gap in performance between read and conversational ASR. We analyze the factors influencing this disparity and the effect of speech mode-specific training on ASR accuracy. METHOD Recordings of read and conversational speech from 27 individuals with various speech disorders were analyzed using both (a) one speaker-independent ASR system trained and optimized for typical speech and (b) multiple ASR models that were personalized to the speech of the participants with disordered speech. Word error rates were calculated for each speech model, read versus conversational, and subject. Linear mixed-effects models were used to assess the impact of speech mode and disorder severity on ASR accuracy. We investigated nine variables, classified as technical, linguistic, or speech impairment factors, for their potential influence on the performance gap. RESULTS We found a significant performance gap between read and conversational speech in both personalized and unadapted ASR models. Speech impairment severity notably impacted recognition accuracy in unadapted models for both speech modes and in personalized models for read speech. Linguistic attributes of utterances were the most influential on accuracy, though atypical speech characteristics also played a role. Including conversational speech samples in model training notably improved recognition accuracy. CONCLUSIONS We observed a significant performance gap in ASR accuracy between read and conversational speech for individuals with speech disorders. This gap was largely due to the linguistic complexity and unique characteristics of speech disorders in conversational speech. Training personalized ASR models using conversational speech significantly improved recognition accuracy, demonstrating the importance of domain-specific training and highlighting the need for further research into ASR systems capable of handling disordered conversational speech effectively.
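The word error rates reported in this study follow the standard ASR definition: word-level edit distance (substitutions + deletions + insertions) divided by the number of words in the reference transcript. As an illustrative sketch only (not the authors' code), a minimal implementation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein edit distance between the
    reference and the ASR hypothesis, divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the quick brown fox", "the quick brown fox"))      # 0.0
print(wer("please call stella", "please call stella ahead"))  # 1 insertion / 3 words
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is not unusual for disordered conversational speech.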
Affiliation(s)
- Jordan R Green
- MGH Institute of Health Professions, Boston, MA
- Harvard University, Cambridge, MA
2. Rowe HP, Stipancic KL, Campbell TF, Yunusova Y, Green JR. The association between longitudinal declines in speech sound accuracy and speech intelligibility in speakers with amyotrophic lateral sclerosis. Clinical Linguistics & Phonetics 2024; 38:227-248. [PMID: 37122073] [PMCID: PMC10613582] [DOI: 10.1080/02699206.2023.2202297]
Abstract
The purpose of this study was to examine how neurodegeneration secondary to amyotrophic lateral sclerosis (ALS) impacts speech sound accuracy over time and how speech sound accuracy, in turn, is related to speech intelligibility. Twenty-one participants with ALS read the Bamboo Passage over multiple data collection sessions across several months. Phonemic and orthographic transcriptions were completed for all speech samples. The percentage of phonemes accurately produced was calculated across each phoneme, sound class (i.e. consonants versus vowels), and distinctive feature (i.e. features involved in Manner of Articulation, Place of Articulation, Laryngeal Voicing, Tongue Height, and Tongue Advancement). Intelligibility was determined by calculating the percentage of words correctly transcribed orthographically by naive listeners. Linear mixed effects models were conducted to assess the decline of each distinctive feature over time and its impact on intelligibility. The results demonstrated that overall phonemic production accuracy had a nonlinear relationship with speech intelligibility and that a subset of features (i.e. those dependent on precise lingual and labial constriction and/or extensive lingual and labial movement) were more important for intelligibility and were more impacted over time than other features. Furthermore, findings revealed that consonants were more strongly associated with intelligibility than vowels, but consonants did not significantly differ from vowels in their decline over time. These findings have the potential to (1) strengthen mechanistic understanding of the physiological constraints imposed by neuronal degeneration on speech production and (2) inform the timing and selection of treatment and assessment targets for individuals with ALS.
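Intelligibility here is the percentage of words correctly transcribed orthographically by naive listeners. A minimal sketch of that scoring step, assuming position-by-position word matching (actual transcription-scoring protocols vary, e.g. in how homophones, misspellings, or word order are handled):

```python
def percent_words_correct(target_words, transcribed_words):
    """Percentage of target words a listener transcribed correctly.
    A word counts as correct only if it appears at the same position
    in the transcript; omitted words therefore count against the score,
    since the denominator is always the full target length."""
    correct = sum(1 for t, h in zip(target_words, transcribed_words)
                  if t.lower() == h.lower())
    return 100.0 * correct / len(target_words)

score = percent_words_correct(["speech", "is", "clear"],
                              ["speech", "was", "clear"])
print(round(score, 1))  # 66.7 (2 of 3 words matched)
```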
Affiliation(s)
- Hannah P Rowe
- Department of Rehabilitation Sciences, MGH Institute of Health Professions, Boston, Massachusetts, USA
- Kaila L Stipancic
- Department of Communicative Disorders and Sciences, The State University of New York, Buffalo, New York, USA
- Thomas F Campbell
- Callier Center for Communication Disorders, University of Texas, Dallas, Texas, USA
- Yana Yunusova
- Department of Speech-Language Pathology and Rehabilitation Sciences Institute, University of Toronto, Toronto, Ontario, Canada
- Hurvitz Brain Sciences Research Program, Sunnybrook Research Institute, Toronto, Ontario, Canada
- KITE Research Center, Toronto Rehabilitation Institute, Toronto, Ontario, Canada
- Jordan R Green
- Department of Rehabilitation Sciences, MGH Institute of Health Professions, Boston, Massachusetts, USA
3. Krajewski E, Lee J, Olmstead AJ, Simmons Z. Comparison of Vowel and Sentence Intelligibility in People With Dysarthria Secondary to Amyotrophic Lateral Sclerosis. Journal of Speech, Language, and Hearing Research 2024:1-10. [PMID: 38376500] [DOI: 10.1044/2024_jslhr-23-00497]
Abstract
PURPOSE In this study, we examined the utility of vowel intelligibility testing for assessing the impact of dysarthria on speech characteristics in people with amyotrophic lateral sclerosis (ALS). We tested the sensitivity and specificity of overall vowel identification, as well as that of vowel-specific identification, to dysarthria presence and severity. We additionally examined the relationship between vowel intelligibility and sentence intelligibility. METHOD Twenty-three people with ALS and 22 age- and sex-matched control speakers produced sentences from the Speech Intelligibility Test (SIT), as well as 10 American English monophthongs in /h/-vowel-/d/ words for the vowel intelligibility test (VIT). Data for SIT and VIT scores came from 135 listeners. Diagnostic accuracy of VIT measures was evaluated using the area under the receiver operating characteristic curve. We then examined differences between control speakers, speakers with mild dysarthria, and speakers with severe dysarthria in their relationship between SIT and VIT scores. RESULTS The results suggest that the overall vowel intelligibility score showed high sensitivity and specificity in differentiating between speakers with and without dysarthria, even those with milder symptoms. In addition, single-vowel identification scores showed at least acceptable group differentiation between the mild and severe dysarthria groups, though fewer single vowels were acceptable discriminators between the control group and the group with mild dysarthria. Identification accuracy of /ɪ/ in particular showed excellent discrimination across all groups. Examination of the relationship between SIT and VIT scores suggests a severity-specific relationship. Speakers with SIT scores above 70% generally had higher SIT than VIT scores, whereas speakers with SIT below 70% generally had higher VIT than SIT scores. DISCUSSION Vowel intelligibility testing can detect speech impairments in speakers with mild dysarthria and residual articulatory function in speakers with severe dysarthria. Vowel intelligibility testing may, therefore, be a useful addition to intelligibility testing for individuals with dysarthria.
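Diagnostic accuracy here is quantified as the area under the ROC curve. A useful identity: AUC equals the probability that a randomly drawn member of one group scores higher than a randomly drawn member of the other, with ties counted as one half. A small illustrative sketch of that computation (hypothetical scores, not the authors' analysis code):

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the rank-sum identity: the probability that a randomly
    chosen case from the positive group scores higher than a randomly
    chosen case from the negative group (ties contribute 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical vowel-identification accuracies: controls vs. dysarthria group.
controls = [0.95, 0.92, 0.90]
dysarthria = [0.85, 0.70]
print(roc_auc(controls, dysarthria))  # 1.0: perfect separation
```

An AUC of 1.0 means the score separates the groups perfectly; 0.5 means it discriminates no better than chance.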
Affiliation(s)
- Elizabeth Krajewski
- Department of Communication Sciences and Disorders, The Pennsylvania State University, University Park
- Jimin Lee
- Department of Communication Sciences and Disorders, The Pennsylvania State University, University Park
- Annie J Olmstead
- Department of Communication Sciences and Disorders, The Pennsylvania State University, University Park
- Zachary Simmons
- Penn State Hershey ALS Clinic and Research Center, The Pennsylvania State University College of Medicine, Hershey
- Department of Neurology, The Pennsylvania State University College of Medicine, Hershey
- Department of Humanities, The Pennsylvania State University College of Medicine, Hershey
4. Gutz SE, Maffei MF, Green JR. Feedback From Automatic Speech Recognition to Elicit Clear Speech in Healthy Speakers. American Journal of Speech-Language Pathology 2023; 32:2940-2959. [PMID: 37824377] [PMCID: PMC10721250] [DOI: 10.1044/2023_ajslp-23-00030]
Abstract
PURPOSE This study assessed the effectiveness of feedback generated by automatic speech recognition (ASR) for eliciting clear speech from young, healthy individuals. As a preliminary step toward exploring a novel method for eliciting clear speech in patients with dysarthria, we investigated the effects of ASR feedback in healthy controls. If successful, ASR feedback has the potential to facilitate independent, at-home clear speech practice. METHOD Twenty-three healthy control speakers (ages 23-40 years) read sentences aloud in three speaking modes: Habitual, Clear (over-enunciated), and in response to ASR feedback (ASR). In the ASR condition, we used Mozilla DeepSpeech to transcribe speech samples and provide participants with a value indicating the accuracy of the ASR's transcription. For speakers who achieved sufficiently high ASR accuracy, noise was added to their speech at a participant-specific signal-to-noise ratio to ensure that each participant had to over-enunciate to achieve high ASR accuracy. RESULTS Compared to habitual speech, speech produced in the ASR and Clear conditions was clearer, as rated by speech-language pathologists, and more intelligible, per speech-language pathologist transcriptions. Speech in the Clear and ASR conditions aligned on several acoustic measures, particularly those associated with increased vowel distinctiveness and decreased speaking rate. However, ASR accuracy, intelligibility, and clarity were each correlated with different speech features, which may have implications for how people change their speech for ASR feedback. CONCLUSIONS ASR successfully elicited outcomes similar to clear speech in healthy speakers. Future work should investigate its efficacy in eliciting clear speech in people with dysarthria.
Affiliation(s)
- Sarah E. Gutz
- Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA
- Marc F. Maffei
- Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA
- Jordan R. Green
- Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA
5. Mahr TJ, Hustad KC. Lexical Predictors of Intelligibility in Young Children's Speech. Journal of Speech, Language, and Hearing Research 2023; 66:3013-3025. [PMID: 36626389] [PMCID: PMC10555465] [DOI: 10.1044/2022_jslhr-22-00294]
Abstract
PURPOSE Speech perception is a probabilistic process, integrating bottom-up and top-down sources of information, and the frequency and phonological neighborhood of a word can predict how well it is perceived. In addition to asking how intelligible speakers are, it is important to ask how intelligible individual words are. We examined whether lexical features of words influenced intelligibility in young children. In particular, we applied the neighborhood activation model, which posits that a word's frequency and the overall frequency of a word's phonological competitors jointly affect the intelligibility of a word. METHOD We measured the intelligibility of 165 children between 30 and 47 months in age on 38 different single words. We performed an item response analysis using generalized mixed-effects logistic regression, adding word-level characteristics (target frequency, neighborhood competition, motor complexity, and phonotactic probability) as predictors of intelligibility. RESULTS There was considerable variation among the words and the children, but between-word variability was larger in magnitude than between-child variability. There was a clear positive effect of target word frequency and a negative effect of neighborhood competition. We did not find a clear negative effect of motor complexity, and phonotactic probability did not have any effect on intelligibility. CONCLUSION Word frequency and neighborhood competition both had an effect on intelligibility in young children's speech, so listener expectations are an important factor in the selection of items for children's intelligibility assessment.
Affiliation(s)
- Katherine C. Hustad
- Waisman Center, University of Wisconsin–Madison
- Department of Communication Sciences and Disorders, University of Wisconsin–Madison
6. Stipancic KL, Wilding G, Tjaden K. Lexical Characteristics of the Speech Intelligibility Test: Effects on Transcription Intelligibility for Speakers With Multiple Sclerosis and Parkinson's Disease. Journal of Speech, Language, and Hearing Research 2023; 66:3115-3131. [PMID: 36931064] [PMCID: PMC10555462] [DOI: 10.1044/2023_jslhr-22-00279]
Abstract
PURPOSE Lexical characteristics of speech stimuli can significantly impact intelligibility. However, lexical characteristics of the widely used Speech Intelligibility Test (SIT) are unknown. We aimed to (a) define variation in neighborhood density, word frequency, grammatical word class, and type-token ratio across a large corpus of SIT sentences and tests and (b) determine the relationship of lexical characteristics to speech intelligibility in speakers with multiple sclerosis (MS), Parkinson's disease (PD), and neurologically healthy controls. METHOD Using an extant database of 92 speakers (32 controls, 30 speakers with MS, and 30 speakers with PD), percent correct intelligibility scores were obtained for the SIT. Neighborhood density, word frequency, word class, and type-token ratio were calculated and summed for each of the 11 sentences of each SIT test. The distribution of each characteristic across SIT sentences and tests was examined. Linear mixed-effects models were performed to assess the relationship between intelligibility and the lexical characteristics. RESULTS There was large variability in the distribution of lexical characteristics across this large corpus of SIT sentences and tests. Modeling revealed a relationship between intelligibility and the lexical characteristics, with word frequency and word class significantly contributing to the model. CONCLUSIONS Three primary findings emerged: (a) There was considerable variability in lexical characteristics both within and across the large corpus of SIT tests; (b) there was not a robust association between intelligibility and the lexical characteristics; and (c) findings from a study demonstrating an effect of neighborhood density and word frequency on intelligibility were replicated. Clinical and research implications of the findings are discussed, and three exemplar SIT tests systematically controlling for neighborhood density and word frequency are provided.
Affiliation(s)
- Kaila L. Stipancic
- Department of Communicative Disorders and Sciences, University at Buffalo, The State University of New York
- Gregory Wilding
- Department of Biostatistics, University at Buffalo, The State University of New York
- Kris Tjaden
- Department of Communicative Disorders and Sciences, University at Buffalo, The State University of New York
7. Lehner K, Ziegler W. The Impact of Lexical and Articulatory Factors in the Automatic Selection of Test Materials for a Web-Based Assessment of Intelligibility in Dysarthria. Journal of Speech, Language, and Hearing Research 2021; 64:2196-2212. [PMID: 33647214] [DOI: 10.1044/2020_jslhr-20-00267]
Abstract
Purpose The clinical assessment of intelligibility must be based on a large repository and extensive variation of test materials, to render test stimuli unpredictable and thereby avoid expectancies and familiarity effects in the listeners. At the same time, it is essential that test materials are systematically controlled for factors influencing intelligibility. This study investigated the impact of lexical and articulatory characteristics of quasirandomly selected target words on intelligibility in a large sample of dysarthric speakers under clinical examination conditions. Method Using the clinical assessment tool KommPaS, a total of 2,700 sentence-embedded target words, quasirandomly drawn from a large corpus, were spoken by a group of 100 dysarthric patients and later transcribed by listeners recruited via online crowdsourcing. Transcription accuracy was analyzed for influences of lexical frequency, phonological neighborhood structure, articulatory complexity, lexical familiarity, word class, stimulus length, and embedding position. Classification and regression analyses were performed using random forests and generalized linear mixed models. Results Across all degrees of severity, target words with higher frequency, fewer and less frequent phonological neighbors, higher articulatory complexity, and higher lexical familiarity received significantly higher intelligibility scores. In addition, target words were more challenging sentence-initially than in medial or final position. Stimulus length had mixed effects; word length and word class had no effect. Conclusions In a large-scale clinical examination of intelligibility in speakers with dysarthria, several well-established influences of lexical and articulatory parameters could be replicated, and the roles of new factors were discussed. This study provides clues about how experimental rigor can be combined with clinical requirements in the diagnostics of communication impairment in patients with dysarthria.
Affiliation(s)
- Katharina Lehner
- Clinical Neuropsychology Research Group, Institute of Phonetics and Speech Processing, Ludwig Maximilians University, Munich, Germany
- Wolfram Ziegler
- Clinical Neuropsychology Research Group, Institute of Phonetics and Speech Processing, Ludwig Maximilians University, Munich, Germany
8. Optimizing linguistic materials for feature-based intelligibility assessment in speech impairments. Behavior Research Methods 2021; 54:42-53. [PMID: 34100199] [DOI: 10.3758/s13428-021-01610-9]
Abstract
Assessing the intelligibility of speech-disordered individuals generally involves asking them to read aloud texts such as word lists, a procedure that can be time-consuming if the materials are lengthy. This paper seeks to optimize such elicitation materials by identifying an optimal trade-off between the quantity of material needed for assessment purposes and its capacity to elicit robust intelligibility metrics. More specifically, it investigates the effect of reducing the number of pseudowords used in a phonetic-acoustic decoding task in a speech-impaired population in terms of the subsequent impact on the intelligibility classifier, as quantified by accuracy indexes (AUC of ROC, Balanced Accuracy index, and F-scores). A comparison of the obtained accuracy indexes shows that when the elicitation material is reduced on the basis of a phonetic criterion (here, phonotactic complexity), the classifier has a higher classifying ability than when the material is arbitrarily reduced. Crucially, downsizing the material to about 30% of the original dataset neither diminishes the classifier's performance nor affects its stability. This result is of significant interest to clinicians as well as patients, since it validates a tool that is both reliable and efficient.
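The accuracy indexes named in this abstract have simple closed forms given a classifier's confusion matrix; balanced accuracy in particular averages sensitivity and specificity, which keeps it meaningful when the impaired and control groups are of unequal size. An illustrative sketch with hypothetical counts (not data from the paper):

```python
def balanced_accuracy(tp, fp, tn, fn):
    """Mean of sensitivity (recall on the impaired group) and specificity
    (recall on the control group); unlike raw accuracy, it is not inflated
    by class imbalance."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2

def f1_score(tp, fp, fn):
    """F-score: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion matrix: 10 impaired speakers, 20 controls.
tp, fn = 8, 2   # impaired speakers correctly / incorrectly classified
tn, fp = 15, 5  # controls correctly / incorrectly classified
print(balanced_accuracy(tp, fp, tn, fn))  # 0.775
print(round(f1_score(tp, fp, fn), 3))     # 0.696
```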
9. Kuruvilla-Dugdale M, Salazar M, Zhang A, Mefferd AS. Detection of Articulatory Deficits in Parkinson's Disease: Can Systematic Manipulations of Phonetic Complexity Help? Journal of Speech, Language, and Hearing Research 2020; 63:2084-2098. [PMID: 32598198] [PMCID: PMC7838836] [DOI: 10.1044/2020_jslhr-19-00245]
Abstract
Purpose This study sought to determine the feasibility of using phonetic complexity manipulations as a way to systematically assess articulatory deficits in talkers with progressive dysarthria due to Parkinson's disease (PD). Method Articulatory kinematics were recorded using three-dimensional electromagnetic articulography from 15 talkers with PD (58-84 years old) and 15 healthy controls (55-80 years old) while they produced target words embedded in a carrier phrase. The majority of the talkers with PD exhibited a relatively mild dysarthria. For stimuli selection, phonetic complexity was calculated for a variety of words using the framework proposed by Kent (1992), and six words representative of low, medium, and high phonetic complexity were selected as targets. Jaw, posterior tongue, and anterior tongue kinematic measures that were used to test for phonetic complexity effects included movement speed, cumulative path distance, movement range, movement duration, and spatiotemporal variability. Results Significantly smaller movements and slower movement speeds were evident in talkers with PD, predominantly for words with high phonetic complexity. The effect sizes of between-groups differences were larger for several jaw kinematic measures than those of the tongue. Discussion and Conclusion Findings suggest that systematic manipulations of phonetic complexity can support the detection of articulatory deficits in talkers with PD. Phonetic complexity should therefore be leveraged for the assessment of articulatory performance in talkers with progressive dysarthria. Future work will be directed toward linking speech kinematic and auditory-perceptual measures to determine the clinical significance of the current findings.
Affiliation(s)
- Mary Salazar
- Department of Speech, Language and Hearing Sciences, University of Missouri, Columbia
- Anqing Zhang
- Division of Biostatistics, Children's National Medical Center, Washington, DC
- Antje S. Mefferd
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
10. Chiaramonte R, Bonfiglio M. Acoustic analysis of voice in bulbar amyotrophic lateral sclerosis: a systematic review and meta-analysis of studies. Logopedics Phoniatrics Vocology 2019; 45:151-163. [DOI: 10.1080/14015439.2019.1687748]
Affiliation(s)
- Rita Chiaramonte
- Department of Physical Medicine and Rehabilitation, University of Catania, Catania, Italy
- Marco Bonfiglio
- Department for Health Activities, ASP Siracusa, Siracusa, Italy
11. Bislick L, Hula WD. Perceptual Characteristics of Consonant Production in Apraxia of Speech and Aphasia. American Journal of Speech-Language Pathology 2019; 28:1411-1431. [PMID: 31454259] [DOI: 10.1044/2019_ajslp-18-0169]
Abstract
Purpose This retrospective analysis examined group differences in error rate across 4 contextual variables (clusters vs. singletons, syllable position, number of syllables, and articulatory phonetic features) in adults with apraxia of speech (AOS) and adults with aphasia only. Group differences in the distribution of error type across contextual variables were also examined. Method Ten individuals with acquired AOS and aphasia and 11 individuals with aphasia participated in this study. In the context of a 2-group experimental design, the influence of 4 contextual variables on error rate and error type distribution was examined via repetition of 29 multisyllabic words. Error rates were analyzed using Bayesian methods, whereas distribution of error type was examined via descriptive statistics. Results There were 4 findings of robust differences between the 2 groups. These differences were found for syllable position, number of syllables, manner of articulation, and voicing. Group differences were less robust for clusters versus singletons and place of articulation. Results of error type distribution show a high proportion of distortion and substitution errors in speakers with AOS and a high proportion of substitution and omission errors in speakers with aphasia. Conclusion Findings add to the continued effort to improve the understanding and assessment of AOS and aphasia. Several contextual variables more consistently influenced breakdown in participants with AOS compared to participants with aphasia and should be considered during the diagnostic process. Supplemental Material https://doi.org/10.23641/asha.9701690.
Affiliation(s)
- Lauren Bislick
- School of Communication Sciences and Disorders, University of Central Florida, Orlando
- William D Hula
- VA Pittsburgh Healthcare System, PA
- University of Pittsburgh, PA