1. Zhou Z, Zhao X, Yang Q, Zhou T, Feng Y, Chen Y, Chen Z, Deng C. A randomized controlled trial of the efficacy of music therapy on the social skills of children with autism spectrum disorder. Research in Developmental Disabilities 2025; 158:104942. PMID: 39938171; DOI: 10.1016/j.ridd.2025.104942.
Abstract
BACKGROUND Children with autism spectrum disorder (ASD) experience deficits in social skills. Music therapy (MT) has been used as a therapeutic aid for clinical disorders. This study aimed to explore the effect of MT on the social skills of children with ASD and to provide evidence for clinical intervention. METHODS Children with ASD admitted to the Department of Children's Health Care of Zhongshan Torch Development Zone People's Hospital from April 2023 to March 2024 were consecutively recruited and assigned to an experimental group or a control group using a random number table. The control group received standard care only, while the experimental group received MT in addition to standard care. The MT program was led by an occupational therapist and combined social skills training with musical activities; training was conducted in small groups of 3-5 children for 30 minutes, three times a week, for 12 weeks. The Social Responsiveness Scale (SRS-2), the Autism Treatment Evaluation Checklist (ATEC), and the Gesell Development Schedules (GDS) were administered before and after the intervention. RESULTS A total of 29 children with ASD were included and randomly assigned to the MT group (n = 15) or the control group (n = 14). All participants completed the full treatment protocol. Before the intervention, SRS-2, ATEC, and GDS scores did not differ significantly between the two groups. After 12 weeks of intervention, SRS-2 scores in the MT group decreased on the social communication subscale and in the total score (both P < 0.05 compared to baseline and to the control group). ATEC scores in the MT group decreased on the speech/language/communication subscale (P < 0.05 compared to baseline and the control group), on the sociability subscale (P < 0.05 compared to baseline and the control group), and in the total score (P < 0.05 compared to baseline). The developmental quotient for the social domain of the GDS in the MT group was significantly higher than before the intervention (P < 0.05) and than in the control group (P < 0.05). CONCLUSION This study suggests that MT can effectively improve the social skills of children with ASD and has a positive effect on language ability. MT has the potential to be an effective complement to regular social skills training.
Affiliation(s)
- Zhaowen Zhou
- Department of Rehabilitation Medicine, The First Affiliated Hospital of Jinan University, China
- Xingting Zhao
- Department of Children's Health Care, Zhongshan Torch Development Zone People's Hospital, China
- Qiaoxue Yang
- Department of Children's Health Care, Zhongshan Torch Development Zone People's Hospital, China
- Tingting Zhou
- Department of Children's Health Care, Zhongshan Torch Development Zone People's Hospital, China
- Yunyan Feng
- Department of Children's Health Care, Zhongshan Torch Development Zone People's Hospital, China
- Yiping Chen
- Department of Children's Health Care, Zhongshan Torch Development Zone People's Hospital, China
- Zhuoming Chen
- Department of Rehabilitation Medicine, The First Affiliated Hospital of Jinan University, China
- Cheng Deng
- Department of Children's Health Care, Zhongshan Torch Development Zone People's Hospital, China
2. Tran Ngoc A, Meyer J, Meunier F. Musical Experience and Speech Processing: The Case of Whistled Words. Cogn Sci 2024; 48:e70032. PMID: 39699042; DOI: 10.1111/cogs.70032.
Abstract
In this paper, we explore the effect of musical expertise on whistled word perception by naive listeners. In whistled forms of non-tonal languages, vowels are transposed to relatively stable pitches, while consonants are translated into pitch movements or interruptions. Previous behavioral studies have demonstrated that naive listeners can categorize isolated consonants, vowels, and words well above chance. Here, we examine the effect of musical experience on word perception, focusing on specific phonemes within the context of the word. We consider the role of phoneme position and type and compare how whistled consonants and vowels contribute to word recognition. Musical experience confers a significant advantage that increases with the musical level achieved: all participants with musical experience showed stronger advantages for vowels than for consonants, and high-level musicians outperformed nonmusicians on both consonants and vowels. By grouping high-level musicians according to their instrument expertise (piano, violin, flute, or singing) and comparing these groups to expert users of whistled speech, we observe instrument-specific profiles in the answer patterns. The differentiation of these profiles underlines a resounding advantage for expert whistlers, as well as the role of instrument specificity when considering skills transferred from music to speech. The profiles also highlight differences in phoneme correspondence rates due to word context, which especially affect the "acute" consonants /s/ and /t/, and they underscore the robustness of /i/ and /o/.
Affiliation(s)
- Anaïs Tran Ngoc
- Université Côte d'Azur, CNRS, BCL
- Université Grenoble Alpes, CNRS, GIPSA-Lab
- Julien Meyer
- Université Grenoble Alpes, CNRS, GIPSA-Lab
- Aula de Silbo, Universidad de Las Palmas de Gran Canaria, Spain
3. Rampinini A, Balboni I, Golestani N, Berthele R. A behavioural exploration of language aptitude and experience, cognition and more using Graph Analysis. Brain Res 2024; 1842:149109. PMID: 38964704; DOI: 10.1016/j.brainres.2024.149109.
Abstract
Language aptitude has recently regained interest in cognitive neuroscience. Traditional language aptitude testing covered phonemic coding ability, associative memory, grammatical sensitivity, and inductive language learning. Domain-general cognitive abilities are also associated with individual differences in language aptitude, together with factors that have yet to be elucidated. Beyond domain-general cognition, aptitude and experience in domain-specific but non-linguistic fields (e.g., music or numerical processing) likely influence and are influenced by language aptitude. We investigated some of these relationships in a sample of 152 participants using exploratory graph analysis across different levels of regularisation, i.e., sensitivity. In a second step, we carried out a meta-cluster analysis to identify variables that are robustly grouped together. We discuss the data and their meta-network groupings at a baseline network sensitivity level, in two analyses: one including and one excluding dyslexic readers. Our results show a stable association between language and cognition, and the isolation of multilingual language experience, musicality, and literacy. We highlight the necessity of a more comprehensive view of language and of cognition as multivariate systems.
Affiliation(s)
- Alessandra Rampinini
- Department of Psychology, Faculty of Psychology and Education Science, University of Geneva, Geneva, Switzerland; National Centre for Competence in Research Evolving Language, Switzerland
- Irene Balboni
- Department of Psychology, Faculty of Psychology and Education Science, University of Geneva, Geneva, Switzerland; Institute of Multilingualism, University of Fribourg, Fribourg, Switzerland; National Centre for Competence in Research Evolving Language, Switzerland
- Narly Golestani
- Department of Psychology, Faculty of Psychology and Education Science, University of Geneva, Geneva, Switzerland; Cognitive Science Hub, University of Vienna, Vienna, Austria; Department of Behavioural and Cognitive Biology, Faculty of Life Sciences, University of Vienna, Vienna, Austria; National Centre for Competence in Research Evolving Language, Switzerland
- Raphael Berthele
- Institute of Multilingualism, University of Fribourg, Fribourg, Switzerland; National Centre for Competence in Research Evolving Language, Switzerland
4. Harris I, Niven EC, Griffin A, Scott SK. Is song processing distinct and special in the auditory cortex? Nat Rev Neurosci 2023; 24:711-722. PMID: 37783820; DOI: 10.1038/s41583-023-00743-4.
Abstract
Is the singing voice processed distinctively in the human brain? In this Perspective, we discuss what might distinguish song processing from speech processing, in light of recent work suggesting that some cortical neuronal populations respond selectively to song, and we outline the implications for our understanding of auditory processing. We review the literature on the neural and physiological mechanisms of song production and perception and show that it provides evidence for key differences between song and speech processing. We conclude by discussing how the notion that song processing is special might contribute to theories of the neurobiological origins of vocal communication and to our understanding of the neural circuitry underlying sound processing in the human cortex.
Affiliation(s)
- Ilana Harris
- Institute of Cognitive Neuroscience, University College London, London, UK
- Efe C Niven
- Institute of Cognitive Neuroscience, University College London, London, UK
- Alex Griffin
- Department of Psychology, University of Cambridge, Cambridge, UK
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK
5. Musso M, Altenmüller E, Reisert M, Hosp J, Schwarzwald R, Blank B, Horn J, Glauche V, Kaller C, Weiller C, Schumacher M. Speaking in gestures: Left dorsal and ventral frontotemporal brain systems underlie communication in conducting. Eur J Neurosci 2023; 57:324-350. PMID: 36509461; DOI: 10.1111/ejn.15883.
Abstract
Conducting constitutes a well-structured system of signs that anticipates information about the rhythm and dynamics of a musical piece. Conductors communicate the musical tempo to the orchestra, unifying the individual instrumental voices into an expressive musical Gestalt. In a functional magnetic resonance imaging (fMRI) experiment, 12 professional conductors and 16 instrumentalists conducted, in real time, novel pieces of diverse complexity in orchestration and rhythm. As control conditions, participants either listened to the stimuli or performed beat patterns, keeping time with a metronome or with complex rhythms played on a drum. Activation of the left superior temporal gyrus (STG), supplementary and premotor cortex, and Broca's pars opercularis (F3op) was shared by both musician groups and separated conducting from the other conditions. Compared to instrumentalists, conductors activated Broca's pars triangularis (F3tri) and the STG, which differentiated conducting from time beating and reflected the increase in complexity during conducting. In comparison to conductors, instrumentalists activated F3op and F3tri when distinguishing complex from simple rhythm processing. Fibre selection from a normative human connectome database, constructed using a global tractography approach, showed that F3op and the STG are connected via the arcuate fasciculus, whereas F3tri and the STG are connected via the extreme capsule. As with language, the anatomical framework characterising conducting gestures lies in the left dorsal system centred on F3op. This system reflected the sensorimotor mapping that structures gestures to musical tempo. The ventral system centred on F3tri may reflect the conductor's art of setting this musical tempo for the orchestra's individual voices in a global, holistic way.
Affiliation(s)
- Mariacristina Musso
- Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Eckart Altenmüller
- Institute of Music Physiology and Musician's Medicine, Hannover University of Music Drama and Media, Hannover, Germany
- Marco Reisert
- Department of Medical Physics, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Jonas Hosp
- Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Ralf Schwarzwald
- Department of Neuroradiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Bettina Blank
- Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Julian Horn
- Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Volkmar Glauche
- Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Christoph Kaller
- Department of Medical Physics, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Cornelius Weiller
- Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Martin Schumacher
- Department of Neuroradiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
6. Nayak S, Coleman PL, Ladányi E, Nitin R, Gustavson DE, Fisher SE, Magne CL, Gordon RL. The Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) Framework for Understanding Musicality-Language Links Across the Lifespan. Neurobiology of Language 2022; 3:615-664. PMID: 36742012; PMCID: PMC9893227; DOI: 10.1162/nol_a_00079.
Abstract
Using individual differences approaches, a growing body of literature finds positive associations between musicality and language-related abilities, complementing prior findings of links between musical training and language skills. Despite these associations, musicality has often been overlooked in mainstream models of individual differences in language acquisition and development. To better understand the biological basis of these individual differences, we propose the Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework. This integrative framework posits that musical and language-related abilities likely share some common genetic architecture (i.e., genetic pleiotropy), some degree of overlapping neural endophenotypes, and genetic influences on musically and linguistically enriched environments. Drawing upon recent advances in genomic methodologies for unraveling pleiotropy, we outline testable predictions for future research on language development and on how its underlying neurobiological substrates may be supported by genetic pleiotropy with musicality. In support of the MAPLE framework, we review and discuss findings from over seventy behavioral and neural studies, highlighting that musicality is robustly associated with individual differences in a range of speech-language skills required for communication and development, including speech perception in noise, prosodic perception, morphosyntactic skills, phonological skills, reading skills, and aspects of second/foreign language learning. Overall, this work provides a clear agenda and framework for studying musicality-language links using individual differences approaches, with an emphasis on leveraging advances in the genomics of complex musicality and language traits.
Affiliation(s)
- Srishti Nayak
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Psychology, Middle Tennessee State University, Murfreesboro, TN, USA
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt University School of Medicine, Vanderbilt University, TN, USA
- Peyton L. Coleman
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Enikő Ladányi
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Linguistics, Potsdam University, Potsdam, Germany
- Rachana Nitin
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Daniel E. Gustavson
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
- Institute for Behavioral Genetics, University of Colorado Boulder, Boulder, CO, USA
- Simon E. Fisher
- Language and Genetics Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Cyrille L. Magne
- Department of Psychology, Middle Tennessee State University, Murfreesboro, TN, USA
- PhD Program in Literacy Studies, Middle Tennessee State University, Murfreesboro, TN, USA
- Reyna L. Gordon
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Curb Center for Art, Enterprise, and Public Policy, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, TN, USA
- Vanderbilt University School of Medicine, Vanderbilt University, TN, USA
7. Scharinger M, Knoop CA, Wagner V, Menninghaus W. Neural processing of poems and songs is based on melodic properties. Neuroimage 2022; 257:119310. PMID: 35569784; DOI: 10.1016/j.neuroimage.2022.119310.
Abstract
The neural processing of speech and music is still a matter of debate. A long tradition that assumes shared processing capacities for the two domains contrasts with views that assume domain-specific processing. We contribute to this topic by investigating, in a functional magnetic resonance imaging (fMRI) study, ecologically valid stimuli that are identical in wording and differ only in that one set is typically spoken (or silently read) while the other is sung: poems and their respective musical settings. We focus on the melodic properties of spoken poems and their sung counterparts by examining proportions of significant autocorrelations (PSA) based on pitch values extracted from the recordings. Following earlier studies, we assumed a bias of poem processing towards the left hemisphere and a bias of song processing towards the right hemisphere. Furthermore, PSA values of poems and songs were expected to explain variance in left- vs. right-temporal brain areas, while continuous liking ratings obtained in the scanner should modulate activity in the reward network. Overall, poem processing relied on left temporal regions, including the superior temporal gyrus, whereas song processing recruited more right temporal areas, including Heschl's gyrus and the superior temporal gyrus. PSA values co-varied with activation in bilateral temporal regions for poems and in right-dominant fronto-temporal regions for songs. Continuous liking ratings correlated with activity in the default mode network for both poems and songs. This pattern of results suggests that the neural processing of poems and their musical settings is based on their melodic properties, supported by bilateral temporal auditory areas and an additional right fronto-temporal network known to be implicated in the processing of melodies in songs. These findings take a middle ground: they provide evidence for specific processing circuits for speech and music in the left and right hemispheres, but also for shared processing of the melodic aspects of both poems and their musical settings in the right temporal cortex. Thus, we demonstrate the neurobiological plausibility of assuming that melodic properties matter in spoken and sung aesthetic language alike, along with the involvement of the default mode network in the aesthetic appreciation of these properties.
Affiliation(s)
- Mathias Scharinger
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Research Group Phonetics, Institute of German Linguistics, Philipps-University Marburg, Pilgrimstein 16, Marburg 35032, Germany; Center for Mind, Brain and Behavior, Universities of Marburg and Gießen, Germany.
| | - Christine A Knoop
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
| | - Valentin Wagner
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Experimental Psychology Unit, Helmut Schmidt University / University of the Federal Armed Forces Hamburg, Germany
| | - Winfried Menninghaus
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
| |
8. Rimmele JM, Kern P, Lubinus C, Frieler K, Poeppel D, Assaneo MF. Musical Sophistication and Speech Auditory-Motor Coupling: Easy Tests for Quick Answers. Front Neurosci 2022; 15:764342. PMID: 35058741; PMCID: PMC8763673; DOI: 10.3389/fnins.2021.764342.
Abstract
Musical training enhances auditory-motor cortex coupling, which in turn facilitates music and speech perception. How tightly the temporal processing of music and speech is intertwined is a topic of current research. We investigated the relationship between musical sophistication (Goldsmiths Musical Sophistication Index, Gold-MSI) and spontaneous speech-to-speech synchronization behavior as an indirect measure of speech auditory-motor cortex coupling strength. In a group of 196 participants, we tested whether the outcome of the spontaneous speech-to-speech synchronization test (SSS-test) can be inferred from self-reported musical sophistication. Participants were classified as high (HIGHs) or low (LOWs) synchronizers according to the SSS-test. HIGHs scored higher than LOWs on all Gold-MSI subscales (General Score, Active Engagement, Musical Perception, Musical Training, Singing Skills) except the Emotional Attachment scale. Compared to a previously reported German-speaking sample, HIGHs scored higher overall and LOWs lower. Compared to an estimated distribution for the English-speaking general population, our sample scored lower overall, with the scores of LOWs differing significantly from the normal distribution and falling around the 30th percentile. While HIGHs more often reported musical training than LOWs, the distribution of training instruments did not vary across groups. Importantly, even after the highly correlated Gold-MSI subscores were decorrelated, the Musical Perception and Musical Training subscales in particular allowed inference of speech-to-speech synchronization behavior. Differential effects of musical perception and training were observed, with training predicting audio-motor synchronization in both groups but perception only in the HIGHs. Our findings suggest that speech auditory-motor cortex coupling strength can be inferred from the training and perceptual aspects of musical sophistication, pointing to shared mechanisms involved in speech and music perception.
Affiliation(s)
- Johanna M. Rimmele
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Max Planck NYU Center for Language, Music and Emotion, New York, NY, United States
- Pius Kern
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Christina Lubinus
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Klaus Frieler
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- David Poeppel
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Max Planck NYU Center for Language, Music and Emotion, New York, NY, United States
- Department of Psychology, New York University, New York, NY, United States
- Ernst Strüngmann Institute for Neuroscience, Frankfurt, Germany
- M. Florencia Assaneo
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, México