1. Wall EM, Woolley SC. Social experiences shape song preference learning independently of developmental exposure to song. Proc Biol Sci 2024; 291:20240358. PMID: 38835281. DOI: 10.1098/rspb.2024.0358.
Abstract
Communication governs the formation and maintenance of social relationships. The interpretation of communication signals depends not only on the signal's content but also on a receiver's individual experience. Experiences throughout life may interact to affect behavioural plasticity, such that a lack of developmental sensory exposure could constrain adult learning, while salient adult social experiences could remedy developmental deficits. We investigated how experiences impact the formation and direction of female auditory preferences in the zebra finch. Zebra finches form long-lasting pair bonds and females learn preferences for their mate's vocalizations. We found that after 2 weeks of cohabitation with a male, females formed pair bonds and learned to prefer their partner's song regardless of whether they were reared with ('normally reared') or without ('song-naive') developmental exposure to song. In contrast, females that heard but did not physically interact with a male did not prefer his song. In addition, previous work has found that song-naive females do not show species-typical preferences for courtship song. We found that cohabitation with a male ameliorated this difference in preference. Thus, courtship and pair bonding, but not acoustic-only interactions, strongly influence preference learning regardless of rearing experience, and may dynamically drive auditory plasticity for recognition and preference.
Affiliations
- Erin M Wall: Integrated Program in Neuroscience, McGill University, Montreal, Québec H3A 1A1, Canada; Centre for Research on Brain, Language and Music, McGill University, Montreal, Québec H3G 2A8, Canada
- Sarah C Woolley: Integrated Program in Neuroscience, McGill University, Montreal, Québec H3A 1A1, Canada; Centre for Research on Brain, Language and Music, McGill University, Montreal, Québec H3G 2A8, Canada; Department of Biology, McGill University, Montreal, Québec H3A 1B1, Canada
2. Kim G, Sánchez-Valpuesta M, Kao MH. Partial inactivation of songbird auditory cortex impairs both tempo and pitch discrimination. Mol Brain 2023; 16:48. PMID: 37270583. PMCID: PMC10239083. DOI: 10.1186/s13041-023-01039-5.
Abstract
Neuronal tuning for spectral and temporal features has been studied extensively in the auditory system. In the auditory cortex, diverse combinations of spectral and temporal tuning have been found, but how specific feature tuning contributes to the perception of complex sounds remains unclear. Neurons in the avian auditory cortex are spatially organized by spectral or temporal tuning width, providing an opportunity to investigate the link between auditory tuning and perception. Here, using naturalistic conspecific vocalizations, we asked whether subregions of the auditory cortex that are tuned for broadband sounds are more important for discriminating tempo than pitch, given their lower frequency selectivity. We found that bilateral inactivation of the broadband region impairs performance on both tempo and pitch discrimination. Our results do not support the hypothesis that the lateral, more broadband subregion of the songbird auditory cortex contributes more to processing temporal than spectral information.
Affiliations
- Gunsoo Kim: Sensory and Motor Systems Research Group, Korea Brain Research Institute, Daegu, South Korea
- Mimi H Kao: Department of Biology, Tufts University, Medford, MA 02155, USA; Graduate School of Biomedical Sciences, Tufts University School of Medicine, Boston, MA 02111, USA
3. Vocal Learning and Behaviors in Birds and Human Bilinguals: Parallels, Divergences and Directions for Research. Languages 2021. DOI: 10.3390/languages7010005.
Abstract
Comparisons between the communication systems of humans and animals are instrumental in contextualizing speech and language within an evolutionary and biological framework and in illuminating mechanisms of human communication. As a complement to previous work comparing developmental vocal learning and use among humans and songbirds, in this article we highlight phenomena associated with vocal learning subsequent to the development of primary vocalizations (i.e., the primary language (L1) in humans and the primary song (S1) in songbirds). By framing avian "second-song" (S2) learning and use within the human second-language (L2) context, we lay the groundwork for a scientifically rich dialogue between disciplines. We begin by summarizing basic birdsong research, focusing on how songs are learned and on constraints on learning. We then consider commonalities in vocal learning across humans and birds, in particular the timing and neural mechanisms of learning, the variability of input, and the variability of outcomes. For S2 and L2 learning outcomes, we address the respective roles of age, entrenchment, and social interactions. We proceed to orient current and future birdsong inquiry around foundational features of human bilingualism: L1 effects on the L2, L1 attrition, and L1↔L2 switching. Throughout, we highlight characteristics that are shared across species as well as the need for caution in interpreting birdsong research. Thus, from multiple instructive perspectives, our interdisciplinary dialogue sheds light on biological and experiential principles of L2 acquisition that are informed by birdsong research, and leverages well-studied characteristics of bilingualism to clarify, contextualize, and further explore S2 learning and use in songbirds.
4. Arneodo EM, Chen S, Brown DE, Gilja V, Gentner TQ. Neurally driven synthesis of learned, complex vocalizations. Curr Biol 2021; 31:3419-3425.e5. PMID: 34139192. DOI: 10.1016/j.cub.2021.05.035.
Abstract
Brain-machine interfaces (BMIs) hold promise to restore impaired motor function and serve as powerful tools to study learned motor skill. While limb-based motor prosthetic systems have leveraged nonhuman primates as an important animal model [1-4], speech prostheses lack a similar animal model and are more limited in terms of neural interface technology, brain coverage, and behavioral study design [5-7]. Songbirds are an attractive model for learned complex vocal behavior. Birdsong shares a number of unique similarities with human speech [8-10], and its study has yielded general insight into multiple mechanisms and circuits behind the learning, execution, and maintenance of vocal motor skill [11-18]. In addition, the biomechanics of song production bear similarity to those of humans and some nonhuman primates [19-23]. Here, we demonstrate a vocal synthesizer for birdsong, realized by mapping neural population activity recorded from electrode arrays implanted in the premotor nucleus HVC onto low-dimensional compressed representations of song, using simple computational methods that are implementable in real time. Using a generative biomechanical model of the vocal organ (syrinx) as the low-dimensional target for these mappings allows for the synthesis of vocalizations that match the bird's own song. These results provide proof of concept that high-dimensional, complex natural behaviors can be directly synthesized from ongoing neural activity. This may inspire similar approaches to prosthetics in other species that exploit knowledge of the peripheral systems and the temporal structure of their output.
Affiliations
- Ezequiel M Arneodo: Biocircuits Institute, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA; Department of Psychology, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA; IFLP-CONICET, Departamento de Física, Universidad Nacional de La Plata, CC 67, La Plata 1900, Argentina
- Shukai Chen: Department of Bioengineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
- Daril E Brown: Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
- Vikash Gilja: Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
- Timothy Q Gentner: Biocircuits Institute, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA; Department of Psychology, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA; Kavli Institute for Brain and Mind, 9500 Gilman Drive, La Jolla, CA 92093, USA; Neurobiology Section, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA