1
Henry KS, Amburgey KN, Abrams KS, Idrobo F, Carney LH. Formant-frequency discrimination of synthesized vowels in budgerigars (Melopsittacus undulatus) and humans. J Acoust Soc Am 2017; 142:2073. [PMID: 29092534] [PMCID: PMC5640449] [DOI: 10.1121/1.5006912] [Received: 05/23/2017] [Revised: 08/29/2017] [Accepted: 09/28/2017]
Abstract
Vowels are complex sounds with four to five spectral peaks known as formants. The frequencies of the two lowest formants, F1 and F2, are sufficient for vowel discrimination. Behavioral studies show that many birds and mammals can discriminate vowels. However, few studies have quantified thresholds for formant-frequency discrimination. The present study examined formant-frequency discrimination in budgerigars (Melopsittacus undulatus) and humans using stimuli with one or two formants and a constant fundamental frequency of 200 Hz. Stimuli had spectral envelopes similar to natural speech and were presented with random level variation. Thresholds were estimated for frequency discrimination of F1, F2, and simultaneous F1 and F2 changes. The same two-down, one-up tracking procedure and single-interval, two-alternative task were used for both species. Formant-frequency discrimination thresholds were as sensitive in budgerigars as in humans and followed the same patterns across all conditions. Thresholds expressed as percent frequency difference were higher for F1 than for F2, and were unchanged between stimuli with one or two formants. Thresholds for simultaneous F1 and F2 changes indicated that discrimination was based on combined information from both formant regions. Results were consistent with previous human studies and show that budgerigars provide an exceptionally sensitive animal model of vowel feature discrimination.
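The two-down, one-up tracking procedure used here is a standard transformed up-down rule that converges on roughly the 70.7%-correct point of the psychometric function: the tracked difference shrinks after two consecutive correct responses and grows after each error. A minimal sketch, with illustrative parameter values that are not taken from the study:

```python
def two_down_one_up(run_trial, start=10.0, step=1.0, floor=0.1, n_reversals=8):
    """Adaptive track: shrink the stimulus difference after two consecutive
    correct responses, grow it after each error, and estimate threshold as
    the mean difference at track reversals."""
    delta, streak, direction, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        if run_trial(delta):            # True if the listener was correct
            streak += 1
            move = -1 if streak == 2 else 0
            if move:
                streak = 0
        else:
            streak, move = 0, 1
        if move:
            if direction and move != direction:
                reversals.append(delta)  # track changed direction here
            direction = move
            delta = max(floor, delta + move * step)
    return sum(reversals) / len(reversals)
```

Averaging the deltas at reversals is one common threshold estimate; real adaptive tracks typically also halve the step size after the first few reversals.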
Affiliation(s)
- Kenneth S Henry
- Department of Otolaryngology, University of Rochester, Rochester, New York 14642, USA
- Kassidy N Amburgey
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14642, USA
- Kristina S Abrams
- Department of Neuroscience, University of Rochester, Rochester, New York 14642, USA
- Laurel H Carney
- Department of Biomedical Engineering, University of Rochester, Rochester, New York 14642, USA
2
Bizley JK, Walker KMM, King AJ, Schnupp JWH. Spectral timbre perception in ferrets: discrimination of artificial vowels under different listening conditions. J Acoust Soc Am 2013; 133:365-76. [PMID: 23297909] [PMCID: PMC3783993] [DOI: 10.1121/1.4768798]
Abstract
Spectral timbre is an acoustic feature that enables human listeners to determine the identity of a spoken vowel. Despite its importance to sound perception, little is known about the neural representation of sound timbre and few psychophysical studies have investigated timbre discrimination in non-human species. In this study, ferrets were positively conditioned to discriminate artificial vowel sounds in a two-alternative-forced-choice paradigm. Animals quickly learned to discriminate the vowel sound /u/ from /ε/ and were immediately able to generalize across a range of voice pitches. They were further tested in a series of experiments designed to assess how well they could discriminate these vowel sounds under different listening conditions. First, a series of morphed vowels was created by systematically shifting the location of the first and second formant frequencies. Second, the ferrets were tested with single formant stimuli designed to assess which spectral cues they could be using to make their decisions. Finally, vowel discrimination thresholds were derived in the presence of noise maskers presented from either the same or a different spatial location. These data indicate that ferrets show robust vowel discrimination behavior across a range of listening conditions and that this ability shares many similarities with human listeners.
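A morphed vowel continuum like the one described can be sketched by linearly interpolating formant frequencies between two endpoint vowels. The (F1, F2) values below are hypothetical, not the stimulus set used in the study:

```python
def morph_formants(f_start, f_end, n_steps):
    """Formant continuum by linear interpolation between two vowels'
    (F1, F2) pairs, in the spirit of morphed-vowel discrimination stimuli."""
    return [tuple(a + (b - a) * i / (n_steps - 1) for a, b in zip(f_start, f_end))
            for i in range(n_steps)]
```

Each step of the returned continuum would then be passed to a formant synthesizer; only the formant targets are interpolated here.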
Affiliation(s)
- Jennifer K Bizley
- Department of Physiology, Anatomy and Genetics, University of Oxford, Parks Road, Oxford OX1 3PT, United Kingdom.
3
Charlton BD, Ellis WAH, Larkin R, Fitch WT. Perception of size-related formant information in male koalas (Phascolarctos cinereus). Anim Cogn 2012; 15:999-1006. [PMID: 22740017] [DOI: 10.1007/s10071-012-0527-5] [Received: 02/08/2012] [Revised: 06/18/2012] [Accepted: 06/18/2012]
Abstract
Advances in bioacoustics allow us to study the perceptual and functional relevance of individual acoustic parameters. Here, we use re-synthesised male koala bellows and a habituation-dishabituation paradigm to test the hypothesis that male koalas are sensitive to shifts in formant frequencies corresponding to the natural variation in body size between a large and small adult male. We found that males habituated to bellows in which the formants had been shifted to simulate a large or small male displayed a significant increase in behavioural response (dishabituation) when they were presented with bellows simulating the alternate size variant. The rehabituation control, in which behavioural response levels returned to those of the last playbacks of the habituation phase, indicates that this was not a chance increase in response levels. Our results provide clear evidence that male koalas perceive and attend to size-related formant information in their own species-specific vocalisations, and suggest that formant perception is a widespread ability shared by marsupials and placental mammals, and perhaps by vertebrates more widely.
4
Charlton BD, Ellis WAH, McKinnon AJ, Cowin GJ, Brumm J, Nilsson K, Fitch WT. Cues to body size in the formant spacing of male koala (Phascolarctos cinereus) bellows: honesty in an exaggerated trait. J Exp Biol 2011; 214:3414-22. [PMID: 21957105] [DOI: 10.1242/jeb.061358]
Abstract
Determining the information content of vocal signals and understanding morphological modifications of vocal anatomy are key steps towards revealing the selection pressures acting on a given species' vocal communication system. Here, we used a combination of acoustic and anatomical data to investigate whether male koala bellows provide reliable information on the caller's body size, and to confirm whether male koalas have a permanently descended larynx. Our results indicate that the spectral prominences of male koala bellows are formants (vocal tract resonances), and show that larger males have lower formant spacing. In contrast, no relationship between body size and the fundamental frequency was found. Anatomical investigations revealed that male koalas have a permanently descended larynx: the first example of this in a marsupial. Furthermore, we found a deeply anchored sternothyroid muscle that could allow male koalas to retract their larynx into the thorax. While this would explain the low formant spacing of the exhalation and initial inhalation phases of male bellows, further research will be required to reveal the anatomical basis for the formant spacing of the later inhalation phases, which is predictive of vocal tract lengths of around 50 cm (nearly the length of an adult koala's body). Taken together, these findings show that the formant spacing of male koala bellows has the potential to provide receivers with reliable information on the caller's body size, and reveal that vocal adaptations allowing callers to exaggerate (or maximise) the acoustic impression of their size have evolved independently in marsupials and placental mammals.
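The ~50 cm vocal tract estimate follows from the uniform-tube model: an unconstricted tract closed at the glottis and open at the mouth resonates at odd multiples of c/4L, so adjacent formants are spaced ΔF = c/2L and the tract length is L = c/(2ΔF). A small sketch, assuming a speed of sound of 350 m/s in warm, humid air:

```python
def vocal_tract_length(formant_spacing_hz, c=350.0):
    """Estimate vocal tract length (m) from formant spacing under a
    uniform-tube (quarter-wavelength resonator) model: dF = c / (2L)."""
    return c / (2.0 * formant_spacing_hz)
```

With a formant spacing of 350 Hz this gives L = 0.5 m, matching the near-body-length figure quoted for the later inhalation phases of the bellow.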
Affiliation(s)
- Benjamin D Charlton
- Department of Cognitive Biology, University of Vienna, A-1090 Vienna, Austria.
5
Charlton BD, McComb K, Reby D. Free-ranging red deer hinds show greater attentiveness to roars with formant frequencies typical of young males. Ethology 2008. [DOI: 10.1111/j.1439-0310.2008.01539.x]
6
Nelken I, Bar-Yosef O. Neurons and objects: the case of auditory cortex. Front Neurosci 2008; 2:107-13. [PMID: 18982113] [PMCID: PMC2570071] [DOI: 10.3389/neuro.01.009.2008] [Received: 04/02/2008] [Accepted: 06/13/2008]
Abstract
Sounds are encoded into electrical activity in the inner ear, where they are represented (roughly) as patterns of energy in narrow frequency bands. However, sounds are perceived in terms of their high-order properties. It is generally believed that this transformation is performed along the auditory hierarchy, with low-level physical cues computed at early stages of the auditory system and high-level abstract qualities at high-order cortical areas. The functional position of primary auditory cortex (A1) in this scheme is unclear – is it ‘early’, encoding physical cues, or is it ‘late’, already encoding abstract qualities? Here we argue that neurons in cat A1 show sensitivity to high-level features of sounds. In particular, these neurons may already show sensitivity to ‘auditory objects’. The evidence for this claim comes from studies in which individual sounds are presented singly and in mixtures. Many neurons in cat A1 respond to mixtures in the same way they respond to one of the individual components of the mixture, and in many cases neurons may respond to a low-level component of the mixture rather than to the acoustically dominant one, even though the same neurons respond to the acoustically-dominant component when presented alone.
Affiliation(s)
- Israel Nelken
- Department of Neurobiology, The Silberman Institute of Life Sciences, Edmund Safra Campus, Hebrew University Jerusalem, Israel.
7
Young ED. Neural representation of spectral and temporal information in speech. Philos Trans R Soc Lond B Biol Sci 2008; 363:923-45. [PMID: 17827107] [PMCID: PMC2606788] [DOI: 10.1098/rstb.2007.2151]
Abstract
Speech is the most interesting and one of the most complex sounds dealt with by the auditory system. The neural representation of speech needs to capture those features of the signal on which the brain depends in language communication. Here we describe the representation of speech in the auditory nerve and in a few sites in the central nervous system from the perspective of the neural coding of important aspects of the signal. The representation is tonotopic, meaning that the speech signal is decomposed by frequency and different frequency components are represented in different populations of neurons. Essential to the representation are the properties of frequency tuning and nonlinear suppression. Tuning creates the decomposition of the signal by frequency, and nonlinear suppression is essential for maintaining the representation across sound levels. The representation changes in central auditory neurons by becoming more robust against changes in stimulus intensity and more transient. However, it is probable that the form of the representation at the auditory cortex is fundamentally different from that at lower levels, in that stimulus features other than the distribution of energy across frequency are analysed.
Affiliation(s)
- Eric D Young
- Department of Biomedical Engineering, Centre for Hearing and Balance, Johns Hopkins University, 720 Rutland Avenue, Baltimore, MD 21205, USA.
9
Hienz RD, Jones AM, Weerts EM. The discrimination of baboon grunt calls and human vowel sounds by baboons. J Acoust Soc Am 2004; 116:1692-1697. [PMID: 15478436] [DOI: 10.1121/1.1778902]
Abstract
The ability of baboons to discriminate changes in the formant structures of a synthetic baboon grunt call and an acoustically similar human vowel (/ε/) was examined to determine how comparable baboons are to humans in discriminating small changes in vowel sounds, and whether or not any species-specific advantage in discriminability might exist when baboons discriminate their own vocalizations. Baboons were trained to press and hold down a lever to produce a pulsed train of a standard sound (e.g., /ε/ or a baboon grunt call), and to release the lever only when a variant of the sound occurred. Synthetic variants of each sound had the same first and third through fifth formants (F1 and F3-5), but varied in the location of the second formant (F2). Thresholds for F2 frequency changes were 55 and 67 Hz for the grunt and vowel stimuli, respectively, and were not statistically different from one another. Baboons discriminated changes in vowel formant structures comparable to those discriminated by humans. No distinct advantages in discrimination performances were observed when the baboons discriminated these synthetic grunt vocalizations.
Affiliation(s)
- Robert D Hienz
- Division of Behavioral Biology, Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine/Bayview Campus, Baltimore, Maryland 21224-6823, USA.
10
Hienz RD, Weed MR, Zarcone TJ, Brady JV. Cocaine's effects on detection, discrimination, and identification of auditory stimuli by baboons. Pharmacol Biochem Behav 2003; 74:287-96. [PMID: 12479947] [DOI: 10.1016/s0091-3057(02)00997-8]
Abstract
The perceptual effects of cocaine were examined under conditions that required baboons to detect the presence of tones as well as to identify tones of different pitches. The results were compared with those of prior studies on cocaine's effects on the detection of tones, the discrimination of different tone pitches, and the discrimination of different human vowel sounds of similar pitch. A reaction time procedure was employed in which baboons were trained to press a lever in the presence of a visual "ready" signal and to release the lever only when one tone pitch occurred, but not when a second, different tone pitch occurred. Changes in the percentage of correct detections and median reaction times for each tone were measured following intramuscular administration of cocaine (0.01-1.0 mg/kg). Cocaine impaired tone identification and shortened reaction times to the tones in all baboons. Cocaine's effects on accuracy, however, were primarily due to elevations in false alarm rates, as opposed to impaired detection of the stimuli themselves. The results demonstrate that cocaine impairs the discriminability of tone pitches in baboons, and that such impairments can depend upon the type of stimuli employed (tones vs. speech sounds) and the type of procedure employed (discrimination vs. identification).
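Separating sensitivity from response bias, as the false-alarm analysis above requires, is conventionally done with signal detection theory. The sketch below computes d′ from hit and false-alarm proportions; it is a generic illustration of the method, not the paper's exact analysis:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, correction=0.005):
    """Sensitivity index d' = z(hit) - z(false alarm). Rates of exactly
    0 or 1 are nudged inward (a common correction) so the inverse-normal
    transform stays finite."""
    clamp = lambda p: min(max(p, correction), 1 - correction)
    z = NormalDist().inv_cdf
    return z(clamp(hit_rate)) - z(clamp(fa_rate))
```

A drug that raises false alarms without changing hits lowers d′ even when overall percent correct barely moves, which is the distinction the abstract draws.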
Affiliation(s)
- Robert D Hienz
- Division of Behavioral Biology, Department of Psychiatry and Behavioral Sciences, The Johns Hopkins University School of Medicine, Hopkins Bayview Medical Center, 5510 Nathan Shock Drive, Suite 3000, Baltimore, MD 21224, USA.
11
Abstract
Macaque monkeys, like humans, are more sensitive to differences in formant frequency than to differences in the frequency of pure tones (see Sinnott et al. (1987) J. Comp. Psychol. 94, 401-415; Pfingst (1993) J. Acoust. Soc. Am. 93, 2124-2129; Prosen et al. (1990) J. Acoust. Soc. Am. 88, 2152-2158; Sinnott et al. (1985) J. Acoust. Soc. Am. 78, 1977-1985; Sinnott and Kreiter (1991) J. Acoust. Soc. Am. 89, 2421-2429; for summary, see May et al. (1996) Aud. Neurosci. 3, 135-162). In the discrimination of formant frequency, it appears that the relevant cue for macaque monkeys is relative level differences of the component frequencies (Sommers et al. (1992) J. Acoust. Soc. Am. 91, 3499-3510). To further explore the result of Sommers et al., we trained macaque monkeys (Macaca fuscata) to report detection of a change in the spectral shape of multi-component harmonic complexes. Spectral shape changes were produced by the addition of intensity increments. When the amplitude spectrum of the comparison stimulus was modeled after the /æ/ vowel sound, thresholds for detecting a change from the comparison stimulus were lowest when intensity increments were added at spectral peaks. These results parallel previous data from human subjects, suggesting that both human and monkey subjects may process vowel spectra through simultaneous comparisons of component levels across the spectrum. When the subjects were asked to detect a change from a comparison stimulus with a flat amplitude spectrum, the subjects showed sensitivity that was relatively comparable to that of human subjects tested in other investigations (e.g. Zera et al. (1993) J. Acoust. Soc. Am. 93, 3431-3441). In additional experiments, neither increasing the dynamic range of the /æ/ spectrum nor dynamically varying the amplitude of the increment during the stimulus presentation reliably affected detection thresholds.
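A profile-analysis stimulus of the kind described, an equal-amplitude harmonic complex with an optional level increment on one component, can be sketched as follows (all parameter values are illustrative, not those of the study):

```python
import math

def harmonic_complex(f0=200.0, n_harmonics=20, fs=16000, dur=0.2,
                     increment_harmonic=None, increment_db=6.0):
    """Equal-amplitude harmonic complex; optionally raise one component's
    level to create the spectral-shape change used in profile-analysis
    tasks. Returns a list of samples."""
    n = int(fs * dur)
    amps = [1.0] * n_harmonics
    if increment_harmonic is not None:
        # Convert the dB increment to a linear amplitude scale factor.
        amps[increment_harmonic - 1] *= 10 ** (increment_db / 20)
    return [sum(a * math.sin(2 * math.pi * f0 * (k + 1) * t / fs)
                for k, a in enumerate(amps))
            for t in range(n)]
```

In an actual experiment the overall level would also be randomized between intervals, forcing listeners to compare levels across components rather than track absolute level.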
Affiliation(s)
- C G Le Prell
- Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, MI 48109-0506, USA.
12
Steinschneider M, Volkov IO, Noh MD, Garell PC, Howard MA. Temporal encoding of the voice onset time phonetic parameter by field potentials recorded directly from human auditory cortex. J Neurophysiol 1999; 82:2346-57. [PMID: 10561410] [DOI: 10.1152/jn.1999.82.5.2346]
Abstract
Voice onset time (VOT) is an important parameter of speech that denotes the time interval between consonant onset and the onset of low-frequency periodicity generated by rhythmic vocal cord vibration. Voiced stop consonants (/b/, /g/, and /d/) in syllable initial position are characterized by short VOTs, whereas unvoiced stop consonants (/p/, /k/, and /t/) contain prolonged VOTs. As the VOT is increased in incremental steps, perception rapidly changes from a voiced stop consonant to an unvoiced consonant at an interval of 20-40 ms. This abrupt change in consonant identification is an example of categorical speech perception and is a central feature of phonetic discrimination. This study tested the hypothesis that VOT is represented within auditory cortex by transient responses time-locked to consonant and voicing onset. Auditory evoked potentials (AEPs) elicited by stop consonant-vowel (CV) syllables were recorded directly from Heschl's gyrus, the planum temporale, and the superior temporal gyrus in three patients undergoing evaluation for surgical remediation of medically intractable epilepsy. Voiced CV syllables elicited a triphasic sequence of field potentials within Heschl's gyrus. AEPs evoked by unvoiced CV syllables contained additional response components time-locked to voicing onset. Syllables with a VOT of 40, 60, or 80 ms evoked components time-locked to consonant release and voicing onset. In contrast, the syllable with a VOT of 20 ms evoked a markedly diminished response to voicing onset and elicited an AEP very similar in morphology to that evoked by the syllable with a 0-ms VOT. Similar response features were observed in the AEPs evoked by click trains. In this case, there was a marked decrease in amplitude of the transient response to the second click in trains with interpulse intervals of 20-25 ms. Speech-evoked AEPs recorded from the posterior superior temporal gyrus lateral to Heschl's gyrus displayed comparable response features, whereas field potentials recorded from three locations in the planum temporale did not contain components time-locked to voicing onset. This study demonstrates that VOT is at least partially represented in primary and specific secondary auditory cortical fields by synchronized activity time-locked to consonant release and voicing onset. Furthermore, AEPs exhibit features that may facilitate categorical perception of stop consonants, and these response patterns appear to be based on temporal processing limitations within auditory cortex. Demonstrations of similar speech-evoked response patterns in animals support a role for these experimental models in clarifying selected features of speech encoding.
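The categorical boundary in identification data like these is often summarized as the VOT at which the proportion of "unvoiced" responses crosses 50%. A minimal sketch using linear interpolation between tested VOT steps (the response proportions in the test are made up for illustration):

```python
def vot_boundary(vots_ms, p_unvoiced):
    """Estimate the voiced/unvoiced category boundary as the VOT where the
    proportion of 'unvoiced' responses first crosses 0.5, interpolating
    linearly between adjacent VOT steps. Returns None if no crossing."""
    for (v0, p0), (v1, p1) in zip(zip(vots_ms, p_unvoiced),
                                  zip(vots_ms[1:], p_unvoiced[1:])):
        if p0 < 0.5 <= p1:
            return v0 + (0.5 - p0) * (v1 - v0) / (p1 - p0)
    return None
```

A steeply rising identification function places this boundary in the 20-40 ms region the abstract describes; in practice a logistic fit is usually preferred over raw interpolation.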
Affiliation(s)
- M Steinschneider
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, New York 10461, USA
13
Miller RL, Calhoun BM, Young ED. Contrast enhancement improves the representation of /ε/-like vowels in the hearing-impaired auditory nerve. J Acoust Soc Am 1999; 106:2693-2708. [PMID: 10573886] [DOI: 10.1121/1.428135]
Abstract
This study examines the neural representation of the vowel /ε/ in the auditory nerve of acoustically traumatized cats and asks whether spectral modifications of the vowel can restore a normal neural representation. Four variants of /ε/, which differed primarily in the frequency of the second formant (F2), were used as stimuli. Normally, the rate-place code provides a robust representation of F2 for these vowels, in the sense that rate changes encode changes in F2 frequency [Conley and Keilson, J. Acoust. Soc. Am. 98, 3223 (1995)]. This representation is lost after acoustic trauma [Miller et al., J. Acoust. Soc. Am. 105, 311 (1999)]. Here it is shown that an improved representation of the F2 frequency can be gained by a form of high-frequency emphasis that is determined by both the hearing-loss profile and the spectral envelope of the vowel. Essentially, the vowel was high-pass filtered so that the F2 and F3 peaks were amplified without amplifying frequencies in the trough between F1 and F2. This modification improved the quality of the rate and temporal tonotopic representations of the vowel and restored sensitivity to the F2 frequency. Although a completely normal representation was not restored, this method shows promise as an approach to hearing-aid signal processing.
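The authors' filter was tailored to each cat's hearing-loss profile and the vowel's spectral envelope. As a generic illustration of the high-frequency emphasis it builds on, and only that, a first-order pre-emphasis filter looks like this:

```python
def pre_emphasis(signal, alpha=0.95):
    """First-order high-frequency emphasis: y[n] = x[n] - alpha * x[n-1].
    Attenuates low frequencies and tilts the spectrum upward; this is the
    generic high-pass idea, not the vowel-specific filter in the study."""
    return [signal[0]] + [x - alpha * xp for x, xp in zip(signal[1:], signal)]
```

A DC (all-ones) input illustrates the low-frequency attenuation: after the first sample, the output settles at 1 - alpha.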
Affiliation(s)
- R L Miller
- Hearing Research Laboratories, Duke University Medical Center, Durham, North Carolina 27710, USA.
14
May BJ, Le Prell GS, Sachs MB. Vowel representations in the ventral cochlear nucleus of the cat: effects of level, background noise, and behavioral state. J Neurophysiol 1998; 79:1755-67. [PMID: 9535945] [DOI: 10.1152/jn.1998.79.4.1755]
Abstract
Single-unit responses were studied in the ventral cochlear nucleus (VCN) of cats as formant and trough features of the vowel /ε/ were shifted in the frequency domain to each unit's best frequency (BF; the frequency of greatest sensitivity). Discharge rates sampled with this spectrum manipulation procedure (SMP) were used to estimate vowel representations provided by populations of VCN neurons. In traditional population measures, a good representation of a vowel's formant structure is based on relatively high discharge rates among units with BFs near high-energy formant features and low rates for units with BFs near low-energy spectral troughs. At most vowel levels and in the presence of background noise, chopper units exhibited formant-to-trough rate differences that were larger than VCN primary-like units and auditory-nerve fibers. By contrast, vowel encoding by primary-like units resembled auditory nerve representations for most stimulus conditions. As is seen in the auditory nerve, primary-like units with low spontaneous rates (SR <18 spikes/s) produced better representations than high SR primary-like units at all but the lowest vowel levels. Awake cats exhibited the same general response properties as anesthetized cats but larger between-subject differences in vowel driven rates. The vowel encoding properties of VCN chopper units support previous interpretations that patterns of auditory nerve convergence on cochlear nucleus neurons compensate for limitations in the dynamic range of peripheral neurons.
Affiliation(s)
- B J May
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins School of Medicine, Baltimore, Maryland 21205, USA
15
Abstract
Operant conditioning procedures were used to measure the effects of bilateral olivocochlear lesions on the cat's discrimination thresholds for changes in the second formant frequency (ΔF2) of the vowel /ε/. Three cats were tested with the formant discrimination task under quiet conditions and in the presence of continuous broadband noise at signal-to-noise ratios (S/Ns) of 23, 13, and 3 dB. In quiet, vowel levels of 50 and 70 dB produced average ΔF2s of 42 and 47 Hz, respectively, and these thresholds did not change significantly in low levels of background noise (S/Ns = 23 and 13 dB). Average ΔF2s increased to 94 and 97 Hz for vowel levels of 50 and 70 dB in the loudest level of background noise (S/N = 3 dB). Average ΔF2 thresholds in quiet and in lower noise levels were only slightly affected when the olivocochlear bundle was lesioned by making bilateral cuts into the floor of the IVth ventricle. In contrast, post-lesion ΔF2 thresholds in the highest noise level were significantly larger than pre-lesion values; the most severely affected subject showed post-lesion discrimination thresholds well over 200 Hz for both 50 and 70 dB vowels. These results suggest that olivocochlear feedback may enhance speech processing in high levels of ambient noise.
Affiliation(s)
- R D Hienz
- Department of Psychiatry, Johns Hopkins School of Medicine, Baltimore, MD 21224-6823, USA