1. McFayden TC, Gonzalez Aguiar MK, MacKenzie CC, McIntosh A, Multhaup KS. Verbal and visual serial-order memory in deaf signers and hearing nonsigners: A systematic review and meta-analysis. Psychon Bull Rev 2023;30:1722-1739. PMID: 37012579. DOI: 10.3758/s13423-023-02282-6
Abstract
Previous research suggests Deaf signers may have different short-term and working memory processes than hearing nonsigners due to prolonged auditory deprivation. The direction and magnitude of these reported differences, however, are variable and depend on memory modality (e.g., visual, verbal), stimulus type, and research design. These discrepancies have made consensus difficult to reach, which, in turn, slows progress in areas such as education, medical decision-making, and the cognitive sciences. The present systematic review and meta-analysis included 35 studies (N = 1,701 participants) that examined verbal (n = 15), visuospatial (n = 10), or both verbal and visuospatial (n = 10) serial-memory tasks comparing nonimplanted Deaf signers to hearing nonsigners across the life span. Multivariate meta-analyses indicated a significant, negative effect of deafness on verbal short-term memory (forward recall), g = -1.33, SE = 0.17, p < .001, 95% CI [-1.68, -0.98], and working memory (backward recall), g = -0.66, SE = 0.11, p < .001, 95% CI [-0.89, -0.45], but no significant effect of deafness on visuospatial short-term memory, g = -0.055, SE = 0.17, p = .75, 95% CI [-0.39, 0.28]. Visuospatial working memory was not analyzed due to limited power. Population estimates for verbal and visuospatial short-term memory were moderated by age: studies with adults demonstrated a stronger hearing advantage than studies with children/adolescents. Quality estimates indicated most studies were of fair quality, and only 38% of studies involved Deaf authors. Findings are discussed in the context of both Deaf equity and models of serial memory.
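The intervals reported above can be sanity-checked with the usual normal-approximation 95% CI (g ± 1.96 × SE). A minimal Python sketch, noting that the published intervals come from a multivariate meta-analytic model and therefore need not match the simple approximation exactly:

```python
# Normal-approximation 95% confidence interval for an effect size,
# given the point estimate and its standard error.
def ci95(g, se, z=1.96):
    return (round(g - z * se, 2), round(g + z * se, 2))

# Verbal short-term memory (forward recall): g = -1.33, SE = 0.17
print(ci95(-1.33, 0.17))  # (-1.66, -1.0); reported as [-1.68, -0.98]

# Verbal working memory (backward recall): g = -0.66, SE = 0.11
print(ci95(-0.66, 0.11))  # (-0.88, -0.44); reported as [-0.89, -0.45]
```

The small discrepancies are expected: multivariate models typically use a different critical value or variance structure than the plain z = 1.96 approximation.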
Affiliation(s)
- Tyler C McFayden
- Carolina Institute for Developmental Disabilities, University of North Carolina Chapel Hill, 101 Renee Lynne Court, Carrboro, NC, 27510, USA
- Anne McIntosh
- Department of Arts and Humanities, University of Maryland, Global Campus, Adelphi, MD, USA
2. Gong H, Lei J, Chen L. Phonological store and speechreading performance of Chinese students with hearing impairment. Clin Linguist Phon 2022;36:456-469. PMID: 34151654. DOI: 10.1080/02699206.2021.1930175
Abstract
This study presents three experiments examining the role of the phonological store component of working memory in the speechreading performance of students with hearing impairment (HI) in China. In Experiment 1, 86 high school students with HI completed an immediate serial recall task with four lists of monosyllabic words that differed in phonological and visual similarity. In Experiments 2 and 3, 40 participants divided into high or low phonological store capacity (PS) and 40 participants divided into high or low visual phonological store capacity (VPS) completed a speechreading test at the word, phrase, and sentence levels. Results revealed that (1) immediate serial recall showed effects of phonological similarity, visual similarity, and their interaction; (2) there was no significant effect of phonological store capacity on speechreading; and (3) there was a significant effect of visual phonological store capacity on the accuracy but not the speed of speechreading. These findings point to a general phonological store system for visual orthographic coding and phonological coding that students with HI engage during speechreading in Chinese, and they support the contention that visual-based coding has a more direct impact on the speechreading performance of Chinese students with HI than speech-based coding.
Affiliation(s)
- Huina Gong
- Department of Special Education, Central China Normal University, Wuhan, China
- Jianghua Lei
- Department of Special Education, Central China Normal University, Wuhan, China
- Liang Chen
- Communication Sciences and Special Education, University of Georgia, Athens, Georgia, USA
3. Gremp MA, Deocampo JA, Walk AM, Conway CM. Visual sequential processing and language ability in children who are deaf or hard of hearing. J Child Lang 2019;46:785-799. PMID: 30803455. PMCID: PMC6633907. DOI: 10.1017/s0305000918000569
Abstract
This study investigated the role of sequential processing in spoken language outcomes for children who are deaf or hard of hearing (DHH), ages 5;3-11;4, by comparing them to children with typical hearing (TH), ages 6;3-9;7, on sequential learning and memory tasks involving easily nameable and difficult-to-name visual stimuli. Children who are DHH performed more poorly on easily nameable sequencing tasks, which positively predicted receptive vocabulary scores. Results suggest sequential learning and memory may underlie delayed language skills of many children who are DHH. Implications for language development in children who are DHH are discussed.
4. The relation between working memory and language comprehension in signers and speakers. Acta Psychol (Amst) 2017;177:69-77. PMID: 28477456. DOI: 10.1016/j.actpsy.2017.04.014
Abstract
This study investigated the relation between linguistic and spatial working memory (WM) resources and language comprehension for signed compared to spoken language. Sign languages are both linguistic and visual-spatial, and therefore provide a unique window on modality-specific versus modality-independent contributions of WM resources to language processing. Deaf users of American Sign Language (ASL), hearing monolingual English speakers, and hearing ASL-English bilinguals completed several spatial and linguistic serial recall tasks. Additionally, their comprehension of spatial and non-spatial information in ASL and spoken English narratives was assessed. Results from the linguistic serial recall tasks revealed that the often reported advantage for speakers on linguistic short-term memory tasks does not extend to complex WM tasks with a serial recall component. For English, linguistic WM predicted retention of non-spatial information, and both linguistic and spatial WM predicted retention of spatial information. For ASL, spatial WM predicted retention of spatial (but not non-spatial) information, and linguistic WM did not predict retention of either spatial or non-spatial information. Overall, our findings argue against strong assumptions of independent domain-specific subsystems for the storage and processing of linguistic and spatial information and furthermore suggest a less important role for serial encoding in signed than spoken language comprehension.
5. Liu HT, Squires B, Liu CJ. Articulatory suppression effects on short-term memory of signed digits and lexical items in hearing bimodal-bilingual adults. J Deaf Stud Deaf Educ 2016;21:362-372. PMID: 27507848. DOI: 10.1093/deafed/enw048
Abstract
We can gain a better understanding of short-term memory processes by studying different language codes and modalities. Three experiments were conducted to investigate: (a) Taiwanese Sign Language (TSL) digit spans in Chinese/TSL hearing bilinguals (n = 32); (b) American Sign Language (ASL) digit spans in English/ASL hearing bilinguals (n = 15); and (c) TSL lexical sign spans in Chinese/TSL hearing bilinguals (n = 22). Articulatory suppression conditions were manipulated to determine if participants would use a speech- or sign-based code to rehearse lists of signed items. Results from all 3 experiments showed that oral suppression significantly reduced spans while manual suppression had no effect, revealing that participants were using speech-based rehearsal to retain lists of signed items in short-term memory. In addition, sub-vocal rehearsal using Chinese facilitated higher digit spans than English even though stimuli were perceived and recalled using signs. This difference was not found for lexical sign spans.
6. Miozzo M, Petrova A, Fischer-Baum S, Peressotti F. Serial position encoding of signs. Cognition 2016;154:69-80. DOI: 10.1016/j.cognition.2016.05.008
7. Marschark M, Sarchet T, Trani A. Effects of hearing status and sign language use on working memory. J Deaf Stud Deaf Educ 2016;21:148-155. PMID: 26755684. PMCID: PMC4886321. DOI: 10.1093/deafed/env070
Abstract
Deaf individuals have been found to score lower than hearing individuals across a variety of memory tasks involving both verbal and nonverbal stimuli, particularly those requiring retention of serial order. Deaf individuals who are native signers, meanwhile, have been found to score higher on visual-spatial memory tasks than on verbal-sequential tasks and higher on some visual-spatial tasks than hearing nonsigners. However, hearing status and preferred language modality (signed or spoken) frequently are confounded in such studies. That situation is resolved in the present study by including deaf students who use spoken language and sign language interpreting students (hearing signers) as well as deaf signers and hearing nonsigners. Three complex memory span tasks revealed overall advantages for hearing signers and nonsigners over both deaf signers and deaf nonsigners on 2 tasks involving memory for verbal stimuli (letters). There were no differences among the groups on the task involving visual-spatial stimuli. The results are consistent with and extend recent findings concerning the effects of hearing status and language on memory and are discussed in terms of language modality, hearing status, and cognitive abilities among deaf and hearing individuals.
Affiliation(s)
- Marc Marschark
- National Technical Institute for the Deaf, Rochester Institute of Technology
- Thomastine Sarchet
- National Technical Institute for the Deaf, Rochester Institute of Technology
8. Marschark M, Spencer LJ, Durkin A, Borgna G, Convertino C, Machmer E, Kronenberger WG, Trani A. Understanding language, hearing status, and visual-spatial skills. J Deaf Stud Deaf Educ 2015;20:310-330. PMID: 26141071. PMCID: PMC4836709. DOI: 10.1093/deafed/env025
Abstract
It is frequently assumed that deaf individuals have superior visual-spatial abilities relative to hearing peers and thus, in educational settings, they are often considered visual learners. There is some empirical evidence to support the former assumption, although it is inconsistent, and apparently none to support the latter. Three experiments examined visual-spatial and related cognitive abilities among deaf individuals who varied in their preferred language modality and use of cochlear implants (CIs) and hearing individuals who varied in their sign language skills. Sign language and spoken language assessments accompanied tasks involving visual-spatial processing, working memory, nonverbal logical reasoning, and executive function. Results were consistent with other recent studies indicating no generalized visual-spatial advantage for deaf individuals and suggested that their performance in that domain may be linked to the strength of their preferred language skills regardless of modality. Hearing individuals performed more strongly than deaf individuals on several visual-spatial and self-reported executive functioning measures, regardless of sign language skills or use of CIs. Findings are inconsistent with assumptions that deaf individuals are visual learners or are superior to hearing individuals across a broad range of visual-spatial tasks. Further, performance of deaf and hearing individuals on the same visual-spatial tasks was associated with differing cognitive abilities, suggesting that different cognitive processes may be involved in visual-spatial processing in these groups.
9. Wang J, Napier J. Signed language working memory capacity of signed language interpreters and deaf signers. J Deaf Stud Deaf Educ 2013;18:271-286. PMID: 23303377. DOI: 10.1093/deafed/ens068
Abstract
This study investigated the effects of hearing status and age of signed language acquisition on signed language working memory capacity. Professional Auslan (Australian sign language)/English interpreters (hearing native signers and hearing nonnative signers) and deaf Auslan signers (deaf native signers and deaf nonnative signers) completed an Auslan working memory (WM) span task. The results revealed that the hearing signers (i.e., the professional interpreters) significantly outperformed the deaf signers on the Auslan WM span task. However, the results showed no significant differences between the native signers and the nonnative signers in their Auslan working memory capacity. Furthermore, there was no significant interaction between hearing status and age of signed language acquisition. Additionally, the study found no significant differences between the deaf native signers (adults) and the deaf nonnative signers (adults) in their Auslan working memory capacity. The findings are discussed in relation to the participants' memory strategies and their early language experience. The findings present challenges for WM theories.
Affiliation(s)
- Jihong Wang
- Department of Linguistics, Macquarie University, Sydney NSW 2109, Australia
10. Hirshorn EA, Fernandez NM, Bavelier D. Routes to short-term memory indexing: lessons from deaf native users of American Sign Language. Cogn Neuropsychol 2012;29:85-103. PMID: 22871205. DOI: 10.1080/02643294.2012.704354
Abstract
Models of working memory (WM) have been instrumental in understanding foundational cognitive processes and sources of individual differences. However, current models cannot conclusively explain the consistent group differences between deaf signers and hearing speakers on a number of short-term memory (STM) tasks. Here we take the perspective that these results are not due to a temporal order-processing deficit in deaf individuals, but rather reflect different biases in how different types of memory cues are used to do a given task. We further argue that the main driving force behind the shifts in relative biasing is a consequence of language modality (sign vs. speech) and the processing they afford, and not deafness, per se.
11. Hall ML, Bavelier D. Short-term memory stages in sign vs. speech: the source of the serial span discrepancy. Cognition 2011;120:54-66. PMID: 21450284. DOI: 10.1016/j.cognition.2011.02.014
Abstract
Speakers generally outperform signers when asked to recall a list of unrelated verbal items. This phenomenon is well established, but its source has remained unclear. In this study, we evaluate the relative contribution of the three main processing stages of short-term memory (perception, encoding, and recall) to this effect. The present study factorially manipulates whether American Sign Language (ASL) or English is used for perception, memory encoding, and recall in hearing ASL-English bilinguals. Results indicate that using ASL during both perception and encoding contributes to the serial span discrepancy. Interestingly, performing recall in ASL slightly increased span, ruling out the view that signing is in general a poor choice for short-term memory. These results suggest that despite the general equivalence of sign and speech in other memory domains, speech-based representations are better suited to the specific task of perceiving and encoding a series of unrelated verbal items in serial order through the phonological loop. This work suggests that interpretation of performance on serial recall tasks in English may not translate straightforwardly to serial tasks in sign language.
Affiliation(s)
- Matthew L Hall
- Department of Psychology, UC San Diego, 9500 Gilman Dr., La Jolla, CA 92039-0109, USA
12. Fajardo I, Parra E, Cañas JJ. Do sign language videos improve Web navigation for Deaf Signer users? J Deaf Stud Deaf Educ 2010;15:242-262. PMID: 20194362. DOI: 10.1093/deafed/enq005
Abstract
The efficacy of video-based sign language (SL) navigation aids to improve Web search for Deaf Signers was tested by two experiments. Experiment 1 compared 2 navigation aids based on text hyperlinks linked to embedded SL videos, which differed in the spatial contiguity between the text hyperlink and SL video (contiguous vs. distant). Deaf Signers' performance was similar in Web search using both aids, but a positive correlation between their word categorization abilities and search efficiency appeared in the distant condition. In Experiment 2, the contiguous condition was compared with a text-only hyperlink condition. Deaf Signers became less disorientated (used shorter paths to find the target) in the text plus SL condition than in the text-only condition. In addition, the positive correlation between word categorization abilities and search only appeared in the text-only condition. These findings suggest that SL videos added to text hyperlinks improve Web search efficiency for Deaf Signers.
Affiliation(s)
- Inmaculada Fajardo
- Facultad de Psicología, Departamento de Psicología Evolutiva y de la Educación, University of Valencia, Blasco Ibañez, 21, Valencia, Spain
13. Rudner M, Andin J, Rönnberg J. Working memory, deafness and sign language. Scand J Psychol 2009;50:495-505. DOI: 10.1111/j.1467-9450.2009.00744.x
14. Fajardo I, Arfé B, Benedetti P, Altoé G. Hyperlink format, categorization abilities and memory span as contributors to deaf users' hypertext access. J Deaf Stud Deaf Educ 2007;13:241-256. PMID: 18042792. DOI: 10.1093/deafed/enm058
Abstract
Sixty deaf and hearing students were asked to search for goods in a Hypertext Supermarket with either graphical or textual links of high typicality, frequency, and familiarity. Additionally, they performed a picture and word categorization task and two working memory span tasks (spatial and verbal). Results showed that deaf students were faster in graphical than in verbal hypertext when the number of visited pages per search trial was blocked. Regardless of stimuli format, accuracy differences between groups did not appear, although deaf students were slower than hearing students in both Web search and categorization tasks (graphical or verbal). No relation between the two tasks was found. Correlation analyses showed that deaf students with higher spatial span were faster in graphical Web search, but no correlations emerged between verbal span and verbal Web search. A hypothesis of different strategies used by the two groups for searching information in hypertext is formulated. It is suggested that deaf users use a visual-matching strategy more than a semantic approach to make navigation decisions.
15. Geraci C, Gozzi M, Papagno C, Cecchetto C. How grammar can cope with limited short-term memory: simultaneity and seriality in sign languages. Cognition 2007;106:780-804. PMID: 17537417. DOI: 10.1016/j.cognition.2007.04.014
Abstract
It is known that in American Sign Language (ASL) span is shorter than in English, but this discrepancy has never been systematically investigated using other pairs of signed and spoken languages. This finding is at odds with results showing that short-term memory (STM) for signs has an internal organization similar to STM for words. Moreover, some methodological questions remain open. Thus, we measured span of deaf and matched hearing participants for Italian Sign Language (LIS) and Italian, respectively, controlling for all the possible variables that might be responsible for the discrepancy: yet, a difference in span between deaf signers and hearing speakers was found. However, the advantage of hearing subjects was removed in a visuo-spatial STM task. We attribute the source of the lower span to the internal structure of signs: indeed, unlike English (or Italian) words, signs contain both simultaneous and sequential components. Nonetheless, sign languages are fully-fledged grammatical systems, probably because the overall architecture of the grammar of signed languages reduces the STM load. Our hypothesis is that the faculty of language is dependent on STM, being however flexible enough to develop even in a relatively hostile environment.
Affiliation(s)
- Carlo Geraci
- Dipartimento di Psicologia, Università di Milano Bicocca, Piazza dell'Ateneo Nuovo 1, Edificio U6, 20126, Milano, Italy
16. Conway CM, Karpicke J, Pisoni DB. Contribution of implicit sequence learning to spoken language processing: some preliminary findings with hearing adults. J Deaf Stud Deaf Educ 2007;12:317-334. PMID: 17548805. PMCID: PMC3767986. DOI: 10.1093/deafed/enm019
Abstract
Spoken language consists of a complex, sequentially arrayed signal that contains patterns that can be described in terms of statistical relations among language units. Previous research has suggested that a domain-general ability to learn structured sequential patterns may underlie language acquisition. To test this prediction, we examined the extent to which implicit sequence learning of probabilistically structured patterns in hearing adults is correlated with a spoken sentence perception task under degraded listening conditions. Performance on the sentence perception task was found to be correlated with implicit sequence learning, but only when the sequences were composed of stimuli that were easy to encode verbally. Implicit learning of phonological sequences thus appears to underlie spoken language processing and may indicate a hitherto unexplored cognitive factor that may account for the enormous variability in language outcomes in deaf children with cochlear implants. The present findings highlight the importance of investigating individual differences in specific cognitive abilities as a way to understand and explain language in deaf learners and, in particular, variability in language outcomes following cochlear implantation.
Affiliation(s)
- Christopher M Conway
- Department of Psychological and Brain Sciences, Indiana University, 1101 East 10th Street, Bloomington, IN 47405, USA
17. Marschark M. Intellectual functioning of deaf adults and children: Answers and questions. Eur J Cogn Psychol 2006. DOI: 10.1080/09541440500216028
18. Richardson JTE. Knox's cube imitation test: A historical review and an experimental analysis. Brain Cogn 2005;59:183-213. PMID: 16099086. DOI: 10.1016/j.bandc.2005.06.001
Abstract
The cube imitation test was developed by Knox as a nonverbal test of intelligence. Many variants show satisfactory reliability, but performance is correlated both with Verbal IQ and with Performance IQ. Performance is impaired by cerebral lesions but unrelated to the side of the lesion. Examinees describe both verbal and visuospatial strategies. In a new experiment, performance was disrupted by concurrent random generation, manual tapping, and articulatory suppression. The cube imitation test is thus not simply a measure of the ability to retain visuospatial information; it also depends on verbal representations and attentional capacity. Even so, the test was central to the modern appreciation that any adequate measure of intelligence must incorporate both verbal tests and performance tests.
Affiliation(s)
- John T E Richardson
- Institute of Educational Technology, The Open University, Walton Hall, Milton Keynes MK7 6AA, UK
19. Dawson PW, Busby PA, McKay CM, Clark GM. Short-term auditory memory in children using cochlear implants and its relevance to receptive language. J Speech Lang Hear Res 2002;45:789-801. PMID: 12199408. DOI: 10.1044/1092-4388(2002/064)
Abstract
The aim of this study was to assess auditory sequential short-term memory (SSTM) performance in young children using cochlear implants (CI group) and to examine the relationship of this performance to receptive language performance. Twenty-four children, 5 to 11 years old, using the Nucleus 22-electrode cochlear implant were tested on a number of auditory and visual tasks of SSTM. The auditory memory tasks were designed to minimize the effect of auditory discrimination ability. Stimuli were chosen that children with cochlear implants could accurately identify with a reaction time similar to that of a control group of children with normal hearing (NH group). All children were also assessed on a receptive language test and on a nonverbal intelligence scale. As expected, children using cochlear implants demonstrated poorer auditory and visual SSTM skills than their hearing peers when the stimuli were verbal or were pictures that could be readily labeled. They did not differ from their peers with normal hearing on tasks where the stimuli were less likely to be verbally encoded. An important finding was that the CI group did not appear to have a sequential memory deficit specific to the auditory modality. The difference scores (auditory minus visual memory performance) for the CI group were not significantly different from those for the NH group. SSTM performance accounted for significant variance in the receptive language performance of the CI group. However, a forward stepwise regression analysis revealed that visual spatial memory (one of the subtests of the nonverbal IQ test) was the main predictor of variance in the language scores of the children using cochlear implants.
Affiliation(s)
- P W Dawson
- The Bionic Ear Institute and Cochlear Limited, East Melbourne, Australia
20. Andersson U. Deterioration of the phonological processing skills in adults with an acquired severe hearing loss. Eur J Cogn Psychol 2002. DOI: 10.1080/09541440143000096
21. Chuah YM, Maybery MT. Verbal and spatial short-term memory: common sources of developmental change? J Exp Child Psychol 1999;73:7-44. PMID: 10196073. DOI: 10.1006/jecp.1999.2493
Abstract
Verbal and spatial span, articulation and tapping rate, and verbal and spatial speed-of-search tasks were administered to sixty 6- to 12-year-olds. A variance-partitioning procedure was then used to identify age-related and age-invariant components of variance in span. Outcomes were very similar for verbal and spatial span, in particular, (i) most of the age-related variance was shared by the speed-of-search and rate predictors, (ii) articulation rate provided an age-independent contribution, (iii) changing-state versions of predictor tasks did not account for variance over steady-state versions, and (iv) predictors of the same modality as the span task did not outperform predictors of the other modality. We conclude that verbal and spatial short-term memory appear to rely on similar processes when serial recall is required and that developments in span are closely tied to increases in processing speed.
Affiliation(s)
- Y M Chuah
- The University of Western Australia, Nedlands, Western Australia